Job Description:
- Minimum 3 years of experience in Big Data technologies
- Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of the following programming languages: Java, Scala, or Python; Java is preferred.
- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQLDW, GCP BigQuery, etc.
- Well-versed in, and with working knowledge of, data platform-related services on AWS
- Bachelor’s degree with 6 to 8 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position