Hands-on experience with Spark/Python core, Spark SQL, and Scala programming on a Big Data platform
Should be able to understand transformation logic and translate it into Spark/PySpark SQL queries
Familiarity with data warehouse concepts such as ETL
Able to understand the ETL/ELT process, with experience handling huge volumes of data ingestion, transformation, and consumption
Added knowledge of Delta Lake and related concepts: data storage strategies and data formats such as Avro, Parquet, and JSON
The candidate should have 1 to 2 years of experience in a data engineer role and a graduate degree in computer science, information systems, or another quantitative field
Should also have experience with the following big data software/tools: Python, Spark, Kafka, Scala, and RDBMS (Azure Data Factory, Databricks, and Blob Storage are an added advantage)
Someone who likes solving problems, is passionate about technology, likes working in small teams, is a quick learner, and wants to make an impact
Annual CTC: ₹4,00,000 - 8,00,000 per year
Annual CTC breakup:
1. Fixed component: 90%
2. Variable component: 5%
3. Other incentives: 5%
OR