Hadoop Engineering Internship in Gurgaon at Myananta Technologies Private Limited
Applications are closed for this internship.
Start date: Immediately
Duration: 3 months
Stipend: 10,000 /month
Apply by: 7 Feb '23
Posted 3 weeks ago
About Myananta Technologies Private Limited
We are a millennial organization. We work on software and application development, both on-premise and on-demand, and our team is made up of expert software professionals. We believe in being a catalyst for change through new technical insight and expertise. We learn and train through project implementation using agile methodology, and we follow the complete SDLC and PDLC. The digital world is going through a change in the IT industry landscape: analytics is the bread and butter of decision-making, even though much of it still depends on on-premise systems. We incorporate today's rising and future demands of industry dynamics and corporate functions. We believe technology allows us to design, develop, and deploy expertise, analytics, and other intellectual property through software tools. We develop software and applications and provide information technology production support.
Activity on Internshala
Hiring since April 2021
40 opportunities posted
30 candidates hired
About the internship
Selected intern's day-to-day responsibilities include:

1. Install, configure, and maintain big data components and technologies (Hadoop, Hive, Spark, and Airflow), both standalone and as clusters
2. Create Scala/Python jobs for data transformation and aggregation
3. Create databases and tables in Hive and load data into them (see the illustrative sketch after this list)
4. Clean, transform, and analyze vast amounts of raw data from various source systems
5. Design workflows and data processing pipelines
6. Create and maintain big data infrastructure (clusters)
7. Work on query performance optimization in Hive, Spark, and Airflow
8. Apply the security framework required for big data

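The responsibilities above revolve around fairly standard Spark-and-Hive work. As an illustration only (none of this comes from the posting itself), the minimal PySpark sketch below shows the kind of job items 2-4 describe: read raw data, clean and aggregate it, and load the result into a Hive table. The file path, database name, table name, and column names are all hypothetical.

```python
# Illustrative sketch only: a small PySpark job that cleans raw CSV data,
# aggregates it, and writes the result to a Hive table.
# Assumes a Spark build with Hive support; the path, database, table, and
# column names below are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-sales-aggregation")
    .enableHiveSupport()          # allows reading/writing Hive tables
    .getOrCreate()
)

# 1. Load raw data from a source system
raw = spark.read.csv("/data/raw/sales.csv", header=True, inferSchema=True)

# 2. Clean: drop rows with missing keys and cast the amount column
clean = (
    raw.dropna(subset=["order_id", "order_date"])
       .withColumn("amount", F.col("amount").cast("double"))
)

# 3. Transform/aggregate: total revenue and order count per day
daily = (
    clean.groupBy("order_date")
         .agg(F.sum("amount").alias("total_revenue"),
              F.count("order_id").alias("order_count"))
)

# 4. Load the result into a Hive table (database created if missing)
spark.sql("CREATE DATABASE IF NOT EXISTS analytics")
daily.write.mode("overwrite").saveAsTable("analytics.daily_sales")

spark.stop()
```

In practice, a job like this would typically be launched with spark-submit and scheduled from an Airflow DAG, which is the workflow-design work mentioned in item 5.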
Skills Required:

1. B.Tech, BE, or a similar qualification with a major in computer science or IT
2. Working knowledge of Hadoop, Hive, Spark, and Airflow
3. Knowledge of Python (with a focus on the functional programming paradigm)
4. Knowledge of Scala (with a focus on the functional programming paradigm)
5. Knowledge of ScalaTest, JUnit, PyTest/PyUnit
6. Knowledge of big data query tuning and performance optimization
7. Knowledge of SQL database integration (Microsoft SQL Server, Oracle, PostgreSQL, and/or MySQL)
Who can apply

Only those candidates can apply who:

1. are available for a full-time (in-office) internship

2. can start the internship between 23rd Jan'23 and 27th Feb'23

3. are available for a duration of 3 months

4. have relevant skills and interests

5 days a week
Number of openings
