Recro

Senior Data Engineer - Hive/Hadoop


Job Location

Bangalore, India

Job Description

Job Description: We are growing rapidly and seeking a strong Data Engineer to be a key member of the Data and Business Intelligence organization, with a focus on deep data engineering projects. You will join as one of the initial data engineers on the data platform team in our Bengaluru office, and you will have the opportunity to help define our technical strategy and data engineering team culture in India. You will design and build data platforms and services while managing our data infrastructure in cloud environments, fueling strategic business decisions across products. A successful candidate is a self-starter who drives excellence, is ready to jump into a variety of big data technologies and frameworks, and can coordinate and collaborate with other engineers, as well as mentor engineers on the team.

Responsibilities:

- Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming) that handle hundreds of terabytes of data ingested every day, feeding a petabyte-scale data warehouse and Elasticsearch cluster.
- Build quality data solutions and refine existing diverse datasets into simplified models that encourage self-service.
- Build data pipelines that optimize data quality and are resilient to poor-quality data sources.
- Own data mapping, business logic, transformations, and data quality.
- Perform low-level systems debugging, performance measurement, and optimization on large production clusters.
- Participate in architecture discussions, influence the product roadmap, and take ownership of and responsibility for new projects.
- Maintain and support existing platforms, and evolve them to newer technology stacks.

Requirements:

- Proficiency in Python and PySpark.
- Deep understanding of Apache Spark, Spark tuning, creating RDDs, and building data frames.
- Ability to create Java/Scala Spark jobs for data transformation and aggregation.
- Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, etc.
- Experience building distributed environments using any of Kafka, Spark, Hive, Hadoop, etc.
- Good understanding of the architecture and functioning of distributed database systems.
- Experience working with file formats such as Parquet and Avro for large volumes of data.
- Experience with one or more NoSQL databases.
- Experience with AWS and/or GCP.
- 5+ years of professional experience as a data or software engineer.

Experience: 5-8 years (ref:hirist.tech)


Posted Date: 11/19/2024

Contact Information

Contact Human Resources
Recro

UID: 4943654124
