Fulcrum Digital

Data Engineer - ETL

Job Location

Coimbatore, India

Job Description

Job Summary:

We are seeking a highly motivated and experienced Data Engineer to join our growing team in Coimbatore. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable, reliable data pipelines and infrastructure that enable data-driven decision-making across the organization. You will work closely with data scientists, analysts, and other engineers to understand data needs and deliver robust solutions. The ideal candidate has a strong technical background in data warehousing, ETL/ELT processes, cloud technologies, and programming, along with excellent problem-solving and communication skills.

Responsibilities:

Data Pipeline Development and Maintenance:
- Design, develop, and maintain robust, scalable data pipelines for the ingestion, transformation, and loading (ETL/ELT) of structured and unstructured data from various sources.
- Optimize data pipelines for performance, reliability, and cost-efficiency.
- Implement data quality checks and monitoring to ensure data accuracy and integrity.
- Troubleshoot and resolve data pipeline issues in a timely manner.

Data Warehousing and Database Management:
- Design and implement data warehouse solutions (e.g., dimensional modeling, star schema, snowflake schema).
- Manage and optimize SQL and NoSQL database systems for performance and scalability.
- Develop and maintain data models and schemas.
- Ensure data security and compliance with relevant regulations.

Cloud Infrastructure and Technologies:
- Design and implement data solutions on cloud platforms (e.g., AWS, Azure, GCP).
- Use cloud-based data warehousing and ETL/ELT services (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery, AWS Glue, Azure Data Factory, Google Cloud Dataflow).
- Implement and manage data infrastructure using Infrastructure-as-Code (IaC) principles.

Programming and Scripting:
- Write clean, efficient, well-documented code in languages such as Python, Scala, or Java.
- Develop and maintain scripts for data processing and automation.
- Use version control systems (e.g., Git) for code management.

Collaboration and Communication:
- Collaborate effectively with data scientists, analysts, and other engineers to understand data requirements and deliver solutions.
- Communicate technical concepts clearly and concisely to both technical and non-technical audiences.
- Participate in code reviews and contribute to team knowledge sharing.

Performance Optimization and Monitoring:
- Monitor data pipelines and infrastructure for performance and identify areas for optimization.
- Apply performance-tuning techniques to databases and data processing jobs.
- Set up and maintain monitoring and alerting systems for data pipelines and infrastructure.

Emerging Technologies:
- Stay up to date with the latest trends and technologies in data engineering and big data.
- Evaluate and recommend new technologies and tools to improve data infrastructure and processes.
- Potentially contribute to the development and implementation of real-time data processing solutions.

Required Skills and Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6-7 years of hands-on experience designing, building, and maintaining data pipelines and data infrastructure.
- Strong understanding of data warehousing concepts, dimensional modeling, and ETL/ELT processes.
- Proficiency in at least one programming language such as Python (essential), Scala, or Java.
- Experience with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
- Solid understanding of cloud platforms such as AWS, Azure, or GCP, and experience with cloud-based data services.
- Experience with data pipeline tools and frameworks (e.g., Apache Spark, Apache Kafka, Airflow, NiFi).
- Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus.
- Experience with version control systems (Git).
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.
- A proactive, results-oriented attitude.

Preferred Skills and Experience:
- Experience with real-time data processing technologies (e.g., Apache Flink, Apache Storm).
- Knowledge of data governance and data quality best practices.
- Experience with Infrastructure-as-Code (IaC) tools (e.g., Terraform, CloudFormation).
- Familiarity with DevOps practices and CI/CD pipelines.
- Experience with big data technologies and distributed systems.

(ref:hirist.tech)

Location: Coimbatore, IN

Posted Date: 3/29/2025

Contact Information

Contact Human Resources
Fulcrum Digital

UID: 5117062414
