Visionyle Solutions
Azure Databricks Engineer - Spark/Hadoop
Job Location
Bangalore, India
Job Description
Position: Databricks Engineer
Experience: 8 to 10 years
Notice Period: Immediate to 15 days
Location: Bengaluru (Hybrid)

Role Overview:
As a Databricks Engineer, you will collaborate with a globally distributed team to design, develop, and implement advanced data engineering solutions on cloud platforms such as AWS, Azure, or GCP. This role requires expertise in building, maintaining, and optimizing data workflows and in supporting robust data warehousing and analytics solutions. You will work closely with cross-functional teams to align data engineering solutions with business requirements, improve data accessibility, and support strategic data initiatives.

Job Description:
- As a Databricks Engineer, you will work with multiple teams to deliver solutions on the cloud using core cloud data warehouse tools.
- Must be able to analyze data and develop strategies for populating data lakes where required.

Responsibilities:
- Work as part of the GIS A2V globally distributed team to design and implement Hadoop big data solutions in alignment with business needs and project schedules.
- 5 years of data warehousing/engineering and software solutions design and development experience.
- Code, test, and document new or modified data systems to create robust and scalable applications for data analytics.
- Work with other big data developers to ensure that all data solutions are consistent.
- Partner with the business community to understand requirements, determine training needs, and deliver user training sessions.
- Perform technology and product research to better define requirements, resolve important issues, and improve the overall capability of the analytics technology stack.
- Evaluate and provide feedback on future technologies and new releases/upgrades.
- Support big data and batch/real-time analytical solutions leveraging transformational technologies.
- Work on multiple projects as a technical team member, or drive user requirement analysis and elaboration, design and development of software applications, testing, and build automation tools.
- Research and incubate new technologies and frameworks.
- Experience with Agile or other rapid application development methodologies and tools such as Bitbucket, Jira, and Confluence.
- Have built solutions with public cloud providers such as AWS, Azure, or GCP.

Data Architecture & Design:
- Architect and design scalable data engineering solutions using Databricks and other cloud-native tools, aligning solutions with business needs, data strategies, and project timelines.

Data Lake & Data Warehouse Development:
- Analyze data sources and design strategies for populating data lakes and data warehouses.
- Develop, optimize, and implement ETL processes to facilitate smooth data integration and accessibility.

Big Data Solutions Development:
- Work within a distributed team to create and deploy Hadoop-based big data solutions.
- Code, test, and document data pipelines and applications to support high-performance, scalable data analytics.

Collaboration & Requirement Gathering:
- Partner with business stakeholders to understand project goals and data requirements.
- Facilitate the translation of business needs into technical requirements, providing guidance on data strategy and best practices.

Expertise in:
1. Hands-on experience with the Databricks stack
2. Data engineering technologies (e.g., Spark, Hadoop, Kafka)
3. Proficiency in streaming technologies
4. Hands-on Python and SQL
5. Implementing data warehousing solutions
6. Any ETL tool (e.g., SSIS, Redwood)
7. Good understanding of submitting jobs using Workflows, the API, and the CLI (a minimal example follows below)

(ref:hirist.tech)
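For item 7, the snippet below is a minimal sketch, not part of the original posting, of triggering an existing Databricks job run through the Jobs 2.1 REST API in Python; the workspace URL, access token, and job ID are placeholder assumptions, and the same run can also be started from a Workflows schedule or the Databricks CLI.

    # Minimal sketch: trigger an existing Databricks job via the Jobs 2.1 REST API.
    # DATABRICKS_HOST, DATABRICKS_TOKEN, and JOB_ID are placeholders for illustration.
    import os
    import requests

    DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.azuredatabricks.net
    DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token
    JOB_ID = 123                                       # hypothetical job ID

    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
        json={"job_id": JOB_ID},
        timeout=30,
    )
    resp.raise_for_status()
    print("Started run:", resp.json()["run_id"])

    # Roughly equivalent CLI call:
    #   databricks jobs run-now --job-id 123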
Location: Bangalore, IN
Posted Date: 11/9/2024
Contact Information
Contact: Human Resources, Visionyle Solutions