Snowflake Computing
Senior Software Engineer - Polaris & Data Lake Catalog
Job Description
Build the future of the AI Data Cloud. Join the Snowflake team.
Snowflake's vision is to enable every organization to be data-driven. We're at the forefront of innovation, helping customers realize the full potential of their data with our Data Cloud. We are now going beyond the traditional data warehouse, helping customers unlock the power of the open data lakehouse architecture with our Polaris project, an open-source implementation of the Apache Iceberg REST catalog.
As a Senior Software Engineer on the Polaris and Data Lake Catalog team, you'll play a key role in building and evolving our open and interoperable data lake ecosystem. You'll work on some of the most complex and exciting challenges in distributed systems, contributing to Snowflake's mission of providing a truly open data lake architecture, free from vendor lock-in.
AS A SENIOR SOFTWARE ENGINEER, YOU WILL:
Design and implement scalable, distributed systems to enable support for Iceberg DML/DDL transactions, schema evolution, partitioning, time travel, and more.
Architect and build systems that integrate Snowflake queries with external Iceberg catalogs (e.g., AWS Glue, Databricks Unity Catalog) and various data lake architectures, enabling seamless interoperability across cloud providers.
Develop high-performance, low-latency solutions for catalog federation, allowing customers to manage and query their data lake assets across multiple catalogs from a single interface.
Collaborate with Snowflake's open-source team and the Apache Iceberg community to contribute new features and enhance the Iceberg REST specification.
Work on core data access control and governance features for Polaris, including fine-grained permissions such as row-level security, column masking, and multi-cloud federated access control.
Contribute to our managed Polaris service, ensuring that external query engines like Spark and Trino can read from and write to Iceberg tables through Polaris in a way that's decoupled from Snowflake's core data platform.
Build tooling and services that automate data lake table maintenance, including compaction, clustering, and data retention for enhanced query performance and efficiency.
OUR IDEAL CANDIDATE WILL HAVE:
8+ years of experience designing and building scalable, distributed systems.
Strong programming skills in Java, Scala, or C++ with an emphasis on performance and reliability.
Deep understanding of distributed transaction processing, concurrency control, and high-performance query engines.
Experience with open-source data lake formats (e.g., Apache Iceberg, Parquet, Delta) and the challenges associated with multi-engine interoperability.
Experience building cloud-native services and working with public cloud providers like AWS, Azure, or GCP.
A passion for open-source software and community engagement, particularly in the data ecosystem.
Familiarity with data governance, security, and access control models in distributed data systems.
BONUS POINTS FOR EXPERIENCE:
Contributing to open-source projects, especially in the data infrastructure space.
Designing or implementing REST APIs, particularly in the context of distributed systems.
Managing large-scale data lakes or data catalogs in production environments.
Working on high-performance, scalable query engines such as Spark, Flink, or Trino.
WHY JOIN THE POLARIS TEAM:
Be part of a pioneering effort to build the most open and interoperable data lake ecosystem in the industry.
Work on a high-impact open-source project that solves real-world data challenges for enterprise customers like Netflix, AWS, and others.
Collaborate with some of the brightest minds in the data ecosystem, including core contributors to Apache Iceberg.
Have the opportunity to innovate in one of the fastest-growing and evolving areas in data infrastructure, where you can make a direct impact on Snowflake's growth and the broader open-source community.
Every Snowflake employee is expected to follow the company's confidentiality and security standards for handling sensitive data. Snowflake employees must abide by the company's data security plan as an essential part of their duties. It is every employee's duty to keep customer information secure and confidential.
Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake.
How do you want to make your impact?
The following represents the expected range of compensation for this role:
- The estimated base salary range for this role is $187,000 - $276,000.
- Additionally, this role is eligible to participate in Snowflake's bonus and equity plan.
The successful candidate's starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location. This role is also eligible for a competitive benefits package that includes: medical, dental, vision, life, and disability insurance; 401(k) retirement plan; flexible spending & health savings account; at least 12 paid holidays; paid time off; parental leave; employee assistance program; and other company benefits.
Location: Bellevue, WA, US
Posted Date: 11/24/2024
Contact Information
Contact: Human Resources, Snowflake Computing