Senior Snowflake Data Engineer | Large NYC Hospital | Salary + Bonus!!
Mid-Senior level
Posted March 14, 2026
Responsibilities
- Design, build, and maintain robust, scalable, and secure ETL/ELT pipelines.
- Develop and optimize data models to support analytics, BI, and machine learning use cases.
- Partner closely with data analysts, scientists, and business stakeholders to translate requirements into technical solutions.
- Monitor, tune, and optimize data processing workflows for efficiency, scalability, and cost-effectiveness.
- Troubleshoot pipeline failures and performance issues quickly and effectively.
- Support production workflows through CI/CD pipelines and version control (Git).
- Ensure data quality, integrity, and governance across the data lifecycle.
- Prototype and research new tools, technologies, and architectures for improving the existing data infrastructure.
Commitments
This is a hybrid position (4 days onsite, 1 day remote – no exceptions) offering a competitive salary, annual bonus, 401(k), PTO, and top-tier medical, dental, and vision benefits, along with additional excellent company perks.
Requirements
- Hands-on experience with cloud-based data warehouse platforms (e.g., Snowflake).
- Experience designing and deploying data pipelines using orchestration tools like DBT or Airflow.
- 5+ years’ experience supporting data-driven or ML applications in production environments.
- Understanding of modern data architecture, ELT/ETL best practices, and cloud data platforms (e.g., Azure, AWS, or GCP).
- Ability to analyze large datasets to identify data quality issues and other contextual insights.
- Experience in developing data models for integration and analysis that support business intelligence and data analytics initiatives.
- Experience with CI/CD and version control tools (e.g., Git).
- Proficiency with SQL and experience in at least one programming language (e.g., Python, Scala).
- A proven ability to operate in a fast-paced work environment.
- Excellent communication and organizational skills, a demonstrated team orientation, and the ability to drive projects autonomously as needed.
Preferred Skills
- Familiarity with Infrastructure as Code (IaC) and containerization (Docker, Kubernetes).
- Knowledge of data governance, security, and compliance frameworks.
- Experience in data modeling for large-scale distributed data warehouses with a strong understanding of design trade-offs and best practices.
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or equivalent experience (not required).
About the Role

Integris Group is partnering with a leading New York City hospital to hire a Senior Snowflake Data Engineer for a full-time, permanent position. Our client is seeking a highly motivated and experienced Senior Snowflake Data Engineer to join their data team and help design, build, and optimize their modern data platform. You will work with tools such as Snowflake, Fivetran, and DBT to create scalable, high-performance data pipelines and analytics solutions. This role will be instrumental in enabling advanced analytics, machine learning, and data-driven decision-making across the organization.

The role is not architecture alone: these senior engineers design the data models, pipelines, version control, and governance, and then also execute and deliver that work.