Associate
Posted April 3, 2026
Data Engineer

About the Job

At Block+Tackle, we help organizations turn complex marketing ecosystems into scalable, connected systems. Not just pipelines. Not just platforms. Data environments that transform messy, real-world information into reliable intelligence teams can actually use.

We’re looking for a Data Engineer who can design and build the infrastructure that powers modern marketing and analytics. Someone who thrives at the intersection of data architecture, ETL development, and cloud platforms, and who understands how data should move, behave, and scale across tools and teams.

This role is ideal if you enjoy turning raw data into reliable pipelines, collaborating with architects and analysts, and building systems that help organizations make smarter decisions.

What You’ll Work On

Data Pipelines + Integration
Design, build, and maintain scalable ETL pipelines using Azure Data Factory, Databricks, and cloud-based tools
Integrate data across platforms to enable reliable analytics and operational use cases
Collaborate with architects and analysts to translate business requirements into data solutions

Performance + Optimization
Optimize data pipelines for performance, reliability, and scalability
Automate workflows and orchestration to ensure stable and efficient data movement
Identify and resolve pipeline issues, bottlenecks, and failures proactively

Data Quality + Governance
Implement validation checks, monitoring, and governance controls to ensure data accuracy and integrity
Support data lineage, documentation, and standards across pipelines and systems
Ensure compliance with data governance and security best practices

Architecture + Documentation
Develop and maintain documentation for ETL processes, data models, and data architecture
Contribute to the design of scalable cloud data environments, including lakes and warehouses
Stay current with evolving tools, technologies, and practices in cloud data engineering

How You’ll Show Up
Build pipelines and workflows that teams can trust
Translate data requirements into scalable engineering solutions
Collaborate closely with architects, analysts, and consultants
Take ownership of pipeline reliability and data quality
Document systems clearly so others can understand and build on your work
Continuously improve tooling, automation, and engineering practices

What We’re Looking For
Data Engineering Expertise: Experience designing and building ETL pipelines and scalable data workflows.
Cloud Data Platforms: Hands-on experience with Azure Data Factory, Databricks, or similar cloud data engineering tools.
SQL + Data Modeling: Strong SQL skills with the ability to write complex queries and support relational data structures.
Data Architecture Understanding: Familiarity with data lakes, warehouses, and modern cloud-based data platforms.
Problem Solving: Strong analytical thinking with the ability to troubleshoot pipeline issues and optimize performance.
Collaboration: Works effectively with architects, analysts, and cross-functional teams to deliver reliable data solutions.

Preferred Experience
Experience with Python, Scala, or other programming languages used in data engineering
Familiarity with data governance, security, and compliance frameworks
Experience working with AWS or GCP data platforms
Exposure to DevOps practices and CI/CD for data pipelines
Understanding of marketing or customer data ecosystems

What Success Looks Like
Pipelines you build run reliably and scale as data grows
Data flows cleanly across systems without manual intervention
Teams trust the data infrastructure you’ve created
Architects and analysts can move faster because of your work
You help transform raw data into systems that drive real decisions