Mid-Senior level
Posted March 13, 2026
Contract Senior Data Engineer
Location: Hybrid (Charlotte, NC)
Duration: 6 months, contract to hire
Team Size: 7 (6 staff + 1 lead)

Overview
We are seeking a Senior Data Engineer to support the stabilization and optimization of our data warehouse. This is a hands-on contract role with approximately 50% coding and 50% design/consulting responsibilities. The ideal candidate will have strong experience in Databricks, Snowflake, and SAP ECC; a background in supply-chain or manufacturing data is preferred.

Primary goals
- Stabilize the bronze (raw extract) layer of the data warehouse.
- Optimize the silver/gold medallion layers for performance and reliability.
- Reduce overnight ETL batch lag (current window: midnight → ~7 AM).
- Consult on pipeline design and recommend efficiency improvements.

Work model: Hybrid, approximately 3 days per week onsite.

Key Responsibilities
- Participate in a workshop to clarify detailed scope and stabilization priorities.
- Collaborate with team members and stakeholders to design and implement efficient data pipelines.
- Build structurally sound data for the silver-layer warehouse using SQL and PySpark.
- Optimize ETL processes to enable near-real-time operational visibility.
- Support onboarding and knowledge transfer to other engineers.

Technical priorities
- Databricks (notebooks, PySpark, SQL) – primary focus.
- Snowflake – data warehousing and optimization.
- SAP ECC / Oracle table structure knowledge – especially for supply-chain/manufacturing data.
- Azure Data Factory – extraction pipelines.
- Power BI – dashboards and reporting; SSIS/SSRS not required.

Technical Stack & Architecture
- ETL extraction: Azure Data Factory from SAP HANA
- Transformation: Databricks notebooks with PySpark/SQL
- Data storage: Snowflake
- Consumption/reporting: Power BI
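To give a flavor of the transformation step in this stack, the sketch below shows a typical bronze→silver deduplication: keeping only the latest version of each record across overnight batch loads. This is an illustration only, not code from the posting — the table shape and column names (`work_order`, `extracted_at`) are hypothetical, and it is written in plain Python rather than PySpark so it is self-contained.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical bronze-layer row: raw SAP extracts often contain multiple
# versions of the same business record across successive batch loads.
@dataclass
class BronzeRow:
    work_order: str         # business key (hypothetical column name)
    status: str
    extracted_at: datetime  # load timestamp written by the extraction job

def to_silver(bronze_rows):
    """Keep only the latest version of each work order — a common
    bronze -> silver deduplication step in a medallion architecture."""
    latest = {}
    for row in bronze_rows:
        current = latest.get(row.work_order)
        if current is None or row.extracted_at > current.extracted_at:
            latest[row.work_order] = row
    # Deterministic output order for downstream consumers.
    return sorted(latest.values(), key=lambda r: r.work_order)
```

In Databricks this logic would more likely be expressed with a window function (`row_number()` partitioned by the business key, ordered by load timestamp descending) in PySpark or SQL, but the dedup idea is the same.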
Batch window: jobs start 12:00–12:30 AM and finish around 7:00 AM.

Operational challenge: current pipelines deliver third-shift manufacturing data a day late, limiting timely decision-making. The candidate will help redesign the architecture for faster, more reliable processing.

Candidate Requirements
- Experience: at least 8 years as a Data Engineer or in a similar role.
- Strong hands-on experience with SQL, PySpark, and Databricks.
- Familiarity with Snowflake and SAP ECC.
- Background in supply-chain or manufacturing data preferred.
- Proven ability to consult on data pipeline design and performance optimization.
- Capable of working independently and collaboratively in a hybrid/remote setup.

Interview Process
1. Virtual one-on-one with the hiring manager (30 min, with camera).
2. Cultural/fit interview (30 min, onsite if local).
3. Panel interview – technical deep dive, same day (1–1.5 hrs total).

Total candidate time: 1.5–2 hours.