Associate
Posted March 26, 2026
Responsibilities
- Design and implement end-to-end ML pipelines for data ingestion, feature engineering, model training, validation, deployment, and monitoring
- Deploy and manage ML models in production across AWS, Azure, and Snowflake-based ecosystems
- Build batch and real-time inference pipelines using cloud-native and platform-native services
- Automate model packaging, testing, release, and rollback using CI/CD best practices
- Integrate ML workflows with services such as AWS SageMaker, AWS Lambda, Azure Machine Learning, Azure Data Factory, and Snowflake
- Build and maintain orchestration workflows using tools such as Airflow, Azure Data Factory, or similar platforms
- Implement experiment tracking, model registry, and model governance processes
- Monitor model accuracy, drift, latency, throughput, pipeline failures, and infrastructure usage
- Establish deployment strategies such as canary, shadow, blue-green, and rollback mechanisms
- Collaborate with cross-functional teams to move models from research to production
- Ensure security, compliance, traceability, and access control for models and data across cloud environments
- Optimize platform performance, reliability, and cost across AWS, Azure, and Snowflake
- Document architecture, deployment standards, and operational procedures
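The monitoring responsibility above calls out drift detection specifically. As a minimal sketch of one common approach, a Population Stability Index check can be written in plain Python; the bucket count and the ~0.2 "significant drift" threshold are widely used conventions, not requirements stated in this posting:

```python
import math
from collections import Counter

def psi(expected, actual, buckets=10):
    """Population Stability Index between two numeric samples.
    Scores above roughly 0.2 are often treated as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets or 1.0  # guard against a constant feature
    def dist(values):
        counts = Counter(min(int((v - lo) / width), buckets - 1) for v in values)
        # floor each bucket share at a tiny epsilon so the log stays finite
        return [max(counts.get(i, 0) / len(values), 1e-6) for i in range(buckets)]
    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]  # e.g. training-time feature values
drifted = [v + 0.5 for v in baseline]     # the same feature after a shift
```

In production this kind of check would typically run per feature on each scoring batch, with an alert routed through whatever observability stack the platform uses.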
Commitments
Job Title: MLOps Engineer
Job Location: Houston, TX 77002 (Hybrid, 4 days a week in office)
Job Duration: 8-month contract (with possible extension)
Requirements
- Must-have: Hands-on experience with AWS, Microsoft Azure, and Snowflake in building or supporting production ML/data platforms.
- Five or more years of relevant experience
- Proven experience in MLOps, ML engineering, platform engineering, or DevOps
- Strong hands-on experience with AWS, Microsoft Azure, and Snowflake
- Strong programming skills in Python and SQL
- Experience deploying and managing ML models in production
- Experience with cloud ML services such as AWS SageMaker and Azure Machine Learning
- Experience building data pipelines and integrating with Snowflake
- Knowledge of CI/CD pipelines, infrastructure automation, and model versioning
- Experience with containerization and orchestration tools such as Docker and Kubernetes
- Experience with workflow orchestration tools such as Airflow, Azure Data Factory, or similar
- Familiarity with model monitoring, logging, alerting, and observability
- Solid understanding of data engineering concepts, APIs, and distributed processing
- Strong troubleshooting, communication, and cross-team collaboration skills

Preferred Qualifications
- Experience with Snowflake Cortex AI, Snowpark, or ML workloads in Snowflake
- Experience with Amazon Bedrock, Azure OpenAI, or production LLM workflows
- Experience with real-time inference, event-driven pipelines, and serverless architectures
- Familiarity with feature stores, vector databases, and RAG-based systems
- Experience with Terraform, AWS CloudFormation, or Azure infrastructure-as-code tools
- Understanding of security, compliance, and governance requirements for regulated environments
- Experience with production A/B testing, shadow deployment, and rollback strategies
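The canary strategy named in the lists above amounts to weighted request routing between a stable and a candidate model. A minimal sketch, with every name and the 10% weight being illustrative rather than anything from this posting:

```python
import random

def make_canary_router(stable_fn, canary_fn, canary_weight=0.05, seed=None):
    """Send roughly `canary_weight` of requests to the canary model and the
    rest to the stable one; rollback is just setting the weight back to 0."""
    rng = random.Random(seed)
    def route(request):
        model = canary_fn if rng.random() < canary_weight else stable_fn
        return model(request)
    return route

# Hypothetical model callables standing in for two deployed versions.
stable = lambda request: ("v1", request)
canary = lambda request: ("v2", request)

route = make_canary_router(stable, canary, canary_weight=0.1, seed=42)
versions = [route(i)[0] for i in range(10_000)]
```

In a real platform the weight would live in a load balancer or gateway config rather than application code, so it can be changed (or zeroed for rollback) without a redeploy.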
Education
- (Not required) – Master’s or advanced degree (PhD) in Computer Science, Computer Engineering, or a similar field
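Several required items above (model versioning, a model registry, rollback) fit one pattern: versioned artifacts plus a movable production pointer. A toy in-memory sketch of that idea follows; a real deployment would use a managed registry such as SageMaker Model Registry or MLflow, and all names below are hypothetical:

```python
class ModelRegistry:
    """Toy in-memory registry: immutable versioned artifacts plus a movable
    'production' pointer, so promotion and rollback are one-line operations."""
    def __init__(self):
        self._versions = {}      # version number -> artifact
        self._production = None  # currently serving version

    def register(self, artifact):
        version = len(self._versions) + 1
        self._versions[version] = artifact
        return version

    def promote(self, version):
        if version not in self._versions:
            raise KeyError(f"unknown model version {version}")
        previous, self._production = self._production, version
        return previous  # keep this handy for rollback

    def production(self):
        return self._versions[self._production]

registry = ModelRegistry()
v1 = registry.register("weights-2026-03-01")
v2 = registry.register("weights-2026-03-20")
registry.promote(v1)
previous = registry.promote(v2)  # v2 goes live; remember what it replaced
registry.promote(previous)       # rollback to v1
```

Because promotion only moves a pointer and never mutates an artifact, rollback is instant and auditable, which is what the governance and traceability requirements above are asking for.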
Job Summary
We are seeking an MLOps Engineer to design, deploy, monitor, and maintain machine learning solutions in production across AWS, Microsoft Azure, and Snowflake environments. The role partners with data scientists and cloud teams to operationalize ML models, automate pipelines, and build reliable, secure, and scalable ML platforms.
The ideal candidate has strong experience across the end-to-end ML lifecycle, cloud-native deployment, CI/CD automation, model monitoring, and production data pipelines, with hands-on expertise in AWS, Azure, and Snowflake.