Mid-Senior level
Posted March 25, 2026
Responsibilities
- This role requires experience developing applied AI solutions that support real business use cases and are deployed within enterprise systems.
- The Applied AI Engineer builds and deploys AI solutions that directly support business workflows across Commercial, Regulatory, Quality, Finance, Operations, and Corporate functions.
- This role focuses on turning real business problems into working AI applications—including copilots, retrieval-augmented generation (RAG) solutions, document generation, automation agents, predictive models and decision-support tools.
- Works closely with business SMEs, Data Engineering, and the AI Governance team to ensure solutions are secure, compliant, explainable, and production-ready in a regulated life-sciences environment.
- Build AI applications such as enterprise copilots, search assistants, document intelligence and generation tools, workflow-automation agents, predictive models, decision‑support tools, and reusable AI components including prompt libraries and solution patterns
- Implement Retrieval-Augmented Generation (RAG) pipelines leveraging enterprise data sources such as SharePoint, data lakes, document repositories, and research systems
- Build and maintain end‑to‑end AI/ML pipelines including data ingestion, feature engineering, model training, evaluation, deployment, and monitoring
- Integrate LLMs into business workflows using APIs and platforms such as Azure OpenAI, OpenAI, Anthropic, and AWS Bedrock
- Develop prompt-engineering, grounding, and evaluation frameworks to improve accuracy, reliability, and alignment
- Translate business use cases across domains (e.g., medical affairs, regulatory, commercial, finance) into functional AI prototypes and production-ready applications
- Collaborate with Data Scientists to scale models into production systems and with Product Owners/SMEs to refine requirements, acceptance criteria, and success metrics
- Deploy and maintain AI solutions on cloud platforms using modern APIs and software‑engineering best practices
- Implement MLOps and LLMOps capabilities including versioning, monitoring, logging, performance tracking, observability, and workload cost optimization
- Implement guardrails and controls to prevent data leakage, hallucinations, and misuse
- Integrate AI solutions with enterprise identity and data‑security frameworks, including RBAC, Purview, and related governance tools
- Ensure all AI systems are reliable, scalable, and secure, and that they comply with data‑classification rules, privacy requirements, and AI governance policies
- Must be able to pass and clear a background check prior to starting
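The RAG and copilot responsibilities above can be sketched in miniature. The toy pipeline below is illustrative only: the document names, contents, and keyword-overlap scoring are hypothetical stand-ins for a real retriever and LLM call.

```python
# Minimal sketch of a RAG retrieval step: score enterprise documents by
# keyword overlap with the query, then build a grounded prompt for an LLM.
# Document contents and the prompt template are illustrative placeholders.

def retrieve(query: str, documents: dict[str, str], k: int = 1) -> list[str]:
    """Return the k document names whose text best overlaps the query terms."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda name: len(terms & set(documents[name].lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str, documents: dict[str, str]) -> str:
    """Build a prompt instructing the model to answer only from retrieved context."""
    context = "\n".join(documents[name] for name in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

docs = {
    "sop_quality.txt": "Quality deviations must be logged within 24 hours.",
    "finance_faq.txt": "Expense reports are approved by the finance team monthly.",
}
print(retrieve("When must quality deviations be logged?", docs))
```

In production this retrieval step would typically use a vector database and embeddings rather than word overlap, with the grounded prompt sent to a hosted LLM API.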
Commitments
- 3 days/week onsite (Tuesday, Wednesday, Thursday)
- Location: San Diego, CA (Carmel Valley area)
- Full-time direct hire
- Salary: $140k - $165k + bonus + great benefits
The client will also require professional work references to be completed prior to starting.
Candidates must be legally authorized to work in the United States without current or future employer sponsorship.
If you are interested, please send me your updated Word resume, along with your direct phone number and email address.
Requirements
- Applied AI/ML engineering
- Prompt engineering & grounding techniques
- Generative AI & LLM integration (Azure OpenAI, OpenAI, Anthropic, AWS Bedrock)
- Enterprise data integration (SharePoint, data lakes, document repositories)
- RAG architectures, vector databases, and semantic search
- Cloud and API application development (Azure/AWS/GCP)
- Python engineering
- MLOps / LLMOps (monitoring, logging, versioning, observability, cost optimization)
- Security‑aware engineering (RBAC, Purview, guardrails)
- Responsible AI, governance, explainability, and data‑classification frameworks
- Business problem‑solving & systems thinking
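As an illustration of the security‑aware engineering and guardrail items above, a minimal output guardrail might redact PII-like spans before a response reaches the user. The patterns and policy here are hypothetical, not the client's actual rule set.

```python
import re

# Minimal sketch of an output guardrail: redact responses that appear to leak
# personally identifiable information before they reach the user.
# The patterns below are illustrative, not a complete PII rule set.

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def apply_guardrail(text: str) -> str:
    """Redact PII-like spans; callers could also log or escalate instead."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(apply_guardrail("Contact jane.doe@example.com, SSN 123-45-6789."))
```

Enterprise deployments usually layer checks like this with platform-level controls (content filters, RBAC, data-classification labels) rather than relying on regexes alone.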
Skills Needed:
- Strong proficiency in Python is required
- Experience building and deploying applications using LLM APIs and AI solutions in cloud environments (Azure, AWS)
- Experience in Applied AI/ML & Prompt Engineering, Generative AI & LLM Integration, Enterprise Data Integration, API & Cloud Application Development, and Security-aware Engineering
- Hands-on experience with ML frameworks (PyTorch, TensorFlow, scikit-learn)
- Strong understanding of data engineering fundamentals, APIs, and distributed systems
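The prompt-engineering and evaluation skills above can be sketched as a small eval harness: run a fixed set of question/answer cases and report accuracy. The stub answer function and cases below are hypothetical placeholders for a real LLM API call.

```python
# Minimal sketch of an evaluation harness for an LLM-backed answerer:
# run a fixed eval set and report the fraction of correct answers.
# stub_answer stands in for a real model call via an API client.

def stub_answer(question: str) -> str:
    # Hypothetical stand-in for an LLM call.
    canned = {
        "capital of france?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(question.lower(), "unknown")

def evaluate(answer_fn, cases: list[tuple[str, str]]) -> float:
    """Return the fraction of cases where the answer matches the expectation."""
    hits = sum(
        answer_fn(question).strip().lower() == expected.lower()
        for question, expected in cases
    )
    return hits / len(cases)

cases = [
    ("Capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("Capital of Spain?", "Madrid"),
]
print(evaluate(stub_answer, cases))  # 2 of 3 cases pass
```

Real evaluation frameworks extend this pattern with graded scoring (groundedness, relevance, safety) instead of exact match, and log results per model and prompt version.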
Preferred Skills
- Strong stakeholder communication and cross‑functional collaboration
Preferred Experience
- Experience with RAG pipelines, vector databases, and semantic search systems
- Exposure to Azure OpenAI, Copilot Studio, LangChain, LlamaIndex, or similar AI frameworks
- Familiarity with MLOps platforms such as MLflow, SageMaker, Azure ML, or Databricks
- Experience working in regulated or data‑sensitive environments (e.g., pharma, healthcare, finance)
- Experience building enterprise copilots, agentic AI systems, or intelligent automation solutions
Education
- Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field, with at least 8 years of experience in software engineering, data engineering, or applied AI engineering (an equivalent combination of education and experience may be considered)
About the Role
Our direct client is looking for an Applied AI Engineer to join the team as a full-time direct hire. The position offers a full and comprehensive benefits program, including an Employee Stock Purchase Program and 401(k) matching. We are specifically looking for an Applied AI Engineer who has experience building and deploying AI solutions used by the business in enterprise environments.