
Applied AI Engineer - Artificial Intelligence

LinkedIn · Avenue 45 · San Diego Metropolitan Area
Mid-Senior level · Posted March 25, 2026
Requirements
  • Applied AI/ML engineering
  • Prompt engineering & grounding techniques
  • Generative AI & LLM integration (Azure OpenAI, OpenAI, Anthropic, AWS Bedrock)
  • Enterprise data integration (SharePoint, data lakes, document repositories)
  • RAG architectures, vector databases, and semantic search
  • Cloud and API application development (Azure/AWS/GCP)
  • Python engineering
  • MLOps / LLMOps (monitoring, logging, versioning, observability, cost optimization)
  • Security‑aware engineering (RBAC, Purview, guardrails)
  • Responsible AI, governance, explainability, and data‑classification frameworks
  • Business problem‑solving & systems thinking
Skill Needed
  • Strong proficiency in Python is required
  • Experience building and deploying applications using LLM APIs and AI solutions in cloud environments (Azure, AWS)
  • Experience in Applied AI/ML & Prompt Engineering, Generative AI & LLM Integration, Enterprise Data Integration, API & Cloud Application Development, and Security-aware Engineering
  • Hands-on experience with ML frameworks (PyTorch, TensorFlow, scikit-learn)
  • Strong understanding of data engineering fundamentals, APIs, and distributed systems
In this role, you’ll have the opportunity to:
  • Build AI applications such as enterprise copilots, search assistants, document intelligence and generation tools, workflow-automation agents, predictive models, decision‑support tools, and reusable AI components including prompt libraries and solution patterns
  • Implement Retrieval-Augmented Generation (RAG) pipelines leveraging enterprise data sources such as SharePoint, data lakes, document repositories, and research systems
  • Build and maintain end‑to‑end AI/ML pipelines including data ingestion, feature engineering, model training, evaluation, deployment, and monitoring
  • Integrate LLMs into business workflows using APIs and platforms such as Azure OpenAI, OpenAI, Anthropic, and AWS Bedrock
  • Develop prompt-engineering, grounding, and evaluation frameworks to improve accuracy, reliability, and alignment
  • Translate business use cases across domains (e.g., medical affairs, regulatory, commercial, finance) into functional AI prototypes and production-ready applications
  • Collaborate with Data Scientists to scale models into production systems and with Product Owners/SMEs to refine requirements, acceptance criteria, and success metrics
  • Deploy and maintain AI solutions on cloud platforms using modern APIs and software‑engineering best practices
  • Implement MLOps and LLMOps capabilities including versioning, monitoring, logging, performance tracking, observability, and workload cost optimization
  • Integrate AI solutions with enterprise identity and data‑security frameworks, including RBAC, Purview, and related governance tools
  • Ensure all AI systems are reliable, scalable, and secure, and that they comply with data‑classification rules, privacy requirements, and AI governance policies
Conditions
  • Must be able to pass and clear a background check prior to starting.
  • The client will also require professional work references to be completed prior to starting.
  • Candidates must be legally authorized to work in the United States without current or future employer sponsorship.
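The RAG responsibilities listed above can be pictured with a minimal retrieval-and-grounding sketch in plain Python. Everything here is a toy stand-in: the `Doc` class, the hand-written embeddings, and the cosine ranking replace what a real pipeline would get from an embedding model and a vector database.

```python
from dataclasses import dataclass
import math

@dataclass
class Doc:
    doc_id: str
    text: str
    vector: list[float]  # embedding, assumed precomputed elsewhere

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: list[float], corpus: list[Doc], k: int = 2) -> list[Doc]:
    # Rank documents by similarity to the query embedding; keep the top k.
    return sorted(corpus, key=lambda d: cosine(query_vec, d.vector), reverse=True)[:k]

def build_prompt(question: str, context_docs: list[Doc]) -> str:
    # Grounding step: instruct the model to answer only from retrieved context.
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in context_docs)
    return (
        "Answer using only the context below. Cite doc IDs.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

The grounded prompt string would then be sent to whichever LLM API the stack uses (Azure OpenAI, Anthropic, Bedrock); the retrieval and prompt-assembly shape stays the same across providers.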
Preferred Skills
  • Strong stakeholder communication and cross‑functional collaboration
Preferred Experience
  • Experience with RAG pipelines, vector databases, and semantic search systems
  • Exposure to Azure OpenAI, Copilot Studio, LangChain, LlamaIndex, or similar AI frameworks
  • Familiarity with MLOps platforms such as MLflow, SageMaker, Azure ML, or Databricks
  • Experience working in regulated or data‑sensitive environments (e.g., pharma, healthcare, finance)
  • Familiarity with AI governance, Responsible AI, model explainability, and data‑classification standards
  • Experience building enterprise copilots, agentic AI systems, or intelligent automation solutions
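The evaluation-framework work the posting describes for prompt engineering and grounding can start as small as a keyword-based check. The sketch below is purely illustrative — `evaluate_grounding`, the case format, and the pluggable `answer_fn` are hypothetical names, and the real LLM call is abstracted behind the function argument.

```python
from typing import Callable

def evaluate_grounding(
    answer_fn: Callable[[str, str], str],
    cases: list[dict],
) -> dict:
    """Score a question-answering function against expected keywords.

    `answer_fn(question, context)` stands in for any LLM call; each case
    supplies a question, grounding context, and keywords a properly
    grounded answer should contain.
    """
    passed = 0
    failures = []
    for case in cases:
        answer = answer_fn(case["question"], case["context"]).lower()
        missing = [kw for kw in case["expect_keywords"] if kw.lower() not in answer]
        if missing:
            failures.append({"question": case["question"], "missing": missing})
        else:
            passed += 1
    return {"passed": passed, "total": len(cases), "failures": failures}
```

In practice the keyword check would be replaced by richer metrics (citation accuracy, faithfulness scoring, human review), but the harness shape — a suite of cases run against a swappable model function, producing a pass/fail report — is the part that carries over.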
Education
  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field, with at least 8 years of experience in software engineering, data engineering, or applied AI engineering (not strictly required; an equivalent combination of experience and education may be considered)