Posted March 30, 2026
About Cadre AI
Cadre AI is an AI strategy and integration firm that builds production AI systems for B2B companies in private equity, wholesale lending, real estate, and SaaS. We don’t build decks about what AI could do. We ship systems that move revenue, compress costs, and automate the work that used to take entire teams.
The Role
The AI Engineer at Cadre AI is a core builder on our delivery teams. You design, develop, and ship the AI systems that power our clients’ most critical workflows—RAG pipelines, LLM agents, document intelligence, conversational AI, and predictive models. This is not a research role. You work on production systems used by real businesses, and you are accountable for how they perform.
You’ll work closely with Forward Deployed Engineers, AI Strategists, and client stakeholders to turn business problems into scalable, reliable AI solutions. You bring strong ML fundamentals, hands-on LLM expertise, and the engineering discipline to take a model from experiment to production without cutting corners.
What You’ll Do
Build and Ship Production AI Systems
- Design and implement machine learning models and AI pipelines for OCR, retrieval-augmented generation (RAG), conversational agents, document intelligence, and content generation
- Architect and deploy LLM-powered systems using OpenAI GPT-4, Anthropic Claude, Google Gemini, and open-source models including Llama
- Build robust agent frameworks, LLM chains, and orchestration layers that hold up in production under real-world conditions
- Deploy AI models into production environments and own their ongoing performance—monitoring, debugging, and iterating as needed

Optimize Models and Prompts
- Develop and optimize prompts for LLMs to improve output quality and align model behavior with specific business objectives
- Evaluate and fine-tune ML models and LLMs using rigorous testing, validation, and performance benchmarking methodologies
- Build evaluation frameworks and regression suites that prove your systems work—and catch when they don’t
- Apply the right technique for the problem: prompt engineering, RAG, fine-tuning, or classical ML based on the actual constraints of data, latency, and budget

Own Data and Infrastructure
- Collect, preprocess, and manage large datasets to support AI model training and deployment
- Design and enforce data quality rules, ETL pipelines, and secure cloud deployments across AWS, GCP, and Azure
- Work with SQL (PostgreSQL) and data warehouse tooling (Snowflake preferred) to support model pipelines and analytics
- Ensure clean data pipelines, strong governance, and actionable outputs across every engagement

Collaborate and Contribute
- Work closely with Forward Deployed Engineers, AI Strategists, and product teams to integrate AI models into client-facing products and services
- Stay current on advancements in generative AI, LLMs, and ML frameworks—bringing new tools and techniques to the team when they’re actually worth using
- Contribute to Cadre’s internal AI frameworks, prompt libraries, and reusable components that accelerate delivery across pods
- Mentor junior engineers through design reviews, pair programming, and hands-on coaching
Who You Are
- 5+ years of experience as an AI/ML Engineer or in a closely related role, with a focus on shipping production systems—not proofs of concept
- Hands-on expertise with modern LLMs including OpenAI GPT-4, Anthropic Claude, Google Gemini, and open-source models like Llama
- Strong experience building RAG systems, agent frameworks, and LLM chains that work reliably at scale
- Proficiency in Python and experience with ML frameworks including PyTorch
- Solid understanding of machine learning algorithms, deep learning techniques, and natural language processing
- Experience evaluating ML models and LLMs using appropriate metrics and methodologies—you know the difference between a good demo and a reliable system
- SQL proficiency (PostgreSQL) and data warehouse experience (Snowflake preferred)
- Strong analytical and problem-solving skills, with the ability to work across disciplines and communicate clearly with non-technical stakeholders
What Sets You Apart
- You have deployed AI models on cloud platforms (AWS, GCP, or Azure) and know what production readiness actually requires
- Open-source contributions to AI projects or active participation in AI research communities—you build in public and share what you learn
- Experience with big data technologies like Hadoop or Spark
- Domain knowledge in financial services, real estate, lending, or B2B SaaS—you understand the business context behind the systems you build
- A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
Why Cadre AI
- Real ownership. You build the systems, ship them, and see the impact directly. No writing tickets for someone else to close.
- Variety and velocity. Every engagement is a different industry, a different problem, and a new chance to build something that matters. You won’t get bored.
- AI-native culture. We don’t just build AI for clients. We use it to run our own operations. You’ll work with people who are as obsessed with the tools as you are.
- Access to the frontier. Through partnerships with Anthropic, OpenAI, and YC, you’ll be among the first to experiment with new models and capabilities.
- Upside. We’re bootstrapped, profitable, and growing fast. Early team members share in the success they help create.
- No bureaucracy. Small pods. Clear accountability. The best idea wins, regardless of who says it.
Cadre AI is building the future of how companies adopt and operate AI. We believe the best AI systems come from engineers who understand both the technology and the business it serves. If that’s how you think, we want to talk.
Compensation
The base pay range for this role is $70,000 – $85,000 per year.