Posted March 13, 2026
Kai is the AI company rebuilding cybersecurity for the machine-speed era. Founded by second-time founders and trusted by Fortune 500 enterprises, Kai is building a future where security has no categories, no silos, and no human speed bottlenecks. The Kai Agentic AI Platform replaces fragmented, human-limited workflows with agentic AI systems that continuously contextualize, assess, reason, and execute security work at machine speed, making human defenders superhuman.
Why Join Kai
Well-funded: With $125M raised, we have the capital, runway, and resolve to rebuild cybersecurity from first principles.
Proven: We've earned the trust of Fortune 500 and Global 1000 companies, and we're just getting started. Their confidence in Kai reflects what we've built: an AI-powered cybersecurity platform that performs at the scale and speed the enterprise demands.
Experienced founders: Our founding team consists of second-time entrepreneurs, each with over 20 years of experience in the cybersecurity industry. Their proven expertise and vision drive our ambitious goals.
World-class leadership team: Our Heads of AI, Engineering, and Product bring extensive experience from some of the world’s most influential companies, ensuring top-tier mentorship, direction, and vision.
Frontier AI Applied Research Team: Our researchers operate at the leading edge of agentic AI systems, translating breakthrough capabilities into real-world cybersecurity applications.
Generous compensation: We offer highly competitive salaries, equity options, and a supportive work environment. Your contributions will be valued and rewarded as we grow together.
Why This Role Matters
This role focuses on automating workflows, improving platform reliability, and supporting data engineering teams with efficient development and deployment practices. In a world where digital experiences shape trust and adoption, DataOps increases development productivity, which directly drives product success and customer confidence.
What You’ll Do
- Design, deploy, and operate scalable data platforms and pipelines, primarily on Azure (Databricks, ADF, ADLS)
- Build, manage, and optimize Apache Spark clusters and workloads for batch and streaming data processing across Azure and AWS environments
- Implement CI/CD pipelines for data engineering code, Spark jobs, and pipeline configurations using Azure DevOps/GitHub Actions
- Automate infrastructure using Infrastructure as Code (Terraform) and manage containerized workloads with Docker and Kubernetes
- Monitor data pipelines and platforms to ensure data reliability, quality, observability, and cost optimization across Azure and AWS data platforms
- Enforce security, governance, and best practices, collaborating closely with data engineers and platform teams in Azure-first, multi-cloud environments
What We’re Looking For
- 6+ years of professional experience in data engineering, Data DevOps, or data platform engineering roles
- Proven experience supporting production-grade data platforms in enterprise environments
- Proven ability to design, build, deploy, and maintain scalable data pipelines (ETL/ELT)
- Deep understanding of Apache Spark for batch and streaming workloads
- Experience creating, configuring, and managing Spark clusters, including performance tuning and cost optimization
- Practical experience with at least one major cloud provider: AWS, Azure, or GCP
- Strong experience using Terraform for infrastructure automation
- Proven ability to diagnose and resolve system and infrastructure issues
- Bonus skill: experience deploying and managing Spark workloads on Azure Databricks or Azure Synapse
- This is a five-day-a-week in-office role based in our North San Jose, CA location