Junior Research Infrastructure Engineer

MeshyAI · Sunnyvale, CA (via LinkedIn)
Posted March 13, 2026
Requirements
Technical Background
  • 2+ years of experience in software engineering, backend development, or distributed systems.
  • Strong programming skills in Python; Scala, Java, or C++ a plus.
  • Familiarity with distributed frameworks (Spark, Dask, Ray) and cloud platforms (AWS/GCP/Azure).
  • Experience with workflow orchestration tools (Temporal, Celery, or Airflow).
  • Proficiency with Infrastructure as Code (Terraform) and CI/CD tools (GitHub Actions).
Frontend & User Experience
  • Experience building web applications or internal tools using React or Next.js.
  • A "product-first" mindset: an interest in how users interact with infrastructure and a desire to build clean, functional interfaces.
  • The 70/30 Specialist: You enjoy deep systems engineering but are equally excited to build the UI that makes those systems accessible.
  • Comfortable in a startup environment: versatile, self-directed, pragmatic, and adaptive.
  • Strong problem solver who enjoys tackling ambiguous challenges and "0 to 1" building.
Preferred Skills
Domain Skills
  • Experience handling large-scale unstructured datasets (images, video, binaries, or 3D/2D assets).
  • Familiarity with AI/ML training data pipelines, including dataset versioning, augmentation, and sharding.
  • Exposure to computer graphics or 3D/2D data processing.
  • Kubernetes (K8s) for distributed workloads and cluster orchestration.
  • Data lakehouse platforms, specifically Databricks and Databricks Asset Bundles (DABs).
  • Familiarity with GPU-accelerated computing and HPC clusters.
  • Experience with 3D/2D asset processing (geometry transformations, rendering pipelines).