Posted April 2, 2026
Description
We are a robotics company building autonomous systems that operate in complex, dynamic environments. Our perception stack enables our robots to understand their surroundings, localize, and navigate in real time, and we place a strong emphasis on robustness, performance, and maintainable engineering.
We are seeking a Perception Engineer to design and implement SLAM, state estimation, and computer vision algorithms for real-world robotic systems. You will work closely with robotics, controls, and systems engineers to bring perception algorithms from research into reliable, production-ready software.
This role is ideal for someone who enjoys bridging the gap between theory and deployment—turning academic algorithms into efficient, well-engineered systems.
Responsibilities
- Design and implement SLAM and localization systems (visual, visual-inertial, lidar, or multi-sensor)
- Develop and integrate computer vision pipelines for perception tasks such as feature extraction, tracking, mapping, and scene understanding
- Implement and optimize estimation algorithms (e.g., filtering, optimization-based methods)
- Fuse data from multiple sensors (cameras, IMUs, lidars, depth sensors)
- Evaluate perception system performance using real-world data and metrics
- Optimize algorithms for real-time performance and robustness
- Collaborate with controls and planning teams to support downstream autonomy
- Maintain clean, well-tested, production-quality code
- Contribute to tooling, datasets, and evaluation frameworks
Requirements
- Strong background in robotics perception or computer vision
- Experience implementing SLAM or localization systems in practice
- Solid understanding of:
  - 3D geometry and coordinate transformations
  - Camera models and calibration
  - Feature-based and/or direct visual methods
  - Probabilistic state estimation
- Proficiency in C++ and/or Python
- Experience working in Linux environments
- Familiarity with robotics software stacks (e.g., ROS / ROS 2)
- Strong debugging and data analysis skills
Preferred Qualifications
- Experience with specific SLAM frameworks (e.g., ORB-SLAM, VINS, Cartographer, GTSAM)
- Experience with lidar-based perception and mapping
- Familiarity with deep learning–based perception models
- Experience deploying perception systems on real robots
- Knowledge of GPU acceleration (CUDA, OpenCL)
- Experience with dataset curation and annotation
- Publications or research background in robotics or computer vision