Build the models that turn vision and language into real-world action. Your research will ship onto physical robots and get tested in messy, adversarial environments, not just in a notebook.
Why This Role Exists
A well-funded, early-stage robotics and ML team is building a foundation model for embodied autonomy in mission-critical settings. They are past "cool demo" territory and are now scaling training, evaluation, and deployment so heterogeneous robot fleets can perceive, plan, and coordinate under real operational constraints. Success in 12–18 months looks like robust policies and world models running reliably on hardware, sim-to-real transfer that holds up in the field, and a research pipeline that consistently produces deployable improvements.
What You Will Actually Be Doing
- You will lead research that improves embodied agent capability, spanning reinforcement learning, imitation learning, and/or vision-language-action modelling.
- You will build and benchmark large-scale models for perception, decision-making, and control across simulation and real robots.
- You will push sim-to-real transfer and continual learning so behaviours generalize across domains, terrains, sensors, and platforms.
- You will develop memory and long-horizon autonomy primitives, enabling agents to operate coherently over extended missions and shifting objectives.
- You will partner tightly with robotics and systems engineers to integrate research into production-grade stacks, including evaluation, deployment, and observability.
- You will support field tests and data collection to close the loop between research insight and real performance.
This Will Suit You If
- You like research that turns into working systems, fast.
- You enjoy unstructured problems where the right answer is not in a paper, and constraints are real.
- You are comfortable in a high-ownership startup environment: context-switching, moving quickly, and setting standards as you go.
- You care about rigour and repeatability, not heroics: clean experiments, strong baselines, honest conclusions.
- You want your work to matter in a mission-critical domain, with clear stakes and rapid feedback.
What You Need To Have
- PhD in CS, Robotics, or ML (or an equivalent industry research track record) with a strong publication history.
- Deep expertise in one or more of: RL, imitation learning, vision-language models, sim-to-real, world modelling, agentic AI, embodied autonomy.
- Hands-on experience building and evaluating embodied agents across simulation and physical systems.
- Strong Python and proficiency in PyTorch, JAX, or TensorFlow.
- Proven ability to take an idea from concept to a deployed prototype, including evaluation design and iteration.
Optional, But Nice If You Have
- Multi-agent coordination and swarm behaviours.
- ROS, real-time control loops, embedded or edge inference constraints.
- Experience integrating ML with physical robotic platforms and high-dimensional sensor streams.
- Background shipping in aerospace, defence, or other high-reliability environments.
Who You Will Be Joining
A small, senior team in the South Bay building an embodied autonomy platform from the ground up. They are early-stage, heavily backed, and operating at the intersection of cutting-edge ML research and real-world robotics, with direct access to leadership and a culture that prioritises pace, clarity, and shipping.
Location
South Bay, CA. This is an on-site role due to hardware access and frequent integration work. Relocation support is available.