Join OpenAI's Hardware organization to develop silicon and system-level solutions for advanced AI workloads. As a software engineer on the Scaling team, you'll design and build high-performance runtimes, kernels, and compiler infrastructure. Work at the intersection of systems programming, ML infrastructure, and high-performance computing.
About The Team
OpenAI's Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI's supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.
About The Role
As a software engineer on the Scaling team, you'll help build and optimize the low-level stack that orchestrates computation and data movement across OpenAI's supercomputing clusters. Your work will involve designing high-performance runtimes, building custom kernels, contributing to compiler infrastructure, and developing scalable simulation systems to validate and optimize distributed training workloads.
You will work at the intersection of systems programming, ML infrastructure, and high-performance computing, helping to create both ergonomic developer APIs and highly efficient runtime systems. This means balancing ease of use and introspection with the need for stability and performance on our evolving hardware fleet.
This role is based in San Francisco, CA, with a hybrid work model (3 days/week in-office). Relocation assistance is available.
In This Role, You Will
- Design and build APIs and runtime components to orchestrate computation and data movement across heterogeneous ML workloads.
- Contribute to compiler infrastructure, including the development of optimizations and compiler passes to support evolving hardware.
- Engineer and optimize compute and data kernels, ensuring correctness, high performance, and portability across simulation and production environments.
- Profile and optimize system bottlenecks, especially around I/O, memory hierarchy, and interconnects, at both local and distributed scales.
- Develop simulation infrastructure to validate runtime behaviors, test training stack changes, and support early-stage hardware and system development.
- Rapidly deploy runtime and compiler updates to new supercomputing builds in close collaboration with hardware and research teams.
- Work across a diverse stack, primarily using Rust and Python, with opportunities to influence architecture decisions across the training framework.
You Might Thrive in This Role If You
- Have a deep curiosity for how large-scale systems work and enjoy making them faster, simpler, and more reliable.
- Are proficient in systems programming (e.g., Rust, C++) and scripting languages like Python.
- Have experience in one or more of the following areas: compiler development, kernel authoring, accelerator programming, runtime systems, distributed systems, or high-performance simulation.
- Are excited to work in a fast-paced, highly collaborative environment with evolving hardware and ML system demands.
- Value engineering excellence, technical leadership, and thoughtful system design.