Inferact is seeking an Inference Runtime Engineer to push the boundaries of what's possible in LLM and diffusion model serving. The ideal candidate will have a deep understanding of transformer architectures and their variants, strong programming skills in Python, and experience with LLM inference systems.
Job Description
Inferact's mission is to grow vLLM as the world's AI inference engine and accelerate AI progress by making inference cheaper and faster. Founded by the creators and core maintainers of vLLM, we sit at the intersection of models and hardware—a position that took years to build.
About The Role
We're looking for an inference runtime engineer to push the boundaries of what's possible in LLM and diffusion model serving. Models grow larger. Architectures shift: mixture-of-experts, multimodal, agentic. Every breakthrough demands innovation in the inference engine itself. You'll work at the core of vLLM, optimizing how models execute across diverse hardware and architectures. Your work will directly impact how the world runs AI inference.
Skills And Qualifications
Minimum qualifications:
- Bachelor's degree or equivalent experience in computer science, engineering, or similar.
- Deep understanding of transformer architectures and their variants.
- Strong programming skills in Python with experience in PyTorch internals.
- Experience with LLM inference systems (vLLM, TensorRT-LLM, SGLang, TGI).
- Ability to read and implement model architectures and inference techniques from research papers.
- Demonstrated ability to contribute performant, maintainable code and to debug complex ML codebases.
Preferred qualifications:
- Deep understanding of KV-cache memory management, prefix caching, and hybrid model serving.
- Familiarity with RL frameworks and algorithms for LLMs.
- Experience with multimodal inference (audio/image/video/text).
- Contributions to open-source ML or system infrastructure projects.
- Implemented core features in vLLM or other inference engine projects.
- Contributed to vLLM integrations (verl, OpenRLHF, Unsloth, LlamaFactory, etc.).
- Written widely-shared technical blogs or side projects on vLLM or LLM inference.
- Location: This role is based in San Francisco, California. We will consider US-based remote work for exceptional candidates.
- Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $200,000 - $400,000 USD + equity.
- Visa sponsorship: We sponsor visas on a case-by-case basis.
- Benefits: Inferact offers generous health, dental, and vision benefits as well as a 401(k) company match.