Inference Runtime Engineer

Inferact · United States
Remote · Visa Sponsorship

Job Description


Inferact's mission is to grow vLLM as the world's AI inference engine and accelerate AI progress by making inference cheaper and faster. Founded by the creators and core maintainers of vLLM, we sit at the intersection of models and hardware—a position that took years to build.

About The Role

We're looking for an inference runtime engineer to push the boundaries of what's possible in LLM and diffusion model serving. Models grow larger. Architectures shift: mixture-of-experts, multimodal, agentic. Every breakthrough demands innovation in the inference engine itself. You'll work at the core of vLLM, optimizing how models execute across diverse hardware and architectures. Your work will directly impact how the world runs AI inference.

Skills And Qualifications

Minimum qualifications:

  • Bachelor's degree or equivalent experience in computer science, engineering, or similar.
  • Deep understanding of transformer architectures and their variants.
  • Strong programming skills in Python with experience in PyTorch internals.
  • Experience with LLM inference systems (vLLM, TensorRT-LLM, SGLang, TGI).
  • Ability to read and implement model architectures and inference techniques from research papers.
  • Demonstrated ability to contribute performant, maintainable code and to debug complex ML codebases.

Preferred qualifications:

  • Deep understanding of KV-cache memory management, prefix caching, and hybrid model serving.
  • Familiarity with RL frameworks and algorithms for LLMs.
  • Experience with multimodal inference (audio/image/video/text).
  • Contributions to open-source ML or system infrastructure projects.

Bonus points if you have:

  • Implemented core features in vLLM or other inference engine projects.
  • Contributed to vLLM integrations (verl, OpenRLHF, Unsloth, LlamaFactory, etc.).
  • Written widely-shared technical blogs or side projects on vLLM or LLM inference.

Logistics

  • Location: This role is based in San Francisco, California. Remote work within the US will be considered for exceptional candidates.
  • Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $200,000 - $400,000 USD + equity.
  • Visa sponsorship: We sponsor visas on a case-by-case basis.
  • Benefits: Inferact offers generous health, dental, and vision benefits as well as 401(k) company match.
