Senior AI Engineer - LLMOps & MLOps

Motion Recruitment • United States
Remote
AI Summary

We are seeking a Senior AI Engineer to own the production lifecycle of AI initiatives, building automated infrastructure that bridges legacy data systems with modern AWS and Azure AI services. The role involves ensuring LLM applications, RAG pipelines, and traditional ML models are deployable, observable, and scalable in a multi-cloud environment. The ideal candidate will have 6+ years of engineering experience, with a focus on MLOps or LLMOps in a production environment.

Key Highlights

  • Multi-Cloud Pipeline Execution
  • LLMOps Framework Implementation
  • Legacy Data Connectivity

Key Responsibilities

  • Build and maintain automated CI/CD and CT (Continuous Training) pipelines across AWS and Azure
  • Design and build the infrastructure for Retrieval-Augmented Generation (RAG)
  • Build the engineering "pipes" to securely ingest and move data from legacy systems into cloud-native MLOps workflows

Technical Skills Required

  • Python, SQL, PySpark, Docker, Kubernetes, Airflow, Kubeflow, Step Functions, Terraform, CloudFormation

Benefits & Perks

  • 100% remote work
  • Salary range: $138,550-$187,450

Job Description


About the job:

  • Our client, a global leader in technology-enabled risk, benefits, and integrated solutions, is seeking a Senior AI Engineer to take end-to-end ownership of AI initiatives for one of their claims processing domains.
  • This is a high-stakes, execution-focused role within their AI Transformation Office. The AI Innovation group is part of a business-critical initiative to apply AI to make the claims processing lifecycle more efficient. Working with Microsoft, the team used Azure OpenAI Service and Azure AI Document Intelligence to develop an AI tool that supports internal claims adjusters and the needs of customers around the globe. This new AI Agent is a suite of agentic AI insights and capabilities providing real-time guidance to insurance claims professionals at the desk level.
  • They are looking for a "day-one" engineer to own the production lifecycle of their AI initiatives. Your mission is to build the automated infrastructure that bridges their legacy data systems with modern AWS and Azure AI services.
  • You will be responsible for the "Ops" of AI: ensuring that LLM applications, RAG pipelines, and traditional ML models are deployable, observable, and scalable in a multi-cloud environment.



Sr. AI Engineer – LLMOps & MLOps – 100% remote


What you will be doing:



Key Responsibilities:

  • Multi-Cloud Pipeline Execution: Build and maintain automated CI/CD and CT (Continuous Training) pipelines across AWS (SageMaker/Bedrock) and Azure (AI Studio).
  • LLMOps Framework Implementation: Design and build the infrastructure for Retrieval-Augmented Generation (RAG), including vector database management (OpenSearch, Pinecone, or Azure AI Search) and semantic index optimization.
  • Legacy Data Connectivity: Build the engineering "pipes" to securely ingest and move data from legacy systems (Mainframes, SQL Server, on-prem DBs) into cloud-native MLOps workflows.
  • Automated Model Evaluation: Implement systematic frameworks for LLM evaluation (LLM-as-a-judge, ROUGE, METEOR) and traditional ML validation to ensure performance before deployment.
  • Observability & Monitoring: Deploy real-time monitoring for model drift, hallucination detection, latency, and token consumption to manage both quality and cost.
  • Infrastructure as Code (IaC): Manage all AI resources using Terraform or CloudFormation, ensuring the cloud posture is reproducible, secure, and follows a "Privacy by Design" mandate.
  • Advanced Analytics Integration: Partner with teams using platforms like Palantir, Databricks, or Snowflake to ensure a high-fidelity data flow between analytical ontologies and production models.
  • IT & Security Diplomacy: Work directly with central IT and Security to navigate IAM roles, VPC peering, and firewall configurations, clearing the path for rapid transformation.
  • Scalable Inference Engineering: Optimize model serving endpoints for high-throughput and low-latency, utilizing containerization (Docker/Kubernetes) and serverless architectures where appropriate.
  • Prompt & Model Versioning: Establish rigorous version control for prompts (PromptOps), model weights, and data snapshots to ensure 100% auditability and rollback capability.
  • Data Science Engineering: Support the data science lifecycle by automating feature stores, feature engineering pipelines, and the transition of experimental notebooks into hardened production microservices.
  • Security & Compliance Hardening: Implement automated scanning and guardrails (e.g., Bedrock Guardrails or Azure Content Safety) to prevent prompt injection and data leakage.
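The "Automated Model Evaluation" responsibility above can be sketched as a small Python harness that gates deployment on an evaluation metric. This is a minimal illustration only: `rouge1_f1` is a hand-rolled ROUGE-1 unigram overlap (a production pipeline would typically use a metrics library or an LLM-as-a-judge), and the 0.5 threshold is an arbitrary placeholder, not a recommended value.

```python
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """Minimal ROUGE-1 F1: unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection of unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def gate_deployment(samples, threshold=0.5):
    """Block deployment unless the mean ROUGE-1 F1 clears the threshold.

    `samples` is a list of (candidate, reference) string pairs.
    Returns (passed, mean_score).
    """
    scores = [rouge1_f1(cand, ref) for cand, ref in samples]
    mean_score = sum(scores) / len(scores)
    return mean_score >= threshold, mean_score
```

In a CI/CD or CT pipeline, a gate like this would run as a stage after training or prompt changes, failing the build when the score regresses below the agreed baseline.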


Requirements:

  • Proven Execution: 6+ years of engineering experience, with a minimum of 3 years strictly focused on MLOps or LLMOps in a production environment.
  • AWS & Azure Mastery: Deep, hands-on proficiency in both ecosystems. You must be able to configure Bedrock and Azure OpenAI services, including private networking and endpoint security, on day one.
  • Technical Stack: Expert Python, SQL, and PySpark. Extensive experience with containerization (Docker, Kubernetes) and orchestration tools (Airflow, Kubeflow, or Step Functions).
  • LLM Tooling: Professional experience with evaluation and observability frameworks like LangSmith, Arize Phoenix, or WhyLabs.
  • Data Science Flavor: A strong understanding of statistical validation, model evaluation metrics, and the ability to partner with Data Scientists to optimize model performance.
  • Transformation Mindset: The ability to move at the speed of a startup while maintaining the collaborative relationships required to function within a large-scale enterprise IT landscape.
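The "Prompt & Model Versioning" responsibility listed above can be illustrated with a minimal content-addressed registry: each prompt version is keyed by the hash of its content, which gives deterministic version IDs, auditability, and exact rollback. `PromptRegistry` is a hypothetical name for this sketch, not an existing library, and a real system would persist records durably rather than hold them in memory.

```python
import hashlib
import json


class PromptRegistry:
    """Minimal content-addressed prompt store (illustrative sketch).

    Each version ID is the SHA-256 of the canonical JSON record, so
    identical content always maps to the same ID, and any historical
    version can be retrieved exactly for rollback or audit.
    """

    def __init__(self):
        self._store = {}

    def register(self, name: str, template: str, metadata=None) -> str:
        record = {"name": name, "template": template, "metadata": metadata or {}}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode("utf-8")
        ).hexdigest()
        self._store[digest] = record
        return digest  # stable version id for identical content

    def rollback(self, version_id: str) -> str:
        """Return the exact template stored under a given version id."""
        return self._store[version_id]["template"]
```

The same content-hashing pattern extends naturally to model weights and data snapshots, which is what makes "100% auditability and rollback" tractable in practice.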

