Lead Data Scientist - Large Language Models

Dynatrace • Spain
Relocation

Job Description


Your role at Dynatrace

Dynatrace makes it easy and simple to monitor and run the most complex, hyper-scale multicloud systems. Dynatrace is a full-stack, completely automated monitoring solution that can track every user and every transaction across every application.

Our team is looking for a Lead Data Scientist specialized in Large Language Models (LLMs) to design, build, and scale generative AI capabilities for real-world, enterprise-grade use cases. In this hands-on technical leadership role, you’ll own the end-to-end LLM stack, from data/knowledge ingestion and retrieval to prompt and tool-use architecture, evaluation frameworks, safety/guardrails, and cost/latency optimization.

Your Tasks

  • Own the LLM system architecture: Retrieval pipelines, prompt/tool design, routing/fallbacks, safety layers, and telemetry, optimized for quality, latency, and cost.
  • Establish technical standards for RAG: content ingestion, chunking/windowing, hybrid retrieval, reranking, query understanding, and structured output contracts.
  • Define evaluation strategy: Create a rigorous eval suite covering answer correctness, attribution/grounding, toxicity/safety, privacy leakage, determinism, latency, and cost.
  • Formalize LLMOps: Versioning for prompts/datasets/models, experiment governance, prompt and dataset registries, and promotion criteria from dev through staging to prod.
  • Drive tool/agent design: API schema design for function calling, error handling, recovery strategies, self-correction, and guardrail integration.
  • Make build-vs-buy calls: Weigh managed providers vs. open-source/self-hosted, considering performance, cost, IP, privacy, and compliance.
  • Mentoring: Provide deep technical mentorship on prompting, retrieval design, evals, and safe deployment; lead reviews of prompts, pipelines, and evaluation reports.
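To make the evaluation-strategy and LLMOps bullets above concrete, here is a minimal, hedged sketch of an offline eval harness over a golden set. The case fields, metric names, and the stub "model" are illustrative assumptions, not Dynatrace's actual suite; a real pipeline would add safety, latency, and cost metrics alongside correctness and grounding.

```python
from dataclasses import dataclass

@dataclass
class GoldenCase:
    question: str
    expected: str       # reference answer
    sources: list[str]  # passages the answer must be grounded in

def exact_match(pred: str, expected: str) -> bool:
    """Toy correctness check; real suites use graded or LLM-as-judge scoring."""
    return pred.strip().lower() == expected.strip().lower()

def is_grounded(pred: str, sources: list[str]) -> bool:
    """Toy attribution check: the prediction appears in some source passage."""
    return any(pred.lower() in s.lower() for s in sources)

def run_evals(model, cases: list[GoldenCase]) -> dict[str, float]:
    """Run a golden set through `model` and report correctness/grounding rates."""
    n = len(cases)
    correct = grounded = 0
    for case in cases:
        pred = model(case.question)
        correct += exact_match(pred, case.expected)
        grounded += is_grounded(pred, case.sources)
    return {"accuracy": correct / n, "groundedness": grounded / n}

# A stub "model" standing in for the real LLM pipeline under evaluation.
kb = {"what is dynatrace": "a monitoring platform"}
model = lambda q: kb.get(q.lower(), "unknown")
cases = [GoldenCase("What is Dynatrace", "a monitoring platform",
                    ["Dynatrace is a monitoring platform for multicloud."])]
report = run_evals(model, cases)
```

A report like this is what promotion criteria would gate on: a prompt or model version only moves from dev to staging when its golden-set scores clear agreed thresholds.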

Hands-on Data Science

  • Implement end-to-end RAG systems: ingestion → chunking → embeddings → hybrid search → rerank → prompt assembly → tool calls → post-processing.
  • Engineer robust prompts/tools: reusable templates, multi-turn strategies, structured outputs via JSON Schema/Pydantic.
  • Select/tune models: foundation models, embeddings, rerankers; apply LoRA/PEFT or distillation when justified.
  • Build eval corpora: golden sets, KPIs for accuracy, groundedness, deflection, tool success.
  • Implement guardrails: PII/PHI detection, policy prompts, jailbreak resistance, filters, safety scorecards.
  • Productionize: ship resilient services with analytics, alerts (drift, quality, cost), SLOs, etc.
  • Optimize for scale: tokens, latency, and cost via caching, context packing, batching, speculative decoding, and routing by intent.
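As a hedged illustration of the ingestion → chunking → retrieval → prompt-assembly flow described above (not Dynatrace's actual stack), the skeleton can be sketched in plain Python. The chunker, lexical scorer, and prompt template here are hypothetical stand-ins; a production system would use overlap-aware windowing, dense embeddings plus BM25 for hybrid search, and a learned reranker.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    text: str

def chunk(doc_id: str, text: str, size: int = 50) -> list[Chunk]:
    """Naive fixed-size chunking by word count (real systems add overlap)."""
    words = text.split()
    return [Chunk(doc_id, " ".join(words[i:i + size]))
            for i in range(0, len(words), size)]

def keyword_score(query: str, c: Chunk) -> float:
    """Toy lexical scorer standing in for the sparse half of hybrid retrieval."""
    q = set(query.lower().split())
    t = set(c.text.lower().split())
    return len(q & t) / max(len(q), 1)

def retrieve(query: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    """Top-k retrieval; a reranker would reorder these before assembly."""
    return sorted(chunks, key=lambda c: keyword_score(query, c), reverse=True)[:k]

def assemble_prompt(query: str, ctx: list[Chunk]) -> str:
    """Prompt assembly: cited grounding context plus the user question."""
    context = "\n".join(f"[{c.doc_id}] {c.text}" for c in ctx)
    return f"Answer using only the context below.\n\n{context}\n\nQ: {query}"

corpus = chunk("kb-1", "Dynatrace monitors multicloud systems and traces every transaction.")
query = "what does Dynatrace monitor"
prompt = assemble_prompt(query, retrieve(query, corpus))
```

The doc-id tags in the assembled context are what attribution/grounding evals check against: an answer should cite only chunks that were actually retrieved.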

What Will Help You Succeed

Minimum requirements:

  • Advanced CS/AI/ML degree or equivalent, strong ML background.
  • 7+ years DS/ML, 3+ years NLP/LLMs, with shipped production systems.
  • Python and core ML stack: 5+ years of professional Python.
  • Data engineering for unstructured data (3+ years): text processing, parsing, embedding-friendly preprocessing.
  • Proven RAG expertise (1+ years): embeddings, retrieval, reranking, chunking.
  • Evaluation depth (1+ years): offline/online evals for accuracy, grounding, safety.
  • Safety/privacy (1+ years): moderation, PII/PHI redaction, policy enforcement.
  • LLMOps (1+ years): prompt/version management, experiment tracking, monitoring.
  • Excellent communication: explain trade-offs, drive data decisions.

Desirable Experience

  • Serving/scaling: vLLM/TGI, Ray Serve, Triton; GPU/CPU trade-offs.
  • Tuning/distillation: LoRA/PEFT, safety alignment, synthetic data.
  • Domain: observability, support systems, multilingual, regulated environments.
  • Cloud/security: Snowflake/AWS, managed vs self-hosted.
  • Experience with graph-based knowledge bases (e.g., GraphDB, Neo4j) and knowledge graphs to complement RAG systems with entity modeling and relationship-aware retrieval.

Why you will love being a Dynatracer

  • Working models that offer you the flexibility you need, ranging from full remote options to hybrid ones combining home and in-office work
  • A team that thinks outside the box, welcomes unconventional ideas, and pushes boundaries
  • An environment that fosters innovation, enables creative collaboration, and allows you to grow
  • A globally unique and tailor-made career development program recognizing your potential, promoting your strengths, and supporting you in achieving your career goals
  • A truly international mindset with Dynatracers from different countries and cultures all over the world, and English as the corporate language that connects us all
  • A culture that is being shaped by our global team’s diverse personalities, expertise, and backgrounds
  • A relocation team that is eager to help you start your journey to a new country and is always there to support you. If you need to relocate for a position you’re applying for, we offer a relocation allowance and support with your visa, work permit, and accommodation.
