Senior Data Scientist - Synthetic Data Generation

poolside • United States
Remote

Job Description


About Poolside

In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training to larger and more capable models. They will earn the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.

Poolside exists to be this company: to build a world where AI will be the engine behind economically valuable work and scientific progress. We believe the fastest way to reach AGI lies in accelerating software development itself, by reshaping the developer experience with agentic systems, coding assistants, and the frontier models that power them. We deploy these systems directly into the development environments of security-conscious enterprises.

About Our Team

We were founded in the US and have our home there, but our team is distributed across Europe and North America. We get our fix of in-person collaboration in Paris each month for 3 days, with an open invitation to stay the whole week. For those based in PST, we understand this is a significant travel cadence; we are open to agreeing on a lower cadence and will discuss this in the interview process. We also do longer off-sites once a year.

Our team is a multidisciplinary blend of research, engineering, and business experts. What unites us is our deep care for what we build together. We’re in a race that requires hard work, intellectual curiosity, and obsession; to balance this intensity, we’ve assembled a team of low ego and kind-hearted individuals who have built the special culture Poolside has. By building collaboratively and with intention, we create a compounding effect that moves the entire company forward towards our mission: reaching AGI through intelligence systems built for software development.

About The Role

You’ll be working on our data team, focused on the quality of the datasets delivered for training our models. This is a hands-on role where your #1 mission is to improve the quality of the pretraining datasets by leveraging your previous experience, intuition, and training experiments. The role focuses in particular on generating synthetic data at scale and determining the best strategies for leveraging such data in training large models. You’ll collaborate closely with other teams such as Pretraining, Post-training, Evals, and Product to define high-quality data needs that map to missing model capabilities and downstream use cases.

Staying in sync with the latest research in synthetic data generation and pretraining is key to success in this role. You will lead original research initiatives through short, time-bounded experiments while deploying highly technical engineering solutions into production. Because the volumes of data to process are massive, you'll have a performant distributed data pipeline and a large GPU cluster at your disposal.

Curious about the tech? Take a deep dive into our pretraining data stack in this blog post from our 'Model Factory' series.

YOUR MISSION

To deliver large, high-quality, and diverse synthetic datasets mixing natural language and code modalities to train best-in-class Poolside coding agents.

Responsibilities

  • Follow the latest research related to LLMs and synthetic data generation in particular. Be familiar with the most relevant open-source datasets and models.
  • Design and implement complex pipelines that can generate large amounts of data while maintaining high diversity and optimizing the resources available.
  • Work closely with other teams such as Pretraining, Post-training, Evals, and Product to ensure alignment on the quality of the models delivered.
  • Continuously measure and refine the quality of the datasets being generated while validating the final data strategy through quantitative data ablation experiments.

Skills & Experience

  • Strong machine learning and engineering background
  • Experience with Large Language Models (LLMs), including:
    • Understanding of how LLMs learn
    • Data ablations and scaling laws
    • Post-training techniques
    • Training reasoning and agentic models
  • Experience implementing cost-efficient, complex pipelines to generate synthetic datasets at scale, optimizing for data quality, correctness, diversity, etc.
  • Experience with evals tracking model capabilities (general knowledge, reasoning, math, coding, long-context, etc.)
  • Experience in building trillion-scale pretraining datasets, and familiarity with concepts like data curation, deduplication, data mixing, tokenization, curriculum, impact of data repetition, etc.
  • Excellent programming skills in Python
  • Strong prompt engineering skills
  • Experience working with large-scale GPU clusters and distributed data pipelines
  • Strong obsession with data quality
  • Research experience:
    • Author of scientific papers on topics such as applied deep learning, LLMs, or source code generation (nice to have)
    • Can discuss the latest papers freely and dive into the fine details
    • Is reasonably opinionated
PROCESS

  • Intro call with one of our Founding Engineers
  • Technical Interview(s) with one of our Members of Engineering
  • Team fit call with the People team
  • Final interview with one of our Founding Engineers

Benefits

  • Fully remote work & flexible hours
  • 37 days/year of vacation & holidays
  • Health insurance allowance for you & dependents
  • Company-provided equipment
  • Well-being, always-be-learning & home office allowances
  • Frequent team get-togethers
  • Diverse & inclusive people-first culture

