Mercor seeks a skilled professional to evaluate LLM-generated responses for AI research labs. Responsibilities include accuracy assessment, fact-checking, and code validation. Requires a CS degree, 5+ years of software engineering experience, and Ruby expertise.
Job Description
About The Job
Mercor connects elite creative and technical talent with leading AI research labs. Headquartered in San Francisco, our investors include Benchmark, General Catalyst, Peter Thiel, Adam D'Angelo, Larry Summers, and Jack Dorsey.
Position: [Role Title]
Type: Full-time or Part-time Contract Work
Compensation: $60–$100/hour
Location: Remote
Commitment: 20+ hours/week
Role Responsibilities
- Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness.
- Conduct fact-checking using trusted public sources and authoritative references.
- Execute code and validate outputs using appropriate tools to ensure accuracy.
- Annotate model responses by identifying strengths, areas of improvement, and factual or conceptual inaccuracies.
- Assess code quality, readability, algorithmic soundness, and explanation quality.
- Ensure model responses align with expected conversational behavior and system guidelines.
- Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines.
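The "execute code and validate outputs" step above can be pictured as a small test harness: run a model-generated snippet against known input/output pairs and record which cases pass. This is a minimal Ruby sketch for illustration only; the `evaluate_solution` helper and the sample cases are hypothetical, not part of Mercor's actual tooling.

```ruby
# Hypothetical harness: run a model-generated solution (a callable)
# against expected input/output pairs and record pass/fail per case.
def evaluate_solution(solution, cases)
  cases.map do |input, expected|
    actual = solution.call(input)
    { input: input, expected: expected, actual: actual, pass: actual == expected }
  end
end

# Example: a model claims this lambda reverses the word order of a sentence.
model_solution = ->(s) { s.split.reverse.join(" ") }

results = evaluate_solution(model_solution, [
  ["hello world", "world hello"],
  ["a b c", "c b a"]
])

results.each do |r|
  puts "input=#{r[:input].inspect} pass=#{r[:pass]}"  # both cases pass here
end
```

In practice an evaluator would pair a run like this with qualitative notes on readability, algorithmic soundness, and the quality of the model's explanation.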
Must-Have
- BS, MS, or PhD in Computer Science or a closely related field
- 5+ years of real-world experience in software engineering or related technical roles
- Expertise in Ruby
- Ability to independently solve HackerRank or LeetCode Medium- and Hard-level problems
- Experience contributing to well-known open-source projects, including merged pull requests
- Significant experience using LLMs while coding, with an understanding of their strengths and failure modes
- Strong attention to detail and comfort evaluating complex technical reasoning
Nice to Have
- Prior experience with RLHF, model evaluation, or data annotation work
- Track record in competitive programming
- Experience reviewing code in production environments
- Familiarity with multiple programming paradigms or ecosystems
- Experience explaining complex technical concepts to non-expert audiences
How to Apply
- Upload resume
- AI interview based on your resume
- Submit form
- For details about the interview process and platform information, please check: https://talent.docs.mercor.com/welcome/welcome
- For any help or support, reach out to: support@mercor.com