Remote STEM Engineer (United States) - AI/ML Support

rex.zone • United States
Remote
AI Summary

As a Remote STEM Engineer, you will design, implement, and maintain data workflows to support machine learning and large language model evaluation. You will also execute RLHF-related processes, define annotation guidelines, and contribute to content safety labeling. This is a full-time, remote role with a competitive hourly rate of $30–$50/hr (USD).

Key Highlights
Design and maintain data workflows for machine learning and LLM evaluation
Execute RLHF-related processes, including prompt evaluation and QA evaluation
Define and operationalize annotation guidelines for training data quality
Technical Skills Required
Python, pandas, NumPy, SQL
Benefits & Perks
Competitive hourly rate: $30-$50/hr (USD)

Job Description


Remote STEM Jobs in the United States (Full-Time, Remote)

Rex.zone connects STEM professionals to real AI/ML production workflows, including LLM training pipelines, RLHF evaluation, data labeling, QA evaluation, prompt evaluation, named entity recognition, computer vision annotation, and content safety labeling. You will support training data quality, annotation guidelines compliance, and model performance improvement across distributed teams.

About The Role

As a Remote STEM Engineer (United States), you will deliver measurable outcomes across applied engineering and AI/ML support workstreams. Your day-to-day may include building and validating data pipelines, improving training data quality, running statistical analyses, and partnering with ML teams on evaluation harnesses.

Key Responsibilities

  • Design, implement, and maintain data workflows that support machine learning and large language model evaluation.
  • Execute RLHF-related processes including prompt evaluation, preference ranking, and rubric-based QA evaluation.
  • Define annotation guidelines and operationalize compliance checks to improve training data quality and reduce label noise.
  • Perform named entity recognition (NER) and schema validation checks; troubleshoot edge cases and ambiguous labeling.
  • Support computer vision annotation programs (bounding boxes, polygons, keypoints) and audit inter-annotator agreement.
  • Contribute to content safety labeling and policy-driven evaluation for harmful, sensitive, and restricted content.
  • Create metrics and dashboards for model performance improvement (accuracy, precision/recall, calibration, and error taxonomy).
  • Collaborate asynchronously with distributed teams; document decisions, experiments, and release notes.
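To give a flavor of the agreement audits and model-quality metrics above, here is a minimal Python sketch. Cohen's kappa is one standard inter-annotator agreement statistic for two annotators; the safety labels and annotator data below are invented for illustration, not from a real project:

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    categories = np.union1d(labels_a, labels_b)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_observed = np.mean(labels_a == labels_b)
    # Expected chance agreement, from each annotator's marginal label frequencies.
    p_expected = sum(
        np.mean(labels_a == c) * np.mean(labels_b == c) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Illustrative content-safety decisions from two annotators on the same items.
a = ["safe", "safe", "unsafe", "safe", "unsafe", "safe"]
b = ["safe", "unsafe", "unsafe", "safe", "unsafe", "safe"]
print(round(cohens_kappa(a, b), 3))
```

Kappa near 1 indicates agreement well beyond chance; low values on an audit sample typically trigger a review of the annotation guidelines rather than of individual annotators.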

Required Qualifications

  • Bachelor’s degree (or higher) in a STEM field (CS, EE, Math, Stats, Physics, or related).
  • Mid- to senior-level experience delivering engineering or applied data/ML work in production or research-adjacent environments.
  • Proficiency with Python and common data tooling (pandas, NumPy) plus SQL for analysis and reporting.
  • Understanding of ML evaluation concepts: ground truth construction, bias/variance, and dataset shift.
  • Experience with quality assurance practices: sampling plans, audit checklists, and root-cause analysis.
  • Ability to write clear documentation and follow structured rubrics for QA evaluation and labeling tasks.
  • Comfort working fully remote with time-zone coordination across the United States.
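The quality-assurance sampling plans mentioned above often start from something as simple as a stratified random audit sample. The pandas sketch below samples 50% of each annotator's recent labels; the column names and data are assumptions for illustration (in practice the batch would come from SQL):

```python
import pandas as pd

# Hypothetical batch of completed labels awaiting audit.
labels = pd.DataFrame({
    "item_id": range(8),
    "annotator": ["a1", "a2", "a1", "a2", "a1", "a2", "a1", "a2"],
    "label": ["safe", "safe", "unsafe", "safe",
              "unsafe", "safe", "safe", "unsafe"],
})

# Stratify by annotator so every annotator gets audited, sampling 50% per group.
audit_sample = labels.groupby("annotator").sample(frac=0.5, random_state=0)
print(len(audit_sample))  # 2 rows per annotator, 4 rows total
```

Stratifying per annotator (rather than sampling the whole batch) guarantees coverage of every contributor, which matters when label volume is uneven across the team.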

Preferred Qualifications

  • Exposure to NLP and LLM workflows (prompting, prompt evaluation, instruction tuning concepts).
  • Experience with RLHF or human-in-the-loop evaluation pipelines.
  • Computer vision annotation familiarity and tooling experience (CVAT, Labelbox, or similar).
  • Knowledge of content safety labeling standards and policy frameworks.
  • Experience with cloud platforms (AWS/GCP/Azure) and CI/CD or MLOps basics.
  • Hands-on experience improving annotation guidelines compliance and inter-annotator agreement.
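The rubric-based QA evaluation referenced throughout this posting usually reduces to a weighted score over a few reviewed dimensions. The rubric dimensions, weights, and scale below are invented for illustration:

```python
# Hypothetical rubric: each dimension is scored 0-2 by a reviewer,
# then combined as a weighted sum normalized to [0, 1].
RUBRIC_WEIGHTS = {"accuracy": 0.5, "completeness": 0.3, "formatting": 0.2}

def rubric_score(scores, weights=RUBRIC_WEIGHTS, max_per_dim=2):
    """Weighted rubric score in [0, 1]; fails loudly on missing dimensions."""
    missing = set(weights) - set(scores)
    if missing:
        raise ValueError(f"missing rubric dimensions: {sorted(missing)}")
    return sum(weights[d] * scores[d] for d in weights) / max_per_dim

print(round(rubric_score({"accuracy": 2, "completeness": 1, "formatting": 2}), 2))
```

Failing loudly on missing dimensions is deliberate: silently defaulting a skipped dimension to zero (or to full marks) is a common source of label noise in rubric pipelines.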

Compensation

Competitive hourly rate: $30–$50/hr (USD).
