AI Product QA Lead

ContentJet Argentina
Remote

Job Description


ABOUT CONTENTJET


ContentJet is a Canadian performance creative and UGC company helping global brands produce high-performing short-form video ads at scale.

We are transforming from a service-led creative agency into a tech-enabled, AI-native creative operations company.

We already have an internal AI platform and automation ecosystem that supports creative strategy, project workflows, creator operations, content production, and client delivery.

Our systems use tools such as Claude, custom AI agents, n8n, VAPI, Monday.com, Slack, APIs, and internal automation workflows.

We are now hiring an AI Product QA Lead to help us test, improve, and safely deploy these AI systems across the company.


THE MISSION


  • Your mission is to make sure every AI workflow we deploy is reliable, accurate, useful, and ready for the team to use.
  • You will work between our operations team, creative team, CEO, and engineering team.
  • You will not be expected to build complex AI architectures or write production code, but you must be technical enough to understand how AI workflows, APIs, webhooks, prompts, automations, and internal tools work.
  • You will own the QA process for our AI agents and automations.
  • That means testing them, finding edge cases, documenting bugs, checking output quality, creating QA processes, and making sure the team can confidently use the systems once they are launched.
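
As an illustration of the kind of output-quality checks this role owns, here is a minimal sketch of an automated spot-check on an AI-generated message. The rules (banned phrases, length limit, placeholder detection) are hypothetical examples, not ContentJet's actual criteria:

```python
# Hypothetical sketch: deterministic spot-checks on an AI-generated message.
# All rules below are illustrative, not ContentJet's real QA criteria.

BANNED_PHRASES = ["as an ai language model", "lorem ipsum"]

def check_output(message: str, max_len: int = 600) -> list[str]:
    """Return a list of QA failures found in an AI-generated message."""
    failures = []
    lowered = message.lower()
    if not message.strip():
        failures.append("empty output")
    if len(message) > max_len:
        failures.append(f"too long ({len(message)} > {max_len} chars)")
    if "{" in message or "}" in message:
        failures.append("unresolved template placeholder")
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            failures.append(f"banned phrase: {phrase!r}")
    return failures
```

A check like `check_output("Hi {name}!")` would flag the leaked template placeholder before the message ever reaches a creator.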


WHAT YOU WILL DO


  • You will audit our internal workflows and document how they currently work.
  • You will map manual processes across creator sourcing, creator outreach, project coordination, content production, client delivery, and internal team communication.
  • You will turn those workflows into clear requirements for the engineering team.
  • You will test AI agents and automations before they are deployed to the team.
  • You will test workflows across tools such as Slack, Monday.com, chat interfaces, n8n, VAPI, and internal dashboards.
  • You will check whether the AI produces accurate, useful, and brand-safe outputs.
  • You will test for hallucinations, incorrect logic, missing data, weak prompts, broken workflows, bad routing, and confusing user experiences.
  • You will write clear bug reports and product feedback for the engineering team.
  • You will create QA checklists, testing procedures, and acceptance criteria for new AI workflows.
  • You will help prioritize issues based on business impact.
  • You will write SOPs and internal guides once workflows are ready to be used by the team.
  • You will train team members on how to use new AI agents and automations properly.
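
To make the edge-case testing above concrete, here is a hypothetical sketch of a small edge-case table for an outreach-drafting agent. The `draft_outreach` stub stands in for the real agent call; the cases and pass criteria are illustrative only:

```python
# Hypothetical edge-case suite for an outreach-drafting agent.
# draft_outreach is a stub standing in for the real agent under test.

def draft_outreach(creator_name: str) -> str:
    # Stub: the real system would call an AI agent here.
    name = creator_name.strip() or "there"
    return f"Hi {name}, we'd love to work with you on a short-form ad."

EDGE_CASES = [
    ("Ana", True),        # happy path
    ("", True),           # missing name: agent should degrade gracefully
    ("  José  ", True),   # whitespace and accented characters
    ("A" * 500, False),   # absurdly long input: expect the checks to flag it
]

def run_suite() -> list[str]:
    """Return the edge cases whose outcome did not match expectations."""
    mismatches = []
    for name, expect_ok in EDGE_CASES:
        msg = draft_outreach(name)
        ok = bool(msg) and len(msg) < 400 and "{" not in msg
        if ok != expect_ok:
            mismatches.append(name[:20])
    return mismatches
```

An empty result from `run_suite()` means every edge case behaved as expected; anything else becomes a bug report.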


EXAMPLE OF WORKFLOWS YOU MAY TEST


  • An AI agent that drafts personalized creator outreach messages.
  • An automation that moves project information between Monday.com, Slack, and internal dashboards.
  • A VAPI voice agent that screens or follows up with creators.
  • An AI workflow that generates creator briefs.
  • An AI assistant that summarizes project status and identifies blockers.
  • A chatbot that helps the team ask questions about a specific client or project.
  • An AI system that analyzes creative briefs, scripts, competitor ads, or performance data.
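
For the second example above, a data-moving automation, a core acceptance criterion is that no field is dropped in transit. A minimal sketch of that check (field names and message format are hypothetical, not Monday.com's or Slack's real schemas):

```python
# Hypothetical sketch: verifying field mapping in a sync automation.
# Field names and formatting are illustrative, not a real tool schema.

def to_slack_message(item: dict) -> str:
    # Stand-in for the automation's transform step.
    return f"*{item['name']}* is now `{item['status']}` (owner: {item['owner']})"

def missing_fields(item: dict) -> list[str]:
    """Acceptance criterion: every source field must survive the hop."""
    msg = to_slack_message(item)
    return [value for value in item.values() if value not in msg]
```

If `missing_fields(...)` returns anything, data was silently lost between tools, which is exactly the class of bug this role exists to catch before deployment.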


WHAT WE ARE LOOKING FOR


  • We are looking for someone with strong QA experience and strong product judgment.
  • You should be comfortable testing software, automations, AI workflows, and internal tools.
  • You should be able to understand how a workflow is supposed to work, test where it can break, and communicate clearly with engineers.
  • You do not need to be a software engineer, but you should understand technical concepts such as APIs, webhooks, logs, prompts, integrations, test cases, and automation logic.
  • You should be organized, detail-oriented, and comfortable managing multiple workflows at the same time.
  • You should be able to write clear documentation in English.
  • You should be comfortable working remotely with an international team.


MUST HAVE REQUIREMENTS


  • Strong experience in software QA, product QA, automation testing, or workflow testing.
  • Experience writing clear bug reports, test cases, QA checklists, or acceptance criteria.
  • Strong technical literacy around APIs, webhooks, automations, SaaS tools, or internal platforms.
  • Ability to test AI-generated outputs for accuracy, consistency, hallucinations, tone, and usefulness.
  • Excellent written English.
  • Strong organization and communication skills.
  • Ability to work independently and manage multiple QA cycles at once.
  • Comfortable working with engineers, operators, and non-technical team members.
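
"Clear bug reports" in practice usually means a consistent structure so engineering gets the same fields every time. A hypothetical sketch of such a structure (field names are illustrative, not an existing template):

```python
# Hypothetical structured bug report, so engineering always receives
# the same fields. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class BugReport:
    title: str
    workflow: str                    # e.g. "creator outreach agent"
    steps_to_reproduce: list[str]
    expected: str
    actual: str
    severity: str = "medium"         # low / medium / high, by business impact
    tags: list[str] = field(default_factory=list)

    def summary(self) -> str:
        return f"[{self.severity.upper()}] {self.workflow}: {self.title}"
```

A one-line `summary()` like `[HIGH] creator outreach agent: Agent invents a past collaboration` makes triage and prioritization by business impact much faster.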


NICE TO HAVE EXPERIENCE


  • Experience testing LLMs, chatbots, AI agents, or automation workflows.
  • Experience with n8n, Zapier, Make, VAPI, Monday.com, Slack, Jira, Linear, or similar tools.
  • Experience working in an AI startup, SaaS company, marketing agency, creative agency, or operations-heavy business.
  • Experience with Playwright, Selenium, Postman, or other QA/testing tools.
  • Experience with UGC, creator operations, content production, or performance marketing.
  • Experience creating SOPs or training internal teams.


WHAT SUCCESS LOOKS LIKE


  1. In your first 30 days, you will understand our core operations, review the existing AI workflows, and create a clear QA process for testing new AI agents and automations.
  2. In your first 60 days, you will be actively testing new AI workflows, documenting bugs, identifying edge cases, and helping the engineering team improve reliability before deployment.
  3. In your first 90 days, you will own the QA process for our AI systems, create repeatable testing procedures, and help the team confidently adopt new AI agents and automations.


WHO THIS ROLE IS PERFECT FOR


  • This role is a strong fit for someone who has worked in QA, product testing, or automation testing and wants to move deeper into AI products.
  • You may be a QA Lead, SDET, Product QA Specialist, QA Manager, Automation QA Analyst, or Technical Product Analyst.
  • You are not just someone who checks if buttons work.
  • You are someone who thinks:

“Does this workflow actually solve the business problem?”

“Where will this AI agent fail?”

“Is this output accurate and useful?”

“What edge case could break this automation?”

“What does the engineer need to fix before this is safe to deploy?”


COMPENSATION


Compensation will depend on experience.

Expected range:

USD $3,000–$4,500/month


For exceptional senior candidates with strong AI, automation, or QA leadership experience, we may consider a higher range.


ENGAGEMENT


This is a remote role.


Preferred location: Brazil or Latin America.


You should be able to overlap at least 4 working hours per day with our team.


Fluent written English is required.
