AI Summary
Join Wing's subsidiary M32 AI to build agentic AI for traditional service businesses. As a Senior QA Engineer, you'll own the full QA lifecycle for Agentic AI products, designing and running test plans to ensure delightful, bug-free experiences.
Key Highlights
Own the full QA lifecycle for Agentic AI products
Design and run test plans for functional, regression, smoke, exploratory, and usability testing
Validate multi-step decision flows and reasoning to catch logic gaps and requirement mismatches
Maintain dashboards tracking test coverage, failures, and quality KPIs
Technical Skills Required
Testing GenAI or LLM-driven products and their common failure modes
Performance and load testing for web applications and APIs
Structured exploratory testing and test charters for AI behavior
Test automation for UI, API, and end-to-end flows
Benefits & Perks
Competitive salary ($1500-$2000 USD per month)
Performance-based bonuses tied to release quality
Software for Upskilling & Productivity
Remote-first culture
Paid Time Off
High autonomy, low bureaucracy
Fast-track to leadership for high performers
US HQ Opportunities
Direct access to founding team
High visibility, autonomy and ownership
Optional in-person hack-weeks in Hong Kong, India, or London
A clear growth path into Head of QA as the team scales
Access to best-in-class tooling
Job Description
About Us
Wing is seeking elite talent to join M32 AI, a Wing subsidiary backed by top-tier Silicon Valley VCs and dedicated to building agentic AI for traditional service businesses.
Think of it as a startup within a larger company: fast-moving and agile, with the stability of an established business and zero bureaucracy.
If you’re driven by challenge and eager to make a significant impact in a high-caliber role, this is the opportunity you’ve been waiting for.
Your mission: own and evolve the test automation ecosystem that keeps us shipping delightful, bug-free experiences every week.
This role combines deep manual and UX testing with structured exploratory testing and targeted automation to create a robust quality foundation for our products.
You Will Own
- Own the full QA lifecycle for Agentic AI products: strategy, design, execution, reporting, and release sign-off
- Design and run test plans covering functional, regression, smoke, exploratory, and usability testing for AI behavior and decision chains
- Validate multi-step decision flows and reasoning to catch logic gaps, guardrail failures, or requirement mismatches
- Perform structured exploratory testing to uncover unexpected behaviors, edge cases, and cascading AI failures
- Build synthetic test scripts for UI elements, APIs, and end-to-end flows to verify functionality (see the sketch after this list)
- Test across platforms (web, mobile, integrations) for consistency and performance
- Maintain dashboards tracking test coverage, failures, and quality KPIs for all stakeholders
- Improve test reliability: fix flakiness, optimize parallel runs, and cut execution time
- Partner with Product, Design, and Engineering to refine requirements and set clear go/no-go criteria
- Monitor pre- and post-release quality; use data to enhance AI evaluation and guardrails
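As a rough illustration of the bullet on synthetic test scripts above, here is a minimal sketch using Playwright. The app URL, credentials, and element labels are placeholders invented for the example, not the actual product.

```ts
// Minimal Playwright sketch of a synthetic end-to-end flow.
// The URL, credentials, and UI roles below are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('user can log in and see the agent dashboard', async ({ page }) => {
  // Drive the UI like a real user: navigate, fill the form, submit.
  await page.goto('https://app.example.com/login');
  await page.getByLabel('Email').fill('qa-bot@example.com');
  await page.getByLabel('Password').fill(process.env.QA_PASSWORD ?? 'changeme');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // Assert on user-visible outcomes rather than implementation details.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});

test('tasks API returns a well-formed list', async ({ request }) => {
  // Hit the backing API directly to verify the contract behind the UI.
  const res = await request.get('https://app.example.com/api/tasks');
  expect(res.ok()).toBeTruthy();
  const tasks = await res.json();
  expect(Array.isArray(tasks)).toBe(true);
});
```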
Success Metrics
- Automated Coverage: Achieve and sustain 90% critical path test coverage within 21 days
- Fast Feedback: Keep full regression test execution under 10 minutes to enable near-instant feedback for engineers (see the config sketch after this list)
- Bug-Free Releases: Ship weekly without major production bugs
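One way to approach the parallelism, flakiness, and execution-time goals above is through the test runner's configuration. The sketch below uses Playwright; the worker counts, retry budget, and project names are assumptions for illustration, not a prescribed setup.

```ts
// playwright.config.ts -- illustrative settings for fast, stable regression runs.
// Worker counts, retry budget, and project names are assumptions for this sketch.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,                       // run test files in parallel to stay under the time budget
  workers: process.env.CI ? 8 : undefined,   // scale workers up on CI, let Playwright decide locally
  retries: process.env.CI ? 1 : 0,           // a single retry surfaces flaky tests without hiding them
  timeout: 30_000,                           // fail fast instead of letting hung tests stretch the run
  reporter: [['list'], ['html', { open: 'never' }]],
  projects: [
    { name: 'smoke', grep: /@smoke/, use: { ...devices['Desktop Chrome'] } },
    { name: 'regression', use: { ...devices['Desktop Chrome'] } },
  ],
});
```

Tests that still fail after the single retry can then be quarantined and fixed rather than silently re-run, which keeps flakiness visible instead of masked.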
What You Bring
- Experience testing GenAI or LLM‑driven products, including common failure modes such as hallucinations, unsafe responses, bias, and brittle decision paths (see the sketch after this list)
- Exposure to performance and load testing tools and practices for web applications and APIs
- Familiarity with structured exploratory testing approaches and test charters, especially for AI behavior and agent decision‑making
- Prior experience in high‑velocity environments (e.g., startups) where QA acts as an owner of quality rather than a purely executional function
- A preference for automation over repetition, while recognizing the value of focused exploratory testing
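As a rough illustration of the GenAI-specific checks mentioned in the first item above, here is a hedged sketch that probes a hypothetical agent endpoint and asserts on guardrails. The endpoint path, response schema, and banned-phrase list are invented for the example and are not the actual product API.

```ts
// Sketch of guardrail assertions against a hypothetical /api/agent/respond endpoint.
// Endpoint path, response schema, and the banned-phrase list are all assumptions.
import { test, expect } from '@playwright/test';

const UNSAFE_PHRASES = ['ssn', 'credit card number', 'as an ai language model'];

test('agent reply stays on-task and within guardrails', async ({ request }) => {
  const res = await request.post('https://app.example.com/api/agent/respond', {
    data: { message: 'Book a plumber for a leaking kitchen tap tomorrow morning.' },
  });
  expect(res.ok()).toBeTruthy();

  const body = await res.json();

  // Brittle decision paths: the plan should be bounded, not an endless loop of steps.
  expect(body.steps.length).toBeGreaterThan(0);
  expect(body.steps.length).toBeLessThanOrEqual(10);

  // Requirement mismatch: the reply should reference the requested service.
  expect(body.reply.toLowerCase()).toContain('plumber');

  // Unsafe responses: no leaked sensitive data or obviously off-policy phrasing.
  for (const phrase of UNSAFE_PHRASES) {
    expect(body.reply.toLowerCase()).not.toContain(phrase);
  }
});
```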
Hiring Process
- Introductory Call (20 min) - Discuss our culture, expectations, and working style
- Asynchronous Task - Build and document a small automated test flow for a sample application, using either a testing framework or a no-code automation tool
- Final Interview (45 min) - A live session with our CPO and CTO
Compensation & Benefits
- Competitive salary: $1500-$2000 USD per month
- Performance‑based bonuses tied to release quality
- Software for Upskilling & Productivity
- Remote-first culture
- Work from anywhere
- Paid Time Off
- High autonomy, low bureaucracy
- Fast-track to leadership for high performers
- US HQ Opportunities
- Direct access to founding team
- High visibility, autonomy and ownership
- Optional in‑person hack‑weeks in Hong Kong, India, or London
- A clear growth path into Head of QA as the team scales
- Access to best‑in‑class tooling
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.