Research Engineer, Evaluations
AssemblyAI
About AssemblyAI
AssemblyAI builds the best-in-class Speech AI models powering the next generation of voice applications. Our models serve 600M+ inference calls monthly, process 1M+ hours of audio daily, and power 2 billion+ end-user experiences—from voice agents and meeting assistants to contact centers and medical scribes. Companies like Zoom, Granola, Fireflies, Cluely, and Calabrio rely on AssemblyAI to ship production-ready voice AI.
We're at an inflection point in Speech AI. We released Universal-Streaming in mid-2025, and it has quickly earned its place as the model offering the best accuracy-latency-cost tradeoff on the market. Our research team drives these advances and ships with relentless velocity. Since releasing Universal-Streaming, we've already launched keyterms prompting and multilingual support—with more significant improvements on the roadmap.
We've raised $115M+ from Accel, Insight Partners, Y Combinator's AI Fund, Patrick and John Collison, Nat Friedman, and Daniel Gross. We're a remote team building one of the next great AI companies—and we're looking for people who will shape its future.
About the Role
We are looking for a Senior Research Engineer to join our streaming speech-to-text research team—a new role that sits at the intersection of research, product, and engineering.
You'll be the person who makes sure we're measuring the right things, benchmarking against the right competitors, building and extending evaluation tooling, and translating customer pain points into quantifiable research targets. You'll own the evaluation infrastructure that tells us whether our models are actually better—and by how much.
This role is ideal for someone with a Machine Learning / Research Engineering background who is obsessed with understanding what customers actually need, and who gets satisfaction from turning vague feedback ("the model feels slow") into concrete metrics that the whole team can align around. You're comfortable talking to customer-facing teams one hour, designing a new evaluation framework the next, and then convincing researchers why it matters.
You'll also operate at the frontier of the voice agent ecosystem. Our streaming product integrates with orchestration frameworks like LiveKit, Pipecat, and Vapi, and you'll need to understand how ASR fits into the broader voice agent stack—alongside VAD, turn detection, TTS, and LLM components. As this stack evolves rapidly, you'll help ensure our evaluations reflect real-world integration scenarios.
You'll work directly with our research and engineering teams and become the connective tissue between what customers need and what researchers build. If you're entrepreneurial, rigorous about measurement, and want to have an outsized impact on the success of a rapidly growing product, this is your role.
What You'll Do
Evaluation & Benchmarking
- Own end-to-end and integration-level model evaluation across accuracy, latency, and feature-specific metrics (e.g., turn detection latency, endpointing accuracy)
- Build and maintain competitive benchmarking pipelines against other providers in the market
- Design and run systematic experiments to measure the impact of model changes
Dataset & Test Set Management
- Onboard, curate, and maintain evaluation datasets—both public benchmarks and internal test sets
- Create evaluation subsets that stress-test specific capabilities and edge cases
Metric Development & Research Translation
- Define evaluation metrics that capture real-world performance
- Translate qualitative customer feedback into quantifiable evaluation criteria
- Work with customer-facing teams to understand pain points and convert them into research priorities
Research Velocity
- Reduce friction for researchers by maintaining clean evaluation pipelines and clear documentation
- Identify evaluation gaps proactively and propose solutions
- Move fast—iterate on benchmarking approaches weekly, not monthly
What You'll Need
- ML fundamentals: You understand how ML models are trained and evaluated well enough to interpret results and debug issues. You don't need to train them from scratch.
- Strong Python skills: You can write clean evaluation scripts, work with data pipelines, and are comfortable with SQL and cloud infrastructure.
- Metric intuition: You understand what makes a good evaluation metric, when to use relative vs. absolute improvements, and how to ensure statistical rigor.
- Voice agent stack familiarity: You understand how the components of a voice agent system interact—VAD, ASR, turn detection, LLM, TTS—and can reason about how changes in one affect the others.
- Tinkerer mentality: You'd rather ship something rough and iterate than spend weeks perfecting it. You're energized by variety.
- Communication skills: You can explain technical results to researchers, summarize findings for leadership, and translate customer feedback into requirements.
- Ownership mindset: You don't wait to be told what to evaluate. You see gaps and fill them.
- Time zone overlap: You can work at least 3-4 hours overlapping with the Eastern US time zone
Nice to Have
- Experience with speech/audio ML or real-time systems
- Hands-on experience with voice agent orchestrators (LiveKit, Pipecat, Vapi, or similar)
- Familiarity with standard ML evaluation practices and benchmarks
- Experience working with customer-facing or product teams
- Background in QA, data science, or applied ML roles
What Success Looks Like
First 30 days: You've onboarded to our evaluation infrastructure, run your first competitive benchmark, and identified one gap in how we measure model quality.
First 90 days: You own our competitive benchmarking process. Researchers come to you to understand how their changes affect real-world metrics. You've proposed a new metric for a capability we weren't measuring well.
First 6 months: You're the go-to person for "how do we know if this is actually better?" You've built relationships with customer-facing teams. Your work directly influences which research directions we prioritize. You maintain benchmarks that reflect both standalone ASR quality and integrated voice agent performance.
Pay Transparency
AssemblyAI strives to recruit and retain exceptional talent from diverse backgrounds while ensuring pay equity for our team. Our salary ranges are based on paying competitively for our size, stage, and industry, and are one part of many compensation, benefit, and other reward opportunities we provide.
There are many factors that go into salary determinations, including relevant experience, skill level, qualifications assessed during the interview process, and maintaining internal equity with peers on the team. The range shared below is a general expectation for the function as posted, but we are also open to considering candidates who may be more or less experienced than outlined in the job description. In that case, we will communicate any updates to the expected salary range.
The provided range is the expected salary for candidates in the U.S. For candidates outside the U.S., the range may differ, and any changes will be communicated throughout the interview process.
Salary range: $210,000 - $260,000
Working at AssemblyAI
We are a small but mighty group of startup veterans and experienced AI researchers with over 20 years of expertise in Machine Learning, Speech Recognition, and NLP. As a fully remote team, we’re looking for teammates who are ambitious, curious, and lead with integrity. We’re still in the early days of AI and of AssemblyAI’s journey, and we're looking for people who won’t just fit in, but will help us define and build our company culture.
We’re committed to creating a space where our employees can bring their full selves to work and have equal opportunity to succeed. No matter your race, gender identity or expression, sexual orientation, religion, origin, ability, age, or veteran status: if joining this mission speaks to you, we encourage you to apply!
Using AI to Interview
If you’re selected for an interview, please review this resource to better understand how AssemblyAI approaches the use of AI in our interview process.
GDPR Privacy Notice
Candidates from the EU should review this job applicant privacy notice before applying.
Keep Exploring AssemblyAI
Learn more about AI models for speech recognition
Speech-to-Text | Speech Understanding | LLM Gateway | Try the Playground
