Research Engineering Manager - Model Training
Perplexity
Location
San Francisco
Employment Type
Full time
Department
AI
Compensation
$300K – $470K • Offers Equity
U.S. Benefits
Full-time U.S. employees enjoy a comprehensive benefits program including equity, health, dental, vision, retirement, fitness, commuter and dependent care accounts, and more.
International Benefits
Full-time employees outside the U.S. enjoy a comprehensive benefits program tailored to their region of residence.
USD salary ranges apply only to U.S.-based positions. International salaries are set based on the local market. Final offer amounts are determined by multiple factors, including experience and expertise, and may vary from the amounts listed above.
Perplexity is seeking a Research Engineering Manager to lead a team of all-star AI researchers and engineers responsible for developing the models that drive our products. Our team has developed some of the most advanced models for agentic research, query understanding, and other domains that demand accuracy and depth. As we expand our user base and portfolio of product surfaces, our in-house models are increasingly critical to providing a premium, high-taste experience for the world’s most sophisticated users.
You will dive into our rich datasets of conversational and agentic queries, leveraging cutting‑edge training techniques to scale AI model performance. Through hands-on technical and organizational leadership, you will empower your team to develop SotA models for the use cases that matter most to our business and our users.
Responsibilities
Lead a team of researchers and engineers focused on training SotA models for Perplexity-relevant use cases, leveraging the latest supervised and reinforcement learning techniques.
Drive research and engineering efforts to develop production models through advanced model training and alignment techniques, including RL, SFT, and other approaches.
Become deeply familiar with the team’s technical stack, leading from the front through hands-on technical contributions.
Own the data, training, and evaluation pipelines required to train and continuously improve our LLMs.
Design and iterate on model training and fine-tuning algorithms (e.g., preference‑based methods, reinforcement learning from human or AI feedback) with an approach that balances scientific rigor and iteration velocity.
Design evaluations and improve the production model training pipeline to reliably deliver models that lie on the Pareto frontier of speed and quality.
Work closely with engineering teams to integrate in-house models into our product and rapidly iterate based on real‑world usage.
Manage day‑to‑day execution, project planning, and prioritization for the model training team to hit ambitious quality and performance goals.
Qualifications
Proven experience with large-scale LLMs and deep learning systems.
Strong Python and PyTorch skills; versatility across languages and frameworks is a plus.
Experience leading or managing research or engineering teams working on large-scale AI model development, including driving complex projects from idea to production.
Self‑starter with a willingness to take ownership of tasks and navigate ambiguity in a fast‑moving environment.
Passion for tackling challenging problems in AI model quality, speed, safety, and reliability.
10+ years of technical experience, with at least 2 of those years as a manager and at least 4 of those years working on large-scale AI model development.
Nice-to-have
PhD in Machine Learning or related areas.
Experience training very large Transformer-based models with techniques such as SFT, DPO, GRPO, RLHF‑style methods, or related preference‑based optimization approaches.
Prior experience designing evaluations and production training pipelines for large‑scale models in a high‑growth environment.
