Engineering Manager (AI Inference)
About the Role
We are looking for an Inference Engineering Manager to lead our AI Inference team. This is a unique opportunity to build and scale the infrastructure that powers Perplexity's products and APIs, serving millions of users with state-of-the-art AI capabilities.
Company context
Perplexity is an answer engine: an AI-native search product used by millions of professionals daily.
Day-to-day expectations
Perplexity lists these responsibilities for the Engineering Manager (AI Inference) role.
- Lead and grow a high-performing team of AI inference engineers
- Develop APIs for AI inference used by both internal and external customers
- Architect and scale our inference infrastructure for reliability and efficiency
- Benchmark and eliminate bottlenecks throughout our inference stack
- Drive large sparse/MoE model inference at rack scale, including sharding strategies for massive models
- Push the frontier by building inference systems that support sparse attention, disaggregated prefill/decode serving, and related techniques
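The MoE bullet above can be made concrete with a minimal sketch. Everything here (expert counts, top-k routing, round-robin expert placement) is an illustrative assumption, not Perplexity's actual stack: a router scores each token against every expert, keeps the top-k, and at rack scale the experts themselves are sharded across devices (expert parallelism).

```python
# Minimal top-k expert routing and expert-parallel placement sketch.
# All names, shapes, and constants are illustrative assumptions.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_tokens(router_logits, num_experts, top_k=2):
    """Assign each token to its top_k experts with renormalized weights.

    router_logits: per-token lists of logits, one logit per expert.
    Returns, per token, a list of (expert_id, weight) pairs.
    """
    assignments = []
    for logits in router_logits:
        probs = softmax(logits)
        top = sorted(range(num_experts), key=lambda e: probs[e], reverse=True)[:top_k]
        norm = sum(probs[e] for e in top)
        assignments.append([(e, probs[e] / norm) for e in top])
    return assignments

def shard_experts(num_experts, num_devices):
    """Round-robin experts over devices: the simplest expert-parallel layout."""
    return {e: e % num_devices for e in range(num_experts)}

# Example: route 1 token over 3 experts, place 8 experts on 4 devices.
routes = route_tokens([[1.0, 2.0, 3.0]], num_experts=3, top_k=2)
placement = shard_experts(8, 4)
print(routes[0])   # top-2 experts for the token, weights summing to 1
print(placement)   # expert_id -> device_id
```

In a real serving stack the routing decision also drives all-to-all token exchange between devices and load-balancing losses; this sketch only shows the assignment step.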
What a strong candidate brings
These requirements are extracted from the source listing and normalized for UpJobz readers.
- 5+ years of engineering experience with 2+ years in a technical leadership or management role
- Deep experience with ML systems and inference frameworks (PyTorch, TensorFlow, ONNX, TensorRT, vLLM)
- Strong understanding of LLM architecture: Multi-Head Attention, Multi/Grouped-Query Attention, and common layers
- Experience with inference optimizations: batching, quantization, kernel fusion, FlashAttention
- Familiarity with GPU characteristics, roofline models, and performance analysis
- Experience deploying reliable, distributed, real-time systems at scale
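The roofline bullet in the list above reduces to one formula a candidate should be able to apply from memory: attainable throughput is min(peak FLOP/s, arithmetic intensity x memory bandwidth). A short sketch, with hardware numbers that are illustrative assumptions rather than any specific GPU:

```python
# Roofline model sketch: attainable FLOP/s for a kernel.
# Peak compute and bandwidth below are assumed round numbers, not a real GPU.

def attainable_flops(flops, bytes_moved, peak_flops, peak_bandwidth):
    """min(peak compute, AI * bandwidth), where AI = flops / bytes_moved."""
    intensity = flops / bytes_moved          # FLOPs per byte moved
    return min(peak_flops, intensity * peak_bandwidth)

PEAK = 1000e12   # assumed 1000 TFLOP/s peak compute
BW = 3e12        # assumed 3 TB/s memory bandwidth

# Batch-1 decode GEMV: ~2 FLOPs per weight byte read -> memory bound.
gemv = attainable_flops(flops=2e9, bytes_moved=1e9,
                        peak_flops=PEAK, peak_bandwidth=BW)

# Large batched GEMM: high weight reuse -> compute bound.
gemm = attainable_flops(flops=1e12, bytes_moved=2e9,
                        peak_flops=PEAK, peak_bandwidth=BW)

print(f"GEMV attainable: {gemv / 1e12:.0f} TFLOP/s (memory bound)")
print(f"GEMM attainable: {gemm / 1e12:.0f} TFLOP/s (compute bound)")
```

The two cases show why batching and quantization matter for LLM serving: decode is bandwidth bound at low batch sizes, so reducing bytes moved (quantization) or raising reuse (batching) moves the kernel toward the compute roof.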
Why this listing is more than a copied job post.
Engineering Manager (AI Inference) is framed against UpJobz source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.
United States tech market
United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.
Compensation read
$300K - $485K is visible before the click, so candidates can compare the role against local market expectations before applying.
Work authorization read
Current extracted signal: Open to TN, H-1B, and OPT candidates already in the United States. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.
Location read
On-site roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.
Turn this listing into an application plan.
This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.
Next moves
- Tailor your resume around AI and LLM inference instead of sending a generic application.
- Use the first two bullets of your application to connect your background directly to this on-site Engineering Manager (AI Inference) role in San Francisco; it is most realistic for TN, H-1B, and OPT candidates already in the United States.
- Open the role quickly if it fits and bookmark three similar jobs before you leave the page.
Watchouts
- $300K - $485K is visible, so calibrate your application around the posted range.
- State your TN, H-1B, or OPT eligibility up front as part of your positioning so the recruiter does not have to infer it.
- Show concrete examples of succeeding in on-site environments.
Keywords to match against your background
Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.
Apply through the employer source
Open the source listing from jobs.ashbyhq.com, confirm the role is still active, then apply on the employer or ATS page.
Source: jobs.ashbyhq.com · Source ID: 2a87ccbf-82ef-4fc7-b1ed-4dd18b11baf9 · Confidence: 94/100 · Last checked: May 7, 2026