87 remote roles added today · 376 active tech employers · 🇺🇸 🇨🇦 🇲🇽 Tri-border network · 749 metros covered · 12 database updates this hour · TN visa filter live
San Francisco, CA

Member Of Technical Staff (AI Infrastructure Engineer)

We are looking for an AI infrastructure engineer to join our growing team. We work with Kubernetes, Slurm, Python, C++, and PyTorch, primarily on AWS.

Company
Perplexity
Compensation
$220K - $405K
Schedule
Full-Time
Role overview

What this role actually needs.

We are looking for an AI infrastructure engineer to join our growing team. We work with Kubernetes, Slurm, Python, C++, and PyTorch, primarily on AWS. The full responsibilities and requirements are broken out in the sections below.

Company context: Perplexity is an answer engine, an AI-native search product used by millions of professionals daily.

Responsibilities

Day-to-day expectations

Perplexity lists these responsibilities for the Member Of Technical Staff (AI Infrastructure Engineer) role.

  • Design, deploy, and maintain scalable Kubernetes clusters for AI model inference and training workloads
  • Manage and optimize Slurm-based HPC environments for distributed training of large language models
  • Develop robust APIs and orchestration systems for both training pipelines and inference services
  • Implement resource scheduling and job management systems across heterogeneous compute environments
  • Benchmark system performance, diagnose bottlenecks, and implement improvements across both training and inference infrastructure
  • Build monitoring, alerting, and observability solutions tailored to ML workloads running on Kubernetes and Slurm
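The Slurm and distributed-training bullets above usually meet in a launcher that reads Slurm's per-task environment variables to derive each process's rank and world size before handing off to a framework like PyTorch. A minimal sketch of that mapping, assuming standard Slurm variable names (the `DistConfig` helper itself is hypothetical, not part of this role's stack):

```python
import os
from dataclasses import dataclass

@dataclass
class DistConfig:
    rank: int        # global rank of this process across all nodes
    world_size: int  # total number of processes in the job
    local_rank: int  # rank within this node, typically used to pick a GPU

def dist_config_from_slurm(env=None) -> DistConfig:
    """Derive a distributed-training config from Slurm's per-task env vars."""
    env = os.environ if env is None else env
    return DistConfig(
        rank=int(env["SLURM_PROCID"]),
        world_size=int(env["SLURM_NTASKS"]),
        local_rank=int(env["SLURM_LOCALID"]),
    )
```

A real launcher would feed these values into something like `torch.distributed.init_process_group` and bind the process to GPU `local_rank`; the sketch only shows the env-var plumbing the Slurm bullets imply.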
Requirements

What a strong candidate brings

These requirements are extracted from the source listing and normalized for UpJobz readers.

  • Strong expertise in Kubernetes administration, including custom resource definitions, operators, and cluster management
  • Hands-on experience with Slurm workload management, including job scheduling, resource allocation, and cluster optimization
  • Experience with deploying and managing distributed training systems at scale
  • Deep understanding of container orchestration and distributed systems architecture
  • High-level familiarity with LLM architecture and training processes (Multi-Head Attention, Multi-/Grouped-Query Attention, distributed training strategies)
  • Experience managing GPU clusters and optimizing compute resource utilization
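The Multi-Head vs. Grouped-Query Attention familiarity requested above often surfaces in interviews as back-of-the-envelope KV-cache sizing: GQA shrinks the inference cache by the ratio of query heads to KV heads. A hedged sketch of that arithmetic (the 32-layer model shape below is illustrative, not Perplexity's):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    """Total bytes for cached keys + values across all layers.

    The leading factor of 2 counts K and V; bytes_per_elem=2 assumes fp16.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Illustrative 32-layer model with 32 query heads of dim 128, fp16 cache.
mha = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128, seq_len=4096, batch=1)
gqa = kv_cache_bytes(n_layers=32, n_kv_heads=8,  head_dim=128, seq_len=4096, batch=1)
print(mha // gqa)  # -> 4: GQA with 8 KV heads cuts the cache 4x
```

With these assumed dimensions the MHA cache is 2 GiB per sequence, and grouping 32 query heads onto 8 KV heads drops it to 512 MiB, which is exactly the kind of utilization trade-off the GPU-cluster bullet points at.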
UpJobz market context

Why this listing is more than a copied job post.

Member Of Technical Staff (AI Infrastructure Engineer) is framed against UpJobz source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.

United States tech market

United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.

Compensation read

$220K - $405K is visible before the click, so candidates can compare the role against local market expectations before applying.

Work authorization read

Current extracted signal: Open to TN, H-1B, and OPT candidates already in the United States. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.

Location read

On-site roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.

Browse similar jobs

Subscriber playbook

Turn this listing into an application plan.

This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.

Next moves

  • Tailor your resume around AI and LLM infrastructure instead of sending a generic application.
  • Use the first two bullets of your application to connect your background directly to this on-site San Francisco role and to the work authorizations it lists (TN, H-1B, and OPT candidates already in the United States).
  • Open the role quickly if it fits and bookmark three similar jobs before you leave the page.

Interview themes

Cloud and DevOps · On-site · ai · llm · machine-learning · research

Watchouts

  • $220K - $405K is visible, so calibrate your application around the posted range.
  • State your work-authorization status (TN, H-1B, or OPT, already in the United States) up front so the recruiter does not have to infer it.
  • Show concrete examples of succeeding in on-site environments.
Role signals

Keywords to match against your background

Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.

ai · llm · machine-learning · research · python · kubernetes · terraform · aws · platform · observability · api · search · infrastructure · product
Next step

Apply through the employer source

Open the source listing from jobs.ashbyhq.com, confirm the role is still active, then apply on the employer or ATS page.

Open employer application

Source: jobs.ashbyhq.com Β· Source ID: 598e1f7d-b802-4de2-99ac-90eb2bc33315 Β· Confidence: 94/100 Β· Last checked: May 7, 2026

How UpJobz verifies job sources · Continue browsing tech jobs