Software Engineer, Workload Enablement
About the Team The Scaling team is responsible for the architectural and engineering backbone of OpenAI’s infrastructure. We design and deliver advanced systems that support the deployment and operation of cutting-edge AI models.
What this role actually needs.
Company context: OpenAI builds frontier AI systems, research infrastructure, and applied products for developers, enterprises, and global users.
Day-to-day expectations
OpenAI lists these responsibilities for the Software Engineer, Workload Enablement role.
- Port and validate key inference and training workloads on new platforms/SKUs as they arrive; drive correctness, performance, and stability to an internal readiness bar.
- Build a suite of benchmarks and stress tests that capture real E2E behavior of our workloads by exercising all aspects of a system, including CPU, GPU, memory subsystem, frontend, scale-up and scale-out networking (including WAN traffic, NVLink, and RDMA collectives), storage, thermals, and any other relevant parts.
- Deep-dive performance on distributed training/inference: collective performance and tuning (across NCCL/RCCL and internal libraries); overlap of compute and communication; kernel-level bottlenecks; memory bandwidth and scheduling effects.
- Create repeatable test harnesses that run in CI / lab environments and produce actionable outputs (pass/fail, performance score, regression detection).
- Partner with systems + fleet bring-up engineers to ensure the platform is not only stable and performant, but also operationally usable and scalable (containerization, K8s integration, telemetry hooks, failure triage loops).
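The "repeatable test harnesses" bullet above can be sketched as a minimal pass/fail gate. This is an illustrative assumption of what such a harness might look like, not internal OpenAI tooling: the benchmark names, baseline scores, and tolerance are all made up, and only the standard library is used.

```python
# Minimal sketch of a regression-gating harness (illustrative only).
# BASELINES, TOLERANCE, and all benchmark names/numbers are hypothetical.
import json
import statistics

# Hypothetical known-good scores (higher is better), keyed by benchmark name.
BASELINES = {"allreduce_bw_gbps": 380.0, "decode_tokens_per_s": 1450.0}
TOLERANCE = 0.05  # flag a regression if we fall more than 5% below baseline


def evaluate(results: dict[str, list[float]]) -> dict:
    """Compare median scores against baselines and emit a pass/fail report."""
    report = {"pass": True, "checks": []}
    for name, samples in results.items():
        score = statistics.median(samples)  # median is robust to outlier runs
        baseline = BASELINES[name]
        regressed = score < baseline * (1 - TOLERANCE)
        report["checks"].append(
            {"benchmark": name, "score": score,
             "baseline": baseline, "regressed": regressed}
        )
        if regressed:
            report["pass"] = False
    return report


# Three repeated runs per benchmark, as a CI job might collect (values made up).
measured = {
    "allreduce_bw_gbps": [382.1, 379.5, 381.0],
    "decode_tokens_per_s": [1300.0, 1290.0, 1310.0],  # ~10% below baseline
}
report = evaluate(measured)
print(json.dumps(report, indent=2))  # machine-readable output for CI
```

A real harness would of course collect scores from actual workload runs and wire the report into CI gating; the point here is the shape of the "actionable output" the bullet describes: a structured pass/fail verdict with per-benchmark regression flags.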
What a strong candidate brings
These requirements are extracted from the source listing and normalized for UpJobz readers.
- BS in CS/EE (or equivalent practical experience).
- 5+ years in one or more of: ML systems, performance engineering, distributed systems, or HPC.
- Strong hands-on experience with PyTorch and modern LLM training/inference stacks, plus large-scale distributed training concepts (data/model/pipeline parallel, collective comms)
- Experience with RDMA and debugging/optimizing comms libraries (NCCL or RCCL) and their interaction with hardware/network
- Proficiency in Python plus comfort reading/writing performance-critical code (C++/CUDA/HIP is a plus).
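To make the "data/model/pipeline parallel, collective comms" requirement concrete, the core operation behind data parallelism is an all-reduce that averages per-worker gradients. A toy in-process sketch (real stacks delegate this to NCCL/RCCL via `torch.distributed`; the gradient values here are made up):

```python
# Toy illustration of the all-reduce averaging step at the heart of
# data-parallel training. The "collective" is simulated in-process;
# no real comms library is involved.
def allreduce_mean(per_worker_grads: list[list[float]]) -> list[float]:
    """Average gradients element-wise across workers (simulated all-reduce)."""
    n_workers = len(per_worker_grads)
    return [sum(g) / n_workers for g in zip(*per_worker_grads)]


# Each worker computed gradients on its own data shard (values are made up).
grads = [
    [0.10, -0.20, 0.30],   # worker 0
    [0.30,  0.00, 0.10],   # worker 1
]
averaged = allreduce_mean(grads)
print(averaged)  # → [0.2, -0.1, 0.2]
```

After this step every worker applies the same averaged gradient, which is what keeps model replicas in sync; tuning how and when this collective runs (and overlapping it with compute) is exactly the performance work the responsibilities above describe.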
Why this listing is more than a copied job post.
Software Engineer, Workload Enablement is framed against UpJobz source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.
United States tech market
United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.
Compensation read
$293K - $455K is visible before the click, so candidates can compare the role against local market expectations before applying.
Work authorization read
Current extracted signal: United States residents. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.
Location read
Hybrid roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.
Browse similar jobs
Turn this listing into an application plan.
This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.
Next moves
- Tailor your resume around AI and LLM keywords instead of sending a generic application.
- Use the first two bullets of your application to connect your background directly to the Software Engineer, Workload Enablement role: a high-signal hybrid position in San Francisco that is most realistic for United States residents.
- Open the role quickly if it fits and bookmark three similar jobs before you leave the page.
Interview themes
Watchouts
- $293K - $455K is visible, so calibrate your application around the posted range.
- Make your United States residency or work authorization explicit in your application so the recruiter does not have to infer it.
- Show concrete examples of succeeding in hybrid environments.
Keywords to match against your background
Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.
Apply through the employer source
Open the source listing from jobs.ashbyhq.com, confirm the role is still active, then apply on the employer or ATS page.
Source: jobs.ashbyhq.com · Source ID: 9efcef02-0515-4672-bace-81329944b38b · Confidence: 97/100 · Last checked: May 7, 2026