San Francisco, CA

Research Engineer, Privacy

About the Team: The Privacy Engineering Team at OpenAI is committed to integrating privacy as a foundational element in OpenAI's mission of advancing Artificial General Intelligence (AGI). Our focus is on all OpenAI products and systems handling user data, striving to uphold the highest standards of data privacy and security.

Company
OpenAI
Compensation
$380K - $445K
Schedule
Full-Time
Role overview

What this role actually needs.

The Privacy Engineering Team is committed to integrating privacy as a foundational element in OpenAI's mission of advancing Artificial General Intelligence (AGI). Its focus spans all OpenAI products and systems that handle user data, with the goal of upholding the highest standards of data privacy and security. OpenAI builds frontier AI systems, research infrastructure, and applied products for developers, enterprises, and global users. The responsibilities and requirements below break the role out in detail.

Responsibilities

Day-to-day expectations

OpenAI lists these responsibilities for the Research Engineer, Privacy role; a brief code sketch of the differential-privacy technique named in the first bullet follows the list.

  • Design and prototype privacy-preserving machine-learning algorithms (e.g., differential privacy, secure aggregation, federated learning) that can be deployed at OpenAI scale.
  • Measure and strengthen model robustness against privacy attacks such as membership inference, model inversion, and data memorization leaks—balancing utility with provable guarantees.
  • Develop internal libraries, evaluation suites, and documentation that make cutting-edge privacy techniques accessible to engineering and research teams.
  • Lead deep-dive investigations into the privacy–performance trade-offs of large models, publishing insights that inform model-training and product-safety decisions.
  • Define and codify privacy standards, threat models, and audit procedures that guide the entire ML lifecycle—from dataset curation to post-deployment monitoring.
  • Collaborate across Security, Policy, Product, and Legal to translate evolving regulatory requirements into practical technical safeguards and tooling.
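
As a rough illustration of the differential-privacy work in the first bullet, here is a minimal DP-SGD step in plain PyTorch: clip each example's gradient, sum, add calibrated Gaussian noise, then step. The clip norm, noise multiplier, and toy model are hypothetical illustration values, not anything published with the listing; a production system would use a vectorized implementation (e.g. torch.func or Opacus) plus a privacy accountant.

```python
# Minimal DP-SGD sketch: per-example gradient clipping + Gaussian noise.
# CLIP_NORM, NOISE_MULTIPLIER, and the toy model/data are hypothetical.
import torch
import torch.nn as nn

CLIP_NORM = 1.0          # max L2 norm allowed for any single example's gradient
NOISE_MULTIPLIER = 1.1   # noise stddev, expressed relative to CLIP_NORM

model = nn.Linear(16, 2)                     # toy classifier
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 16)                      # toy batch
y = torch.randint(0, 2, (32,))

# 1. Per-example gradients (naive loop; real systems vectorize this).
per_example_grads = []
for i in range(x.size(0)):
    model.zero_grad()
    loss_fn(model(x[i:i + 1]), y[i:i + 1]).backward()
    per_example_grads.append([p.grad.detach().clone() for p in model.parameters()])

# 2. Clip each example's gradient to CLIP_NORM in L2 norm, then sum.
summed = [torch.zeros_like(p) for p in model.parameters()]
for grads in per_example_grads:
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(CLIP_NORM / (norm + 1e-12), max=1.0)
    for s, g in zip(summed, grads):
        s.add_(g * scale)

# 3. Add Gaussian noise calibrated to the clip norm, average, and step.
model.zero_grad()
for p, s in zip(model.parameters(), summed):
    noise = torch.randn_like(s) * NOISE_MULTIPLIER * CLIP_NORM
    p.grad = (s + noise) / x.size(0)
opt.step()
```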
Requirements

What a strong candidate brings

These requirements are extracted from the source listing and normalized for UpJobz readers; a sketch of the kind of leakage probe the third bullet describes follows the list.

  • Have hands-on research or production experience with privacy-enhancing technologies (PETs).
  • Are fluent in modern deep-learning stacks (PyTorch/JAX) and comfortable turning cutting-edge papers into reliable, well-tested code.
  • Enjoy stress-testing models—probing them for private data leakage—and can explain complex attack vectors to non-experts with clarity.
  • Have a track record of publishing (or implementing) novel privacy or security work and relish bridging the gap between academia and real-world systems.
  • Thrive in fast-moving, cross-disciplinary environments where you alternate between open-ended research and shipping production features under tight deadlines.
  • Communicate crisply, document rigorously, and care deeply about building AI systems that respect user privacy while pushing the frontiers of capability.
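
As a hedged illustration of the third requirement, here is the simplest membership-inference probe: a loss-threshold baseline in the style of Yeom et al. (2018). The model, data, and threshold heuristic are toy assumptions for the sketch, not a description of OpenAI's internal tooling; a well-defended model should score near 50% (chance).

```python
# Loss-threshold membership-inference baseline: members of the training set
# tend to have lower loss than non-members on a model that memorizes data.
# Model, data, and threshold below are hypothetical stand-ins.
import torch
import torch.nn as nn

@torch.no_grad()
def per_example_loss(model, x, y):
    """Per-example cross-entropy losses for a batch."""
    return nn.functional.cross_entropy(model(x), y, reduction="none")

def loss_threshold_attack(model, x_in, y_in, x_out, y_out):
    """Predict 'member' when loss falls below a threshold; return attack
    accuracy (0.5 means the probe finds no leakage signal)."""
    loss_in = per_example_loss(model, x_in, y_in)        # training examples
    loss_out = per_example_loss(model, x_out, y_out)     # held-out examples
    threshold = torch.cat([loss_in, loss_out]).median()  # simple heuristic
    correct = (loss_in < threshold).sum() + (loss_out >= threshold).sum()
    return correct.item() / (len(loss_in) + len(loss_out))

# Toy usage: an untrained model should sit near chance level.
model = nn.Linear(16, 2)
acc = loss_threshold_attack(
    model,
    torch.randn(64, 16), torch.randint(0, 2, (64,)),
    torch.randn(64, 16), torch.randint(0, 2, (64,)),
)
print(f"membership-inference accuracy: {acc:.2f} (0.50 = no detectable leakage)")
```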
Benefits

Why people would want this job

OpenAI published the following compensation and working-context details with the role; the source listing does not break out a separate benefits package.

  • Compensation: $380K - $445K
  • Schedule: Full-Time
  • Location: San Francisco, CA (hybrid)
UpJobz market context

Why this listing is more than a copied job post.

This Research Engineer, Privacy listing is framed against UpJobz source checks (country scope, compensation visibility, and work-authorization signals) so candidates can make a faster go/no-go decision.

United States tech market

United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.

Compensation read

$380K - $445K is visible before the click, so candidates can compare the role against local market expectations before applying.

Work authorization read

Current extracted signal: United States residents. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.

Location read

Hybrid roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.


Subscriber playbook

Turn this listing into an application plan.

This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.

Next moves

  • Tailor your resume around AI and machine learning instead of sending a generic application.
  • Use the first two bullets of your application to connect your background directly to the role's core signals: it is a high-signal hybrid role in San Francisco, most realistic for United States residents.
  • Open the role quickly if it fits and bookmark three similar jobs before you leave the page.

Interview themes

Artificial Intelligence · Hybrid · ai · machine-learning · research · aws

Watchouts

  • $380K - $445K is visible, so calibrate your application around the posted range.
  • State your United States residency up front so the recruiter does not have to infer it.
  • Show concrete examples of succeeding in hybrid environments.
Role signals

Keywords to match against your background

Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.

ai · machine-learning · research · aws · security · api · llm · python · infrastructure
Next step

Apply through the employer source

Open the source listing from jobs.ashbyhq.com, confirm the role is still active, then apply on the employer or ATS page.

Open employer application

Source: jobs.ashbyhq.com · Source ID: cc434f5b-dc0b-42fd-97ec-e0171545c6e9 · Confidence: 97/100 · Last checked: May 7, 2026
