87 remote roles added today · 376 active tech employers · 🇺🇸 🇨🇦 🇲🇽 Tri-border network · 749 metros covered · 12 database updates this hour · TN visa filter live
San Francisco, CA

Research Engineer, Model Evaluations

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.

Company
Anthropic
Compensation
Not listed
Schedule
Full-Time
Role overview

What this role actually needs.

This role combines evaluation design with reliable infrastructure: designing and running evals of Claude’s capabilities, hardening the distributed platform that executes them against checkpoints during production RL training runs, and keeping the dashboards researchers and leadership rely on trustworthy. Full responsibilities and requirements are broken out below.

Company context: Anthropic is an AI safety company building Claude, a frontier-model assistant for developers, enterprises, and consumers.

Responsibilities

Day-to-day expectations

Anthropic lists these responsibilities for the Research Engineer, Model Evaluations role.

  • Design and run new evaluations of Claude's capabilities — reasoning, agentic behavior, knowledge, safety properties — and produce visualizations that make the results legible to researchers and decision-makers
  • Build and harden the distributed eval execution platform so hundreds of evals run reliably against checkpoints throughout production RL training runs
  • Own the dashboards researchers and leadership use to monitor model health during training, improving signal-to-noise, reducing latency, and making regressions impossible to miss
  • Debug anomalous eval results mid-training-run, determine whether the cause is a model change or an infrastructure issue, and communicate the answer clearly under time pressure
  • Improve the tooling, libraries, and workflows researchers use to implement and iterate on evaluations
  • Partner with research teams across the full lifecycle of a new capability — from defining what to measure to interpreting results as training progresses
Requirements

What a strong candidate brings

These requirements are extracted from the source listing and normalized for UpJobz readers.

  • Strong Python programming skills, including production or research infrastructure
  • Experience building or operating distributed systems, data pipelines, or other infrastructure that needs to be reliable at scale
  • Clear written and verbal communication, especially when explaining technical results to non-specialists
  • Comfort operating in an on-call or production-support capacity when training runs are live
  • Care about the societal impacts of your work and an interest in steering powerful AI to be safe and beneficial
UpJobz market context

Why this listing is more than a copied job post.

UpJobz frames the Research Engineer, Model Evaluations listing against source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.

United States tech market

United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.

Compensation read

The employer source does not expose a reliable salary range, so candidates should ask for compensation early instead of waiting until late-stage interviews.

Work authorization read

Current extracted signal: Open to TN, H-1B, and OPT candidates already in the United States. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.

Location read

On-site roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.


Subscriber playbook

Turn this listing into an application plan.

This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.

Next moves

  • Tailor your resume around AI and LLM experience instead of sending a generic application.
  • Use the first two bullets of your application to connect your background directly to the role: Research Engineer, Model Evaluations is a high-signal on-site role in San Francisco, most realistic for TN, H-1B, and OPT candidates already in the United States.
  • Open the role quickly if it fits and bookmark three similar jobs before you leave the page.

Interview themes

Artificial Intelligence · On-site · ai · llm · machine-learning · research

Watchouts

  • Compensation is hidden, so get range clarity in the first recruiter conversation.
  • State your work authorization up front (TN, H-1B, or OPT, already in the United States) so the recruiter does not have to infer it.
  • Show concrete examples of succeeding in on-site environments.
Role signals

Keywords to match against your background

Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.

ai · llm · machine-learning · research · python · aws · platform · observability · api · safety
Next step

Apply through the employer source

Open the source listing from job-boards.greenhouse.io, confirm the role is still active, then apply on the employer or ATS page.


Source: job-boards.greenhouse.io · Source ID: 5198255008 · Confidence: 97/100 · Last checked: May 7, 2026
