87 remote roles added today · 376 active tech employers · 🇺🇸 🇨🇦 🇲🇽 Tri-border network · 749 metros covered · 12 database updates this hour · TN visa filter live
San Francisco, CA

Research Engineer / Scientist, Alignment Science

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.

Company
Anthropic
Compensation
Not listed
Schedule
Full-Time
Role overview

What this role actually needs.

Company context: Anthropic is an AI safety company building Claude, a frontier-model assistant for developers, enterprises, and consumers.

Responsibilities

Day-to-day expectations

Anthropic lists these responsibilities for the Research Engineer / Scientist, Alignment Science role.

  • Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
  • AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
  • Alignment Stress-testing: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
  • Automated Alignment Research: Building and aligning a system that can speed up and improve alignment research.
  • Alignment Assessments: Understanding and documenting the highest-stakes and most concerning emerging properties of models through pre-deployment alignment and welfare assessments (see our Claude 4 System Card), misalignment-risk safety cases, and coordination with third-party evaluators.
  • Safeguards Research: Developing robust defenses against adversarial attacks, comprehensive evaluation frameworks for model safety, and automated systems to detect and mitigate potential risks before deployment.

UpJobz market context

Why this listing is more than a copied job post.

Research Engineer / Scientist, Alignment Science is framed against UpJobz source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.

United States tech market

United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are published to search-facing pages.

Compensation read

The employer source does not expose a reliable salary range, so candidates should ask for compensation early instead of waiting until late-stage interviews.

Work authorization read

Current extracted signal: Open to TN, H-1B, and OPT candidates already in the United States. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.

Location read

On-site roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.


Subscriber playbook

Turn this listing into an application plan.

This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.

Next moves

  • Tailor your resume around AI and LLM keywords instead of sending a generic application.
  • Use the first two bullets of your application to connect your background directly to the Research Engineer / Scientist, Alignment Science role: a high-signal on-site position in San Francisco, most realistic for TN, H-1B, and OPT candidates already in the United States.
  • Open the role quickly if it fits and bookmark three similar jobs before you leave the page.

Interview themes

Artificial Intelligence · On-site · ai · llm · machine-learning · research

Watchouts

  • Compensation is hidden, so get range clarity in the first recruiter conversation.
  • If you are a TN, H-1B, or OPT candidate already in the United States, say so explicitly in your positioning so the recruiter does not have to infer it.
  • Show concrete examples of succeeding in on-site environments.
Role signals

Keywords to match against your background

Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.

ai · llm · machine-learning · research · python · kubernetes · aws · api · safety
Next step

Apply through the employer source

Open the source listing from job-boards.greenhouse.io, confirm the role is still active, then apply on the employer or ATS page.

Open employer application

Source: job-boards.greenhouse.io · Source ID: 4631822008 · Confidence: 97/100 · Last checked: May 7, 2026
