Abuse Investigator (AI Self-Improvement Risk)
About the Team OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving our goal requires real-world deployment and iterative updating based on what we learn.
What this role actually needs
Company context: OpenAI builds frontier AI systems, research infrastructure, and applied products for developers, enterprises, and global users.
Day-to-day expectations
OpenAI lists these responsibilities for the Abuse Investigator (AI Self-Improvement Risk) role.
- Review leads, investigate model behavior, and identify cases where systems demonstrate agentic or autonomous patterns that introduce safety risks
- Detect and analyze behaviors such as multi-step planning, capability chaining, tool use, persistence, and workaround behavior
- Develop signals and tracking strategies to help proactively identify emerging agentic risk patterns across our platform
- Identify gaps in existing safeguards, evaluations, or monitoring systems and propose improvements
- Communicate investigation findings clearly to technical, policy, and leadership stakeholders
- Be someone people enjoy working with and appreciate the opportunity to help others
Why people would want this job
OpenAI published these compensation and working-context details with the role.
- Posted compensation range: $288K - $320K
- Hybrid working arrangement based in San Francisco
- Open to United States residents
Why this listing is more than a copied job post
Abuse Investigator (AI Self-Improvement Risk) is framed against UpJobz source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.
United States tech market
United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.
Compensation read
$288K - $320K is visible before the click, so candidates can compare the role against local market expectations before applying.
Work authorization read
Current extracted signal: United States residents. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.
Location read
Hybrid roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.
Turn this listing into an application plan
This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.
Next moves
- Tailor your resume around AI and research instead of sending a generic application.
- Use the first two bullets of your application to connect your background directly to the Abuse Investigator (AI Self-Improvement Risk) role, a high-signal hybrid position in San Francisco that is most realistic for United States residents.
- Open the role quickly if it fits and bookmark three similar jobs before you leave the page.
Interview themes
- How you would investigate agentic or autonomous model behavior, including multi-step planning, capability chaining, tool use, persistence, and workaround behavior
- How you would design signals and monitoring to surface emerging risk patterns, and where you would look for gaps in existing safeguards and evaluations
- How you would communicate investigation findings to technical, policy, and leadership stakeholders
Watchouts
- $288K - $320K is visible, so calibrate your application around the posted range.
- State your United States residency explicitly in your application so the recruiter does not have to infer your eligibility.
- Show concrete examples of succeeding in hybrid environments.
Keywords to match against your background
Use these terms to decide whether your resume, portfolio, and recent projects line up with the role: AI safety, abuse investigation, agentic behavior, multi-step planning, capability chaining, tool use, persistence, risk signals, monitoring, safeguards, evaluations.
Apply through the employer source
Open the source listing from jobs.ashbyhq.com, confirm the role is still active, then apply on the employer or ATS page.
Source: jobs.ashbyhq.com · Source ID: 55e4518a-5caf-4d96-9c16-3924805d71bd · Confidence: 97/100 · Last checked: May 7, 2026
How UpJobz verifies job sources · Continue browsing tech jobs