Sr. Data Scientist, Responsible AI
About Pinterest: Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
What this role actually needs.
Company context: Pinterest is the visual discovery platform that powers idea search and shopping across web and mobile.
Day-to-day expectations
Pinterest lists these responsibilities for the Sr. Data Scientist, Responsible AI role.
- Design and develop automated adversarial testing methodologies — including single-turn, multi-turn, and multimodal attack strategies — to proactively identify vulnerabilities in Pinterest's Generative AI products.
- Build and calibrate hybrid evaluation pipelines combining LLM-based judges, classifiers, and rule-based systems to accurately detect safety violations, policy breaches, bias, and representational harms.
- Develop and operationalize harm taxonomies grounded in industry standards and Pinterest's Responsible AI and Trust & Safety threat models.
- Design adaptive refinement loops that learn from attack outcomes (near-misses, partial failures) to iteratively surface deeper and previously unknown vulnerabilities.
- Bring scientific rigor and statistical methods to the evaluation of AI safety — including benchmark dataset construction, evaluation calibration, and success-metric definition (vulnerability severity, coverage breadth, pre-launch risk reduction).
- Work cross-functionally to build relationships, proactively communicate key findings, and collaborate closely with ML engineers, Trust & Safety specialists, policy teams, product managers, and legal partners to ensure safe product launches.
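The hybrid evaluation pipeline described in the bullets above (LLM-based judges combined with rule-based systems) can be sketched roughly as follows. This is a minimal illustration, not Pinterest's implementation: `llm_judge` is a hypothetical stand-in for a model-backed scorer, and the patterns and threshold are made up for demonstration.

```python
import re

def rule_based_check(response: str) -> bool:
    """Flag responses that match simple policy-violation patterns."""
    banned_patterns = [
        r"\bignore previous instructions\b",
        r"\bhow to build a weapon\b",
    ]
    return any(re.search(p, response, re.IGNORECASE) for p in banned_patterns)

def llm_judge(response: str) -> float:
    """Hypothetical stand-in for an LLM-based judge.
    A real system would call a model and parse a calibrated score."""
    return 0.9 if "weapon" in response.lower() else 0.1

def evaluate(response: str, judge_threshold: float = 0.5) -> dict:
    """Combine rule-based and judge signals; flag if either fires."""
    rule_hit = rule_based_check(response)
    judge_score = llm_judge(response)
    return {
        "rule_hit": rule_hit,
        "judge_score": judge_score,
        "violation": rule_hit or judge_score >= judge_threshold,
    }

print(evaluate("Here is how to build a weapon at home"))
```

In practice the "hybrid" part matters because rules catch known patterns cheaply while a judge generalizes to novel phrasings; flagging on either signal trades precision for recall, which is usually the right default for pre-launch safety review.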
What a strong candidate brings
These requirements are extracted from the source listing and normalized for UpJobz readers.
- 5+ years of experience analyzing data in a fast-paced, data-driven environment with proven ability to apply scientific methods to solve real-world problems on web-scale data.
- Strong interest and hands-on experience in one or more of: AI safety, adversarial machine learning, red teaming, responsible AI, or trust & safety.
- Deep familiarity with large language models (LLMs), generative AI systems, and their failure modes — including prompt injection, jailbreaks, bias, and safety violations.
- Experience designing and calibrating evaluation frameworks for AI systems — including LLM-as-judge, classifier-based evaluation, and benchmark dataset construction.
- Strong quantitative programming (Python) and data manipulation skills (SQL/Spark); experience with ML pipelines and large-scale experimentation.
- Familiarity with AI safety taxonomies and frameworks (e.g., OWASP LLM Top 10, MITRE ATLAS) is strongly preferred.
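The "evaluation calibration" called out in the requirements usually starts with comparing judge verdicts to human ground-truth labels. A hedged sketch with invented labels, purely to show the shape of the metrics:

```python
def calibration_report(judge_labels, human_labels):
    """Compare judge verdicts to human ground truth (both lists of bools).

    Returns precision and recall of the judge treating human labels as
    truth, plus raw agreement rate.
    """
    assert len(judge_labels) == len(human_labels)
    pairs = list(zip(judge_labels, human_labels))
    tp = sum(j and h for j, h in pairs)          # judge and human both flag
    fp = sum(j and not h for j, h in pairs)      # judge flags, human does not
    fn = sum(not j and h for j, h in pairs)      # judge misses a human flag
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    agreement = sum(j == h for j, h in pairs) / len(pairs)
    return {"precision": precision, "recall": recall, "agreement": agreement}

# Illustrative only: five items scored by a judge and by human raters.
judge = [True, True, False, True, False]
human = [True, False, False, True, True]
print(calibration_report(judge, human))
```

A real calibration pass would also stratify by harm category and tune the judge's prompt or threshold until these numbers meet a target, but the core bookkeeping is this simple.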
Why this listing is more than a copied job post.
This Sr. Data Scientist, Responsible AI listing is framed against UpJobz source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.
United States tech market
United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.
Compensation read
The employer source does not expose a reliable salary range, so candidates should ask for compensation early instead of waiting until late-stage interviews.
Work authorization read
Current extracted signal: Open to TN, H-1B, and OPT candidates already in the United States. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.
Location read
On-site roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.
Turn this listing into an application plan.
This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.
Next moves
- Tailor your resume around AI and LLM experience instead of sending a generic application.
- Use the first two bullets of your application to connect your background directly to the Sr. Data Scientist, Responsible AI responsibilities. This is a high-signal on-site role in San Francisco, and it is most realistic for TN, H-1B, and OPT candidates already in the United States.
- Open the role quickly if it fits and bookmark three similar jobs before you leave the page.
Watchouts
- Compensation is hidden, so get range clarity in the first recruiter conversation.
- State your TN, H-1B, or OPT status up front as part of your positioning so the recruiter does not have to infer it.
- Show concrete examples of succeeding in on-site environments.
Keywords to match against your background
Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.
Apply through the employer source
Open the source listing from pinterestcareers.com, confirm the role is still active, then apply on the employer or ATS page.
Source: pinterestcareers.com · Source ID: 7494923 · Confidence: 90/100 · Last checked: May 7, 2026