Software Engineer, Data Acquisition
Overview: The Data Acquisition team within the Foundations organization at OpenAI is responsible for all aspects of data collection to support our model training operations. Our team manages web crawling and GPTBot services and works closely with Data Processing, Architecture, and Scaling teams.
What this role actually needs.
Company context: OpenAI builds frontier AI systems, research infrastructure, and applied products for developers, enterprises, and global users.
Day-to-day expectations
OpenAI lists these responsibilities for the Software Engineer, Data Acquisition role.
- Own and lead engineering projects in the area of data acquisition including web crawling, data ingestion, and search.
- Collaborate with other sub-teams, such as Data Processing, Architecture, and Scaling, to ensure smooth data flow and system operability.
- Work closely with the legal team to handle any compliance or data privacy-related matters.
- Develop and deploy highly scalable distributed systems capable of handling petabytes of data.
- Architect and implement algorithms for data indexing and search capabilities.
- Build and maintain backend services for data storage, including work with key-value databases and synchronization.
What a strong candidate brings
These requirements are extracted from the source listing and normalized for UpJobz readers.
- BS/MS/PhD in Computer Science or a related field.
- 4+ years of industry experience in software development.
- Experience with large web crawlers is a plus.
- Strong expertise in large stateful distributed systems and data processing.
- Proficiency in Kubernetes and Infrastructure-as-Code concepts.
- Willingness and enthusiasm for trying new approaches and technologies.
Why this listing is more than a copied job post.
This Software Engineer, Data Acquisition listing is framed against UpJobz source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.
United States tech market
United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.
Compensation read
$293K - $385K is visible before the click, so candidates can compare the role against local market expectations before applying.
Work authorization read
Current extracted signal: United States residents. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.
Location read
On-site roles in San Francisco should be compared against commute, local salary bands, and nearby employer demand.
Turn this listing into an application plan.
This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.
Next moves
- Tailor your resume around AI and LLM experience instead of sending a generic application.
- Use the first two bullets of your application to connect your background directly to the Software Engineer, Data Acquisition role: it is a high-signal on-site position in San Francisco, and it is most realistic for United States residents.
- Open the role quickly if it fits and bookmark three similar jobs before you leave the page.
Watchouts
- $293K - $385K is visible, so calibrate your application around the posted range.
- Use "United States residents" as part of your positioning so the recruiter does not have to infer it.
- Show concrete examples of succeeding in on-site environments.
Keywords to match against your background
Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.
Apply through the employer source
Open the source listing from jobs.ashbyhq.com, confirm the role is still active, then apply on the employer or ATS page.
Source: jobs.ashbyhq.com Β· Source ID: 41d9d129-2e58-4ad3-be81-2e5096f4da4d Β· Confidence: 97/100 Β· Last checked: May 7, 2026