OpenAI is hiring for a role the company describes as ‘head of preparedness’. According to the job listing, OpenAI’s head of preparedness will lead the Safety Systems team that was created in 2024. At the time, the Safety Systems team was led by CEO Sam Altman, along with board members Adam D’Angelo and Nicole Seligman.

“The Safety Systems team ensures that OpenAI’s most capable models can be responsibly developed and deployed,” the listing says, adding, “you will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework.”

“OpenAI has invested deeply in Preparedness across multiple generations of frontier models, building core capability evaluations, threat models, and cross-functional mitigations,” the listing reads.
“The Head of Preparedness will expand, strengthen, and guide this program so our safety standards scale with the capabilities of the systems we develop,” it adds.
OpenAI CEO Sam Altman describes head of preparedness job as ‘stressful’
OpenAI CEO Sam Altman also shared the hiring post on X (formerly Twitter), where he described the role as “critical” and “stressful”. “We are hiring a Head of Preparedness,” Sam Altman wrote, adding, “This will be a stressful job and you’ll jump into the deep end pretty much immediately.” “If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” he added.
OpenAI CEO Sam Altman’s post on hiring head of preparedness
We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities.

We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits. These questions are hard and there is little precedent; a lot of ideas that sound good have some real edge cases.

If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying.

This will be a stressful job and you’ll jump into the deep end pretty much immediately.
