OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they might be abused, in order to guide the company’s safety strategy. The search comes at the end of a year in which OpenAI has faced numerous accusations about ChatGPT’s impacts on users’ mental health, including several wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the “potential impact of models on mental health was something we saw a preview of in 2025,” along with other “real challenges” that have arisen alongside models’ capabilities. The Head of Preparedness “is a critical role at an important time,” he said.
Per the job listing, the Head of Preparedness (who will make $555K, plus equity) “will lead the technical strategy and execution of OpenAI’s Preparedness framework, our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.” It is, according to Altman, “a stressful job and you’ll jump into the deep end pretty much immediately.”
Over the last couple of years, OpenAI’s safety teams have undergone a lot of changes. The company’s former Head of Preparedness, Aleksander Madry, was reassigned back in July 2024, and Altman said at the time that the role would be taken over by execs Joaquin Quinonero Candela and Lilian Weng. Weng left the company a few months later, and in July 2025, Quinonero Candela announced his move away from the preparedness team to lead recruiting at OpenAI.