OpenAI Posts 'Chief Worry Officer' Job, Pays $555K to Stress-Test AI
Serge Bulaev
OpenAI is hiring a 'Chief Worry Officer' to make sure its powerful AI models are safe before the public uses them. The role pays $555,000 and focuses on risks like cyber and health dangers. The job is about finding problems and stopping launches if things aren't safe. OpenAI wants someone who can handle big decisions and lots of stress. Other big tech companies are also hiring experts to keep AI safe, showing it's now a top priority everywhere.

OpenAI is hiring for a pivotal 'Chief Worry Officer' role with a $555,000 salary plus equity, signaling a major focus on AI safety. The official posting, under the formal title Head of Preparedness, details a mandate to stress-test advanced models for catastrophic risks, from cyber threats to biosecurity, before they reach the public. The executive position anchors the company's framework for institutionalizing risk management.
Why OpenAI Needs a Chief Worrier
OpenAI is creating this role because its leadership believes advanced AI is entering a more dangerous phase where model capabilities are outpacing safety controls. The Head of Preparedness is tasked with identifying and mitigating catastrophic risks before new systems are released, ensuring safeguards keep pace with development.
CEO Sam Altman has echoed this concern, warning that systems are becoming powerful enough to find critical security vulnerabilities on their own. The preparedness lead will have the authority to block or delay product releases if safety evaluations reveal significant, unsolved hazards (TechCrunch coverage). The role sits within the Safety Systems group, granting it equal standing with research and product leadership and emphasizing the company's commitment to safety over speed.
Key Responsibilities and Candidate Profile
The job description outlines a role centered on evaluation, mitigation, and internal influence. Core responsibilities include:
- Building and updating frontier capability evaluations to keep pace with rapid model iterations.
- Translating threat models into concrete technical and policy safeguards before launch.
- Running mitigation design across high-stakes domains such as cybersecurity and biosecurity.
- Guiding leadership on whether a release meets OpenAI's risk thresholds.
- Leading a small, high-impact team and coordinating with external partners on shared safety standards.
The ideal candidate must possess deep expertise in machine learning or security, coupled with the judgment to make high-stakes decisions under pressure. Underscoring the role's intensity, Sam Altman noted, "this will be a stressful job and you'll jump into the deep end immediately." The significant compensation package reflects the position's strategic importance.
An Industry-Wide Trend Toward AI Safety Leadership
OpenAI's move is part of a broader trend. Other companies, from Meta and Character.ai to General Motors, are also hiring senior leaders for AI alignment and safety. Governments are establishing dedicated bodies like the UK AI Security Institute, while nonprofits such as the Center for AI Safety are recruiting for similar governance roles.
This wave of hiring indicates that executive-level AI safety is becoming a standard corporate function, much like a Chief Security Officer. While the 'Chief Worry Officer' title is informal, the role's authority to halt product launches represents a significant step in how the industry is operationalizing responsible AI development.
What exactly is OpenAI's "Chief Worry Officer" position?
The formal title is Head of Preparedness, a senior executive role that owns OpenAI's entire preparedness framework. The job listing shows the post pays $555,000 per year plus equity and tasks the hire with building capability evaluations, establishing threat models, and coordinating mitigations across cyber, bio, and other frontier-risk domains. Sam Altman has publicly warned candidates that "this will be a stressful job and you'll jump into the deep end pretty much immediately."
Which risks will the Head of Preparedness oversee?
The role covers cybersecurity, biological threats, mental-health impacts, and large-scale disinformation. Recent reporting notes that OpenAI's latest models are now "so good at computer security they are beginning to find critical vulnerabilities," while lawsuits allege ChatGPT interactions contributed to user suicides and paranoid delusions. The executive must translate these threat models into concrete technical and policy safeguards that can delay or restrict product launches.
How does this role influence product releases?
Evaluation results from the Preparedness team directly inform launch decisions, policy choices, and safety cases. If a frontier model scores too high on dangerous capability benchmarks, the Head of Preparedness can recommend shipping restrictions, additional guardrails, or outright cancellation of the deployment. This places safety assessments on equal footing with research, product, and infrastructure leadership inside OpenAI.
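To make the gating idea concrete, here is a minimal, hypothetical sketch of how evaluation scores could be compared against risk thresholds to produce a launch recommendation. The risk categories, threshold values, and function names are illustrative assumptions for this article, not OpenAI's actual Preparedness Framework or internal tooling; a real review would also involve human judgment, safety cases, and mitigations.

```python
from dataclasses import dataclass

# Hypothetical risk domains and thresholds; illustrative only,
# not OpenAI's actual Preparedness Framework values.
RISK_THRESHOLDS = {
    "cybersecurity": 0.7,
    "biosecurity": 0.5,
    "self_harm_content": 0.6,
}

@dataclass
class LaunchDecision:
    approved: bool
    flagged_domains: list[str]

def gate_launch(eval_scores: dict[str, float]) -> LaunchDecision:
    """Flag any domain whose evaluation score exceeds its threshold.

    This only demonstrates the threshold concept described in the
    article: high scores on dangerous-capability evaluations block
    or restrict a release pending further review.
    """
    flagged = [
        domain for domain, score in eval_scores.items()
        if score > RISK_THRESHOLDS.get(domain, 1.0)
    ]
    return LaunchDecision(approved=not flagged, flagged_domains=flagged)

if __name__ == "__main__":
    decision = gate_launch({"cybersecurity": 0.82, "biosecurity": 0.3})
    print(decision)  # LaunchDecision(approved=False, flagged_domains=['cybersecurity'])
```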
Is OpenAI alone in hiring at this level for AI safety?
No. Meta, Character.ai, General Motors, Charles Schwab, Field AI, the UK AI Security Institute, and the Australian AI Safety Institute have all posted director- or principal-level AI safety roles for 2025-2026. Compensation bands run from $184k-$400k for technical leads to around $300k plus equity for robotics safety directors, indicating the industry is institutionalizing safety as a C-suite priority.
Why now?
Altman says AI has entered a "more risky phase" where capabilities are "evolving faster than the safeguards meant to control them". He cites extinction-level concerns alongside pandemic and nuclear-scale risks, and points to internal team reorganizations that left governance gaps. The Head of Preparedness is meant to rebuild and scale OpenAI's capacity to systematically worry before each new model ships.