OpenAI Seeks Executive to Study Emerging AI Risks

OpenAI is hiring a Head of Preparedness to study emerging risks tied to rapidly advancing AI models, including mental health impacts and cybersecurity threats. The move reflects rising concern over how frontier capabilities could be misused as models grow more powerful.

By Maria Konash
As AI grows more powerful, OpenAI is expanding its preparedness efforts with a new executive role. Photo: Solen Feyissa / Pexels

OpenAI is seeking a new executive to lead its work on emerging risks posed by increasingly capable AI systems. The role, Head of Preparedness, will focus on assessing and mitigating potential harms linked to frontier models, including mental health effects, cybersecurity vulnerabilities, and the misuse of advanced technical capabilities.

Chief Executive Officer Sam Altman said in a recent post that AI systems are now entering a phase where their benefits are accompanied by more complex challenges. He pointed to early signs in 2025 that some models could negatively affect mental health, as well as more recent developments showing that AI systems are becoming skilled enough in computer security to identify critical software vulnerabilities.

According to OpenAI, the company has built a foundation for measuring growing model capabilities, but now needs a more nuanced understanding of how those capabilities could be abused. The Head of Preparedness will be responsible for refining how OpenAI evaluates risk, mitigates harm in its products, and balances safety with the continued deployment of advanced models.

The job listing describes the role as executing OpenAI’s Preparedness Framework, which outlines how the company tracks and prepares for frontier AI capabilities that could create severe harm. Compensation for the position is listed at $555,000 plus equity, reflecting the seniority and scope of responsibility.

Cybersecurity, Mental Health, and High-Risk Capabilities

Altman emphasized that the role will involve difficult trade-offs, noting that many proposed safety measures have edge cases and little precedent. One priority is enabling cybersecurity defenders to use advanced AI tools while preventing attackers from exploiting the same capabilities. The goal, he said, is to improve overall system security rather than simply shift risk elsewhere.

The position will also engage with questions around the release of sensitive capabilities, including biological research applications and systems that may eventually be able to self-improve. OpenAI has said that gaining confidence in the safety of such systems is becoming more urgent as models improve at a faster pace.

The company first introduced its preparedness team in 2023 to study risks ranging from near-term threats like phishing to more speculative scenarios involving large-scale harm. Since then, the group has undergone changes. Former Head of Preparedness Aleksander Madry was reassigned to work on AI reasoning, and several other safety-focused leaders have moved into different roles or left the company.

Rising Scrutiny and Internal Pressure

The hiring effort comes amid growing scrutiny of generative AI tools. Lawsuits and public criticism have raised concerns about how chatbots handle sensitive mental health conversations, with allegations that some interactions reinforced harmful behaviors. OpenAI has said it is working to improve detection of emotional distress and encourage users to seek human support.

OpenAI recently updated its Preparedness Framework to allow for adjustments if competing AI labs release high-risk models without comparable safeguards. The change reflects competitive pressure in the AI sector, where companies are racing to deploy more capable systems.

Altman warned that the Head of Preparedness role will be demanding, describing it as stressful and requiring immediate immersion in complex problems. By expanding this function, OpenAI is signaling that managing risk alongside rapid technological progress will remain central to its strategy as AI capabilities advance.