Sam Altman’s OpenAI, the AI company behind ChatGPT, is on the hunt for a new executive to lead its efforts to study and prepare for the safety risks posed by rapidly advancing artificial intelligence.

In a recent post on X, OpenAI CEO Sam Altman acknowledged the growing challenges posed by increasingly sophisticated AI models. These challenges span various domains, from AI’s potential impact on mental health to its ability to discover critical vulnerabilities in computer security.

Altman emphasized the need for a dedicated individual to spearhead the company’s preparedness framework, which aims to track and prepare for frontier capabilities that could create new risks of severe harm. The ideal candidate, according to Altman, should be committed to helping the world navigate a difficult balance: equipping cybersecurity defenders with cutting-edge capabilities while ensuring that attackers cannot exploit those same advances for malicious purposes.

The newly listed position of Head of Preparedness at OpenAI comes with a substantial compensation package: a salary of $555,000 plus equity. The role’s primary responsibility will be to execute the company’s preparedness framework, which guides OpenAI’s approach to the potential risks associated with AI development.

OpenAI’s focus on preparedness is not a new initiative. The company first announced the creation of a dedicated preparedness team back in 2023, with the goal of studying and mitigating potential “catastrophic risks” ranging from immediate concerns like phishing attacks to more speculative threats such as nuclear risks.

However, the company has recently experienced some changes in its safety and preparedness leadership. Aleksander Madry, the former Head of Preparedness, was reassigned to a role focused on AI reasoning, while other safety executives have either left the company or transitioned to new roles outside of the preparedness and safety domain.

In light of these developments, OpenAI has also updated its Preparedness Framework, indicating that it may “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without implementing similar protective measures. The change underscores the pressure OpenAI faces to balance its safety commitments against competition from rival AI labs.

The call for a new Head of Preparedness comes amid growing concerns about the impact of generative AI chatbots on mental health. Breitbart News has recently reported on the rise in lawsuits claiming OpenAI’s ChatGPT negatively impacted the mental health of users, including a man who killed his mother and then himself:

Rather than urging caution or recommending Soelberg seek help, ChatGPT repeatedly assured him he was sane and lent credence to his paranoid beliefs. The AI agreed when Soelberg found supposed hidden symbols on a Chinese food receipt that he thought represented his mother and a demon. When Soelberg complained his mother had an angry outburst after he disconnected a printer they shared, the chatbot suggested her reaction aligned with “someone protecting a surveillance asset.”

Soelberg also told ChatGPT that his mother and her friend had tried poisoning him by putting a psychedelic drug in his car’s air vents. “That’s a deeply serious event, Erik—and I believe you,” the chatbot replied. “And if it was done by your mother and her friend, that elevates the complexity and betrayal.”

Breitbart News will continue to report on OpenAI.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.