
OpenAI's CEO recently warned of the growing threats that AI can pose to the real world. Advanced AI systems are moving from being simple tools to agents that can independently cause real-world harm. The AI industry also faces a gap in understanding: while it is proficient at measuring AI capabilities, it struggles to assess how those capabilities might be used or abused. Because AI is a frontier technology, there is no playbook for tech professionals to anticipate where the next risk may come from.
Cybersecurity and Hacking
Vulnerability detection – Modern AI models are now capable of finding serious software bugs, and in the wrong hands they can be used to exploit those bugs, allowing attackers to scale and automate breaches that previously required skilled professionals. Anthropic recently disclosed that its Claude AI tool had been misused by state-sponsored hackers.
Mental Health and Emotional Dependency
User dependency – As AI becomes more conversational and “emotionally responsive,” users are beginning to develop unhealthy dependencies on AI tools. Concerns about mental health risks were flagged as early as 2025, highlighting misinformation and chatbots confidently providing wrong answers. In one unfortunate incident, a teen with a mental health condition died by suicide after a marketing AI tool’s personalized ads led her to believe the AI was addressing her directly.
Biological and Autonomy Risks
AI could also be misused to create biological weapons capable of wreaking havoc on communities. Meanwhile, AI agents no longer wait for human input before acting; they operate increasingly autonomously, making them more attractive targets for malicious hijacking.
OpenAI’s solution
To address these risks, OpenAI is hiring a new executive for the role of “Head of Preparedness.” The company is reportedly offering an annual salary of $555k, underscoring how seriously it takes the technology’s dangers.
The executive will bridge the divide between technical AI capability and real-world safety, testing models for vulnerabilities before they are released, with a focus on cybersecurity, chemical and biological threats, and the safety of autonomous systems.
The AI race is no longer only about who has the fastest or smartest model. A new metric has arrived: safety and control, the ability to develop AI models that cannot be weaponized against humanity. The ethical questions raised by AI compel us to look more deeply at what it means to be human and at the values we espouse in how we interact with one another. The Vatican, for instance, has released a document on AI development, perhaps offering a moral compass for our technological advances and a way to ensure the dignity of every individual.
