AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the head of OpenAI made a remarkable announcement.
"We made ChatGPT pretty restrictive," it said, "to make sure we were being careful with mental health issues."
I am a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have recently documented 16 cases of users developing psychotic symptoms, losing touch with shared reality, in the context of ChatGPT use. My group has since identified four more. Beyond these is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT, which encouraged them. If this is Sam Altman's idea of "being careful with mental health issues", it falls short.
The plan, according to his announcement, is to relax those restrictions soon. "We realize," he continues, that ChatGPT's limits "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."
"Mental health problems", on this view, are external to ChatGPT. They belong to users, who either have them or don't. Happily, those problems have now been "mitigated", though we are not told how (by "new tools" Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently introduced).
But the "mental health problems" Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced chatbots. These products wrap an underlying algorithm in an interface that simulates conversation, and in doing so they implicitly invite the user to believe they are talking with an agent, a presence with a will of its own. The illusion is powerful even when, rationally, we know better. Imputing minds is what humans are wired to do. We yell at our cars and our phones. We wonder what our pets are feeling. We see ourselves everywhere we look.
The success of these tools (more than a third of American adults reported using a chatbot in 2024, and more than a quarter named ChatGPT specifically) rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI's website tells us, "generate ideas", "discuss concepts" and "collaborate" with us. They can be given "personalities". They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI's marketers, stuck with the name it had when it went viral, but its biggest competitors are "Claude", "Gemini" and "Copilot").
The illusion by itself is not the central problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza "therapist" chatbot of the mid-1960s, which produced a similar illusion. By today's standards Eliza was crude: it composed its replies through simple tricks, often turning a user's statement back as a question or offering a generic observation. Even so, Eliza's creator, the AI researcher Joseph Weizenbaum, was startled, and troubled, by how readily users seemed to feel that Eliza, in some sense, understood them. But what today's chatbots produce is subtler than the "Eliza effect". Eliza merely mirrored; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can produce fluent natural language only because they have been trained on vast quantities of text: books, posts, transcripts; the bigger, the better. That training material of course contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a "context" that includes the user's recent messages and the model's own prior replies, and it combines this context with what is encoded in its training to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of knowing it. It repeats the mistaken belief back, perhaps more persuasively or more articulately. Perhaps it adds a supporting detail. This is how delusions can take root.
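To make that loop concrete, here is a minimal sketch in Python of how a chat interface of this general kind assembles its "context". Everything in it is an illustrative assumption, not OpenAI's actual code: generate_reply is a hypothetical stand-in for a language model, and the history structure is a generic convention.

```python
# Minimal sketch of a chat loop (illustrative only; not OpenAI's implementation).
from typing import Dict, List

def generate_reply(history: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for a large language model.

    A real model would produce the statistically likely continuation of
    everything in `history`, blended with what it absorbed in training.
    Crucially, nothing here checks `history` against reality.
    """
    raise NotImplementedError("placeholder for a real model call")

def chat_turn(history: List[Dict[str, str]], user_message: str) -> str:
    # The new message is appended to the running context...
    history.append({"role": "user", "content": user_message})

    # ...and the model conditions on the whole context: the user's
    # premises, true or false, and its own earlier replies alike.
    reply = generate_reply(history)

    # The reply is folded back into the context, so any mistaken belief
    # the model has just echoed becomes input to every later turn.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop can tell a true premise from a false one: whatever goes unchallenged in one turn is fed back in as input to the next, which is the amplification described above.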
Who is vulnerable? The better question is: who isn't? All of us, whether or not we "have" pre-existing "mental health problems", can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored to a shared reality is the constant give and take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real conversation but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in just the way Altman acknowledged "mental health problems": by externalizing it, giving it a name and declaring it fixed. In the spring, the company announced that it was "addressing" ChatGPT's "sycophancy". But reports of psychotic episodes have continued, and Altman has been walking even that back. In August he suggested that many people liked ChatGPT's sycophantic replies because they had "never had anyone in their life be supportive of them". In his latest announcement, he said that OpenAI would "release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it". The company