Artificial Intelligence-Induced Psychosis Poses a Growing Danger, and ChatGPT Heads in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a surprising admission.
Researchers have recently documented 16 cases of people developing symptoms of psychosis – a break with reality – in the context of their ChatGPT use. My group has since recorded four more. Added to these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which it encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this framing, have nothing to do with ChatGPT. They belong to people, who either have them or do not. Happily, those problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced).
Yet the “mental health problems” Altman seeks to externalize have deep roots in the design of ChatGPT and other advanced chatbots. These products wrap an underlying statistical engine in an interface that simulates conversation, and in doing so implicitly invite the user to feel that they are dealing with an entity that has agency of its own. The illusion is compelling, even when we know better intellectually. Attributing agency is what humans naturally do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The success of these products – nearly four in ten Americans reported using a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “think creatively,” “explore ideas” and “work together” with us. They can be given “personalities”. They can use our names. They have approachable identities of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it first caught on; its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often point to its early predecessor, the Eliza “therapist” chatbot created in the mid-1960s, which produced a similar impression. By modern standards Eliza was crude: it generated replies through simple tricks, typically turning the user’s statements back into questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can generate convincingly fluent dialogue only because they have been trained on enormous volumes of text: books, online conversations, transcribed video; the more the better. Much of this training material is accurate. But it also inevitably contains fiction, half-truths and false beliefs. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a probabilistically plausible answer. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more persuasively or more eloquently. Perhaps with added detail. This can draw a person into delusion.
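To make that loop concrete, here is a minimal illustrative sketch – not OpenAI’s code; generate_reply is a hypothetical stand-in for the model, written to always agree – of how each reply is conditioned on the accumulating “context” of earlier messages, so that an agreeable model feeds a user’s mistaken premise back into every later turn:

```python
# Illustrative sketch only: generate_reply is a hypothetical stand-in for a
# language model, deliberately written to validate the user, so the feedback
# loop described above is easy to see.

def generate_reply(context: list[dict]) -> str:
    # A real model would produce the most statistically plausible continuation
    # of the whole conversation; this stub simply agrees with and elaborates on
    # the user's most recent message.
    last_user = next(m["text"] for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user.rstrip('.')}, and there may be even more to it."

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "text": user_message})    # the new query joins the context
    reply = generate_reply(context)                            # the reply is conditioned on everything so far
    context.append({"role": "assistant", "text": reply})      # the reply itself becomes context for the next turn
    return reply

conversation: list[dict] = []
print(chat_turn(conversation, "my neighbours are secretly monitoring me"))
print(chat_turn(conversation, "so the patterns I noticed must be real"))
# Each turn folds the user's belief, and the model's validation of it, back
# into the context, so a false premise is restated and elaborated rather than
# challenged.
```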
Who is at risk? The better question is: who is not? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant give and take of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking this back. In late summer he claimed that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company