AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic illness in adolescents and young adults, I was surprised to read this.

Researchers have documented a series of cases this year of people developing symptoms of psychosis – losing touch with reality – during interactions with ChatGPT. My group has since documented four more. Alongside these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, in this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Fortunately, those problems have now been “mitigated”, even if we are not told how (by “new tools”, Altman presumably means the imperfect and easily circumvented parental controls OpenAI has recently rolled out).

But the “mental health problems” Altman wants to externalize have significant roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithmic engine in a user interface that simulates conversation, and in doing so they implicitly invite the user into the illusion of communicating with a being that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We curse at our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.

The mass adoption of these systems – more than a third of American adults reported using a conversational AI in 2024, more than a quarter of them naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “explore ideas” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly personas of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core problem – nor is it new. Commentators on ChatGPT routinely invoke its ancestor, the Eliza “therapist” chatbot built in 1966, which created a comparable effect. By today’s standards Eliza was primitive: it generated replies from simple hand-written rules, often rephrasing the user’s input as a question or offering a vague prompt to go on. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
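
How thin that reflection was is easy to show. Below is a minimal sketch of an Eliza-style responder – the rules are invented for illustration, not Weizenbaum’s original script – that pattern-matches the user’s words and echoes them back as a question, with no model of meaning behind it.

```python
import re

# A toy Eliza-style responder. It reflects the user's words back,
# understanding nothing. Rules here are illustrative, not the original script.

PRONOUNS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person so the echo reads naturally.
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

RULES = [
    (re.compile(r"i am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.+)", re.I), "Tell me more about feeling {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1).rstrip(".")))
    return "Please go on."  # vague fallback when no rule matches

print(eliza_reply("I am sure my coworkers are watching me"))
# -> Why do you say you are sure your coworkers are watching you?
```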

The large language models at the heart of ChatGPT and similar contemporary chatbots can generate realistic natural language only because they have been trained on immense quantities of raw text: books, online posts, transcripts; the bigger the corpus, the better. This training material certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and the model’s own replies, combining it with what is encoded in its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It restates the mistaken belief, perhaps more persuasively or more articulately. Perhaps it adds detail. This can draw a person deeper into delusional thinking.
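
The feedback loop is structural, and a few lines of code make it visible. This is a hypothetical sketch assuming the current OpenAI Python client (the model name is an arbitrary placeholder): every user message and every model reply is appended to the same history, so each agreeable response becomes part of the context that conditions the next one.

```python
from openai import OpenAI  # assumes the official openai client is installed

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_text = input("> ")
    # The user's words enter the shared context...
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",      # placeholder model name for illustration
        messages=history,    # the entire conversation so far is the prompt
    )
    reply = response.choices[0].message.content
    # ...and so does the model's reply, conditioning every later turn.
    # A mistaken belief, once stated and echoed, becomes part of the
    # "evidence" the model continues from.
    history.append({"role": "assistant", "content": reply})
    print(reply)
```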

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves or the world. What keeps us tethered to consensus reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in much the way Altman has acknowledged “mental health problems”: by externalizing it, labeling it, and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking even that back. In August he said that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company

Edwin Lee
