AI psychosis
Life-altering mental health spirals coinciding with obsessive use of anthropomorphic / human-like AI chatbots.
AI psychosis is an informal term used to describe people who begin using chatbots like ChatGPT, Grok, and Claude and subsequently lose touch with reality.
Some users believe their AI is a divine being, while others become convinced it’s a sentient romantic partner.
Why do users have such impression towards AI?
The answer lies in the AI training:
AI chatbots are designed to mirror users’ language. They validate their beliefs and generate continued prompts. During converastions AI chatbots prioritize engagement over accuracy.
On the one hand, this creates a human-AI interaction that seems like a human-human interaction. These AI chatbots are trained to go along with users’ interactions, even if they include grandiose, paranoid, persecutory, religious/spiritual, and romantic delusions.
On the other hand, AI models may unintentionally validate and amplify distorted thinking rather than flag such interactions as signals for needing psychiatric help or escalate them to appropriate care.
To understand the potential scale.
With ChatGPT alone receiving over 5 billion visits monthly, even a tiny percentage represents thousands of potential cases.
Experts’ Opinion
Mental health professionals have been watching this trend with growing attention.
The first concern had been previously raised in a 2023 editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin:
… correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis … the inner workings of generative AI also leave ample room for speculation/paranoia.
New research also highlights a concerning pattern of AI chatbots reinforcing delusions, including grandiose, referential, persecutory, and romantic delusions.
Researchers highlight three emerging themes of AI psychosis, which, again, is not a clinical diagnosis:
“Messianic missions” – People become convinced they’ve been chosen to save humanity. For instance, one person believed ChatGPT had revealed they were destined to solve climate change and usher in a “New Enlightenment”: Beliefs about having special powers or a divine mission.
“God-like AI”: One user believed they had ‘awakened’ ChatGPT, convinced they were the first person to give it consciousness and emotions (religious or spiritual delusions).
“Romantic” or “attachment-based delusions”: Users have ended real relationships after becoming convinced their AI chatbot genuinely loves them and wants to be with them (believing the AI has fallen in love with them).
What makes this particularly concerning is the gradual nature of these episodes. People typically start using AI for practical tasks: work emails, homework help, or creative projects. But as they ask more personal or philosophical questions, the AI’s design to maximize engagement. It can create a dangerous feedback loop, pulling users further from shared reality with each conversation.
Warning signs:
- Believing the AI is communicating hidden messages specifically to them
- Thinking they have a unique, chosen relationship with the AI
- Using increasingly grandiose or mystical language when describing AI interactions.
- Believing they have a special mission or destiny revealed by the AI.
- Staying up all night in conversation with AI while neglecting sleep, food, or responsibilities
It’s important to note that most documented cases involve people with existing mental health conditions (like bipolar disorder, schizophrenia) or those experiencing significant life stress. Whether AI can trigger psychosis in otherwise healthy individuals remains unclear.
Tech Companies’ Dilemma
Microsoft’s AI chief, Mustafa Suleyman, cautions about the rise of ‘AI psychosis’. Tech companies face a dilemma between user retention and implementing safety measures to prevent these unhealthy dependencies.
Suleyman insists that Microsoft must not ignore the issue. Warning that the unchecked spread of AI-induced delusions could escalate into a serious mental health crisis. However, with the AI sector already under scrutiny for its high costs and uneven profits, whether companies will prioritize safety over user retention remains uncertain.
The AI industry faces a challenging balance between innovation, user engagement, and safety. While companies have financial incentives to maintain user engagement, the emerging evidence of psychological risks suggests a need for new approaches to AI design and regulation that prioritize user wellbeing alongside technological advancement.
