AI Could One Day Spark Demands For Rights, Citizenship: Microsoft's Mustafa Suleyman
Suleyman used the term 'psychosis risk' to describe the dangers of users forming deep, and at times delusional, attachments to artificial intelligence systems.

Mustafa Suleyman, chief of Microsoft AI, has raised concerns about the way people perceive artificial intelligence systems. He has warned that AI systems' lifelike qualities may lead some to believe they are conscious beings.
In a blog post published on Aug. 19, Suleyman said that interacting with advanced AI models can feel “highly compelling and very real,” a phenomenon he fears could blur the line between simulation and reality.
“The experience of interacting with an LLM is by definition a simulation of conversation. But to many people it's a highly compelling and very real interaction, rich in feeling and experience,” he wrote. “Concerns around ‘AI psychosis’, attachment and mental health are already growing. Some people reportedly believe their AI is God, or a fictional character, or fall in love with it to the point of absolute distraction.”
Suleyman used the term “psychosis risk” to describe the dangers of users forming deep, and at times delusional, attachments to AI systems. “I’m growing more and more concerned about what is becoming known as the ‘psychosis risk’, and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues,” he wrote.
“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention,” the Microsoft AI chief added.
To prevent this, Suleyman has urged the industry to establish strong ethical boundaries. “We must build AI for people; not to be a digital person,” he said.
Suleyman emphasised that while his ambition is to create AI companions that are supportive and beneficial, there are also clear lines that should not be crossed. “I’m fixated on building the most useful and supportive AI companion imaginable. But to succeed, I also need to talk about what we, and others, shouldn’t build.”
Public perceptions of AI are already shifting rapidly. Research by EduBirdie, published in April 2025, found that one in four Gen Z users surveyed believe AI systems are already conscious. A further 52% did not believe AI is conscious today but said they expect it to develop consciousness in the future.
OpenAI chief executive Sam Altman has voiced similar concerns about emotional dependence on AI. In a recent post on X, he said the bond people form with models “feels different and stronger than the kinds of attachment people have had to previous kinds of technology.” Altman warned that while most users can separate role-play from reality, “if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that.”
He acknowledged that many people now use ChatGPT as a “therapist or life coach,” often with positive results, but cautioned against scenarios where users unknowingly compromise their long-term well-being.
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly…
— Sam Altman (@sama) August 11, 2025
As people turn increasingly to AI chatbots, questions arise about how much space the technology should occupy in our lives. Chatbots can be helpful, but it is important to maintain a balance between digital support and real human connection.