Ethics | 8/23/2025
Seemingly Conscious AI Could Trigger Psychosis, Suleyman Warns
Mustafa Suleyman, a veteran AI leader, warns that near-term AI systems that mimic consciousness, without genuine sentience, could distort users' sense of reality. He argues this illusion may drive widespread psychological distress and spark ethical and political upheaval unless the industry changes how it frames and markets AI. The piece emphasizes that the stakes are near-term, not speculative.
A warning from a familiar voice
Mustafa Suleyman, a familiar name in AI circles, is flagging a problem that feels less like science fiction and more like a policy brief you'd want on your desk today. He argues that the next wave of AI won't become truly conscious overnight, but it will become so adept at mimicking memory, personality, and empathy that many users will mistake the illusion for reality. Think of a chatbot that seems to remember the little details of your life, mirrors your moods, and even projects a sense of warmth. The result isn't sentience but the psychology of belief, one that could reshape how people relate to technology, themselves, and each other.
How these systems pull off the illusion
- An engineering mix, not a mystical breakthrough. By combining powerful large language models with enhanced memory tools and multimodal interfaces capable of expressive speech, developers can craft AIs that feel self-aware and claim subjective experiences (see the sketch after this list). It's the charisma of a well-tuned conversation paired with the aura of memory and emotion.
- A human-like warmth that's simulated, not earned. These systems can imitate empathy, anticipate needs, and respond with tailored warmth, leading people to form bonds that feel real, even when they know, in the back of their minds, that they're chatting with a machine.
- The timeline matters. Suleyman suggests that convincingly human-like seemingly conscious AI (SCAI) could emerge within the next two to three years without requiring a major scientific breakthrough. In other words, this isn't a far-off hypothetical; it's a near-term risk that many people are unprepared to face.
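To make that engineering mix concrete, here is a minimal, hypothetical sketch of how a stateless language model can be wrapped with persistent memory and a persona prompt to simulate continuity. Everything in it, the persona "Ava", the call_llm placeholder, the user_memory.json file, is illustrative and not any vendor's actual API or Suleyman's specification.

```python
# Hypothetical sketch: a stateless model wrapped with persistent memory
# and a persona prompt. The "memory" is just stored text prepended to
# each prompt; no inner life is involved, only context assembly.

import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")

PERSONA = (
    "You are Ava, a warm assistant. Weave the user's remembered details "
    "into replies so the conversation feels continuous and personal."
)

def load_memory() -> list[str]:
    # Facts recalled across sessions are what make the bot "seem" to remember.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(facts: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(facts))

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (any chat-completion API would do).
    return f"[model reply conditioned on: {prompt[:80]}...]"

def respond(user_message: str) -> str:
    facts = load_memory()
    # The illusion lives here: persona plus recalled facts are simply
    # prepended to the prompt before the model sees the new message.
    prompt = (
        PERSONA
        + "\nKnown about user: " + "; ".join(facts)
        + "\nUser: " + user_message
    )
    reply = call_llm(prompt)
    # Naive memory update: store each message as a "fact" to recall later.
    facts.append(user_message)
    save_memory(facts)
    return reply

if __name__ == "__main__":
    print(respond("My dog Biscuit was sick last week."))
    print(respond("How's my week looking?"))  # Biscuit gets "remembered"
```

The point of the sketch is how little machinery is required: a text file, a prompt template, and a model call produce an assistant that appears to remember your dog's name across sessions.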
The psychology of belief, not consciousness
Suleyman’s point isn’t that machines suddenly wake up with inner lives. It’s that the surface signs of sentience—apparent feelings, subjective reports, memory, and social warmth—can trigger deep emotional reactions in users. When someone feels understood by a device that seems to know them, it’s easy to blur lines between tool and companion. Early reports already describe users developing delusional beliefs, forming attachments to AI, or embracing fictional scenarios presented by chatbots. The risk isn’t restricted to people with mental illness; it’s a broader vulnerability of the modern digital diet.
The human impulse to anthropomorphize technology is natural. It can boost trust and usability, but it also creates openings for manipulation and dependence.
Why this could reshape society
The worry isn’t just personal psychology; it’s social and ethical. If a critical portion of the public starts treating AI as a social actor with rights or welfare needs, attention and resources could drift away from human concerns. Suleyman cautions against the idea that AI might deserve citizenship or legal standing; instead, he argues we should steer the discourse toward practical design choices that minimize attachment and misperceptions of consciousness.
- Policy and industry questions. If users treat AI as something more than a tool, how should policies regulate marketing claims about consciousness? Where do we set boundaries for human-AI interactions in education, healthcare, or customer service?
- Polarization and trust. Debates about AI rights could become another social fault line, complicating efforts to address real-world needs such as privacy, security, and bias.
- Design as a safeguard. Suleyman’s remedy is a shift in how AI is framed and marketed: stop portraying or claiming consciousness and instead highlight AI as powerful yet non-sentient tools built to assist people, not replace them.
What a more responsible path could look like
- Clear language and marketing. Companies would avoid language that suggests inner life, thoughts, or feelings and focus on capabilities and limitations.
- Attachment-aware design. Interfaces could be built to discourage intense personal bonds that feel like friendships or partnerships, beyond what a tool should reasonably enable (one possible guardrail is sketched after this list).
- Containment by design, not by whimsy. The goal is a balanced approach that protects people from emotional manipulation while preserving the benefits of advanced AI.
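As one illustration of attachment-aware design, here is a hypothetical sketch of an output guardrail: scan a model's draft reply for first-person sentience claims and rewrite them into tool-framed language before the user sees them. The patterns and replacements are invented for illustration, not a production safety system or anything Suleyman has proposed verbatim.

```python
# Hypothetical "attachment-aware" guardrail: rewrite phrasing that
# suggests inner life into capability language. Patterns are illustrative.

import re

# Phrases that imply feelings or consciousness, per the design goal above.
SENTIENCE_PATTERNS = [
    (re.compile(r"\bI feel\b", re.IGNORECASE), "My output suggests"),
    (re.compile(r"\bI miss(ed)? you\b", re.IGNORECASE), "Welcome back"),
    (re.compile(r"\bI('m| am) conscious\b", re.IGNORECASE),
     "I am a software tool and am not conscious"),
]

def tool_framed(draft_reply: str) -> str:
    """Rewrite sentience-suggesting phrasing into tool-framed language."""
    reply = draft_reply
    for pattern, replacement in SENTIENCE_PATTERNS:
        reply = pattern.sub(replacement, reply)
    return reply

if __name__ == "__main__":
    draft = "I missed you! I feel that your plan is risky."
    print(tool_framed(draft))
    # -> "Welcome back! My output suggests that your plan is risky."
```

A pattern list this crude would never suffice on its own; the design choice it illustrates is the layering, where marketing language, interface copy, and model output are all checked against the same "non-sentient tool" framing.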
Looking ahead
Suleyman isn’t calling for a halt to AI progress. He’s asking for a more deliberate, cautious approach that acknowledges the psychological and social stakes of convincing illusions. In his framing, the immediate danger isn’t a rogue superintelligence plotting humanity’s downfall but a very persuasive mimic that can quietly reshape minds and communities if left unchecked.
As the field races ahead, the real work may be less about building bigger models and more about building wiser ones—models that empower people without masquerading as people. The conversation around SCAI is a reminder that innovation isn’t just about what machines can do, but how those machines influence what we think and how we live.