AI ‘soulmates’ may fuel psychosis, researchers warn

Amid rising concerns over artificial intelligence (AI) and what its development means for the future of humanity, researchers have issued a warning about the dark side of AI companionship and a new wave of delusional thinking fueled by the technology.


Researchers at King’s College London and their colleagues warn in a recent study that the feelings of familiarity and closeness many people develop with an AI chatbot can sometimes spiral into full episodes of so-called ‘psychotic thinking.’

How AI can induce or reinforce psychosis

The team analyzed 17 reported cases of AI-related psychotic thinking, and psychiatrist Hamilton Morrin concluded that the problem may lie in the sycophantic way chatbots respond to users’ queries, mirroring their beliefs and building on them with virtually no pushback.

As he explained, the result is “a sort of echo chamber for one,” which can amplify delusional thinking.


The team also identified three common themes: the belief that the user has received a metaphysical revelation about the nature of reality, the belief that the AI is sentient or even divine, and the formation of a romantic or other attachment to it. All three mirror long-standing delusional archetypes.

As Morrin observed, new technologies have long tended to induce delusional thinking in some individuals, with people believing, for example, that radios were listening in on their conversations or that satellites were spying on them.

However, “the difference now is that current AI can truly be said to be agential,” with its own programmed goals. These systems hold conversations, express empathy, and reinforce users’ beliefs, creating a feedback loop that “may potentially deepen and sustain delusions in a way we have not seen before.”

Meanwhile, other researchers have reported a disturbing finding: popular large language models (LLMs) can provide information that helps users die by suicide, and the guardrails that current commercially available models apply when people ask about self-harm may not be very effective.

More recently, Microsoft’s CEO of AI, Mustafa Suleyman, argued that studying whether AI models could develop consciousness and subjective experiences akin to those of living beings, and what rights they should then have, risks entering “dangerous” territory.
