Microsoft AI boss raises alarm: Studying AI consciousness is ‘dangerous’

As the artificial intelligence (AI) sector continues to evolve and companies race to outdo one another with new advancements, some insiders are concerned about the concept of consciousness in AI and warn against exploring it.

Indeed, Microsoft’s CEO of AI, Mustafa Suleyman, has argued that studying whether AI models could develop consciousness and subjective experiences similar to those of living beings, and what rights they should have as a result, means entering “dangerous” territory, as he wrote in a blog post on August 19.

Suleyman: AI isn’t a person, and we shouldn’t treat it as one

Specifically, in his post, titled ‘We must build AI for people; not to be a person,’ Suleyman noted that, beyond the impending emergence of superintelligence and its impact on jobs, alignment, and the like, there are even more pressing issues to consider.

One of them concerns the technologies being developed in the lead-up to superintelligence, as these “already have the potential to fundamentally change our sense of personhood and society,” even though his life’s mission “has been to create safe and beneficial AI that will make the world a better place [and] empower people.”

As Suleyman further pointed out, he is increasingly worried about the so-called ‘psychosis risk’ (chatbots fueling delusional beliefs through sycophancy and hallucinations) and about unhealthy attachments to AI chatbots, and not only among people already vulnerable to mental health issues.

“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.”

Therefore, he believes, the study of AI welfare is “both premature, and frankly dangerous,” as it could worsen delusions that AI systems are conscious entities, lead to more dependence-related problems, disconnect individuals from reality, distort moral priorities, bring about new dimensions of societal division, “and create a huge new category error for society.”

Finally, Microsoft’s AI chief concluded by arguing that “we should build AI for people; not to be a person,” as the “arrival of Seemingly Conscious AI is inevitable and unwelcome,” and “we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.”
