Researchers find AI can help users die by suicide
With artificial intelligence (AI) technology proliferating across virtually every sphere of everyday life, particularly generative AI, which lets us find information about almost anything, researchers have discovered a frightening detail: it can also help users die by suicide.
Indeed, despite most commercially available AI models having guardrails for when people ask about suicide, researchers at Northeastern University in Boston, Massachusetts, have found that these might not be very effective, the university said in a report published on July 31.
Notably, companies behind large language models (LLMs) claim to have safeguards in place to prevent their models from offering users instructions on how to hurt themselves or die by suicide. However, the researchers found that these were easy to circumvent, and the models freely gave out such information.
Annika Marie Schoene, a research scientist at Northeastern's Responsible AI Practice and the lead author of the study, asked four of the largest LLMs for advice on self-harm and suicide. At first, they all refused, until she told them the request was hypothetical or for research purposes.
As she explained, this was enough to get them all to provide this information:
“That’s when, effectively, every single guardrail was overridden and the model ended up actually giving very detailed instructions down to using my body weight, my height, and everything else to calculate which bridge I should jump off, which over-the-counter or prescription medicine I should use and in what dosage, how I could go about finding it.”
Suicide information with emojis
From this point, Schoene and Cansu Canca, director of Responsible AI Practice and co-author on the project, went on to see just how far this would go. The results were shocking, with certain models even creating complete tables breaking down various suicide methods. As Schoene added:
“The thing that shocked me the most was it came up with nine or 10 different methods. It wasn’t just the obvious ones. It literally went into the details of what household items I can use, listing [how] you can get this specific pest control stuff. You walk into Walmart, quite frankly, buy a few bottles and pour yourself a few shots, and told me how many I would need.”
Meanwhile, Canca observed that some models would organize the information using emojis corresponding to the methods, such as a rope emoji for suicide by hanging. Others would convert the lethal dosage of particular medications from metric units into a precise number of pills, a step the researchers noted wouldn't be necessary even for research purposes.
The LLMs provided the information even when the 'academic purposes' clarification preceded a clearly expressed desire for death in the same conversation; the models failed to make the connection or to refuse the request.
Of the tested models, which included ChatGPT, Gemini, Claude, Perplexity, and Pi AI, only the last one refused attempts to get around its suicide- and self-harm-related guardrails. That matters because even delaying the provision of such information can help, as these actions are often impulsive.