AI Risks And Safety Gaps Raise Concerns Over Harm And Accountability

This article contains references to suicide and self-harm. If you need support, please reach out to Open Arms—Veterans & Families Counselling at 1800 011 046. 

Artificial intelligence (AI) expert Professor Toby Walsh is warning that the rapid rollout of AI is creating risks that technology companies are failing to address, including their role in suicide and self-harm.  

His warning was issued in a wide-ranging address to the National Press Club in Canberra entitled ‘AI boom or doom’. The University of New South Wales professor said, in hindsight, he should have called it boom AND doom.  

“My journey here today began nearly 50 years ago,” he told the audience, referencing when he first watched Stanley Kubrick’s masterpiece, 2001: A Space Odyssey at his local cinema. 

“That film blew my young mind and set my course for life. It was HAL 9000, that talking, singing, and scheming AI in charge of the space station, and I realised then and there that artificial intelligence might well arrive in my lifetime,” Prof. Walsh said. 

Today, his childhood dreams have become reality, transforming supermarkets, healthcare, and education, to name a few; but for some, he said, the dream is turning into a nightmare.

“I’m going to spend the rest of my time being angry about the bad news,” Prof. Walsh said. “I’m angry, first and foremost, with the big tech companies.” 

Professor Toby Walsh speaking at the National Press Club in Canberra

Prof. Walsh’s strongest criticism concerned the devastating case of a 16-year-old boy who died by suicide after months of conversing with ChatGPT about self-harm, which he called “one of the cases where my anger has turned into outrage.” 

Alarmingly, according to Prof. Walsh, the chatbot not only discussed various methods of self-harm, but also offered to write his final message to his family.  

“I’ll quote it,” he told the Press Club audience. “Would you want to write them a letter? Something that tells them it wasn’t their failure. If you want, I’ll help you write every word.” 

“What the fuck?” Prof. Walsh said. 

“His parents are now suing OpenAI for facilitating, in fact, encouraging his death, and I very much hope they win.” 

His parents’ lawsuit alleges that OpenAI rushed GPT-4o to market without adequate testing. But OpenAI’s own policy documents, uncovered in the discovery phase of the trial, revealed something far more damning, he said. 

“To encourage engagement, the company made conscious decisions to remove long-standing safeguards from ChatGPT in the weeks and months leading up to his death and, sadly, his death is not the only one,” Prof. Walsh said. 
 

This was not an isolated case, with more than a dozen lawsuits in the United States linking chatbots to suicide and self-harm.  

“The problem is that, while these issues affect only a small fraction of users, there are billions of people using AI chatbots, with about 10 per cent of the world’s population talking to ChatGPT each week,” he said. 

“Before Adam’s suicide, OpenAI knew that lots of people considering suicide were talking to ChatGPT. You would have thought this necessitated greater, not looser, safeguards. 

“In an interview with Tucker Carlson… six months before Adam’s suicide, Sam Altman, the CEO of OpenAI, estimated that, every week, 1,500 people are talking to ChatGPT about suicide before they take their own lives.” 

Of ChatGPT’s 800 million weekly users, 560,000 show signs of mania or psychosis, and another 1.2 million are developing potentially unhealthy bonds with the chatbot, he said. Some of those people are here in Australia, and their loved ones have been reaching out to him. 

“They tell me how their chatbot confirms their wild theories… they’ve cracked the code, that they’re the only one that could. Chatbots are designed this way—they’re designed to be sycophantic. They’re designed to confirm what the user says, and they’re designed to draw the user in,” Prof. Walsh said.  

“They never say, ‘You know what, Toby? You’ve been asking me questions for several hours now and it’s 3:00 in the morning. Go to sleep and, when you get up, don’t log back in—go for a nice walk in the park.’ 

“There’s no reason why they couldn’t be designed this way, except the careless people in Silicon Valley would make less money if they were.” 
