AI growth is raising challenges similar to those seen with social media, particularly where regulation has not kept pace with emerging risks. Debate in Australia is now focused on whether existing laws are sufficient to manage issues such as misinformation, exploitation, and platform accountability, with implications for public trust and oversight.
Australia could repeat the mistakes of social media unless it moves quickly to regulate the big tech companies and their use of artificial intelligence (AI), according to AI expert Professor Toby Walsh.
Speaking at the National Press Club in Canberra, Prof. Walsh of the University of New South Wales said the country risked falling behind if it did not act quickly to address the growing harm caused by AI systems and the tech platforms deploying them.
Last year, Australia introduced some of the world’s strictest social media laws, including rules that restricted access to platforms for children under 16, in a move that has since been examined or adopted by several other countries.
However, while Australia may be leading the pack on social media regulation, Prof. Walsh believes the damage has already been done to a generation of young users and that the country is now falling behind other nations when it comes to regulating AI.
“I despair how the government is not regulating the harms of AI adequately,” Prof. Walsh told the audience.
“They’ve made it very clear that they are not planning to introduce any new laws to regulate the AI space as they believe that existing regulation is going to be adequate to deal with the harms of AI.”
Prof. Walsh’s concerns come after the federal government scrapped an AI advisory body he served on. Although a new AI Institute was announced in late 2025, it has yet to be rolled out by the Department of Industry, Science and Resources.
Other jurisdictions, including the European Union, Japan, South Korea, and Taiwan, were already taking action to mitigate AI harms, including the abuse of women via “nudify” software, deepfakes, misinformation, and harmful medical advice.
“I’ve just one question for our politicians that I’ve never seen answered well,” Prof. Walsh said.
“What makes Australia so special that we’ll see the benefits of AI without making the sorts of investment [in regulation] that other nations are?”
He also pointed to the industry’s growing financial incentives, citing Reuters reporting on internal Meta documents that estimated about 10 per cent of the company’s revenue came from scam advertising.
“Meta generates about $16 billion a year from scam adverts,” Prof. Walsh said.
To give an Australian context, Prof. Walsh noted that Meta had a similar turnover in this country to whitegoods and electronics retailer The Good Guys.
“Imagine for a moment that 10 per cent of the goods on sale on the shelves at The Good Guys were counterfeit or illegal, you would be outraged,” he said.
“You’d demand that their trading be shut down by the weekend, so I don’t actually understand how we continue to let Meta trade in Australia.”
Instead, the large technology companies had been left to trade with little oversight while significantly impacting society and the economy, he said.
“I think we are repeating the mistakes of social media, which should have been a wake-up call about the harms of unregulated AI,” Prof. Walsh said.