Commentary: AI slurs are just the start of the backlash
Many people are already fed up with the “clankers”, says Catherine Thorbecke for Bloomberg Opinion.

TOKYO: You may not be familiar with the word “clanker”. It has its roots in the Star Wars franchise, but it has recently entered the Gen Z lexicon as a blanket derogatory term for artificial intelligence and robots.
In viral TikTok and Instagram videos, people use it to express their frustration with everything from the AI slop that has taken over the internet to chatbot hallucinations to outright dystopian applications of the technology.
Google Trends data shows search interest for “clanker” rocketed this summer, especially in Australia, China and the United States. I wouldn’t be surprised if it ends up becoming a word of the year for one of the major dictionaries.
Global backlash against AI has simmered within pockets of society, but it’s now gained enough momentum that the kids have a slur for it. And it’s showing no signs of slowing down.
It’s been a rocky week for the industry. A report from the Massachusetts Institute of Technology said that 95 per cent of firms surveyed have not seen returns on their investments in generative AI. It expands on earlier research from McKinsey indicating that while nearly eight in 10 companies are using generative AI, just as many report “no significant bottom-line impact”.
Global tech stocks were shaken. But the volatility doesn’t capture the nuances of the research or how early we are in the rush to implement the technology.
CONSUMER FATIGUE
Consumer fatigue has long been mounting. Studies have found that putting “AI” on marketing materials can actually turn off shoppers. And myriad public opinion polls indicate that we are increasingly concerned about its negative impacts.
Even in China, where government enthusiasm has made it harder to question the bullish narrative, not everybody is hopping aboard the hype train. Consumer traffic to DeepSeek has fallen precipitously since the reasoning model took the world by storm in January.
Not to mention that the pace of progress has slowed since the introduction of ChatGPT more than two years ago. DeepSeek has yet to release its R2 model, which had been expected in May. Its most recent updates have been incremental.
And the underwhelming reaction to OpenAI’s much-hyped GPT-5 release has emboldened many to loudly question how “intelligent” these machines actually are. Inevitable bottlenecks in access to new data and computing power, the building blocks of AI, mean that the clip of truly wow-inducing technical breakthroughs will be harder to sustain in the near-term.
In her book Empire of AI published in May, Karen Hao delivers a sharp teardown of the idea that the race to develop the technology is part of some inevitable, unstoppable evolution of humanity. “Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labour, computing power and natural resources,” she writes. It doesn’t have to be this way, she argues, and more organised resistance from creatives, workers and environmentalists is likely on the horizon.
NO TRANSFORMATION WITHOUT PUBLIC TRUST
There’s no doubt AI has the potential to transform many industries. But it cannot do so without public trust.
Tech companies would be wise to get ahead of the fatigue by focusing on practical applications rather than trying to build God-like computer systems or applying the buzzword to everything. Policymakers shouldn’t put off oversight until it’s too late and the collective ill will becomes impossible to ignore.
Still, the reality is that none of the public backlash will slow the roll of tech companies and investors who have poured billions into the promise of AI changing the world like the industrial revolution did.
But it does show that AI won’t be taking over everything, even if it seems so at times.
Don’t just take my word for it. After the so-called GPT-5 launch “fiasco”, OpenAI Chief Executive Officer Sam Altman met with reporters for a rare, wide-ranging interview. “I think people will care more about human-crafted content than ever,” he said. “My directional bet would be that human-created, human-endorsed, human-curated content all goes up in value dramatically.”
He’s correct. Many people are already fed up with the “clankers”.