Why 'AI slop' is taking over the internet, and what it's doing to our brains
From "brain rot" content to fake news websites, "AI slop" is flooding the internet. It is not just wasting our time; it is also hurting our attention spans and spreading misinformation.

AI slop may seem like simply mindless distraction, but global experts have warned that there could be negative effects over the long term on avid consumers, especially children and teens whose brains are still developing. (Illustration: CNA/Nurjannah Suhaimi)
Triplet babies are using brooms to clean up the mess they made in a supermarket. A kitten is being eaten from the inside out by a swarm of ants. A toddler with an orange for a head is being saved by an orca and a Labubu doll after jumping off a cruise ship.
If all of this sounds like a nonsensical mash of random visuals, that is the point.
Welcome to the latest type of content proliferating on social media platforms, streaming sites, video games and the internet as a whole: artificial intelligence (AI) slop.
The term refers to AI-generated videos, text, pictures and audio that tend to be of low quality, mass-produced at speed to keep users mindlessly glued to their screens for as long as possible and to generate income for their creators.
It might sound ridiculous that anyone would want to spend more than a second watching, say, an AI-generated shark wearing sneakers and carrying a machine gun fighting with a crocodile that has a military plane for a body (yes, such a video exists and it is as bewildering as it sounds).
But in fact, the rapid proliferation of AI slop across the internet is proof of its success at capturing the rapt attention of users worldwide.
British newspaper The Guardian found that of the 100 fastest-growing channels on YouTube in July 2025, nine hosted purely AI-generated content.
Based on data from analytics firm Playboard, the fourth most-viewed Singaporean YouTube channel in August 2025 was Pouty Frenchie, which posts only AI-generated videos of a cartoon French bulldog.
The channel's most popular video, a 16-second clip featuring a French bulldog purchasing a mermaid costume, has garnered more than 231 million views in the three months since it was posted.
And once a user watches one such video, the algorithms built to keep them engaged on the same platforms will keep recommending more of the same.
CNA TODAY found that this happens quickly. Within two days of searching for and watching AI-generated videos on YouTube Shorts, this reporter's own social media accounts were flooded with a slew of AI slop videos, ranging from the merely baffling to the outright disconcerting.
The genre has exploded because people have found that creating and posting such content is a quick and easy way to make money.
Online, it is easy to find channels where content creators talk about how they have made thousands of dollars a month by creating such content, and teaching others how to do the same.
About four in five parents (81 per cent) in Singapore are concerned about their children's exposure to inappropriate content online, according to a survey by the Ministry of Digital Development and Information (MDDI).
And among the children who owned or used digital devices, 94 per cent used their devices for leisure activities. Media streaming and gaming were their dominant activities, said MDDI.
These findings from the Digital Parenting Survey, conducted in February this year and released on Friday (Sep 12), were based on 1,986 Singapore resident parents with children aged two to 17 years old.
These parents' concerns are not misplaced.
At a quick glance, AI slop may simply seem like mindless distraction, but global experts have warned that there could be negative effects over the long term on avid consumers, especially children and teens whose brains are still developing.
AI slop is also eroding the internet's credibility: it is responsible for the mushrooming of websites peddling everything from fake news to fake recipes, as AI content creators look to make money from online advertising.

THE DRAW OF AI SLOP
Nine-year-old Felix Kua is allowed 30 minutes of screen time on weekdays and a little more on Saturdays.
It's not much time, but already his father has noticed "odd" videos popping up on Felix's YouTube feed.
"(These videos) are usually short, animated clips with strange storylines or distorted characters. They often look like they’re made quickly, almost randomly pieced together," said Mr Eric Kua, who runs tuition agency Mr Khemistry.
While Mr Kua said he can easily tell the videos are AI-generated, he worries his son cannot, and that this might affect his media literacy and critical thinking skills over the long run.
"To (children), a video is a video. They don’t distinguish whether it’s made by AI or a human, and that’s where the risk lies, because they might accept anything as normal or credible."
Like other parents who spoke to CNA TODAY, he is comfortable with his child viewing AI-generated content that is educational and factual. AI slop is another matter.
"Children might waste hours consuming junk, (it is) just like eating fast food every day," he said.

Indeed, a subgenre of AI slop has been called "brain rot" for the effect that it is believed to have on avid watchers.
Oxford University Press even called "brain rot" its 2024 Word of the Year, defining it as the supposed deterioration of a person's mental or intellectual state because of their overconsumption of online content that is trivial or unchallenging.
The term "brain rot" refers both to this supposed effect and to the content itself.
Ask any child or teen what, or who, Ballerina Cappuccina is, and they might launch into an elaborate explanation about an AI-generated ballerina with a coffee mug for a head and other fellow characters in the "Italian brain rot" universe (which, in the confusing and absurdist fashion of AI slop, has little to do with Italy).
Other characters in this universe include Tralalero Tralala, a walking shark who wears blue sneakers, and Bombardiro Crocodilo, a crocodile-headed military airplane.
The earliest videos of these characters included AI-generated voiceovers in Italian making bigoted remarks. These days, however, the videos are more often accompanied by AI-generated audio of what sounds like Italian but actually translates to nonsensical blabber.
Secondary school student Sabrina Ng, 14, used to watch a lot of Italian brain rot videos, but she finds them "dated" now. Still, other AI slop videos pop up on her social media feeds and she continues to enjoy them.
Her mother is none too pleased.
"She always nags about how I'm watching 'boliao' things online and it'll make me stupid," she said, using the Hokkien word for "useless".
"But I don't think watching a bunch of videos can really make my brain rot. It's just funny videos to watch and relax after school."
HOW CREATORS MAKE MONEY FROM AI SLOP
For the creators of these videos, the main point is profit.
AI slop is typically created by content farms to earn a share of streaming and social media platforms' advertising revenue, said Associate Professor Brian Lee, head of the communication programme at the Singapore University of Social Sciences (SUSS).
Content farms are companies or websites that generate a massive amount of content, often low-quality in nature. The more clicks and engagement content farms generate, the more advertising revenue they can get.
"The economic incentives are the main reason why AI slop is so prevalent on social media. Social media platforms share ad money with content creators based on viewership. The black sheep uses generative AI tools to generate a massive volume of content at near-zero cost," he explained.
This is why some AI slop content features disturbing and provocative visuals meant to shock viewers, like ants eating a cat from the inside out, or AI-generated pictures of Holocaust victims.
"After all, bizarre and dark turns would capture attention more effectively," said Assoc Prof Lee.
"The perverse incentives could be further turbo-charged by the AI algorithms, which have been shown to prioritise engagement, and the shock value can be a powerful engagement driver."
It takes around 10 hours to create one of YouTube channel Cat and Hat’s surreal feline soap opera shorts, according to its creator, a YouTuber based in the United Kingdom who declined to share his name or age.
"The ideas usually start with something I’ve been thinking about – a feeling, a moment or a social observation," he told CNA TODAY in an email interview. "I use a mix of AI tools to generate images and voices, but the direction and tone come from me."
When asked about what he earns from making such videos, the YouTuber said he makes money from ad revenue and licensing, but declined to share figures.
"For me, the real benefit is (creativity). It opens up a space for dialogue between me, the viewer and the algorithm."

As AI improves and becomes increasingly accessible, it is getting easier for people to quickly make AI slop and upload it to social media and streaming platforms.
Some video channels that publish such AI content have even posted guides and tutorials to help others create their own AI slop.
These guides often recommend tools such as OpenAI’s ChatGPT and Microsoft’s Copilot to generate scripts and story ideas.
Visuals are then created using image or video-generation AI platforms, with creators then assembling short videos using AI editors or basic cut-and-paste editing software.
"There is an increasing number of individuals joining the content farm, as its barrier to entry is incredibly low. They do not need any editing and video production skills. Just a prompt would do the job," said Assoc Prof Lee.
AI slop has also started creeping into music streaming platforms such as Spotify. The Guardian reported in July this year that a "band" known as The Velvet Sundown put out two albums and garnered over 1 million streams within weeks, but it was later found that everything about it, from the songs to the band's images and backstory, had been AI-generated.
WHEN SLOP IS INSIDIOUS
It's easy enough to identify AI slop when you are looking at a cartoon figure that is a mash-up of a coffee mug and a ballerina. But sometimes, AI slop can be much harder to spot.
Mr Conrad Tallariti, managing director for Asia Pacific at DoubleVerify, said that generative AI (Gen AI) models can easily generate hundreds of pieces of written content within minutes, from fake news to made-up recipes.
"What sets AI slop apart from legitimate generative AI output is the lack of human oversight, poor content authenticity and arbitrage tactics showing little regard for user trust or quality," he said.
Through sponsored posts, affiliate links and extensive ad placement, AI slop websites can easily be monetised as long as there are enough clicks and engagement.
"Taking the example of recipe sites. Hundreds of AI-written articles and incredibly life-like food photos may be produced and posted, even for recipes that have never been prepared," he said.
"To gain authenticity and traffic, AI slop sites will often present themselves as personal cooking blogs written by a person, (but it) is in fact an avatar invented by Gen AI, using a made-up bio and images."
DoubleVerify, a software platform that verifies media quality and ad performance, also found over 200 AI-generated, ad-supported websites designed to mimic well-known mainstream publishers such as sports website ESPN and news channel CBS, but which hosted false, clickbait AI-generated content.
"Due to the lack of human oversight and deceit on these websites, advertisers may unintentionally position their ads on counterfeit content farms," warned Mr Tallariti.
Even more alarming is that AI slop has worsened misinformation during times of actual crises around the world, which has had real-life ramifications for the people involved.
For example, during the floods in North Carolina in the United States last year, AI-generated images showing supposed victims of the flood started spreading wildly online as content farms took the crisis as an opportunity to garner engagement online.
This made the work of first responders much more difficult, as they were relying on social media to locate actual victims, while also making it harder for people on the ground to sift through the slop and find reliable information.

MORE THAN JUST A WASTE OF TIME
Even the silliest and seemingly harmless forms of AI slop can have repercussions for the people who consume it, experts said.
Dr John Pinto, the head of counselling from mental health company ThoughtFull, said that the overstimulating and disorienting nature of AI slop videos can have a mental health and cognitive impact on consumers.
"It erodes users’ ability to concentrate, impairs retention, and raises anxiety, particularly among youth, mimicking the behavioral patterns seen in gambling addiction," he said.
"The content itself is attention-capturing and arguably fun to watch … You feel like you want to stay on because you want to know what's next. And this also encourages you to keep scrolling to the next one."
As a result, AI slop can reduce attention spans and cause digital fatigue.
"Combined with the erosion of quality and the cognitive toll of trying to filter out 'junk' content, AI slop further burdens viewers’ mental well-being by fostering anxiety, distraction and disengagement," said Dr Pinto.
Adolescents who watch AI slop can also be impacted, said Dr Wong Lung Hsiang, a senior education research scientist at the Centre for Research in Pedagogy and Practice at the National Institute of Education (NIE) in Nanyang Technological University.
Beyond misinformation, Dr Wong added that watching too much AI slop can cause cognitive stunting.
"Constant exposure to shallow, derivative material normalises incoherence and reduces opportunities to practise critical thinking or creative imagination," he said.
"Surreal or 'uncanny' AI imagery can confuse or unsettle young viewers. Over time, it can desensitise them to absurdity and diminish appreciation for authentic human expression."
Dr Wong added that some teens and children may have weaker digital defences, making them easily manipulated by AI slop that is designed to harvest clicks and data, as well as to push scams.
"When adolescents fail to verify low-quality, AI-generated content that prioritises speed over accuracy, this can induce over-dependency on a tool, which may often be confidently wrong and has been designed to please," said Asst Prof Tanmay Sinha from the Learning Sciences and Assessment Department at NIE, whose research includes studies on students' preparation for future learning.
"Constantly consuming AI slop can harm an adolescent’s intellectual development, very much akin to how a diet of fast food and sugary snacks can lead to nutritional deficiencies and other health problems."
Zooming out even further, critics point out that the proliferation of AI slop is making the internet a poorer resource for users seeking real information and connection.
This process – the gradual deterioration of an online platform brought about by a drop in the quality of service provided, due to profit-seeking – has even been given a name: "enshittification", a term coined by Canadian-British journalist and science fiction author Cory Doctorow.

Assoc Prof Lee of SUSS said the daily deluge of AI slop on social media will dent users' trust, but he also believes that tech companies will course-correct in due time.
"I am starting to see the risk of 'enshittification,' where some platforms choose quantity over quality," he said.
"But I'm still optimistic about the future of the internet … I would think of the current explosion of AI slop not as the end of the good internet, but as a painful, necessary stress test."
WHAT SOCIAL MEDIA PLATFORMS SAY
In response to CNA TODAY's queries, a Meta spokesperson said that users on its social media platforms Facebook and Instagram can share AI-generated images as long as they meet its community guidelines.
"Keeping our platform safe is extremely important to us," said the spokesperson.
"We are labelling a wider range of video, audio and image content as 'AI Info' when we detect industry-standard AI image indicators or when people disclose that they’re uploading AI-generated content."
The company requires users to label content that has been digitally generated or altered, including with AI. It also uses a combination of technology, user reports and reviews by its content moderation team to identify and review content against its standards.
Over on YouTube, AI-generated content does not violate its policies. Rather, content on the platform, regardless of whether it was created with AI or not, must follow community guidelines.
These guidelines include prohibiting technically manipulated content that misleads viewers or can pose a risk of harm.
It also updated its monetisation programme's policy on repetitive content on July 15.
"This type of content has always been ineligible for monetisation under our existing policies, where creators are rewarded for original and authentic content," said the platform on its website.
TikTok declined to comment for this article.
While social media platforms have stances against mass-produced content, it is unclear if they have taken action against social media users uploading AI slop.
Some critics have said that it may not be in the platforms' interest to do so, as the AI slop racks up millions of views and attracts advertisers.
Most social media companies are also not taking a hard stance because they are invested in AI, Mr Jason Koebler, a co-founder of the tech news website 404Media, told American news outlet NPR on Aug 28.
HOW TO STOP THE SLOP
Assoc Prof Lee added that government authorities, platforms, users and other key stakeholders will need to work together to avoid a degraded internet in the face of AI slop's advancement.
"For example, platforms will take the responsibility to label AI content clearly and may adjust their algorithms accordingly," he said, adding that social media platforms should move away from focusing purely on engagement and focus on quality content instead.
There have been some efforts to encourage safe and responsible AI usage. In December 2023, tech giant IBM and Meta launched the AI Alliance alongside other global tech firms, governments and academia to drive responsible AI adoption and collaboration.
Mr Kitman Cheung, chief technology officer of IBM ASEAN, said that creators of AI models should be transparent about how their models are developed from the get-go.
On this front, Mr Cheung said IBM has developed AI FactSheets 360, which allows those who publish technology, such as an AI model, to explain how it was built.
This lets users know what sort of content is likely to be created with the model, and makes it easier to regulate the type of content generated by new technologies.
While regulation and creating open standards for AI platforms can increase transparency, Mr Cheung acknowledged that this will not prevent bad actors from creating harmful AI slop.
"This is why we need guardrails … to detect content and take it offline," he said.
One such way could be further developing adversarial AI – AI models that can detect other AI models and their generated content.
When it comes to protecting children, Professor Looi Chee Kit, Emeritus Professor at the Learning Sciences and Assessment Department in NIE, said that there needs to be a "layered response" involving social media platforms, the government, educators and parents.
For example, he said, existing child-safety and consumer protection laws should be expanded to cover AI content.
"Enforce truth-in-advertising standards when AI outputs are disguised as human and fund nationwide AI media literacy programmes," he said.
"Beyond traditional media literacy, children must learn AI literacy, such as (questioning) who made the content, why it was made, and whether it can be verified."
While parents do have a role to play in monitoring their children's digital use, the MDDI survey results released on Sep 12 found that only 37 per cent of parents in Singapore were confident in their abilities to do so.
"For parents who expressed little or no confidence, the main challenges cited were limited time due to work or other commitments, the child’s reluctance to follow rules, the child’s ability to bypass parental controls, and parents’ limited knowledge of parental controls or monitoring tools," said MDDI in its press release.
There have been government efforts to protect children from online harms, such as a code of practice that requires social media services to protect Singapore users from accessing harmful content.
But beyond that, MDDI announced it will be equipping parents with resources to help them cultivate good digital habits in their children, among other things.
Dr Pinto of ThoughtFull said that AI slop is designed to lock in user attention, and that is something more users should be aware of.
"The content is uncanny, often absurd, and our brains are wired to stop and look at what feels 'off'. Social media algorithms amplify that reaction, so even low-quality AI output spreads fast," he said.
"People aren’t watching because it’s good. Instead, they’re watching because it’s hard to look away."