‘Be the one in control’: Why are more countries leaning towards banning social media access for kids?
When it comes to social media bans, governments are willing to absorb criticism on feasibility, privacy and civil liberties, experts tell CNA.
Social media apps on a person's phone. (File photo: iStock)
KUALA LUMPUR: As countries around the world mull social media bans for those under 16, tech firms have touted their own child safety features and warned of unintended effects in their bid to push back against more regulation.
But experts say that ship has sailed for Big Tech, noting that internet regulators seem more determined than ever to push through the bans despite what platforms say or do.
With strong public support for intervention against the threat of online harms especially against children, governments are willing to absorb criticism on feasibility, privacy and civil liberties, rather than be accused of inaction, the analysts add.
While tech firms can highlight how they are already protecting children on their respective platforms, the debate has moved beyond safety tools to whether they can demonstrate systemic and enforceable measures, the experts tell CNA.
“Social media platforms can (roll out features that may) reduce some regulatory pressure, but they are unlikely to reverse the current policy momentum through product tweaks alone,” said Galvin Lee, a marketing and economics lecturer at Taylor’s College in Malaysia.
The momentum in the region is palpable.
On Dec 10 last year, Australia became the first country to ban social media for children under 16, blocking access to platforms including TikTok, Alphabet's YouTube and Meta's Instagram and Facebook.
Since the landmark ban, regulators across Southeast Asia and Europe, in Brazil, and in a handful of US states have moved to study or emulate it.
SOUTHEAST ASIAN COUNTRIES’ STANCE
On Mar 11, Indonesia acknowledged that its plan to ban social media for under-16s from Mar 28 was a “major task” given the sheer number of children in the country, but insisted it was necessary to safeguard them in the digital space.
Malaysia also plans to launch a ban this year. The government is reviewing age-restriction mechanisms and has started a regulatory sandbox with tech firms to introduce a minimum age limit for new account registrations.
In January, Singapore said it was “actively engaging” its Australian counterparts in assessing the effectiveness of social media age assurance measures there, as some parents have issued calls for a similar ban.
In the Philippines, a lawmaker has filed a Bill that aims to ban minors under 16 from social media, arguing that the “burden of responsibility” should be on social media platforms rather than users or their parents, local newspaper Inquirer reported on Mar 10.
In Thailand, 87 per cent of respondents surveyed believe children under 14 should not be allowed to use social media, the highest percentage among 30 countries polled globally, according to a Bangkok Post report published in January, which quoted the Ipsos Education Monitor 2025.
Vietnam’s ruling party in January issued a directive mandating identity and age authentication for all social media users, although this did not extend to a ban for minors. Several local news outlets, however, have covered the debate on whether children under 16 should be allowed on social media.
Tech firms have argued that such bans would be tough to implement, deprive young people of social contact, and drive them to darker corners of the internet that are poorly monitored.
After Australia’s lawmakers voted in favour of the ban in December 2024, a Meta spokesperson said it was “concerned about the process which rushed the legislation”, highlighting a “lack of evidence” underpinning it.
Meanwhile, social video platform TikTok said then that it was “important that the (Australian) government works closely with (the) industry to fix issues created by this rushed process”.
Snap, the owner of photo-sharing app Snapchat, cited “many unanswered questions about how the law will be implemented in practice”, and said it will work with authorities to develop an approach that balances safety, privacy and practicality.
Last Tuesday (Mar 10), Meta held a press briefing in Kuala Lumpur to promote its teen accounts feature for users aged 13 to 17, which has been gradually rolled out since 2024 and is designed to limit who can contact teens and what content they see. The company could not say when the feature was first introduced in the Malaysian market.
These accounts are private by default, and teens will not receive message requests from users they are not connected with. Teens aged 13 to 15 will need parental permission to change these settings.

Teen accounts are also barred from going live, and suspected nudity sent in messages is automatically filtered out.
Philip Chua, Meta’s APAC director of public policy for products, said there was a need to balance the potential benefits and harms of being connected on social media.
“The issue here with some of the ban proposals that we've been seeing around the world, including Australia, is that ultimately that results in a lot of unintended consequences,” he said.
These include migration to unregulated platforms, a surge in circumvention techniques like the use of virtual private networks (VPNs), and the creation of a regulatory gap that does not reflect where teens actually spend time online, he added.
But Shafizan Mohamed, a communications lecturer at the International Islamic University Malaysia (IIUM), said regulators looked at “more than just talking or having features”.
“Even if Big Tech is being very serious in improving their safety features, coming up with new alternatives or initiatives, it would not make governments reconsider under-16 restrictions,” she told CNA.
“There is a bigger political momentum not just here in our part of the world, but also in Europe for example, where governments are shifting their positions from trusting platforms to enforcing regulations."
WHY REGULATORS AREN’T BUDGING
Like Meta, TikTok and Snapchat have built child safety features into their platforms. TikTok’s family pairing feature lets parents set boundaries and customisable limits, while Snapchat applies safety and privacy settings by default for children.
But despite such initiatives, governments lean towards regulation as they have seen the impact of social media on children, concerns from parents and the larger issue of public trust, experts said.
"Governments can see that it is time they need to be the one in control; that it cannot be left to Big Tech to decide,” Shafizan said.
Australia’s ban has also shown that child safety politics can overpower platform lobbying, Lee from Taylor’s College said.
“Australia showed that once the issue is framed as a social protection question, governments may be willing to absorb criticism on feasibility, privacy, and even civil liberties if the public strongly supports intervention,” he said.
Shafizan said regulators were increasingly prioritising a “precautionary approach” when it comes to the social media landscape.
“For example, I think even MCMC would rather be accused of over-regulating than failing to act while all of these harms continue,” she said, referring to the Malaysian Communications and Multimedia Commission.
In response to CNA’s question on whether Meta's teen accounts feature would be enough to make governments reconsider blanket bans, Chua said the company shares a “common purpose” with regulators.
“But there's definitely, I think, more conversations to be had about how you can pursue the common intent,” he said.
“The conversations that we have with regulators is to figure out how we can keep people safe online, not just in a small number of apps that are actually perhaps more invested in safety than unregulated or newer apps.”
Chua reiterated Meta’s calls for age verification to be introduced at the base level of app stores, saying that this would be more efficient than requiring age verification for each of the dozens of apps that teens use.
This means app stores would be required to verify a user's age before letting them download new apps.
Meta has shared with MCMC its child safety features and the unintended consequences of a ban, Chua said, calling it a “constructive working relationship”.
“My hope is those points are well registered,” he added.
CNA has reached out to MCMC for comment.
UNINTENDED CONSEQUENCES
The unintended consequences that tech firms cite to rebuff blanket bans have merit, but only up to a point, experts told CNA.
Lee said it is true that underage users can circumvent restrictions through borrowed identities, older siblings’ accounts, VPNs, or migration to less regulated online spaces.
He also pointed to how Australia’s ban excludes standalone messaging apps, online gaming, professional networking, education and health support services, creating “obvious edge cases”.
But Lee said regulators keep insisting on bans or hard minimum-age rules because they no longer see this as just an access problem, but an incentives problem.
“In their view, platforms have had years to improve teen safety and have not earned the presumption of self-regulation,” he said.
“A ban is blunt, but it creates a non-negotiable compliance duty and shifts the burden back onto firms that design and profit from these systems.”
Lee also noted that Australia’s framework includes continuing oversight and an independent review within two years.
“In other words, Australia has accepted that this is not a one-off announcement but an evolving regulatory programme,” he added.
Shafizan from IIUM said loopholes are "natural" in any newly implemented legislation, with regulators involved in a “learning process” to identify and close them.
"But I think most governments still see regulation as a stronger governance signal, rather than voluntary company safeguards,” she said.
Benjamin Loh, a media scholar and senior lecturer at Monash University Malaysia, however, described unintended consequences as a “legitimate concern”, citing how the US tried to ban alcohol during the Prohibition era of the 1920s.
Prohibition not only failed to eliminate alcohol consumption but also triggered a rapid rise in organised crime and dangerous illicit alcohol production.
“Social media has become quite ingrained in the lives of most young people and cutting it off cold turkey will likely make many resort to risky behaviours to circumvent it, hence why there needs to be nuance in the way the ban is enforced,” Loh said.
This means regulators should not only use identity or age verification in enforcing the ban but also monitor overall usage across all apps, he explained.
BEST WAY FORWARD
When tweaking ban policies, regulators could still be influenced by the “credibility of the package” behind tech firms’ child safety features, said Lee from Taylor’s College.
Platforms would need to show independently auditable age assurance, privacy-preserving enforcement, default high-safety settings for minors, faster intervention against grooming and bullying, and transparent data on outcomes, he said.
“So the obstacle is now less about technical feasibility and more about trust, accountability, and whether regulators believe platforms are moving fast enough without legal compulsion,” he added.
But Loh warned that the Cambridge Analytica scandal has made platform owners “far more defensive” in how they interact with regulators, especially since most regulators tend to approach platforms individually rather than collectively.
In 2018, it was revealed that British consulting firm Cambridge Analytica collected personal data belonging to millions of Facebook users without their consent, mainly to be used for political advertising.
Before the scandal, regulators often deferred to platform owners’ judgment to self-regulate, accepting the narrative that they lacked the knowledge to properly understand and regulate social media, Loh said.
But the scandal made regulators realise that Big Tech could not be trusted, which in turn led to constant pressure on the industry to do more and tech firms “fighting back at every turn”, he said.
“Platform owners are also more careful in ensuring that global-level regulations like the EU’s GDPR are not so easily created or at the very least are only produced with their direct input and influence, which we can see with artificial intelligence regulation,” Loh said.
The European Union’s General Data Protection Regulation is a legal framework that imposes data privacy obligations on organisations anywhere, as long as they target or collect data related to people in the EU.
While governments can set non-negotiable child safety standards, platforms should also be able to retain some flexibility in how they meet them in a co-regulation model, Shafizan said.
She highlighted that online child safety cannot just depend on strict legislation, but also a more “comprehensive movement” involving digital literacy, awareness and community support.
“I think the most sustainable policy, therefore, is not just one that makes platforms materially safer by design, but also gives governments real enforcement tools to allow room to refine these rules as evidence emerges,” said Shafizan.