Pseudo doctors, finance and fitness gurus – do consumers need more protection from unqualified advice online?
Experts, creators and users said the goal is to strike a sweet spot where there is tighter oversight of sensitive content that can cause harm, while still allowing responsible creators to provide insights drawn from their own experiences.
Scroll through TikTok or Instagram today and it won't take long to hit upon a content creator offering advice – sponsored or otherwise – on anything from money matters to health supplements.
Many of them come across as confident, polished, articulate and authoritative – whether they are recommending stocks to watch, sharing weight-loss tips or, as self-described life coaches, offering mental health hacks.
Where gleaning such knowledge would once have required hours of deep research or speaking to other professionals, everyday social media users now have instant access to creators with a mix of formal training, lived experience and self-taught know-how – all eager to share their insights with scores of viewers.
Their content now spans far more than low-risk topics such as tech gadgets or food reviews, with users also increasingly turning to creators for medical advice, political commentary and investing tips.
It is perhaps unsurprising that in an age where everything is just a click away, these content creators have become a go-to source of advice for millions, including those in Singapore, on almost any matter.
However, despite the creators' significant influence – with some commanding vast audiences and dispensing guidance on high-stakes issues to impressionable viewers – the backgrounds, training and credentials of many creators are often unclear and difficult to verify.
This opacity has, in some cases, led to real-world harm when people follow advice from untrained individuals.
In 2023, American influencer Logan Paul faced a class action lawsuit over CryptoZoo, a failed crypto project he had heavily promoted in 2021 as an online game where users could buy, hatch and trade digital "animal" non-fungible tokens for profit.
Although millions of dollars were allegedly raised, the project never produced a playable game and several thousand participants reportedly lost money. The lawsuit, however, was dismissed this year after the court found insufficient evidence directly linking Paul to the alleged misconduct.
The CryptoZoo saga nonetheless underscored how creator-led investment hype can result in real-world consequences for audiences.
And Paul is just one of several high-profile influencers who have come under scrutiny after their followers allegedly suffered harm from accepting unqualified advice.
This year, global entertainment giant Netflix released Apple Cider Vinegar, a miniseries based on the story of Australian wellness influencer Belle Gibson, who falsely claimed to have a terminal illness while promoting alternative therapies.
In 2013, Ms Gibson said she had been diagnosed with "malignant brain cancer" and had rejected chemotherapy and radiotherapy to heal herself "naturally" through diet and nutritional monitoring.
She went on to launch a successful wellness app and a cookbook, crediting her dietary regime with curing her "cancer" and urging others to follow her example.
In reality, Ms Gibson was never diagnosed with the illness. Her deception was exposed in 2015 and later confirmed by Ms Gibson in an interview.
In 2017, she was fined A$410,000 (S$346,000) by Australia's Federal Court for engaging in misleading and deceptive conduct.
And while not all influencers end up facing multimillion-dollar lawsuits over misleading or untrained advice, the harm inflicted on their followers can be just as distressing.
The Guardian reported in February that Ms Gibson's actions had devastating consequences for followers with cancer who were encouraged to abandon conventional and potentially life-saving treatment in favour of unproven alternatives.
Across the globe in China, in October this year, some media outlets reported that the country had passed a new law requiring influencers to prove their official qualifications – such as a degree, licence or certification – before posting content on sensitive topics such as medicine, law, education and finance.
According to the reports, Chinese social media platforms such as Douyin and Weibo must also verify creators' credentials, enforce proper citations and label AI-generated content.
The harms associated with creators' posts, followed by the authorities' attempts to rein in an increasingly unruly influencer landscape, have intensified global debates about what guardrails are needed and whether excessive regulation might be counterproductive or curtail creativity.
In Singapore, experts, creators and consumers said that the challenge lies in striking a balance: protecting the public without stifling the very ecosystem that has made social media such a rich source of information and expression.
SOCIAL MEDIA AS A PRIMARY RESOURCE FOR MANY
Where consumers once turned to professionals for help on sensitive matters, it has now become second nature for many to turn to content creators for quick answers, real-time reviews and approachable guidance.
Dr Shilpa Madan, an assistant professor of marketing at Singapore Management University (SMU), said that influencer and user-generated content is now a powerful driver of discovery and purchase because it sits at what she called the "sweet spot between advertising and word-of-mouth".
While consumers tend to be wary of traditional advertising – believing that marketers might have a "manipulative intent" to drive sales – influencers often appear more like "friends" testing products on their behalf, she said.
Dr Madan said that influencer content also draws on two other consumer psychology cues: "social proof", where people take comfort in following what others are also choosing, and "parasocial relationships", where followers may develop a sense of familiarity with influencers over time and come to trust their judgment.
A 2024 study by financial website MoneySmart found that more than half of the 1,000 Singaporeans aged 18 and above who were polled relied on platforms such as YouTube, Instagram, Facebook and TikTok for financial advice, preferring them over traditional sources such as family, friends and financial advisers.
About 21 per cent said they check social media daily for financial advice and tips, while 44 per cent follow specific influencers for such advice.
Meanwhile, a 2025 survey by consultancy Edelman across 16 countries, including Singapore, found that 38 per cent of respondents aged 18 to 34 had disregarded their medical provider's guidance in the past year in favour of health advice from social media.
The survey also found that 33 per cent of respondents from that same age group had made health decisions based on information from content creators without medical training.
Yet even as social media and influencers become central sources of advice, publishing to millions instantly, authorities globally have not built a comparable level of infrastructure to maintain the quality of what circulates and, thereby, protect consumers.
That asymmetry, said Dr Madan, poses particular risks for more vulnerable groups – including people with lower literacy or digital skills, those with limited expertise in complex fields such as health or finance, and older adults who did not grow up with social media.
Such individuals are usually most susceptible to scams and least equipped to absorb the fallout of a poor decision, she added.
For everyday users, it can also be difficult to tell who is truly credible. Many content creators may look professional to the undiscerning eye, and platform algorithms tend to amplify what is popular, or reinforce existing views – not necessarily what is accurate.
Ms Goh Yan Xuan, 30, recalled being drawn into trying a weight-loss and meal-replacement regime promoted online that cost about S$500 (US$386) for two months' worth of products, after seeing multiple reviews claiming real-life results.
"They market it as (something) healthy which still gives you the nutrients that you need," said the 30-year-old who works in the education sector.
But the programme, fronted by an influencer who marketed herself as a "health consultant", turned out to be essentially just an extreme calorie-deficit diet.
Ms Goh, who bought the product in early 2023, said she tried to follow the regime on and off for two months before giving up, describing the use of the product as "not feasible at all".
The plan required her to replace two meals a day with the drink, and the significantly reduced calorie intake left her fatigued, irritable and lethargic.
"I couldn't bring myself to exercise or do cardio (workouts) because I would feel faint," said Ms Goh, adding that the content creator who sold her the products would also guilt-trip her in controlling her diet further.
When Ms Goh told her that she struggled to control her diet and that she was constantly hungry and angry, the influencer replied that she needed to "relax" for the diet to work and keep herself busy so she could ignore the hunger – advice Ms Goh found "very suspicious".
"How can I trust that she's a health consultant when all she tells me is, 'you should limit your food'?" she said, adding that she eventually threw away the products.
For Ms Goh, the experience underscored why clearer rules are needed around high-risk health and wellness claims, especially given how such content could also reach more vulnerable groups like youths.
The rise of artificial intelligence (AI) has further complicated matters. Today, AI has made it far easier to produce convincing synthetic content, accelerating the spread of misinformation – from deepfake experts to AI-written scripts that may be inaccurate, unsubstantiated or stitched together from disparate sources.
"AI has disrupted the general level of trust online by blurring the lines between human and machine-generated content," said Dr Jeremy Sng, a lecturer from the Interdisciplinary Collaborative Core Office at the Nanyang Technological University (NTU).
In the past, users could rely on certain mental shortcuts to decide whether to trust content – such as deferring to authority figures or assuming that user-generated posts were more authentic than sponsored ones, said Dr Sng.
But in an AI-driven environment, these cues are increasingly unreliable.
With scammers now able to pose convincingly as politicians, authoritative figures or influencers, users can no longer depend on such conventional shortcuts to judge the legitimacy of what they see online, he added.
"It's not just generating content alone but the misuse of one's digital likeness – meaning, the problem is not just that content generated is fake, but that fake content can be falsely attributed to a political or authoritative figure or influencer," said Dr Sng.
"Someone can be seen saying something they never said, and authorities need a way to deal with that misuse of digital likeness too."
HOW IS SINGAPORE TACKLING THIS?
To protect consumers, different jurisdictions around the world have introduced varying forms of oversight.
Denmark, for instance, is considering a landmark copyright law that would grant individuals the right to their own digital likeness – ensuring that deepfake or AI-generated representations cannot be created or used without their consent.
Under the proposal, individuals would also be able to demand that online platforms remove such content if it is shared without permission.
In Singapore, an array of regulations and guidelines is already in place, with some frameworks further tailored to specific sectors such as financial promotions, healthcare and medical advertisements, and food-related claims.
Among the broader industry guides are the Advertising Standards Authority of Singapore (ASAS) guidelines for interactive marketing communication and social media. These set out ethical standards for marketers, advertising agencies and media using social platforms to promote goods and services or influence consumer behaviour.
These guidelines require "clear and prominent" disclosures for sponsored content, indicating that the material has been paid for. Such disclosures should appear "as early as is reasonably possible so that they are noticeable to consumers with minimal scrolling or clicking", according to the ASAS document.
Yet the tension lies in how toothless some of these measures can be. Penalties are often limited and several regulations apply only to paid promotions – not to organic content, where creators freely dispense information or share insights drawn from their own experiences.
Dr Joicey Wei, an associate professor of marketing at the Singapore University of Social Sciences (SUSS), said the punishment regime under ASAS' guidelines is "relatively weak".
ASAS investigates complaints of undisclosed sponsorships and, if a breach is found, may issue a public advisory naming the influencer and brand, or request that the platform or advertiser amend or remove the non-compliant content. It does not, however, impose monetary fines.
As a self-regulatory industry body, ASAS has no statutory enforcement powers. While its Code of Advertising Practice is backed by industry associations, participation by influencers or brands is entirely voluntary.
Dr Wei said that ASAS' actions also often rely on complaints being filed, meaning problematic content may go unchallenged.
From a deterrence perspective for influencers or brands, the threat of public naming or removal may not be strong enough when the commercial gains of non-disclosure are high, she added.
Nevertheless, industry and sectoral guidelines here continue to be strengthened.
In September, the Monetary Authority of Singapore (MAS) announced new guidelines aimed at promoting responsible online financial advertising.
Taking effect in March next year, the guidelines set out safeguards that financial institutions are expected to adopt to manage risks associated with digital advertising. They will apply to all financial institutions and their appointed third parties, including online content creators.
MAS and ASAS have also jointly developed a separate guide to help creators produce responsible financial content, outlining seven "must-knows" for sharing financial information online.
The guide states some key questions creators should consider before publishing financial content, including whether they require an MAS licence, whether the entities they are promoting are legitimate, and whether disclosures need to be made regarding the content.
It also says that creators cannot absolve themselves of legal liabilities related to their content by simply stating: "This is not financial advice."
Currently, the Infocomm Media Development Authority (IMDA) also requires social media platforms, including Facebook, Instagram, TikTok and YouTube, to curb harmful content and implement safety measures, particularly to protect children and other vulnerable users.
In response to mounting pressure from governments and consumers around the world, major social media platforms have tightened their community standards, now explicitly barring harmful material such as misleading health or financial claims. Content flagged for breaching these rules may be removed or have its reach restricted.
While these rules are in place, the reality is that the volume of content far outpaces what moderation teams can realistically handle, leaving gaps and delays that allow misleading or harmful claims to slip through.
Responding to queries from CNA TODAY, a spokesperson for the Ministry of Digital Development and Information (MDDI) pointed to existing measures to address online misinformation.
These include legislation such as the Protection from Online Falsehoods and Manipulation Act (POFMA), as well as ground-up fact-checking initiatives and digital literacy programmes aimed at helping the public become more discerning online.
"At present, there are no plans to introduce measures that require online content creators to verify their credentials before posting content on specialised topics.
"That said, we will continue to ensure that our approach to combating online misinformation, including AI-generated misinformation, remains fit-for-purpose," said the spokesperson.
HOW LIGHT OR STRONG A TOUCH TO CURB MISINFORMATION?
Media and marketing experts, content creators and social media users agreed that regulation is necessary to curb platform abuse and misinformation. But this must be achieved without stifling freedom of expression or creativity, they stressed.
Some suggested that a tiered, risk-based approach to regulating creators and the wider social media ecosystem may be more appropriate than a blanket clampdown.
SUSS' Dr Wei proposed that low-risk categories such as fashion, beauty and lifestyle could remain under standard disclosure rules and existing platform tools.
Medium-risk verticals – including wellness, devices and food claims – might warrant stricter substantiation requirements and faster takedowns for problematic content.
For high-risk areas such as finance and medical advice, she said regulators could consider enhanced disclosures, some form of registration or attestation for creators, and even co-liability for brands and platforms when harmful misinformation is amplified or spread.
Experts, creators and users interviewed by CNA TODAY were, however, divided on whether influencers should have to verify their credentials before offering their views, though most agreed that such measures would work well as a first layer of safeguards.
Dr Samuel Ravi Choudhury, a family physician and content creator, said qualifications can act as an assurance because they provide accountability.
"If a doctor shares false information, there are real consequences. Not everyone online has that level of accountability," he said.
Consumers like 32-year-old public servant Mr Wong, who declined to give his full name, said that while qualifications would make him "more likely" to trust a creator – as they signal experience and research in a given field – their importance also depends on the topic at hand.
For instance, he believes credentials are important for creators producing medical content. But when it comes to financial advice, he feels qualifications are "hardly much use", given the many differing approaches to investing.
Rather than outright banning influencers without specific certifications from posting content, Mr Wong suggested that a clearer labelling system could help.
If creators' credentials were displayed alongside their content, he said, they could continue posting on their topics of choice, while viewers would be better equipped to judge whether to trust their advice.
Credentials aside, experts also highlighted the need for platform-level accountability and stronger tools to curb misinformation.
SMU's Dr Madan noted that platforms are uniquely positioned to detect patterns that individual creators or consumers cannot – such as large volumes of near-identical reviews, coordinated campaigns or sudden spikes in certain types of content.
She added that small "nudges" in interface design could also make a difference.
These include prompts asking creators if they are giving medical or financial advice before posting high-risk claims, standardised labels for sponsored and AI-generated content, clearer mechanisms to flag "suspicious" posts, and tools to crowdsource verification of doubtful claims.
Agreeing, SUSS' Dr Wei suggested stronger, standardised disclosure tools across platforms – such as mandatory, uneditable labels at the start of captions indicating an advertisement or paid partnership – rather than relying on creators to type hashtags like "#ad" or "#sponsored".
For high-risk verticals, she said authorities could also consider expanding existing guidance with clearer "dos and don'ts" for creators, quicker takedown processes and stricter penalties.
Governing the emerging field of AI-generated content presents a separate set of challenges.
Noting the tension between regulation and innovation, NTU's Dr Sng said: "Regulate too early, you stifle innovation, because innovation needs room to grow and explore. But you can't regulate too late because regulation takes time to develop and formulate.
"If it's too late, your new regulation might become less relevant by the time it's implemented – (and) innovation has already gone on to new versions or new features."
He added that excessively harsh regulation may also risk driving investors away.
Ultimately, experts and users agreed that holistic regulation will be key – combining stronger platform obligations, stricter enforcement, smarter disclosure, targeted rules for high-risk sectors and improved media literacy.
FINDING THE SWEET SPOT
As such, interviewees said the goal is to strike a sweet spot where there is tighter oversight of sensitive content that can cause harm, while still allowing responsible creators to provide insights drawn from their own experiences.
Ms Cheryl Tay, a digital content creator and founder of a sports marketing agency, said a blanket requirement for formal qualifications may not be practical, given that many creators share valuable insights drawn from lived experience, peer learning or community expertise.
Still, she agreed that creators should bear "stronger responsibility" when discussing topics that directly impact someone's health, safety or finances.
She added that this does not have to mean mandating formal credentials, but at the very least, such content should include clear disclaimers, evidence-based information or involve collaboration with qualified experts.
"Rather than regulating who is 'allowed' to speak, I think it's more realistic to regulate how creators speak about sensitive topics, requiring accurate sources, disclaimers, proper labelling, and responsible framing," said Ms Tay.
Personal finance content creator Chris Chong, who runs the HoneyMoneySG platform and SoloBizSG firm, said that requiring influencers to prove official qualifications before speaking on sensitive topics may help curb misinformation.
But he added that there is the risk of shutting out creators who offer valuable real-world experience but lack formal credentials.
"Some of the most practical financial tips come from lived experience, not a textbook," said Mr Chong, who was previously a chartered accountant.
He said that while his credentials have shaped how he analyses numbers and approaches risk, much of his content is also rooted in a decade of investing and learning through personal experience.
"If every finance topic required a licence, most educational creators would disappear. That would lower financial literacy, which is the opposite of what we want.
"A creator without formal qualifications can still be responsible if they do thorough research before sharing anything."
He said that a better approach is to avoid personalised advice and to remind viewers that their circumstances can be very different.
Personal finance creator Dawn Cher, also known as SG Budget Babe, is one such content creator without formal credentials in the field she gives advice on.
In place of formal qualifications, Ms Cher said she ensures her content remains accurate, responsible and transparent by "triple-checking" every number, citing credible sources, and "grounding (her) content in facts" so her audience can verify the information for themselves.
If she recommends a product, she said it is because she has personally used it or has thoroughly researched it.
For sponsored content, she added that she applies "a second layer of checks and balances" by working with the product and compliance teams of the financial institution to ensure the material is "sound, accurate, responsible and factual".
She also makes it clear that the content is sponsored — whether in a disclaimer statement at the end of a blog article, or through hashtags on her social media posts.
Ms Cher said that her communications degree has also equipped her with training in journalistic credibility, media law and fact-checking.
She also pointed to the fact that many available finance certifications are geared towards sales-focused roles, such as insurance or hedge-fund positions.
"Are there even any relevant credentials – one that certifies someone is adept at managing their own personal finances?" she questioned.
Even credentialled professionals who create social media content told CNA TODAY that qualifications alone are insufficient.
Credibility, they said, also rests on rigorous fact-checking, authenticity, honesty and clear ethical boundaries.
Dr Choudhury put it this way: "I'm a practising medical doctor – but that alone is not enough. Qualifications give you a foundation but they don't make you an expert in everything."
Knowing where one's expertise ends, he added, is part of responsible content creation.
He noted, for instance, that he would never make a video on the "best method to treat a brain tumour" because it is not a field he has clinical experience in.
"Medicine evolves extremely fast. Staying competent as doctors means constantly updating what we know. Every educational piece I post has to be verified (as) accuracy is part of my professional responsibility," Dr Choudhury said.
Likewise, Dr Choudhury said he only shares personal experiences when they align with medical guidelines.
"My practice is to include disclaimers that the content is meant for educational purposes, because medical advice requires a lot of nuance. That is why it is very important to still speak to your own healthcare professional – only they can tell you if what you see online is useful for you."
Consumers, too, cautioned that credentials are not a foolproof measure of trustworthiness – underscoring the need for a broader ecosystem of checks.
Postgraduate student Wee Ci En pointed to the example of US television host Dr Mehmet Oz, who despite his medical background, was criticised for promoting supplements and treatments with unproven claims, and for featuring guests who endorsed home remedies without credible evidence.
The 29-year-old said: "To a certain extent, it does appeal to me to know that influencers who are giving advice online are qualified. Certain topics that can be life-changing – like health, law or education – should only be given by professionals.
"However, people within that particular industry may also have vested interests, such as a financial advisor trying to sell you financial products or investments. He may be a professional and qualified, but he may also have vested interests."
Conversely, someone outside the industry may sometimes offer a more neutral point of view, she added.
Given that creators recognise that they bear responsibility for what they publish, would they be open to policing their own community?
Some said they would, noting that peer accountability could help lift industry standards.
Ms Tay, the content creator, said: "We creators are part of the ecosystem too, and our reputation is ultimately shaped by the environment we operate in.
"When there are bad eggs, whether they're spreading misinformation, exploiting trends irresponsibly, or using harmful tactics, it reflects on the wider creator community."
As creators understand their niche and audience better than any platform or regulator can, Ms Tay believes that they also have a responsibility to "model good behaviour, set standards, and be mindful of the kind of content we put out".
"Accountability isn't just a top-down thing; it's also peer-driven," she added.
Agreeing, health and fitness transformation coach Preston Sin said: "Because creators are closest to the ground, we see what's happening in our space every day, notice trends quickly, and can 'feel' when something is off. This means sometimes we can spot problems much faster than regulations can.
"And if a creator has been given a platform or a larger following, I think that influence comes with responsibility," he added.
While he does not believe in a culture of "public shaming", Mr Sin said he would report a creator through the proper platform channels if they were misusing their platform or posting content that was "genuinely harmful".
"And if it's misleading, or misinformation, then I'd create content that educates and corrects the principle, without naming or shaming anyone."
At the same time, many agreed that even the most robust regulatory system cannot replace consumer vigilance.
While regulators and platforms try to keep pace with an ever-shifting social media landscape, consumers must also exercise their own due diligence and discernment when engaging with content, they said.
After all, regulators, platforms and creators can help build a safer ecosystem but advice on social media should never be consumed uncritically or taken at face value.
Finance blogger Ms Cher said: "What we need instead is better digital literacy – for both creators and audiences – so that people can discern fact from opinion, and verify what they read online. Technology moves faster than regulation ever can, so education is a more sustainable path forward."
Adding that individuals must conduct their own due diligence, Mr Wong, the public servant, said consumers should cross-reference information and avoid relying solely on a single source.
"I feel the onus should be on consumers more than content creators, as they should … at least fact-check or cross-reference information," he said.
"If an individual is easily influenced by misinformation from content creators, then it is the individual's responsibility."