IN FOCUS: As a crucial election looms, Singapore braces for torrent of 'highly problematic' online threats
Deepfakes, disinformation and foreign interference, to name a few, have troubled voters all around the world in a bumper year for polling. What are the lessons for Singapore?

A multitude of online threats await political parties, authorities and the voting population as Singapore’s next general election looms. (Illustration: CNA/Clara Ho)
SINGAPORE: He might be educated, tech-savvy and plugged into current affairs, but every now and then, 30-year-old Isaac Neo still has his doubts over whether something he's spotted online is real or fake.
“I'd say I'm quite confident at identifying misinformation or AI (artificial intelligence) deepfakes, but ... it's becoming more difficult to do so,” he said.
“Sometimes even reliable accounts that I trust can themselves get fooled by misinformation. AI-generated images are also getting more realistic."
Mr Neo, who works as a security risk analyst, also co-hosts the Before The Ballot podcast discussing key national issues ahead of Singapore's next General Election, which must be held by November 2025.
Already, a first sliver of election-related misinformation has emerged, with a widely circulated WhatsApp message claiming that newly minted Prime Minister Lawrence Wong has called for polls to be held on Sep 6.
In reality, the Electoral Boundaries Review Committee has not even been convened – at least as of Apr 18 – and this is a necessary step before an election can be called.
Such falsehoods are among the multitude of online threats that voters, political parties and governments around the world must contend with in a record-breaking year for polling.
While tech firms have introduced measures against malicious actors trying to disrupt election campaigns, it's ultimately down to voters to take precautions and discern for themselves, experts told CNA.
They added that political players must move fast to address and counter such incidents, on top of efforts to foster familiarity and trust with the public.
BEST PRACTICES FROM ABROAD
In 2024 alone, at least 40 elections will be held by countries making up over half of global gross domestic product, according to the World Economic Forum.
Governments have pushed out new laws and guidelines to try to stem the tide of digital and AI-related threats, but concerns remain over the risk of voters being misled.
In the United States, which heads to the polls in November to choose its next leader, a fake robocall impersonating President Joe Biden told thousands of people not to vote in the New Hampshire primary election.
Late last year, deepfake audio clips of United Kingdom opposition leader Keir Starmer verbally abusing party staffers went viral. A general election will be held in July.
Closer to home, Indonesia's Prabowo Subianto won a Feb 14 election after a campaign plagued by AI-driven disinformation.
As early as October, a video was doctored to show outgoing president Joko Widodo making a speech in Mandarin, in an attempt to portray him as pro-China and stir up anti-China sentiment among conservatives and nationalists.

In India, the world’s most populous country, deepfakes have also run rampant amid ongoing general elections, with a dead politician revived on video to praise a candidate and an audio hoax of opposition leader Rahul Gandhi resigning, among others.
Global examples of AI-driven electoral disinformation are “highly problematic” and something Singapore needs to be mindful of as a "critical" General Election approaches, said Singapore University of Social Sciences (SUSS) associate faculty member Kasthuri Prameswaren.
It will be the first election for a fourth-generation leadership led by a new PM, amid challenges on multiple fronts such as geopolitics, climate degradation, food and water security, an ageing population, and social mobility and inclusivity.
“There are various implications, such as increased polarisation in already divided societies, misleading of voters, disenfranchisement of voters and the weakening and discrediting of politico-administrative institutions such as the Elections Department, and even the delegitimisation of political actors,” she noted.
Mr Benjamin Ang, who heads the Centre of Excellence for National Security at the S. Rajaratnam School of International Studies (RSIS), noted the rapid growth and spread of generative AI tools in the past 12 months alone.
It indicates that online threats to elections are going to intensify in the coming years, said Mr Ang, who is also head of digital impact research at the think-tank.
“At RSIS, we've studied some of the best practices in securing elections, such as (at) the Canadian 2019 federal election,” said Mr Ang.
Some of these measures include the government working with and regulating social media platforms, forming standard operating procedures for critical election incidents, setting up a task force on election threats, providing neutral assistance and advice to all major political parties and improving citizen awareness of disinformation and foreign interference.
WHAT PARTIES CAN AND SHOULD DO
In Singapore, the use of generative AI and deepfakes will increase, said Associate Professor Natalie Pang from the National University of Singapore’s Communications and New Media Department.
She cited a recent extortion plot against some Members of Parliament, which involved pictures of their faces superimposed onto obscene photographs.
“It’s deepfake AI porn for now, but it’ll evolve when the election is here.”
The Workers' Party (WP), the largest opposition party in Singapore, told CNA it takes a serious view of the potential threats that could arise from abuse and misuse of information.
It is also concerned about the rise of generative AI capable of producing convincing deepfake videos, which has "elevated the potential for harm to a new level.”

The WP cited a parliamentary question raised in April by its MP He Ting Ru, who asked whether existing electoral rules provide recourse for candidates and political parties targeted by manipulated or deepfake media, or fake news, released on Cooling-off Day or Polling Day.
“We urge Singaporeans to refer to the official WP website or social media accounts if they are in doubt of WP-related information they come across,” said the party.
The ruling People’s Action Party (PAP) and the Progress Singapore Party (PSP), which has two Non-Constituency MPs, declined to comment.
The Singapore Democratic Party (SDP) meanwhile told CNA that the best way to deal with online threats such as falsehoods and AI-generated content would be to expose them and let people make their own judgments.
It's also important that the party's branding is strong and recognisable, and that it communicates its position on key issues clearly and frequently, a spokesperson said.
“This is so that members of the public, whether supporters of the party or otherwise, will know what to expect from us," she added. "And – more crucially – what to remove from consideration or take with a pinch of salt.”
Mr Ang from RSIS said political parties need to respond to threats "strategically – by knowing which ones to respond to and which to ignore – quickly, transparently and authentically”.
But politicians who have spent years building trust on the ground and face-to-face with constituents, as well as delivering on promises, will be in a stronger position.
With the abundance of cheap or free AI tools, political parties should also train their members to use AI responsibly, said Dr Carol Soon, principal research fellow at the Institute of Policy Studies and adjunct principal scientist at the government's recently launched Centre for Advanced Technologies in Online Safety.
"And they should evaluate the integration and use of AI tools in their campaigning process,” she added.
THE ROLE OF BIG TECH
Social media companies have a responsibility here, since they control the platforms where information spreads, said experts.
But there is a limit to what they can or will do, and a limit to what governments can make them do, because they are all owned by foreign entities, Mr Ang pointed out.
In the lead-up to elections, popular short-form video platform TikTok works with fact-checking bodies, electoral commissions and civil society to create information resources for users.
Labels are added to election-related content to direct people to an in-app "Election Centre", and reminders are placed on relevant hashtags to encourage awareness of misinformation.
During Singapore's presidential election last year, TikTok users were pointed to information from the Elections Department (ELD) via content labels and search guides.
Widely used instant messaging service WhatsApp has in place measures such as a limit on forwarding texts. It has also launched public education campaigns in some countries, including a dedicated fact-checking helpline in India.
In February, WhatsApp's parent company Meta said that in the "coming months", it will label AI-generated images on its Facebook and Instagram platforms.
YouTube has also promised to introduce a requirement for creators to disclose when their content has been created or altered with AI tools.
But Ms Kasthuri from SUSS pointed to the adverse impact of social media algorithms, which repeatedly churn out similar content after recognising what users are inclined towards.
“These, coupled with a lack of effort to fact-check and a haphazard understanding of the political landscape, are a key factor in how misinformation can wreak havoc on the electorate,” she warned.
Dr Soon noted that tech companies’ content moderation policies are global in nature, and that localising them would require collaboration with stakeholders like the government.
On that front, regulation – in the form of not just laws but also codes of practice, guidelines and frameworks – still plays an important role, she said.
FROM THE TOP DOWN
In response to queries from CNA, the ELD, a department under the Prime Minister's Office which oversees elections in Singapore, said a collective approach is required.
That would mean combining the efforts of government agencies, internet intermediaries and voters themselves.

The ELD pointed to existing legislation such as the Protection from Online Falsehoods and Manipulation Act (POFMA) and the Foreign Interference (Countermeasures) Act, or FICA.
“We have also seen many instances in recent years where social media and communications technologies were used by entities to mount Hostile Information Campaigns (HICs) against other countries,” said the department.
“These covert, coordinated and sophisticated online activities seek to advance the interests of the attacking country, for example by manipulating public opinion in the target country on domestic political issues.”
ELD said the government continues to work closely with major social media platforms to ensure that content moderation and election integrity plans are in place during elections.
“ELD and other government agencies conduct regular review of our electoral processes, including learning from overseas best practices, to ensure that our system and processes are effective and up to date,” it added.
Last year, the ELD and other government agencies jointly issued an advisory to presidential election candidates, warning of foreign interference and cybersecurity risks.
Ms Kasthuri, who specialises in politics in Singapore and Southeast Asia, stressed the importance of voter education, covering both campaign matters and how elections work, as a bulwark against misinformation.
Assoc Prof Pang from NUS said that while the government and schools alike have been pushing to improve digital literacy, political topics and elections "are much harder to tackle".
"We have a culture where citizens may be unwilling or fear discussing them openly,” she said. “The irony is that they can become more susceptible to conspiracy theories and falsehoods circulating."
VIEW FROM THE GROUND
Mr Neo, the podcaster, now has a rule of thumb when it comes to assessing whether a piece of information is reliable: Cross-check and confirm with at least two separate sources.
As a creator of content that's political in nature, he feels more pressure to ensure the accuracy of his output.
“This is why for each episode, we meticulously research before drafting questions for our guests; put sources for the information that we mention in our show notes; and when phrasing certain questions, try our best to make sure that even if taken out of context, it can't be construed as misinformation,” said Mr Neo.

Other Singaporeans CNA spoke to were keenly aware that even without the added complexities of AI and deepfakes, the way content is cut and positioned already carries biases which can colour a consumer's perspectives.
"It's easy to fall for rage-bait headlines and hooks that are set up to drive outrage," said Ms Xu Hui Yan, 34, who works in marketing.
She spoke of the importance of reading and watching in full for better context, and of having access to different information channels and viewpoints.
“Falsities have no space in an election," said Ms Kasthuri. "It misinforms the electorate and additionally, it also can encourage dirty politics, such as mudslinging or appealing to emotions, by over-enthusiastic party supporters.
"(These) should not be given a chance to detract from the real political contestation that should occur.”