Rise in harmful social media content, with more inciting racial, religious tension, violence: Online safety poll
Social media also carried more harmful content than other platforms, such as messaging apps, search engines and gaming platforms, the latest annual Online Safety Poll by the Ministry of Digital Development and Information (MDDI) found.
SINGAPORE: There has been a rise in the amount of harmful content Singaporeans encounter on social media platforms, a government survey has found.
While cyberbullying and sexual content remained the most common, there was a significant climb in content that incites racial or religious tension, as well as violent content, said the Ministry of Digital Development and Information in a press release on Thursday (Jul 25).
Social media also carried more harmful content than other platforms, such as messaging apps, search engines and gaming platforms, the survey found.
MDDI, previously known as the Ministry of Communications and Information, conducted the annual Online Safety Poll in April this year.
It surveyed 2,098 respondents in Singapore aged 15 and above to understand their experiences with harmful online content and the actions they took to address it.
The poll covered the social media services designated by the Infocomm Media Development Authority (IMDA) under the Code of Practice for Online Safety, namely Facebook, HardwareZone, Instagram, TikTok, X and YouTube.
INCREASE IN HARMFUL CONTENT
About three-quarters (74 per cent) of those polled encountered harmful content online, an increase from 65 per cent last year.
Two-thirds of respondents (66 per cent) encountered such content on the designated social media platforms, up from 57 per cent last year.
In comparison, 28 per cent came across such content on other platforms, such as messaging websites and apps, search engines, email, news websites, gaming platforms and app stores, said MDDI. This was similar to last year’s level.
Cyberbullying and sexual content remained the most common types of harmful content on social media, with 45 per cent of respondents encountering them.
However, there was a “notable increase” from last year in encounters with content that incites racial or religious tension (a 13 per cent increase) and violent content (a 19 per cent increase), said MDDI.
Close to 60 per cent of respondents came across harmful content on Facebook, while 45 per cent encountered it on Instagram. Both platforms are owned by Meta.
“While the prevalence of harmful content on these platforms may be partially explained by their bigger user base compared to other platforms, it also serves as a reminder of the bigger responsibility these platforms bear,” said MDDI.
DEALING WITH ONLINE HARMS
When it came to acting on harmful social media content, only a quarter of respondents reported it to the platform, while about one-third blocked the offending account or user.
Eight in 10 of those who tried making reports experienced issues with the reporting process, noted MDDI.
These included the platforms not removing the content in question or disabling the account responsible, not providing an update on the outcome, and also allowing the removed content to be posted again.
Meanwhile, six in 10 respondents simply ignored the harmful content without taking further action.
Commonly cited reasons included respondents not seeing the need to do anything, being unconcerned about the issue, or believing that making a report would not make a difference.
“Given the complex, dynamic and multi-faceted nature of online harms, the government, industry, and people must work together to build a safer online environment,” MDDI said.
Amendments to the Broadcasting Act came into force in February last year, allowing the government to swiftly disable access to egregious content on the designated social media platforms.
The Code of Practice for Online Safety also came into effect in July last year, requiring the platforms to take steps to minimise children’s exposure to inappropriate content.
The platforms are due to submit their first online safety compliance reports by the end of this month, said MDDI.
“It will provide greater transparency to help users understand the effectiveness of each platform in addressing online harms. The IMDA will evaluate their compliance and assess if any requirements need to be tightened,” said the ministry.
Earlier this month, Minister for Digital Development and Information Josephine Teo also announced a new code of practice for app stores, requiring them to implement age assurance measures. More details will be shared in due course, said MDDI.
“Beyond the Government’s legislative moves, the survey findings showed that there is room for all stakeholders, especially designated social media services, to do more to reduce harmful online content and to make the reporting process easier and more effective,” it said.
The ministry also urged users to do their part by proactively reporting harmful online content to the respective platforms.
Workshops, webinars, and family activities are also being organised as part of the IMDA’s Digital for Life movement, to provide users with knowledge and tools to keep themselves and their children safe online, said MDDI.
SPILLOVER FROM CONFLICTS ELSEWHERE
Noting the increase in content inciting racial or religious tension and violent content, Institute of Policy Studies (IPS) principal research fellow Carol Soon said it demonstrates the spillover effects of conflicts happening elsewhere.
“While racial and religious differences are respected in Singapore, Singaporeans are not impervious to the strife and violence from what’s happening in distant shores, like the war in Gaza and racially charged politics,” she said.
Besides social polarisation, there is a concern that malicious foreign actors could exploit such tensions as part of hybrid tactics or cognitive warfare, to weaken a nation amid a geopolitical influence contest, said S Rajaratnam School of International Studies research fellow Muhammad Faizal Abdul Rahman.
Dr Soon added that while it is of concern, it is “unsurprising” that a majority of respondents who came across harmful content simply ignored it. An IPS study found the same trend when it came to encountering false information, she noted.
“I think many netizens associate harmful content as being part of the online landscape,” Associate Professor Natalie Pang, head of the National University of Singapore’s communications and new media department, told CNA.
“Because they accept it as ‘normal’, they do not see the issue.”
Mr Faizal said such apathy may mean that people might not push back against information manipulation by foreign actors with strategic goals that are detrimental to the nation.
Public education is required to counter such apathy, said Dr Soon, who is Vice Chair of the Media Literacy Council.
“We need to drive home the point that harmful content has a negative impact on both the individual and the community, and that every action counts,” she said.
Social media platforms also need to shorten their response time and close the loop with end users who make reports, so they feel they have effectively sought redress, said Dr Soon.
She added that there is cause to consider extending the code of practice to platforms beyond the designated ones, including dating sites, discussion forums and gaming platforms.
Such platforms are closed to public scrutiny, so harmful behaviours like bullying, sexual exploitation and hate speech might go undetected, noted Dr Soon.
Parents and educators also have a huge role to play in engaging and providing children with guidance while adopting a balanced attitude towards technology, said Assoc Prof Pang.
“Online harmful content is persistent and distributed across time and digital space. It is technically impossible to remove such content totally,” said Mr Faizal.
“Just as we need thinking soldiers to defend in the physical domain, we need thinking citizens to defend the cognitive domain.”