
Commentary: How video-streaming platforms feed hate and sow divisions and what we can do about it

Understanding how YouTube and other Internet sites are increasingly becoming places for extremism to thrive can help us develop a more holistic approach to vigilance, says Gareth Tan.

SINGAPORE: Any story about terrorism usually begins with a chapter on radicalisation – on how individuals stumbled upon something – a video, a picture, even a made-up story – that reshaped their worldview.

Here in Singapore, with the explosion of the Internet and the proliferation of smartphones, the means of radicalisation have become more diffuse and more challenging to detect.

Fourteen of the 16 individuals issued terrorism-related orders over the past two years had been self-radicalised online, according to a recent report by the Internal Security Department.

Online platforms, including social media, communications and games, have been identified as key vectors for self-radicalisation and domestic terrorism threats in Singapore, providing spaces for recruitment and propaganda.

This is worrying when about 4.96 million people in Singapore use social media - around 85 per cent of the total population.

Prime Minister Lee Hsien Loong emphasised the continuing threat of radicalisation in his commentary last weekend. His call for vigilance should echo strongly in the online spaces that dominate Singaporeans’ daily experiences.

How do radicals get that way? (Photo: iStock)

RABBIT HOLES OF RADICALISATION?

Video-streaming platforms have received significant attention in recent scholarship as an under-studied but highly effective space for radicalisation. YouTube, the most visible video-hosting platform, illustrates the challenges inherent in practising vigilance.

The platform is seen as a powerful potential vector for self-radicalisation, with 2 billion monthly active users and a recommendation algorithm that industry watchers say prioritises high traffic and user engagement.

This makes it easy for general audiences to be recommended controversial, divisive, and misleading content, claim the platform’s critics. They point to acknowledgements by YouTube’s Chief Product Officer in 2018 that more than 70 per cent of watch time is spent on videos the algorithm recommends.

Critics also highlight that the New Zealand government found that the 2019 Christchurch shooter had been at least partially radicalised on YouTube, and that YouTube was a known recruitment venue for terrorist groups like Islamic State.

Indeed, YouTubers espousing philosophies in professionally crafted long-form videos about social and political issues have become significant generators of radical ideology.

Many of these content creators capitalise on outrage at politically contentious issues — like unemployment or immigration policy — to galvanise radical sentiment and stoke popular unrest. Their rhetoric is often manipulative, playing fast and loose with the truth.

This format of video has become a hallmark of politically extreme commentators and conspiracists, and one that scores highly on the YouTube algorithm’s engagement metrics. These creators often form communities, consciously directing viewers to ideologically similar content in ways that train YouTube’s algorithm to continue recommending similar videos to consumers.

This process can trap users in echo chambers, where consumption patterns inform recommendation algorithms, which recommend ever more radical videos.

Concerns have been raised over YouTube’s role as a gateway into this rabbit-hole of misinformation, hate, and eventually violence.

Adding to these concerns are findings from a Mozilla Foundation survey of 37,000 users that YouTube regularly recommends problematic content, including extremist propaganda, to general audiences.

LIBRARIES OF RADICAL CONTENT

However, the adjustments YouTube has continually made to its algorithm since 2016, in response to concerns that it promotes radicalisation, may have mitigated its role as an active gateway to radical content and extremism.

In August, US researchers found little evidence that YouTube’s recommendation algorithm drove users to politically extreme content.

The study looked at 300,000 American accounts and their browsing behaviour from 2016 to 2019, concluding that although more Americans are viewing “anti-woke” content opposing progressive intellectual and political stances, YouTube recommendations contributed only a fraction of these views — as was also the case for videos featuring more extreme, potentially violent far-right ideology.

A man using his smartphone at night. (Photo: iStock)

Analyses further indicate, in line with earlier research in 2019, that news videos recommended by YouTube were much more likely to originate from centrist or slightly left-leaning sources.

The study concludes that self-radicalised consumers of far-right ideology were much more likely to have been drawn to extremist content on YouTube from elsewhere, using search engines, websites, and other platforms rather than starting on and staying within YouTube.

This conclusion is in line with investigations conducted into the Christchurch gunman, which found that he moved within an expansive radicalising echo-chamber involving several imageboards and Internet forums while also using YouTube.

These findings have fresh implications for law enforcement. Because radicalisation occurs within larger ecosystems beyond the reach of any one site, no matter how dominant, a silver-bullet solution targeting a single platform is bound to fail.

It turns out that video-streaming platforms act more like passive libraries of radical content; YouTube just happens to be the most prominent one.

WORKING WITH PLATFORMS TO ADDRESS SELF-RADICALISATION

Weeding out radicals before they can carry out their terror plans has long been the holy grail for counterterrorism efforts. There are opportunities to do so by collaborating with digital platforms.

But where attention may have focused on dealing with YouTube’s algorithms, motivating the platform to enforce its community guidelines may be more productive, as this homes in on getting tech giants to clean up their repositories of problematic content.

This could entail incentivising and supporting moves to hire more moderators or applying technology to improve moderation.

Today, YouTube has at least 10,000 moderators, engineers and policy specialists worldwide reviewing content and working on content moderation technologies – a resource worth tapping.

A man uses a cellphone. (File photo: AP/Jenny Kane)

Significantly, authorities could share information with platform content moderators, community moderators, trusted content creators and platform users, to help them gain a better understanding of how past instances of self-radicalisation developed, and inform efforts to identify new, potentially problematic content.

Information sharing can also be reciprocal, as community and content moderators are valuable sources of intelligence on extreme content.

They are regularly exposed to questionable videos and can provide insights on how such material develops and propagates within their platform’s ecosystem, as well as how creators attempt to fly under the radar and avoid moderation.

Content creators acting as community leaders can additionally help distribute rehabilitative messaging – an approach research has shown to undermine the manipulative effects of radical doctrine.

Cultivating such alliances can give the government a ground-level view of trends in digital platform spaces and help it keep pace with the amorphousness of these online communities.

Digital platforms would in turn be able to demonstrate good corporate citizenship by proactively convening and organising such opportunities.


MAPPING EXTREMIST COMMUNITIES

Through this mutual exchange of information, both government agencies and digital platforms like YouTube may also be able to achieve a more dynamic understanding of how extremism develops, aiding in efforts by all stakeholders to curb self-radicalisation.

An example of how this nuance matters can be found in comparing the “anti-woke” community with the far-right community; the two are sometimes treated interchangeably because they use similar vocabulary and memes, and draw from similar demographics.

However, research on their YouTube consumption indicates that “anti-woke” and far-right communities are two distinct entities that differ in subtle but important ways.

“Anti-woke” and far-right content might both address a perceived overextension of social justice movements.

However, while anti-woke commentators might advocate for a dialling back of woke rhetoric, far-right extremists might proceed to make dangerous inferences about ideological “invasions” or make divisive assertions regarding the role of particular groups in promoting a woke agenda.

Enforcement resources should focus on the smaller but much more extreme far-right elements instead of getting caught up in the more widespread and less immediately problematic anti-woke content.

In Singapore’s multicultural context, understanding exchanges on multilingual channels can also help identify non-English sources of divisive narratives that promote regressive attitudes towards other cultures and fuel societal tensions.

Just this month, research firm FireEye uncovered large-scale influence operations promulgated through Chinese-language ecosystems on platforms like Bilibili, Weibo and WeChat.

Additionally, the growth of socially divisive rhetoric in Hindi, Sinhala and Tamil has contributed to communal violence in India and Sri Lanka.

HOLISTIC VIGILANCE

The task of curtailing radical content can seem like fighting a multi-headed hydra. Harsh attempts to eliminate threats only drive extremist communities underground briefly, before they re-emerge, more numerous than before, on opaque platforms like Telegram.

Maintaining open communication and information sharing between government and industry is thus vital to striking a balance between controlling the spread of radical ideology and managing its migration to darker corners of the Internet.

Situating video-streaming platforms like YouTube within wider networks of radicalisation can create opportunities for collaboration with platforms to develop innovative countermeasures.

This grows all the more important as more content, conversations and communities migrate online, including to audio-based platforms like Clubhouse, which are also grappling with the growth of extremism.

With a more holistic, community-based and industry-aligned approach, however, Singapore can remain resilient and responsive to new threats of radicalisation.

Gareth Tan is an analyst at Access Partnership, a public policy consultancy focusing on technology. These are his own views.

Source: CNA/sl
