As fake news floods social media over the Israel-Hamas war, observers call for more regulatory pressure on tech firms
Observers say governments need to apply more regulatory pressure on tech firms to rein in false narratives, but acknowledge the complexities of real-time content moderation.
Misinformation and fake news are spreading like wildfire on social media as the war between Israel and Hamas escalates, calling into question the ability of prominent platforms to combat falsehoods.
Among the fabrications disseminated online were footage from a video game passed off as a scene from a Hamas attack, and an image purportedly showing Israeli soldiers captured by Hamas that was actually from a military exercise in Gaza last year.
Several posts on Facebook, TikTok, and X – formerly Twitter – displayed a doctored White House document showing an allocation of US$8 billion in military assistance to Israel.
Videos with fake English captions of Russian President Vladimir Putin warning the United States not to help Israel have also surfaced. The videos were in fact months old, showing Mr Putin speaking in Russian about Ukraine.
Researchers say the scale and speed at which misinformation spread online following Hamas' assault on Israel were unlike anything seen before, making it increasingly difficult for users to sort fact from fiction.
WHY DOES FAKE NEWS SPREAD SO FAST?
“Misinformation and disinformation spread faster than factual information,” said Assistant Professor Saifuddin Ahmed, from the Wee Kim Wee School of Communication and Information at Nanyang Technological University (NTU).
This is because false narratives are often sensational, and play on users’ emotions rather than cognition, he explained on CNA’s Asia Now on Wednesday (Oct 11).
“These fabrications are based on fear, anger, sadness or on our morals. When this happens, research shows that we stop thinking and don't critically analyse the information, but rather, make quick decisions in forming opinions or sharing the content.”
Misinformation is defined as incorrect or inaccurate information, regardless of any intent to mislead, while disinformation is false or misleading information deliberately spread with an intent to deceive. The latter is typically used by malicious actors – both state and non-state – to sow fear and distrust, and to sway public opinion.
While the use of disinformation in warfare is not new, with such tactics documented since before the First and Second World Wars, the reach of social media today makes it more pressing and worrying, Prof Saifuddin said.
“(Social media is) precisely why we will see a growth in the use of disinformation as a psychological warfare worldwide,” he said. “Also, what makes social media dangerous is, the algorithms can amplify the spread of such content through various means.”
These include the algorithmic preference of trending content, users’ own cognitive biases, and echo chambers, where users encounter views similar to their own and therefore reinforce their preexisting beliefs, he said.
IS X MORE PRONE TO DISINFORMATION?
While tech firms across the board are struggling to contain the flood of false information around Israeli and Palestinian hostilities, Elon Musk's social media site X appears to have the worst outbreak of fake news related to the war.
The European Union has warned the platform to tackle the spread of disinformation or face hefty fines.
The slew of changes that Mr Musk made after taking over X in October last year, including cutting a large number of content moderators, led to the chaos, according to experts.
“As a result of reducing funding, and firing people who were responsible for content moderation on Twitter, X has resulted in … opening many avenues for disinformation,” said Dr Olga Boichak, a digital cultures lecturer in the media and communications discipline at the University of Sydney.
WHAT CAN USERS DO?
Prof Saifuddin said there is an onus on social media users to take steps to verify the information they are consuming on such platforms.
“First, verify the credibility of sources and evaluate the content with a rational mind. Look for logic in the content, identify the misleading narratives and question if there can be a hidden propaganda behind,” he said.
“Second, report suspected misinformation. If you suspect that the information is not true, report it. Social media platforms will review the request and if the information is true, they will let the content stay and so there is no harm done.
“Finally, and perhaps the most important step, we must recognise and acknowledge our own biases before we begin to evaluate or share the content with our friends and family.”
CHALLENGES IN REGULATION
Experts say there are multiple challenges in addressing the misinformation phenomenon on social media platforms.
For one, real-time fact-checking and efforts to correct or debunk fake information are difficult as they spread so rapidly.
Savvy users have also learnt to bypass content moderation filters, such as by replacing flagged keywords.
Prof Saifuddin said governments need to apply regulatory pressures as social media platforms, being profit-driven business entities, are unlikely to prioritise user safety and content accuracy if left to their own devices.
“Better regulations will only be implemented through regulatory pressures, through authorities, governments and third parties,” he said.
“By themselves, social media companies will only care about markets where their business will be hit. They may ignore other contexts because they don’t matter.”
It is important for social media platforms to have better strategies for labelling misinformation and disinformation, and for limiting the propagation of false material that has already scaled, he added.
Dr Boichak said the implementation of laws surrounding social media platforms and the content they allow is crucial in the age of technological advances and increased dependence on online information.
“This will have increasingly important implications on our understanding of wars and on our ability to participate in some of the discourse around them,” she said. “(This will also affect) the ability of the nefarious actors to manipulate some of the discourse surrounding those events.”