Fake news, propaganda and the Internet: How trust is being broken online

Not everything posted online is true, and telling the fake from the real can be a challenge.

A smartphone user shows the Facebook application on his phone. (REUTERS/Dado Ruvic)

SINGAPORE: "There are lots of people who believe everything posted on the Internet is true."

The observation, made by Trend Micro's senior manager for future threat research Ryan Flores, is just one facet of a wider issue of the emergence of "fake news", people's trust in the information they get online and how public perception can be shaped using existing marketing tools. 

With the advent of social media acting as "propellers" for propaganda, and the use of openly available tools such as search engine optimisation (SEO) and click farms, the shaping of public perception online is no longer just a hypothesis.

And it can happen anywhere, since the World Wide Web is a borderless entity.

Think about it. When you read the news, or see someone on your Facebook or Twitter feed commenting on it, do you take the information shared at face value, however outlandish or inexplicable it may seem, simply because it appeared on your timeline?

Quite often, the answer is likely to be 'yes', especially since you would have curated the Twitter accounts you follow, and your Facebook feed consists of posts by people you know, giving it all a veneer of trustworthiness.

According to a research paper by IT security company Trend Micro, the term “fake news” has become increasingly common during the past year. It said that thanks to Internet connectivity and digital platforms such as Facebook and Twitter that make it possible to share and spread information, “traditional challenges such as physical borders and the constraints of time and distance do not exist anymore”.

“Unfortunately, it also makes it easier to manipulate the public’s perception of reality and thought processes, resulting in the proliferation of fake news that affects our real, non-digital environment,” the research paper stated.

It added that the people behind the propagation of fake news are exploiting the very short attention span of today's readers in the digital era.

"There's no need for an article to be sensible, complete or factual; a sensational headline will achieve the objectives just as well." 

PROPAGANDA, IN THE INTERNET ERA

The most talked-about manifestation of this phenomenon is how fake news might have swayed the 2016 US presidential election, and allegations of how a state actor, in this case Russia, played a key role in propagating misinformation via social networks.

In fact, executives from Facebook, Twitter and Google recently testified before US lawmakers and publicly acknowledged their role in Russia’s influence on the presidential campaign. The New York Times reported last month that the two-day hearing served as an “initial public reckoning” for the Internet giants, and for them to confront “how they were used as tools for a broad Russian misinformation campaign”.

The misinformation campaign during the US presidential election is a wider, state-level example of how public perception can be shaped and constructed.

FireEye manager for Information Operations Analysis Lee Foster shed more light on activities of a similar scale in this part of the world, saying that the major suspect in nation-backed information operations in East Asia would be China. That said, he pointed out that FireEye does not have any examples of such operations that compare with Russia- or Turkey-style operations, “which tend to be troll-populated and noise generating”.

“Where fake news is concerned, there have been a few notable instances in China in which false reports were seemingly leveraged to tank a company,” Mr Foster said. “For example, there was a campaign that falsely accused a seaweed provider of having plastic flakes in their product.”

As such, “fake news” and similar misinformation campaigns don’t just happen at the nation-state level. They also occur on a smaller, non-political scale, depending on one’s motivations.

(Infographic: Trend Micro)

During an interview with Channel NewsAsia, Trend Micro director for TrendLabs Myla Pilao said that with the advent of social media, which she described as “one of the biggest propellers for propaganda”, the ability to manipulate public opinion on a large scale is now achievable.

The motivations for such activities in the Asia Pacific region could be political or financial, or to discredit someone, usually a public figure or influencer such as a politician, Ms Pilao elaborated.

CASE STUDY: DISCREDITING A JOURNALIST

In one scenario cited in the research paper, Trend Micro illustrated how a journalist could be discredited for US$55,000. Here, the attacker aims to silence the journalist from speaking out or publishing a story that could be detrimental to the attacker’s agenda or reputation, and does so by mounting campaigns against the journalist.

The scenario assumes the journalist is popular and digitally savvy, with a Twitter account that has 50,000 followers and a Facebook account with 10,000 friends. He or she also runs a blog that publishes at least three articles a week, each garnering around 200 comments.

In this example, the attacker can wage a four-week “fake news campaign” to defame the journalist using services available in underground online marketplaces. Fake news unfavourable to the journalist can be bought once a week, and in turn promoted by purchasing 50,000 retweets or likes and 100,000 visits. This would cost around US$2,700.

Alternatively, the attacker can buy four related videos and turn them into trending videos on YouTube, each of which can sell for about US$2,500 per video.

To create the appearance of credibility, comments can be bought in batches of 500 - with 400 positive, 80 neutral and 20 negative per batch. For about US$1,000, this translates to 4,000 comments, the research paper said.

With this air of believability in place, the attacker can launch the smear campaign: Poisoning a Twitter account with 200,000 bot followers will cost US$240, and having 12,000 comments, most bearing negative sentiment and references to fake stories against the reporter, will cost US$3,000. Dislikes and negative comments on the journalist’s articles, promoted with 10,000 retweets or likes and 25,000 visits, can cost US$20,400 on underground sites, it added.

“The result? For around US$55,000, a user who reads, watches and further searches the campaign’s fake content can be swayed into having a fragmented and negative impression of the journalist,” the researchers wrote.

“A more daunting consequence would be how the story ... or points the journalist wanted to divulge or raise will be drowned out by a sea of noise fabricated by the campaign.”
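For a rough sense of the arithmetic, the sketch below simply tallies the underground-market prices quoted above. It is a minimal illustration, assuming the weekly fake-news purchase recurs across all four weeks; the line items quoted in this article sum to less than the paper's US$55,000 headline figure, which presumably covers additional services not itemised here.

```python
# Minimal tally of the underground-market prices quoted above.
# Assumption: the US$2,700 fake-news promotion is bought once a week
# for the full four-week campaign (the article says "once a week").

WEEKS = 4

weekly = {
    "fake article + 50,000 retweets/likes + 100,000 visits": 2_700,
}

one_off = {
    "four trending YouTube videos at US$2,500 each": 4 * 2_500,
    "4,000 bought comments (mostly positive)": 1_000,
    "200,000 bot followers on a Twitter account": 240,
    "12,000 mostly negative comments citing fake stories": 3_000,
    "dislikes/negative comments promoted with 10,000 RTs, 25,000 visits": 20_400,
}

total = WEEKS * sum(weekly.values()) + sum(one_off.values())
print(f"Itemised total over {WEEKS} weeks: US${total:,}")
# Prints US$45,440 - short of the quoted US$55,000, so the paper's
# figure evidently includes services beyond those listed in this article.
```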

(Infographic: Trend Micro)

It’s not a hypothetical situation either.

Mexican journalist Alberto Escorcia shared his experience of this with Amnesty International’s Tanya O’Carroll in January this year, saying in her report that the “techno-censorship” in the country is a “huge problem”.

“On an average day, I see two or three trending topics generated by trolls. Anywhere between 1,000 and 3,000 tweets a day. Many operate as part of organised ‘troll gangs’ who are paid to make stories go viral or launch campaigns discrediting and attacking journalists,” Mr Escorcia said then.

He added: “If they don’t kill you, they ruin your life. The trolls generate a constant climate of fear, and it stops people from publishing.”

And this fake news problem is unlikely to disappear any time soon. Trend Micro’s Flores, for one, said he doesn’t see this phenomenon “diffusing in the next five years”.

COUNTERING FAKE NEWS

The Trend Micro research paper noted that since fake news has become a "pervasive global problem", governments and organisations are now taking measures to mitigate its further proliferation.

It cited Russia's Ministry of Foreign Affairs as an example: the ministry has set up a website that lists and debunks publications containing false information about the country. A similar resource is available on the website of the European Union's External Action Service, where content considered part of misinformation campaigns is reviewed every week, it added.

At home, the Singapore Government has a dedicated webpage, Factually, that provides the official stance on hot-button issues. For instance, the latest posts include "Will the latest fare review exercise result in higher public transport fares?" and "Can you develop diabetes?"

Internet giants Google, Facebook and Twitter, as well as China's Sina Weibo and Tencent's WeChat, are also taking steps to tackle the issue, and this is essential considering "social media is arguably the lifeblood of any successful cyber propaganda and fake news campaign", the research paper said.

Facebook, for example, said on Oct 5 it was starting a new test to give "people additional context on the articles they see in News Feed". For links to articles shared in News Feed, it is testing a button that people can tap to access additional information without needing to go to another site, it said.

This feature, Facebook said, is designed to provide people some of the tools they need to make an informed decision about which stories to read, share and trust. 

Singapore Prime Minister Lee Hsien Loong on Thursday (Nov 8) also commended the company for tackling the fake news problem. In a post featuring his meeting with COO Sheryl Sandberg, Mr Lee said: "Glad that Facebook is committed to working with partners to deal with this very pertinent and serious problem."

However, not all of its initiatives to combat fake news work.

According to the BBC, the social media giant ended a test that promoted comments containing the word "fake" to the top of news feeds after it left many who had access to the test "frustrated". 

"The comments appeared on a wide range of stories, from ones that could be fake to ones that were clearly legitimate. The remarks, which would appear at the top of the comments section, came from a variety of people but the one thing that they had in common was the word fake," BBC reported.

As for Weibo and WeChat, the research paper said the service providers have implemented mechanisms such as credibility ratings that allow users to flag specious content on the platforms, and to suspend accounts found posting fake content.
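As an illustration of how such a mechanism might work in principle, here is a toy sketch. The thresholds and logic are invented for this example; the platforms' actual systems are proprietary and not described in the paper.

```python
# Toy flag-and-suspend logic, loosely modelled on the mechanisms the
# paper attributes to Weibo and WeChat. All thresholds are assumptions.
from collections import defaultdict

FLAGS_FOR_REVIEW = 50      # user flags before a post goes to human review (assumed)
FAKES_FOR_SUSPENSION = 3   # confirmed fake posts before an account is suspended (assumed)

post_flags = defaultdict(int)      # post_id -> number of user flags
account_fakes = defaultdict(int)   # account_id -> confirmed fake posts

def flag_post(post_id: str) -> bool:
    """Record a user flag; True means the post should go to review."""
    post_flags[post_id] += 1
    return post_flags[post_id] >= FLAGS_FOR_REVIEW

def confirm_fake(account_id: str) -> bool:
    """Record a reviewer-confirmed fake post; True means suspend the account."""
    account_fakes[account_id] += 1
    return account_fakes[account_id] >= FAKES_FOR_SUSPENSION
```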

However, the researchers said the first line of defence against fake news is the users themselves.

"In a post-truth era where news is easy to manufacture but challenging to verify, it’s essentially up to the users to better discern the veracity of the stories they read and prevent fake news from further proliferating," they wrote.

They also highlighted some signs of fake news users can look out for:

  • Hyperbolic and clickbait headlines
  • Suspicious website domains that spoof legitimate news media
  • Misspellings in content and awkwardly laid-out websites
  • Doctored photos and images
  • Absence of publishing timestamps
  • Lack of author, sources, and data 

Other recommendations include reading beyond the headline, cross-checking whether the story has been reported by other media outlets, and "getting out of the filter bubble" by reading news from a broader range of reputable sources - "stories that don't align with your own beliefs don't necessarily mean they're fake", they said.
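The checklist above lends itself to simple automation. The toy sketch below flags a few of those surface signals; every keyword and rule in it is an illustrative assumption, not a detector from the Trend Micro paper, and no such heuristic substitutes for human judgment.

```python
# Toy checker for the surface signs of fake news listed above.
# Keywords, domain patterns and rules are illustrative assumptions only.
from urllib.parse import urlparse

CLICKBAIT_WORDS = {"shocking", "exposed", "you won't believe", "miracle"}

def warning_signs(headline: str, url: str,
                  author: str | None, timestamp: str | None) -> list[str]:
    signs = []
    lowered = headline.lower()
    if headline.isupper() or any(w in lowered for w in CLICKBAIT_WORDS):
        signs.append("hyperbolic or clickbait headline")
    domain = urlparse(url).netloc
    # Spoofed domains often mimic a known outlet with a small variation.
    if domain.endswith((".com.co", ".co.nr")):
        signs.append("suspicious domain spoofing a legitimate outlet")
    if author is None:
        signs.append("no named author, sources or data")
    if timestamp is None:
        signs.append("no publishing timestamp")
    return signs

# Example with a made-up story that trips several of the checks:
print(warning_signs("SHOCKING truth EXPOSED", "http://cnn.com.co/story",
                    author=None, timestamp=None))
```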

"Ultimately, the burden of differentiating the truth from untruth falls on the audience. The pace of change has meant that acquired knowledge and experience is less useful in finding the truth on the part of the public," the researchers wrote.

"Applied critical thinking is necessary not only to find the truth, but to keep civil society intact for future generations."

Source: CNA/kk
