Use tech to help fight false information, suggest cybersecurity experts

SINGAPORE: Technology, which has been identified as a key contributor to the speed and scale at which false information spreads, can also help to counter the threat, according to experts from IT security company Trend Micro.

Ms Myla Pilao and Mr Ryan Flores, Trend Micro's director of Core Technology Marketing and senior manager of the Forward-Looking Threat Research Team respectively, wrote in their submission to the select committee on deliberate online falsehoods that social media platforms such as Facebook and Twitter have been used to conduct fake news campaigns. The submission was published on Parliament's website on Friday (Mar 16).

Examples of these were the 2016 US presidential election and the French presidential election the year after, they said. Much of this information was also shared with Channel NewsAsia when the two were interviewed last November.

AN AUTOMATION ARMS RACE?

To fight against such campaigns, Ms Pilao and Mr Flores suggested countering technology with technology. Specifically, they mentioned machine learning and artificial intelligence (AI) as tools that can be applied to verify content and score it on a scale from real to false.

“Computers can be programmed to watch out for sensational content, such as use of hyperbolic language, sensational headlines and misspellings, on sites or pages,” they explained.

“They can then be compiled and analysed to give stories, pages or sites reputation scores that reflect the likelihood that their content can be considered fake news.”
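As a rough illustration of the kind of surface-signal scoring the experts describe, the sketch below weighs hyperbolic vocabulary, all-caps "shouting" and excessive punctuation in a headline. The word list, signals and normalisation are illustrative assumptions, not Trend Micro's actual model:

```python
import re

# Illustrative hyperbolic terms; a real system would learn these from data.
HYPERBOLIC_WORDS = {"shocking", "unbelievable", "miracle", "secret", "exposed"}

def sensationalism_score(headline: str) -> float:
    """Score a headline from 0 (plain) to 1 (highly sensational)
    using simple surface signals."""
    words = re.findall(r"[a-z']+", headline.lower())
    if not words:
        return 0.0
    signals = 0
    # Signal 1: hyperbolic vocabulary.
    signals += sum(1 for w in words if w in HYPERBOLIC_WORDS)
    # Signal 2: shouting (all-caps words of three or more letters).
    signals += sum(1 for w in headline.split() if len(w) >= 3 and w.isupper())
    # Signal 3: excessive punctuation.
    signals += headline.count("!")
    # Normalise against headline length, capped at 1.0.
    return min(1.0, signals / len(words))
```

Scores like these could then feed into the page- or site-level reputation ratings the submission mentions; a production system would combine many more signals and learned weights.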

These automated tools can also be trained to detect whether certain social media posts were likely spammed by bots, or whether an account is itself a bot, the experts said, helping users decide whether to block suspicious posts and accounts.
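One simple family of bot signals is timing: humans post at irregular intervals, while bots often post at an inhuman rate or on a fixed schedule. The heuristic below is a minimal sketch under that assumption (the thresholds and the function itself are illustrative, not a real platform's detector):

```python
from statistics import pstdev

def looks_like_bot(post_times: list[float],
                   max_posts_per_hour: float = 60.0,
                   min_interval_stdev: float = 2.0) -> bool:
    """Flag an account whose posting pattern is suspiciously machine-like:
    either an inhumanly high posting rate, or near-identical gaps between
    posts. Timestamps are in seconds and assumed sorted ascending."""
    if len(post_times) < 3:
        return False  # too little data to judge
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    span_hours = (post_times[-1] - post_times[0]) / 3600.0
    rate = (len(post_times) - 1) / span_hours if span_hours > 0 else float("inf")
    # Near-zero spread in intervals means posts arrive on a fixed schedule.
    return rate > max_posts_per_hour or pstdev(intervals) < min_interval_stdev
```

Real detectors combine timing with content similarity, network structure and account metadata; timing alone produces false positives (for example, legitimate scheduled news feeds).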

“Suspicious posts and the accounts that spread fake news can also be reported to the respective platform owners so they can take action,” they wrote.

Besides social media posts and accounts, Ms Pilao and Mr Flores also pointed out that these tools can detect when pictures, videos or audio files are being used to support false claims. An example is to check the Exchangeable Image File Format (EXIF) data to see if a photo or video actually coincides with the date of the event it is being linked to or provided as evidence of, they explained.
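The EXIF date check they describe can be sketched as below. The function name and tolerance are illustrative assumptions; real tooling must also handle missing or deliberately stripped metadata, since EXIF timestamps are easy to edit:

```python
from datetime import date, datetime, timedelta

def exif_matches_event(exif_datetime: str, event_date: date,
                       tolerance_days: int = 1) -> bool:
    """Check whether a photo's EXIF capture timestamp (the standard
    'YYYY:MM:DD HH:MM:SS' format) plausibly matches a claimed event date.
    A small tolerance allows for time zones and late-night captures."""
    taken = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S").date()
    return abs(taken - event_date) <= timedelta(days=tolerance_days)
```

For example, a photo stamped in 2014 that is offered as evidence of a 2016 event would fail this check.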

“Most of the tools cited above are not 100 per cent foolproof but they are considered a promising start,” the Trend Micro executives said.

Ms Pilao, who gave an oral representation at the public hearing on Friday, acknowledged in response to a question from committee member Chia Yong Yong that these tools are "complementary".

She explained that these AI and machine learning tools can act as a "sensor" to detect fake news. However, the information needs to be acted on, whether by the platform operator or user. "Responsibility lies with the user," she added.

The Trend Micro executive also suggested that there could be a centralised reporting body for the public to flag false information, and pointed out that this could be taken on by the Singapore Computer Emergency Response Team (SingCERT).

MORE DIFFERENTIATION

Another written submission, by Associate Professor Simon Hegelich and researcher Morteza Shahrezaye, both of the Professorship for Political Data Science at the Technical University of Munich, drew on lessons from the implementation of the Network Enforcement Act (NetzDG) in Germany.

The NetzDG law, which was passed last June, requires social media sites to move quickly to remove hate speech, fake news and illegal material.

The academics noted the law has attracted criticism from various quarters, one being that social media companies are the ones responsible for deleting content that is obviously against the law within 24 hours of being notified. The argument is that the judiciary is the only body that should decide what is against the law, while there are also fears these platforms may delete many questionable posts simply to avoid fines, they said.

To avoid such scenarios, they said a better solution for the trade-off between private freedom and public interest could be to have stronger differentiation between the authorship and the distribution of illegal content.

Social media companies should not be responsible for illegal content that is not produced by them but by users, they argued.

At the same time, they acknowledged that the core business of social media companies is “the distribution of users’ content”.

This distribution could then be regulated, for example, with fines for each additional view an illegal post attracted because of the platform’s algorithms. Users who share the illegal content via shares or retweets would also be liable, they added.

“Of course, the trade-off between private freedom and public interest does not vanish with this proposal. But it might be a way to fine-tune it in a more appropriate way,” the academics said.

They added that while many things should be allowed to be said, such speech should remain in the private sphere.

Clarifying questions from select committee member K Shanmugam, Mr Shahrezaye explained in his oral representation that this means it is "not an automatic right" to have one's views heard in the public sphere.

“Everyone who is engaged in public discourse bears a responsibility that goes beyond their individual existence in the private sphere,” they wrote. 

Mr Shahrezaye also made the point that social media platforms should be more transparent with the public and the government, and explain the "goal" of their algorithms, which determine how one post is ranked above another.

To this, Mr Shanmugam, who is the Law and Home Affairs Minister, said this is an "age-old question", and that for these companies the main purpose of the algorithm is to maximise commercial profitability.

"These are age-old issues with commercial companies. This is a classic case where their commercial interests may conflict with what is in the public's interests," Mr Shanmugam said.

"We will then have to decide what is in the public's interest, and see how their commercial interests and the public interest can be made to coincide. That is the task of every government."

Source: CNA/kk