SINGAPORE: Technology companies need to increase their transparency and improve accountability by looking at how they prevent the spread of false information and ensuring their digital advertising tools are not used by suspicious actors, the Select Committee on Deliberate Online Falsehoods said in its report released on Thursday (Sep 20).
There should also be laws in place to make sure these companies contribute to a clean Internet ecosystem, the report added.
The committee said it received evidence that the design of tech companies’ online platforms has a direct impact on the dissemination of both online falsehoods and quality journalism, and that the content allowed on these platforms can considerably influence human behaviour.
It noted the 2018 Reuters Institute Digital News Report finding that 71 per cent of respondents across 23 countries, including Singapore, were of the view that tech companies need to do more to separate what is real from the fake on the Internet.
“Given the amount of control technology companies have over the design of their platforms, through which they have profited greatly, the committee is of the view that technology companies need to do more,” the report said.
For instance, they can take proactive action to prevent and minimise the amplification of online falsehoods on their platforms by prioritising credible content and, by the same measure, deprioritising proven falsehoods to limit circulation. They should also label or shut down accounts and networks of accounts that are designed to amplify online falsehoods, it added.
They should also ensure their advertising tools and services do not incentivise or aid the spread of false information, for example by disallowing the placement of advertisements on sites that propagate online falsehoods, or the use of advertising services by such sites.
Tech companies should also take reasonable steps to detect and bar suspicious actors from using digital advertising tools, it added.
As for user data, tech companies need to prevent such information from being used to manipulate people. One possible measure could be to inform users of what their data is being used for, the committee said.
These companies should demonstrate their accountability to users, the public and the Government by undertaking regular voluntary reporting and independent audits, the committee proposed.
"Since the hearings in March, you’ve seen that tech companies have been engaging in discussions with legislators ... around the world. I think there’s increasing recognition on all sides that there has to be responsibility on the part of tech companies and that governments have to intervene to ensure that responsibility. We will, of course, engage with the tech companies," said Home Affairs and Law Minister K Shanmugam at the press conference on Thursday where the committee presented its findings.
Such audits should cover areas like how their platforms and products have been used to spread online falsehoods and the measures taken to address the problem as well as how effective these measures are, it added.
However, evidence collected via the written and oral representations supported the view of experts and observers cited by the Institute of Policy Studies’ Dr Carol Soon and Mr Shawn Goh, who had “expressed doubt that the technology giants would have the will and incentive to self-regulate effectively simply because they are monopolies (or near monopolies)”, the report said.
The committee recommended the Government consider both legislation and other ways of regulating these tech companies to achieve the objectives mentioned earlier.
Laws will be needed in particular for measures to be taken in response to an online falsehood since “Facebook, Google and Twitter have a policy of generally not acting against content on the basis that it is false”, it added.
It also recommended the Government consider whether targeted advertising and the use and collection of personal data on online platforms for micro-targeting should be regulated.
Additionally, the laws can be complemented by a voluntary code of practice or guidelines to tackle online falsehoods, which can be developed jointly by the Government, tech companies and other industry stakeholders.
LAWS TO STEM SPREAD OF FALSE INFORMATION
But what happens if falsehoods have been spread online?
The committee said “strong action is needed” to ensure the Internet does not remain a Wild West, and proposed a range of measures to disrupt the spread of such falsehoods and change the digital ecosystem sustaining them.
These measures aim to stem the spread of online falsehoods, deter their creation and spread, and prevent the abuse of digital tools and platforms to those ends, it elaborated.
The committee recommended the Government have the powers to swiftly disrupt the spread and influence of online falsehoods. Specifically, the objectives include providing access to and increasing the visibility of corrections, limiting or blocking exposure to online falsehoods, and discrediting the sources of online falsehoods.
Legislation will, in turn, need to ensure that these measures break virality by taking effect within hours, and that adequate safeguards are in place to guarantee due process and the proper exercise of power, giving the public assurance of the integrity of the decision-making process.
Another recommendation is to implement monitoring and early-warning mechanisms to help decide when and how to intervene to stop the spread of misinformation.
The committee also mooted powers to halt the flow of digital advertising revenue to those who spread online falsehoods and to confiscate the financial benefits of those who do.
“The committee is not calling for the criminalisation of all online falsehoods,” it said.
“Consistent with the calibrated approach the committee has recommended, criminal sanctions should be used only against purveyors of online falsehoods that meet a prescribed threshold.”
It also called on the Government to identify additional measures needed to safeguard election integrity and implement the necessary measures, including legislation.
The committee said it did not receive any detailed analysis on whether Singapore’s electoral laws are “sufficiently comprehensive and modernised” to combat the sophisticated methods employed today such as the use of dark ads, fake accounts or the infiltration of local social media communities to influence voters.
Speaking at the press conference where the committee presented its findings, committee member Chia Yong Yong said the concern is not just whether deliberate falsehoods affect the outcome of an election, but how they undermine the democratic process.
"When you have wrong information and deliberately false information at that, the people do not have the correct information to make a proper decision, to conduct a rational debate. There could be doubts cast on the electoral process or the electoral outcome ... There could also be that occasion for the creation of greater suspicion and greater divide within the community," said Ms Chia.
Anecdotes of falsehoods impacting elections in other countries, such as the United States' 2016 election, have shown that falsehoods can be weaponised, said committee member Edwin Tong.
"Views change overnight because of what's put on the Internet. ... Emotions do run high, issues become hotly debated in a very partisan manner and significant divides can be exploited if one is not careful. So it is in that kind of context and scenario that we think the institution of an election has to be protected in a particularly special way," Mr Tong said.
TECH GIANTS PLEDGE TO WORK WITH GOVT
Commenting on the findings, a Google spokesperson told Channel NewsAsia in a statement the company takes the issue of false information "seriously" and is committed to addressing it in collaboration with governments, the media and civil society.
"We appreciate the efforts of the committee in compiling this report and look forward to work with its members and the Singapore Government to address this important issue," the spokesperson said.
Facebook, which took the brunt of questions during the public hearings, said in a statement that it appreciates the committee's extensive report and shares the same commitment to reduce the spread of deliberate online falsehoods.
"Over the last year and a half, we have invested in technology and people to combat false news and disrupt attempts to manipulate civic discourse," a spokesperson said.
"This work includes removing fake accounts, disrupting the financial incentives behind false news, reducing the posts people see that link to low-quality Web pages, partnering with threat intelligence experts, and promoting digital and news literacy."
The spokesperson added that since the public hearings, the social media giant has introduced a new policy to remove false news that has the potential to lead to offline violence and provided more transparency on advertising.
Twitter, too, said on Thursday that it is committed to keeping people informed about what is happening in the world and that it "care[s] deeply" about the issues of misinformation as well as disinformation, and their potentially harmful effects on civic and political discourse.
"There are many complexities at play, and societies across the world have to work in concert to understand and better assess trends in the information ecosystem. We look forward to the Singapore Government's continued engagement with industry on the full range of approaches to address these issues," the company said in a statement.