SINGAPORE: In addressing the issue of fake news, new prescriptive laws are not an ideal solution, according to industry association Asia Internet Coalition (AIC), whose members include social media companies like Google, Facebook and Twitter. Instead, a “stringent self-regulatory approach”, executed in coordination and cooperation with the authorities, will have a better outcome.
The association made this point in its written representation to the Select Committee on Deliberate Online Falsehoods. It gave oral evidence to the committee on Thursday (Mar 22), together with representatives from Google, Facebook and Twitter.
Explaining its rationale, AIC’s managing director Jeff Paine said in his written representation that prescriptive legislation will not address the issue effectively, given the highly subjective, nuanced and difficult task of discerning whether information is true or false.
“There is also a real risk of compromising freedom of expression and speech via legislative tools that may cast a wider net than intended and end up censoring legitimate speech due to the challenges of enforcing such a legislation,” he added.
Mr Paine also noted that prescriptive legislation would require Internet companies to act as a “judge” to determine what is allowable under the law. This, he said, is a function that should be served by the courts, using the “comprehensive legislation” that currently exists.
EXISTING SINGAPORE LAWS "COMPREHENSIVE ENOUGH" TO ADDRESS THE ISSUE
Mr Paine also wrote that the boundaries of free speech and expression in the Singapore context are already well-defined by existing Singapore laws, which are “widely thought to be comprehensive enough” to address the issue.
Further elaborating on this in his written representation, Facebook’s head of public policy for Southeast Asia Alvin Tan pointed out some of Singapore’s existing laws which address hate speech, defamation and the spreading of false news. These include the Telecommunications Act, Protection from Harassment Act, Penal Code and the Maintenance of Religious Harmony Act.
“Instead, we believe in the need to adopt an innovative and iterative approach, as prescriptive legislation and requirements would make it harder for us and other online platforms to find the right technical solutions, consumer messaging and policies to address this shared challenge,” said Mr Tan.
Google also brought this point up in its written representation, which considered the situation in other countries like France, Italy and the United Kingdom.
It noted that the European Commission has affirmed that regulation is not optimal for tackling this “thorny issue”, and proposed four approaches instead: encouraging transparency, fostering diversity of information, giving citizens fact-checking tools and proposing inclusive solutions to be voluntarily implemented by “all actors in the infosphere”.
“This approach reflects the fact that misinformation is an age-old problem that can never be completely eradicated,” wrote Google. “We believe an effective way of combating misinformation is through educating citizens on how to distinguish reliable from unreliable information and equipping them with practical, user-friendly tools for determining reliability, as well as promoting quality journalism to ensure that there is a robust network of fact-checking organisations providing reliable information and debunking falsehoods.”
To that end, AIC and its members stressed that addressing harmful misinformation can be achieved with a policy of promoting and inculcating digital, media and information literacy at every level, driven by relevant stakeholders, including industry.
“This is a long-term investment and priority, and should be the first line of defense in fighting misinformation,” wrote Mr Paine.
AIC added that its members have in place wide-ranging community standards, rules and policies that “explicitly state” expected user behaviour and repercussions for non-compliance, as well as prohibited content and behaviour.
“As part of self-regulation, these policies are continuously monitored, updated and improved as needed, in line with the highly complicated and ever-morphing nature of harmful activities on the Internet.”
BUILDING INFORMED COMMUNITY IS CRITICAL: FACEBOOK
In their written representations, representatives from the social media and Internet giants explained how they have been working to fight online falsehoods.
Noting that there are “no silver bullets”, Facebook’s Mr Tan said the battle against deliberate online falsehoods and misinformation is a “shared responsibility” among public authorities, tech companies, newsrooms and classrooms.
Facebook is taking its responsibility seriously, said Mr Tan in his written representation. Apart from doubling its safety and security team to 20,000 people by year-end, the social media giant believes that building an informed community is “critical” and is doing so in five ways.
These include finding and removing fake accounts via machine learning; introducing advertisement transparency amid concerns about misinformation being circulated through targeted advertisements, while ensuring that traffickers of false news cannot place advertisements on Facebook; and improving the news feed by down-ranking false news, misinformation and other types of low-quality information.
Initiatives such as Article Context would also help users, as well as partners like journalists, better understand the content they see. Lastly, Facebook is working to enable people to be more civically engaged.
On the last point, Mr Tan said that Facebook plans to work with Government agencies in Singapore ahead of the next General Election to ensure that it is “rolling out the right tools to help”. These agencies include the Elections Department and the Ministry of Communications and Information.
Facebook also remains committed to combating hate speech on its platform. It has a team of experts who continually develop and update its community standards to ensure that its policies improve and reflect changing trends and local context.
“This underscores our commitment to ensuring that deliberate online falsehoods do not cause social discord or disrupt the integrity of the political process globally and here in Singapore,” said Mr Tan.
CONTINUING EFFORT TO DETECT, PREVENT MALICIOUS AUTOMATION: TWITTER
Over at Twitter, one area of focus is the techniques used to disseminate misleading content online, such as malicious automation.
While automation is not prohibited on Twitter and often serves a useful purpose, it can be used for “malicious purposes” such as generating spam, said its director of public policy and philanthropy for Asia Pacific, Kathleen Mary Helen Reen.
“Twitter is continuing its effort to detect and prevent malicious automation by leveraging our technological capabilities and investing in initiatives aimed at understanding and addressing behavioral patterns associated with such accounts.”
She cited the launch of the Information Quality initiative as an example. Rolled out in early 2017, the initiative was an effort aimed at enhancing Twitter’s strategies to “detect and stop malicious automation, improve machine learning to spot spam, and increase the precision of (its) tools”.
According to Ms Reen, Twitter is seeing results. It now detects and blocks about 523,000 logins daily suspected of being generated through malicious automation. In December 2017, its systems identified and challenged more than 6.4 million suspicious accounts globally per week - a 60 per cent increase in detection rate from two months earlier.
The social media platform has also developed new techniques to spot malicious automation. These include improvements to its phone verification process and the introduction of new challenges, such as reCAPTCHAs, to help validate whether a human is in control of an account.
Moving forward, it will invest further in machine-learning capabilities that will help it detect fake, coordinated and automated account activity, and mitigate its effect on users, said Ms Reen in her written representation.
But in the long run, the solution to countering online falsehoods must also include the active involvement of governments, civil society and non-governmental organisations (NGOs) in promoting media literacy, she added.
COMMITTED TO MAKING USEFUL, RELEVANT INFORMATION EASILY ACCESSIBLE: GOOGLE
Internet search giant Google said its role is to help people “find useful and relevant information by supporting the development and detection of quality content online, restricting the flow of money to deliberately misleading content, and ensuring (its) reporting and feedback tools are as effective as they can be”.
It was represented by its News Lab lead for Asia Pacific, Irene Jay Liu, at the public hearing.
On how it supports and promotes quality content, Google said it sees itself as “playing a constructive role” by helping build capacity for the production and the detection of authoritative journalistic content. This, however, requires strong collaboration with the news industry and the news verification community.
It is also investing in several avenues, from working with fact-checking organisations and teaming up with newsrooms around the world, to surfacing new indicators through efforts such as The Trust Project, which looks at ways of differentiating authoritative journalism from promotional content and fakery.
Google is also committed to developing tools and technology to help people find the information they want, said Ms Liu in her written representation.
For instance, it recently took steps to allow publishers to highlight fact-checked content, as well as to help users more easily find and consult articles that provide a critical outlook on claims made by others. It has rolled this out in Google News and will be expanding it to Google Search globally in all languages.
Lastly, it said it is stemming the flow of money to misrepresentative and misleading content, and improving its advertiser controls.
“We have made public commitments to raise the bar on our policies, by taking a tougher stance on hateful and derogatory content, helping advertisers more easily and consistently choose where their ads appear, and increasing our investments in people and tools that help us prevent ads from appearing on potentially objectionable content,” said Ms Liu.
For publisher websites, Google is rolling out new technology to enforce its policies at the page level, instead of at the site level. This will help it “act more frequently and quickly on potential infractions, instead of gauging whether the severity of a given violation is enough to take action against an entire site,” she added.