Social media giants grilled over effectiveness in acting on false or harmful content
SINGAPORE – Over more than six hours on Thursday (March 22), the parliamentary Select Committee on deliberate online falsehoods scrutinised social media giants over the effectiveness of their action against false or harmful content, given the dangers such content presents.
The hearing was attended by representatives of technology companies Twitter, Google and Facebook, as well as the Asia Internet Coalition, an industry association.
They had to answer questions about the algorithms and technology they have been using to identify falsehoods and content that violates their respective policies on, for instance, hate speech.
The committee cited examples of posts that could be viewed as having crossed the line in certain jurisdictions, and detailed the failure or refusal of the tech giants to remove them.
These examples exposed inadequacies in how Facebook, Google and Twitter handle such matters, and in some cases the companies acknowledged that they are working to change their practices.
In listing these instances, the parliamentary panel sought clarity on why the tech giants had argued against legislation as a way to deal with the phenomenon of fake news. In their written submissions, the firms had contended that legislation could not tackle the problem effectively and risked compromising freedom of expression and speech.
Facebook, which admitted it should have informed the public earlier after discovering political data firm Cambridge Analytica’s misuse of user data in 2015, among other issues, was pressed about its policies on taking down false information.
Law and Home Affairs Minister K Shanmugam asked if Facebook would take down information identified as false.
Facebook’s Asia-Pacific vice-president of public policy Simon Milner said there are no clear-cut answers. He noted that there are instances where “rumours become the truth”.
Mr Milner stated: “We do not have a policy that says everything has to be true and we do not put ourselves in the position of deciding what is true.”
But Facebook will take down fake accounts, and will remove falsehoods if there is a court order directing it to do so, said Mr Milner.
This prompted Mr Shanmugam to point out that courts can only act pursuant to legislation. “Do you not realise then that the natural and logical conclusion based on your policy that you will not yourself take down falsehoods unless there is a court order, combined with the fact that court orders can only be given pursuant to legislation, means that if a state wants falsehoods to be taken down, that can only be done through legislation, vis-a-vis Facebook?” the minister said.
Mr Milner said it would be tremendously difficult to define in law what is or is not a deliberate online falsehood. This was why the tech firms were “sceptical” about legislation, he said.
“I know it is a real concern and we absolutely share your concern, we know you have certain responsibilities as do we. We are here to talk about all those and we are just saying we are concerned about a rush to legislate and that legislation which is enacted in haste can often be regretted at length,” Mr Milner added.
Turning to the other social media giants, the committee listed several examples of posts that had crossed the line but were not taken down.
Twitter’s representative, Director of Public Policy and Philanthropy in Asia Pacific Kathleen Reen, was asked about the company’s refusal to remove a cartoon that depicted a group of male, ethnic minority migrants tying up and abusing a semi-naked white woman while stabbing her baby to death.
Twitter’s reasoning was that the cartoon was not in breach of its hateful conduct policy. Ms Reen replied that she was unfamiliar with the specifics of the case.
Minister for Social and Family Development Desmond Lee also pressed Twitter about its efforts at detecting malicious automation. Such initiatives, which rely on machine learning, run the risk of being over- or under-inclusive in tackling malicious bots and could, for instance, shut down legitimate handles or allow malicious bots to continue to operate, he noted.
Ms Reen said it takes time for these measures to become effective and that Twitter’s technologies are “getting smarter every day” at detecting malicious bots.
“At this point, we are not as worried about the binary you just presented, we are much more interested in the technical detail of how we can continue to grow our capabilities,” she replied.
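The over- or under-inclusiveness Mr Lee described is, in essence, the trade-off any automated classifier faces between false positives and false negatives. A minimal sketch, using entirely hypothetical account scores and thresholds rather than anything Twitter has disclosed, shows how tightening one kind of error loosens the other:

# Toy bot detector: each account gets a bot score between 0 and 1,
# and accounts scoring at or above a threshold are banned.
# Scores and labels below are invented for illustration only.
accounts = [  # (bot_score, is_actually_a_bot)
    (0.95, True), (0.90, True), (0.82, True), (0.74, False),
    (0.66, True), (0.55, False), (0.48, True), (0.33, False),
    (0.21, False), (0.10, False),
]

def evaluate(threshold):
    # Over-inclusive errors: legitimate handles banned (false positives).
    wrongly_banned = sum(1 for s, bot in accounts if s >= threshold and not bot)
    # Under-inclusive errors: malicious bots left running (false negatives).
    missed_bots = sum(1 for s, bot in accounts if s < threshold and bot)
    return wrongly_banned, missed_bots

for t in (0.3, 0.5, 0.7, 0.9):
    fp, fn = evaluate(t)
    print(f"threshold {t:.1f}: {fp} legitimate handles banned, {fn} bots missed")

Lowering the threshold catches more bots but bans more legitimate users; raising it does the reverse. This is the “binary” Ms Reen said Twitter was less worried about than improving the underlying detection itself.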
Google was questioned about relying on algorithms to do the heavy lifting of distinguishing low-quality from high-quality content.
Ms Irene Liu, News Lab Lead at Google, said the company is always trying to hone its algorithms while also working closely with news organisations to identify authoritative sources.
She was also asked how YouTube, which is owned by Google, could spend less than 0.1 per cent of its annual revenue on discerning the quality of online content, given the enormity of the challenge of tackling online falsehoods.
Ms Liu said the figure did not reflect the overall effect of the company’s efforts. “To look at just one dollar figure and one budget line, I don’t think it really reflects what we are doing in terms of the overall effect and the leveraging we learn on the technologies from different products,” she said.
In wrapping up the oral representations by the social media giants, Mr Shanmugam said that while the committee welcomed the “broad statements” on how the firms are being proactive and serious about tackling falsehoods, there were responses to specific cases that “don’t seem quite satisfactory”.
“The various beautiful statements you made… have to be tested against reality. This is where the rubber hits the road,” he added.