BERLIN: Facebook plans to ramp up efforts to fight misinformation ahead of the European Parliament election in May and will partner with German news agency DPA to boost its fact-checking, a senior executive said on Monday (Mar 18).
Facebook has been under pressure around the world since the U.S. election in 2016 to stop the use of fake accounts and other types of deception to sway public opinion.
The European Union last month accused Alphabet's Google, Facebook and Twitter of falling short of their pledges to combat fake news ahead of the European election after they signed a voluntary code of conduct to stave off regulation.
On Monday, Facebook said it was setting up an operations centre that would be staffed 24 hours a day with engineers, data scientists, researchers and policy experts, and coordinate with external organisations.
"They will be proactively trying to identify emerging threats so that they can take action on them as quickly as possible," Tessa Lyons, head of news feed integrity at Facebook, told journalists in Berlin.
Facebook also announced it was teaming up with Germany's biggest news agency, DPA, to help it check the accuracy of posts, in addition to Correctiv, a non-profit collective of investigative journalists that has been flagging fake news to the company since January 2017.
It will also train more than 100,000 students in Germany in media literacy and seek to stop paid advertising being misused for political ends.
Germany has been particularly proactive in trying to clamp down on online hate speech, implementing a law last year that forces companies to delete offensive posts or face fines of up to 50 million euros (US$56.71 million).
The issue of misinformation and elections became prominent after U.S. intelligence agencies concluded that Russia tried to influence the outcome of the 2016 U.S. presidential election in Donald Trump's favour, partly by using social media. Moscow denied any meddling.
Lyons said Facebook had made progress in limiting fake news in the last two years, adding that it would increase the number of people working on the issue globally to 30,000 by the end of the year from 20,000 currently.
In addition to human intervention, she said Facebook was constantly refining its machine learning tools to identify untrustworthy messages and limit their distribution.
"This is a very adversarial space, and whether the bad actors are financially or ideologically motivated, they will try to get around and adapt to the work that we are doing," she said.