Facebook shares community enforcement efforts for first time, spotlights 6 violations

The six violations are graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

FILE PHOTO: A picture illustration shows a Facebook logo reflected in a person's eye, in Zenica, March 13, 2015. REUTERS/Dado Ruvic/Illustration/File Photo

SINGAPORE: For the first time, social networking giant Facebook is revealing its community standards enforcement efforts, spotlighting six types of violations it feels able to measure and track reliably using the data it has on hand. 

Published on Tuesday (May 15), Facebook’s Community Standards Preliminary Report zoomed in on six violations: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. The report covers the period from October 2017 to March 2018.

Mr Guy Rosen, vice president of Product Management at Facebook, told Channel NewsAsia in an interview ahead of the report’s release that the metrics it has developed for these violations are constantly evolving, even as the company shifts from using these measurements to gauge its operational efficiency to using them to assess the platform’s security.

“We haven’t previously focused on security,” Mr Rosen acknowledged, adding that the preliminary report is another example of how Facebook is opening itself up to “accountability”. 

The US-based technology company is currently embroiled in controversy over how it handles the data it accumulates on its platform, after acknowledging in March that information about millions of users wrongly ended up in the hands of political consultancy Cambridge Analytica. CEO Mark Zuckerberg also revealed that, for security reasons, Facebook collects data on people who have not signed up for the service. 

In publishing the preliminary report, the company admitted its metrics “may not be perfect” and that it still has a lot of work to do to refine them, but said these are the best representation of the work it does. 

By doing so, the company also shed light on how it monitors and deals with these violations, saying it tries to answer four questions: how prevalent the violations are, how much content it takes action on, how much violating content it finds before users report it, and how quickly it takes action on violating content. 

Prevalence, in particular, is one metric that Facebook is paying much attention to, Mr Rosen said, as it “keeps the system honest”. 

It essentially represents how much violating content people may have experienced on Facebook that its enforcement team did not catch. For example, if the prevalence of adult nudity and sexual activity on Facebook was 0.07 per cent to 0.09 per cent, that means that of every 10,000 content views, an average of 7 to 9 were views of content that violated its standards in this category. 

“We want to make this number as low as possible,” the company added.
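
To see how a prevalence figure maps onto actual view counts, the short Python sketch below works through the arithmetic of the example above. It is purely illustrative; the function name and figures are assumptions for this example, not Facebook’s measurement code.

    # Illustrative arithmetic only; a hypothetical helper, not Facebook's methodology.
    def violating_views(prevalence_pct: float, total_views: int = 10_000) -> float:
        """Expected number of views of violating content out of `total_views` views."""
        return total_views * prevalence_pct / 100.0

    # A prevalence of 0.07 to 0.09 per cent, as in the example above:
    print(violating_views(0.07))  # 7.0 -> about 7 violating views per 10,000
    print(violating_views(0.09))  # 9.0 -> about 9 violating views per 10,000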

Below is the breakdown of its enforcement efforts in the first quarter of 2018:

  • GRAPHIC VIOLENCE
    - Prevalence: Estimated 0.22 per cent to 0.27 per cent in Q1 2018 (up from estimated 0.16 per cent to 0.19 per cent in previous quarter)
    - Took action on 3.4 million pieces of content in Q1 this year (up 183 per cent from 1.2 million in Q4 2017)
    - Found and flagged 85.6 per cent of violating content before users reported it in Q1 (up from 71.6 per cent in previous quarter)

    Analysis: The prevalence of graphic violence increased despite “improvements in our detection technology”, Facebook said. “The increase was likely due to a higher volume of graphic content shared on Facebook.”

    The 183 per cent increase in content dealt with was “mostly due to improvements in our detection technology, including using photo-matching to cover with warnings photos that matched ones we previously marked as disturbing”.

    It also “fixed a prior technical issue” that caused it not to always cover photos with warnings when it should have, which was responsible for 13 per cent of the increase as it worked retroactively to address past content.
  • ADULT NUDITY AND SEXUAL ACTIVITY
    - Prevalence: Estimated 0.07 per cent to 0.09 per cent in Q1 2018 (up from estimated 0.06 per cent to 0.08 per cent in previous quarter)
    - Took action on 21 million pieces of content in Q1 this year (similar to previous quarter)
    - Found and flagged 95.8 per cent of violating content before users reported it in Q1 (up from 94.4 per cent in previous quarter)

    Analysis: The increase in prevalence is “small enough that we can’t be certain whether it’s a true rise in prevalence” or normal variance within the margin of error of Facebook’s content sampling.

    That said, Mr Rosen pointed out that its “investments in computer vision has really paid off”, as this helped improve its technology to detect and deal with violating content in this category.
  • TERRORIST PROPAGANDA
    - Prevalence: NIL (Sampling methodology can’t reliably estimate how much of this content is viewed on Facebook; exploring other methods)
    - Took action on 1.9 million pieces of content in Q1 this year (up 73 per cent from 1.1 million in Q4 2017)
    - Found and flagged 99.5 per cent of violating content before users reported it in Q1 (up from around 97 per cent in previous quarter)

    Analysis: The metrics in this report currently only include actions related to ISIS, al-Qaeda and their affiliate groups. “These organisations pose the broadest threat to our global community, and we’ve rolled out technology specifically to detect and counter content from these groups and their affiliates,” Facebook said, adding that it intends to extend its reporting to cover enforcement against other groups.

    It also said its technology has improved to detect both old content and newly posted content in this category, and the number is also affected by external factors such as real-world events that increase terrorist propaganda content on the platform.
  • HATE SPEECH
    - Prevalence: NIL (It is developing measurement methods, unable to provide reliable data for this report)
    - Took action on around 2.5 million pieces of content in Q1 this year (up about 56 per cent from 1.6 million in Q4 2017)
    - Found and flagged around 38 per cent of violating content before users reported it in Q1 (up from 23.6 per cent in previous quarter)

    Analysis: “Hate speech is a nuanced issue that requires understanding of context, and technology often can’t do this alone. For example, it may take a human to understand and accurately interpret nuances like counter-speech, self-referential comments or sarcasm. As a result, hate speech relies heavily on review by our teams,” the report said.

    Mr Rosen added that it is “very hard” to build artificial intelligence to address hate speech, and as such, Facebook does not delete flagged content automatically but refers it to its reviewers. “The tech is not there yet,” he explained.
  • SPAM
    - Prevalence: NIL (It is updating measurement methods, unable to provide reliable data for this report)
    - Took action on around 837 million pieces of content in Q1 this year (up 15 per cent from 727 million in Q4 2017)
    - Found and flagged nearly 100 per cent of violating content before users reported it in Q1 (similar to previous quarter)

    Analysis: The figures for spam are affected by the effectiveness of the detection technology, as well as external factors such as cyberattacks that increase spam content on the platform.

    For instance, if spammers post 10 million pieces of content during a cyberattack and Facebook removes all of them, the content actions number would go up by that amount, but the content may not get many views if it is removed quickly enough. “In this way, the content actions number can be high but it wouldn’t have much impact on the experience of people on Facebook,” the report said. A short worked example after this list sketches the arithmetic.
  • FAKE ACCOUNTS
    - Prevalence: 3 per cent to 4 per cent of monthly active users (similar to previous quarter)
    - Disabled 583 million fake accounts in Q1 this year (down 16 per cent from 694 million in Q4 2017)
    - Found and flagged 98.5 per cent of the fake accounts it disabled before users reported them in Q1 (down from 99.1 per cent in previous quarter)

    Analysis: The figures depend on spikes or dips in automated fake account creation. External factors, such as bad actors using scripts or bots to create fake accounts in large volumes with the intent of spreading spam or conducting illicit activities such as scams, would affect the numbers, it said.

    Facebook also does not count the attempts to create fake accounts that it blocks every day, as no account is actually created. “We don’t report blocks because the numbers are so high, and many attempts are unsophisticated and easy to detect and stop,” the company said. 
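
The spam analysis above notes that takedown volume and user impact can diverge; the toy calculation below sketches why. It is purely illustrative: the figures and variable names are hypothetical assumptions, not Facebook’s data or code.

    # Hypothetical figures, for illustration only -- not Facebook's data.
    removed_pieces = 10_000_000         # spam posts removed during an attack
    views_each_before_removal = 2       # assumed views per post before takedown
    total_platform_views = 10**12       # assumed total content views in the period

    content_actions = removed_pieces    # what a "content actioned" count reports
    violating_views = removed_pieces * views_each_before_removal
    prevalence = violating_views / total_platform_views

    print(f"{content_actions:,} content actions")  # 10,000,000 -> looks large
    print(f"{prevalence:.4%} of all views")        # 0.0020% -> negligible exposure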

The social networking company said this preliminary report is a “move toward holding ourselves accountable, and letting others in our community hold us accountable” in terms of enforcing the community standards.

“(This report) is just a first step for sharing with our community how we uphold the Facebook community standards to keep people safe while maintaining an open platform for personal expression,” it said.

“These metrics aren’t perfect, and we have a lot of work to do to improve our internal processes, refine our tools and technology, and find the right ways to measure our enforcement reliably,” the company added. 

Source: CNA/kk
