Facebook claims it has drastically reduced hate speech prevalence

Facebook has responded to the latest criticism of its platform, saying in a lengthy new statement that it has drastically reduced the amount of hate speech its users have seen over the past three quarters. The company focuses on the prevalence of hate speech, which it describes as the violating content users actually see, not the sum total of problematic content found on its platform.

Facebook claims that with this nearly 50 percent decrease in prevalence over those quarters, hate speech now accounts for only around 0.05 percent of the content its users view, which amounts to about five views out of every 10,000. Among other things, Facebook says it proactively uses various technologies to detect problematic content and route it to reviewers for potential removal.
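To make the arithmetic concrete, here is a minimal sketch of a view-weighted prevalence calculation of the kind Facebook describes. The function name and figures are illustrative only, not Facebook's actual measurement pipeline:

```python
# A minimal sketch of a view-weighted prevalence metric, assuming
# prevalence = (views of violating content) / (total content views).
# The names and numbers here are illustrative, not Facebook's methodology.

def prevalence(violating_views: int, total_views: int) -> float:
    """Share of all content views that landed on violating content."""
    return violating_views / total_views

# Facebook's reported figure: roughly 0.05% of views, i.e. about
# five views of hate speech out of every 10,000 content views.
rate = prevalence(violating_views=5, total_views=10_000)
print(f"{rate:.2%}")                            # 0.05%
print(f"{rate * 10_000:.0f} per 10,000 views")  # 5 per 10,000 views
```

Note that under this definition, a piece of hate speech that is caught before anyone sees it contributes nothing to prevalence, which is why the metric differs from a raw count of violating posts.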

The statement comes from Facebook's VP of Integrity, Guy Rosen, who specifically addresses the recent Wall Street Journal reporting based on leaked internal documents. In his post, Rosen said, among other things:

Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress. This is not true. We don't want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it. What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.

Rosen goes on to reiterate that in Facebook's view, the prevalence of hate speech on its platform is the most important metric. He specifically addresses the controversial practice of leaving up hate speech that doesn't quite meet 'the bar for removal,' noting that Facebook's systems instead reduce how widely that content is distributed to users.

Rosen says:

We have a high threshold for automatically removing content. If we didn't, we'd risk making more mistakes on content that looks like hate speech but isn't, harming the very people we're trying to protect, such as those describing experiences with hate speech or condemning it.