Facebook acknowledges content policy is lax on hate speech

Facebook is no stranger to controversy over its content policies; it only recently banned decapitation videos, for example, after months of turning a blind eye. Once again, some users have taken to virtual picketing of the social network, arguing that its content policy allows posts that demean and threaten women, along with a host of other unsavory content. In light of the growing complaints, Facebook published a post on its blog today discussing its content policy and some changes it is making.

Facebook offers a long explanation of its content policy, particularly its handling of hate speech, noting that the term has no universally accepted definition. The company therefore applies its own definition, which covers direct threats against a protected group or a specific individual, such as bullying. Dark humor, controversial images and statuses, and content of a similar nature are allowed, however, to facilitate an open environment.

The social network goes on to state that "in recent days" it has come to realize its system for removing hate speech is flawed, having failed to catch some of the hate speech on the site, gender-based hate speech in particular. Facebook says it will roll out several changes to remedy the situation, including a full review and update of its User Operations guidelines.

The teams that review flagged content and decide whether it should be removed will receive updated training, developed in conjunction with legal experts and members of women's coalitions, among others. Likewise, the social network plans to "establish more formal and direct lines of communication" with representatives of women's groups and others.

One change in particular is designed to direct the community's ire over offensive content toward its creator rather than toward Facebook. Under the new rule, a user who publishes a "cruel or insensitive" statement or image that is reported but does not violate the guidelines will have to stand behind it: the creator must attach their real name to the image or status rather than hiding behind a page name, for example.

Facebook says the changes take effect immediately.

SOURCE: Facebook