Facebook has shared an explanation of how it defines hate speech and harmful content, along with its plans to address cruel and insensitive content on the site. The explanation follows challenges from Women, Action and The Media, the Everyday Sexism Project and a number of activists and organizations calling on the social network to take action against groups, pages and images that condone or encourage rape or domestic violence.
In a note on the Facebook Safety page, Facebook explained that it prohibits content that is “directly harmful,” but it allows content that may be “offensive or controversial.” The company defines harmful content as “anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying).” Facebook also prohibits “hate speech,” which it defines as “direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease.”
Facebook says it tries to remove this type of content as soon as possible, but other offensive and distasteful content might not qualify for removal. Still, the company acknowledged:
“In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate. In some cases, content is not being removed as quickly as we want. In other cases, content that should be removed has not been or has been evaluated using outdated criteria.”