Two weeks of bumping up against what Facebook considers “hate speech” shows just how little the platform is willing to do to stop the spread of hate.

Pat Navin
Jul 8, 2020
Facebook claims it is removing hate speech. Really? According to Facebook, the image pictured here does not qualify as “hate speech.”

The image you are looking at is a post I reported to Facebook as “hate speech.” This was Facebook’s response (bold mine):

Your report

Today at 3:13 PM

You anonymously reported XXXXXXXXX’s share for displaying hate speech.

Thanks for your feedback

Today at 4:16 PM

Thanks for your report — you did the right thing by letting us know about this. The post was reviewed, and though it doesn’t go against one of our specific Community Standards, we understand that it may still be offensive to you and others. No one should have to see posts they consider hateful on Facebook, so we want to help you avoid things like this in the future.

From the list above, you can block XXXXXXXXX directly, or you may be able to unfriend or unfollow them. If you unfollow them, you’ll stay friends on Facebook but you won’t see their posts in your News Feed.

We know these options may not apply to every situation, so please let us know if you see something else you think we should review. You may also consider using Facebook to speak out and educate the community around you. Counter-speech in the form of accurate information and alternative viewpoints can help create a safer and more respectful environment.

For the past couple of weeks, I have been testing Facebook to find out just what the company considers “hate speech,” which is one of the options on the menu of reasons Facebook gives a reader for reporting a post. To say I have been disappointed with Facebook’s responses is a massive understatement. The above example is only one case in which the company has essentially defended a vile post that promotes violence and hate. Note that I also reported the same post for violence, and that report, too, was rejected. (The green frog, for those who may be unaware, is a cartoon character known as Pepe the Frog that has been co-opted by the alt-right, white supremacist movement in recent years.)

The last two sentences in Facebook’s response quoted above make it abundantly clear that they want users to do the very policing of their site that they so far have refused to do. The company is essentially washing its hands of responsibility for hate speech on its platform and placing the onus on users to “educate” the community through “counter-speech” — whatever that is.

Facebook had a meeting yesterday with civil rights groups who have been advocating (successfully) for an ad boycott of Facebook. The meeting didn’t go well, according to this story in the Washington Post:

Civil rights leaders organizing a major advertising boycott of Facebook said they remained unconvinced that the social network is taking enough action against hate speech and disinformation after meeting with Mark Zuckerberg and other Facebook executives on Tuesday.

Civil rights leaders used the session to press Chief Operating Officer Sheryl Sandberg and Zuckerberg, Facebook’s chief executive, to institute changes at Facebook, including installing a top-level executive who will ensure the global platform does not fuel racism and radicalization.

Color of Change President Rashad Robinson described the meeting as “disappointing” during a news conference later Tuesday. The organizers of the campaign, known as #StopHateForProfit, provided a list of demands to the social network days before the meeting, he said, and the company did not have clear responses to their recommendations.

“Attending alone is not enough,” said Robinson, who participated in the meeting over Zoom, which lasted over an hour. “At this point, we were expecting some very clear answers to the recommendations we put on the table. And we did not get them.”

“It was abundantly clear in our meeting today that Mark Zuckerberg and the Facebook team is not yet ready to address the vitriolic hate on their platform,” [Anti-Defamation League CEO Jonathan] Greenblatt said.

To Facebook’s credit, my reports on violent threats have resulted in many more posts and comments being taken down.

But besides trying to understand just what crosses the line for Facebook in terms of hate speech, I have also wondered what Facebook’s criteria are for permanently removing someone who consistently posts provably false, violent and/or hate-filled content.

In my two-week experiment, I reported a number of individuals numerous times for posting violent threats, and I often succeeded in having those threats taken down. The response of these repeat offenders? Usually derisive laughter with their friends about having posts taken down, and little or no actual consequence. Some mention an occasional time-out in “Facebook jail,” but few posters seem to be removed permanently. And many of these posters appear to be operating under multiple names, maintaining two, three or four different personal Facebook pages.

Software that could catch and remove these repeat offenders obviously exists; it is in use at hundreds of thousands of sites online. But Facebook appears unwilling to employ it. People who repeatedly post provably false and dangerous conspiracy theories, disinformation, outright lies, smears, hate speech, violent imagery and violent threats simply continue to post.
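For readers wondering what such software actually does: at its simplest, it compares signals collected from a new account against signals saved from previously banned accounts and flags likely matches for human review. What follows is a minimal sketch of that idea only, not Facebook’s actual system; every name, signal, and threshold in it is a hypothetical assumption.

```python
# Illustrative sketch only, not Facebook's actual system. All names,
# signals, and thresholds here are hypothetical assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class AccountSignals:
    email_hash: str         # hashed contact info supplied at signup
    device_ids: frozenset   # hashed device/browser fingerprints
    ip_prefixes: frozenset  # coarse network prefixes seen at signup


def likely_same_actor(new: AccountSignals, banned: AccountSignals) -> bool:
    """Score the overlap between a new account and a banned one, and
    flag the new account for human review if the overlap is high enough."""
    score = 0
    if new.email_hash == banned.email_hash:
        score += 2  # shared contact info is a strong signal
    score += len(new.device_ids & banned.device_ids)
    score += len(new.ip_prefixes & banned.ip_prefixes)
    return score >= 2  # threshold chosen arbitrarily for illustration


# Example: a fresh account sharing a device and a network with a banned one.
banned = AccountSignals("hash-1", frozenset({"device-A"}), frozenset({"10.0"}))
fresh = AccountSignals("hash-2", frozenset({"device-A"}), frozenset({"10.0"}))
print("flag for review:", likely_same_actor(fresh, banned))  # True
```

A real system would layer many more signals and review steps on top of this, but even a crude version makes the point: matching repeat offenders across accounts is a well-understood problem, not an unsolved one.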

Why?

Mark Zuckerberg and Sheryl Sandberg do not seem interested in actually creating the environment they claim they want for Facebook. Facebook’s Community Standards explanation for what constitutes “hate speech” includes the following:

We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.

We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We protect against attacks on the basis of age when age is paired with another protected characteristic, and also provide certain protections for immigration status.

We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.

It goes on for a bit more, but given that the image in this post didn’t qualify under their own definition of “hate speech,” it is hard to see just what does qualify.

Zuckerberg and Sandberg must be forced to react. Watching advertising revenue drain out of Facebook seems to be the only thing they really understand.

Keep up the pressure.

This is inexcusable, and I refuse to be a contributor to a platform that makes excuses for the promotion of hate.

Personal note: The 24-year-old woman killed in this incident was a wonderful person, a devoted activist, and the daughter of someone I have been acquainted with online for many years. It has been devastating, as one can only imagine.
