Regulating Hate Speech in Social Media

Recently, Facebook released an audit of its policies relating to hate speech and other troubling forms of speech. The audit blistered Facebook for being too slow and too tepid in its response. Facebook has traditionally been a proponent of “free expression,” and its reluctance to regulate any kind of speech is laudable in many ways. But this is not a First Amendment issue. Facebook is a non-governmental actor not subject to the First Amendment. It can create whatever rules it wants for its platform. Facebook’s decisions on what speech to forbid or regulate are heavily influenced by the desires of its advertisers and other stakeholders – you and me. So what speech is permitted on Facebook is really the product of community self-regulation.

Whenever Americans talk about regulating speech, it ought to make us uncomfortable. Facebook is correct that the default social and legal norm in this country is free speech. But even when a government actor is involved, free speech has well-defined limits consistent with the First Amendment. One of those limits is on hate speech.

Hate speech is speech or expressive conduct that conveys a viewpoint of hostility and hatred against another person or group. It is speech that does more than stimulate debate or discomfort. It attacks others on the basis of a characteristic or viewpoint in such a way as to threaten them. It is not speech directed at ideas, but at people, and often endangers peace and order.

The potential for violence has always been the key. In Virginia and West Virginia, “fighting words” statutes forbidding face-to-face statements to another person likely to result in violence have existed for more than 200 years. This concept forms the basis of many U.S. Supreme Court cases upholding statutory language that forbids speech likely to result in violence and rejecting statutory language that regulates speech merely because it creates controversy or discomfort.

But the devil is in the enforcement details. Facebook’s policy defines hate speech as a

direct attack on people based on protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability. . . . We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.

Deciding whether speech is or is not “dehumanizing” involves a lot of subjectivity. And there is considerable room for disagreement about whether a “statement of inferiority” promotes or risks violence. But remember, Facebook’s policy is not governed by the First Amendment and it can prohibit speech on its platform that could not be prohibited by a government.

The display of a noose is expressive speech that conveys a threat of violence played out over hundreds of years of experience. NASCAR acted swiftly and appropriately to investigate what was first reported to be a noose hung in a black driver’s garage stall. Can anyone doubt that if a Facebook user posted a page showing nothing more than a noose, it would send a palpable threatening message to a large community in America? The point is that some alleged hate speech can be the subject of debate. Some can’t.

Public mores about speech are in powerful flux. Take sexual matters for example. My parents told me that when they were growing up even mention of sexually transmitted diseases was socially inappropriate. That has certainly changed. So too, the kind of racist or homophobic comments and jokes that were common in “polite” society in the not-too-distant past now mark out the speaker as an ignorant buffoon, or worse.

In our daily lives we collectively exercise social disapproval and shame to prevent inappropriate or harmful speech. We decide what we will tolerate. And it is no different when it comes to Facebook’s grudging move toward more active policing of the speech on its platform. This was forced upon Facebook by the complaints of advertisers, employees and platform users. It was not the product of government regulation, but rather regulation by we the people. It is a good thing.

I am ready for the ration of grief I will get from my libertarian friends. They will say that my position invites the kind of hair-trigger political correctness so prevalent on today’s college campuses. First off, I will say that I was not aware offensive speech had its own political party. What we are talking about is not political correctness so much as social correctness. As far as excessive college speech codes go, the fault lies with the immature students and weak college administrators who permit the “microaggression” and “safe space” nonsense. Yet the basic idea that speech can threaten and harm is still sound.

I am opposed to uninviting speakers because of their viewpoint, or shouting down unpopular ideas. But one argument in favor of college speech codes resonates in today’s environment. Free speech is important, but it is not the only civic and democratic value to consider. Fairness and inclusiveness are two others. When a public university issues a speech code, it must hew to the First Amendment. But when Facebook issues and enforces its hate speech policy, it can and should be more sensitive to the evolving public understanding of the harm that kind of speech does to our country.
