We can't police online hate speech

As mass shootings and acts of domestic terrorism become tragically regular, authorities have started pointing their fingers at social media and the broader internet. They blame the rise of these atrocities on online hate speech and call for greater policing of online spaces. Among the strategies they’ve suggested, and in some cases implemented, are automated removal of comments, deplatforming particular individuals, and outright banning entire websites. What they’re missing, though, is that policing hate speech is far more complicated than banning a website.

It’s hard to define

Before you can start to police hate speech, you first have to define it, and that's a lot harder than you'd expect. Intuitively, we understand it as speech (or writing) that in some way attacks an individual or group. But what are the specifics? Is it hate speech if it isn't intended to attack anyone? Is it hate speech if you're quoting someone? What about sarcasm and satire? Is it still hate speech if no one takes any offence?

The problem here is that hate speech falls into the broader category of the taboo, and taboos vary dramatically from subculture to subculture, and even between individuals. Combine that with the worldwide melting pot that is the Internet, which spawns its own sub-subcultures through Facebook groups, forums, YouTube fandoms and subreddits, and it becomes nearly impossible to say conclusively whether a post or comment is hate speech. What may be the funniest meme I've seen today might be completely heinous to you, and meaningless and trivial to someone else.

This is what Internet sociologists refer to as 'ambivalence': the idea that nothing on the Internet is just one side of the coin; it's both sides of several coins, because a lot of different people from a lot of different cultures and subcultures will each interpret it in their own way. That makes it very difficult to write rules about what is and isn't okay, because at the end of the day, it depends.

It’s hard to police

Even if we could write the rules, it's tough to actually enforce them. Either we rely on a human team, who interpret and apply those rules inconsistently, or we rely on software, which fails spectacularly. It's incredibly difficult to write a program or AI that understands meaning rather than just matching banned words, and falling back on word lists puts us right back at square one: deciding which words to ban and why. Even then, it's very easy to circumvent automated bans by rephrasing a little.
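To make that concrete, here is a minimal sketch of the kind of keyword filtering described above. The blocklist, the example comments and the is_flagged function are all hypothetical, chosen purely to illustrate the failure modes; real moderation systems are more sophisticated, but the underlying problem is the same.

```python
# Hypothetical blocklist of banned terms, purely for illustration.
BLOCKLIST = {"vermin", "subhuman"}

def is_flagged(comment: str) -> bool:
    """Flag a comment if any word in it matches the blocklist."""
    words = comment.lower().split()
    return any(word.strip('.,!?"') in BLOCKLIST for word in words)

# A direct attack is caught...
print(is_flagged("They are vermin."))        # True

# ...but trivial rephrasing slips straight through...
print(is_flagged("They are v e r m i n."))   # False
print(is_flagged("They are verm1n."))        # False

# ...while someone quoting or condemning the attack gets flagged anyway.
print(is_flagged('The senator called them "vermin", which is appalling.'))  # True
```

The filter has no notion of meaning or context: it misses the obvious workarounds and punishes the person objecting to the slur, which is exactly the square-one problem described above.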

Alternatively, we could ban individuals or entire websites, but that doesn't really work either. There are myriad tools for getting around blocks on a profile or a particular device, and in most cases a banned user can simply make a new profile and get right back to it. Even if the website they're posting on gets taken down, the sheer number of platforms on the web makes it all too easy for individuals or groups to migrate to a new space and take root.

Policing may actually make it worse

When individuals and groups do relocate and take root on new platforms, their hateful ideologies and rhetoric can actually spread as they ‘recruit’ from their new homes.

Even supposing we could find a way to effectively silence hate speakers on the web, we're not really tackling the problem. Just because people aren't speaking hate doesn't mean they don't still subscribe to it; it just means we aren't aware of it anymore. In effect, by silencing the troublemakers, we also silence the protestors. If we can't see or hear the problem, we're blind to the very thing we need to be speaking out against.

We need to try something new

As futile as it may seem, we can't sit idly by and do nothing. We've identified the main problems with policing hate speech: out-of-touch rules handed down by a governing body, the resources enforcement requires, and the new problems that arise once hate speech is driven out of sight. So, what's our best way forward?

As repulsive as it might sound, our best strategy may be to let hate speech be seen. That doesn't mean allowing it to be normalised, though. One proposal that works well is upvoting and downvoting: used most prominently on Reddit, this system lets users vote on content, with popular posts and comments pushed to the top and unpopular ones buried, but still visible. It's an effective way of ensuring that the rules used to police hate speech are workable within a particular context and subculture, because it's the members of that subculture who decide what's okay. On top of that, it lets the hate speaker know they've gotten it wrong without actually harming them or shaming them publicly (they're anonymous, and they only lose fake Internet points). That's crucial for letting them acknowledge they're wrong and, ultimately, for reforming them.
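As a rough illustration, here is a minimal sketch of that kind of community ranking. The Post class and the scoring rule (upvotes minus downvotes) are simplified assumptions made for the sake of the example, not any platform's actual algorithm; the point is that unpopular content sinks rather than disappears.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Simplified community score: net approval.
        return self.upvotes - self.downvotes

def rank(posts: list[Post]) -> list[Post]:
    """Order posts by community score. Nothing is deleted:
    unpopular posts sink to the bottom but remain visible."""
    return sorted(posts, key=lambda p: p.score, reverse=True)

feed = [
    Post("A genuinely funny meme", upvotes=240, downvotes=12),
    Post("A borderline joke", upvotes=55, downvotes=48),
    Post("A hateful comment", upvotes=3, downvotes=310),
]

for post in rank(feed):
    print(post.score, post.text)
# 228  A genuinely funny meme
# 7    A borderline joke
# -307 A hateful comment   <- buried, but still there to be challenged
```

Because the votes come from the community itself, the "rules" emerge from the context and subculture where the content was posted, rather than being imposed from above.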

This is far from a perfect system, but it's a new way forward. Above all else, the takeaway here isn't that we can't police hate speech at all; it's that we can't do it properly with our current top-down, black-and-white mindset.