Legal Compliance Is Insufficient to Stop Hate Speech on Social Networks

US law is especially liberal when it comes to free speech. This puts hate speech, however you define it, short of a call to violent action, under the protection of the First Amendment. For the foreseeable future, the fight against hate and violence must operate in this context.

This feels unsatisfactory when the killer of eleven people in a Pittsburgh synagogue vented his hate on the social network Gab leading up to the attack, even announcing his intention to act.

Social networks can, in fact, be much better at containing the problem of online hate speech. You should raise your expectations of social networks to help solve the problems of fascist, antisemitic, racist, and sexist speech. Here's why:

Social networks and hate speech

Social networks are privately run and have the freedom to limit any kind of speech on their platforms, for any reason. You have no right to have Facebook publish your posts. Selling illegal drugs, for example, or passing around copyrighted works belonging to large publishers will promptly get your account suspended.

Social networks also have the responsibility not to be enablers of hate and violence, or of other abuses of their platforms such as influence campaigns by foreign adversaries.

This fact should act as a useful check on free speech absolutism. Social networks should provide a product where freedom of expression is maximized without sliding into the muck of hate speech, just as any business that provides a public gathering place should want to offer its customers the best possible experience. Creating a harmful environment is irresponsible. But, obviously, this sounds idealistic in the current environment.

When the management of a social network like Gab claims that the entire purpose of Gab is to go out to the boundaries of free speech, they are ducking their responsibility. Gab has the right to be irresponsible in that way, but they don't have a right to be helped by hosting providers, ad networks, payment providers, and other toolmakers of the internet ecosystem. Gab is rightly finding itself isolated. When ostensibly more responsible platforms fail to act responsibly, they not only fail to serve their customers, they fail society and responsible commerce as a whole.

Minority groups and women suffer disproportionately from harassment by online haters, so much so that education and recruitment in internet technology industries are negatively affected by online harassment. The supply of adept engineers and managers in social networks themselves is being constrained by their own inability to rein in online hate and harassment.

It is time to do better. Social networks already have the tools to do better. It is time to apply social network data and analytics to that task.

Social networks have the tools to be responsible

Facebook has billions of users, all over the planet. You can readily imagine that being responsible for preventing the spread of hate and insidious hostile propaganda is not easy. One can't realistically expect that every individual with serious potential for violent action can be identified.

Social networks face difficulty in dimensions other than scale. It is the special talent of social network "stars," the people who attract large followings and earn big incomes on these platforms, to manipulate social networks to their advantage. That means social networks are literally incentivizing people who are especially good at subverting the system, and cultivating a culture of sophistication in outthinking their algorithms and incentives.

Not least is the problem that social networks have a financial disincentive to root out automated subversion and bad behavior. Social networks are valued on the basis of the number of people visiting them and engaging in activity. Fake activity, like software "bots" that mimic human users, counts toward the numbers used to attract ad revenue to social networks.

Nevertheless, these facts can't excuse the current poor performance of social networks in cleaning out their dark, hostile netherworlds of trolls, "shitposters," nazis, and racists.

Social networks have honed the art of discovering your desires even more accurately and objectively than you know them yourself. Social network analytics have been built to a remarkable level of refinement because they are the engines of the social network business model: You are monitored and measured for every signal you emit. Your desires are what they sell to advertisers, quantified, tested, and proven to be far more effective than any medium that preceded social networks.

It's not just you. Social networks know your social graph. They know the strength of those connections. They know the frequency and amount of your interactions. They know your connections' desires better than you do. They know the human context of your desires in ways inaccessible to you.

Because they know you so well, because they know everyone who uses their platforms so well, we should expect their ability to identify and isolate hate and violence to be much better than is currently apparent. They don't need to rely on being able to distinguish a harmless ranting madman from one who will pick up a rifle and start shooting. They have context. They have everyone's connections. They know the likelihood that you will act to buy something. They have the tools to discern the blowhard from the possible gunman.
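
To make that concrete, here is a minimal sketch of what context-aware scoring could look like. It is purely illustrative: the account fields, the weights, and the helper names are assumptions made up for this example, not a description of any network's actual system. The point is only that the ingredients, content classifiers plus a weighted social graph, are things the networks already have.

```python
# Hypothetical sketch only: illustrative fields, weights, and names,
# not any social network's real system.
from dataclasses import dataclass, field


@dataclass
class Account:
    account_id: str
    flagged_posts: int = 0   # posts matched by a hate-speech classifier
    threat_posts: int = 0    # posts announcing an intent to act
    # connection's account_id -> how often this account interacts with them
    connections: dict = field(default_factory=dict)


def context_risk(account: Account, accounts: dict) -> float:
    """Score an account by its own signals plus those of its closest connections."""
    own_signal = account.flagged_posts + 5.0 * account.threat_posts
    total = sum(account.connections.values()) or 1.0
    # Weight each connection's signal by the share of attention it receives.
    neighborhood = sum(
        (freq / total) * accounts[other].flagged_posts
        for other, freq in account.connections.items()
        if other in accounts
    )
    return own_signal + neighborhood
```

In this toy model, an account that merely rants scores low; one that rants and spends most of its time interacting with other flagged accounts scores high. That is roughly the kind of context the networks already compute when deciding which ads to show you.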

But we should not be satisfied by social networks merely detecting the hateful. Social networks have the tools to reduce the harm from the hateful.

Have higher expectations

Not only can social networks use their sophisticated tools to detect bad behavior, they also have the potential to isolate and reduce the impact of that behavior. They can turn the tools of the badly behaved against them: Social networks use bots to create the impression of activity for relatively benign purposes like promoting the use of multiplayer games. That is, social networks have their own tame bots and know how to use them.

Just as hate speech mongers use bots and other techniques to subvert productive conversation on social networks, the networks could use their own automated technologies to isolate hate speech, turn the haters against one another, and leave them shouting into the wind, blind to the fact that nobody is listening.
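
As a thought experiment, here is one way such a quarantine could work at the feed-ranking stage. Again, this is a hedged sketch with made-up function names, not a disclosed feature of any platform: posts from flagged accounts stay visible only to other flagged accounts, so the haters keep talking, mostly to each other.

```python
# Hypothetical sketch only: a feed filter that quarantines flagged accounts
# so their posts circulate mainly among themselves.
def visible_to(viewer_id: str, author_id: str, quarantined: set) -> bool:
    """Posts from quarantined authors stay visible only inside the quarantine."""
    if author_id in quarantined:
        return viewer_id in quarantined
    return True


def build_feed(viewer_id: str, ranked_posts: list, quarantined: set) -> list:
    # ranked_posts: (author_id, post) pairs coming out of the normal ranker.
    return [
        post
        for author_id, post in ranked_posts
        if visible_to(viewer_id, author_id, quarantined)
    ]
```

Nothing technically exotic is required; the same ranking machinery that decides which ads you see can decide who hears the haters.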

You can bet that social networks use every tool in their toolbox to keep you engaged and sell you stuff. You should expect them to be at least as sophisticated in the service of ridding your social network experience of trolls and bots.

Don't accept excuses

Don't take meeting the minimum standard of legal compliance as an excuse. Social networks have rid themselves of people intent on the crime of sharing a music recording; surely they could try harder with the nazis and misogynists. They've got the tools to detect and quarantine this disease. It is time for them to act.
