One of the wonders of the web is the sheer amount of information, on virtually any imaginable topic, available at one’s fingertips.
And one of the horrors of the web is the same thing. Anyone with a keyboard can spew whatever awfulness they feel compelled to share.
Companies that provide forums for user-generated content have to strike a balance between these two sides of the internet, a task whose difficulty is vastly underestimated.
Take Avvo, for example. Our site is narrowly focused on American lawyers and legal issues, which means we can moderate out all sorts of things simply on the basis of relevance. Political screeds posed as questions in our Q&A forum? Out. Marketing drivel masquerading as a legal guide? Gone. Off-topic rants about someone else’s attorney in our review system? Those won’t even see the light of day.
And yet, even with Avvo’s narrow focus and relatively modest size, moderating the content on our site is a massive job. It takes a number of people working on it full-time, plus regular escalation of issues to every level and department of the company. And we’re still never going to make everyone happy.
So imagine what the content moderation job is like for Facebook, operating globally, with billions of users, in an environment where ALL topics are relevant to someone. This Guardian article, detailing some of Facebook’s struggles to strike that balance, mocks the company for using this boilerplate language when reversing content moderation decisions:
“The post was removed in error and restored as soon as we were able to investigate. Our team processes millions of reports each week, and we sometimes get things wrong. We’re very sorry about this mistake.”
Well, of course Facebook uses the same word-for-word apology. They are likely making hundreds – if not thousands – of the same “errors” every week. Content moderation is hard, and given Facebook’s scale it’s surely not difficult to find the sorts of ridiculous examples highlighted in the Guardian piece.
Such errors can also be self-inflicted. As this earlier piece from the Wall Street Journal notes, Facebook has had to deal with internal pressure to remove some of Donald Trump’s messages as “hate speech.” A global company like Facebook must navigate the norms its own employees bring to the table, particularly employees in countries that don’t share America’s appetite for free speech. And although Facebook has resolved to consider the newsworthiness of items flagged for removal, both Facebook and Twitter have been accused of putting a thumb on the scale in favor of progressive messaging and causes.
That’s always going to be a risk where humans are involved. We bring our biases and preconceptions to the table when deciding what is and isn’t worthy of being published. All these sites can do is try to be as balanced and fair as possible. But before getting apoplectic about every example of content moderation gone wrong, consider first the sheer scale and difficulty of the problem.