On the Perils of Regulating Content Moderation (Part 1 of 3)

You’ll have to forgive social media companies for feeling whiplashed on the policy front.

Should they be forced to determine the truth or falsity of all political ads?

Or should they be forced to carry ALL political ads, regardless of truthfulness?

Should they have to carry everything their users post, in the quest for “balance”?

Or should they be forced to eliminate hate speech from their platforms – on pain of jail time?

These proposals are all, to varying degrees, misguided, incoherent, impossible, and just plain bad policy. 

Yet they’re being promoted by lots of influential people. There are bad-faithers in Congress, like Senators Josh Hawley and Ted Cruz, calling for online companies to be neutral platforms, to stop “determining falsity” and “censoring conservatives” and, I guess, to just mindlessly publish whatever their users decide to post or include in advertisements.

And on the other hand, you’ve got people like Sacha Baron Cohen (who famously abused the trust and good nature of lots of ordinary people in service of his comedy) calling for social media companies to do ever more: to moderate more fastidiously, under a set of rules and guidelines whose violation could lead to criminal sanctions.

Obviously, you can’t have it both ways, people.

But here’s the thing: “you must publish it all” and “you must moderate better” are both terrible ways to constrain online platforms. Besides the glaring First Amendment problems, these suggestions carry the not-insignificant risk of either shutting these platforms down or turning them into absolute sewers.

The beauty of existing law, in the US at least, is that sites are free to make their own determinations about what gets published on their platforms. This is thanks to the “good faith” content moderation provision of 47 USC 230(c)(2), part of what’s commonly known as “CDA 230.” It’s the companion to CDA 230’s more widely known feature: Section 230(c)(1)’s immunity from liability for hosting third-party content.[ref]Not without limitation; CDA 230 immunity doesn’t apply to federal crimes or intellectual property claims.[/ref]

CDA 230 provides the breathing room for sites to host third-party content and serve their audiences. And while “you’ve got to be a neutral platform” is not a serious demand (if sites were required to publish everything that users throw at them, they would be utterly useless), the idea that sites should be held to government-imposed content moderation rules is more pernicious by virtue of its surface appeal. Why not require that sites moderate away objectionable content?

The rejoinder should be obvious: who gets to determine what’s “objectionable”?

Unfortunately, there’s never a shortage of people stepping up to take on the censor’s role. 

Or people who blindly react to the fresh outrage of today, forgetting that today’s “cut off the hurtful speech” is tomorrow’s censorship of the powerless.

Or people who pooh-pooh the problem, confidently stating that surely — SURELY — guardrails can be built to require just the right amount of content moderation.

Calling for greater regulation of online content moderation is a recipe for First Amendment violations and unintended consequences. The specter of liability for “getting content moderation wrong” is spectacularly under-appreciated. In Part 2, I’ll get into the details of how this plays out in practice.