On the Perils of Content Moderation (Part 3 of 3)

Continuing my content moderation story from Part 2:

Despite the blustering and threats, most lawyers understood that the First Amendment protected Avvo’s right to publish its lawyer profiles and ratings, and that it would be pointless to go through with a lawsuit. The published decision in Browne v. Avvo helped as well. So they’d eventually go away. The guardrails of the First Amendment kept them from filing.

And at this point, after 20+ years of extensive litigation, CDA 230 operates in a similar fashion. Sure, there continue to be disputes along the frontier, and there’s always going to be a certain background level of utterly frivolous and SLAPP actions. But for the most part, things operate in a fashion where the rules are settled and suing is pointless. 

Which is why Avvo didn’t get sued for posting third-party content — despite how exercised some lawyers would get over negative reviews.

But what if these lawyers had an argument they could lever? Like, that Avvo’s content moderation had to be “reasonable” or “neutral”? Or that Avvo could be liable for not adhering to its published content moderation standards?

Had there been ANYTHING that would make them think, “well, I’ve got a chance at this thing,” Avvo would have been buried in lawsuits. And even if we’d been able to turn these suits away on the pleadings, doing so would have been super-expensive.

How expensive? Tech policy shop Engine, in an indispensable primer on the value of CDA 230, estimates that disposing of a frivolous lawsuit on a preliminary motion to dismiss can cost $80,000. And in my experience, it can cost LOTS more than that if the issues are complicated, the plaintiff is proceeding in bad faith, you draw a bad judge, etc., etc.

Now, some internet commenters would say that the way to avoid this risk is to just not do the bad things. But here in the real world, the way companies will avoid this risk (at least until they get big enough to absorb the costs) will be to either not moderate content (thus destroying the user experience) or simply not post third-party content at all.

So, a cesspool of a user experience on the one hand; a much-lessened interactive internet on the other. Take your pick.

Bottom line — the clarity that CDA 230 provides is super-valuable in shutting down, at the get-go, anyone who wants to roll the dice on taking your startup out with a little lawfare. And the genius of CDA 230 is that it provides the breathing room for sites to set their own rules, and moderate content using their own discretion, without fear of being punished by the government or subjected to ruinous litigation for so doing.

Perversely, while all of the noise about limiting/eliminating CDA 230 is driven by frustration at Facebook, Google, and other giant platforms, it’s not like neutering the law would even really impact those guys. They’ve got the scale to absorb the cost of regulation.

But smaller, newer services? No way. They’d be forced into the loser of a choice I’ve described above: cesspool or wasteland.

Policymakers should think long and hard about the implications for the wider world of innovative online services before even thinking about “tweaks” to CDA 230.

(Part 1, Part 2)