“Nice Little Internet You’ve Got There – Be a Shame if Anything Happened to It.”

There’s another shift that comes amid the whining about “tech censorship” and the “silencing of conservative voices online,” and it’s a purely transactional one: put up our content or lose the immunity that CDA 230 provides. Here’s a concise recent formulation of the bargain on offer:

This argument seems to at least implicitly acknowledge that the First Amendment protects the right of online forums to choose the content that appears on their sites. It just wants sites to give up that expressive freedom as the price of continuing to enjoy CDA 230 immunity.

So: either moderate your site to allow all First Amendment-protected speech, or lose your immunity with respect to third party postings. Your choice. 

Now, putting aside some obvious questions (Which sites are “market-dominant?” Could a scheme like this actually pass First Amendment scrutiny? How do we determine whether a site has lived up to this 1A standard in a given case?), it’s worth asking: would any market-dominant site actually agree to this bargain? 

OK, that’s a rhetorical question. 

Because the answer is that there is absolutely no chance that any such site would take this deal.

The First Amendment allows for a stunningly wide range of noisy, messy, irrelevant, dishonest, and offensive expression. And even if this deal allows sites to impose reasonable content-neutral restrictions (like post length, duplication, formatting, etc.), it would unleash a torrent of vitriol, abuse, pornography, and abject nonsense. Sites would be helpless to shape a desired user experience or serve a particular audience. They’d just be bulletin boards for unstructured, unfocused expression.

And what would these successful sites get for this deal? The right to be immune from liability for third party postings and their curatorial decisions? 

Sure, that’s nice and all, but losing that immunity isn’t an existential risk to any site that’s reached market dominance. CDA 230 immunity is most important to sites that are just getting started, who can’t take the cost of fending off suits over user-generated content. Big, established sites? They’ve got the resources to take that cost. And faced with a no-win choice like this, giving up the immunity is certainly a better alternative than turning one’s site into an unusable cesspool.

What the market-dominant firms WOULD do in response to this ultimatum is pretty much the polar opposite of what the conservatives claim to be advocating for: they’d become much, much more aggressive about policing the sort of content they allow to be posted. 

Why? Because decisions to take content down, de-emphasize posts, or suspend posting privileges are protected by the First Amendment in a way that decisions to post content are not. CDA 230 provides a procedural benefit in the former case; in the latter it offers an important substantive right. Thus, while losing CDA 230 would marginally increase the risk of taking user postings down, it would greatly increase the risk of leaving postings up. 

So, if conservatives get to force government to offer this bargain, no eligible site is going to take it. And if the hammer then comes down, and CDA 230 immunity is taken away, look for the likes of Google, Facebook, and Twitter to take a much, much heavier hand with the “delete content” switch.

Now, maybe “conservatives” calling for this bargain just really don’t like the interactive web, and would be happy to see it stifled in this way. But if they really believe that there’s a deal to be had here that will lead to a more useful, robust, or “politically neutral” internet, they’re sorely mistaken. 

Why Content Moderation Codes Are More Guidelines Than Rules

Also, following on my last post: since the First Amendment protects site moderation & curation decisions, why all the calls to get rid of CDA 230’s content moderation immunity?

Having listened carefully and at length to the GOP Senators and law professors pitching this, I’d say the position seems to be a mix of bad-faith soapboxing (“look at us take on these tech libs!”) and the idea that sites could be better held to account — contractually, via their moderation codes — if the immunity wasn’t there.

This is because the First Amendment doesn’t necessarily bar claims that various forms of “deplatforming” — like taking down a piece of content, or suspending a user account — violate a site’s Terms of Use, Acceptable Use Policy, or the like. That’s the power of CDA 230(c)(2); it lets sites be flexible, experiment, and treat their moderation policies more as guidelines than rules.

Putting aside the modesty of this argument (rallying cry: “let’s juice breach-of-contract lawsuits against tech companies”) and the irony of “conservatives” arguing for fuller employment of trial attorneys, I’ll make two observations:

First of all, giving people a slightly-easier way to sue over a given content moderation decision isn’t going to lead to sites implementing a “First Amendment standard.” Doing so — which would entail allowing posts containing all manner of lies, propaganda, hate speech, and terrorist content — would turn any site choosing that route into an utter cesspool.

Secondly, what sites WOULD do in response to losing immunity for content moderation decisions is adopt much more rigid content moderation policies. These policies would have less play in them, less room for exceptions, for change, for context. 

Don’t like our content moderation decision? Too bad; it complies with our policy. 

You want an exception? Sorry; we don’t make exceptions to the policy. 

Why not? Because some asshole will sue us for doing that, that’s why not. 

Have a nice day.

CDA 230’s content moderation immunity was intended to give online forums the freedom to curate content without worrying about this kind of claim. In this way, it operates somewhat like an anti-SLAPP law, by providing the means for quickly disposing of meritless claims.

Though unlike a strong anti-SLAPP law, CDA 230(c)(2) doesn’t require that those bringing such claims pay the defendant’s attorney fees.

Hey, now THERE’s an idea for an amendment to CDA 230 I could get behind!

Let’s Talk About “Neutrality” – and How Math Works

So if the First Amendment protects site moderation & curation decisions, why are we even talking about “neutrality?” 

It’s because some of the bigger tech companies — I’m looking at you, Google and Facebook — naively assumed good faith when asked about “neutrality” by congressional committees. They took the question as asking whether they apply neutral content moderation principles, rather than as Act I in a Kabuki play in which bad-faith politicians and pundits would twist their answers into a promise of “scrupulous adherence to political neutrality” (Act II, as described below, being the cherry-picking of anecdotes to try to show that Google and Facebook were lying, and are actually bastions of conservative-hating liberaldom).

And here’s the thing — Google, Twitter, and Facebook probably ARE pretty damn scrupulously neutral when it comes to political content (not that it matters, because THE FIRST AMENDMENT, but bear with me for a little diversion here). These are big platforms, serving billions of people. They’ve got a vested interest in making their platforms as usable and attractive to as many people as possible. Nudging the world toward a particular political orthodoxy? Not so much. 

But that doesn’t stop Act II of the bad faith play. Let’s look at how unmoored from reality it is.

Anecdotes Aren’t Data

Anecdotes — even if they involve multiple examples — are meaningless when talking about content moderation at scale. Google processes 3.5 billion searches per day. Facebook has over 1.5 billion people looking at its newsfeed daily. Twitter suspends as many as a million accounts a day.

In the face of those numbers, the fact that one user or piece of content was banned tells us absolutely nothing about content moderation practices. Every example offered up — from Diamond & Silk to PragerU — is but one little greasy, meaningless mote in the vastness of the content moderation universe. 

“‘Neutrality?’ You keep using that word . . .”

One obvious reason that any individual content moderation decision is irrelevant is simple numbers: a decision representing 0.00000001 of all decisions made is of absolutely no statistical significance. Random mutations — run-of-the-mill content moderation mistakes — are going to produce orders of magnitude more wrongful postings and deletions than even a compilation of hundreds of anecdotes could account for. And mistakes and edge cases are inevitable when dealing with decision-making at scale.
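To put very rough numbers on this, here’s a back-of-envelope sketch in Python. The daily-volume figures are the ones cited above; the anecdote count and the error rate are purely illustrative assumptions, not measurements:

daily_suspensions = 1_000_000   # Twitter account suspensions per day (figure cited above)
daily_searches = 3_500_000_000  # Google searches per day (figure cited above)
anecdotes = 100                 # assumed: a generous pile of cherry-picked examples
error_rate = 0.001              # assumed: honest moderation mistakes at a 0.1% rate

print(f"Anecdotes as a share of one day's suspensions: {anecdotes / daily_suspensions:.4%}")    # 0.0100%
print(f"Expected honest mistakes per day at that rate: {daily_suspensions * error_rate:,.0f}")  # 1,000
print(f"One decision's share of a day's searches: {1 / daily_searches:.1e}")                    # 2.9e-10

Even at an assumed 99.9% accuracy rate, routine error would produce a thousand wrongful decisions every single day, dwarfing any curated list of grievances.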

But there’s more. Cases of so-called “political bias” are, if that’s even possible, even less probative, given the amount of subjectivity involved. If you look at the right-wing whining and whinging about their “voices being censored” by the socialist techlords, don’t expect to see any numerosity or application of basic logic.

Is there any examination of whether those on “the other side” of the political divide are being treated similarly? That perhaps some sites know their audiences don’t want a bunch of over-the-top political content, and thus take it down with abandon, regardless of which political perspective it’s coming from? 

Or how about acknowledging the possibility that sites might actually be applying their content moderation rules neutrally — but that nutbaggery and offensive content isn’t evenly distributed across the political spectrum? And that there just might be, on balance, more of it coming from “the right?” 

But of course there’s not going to be any such acknowledgement. It’s just one-way bitching and moaning all the way down, accompanied by mewling about “other side” content that remains posted.

Which is, of course, also merely anecdotal.

Yes, I’d Like to Think Utah is Taking My Advice

Back in January 2018, I wrote a post titled “What SHOULD Attorney Advertising Regulation Look Like?”

In that post — one of many in my long string of railings against the inanity of the attorney ad rules — I made my pitch plain:

So let’s gut the Rules. We can start by just flat-out eliminating – entirely – Rules 7.2, 7.4, & 7.5. I’ve never heard a remotely compelling argument for the continued existence of these Rules; they are all just sub-variations on the theme of Rule 7.1.

I would be lying if I said that at the time I wrote those words I had any optimism that a state would take this suggestion seriously — at least within my lifetime. But shockingly, this is almost exactly what the Utah Supreme Court has proposed doing: the complete elimination of Rules 7.2 – 7.5.[ref]In fact, Utah is going even further than what I had proposed, eliminating Rule 7.3 (attorney solicitation) and incorporating its central tenets into a new section of Rule 7.1 prohibiting coercion, duress, or harassment when interacting with prospective clients.[/ref]

In that post I also proposed that Bars adopt controlled regulatory tests:

I know that we as lawyers are trained to “spot issues,” but this training drives way too much tentativeness. Instead of applying the precautionary principle – REGULATE NOW, IN CASE THE BAD THINGS HAPPEN – Bars could try controlled tests.

Say a Bar has gotten a question about an innovative product like Avvo Legal Services. Instead of agonizing for 6-12 months over the potential RPC implications, the Bar could – gasp – have a quick talk with the provider and make a deal: the Bar would explicitly let attorneys know it’s OK to participate, if the provider agrees to feed the Bar data on engagement, complaints, etc.

There would also be the understanding that it would be a time-limited test (enough time to get data sufficient to understand consumer impact) and that the Bar could pull the plug early if results looked super-ugly. A process like this would actually IMPROVE the Bar’s ability to address real consumer harm, while smoothing the road to innovation.

Utah? They’ve created a “regulatory sandbox,” where innovations in legal services delivery — including partnerships with non-lawyers — can be tested empirically rather than being squelched out of the gate. That’s exactly what I had in mind.

Just three years ago, Utah was part of the chorus of head-in-the-sand Bars reflexively telling their members that the modest, consumer-friendly innovation that was Avvo Legal Services couldn’t possibly comply with their Rules.[ref]Utah State Bar Ethics Advisory Opinion No. 17-05.[/ref]

Now they’re leading the charge on making real change to the regulations that are holding back consumer access to justice.

While I’d like to think that Utah was persuaded by my advocacy in this area, what’s really important is that a state is actually doing something about the very real problem of regulatory cruft. 

Comments on these proposed changes are open until July 23, 2020. The usual reactionary voices will surely weigh in with their “sky is falling” rhetoric — it would be great if those who support this kind of meaningful regulatory change let the Utah Supreme Court know they are absolutely on the right track.

Lawyer Ethics & CDA 230

I’ve been talking about CDA 230, so let’s explore a case of how “the law that makes the internet go” interfaces with the Rules of Professional Conduct governing the practice of law. 

Scintillating, right? Stick with me here . . .

We’re talking online client reviews. As we all surely know, EVERYTHING is reviewed online – even lawyers.

It’s strange that the Texas Bar is wading into this one in 2020, given that client reviews have been around for a couple of decades now. But hey – the law moves slowly and deliberately, right?

In Opinion 685, issued crisply in January 2020, the Texas Bar determined that Texas lawyers CAN ask their clients to leave online reviews. Hell, they can even ask their clients to leave positive reviews!

Thanks, guys!

What interests me about the conclusion in this otherwise-blindingly-obvious opinion is this little tidbit, tacked on near the end:

But, if a lawyer becomes aware that a client made a favorable false or misleading statement or a statement that the client has no factual basis for making, the lawyer should take reasonable steps to see that such statements are corrected or removed in order to avoid violating Rules 7.02(a) and 8.04(a)(3).

Huh.

Would an attorney on the receiving end of such a review be doing the right thing to ask the client to take the review down or pull back on the praise? Of course. 

But is doing so a requirement? Like, a must-do-on-pain-of-being-disciplined requirement? 

Hell no. 

And not just because of the uncertainty and vagueness involved in making this a hard requirement. No, rather because 47 USC 230(c)(1) dictates: 

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

While typically thought of in the defamation context — a forum site is not responsible for defamatory content posted by its users — it would apply equally here: CDA 230 prevents the application of state licensing rules to hold attorneys responsible for reviews posted by their clients.[ref]That is, provided the attorney didn’t write the review for the client, or create the client and review from whole cloth. CDA 230 doesn’t give businesses a pass for astroturfing.[/ref]

Despite the heavily-litigated history of CDA 230, it’s unlikely this particular issue will ever see the courts. Attorneys are too cautious about threats to their licenses, the steps required to comply are too minimal to be worth fighting over, and bar counsel usually have bigger fish to fry anyway. Still, it’s illustrative of the genius of this little law, and of how far it reaches to remove friction from the free flow of information online.

On the Perils of Content Moderation (Part 3 of 3)

Continuing my content moderation story from Part 2:

Despite the blustering and threats, most lawyers understood that the First Amendment protected Avvo’s right to publish its lawyer profiles and ratings, and that it would be pointless to go through with a lawsuit. The published decision in Browne v. Avvo helped as well. So they’d eventually go away. The guardrails of the First Amendment kept them from filing.

And at this point, after 20+ years of extensive litigation, CDA 230 operates in a similar fashion. Sure, there continue to be disputes along the frontier, and there’s always going to be a certain background level of utterly frivolous and SLAPP actions. But for the most part, things operate in a fashion where the rules are settled and suing is pointless. 

Which is why Avvo didn’t get sued for posting third-party content — despite how exercised some lawyers would get over negative reviews.

But what if these lawyers had an argument they could leverage? Like, that Avvo’s content moderation had to be “reasonable,” or “neutral?” Or that Avvo could be liable for not adhering to its published content moderation standards?

Had there been ANYTHING that would make them think, “well, I’ve got a chance at this thing,” Avvo would have been buried in lawsuits. And even if we’d been able to turn these suits away on the pleadings, doing so would have been super-expensive.

How expensive? Tech policy shop Engine, in an indispensable primer on the value of CDA 230, estimates that disposing of a frivolous lawsuit on a preliminary motion to dismiss can cost $80,000. And in my experience, it can cost LOTS more than that if the issues are complicated, the plaintiff is proceeding in bad faith, you draw a bad judge, etc., etc.

Now, some internet commenters would say that the way to avoid this risk is to just not do the bad things. But here in the real world, the way companies will avoid this risk (at least until they get big enough to take the costs) will be to either not moderate content (thus destroying the user experience) or simply not post third party content at all.

So, a cesspool of a user experience on the one hand; a much-lessened interactive internet on the other. Take your pick.

Bottom line — the clarity that CDA 230 provides is super-valuable in shutting down, at the get-go, anyone who wants to roll the dice on taking your startup out with a little lawfare. And the genius of CDA 230 is that it provides the breathing room for sites to set their own rules, and moderate content using their own discretion, without fear of being punished by the government or subjected to ruinous litigation for so doing.

Perversely, while all of the noise about limiting/eliminating CDA 230 is driven by frustration at Facebook, Google, and other giant platforms, it’s not like neutering the law would even really impact those guys. They’ve got the scale to take the cost of regulation. 

But smaller, newer services? No way. They’d be forced into the loser of a choice I’ve described above: cesspool or wasteland.

Policymakers should think long and hard about the implications for the wider world of innovative online services before even thinking about “tweaks” to CDA 230.

(Part 1, Part 2)

On the Perils of Regulating Content Moderation (Part 2 of 3)

In the first post in this series, I went through the background on CDA 230’s protection for the content moderation decisions of site operators. Today, a story about the implications of adding greater liability in this area — and why exposing sites to liability for their moderation decisions would render unviable most online services that rely on third party content.

From 2007 to 2018 I was general counsel for Avvo, an online resource for people to research legal issues and find lawyers. One of Avvo’s innovations (and the reason it needed a GC from its earliest stages) was that it published a profile of every lawyer in the country — whether lawyers liked it or not. As those profiles included disciplinary history, Avvo’s rating of the lawyer’s background, and client reviews . . . well, some lawyers didn’t like it.

The week it launched, Avvo was sued in a nationwide class action alleging that Avvo’s editorial rating of attorneys was defamatory. And although that case was thrown out on the pleadings, getting such a result was expensive. In the years that followed, Avvo grew and became more important to consumers and lawyers alike. Despite this, lawyers – often sanctioned lawyers, who disliked the fact that Avvo exposed disciplinary history far more effectively than the websites of the state Bars – tried other vectors of attack. These included consumer fraud, publicity rights, etc. None of these cases survived the pleadings. But pushing back on them wasn’t without cost. These were largely unexplored areas at the intersection of publishing, public records, and commercial speech. Fortunately, Avvo had the resources and was able to aggressively fight back.

But client reviews? For the most part, nobody sued over those.[ref]Or, at least they didn’t until very late in Avvo’s run as an independent company, when an attorney tried the angle that California’s unfair trade practices law required some sort of judicially-imposed review moderation regime quite at odds with CDA 230. We got the complaint stricken under California’s stellar anti-SLAPP law.[/ref]

Oh, it wasn’t that every attorney loved their client reviews, or believed that they accurately reflected the services provided. Far from it. For while most reviews ran positive – it turns out people appreciate being gotten out of a jam – some, inevitably, did not. It’s a result dictated by the law of large numbers; clients were posting thousands of reviews on Avvo every week.[ref]I do recall a single occasion when an attorney – the attorney behind this video, in fact – readily conceded that he’d earned a poor review. Much respect.[/ref]

And lawyers certainly threatened to sue Avvo over those reviews. Hundreds and hundreds of times. But CDA 230’s broad and straightforward language — and its hard-fought litigation history — ensured that the threateners scuttled away, often not even bothering to leave a “SEE YOU IN COURT!” in their wake.

I sometimes felt like a part-time CDA 230 instructor, educating my fellow members of the bar, one at a time, on the simple brilliance of the 26 words that created the internet.

But what if CDA 230’s protections were hedged? What if Avvo had some obligation to moderate content “reasonably,” or take content down on affidavit or “notice of falsity,” or any of the many other suggested tweaks to the statute?

It would have been game over. 

More on that in the final post in this series.

On the Perils of Regulating Content Moderation (Part 1 of 3)

You’ll have to forgive social media companies for feeling whiplashed on the policy front.

Should they be forced to determine the truth or falsity of all political ads?

Or should they be forced to carry ALL political ads, regardless of truthfulness?

Should they have to post everything their users post, in the quest for “balance?”

Or should they be forced to eliminate hate speech from their platforms – on pain of jail time?

These proposals are all, to varying degrees, misguided, incoherent, impossible, and just plain bad policy. 

Yet they’re being promoted by lots of influential people. There are bad-faithers in Congress — like Senators Josh Hawley and Ted Cruz — calling for online companies to be neutral platforms, to stop “determining falsity” and “censoring conservatives” and, I guess, just mindlessly publish whatever their users decide to post or include in advertisements.

And on the other hand, you’ve got people like Sacha Baron Cohen — who famously abused the trust and good nature of lots of ordinary people in service of his comedy — calling for social media companies to do ever-more, to moderate more fastidiously, within a set of rules and guidelines, deviation from which could lead to criminal sanctions.

Obviously, you can’t have it both ways, people.

But here’s the thing: “you must publish it all” and “you must moderate better” are both horrible attempts to constrain online platforms. Besides the glaring First Amendment problems, these types of suggestions carry the not-insignificant consequences of either shutting these platforms down or turning them into absolute sewers.

The beauty of existing law, in the US at least, is that sites are free to make their own determinations about what gets published on their platforms. This is thanks to the “good faith content moderation” section of 47 USC 230(c), otherwise known as “CDA 230.” It’s the companion to CDA 230’s more widely-known feature: immunity from liability for hosting third-party content.[ref]Not without limitation; CDA 230 immunity doesn’t apply to federal crimes or intellectual property claims.[/ref]

CDA 230 provides the breathing room for sites to host third party content to serve their audiences. And while “you’ve got to be a neutral platform” is not a serious proposal (if sites were required to publish everything that users throw at them, they would be utterly useless), the idea that sites should be held to government-imposed content moderation rules is more pernicious by virtue of its surface appeal. Why not require that sites moderate away objectionable content?

The rejoinder should be obvious: who gets to determine what’s “objectionable?” 

Unfortunately, there’s never a shortage of people stepping up to take on the censor’s role. 

Or people who blindly react to the fresh outrage of today, forgetting that today’s “cut off the hurtful speech” is tomorrow’s censorship of the powerless.

Or people who pooh-pooh the problem, confidently stating that surely — SURELY — guardrails can be built to require just the right amount of content moderation.

Calling for greater regulation of online content moderation is a recipe for First Amendment violations and unintended consequences. The specter of liability for “getting content moderation wrong” is spectacularly under-appreciated. In Part 2, I’ll get into the details of how this plays out in practice.

Will SCOTUS Address Professional Speech?

Watching with interest: whether the Supreme Court grants cert in Capital Associated Industries v. Stein, a 4th Circuit decision out of North Carolina addressing the interplay between legal licensing and the First Amendment.

While the Stein decision ultimately decides that regulation of the unlicensed practice of law is subject to intermediate scrutiny – and that the North Carolina UPL regulation meets that standard – the opinion suggests that restricting the provision of legal advice is merely conduct regulation, not speech regulation.

That doesn’t seem remotely right. Legal advice . . . is conduct?

Yes, yes – speech can be conduct under certain limited circumstances. But normally when we’re talking about speech-as-conduct we’re talking about consumer disclosure requirements or the things physicians have to say in order to obtain informed consent from their patients. The speech in such cases is treated as conduct because it’s incidental to the good or service at issue.

But legal advice? That’s speech, and, really, nothing BUT speech.

This is not to say that providing legal advice can’t be regulated — or even that intermediate scrutiny isn’t the right standard by which to judge such regulation (though Widener Law Dean Rodney Smolla makes a compelling case for strict scrutiny).

But it’s sloppy and unhelpful for courts to futz around and conflate concepts like “incidental effects on speech,” “speech-as-conduct,” and “bona fide licensing requirements” when talking about government restrictions on the content of speech. That’s going to keep happening without a coherent approach to professional speech regulation. It would be great if the Supreme Court took this opportunity to finally sort things out on this, one of the least-explored frontiers of First Amendment law.

Updated: Nope; cert denied. A shame.

Spineless FTC Goes Weak on Astroturfing

Writing astroturf reviews is WRONG, people. Like, OBVIOUSLY wrong.

So you’d think that if you got caught instructing your employees, in sunny yet oh-so-detailed ways, on how to leave fake positive reviews for your products, you would get more than just a slap on the wrist.

Right?

I mean, that’s what happened to the sorry bastards running Lifestyle Lift, who got smashed by the NY AG’s office to the tune of $300K after creating an elaborate scheme of fake microsites and reviews. Dozens of other companies have also paid 5- and 6-figure sums to settle astroturfing complaints brought by regulators.

But for shlepper of high-end beauty products Sunday Riley? Who gave employees a nine-step guide to writing fake reviews on Sephora’s website (you can read it in all of its hyper-specific, fraud-tastic glory here)? They’ve earned merely a stern talking-to and a “please don’t be naughty again” from the FTC.

Timothy Geigner at Techdirt put it best, describing Sunday Riley’s practices as:

really blatant, really fake, and really shady. This was a coordinated attempt to falsely manipulate the review system of Sephora for the purposes of fooling the public into buying more product.

This wasn’t a foot-fault, a naive error, or a single instance of wrongdoing. It was a calculated effort to fool a public that already has a super-hard time staying informed about the rapidly-evolving skincare industry.

But instead of stomping on this, the FTC basically greenlit further wrongdoing. Its settlement with Sunday Riley doesn’t require payment of any money, agreement to any kind of oversight, or even an admission of wrongdoing. While I’m no fan of agency overreach, this is the kind of factual record that screams out for significant punishment – not this “tsk-tsk” nonsense.