The calls for Twitter to ban/censure/otherwise punish Trump came swiftly. And in true Twitter fashion, the company did . . . something entirely different. It tentatively waded into ANOTHER Trump Twitter Shitshow — this one involving the Umber Menace inveighing about the perils of voting by mail. Twitter decided THIS was the fight it was going to pick, appending a little linked tooltip disclaimer to the offending tweets:
This is a form of content moderation, and obviously far short of a takedown or account suspension. And as “flags for offensive content” go, this is pretty mild. Hell, squint at it just a little and it looks like an endorsement of Trump’s claim.
But Trump, naturally, took offense:
“Strongly regulate or close them down?” What even IS this noise?
At one time, Republicans cared about the First Amendment. They noted, rightly, that corporations have free speech rights. And you know what? If social media platforms were actually silencing conservative voices, REAL conservatives would say “So what? The government’s got no role to play there.”
Instead, we’ve now got Trump doing his snowflake-swagger routine, echoing the bad faith nonsense that Republican Senators have been spewing for months:
There’s another shift that comes amid the whining about “tech censorship” and the “silencing of conservative voices online,” and it’s a purely transactional one: put up our content or lose the immunity that CDA 230 provides. Here’s a concise recent formulation of the bargain on offer:
So: either moderate your site to allow all First Amendment-protected speech, or lose your immunity with respect to third party postings. Your choice.
Now, putting aside some obvious questions (Which sites are “market-dominant?” Could a scheme like this actually pass First Amendment scrutiny? How do we determine whether a site has lived up to this 1A standard in a given case?), it’s worth asking: would any market-dominant site actually agree to this bargain?
OK, that’s a rhetorical question.
Because the answer is that there is absolutely no chance that any such site would take this deal.
The First Amendment allows for a stunningly wide range of noisy, messy, irrelevant, dishonest, and offensive expression. And even if this deal allows sites to impose reasonable content-neutral restrictions (like post length, duplication, formatting, etc.), it would unleash a torrent of vitriol, abuse, pornography, and abject nonsense. Sites would be helpless to shape a desired user experience or serve a particular audience. They’d just be bulletin boards for unstructured, unfocused expression.
And what would these successful sites get for this deal? The right to be immune from liability for third party postings and their curatorial decisions?
Sure, that’s nice and all, but it’s not an existential risk to any site that’s reached market dominance. CDA 230 immunity is most important to sites that are just getting started, which can’t absorb the cost of fending off suits over user-generated content. Big, established sites? They’ve got the resources to absorb that cost. And faced with a no-win choice like this, giving up the immunity is certainly a better alternative than turning one’s site into an unusable cesspool.
What the market-dominant firms WOULD do in response to this ultimatum is pretty much the polar opposite of what the conservatives claim to be advocating for: they’d become much, much more aggressive about policing the sort of content they allow to be posted.
Why? Because decisions to take content down, de-emphasize posts, or suspend posting privileges are protected by the First Amendment in a way that decisions to post content are not. CDA 230 provides a procedural benefit in the former case; in the latter it offers an important substantive right. Thus, while losing CDA 230 would marginally increase the risk of taking user postings down, it would greatly increase the risk of leaving postings up.
So, if conservatives get to force government to offer this bargain, no eligible site is going to take it. And if the hammer then comes down, and CDA 230 immunity is taken away, look for the likes of Google, Facebook, and Twitter to take a much, much heavier hand with the “delete content” switch.
Now, maybe “conservatives” calling for this bargain just really don’t like the interactive web, and would be happy to see it stifled in this way. But if they really believe that there’s a deal to be had here that will lead to a more useful, robust, or “politically neutral” internet, they’re sorely mistaken.
Having listened carefully and at length to the GOP Senators and law professors pitching this, the position seems to be a mix of bad faith soapboxing (“look at us take on these tech libs!”) and the idea that sites could be better held to account — contractually, via their moderation codes — if the immunity wasn’t there.
Putting aside the modesty of this argument (rallying cry: “let’s juice breach-of-contract lawsuits against tech companies”) and the irony of “conservatives” arguing for fuller employment of trial attorneys, I’ll make two observations:
First of all, giving people a slightly-easier way to sue over a given content moderation decision isn’t going to lead to sites implementing a “First Amendment standard.” Doing so — which would entail allowing posts containing all manner of lies, propaganda, hate speech, and terrorist content — would make any such site choosing this route an utter cesspool.
Secondly, what sites WOULD do in response to losing immunity for content moderation decisions is adopt much more rigid content moderation policies. These policies would have less play in them, less room for exceptions, for change, for context.
Don’t like our content moderation decision? Too bad; it complies with our policy.
You want an exception? Sorry; we don’t make exceptions to the policy.
Why not? Because some asshole will sue us for doing that, that’s why not.
Have a nice day.
CDA 230’s content moderation immunity was intended to give online forums the freedom to curate content without worrying about this kind of claim. In this way, it operates somewhat like an anti-SLAPP law, by providing the means for quickly disposing of meritless claims.
Though unlike a strong anti-SLAPP law, CDA 230(c)(2) doesn’t require that those bringing such claims pay the defendant’s attorney fees.
Hey, now THERE’s an idea for an amendment to CDA 230 I could get behind!
So if the First Amendment protects site moderation & curation decisions, why are we even talking about “neutrality?”
It’s because some of the bigger tech companies — I’m looking at you, Google and Facebook — naively assumed good faith when asked about “neutrality” by congressional committees. They took the question as asking whether they apply neutral content moderation principles, rather than as Act I in a Kabuki play where bad-faith politicians and pundits would twist the answer into a promise of “scrupulous adherence to political neutrality” (and where Act II, as described below, would involve cherry-picking anecdotes to try to show that Google and Facebook were lying, and are actually bastions of conservative-hating liberaldom).
And here’s the thing — Google, Twitter, and Facebook probably ARE pretty damn scrupulously neutral when it comes to political content (not that it matters, because THE FIRST AMENDMENT, but bear with me for a little diversion here). These are big platforms, serving billions of people. They’ve got a vested interest in making their platforms as usable and attractive to as many people as possible. Nudging the world toward a particular political orthodoxy? Not so much.
But that doesn’t stop Act II of the bad faith play. Let’s look at how unmoored from reality it is.
In the face of content moderation at that scale, the fact that one user or piece of content was banned tells us absolutely nothing about content moderation practices. Every example offered up — from Diamond & Silk to PragerU — is but one little greasy, meaningless mote in the vastness of the content moderation universe.
“‘Neutrality?’ You keep using that word . . .”
One obvious reason that any individual content moderation decision is irrelevant is simple numbers: a decision representing 0.00000001 of all decisions made has absolutely no statistical significance. Simple error — content moderation mistakes — will account for vastly more wrongful postings and deletions than even a compilation of hundreds of anecdotes could capture. And mistakes and edge cases are inevitable when dealing with decision-making at scale.
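The scale point lends itself to some back-of-envelope arithmetic. Here is a quick sketch (every number below is an assumption, chosen only to illustrate orders of magnitude; real platforms don't publish these figures):

```python
# Back-of-envelope illustration of moderation at scale.
# ALL numbers are assumptions for the sake of argument.

daily_decisions = 500_000_000  # assumed: moderation decisions per day at a large platform
error_rate = 0.001             # assumed: 99.9% of decisions are "correct"

# Even excellent accuracy yields a huge absolute number of mistakes.
mistakes_per_day = daily_decisions * error_rate

# A generous compilation of cherry-picked anecdotes.
anecdotes = 300

print(f"Mistakes per day: {mistakes_per_day:,.0f}")  # 500,000
print(f"Anecdotes as share of one day's mistakes: {anecdotes / mistakes_per_day:.4%}")
```

Even at an assumed 99.9% accuracy, this hypothetical platform makes half a million mistaken calls every single day; a few hundred cherry-picked anecdotes are a rounding error on top of that, and tell you nothing about bias.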
But there’s more. Cases of so-called “political bias” are, if that’s even possible, even less determinative, given the amount of subjectivity involved. If you look at the right-wing whining and whinging about their “voices being censored” by the socialist techlords, don’t expect to see any numerosity or application of basic logic.
Is there any examination of whether those on “the other side” of the political divide are being treated similarly? That perhaps some sites know their audiences don’t want a bunch of over-the-top political content, and thus take it down with abandon, regardless of which political perspective it’s coming from?
Or how about acknowledging the possibility that sites might actually be applying their content moderation rules neutrally — but that nutbaggery and offensive content isn’t evenly distributed across the political spectrum? And that there just might be, on balance, more of it coming from “the right?”
But of course there’s not going to be any such acknowledgement. It’s just one-way bitching and moaning all the way down, accompanied with mewling about “other side” content that remains posted.
Over the last year or so, there’s been a surge of claims that Google, Twitter, YouTube, etc. are “biased against conservatives.”
The starting point of this bad faith argument is a presumption that sites should be “neutral” about their content moderation decisions — decisions like which accounts Twitter suspends, how Google or Facebook rank content in search results or news feeds, or how YouTube promotes or obfuscates videos.
More about this “neutrality” nonsense in a later post, but let’s move on with how this performative mewling works.
So after setting up the strawman standard of “neutrality,” these self-styled “conservatives” turn to anecdotes showing that their online postings were unpublished, de-monetized, shadow-banned, or otherwise not made available to the widest audience possible.
These anecdotes are, of course, offered as evidence that sites haven’t been “neutral.”
And it’s not just some unfocused wingnut whining. This attitude is also driving a number of legislative proposals to amend and scale back CDA 230 — the law that makes the internet go.
Conservative Senators like Josh Hawley, Ted Cruz, and Lindsey Graham — lawyers all, who surely know better — bitch and moan about CDA 230’s content moderation immunity. If only sites didn’t have this freebie, they say — well, then, we’d see some neutrality and fair treatment, yessiree.
This is total bullshit.
Sure, CDA 230(c)(2) makes sites immune from being sued for their content moderation decisions. But that’s only important to the extent it keeps people from treating “community guidelines” and “acceptable use policies” as matters of contract that can be sued over.
Moderation? Curation? Promotion? All of that stuff is fully protected by the First Amendment.
Really, I can’t stress this enough:
CONTENT MODERATION DECISIONS ARE PROTECTED BY THE FIRST AMENDMENT.
Eliminating content moderation protections from CDA 230 doesn’t change this fact.
It can’t change this fact. Because CDA 230 is a statute and not the FIRST AMENDMENT.
So why all the arguing for CDA 230 to be carved back? Some of it is surely just bad-faith angst about “big tech,” misplaced in a way that would unduly harm small, innovative sites. But a lot of it is just knee-jerk reaction from those who actually think that removing the immunity-for-moderation found in CDA 230(c)(2) will usher in a glorious new world where sites will have to publish everything.
Which, by the way, would be awful. Any site that just published virtually everything users posted (that’s the true “First Amendment standard”) would be an unusable hellhole. No site is going to do that — and, again . . .
They don’t have to BECAUSE THE FIRST AMENDMENT PROTECTS CONTENT MODERATION DECISIONS.
OK, so in my post on the Vizaline case, I explored the bizarre idea that regulators can keep people from speaking on certain topics, just by requiring a license to talk about those things — and that the decision to require such a license-to-speak can be supported by little more than caprice.
Hopefully other courts will follow the Fifth Circuit and swiftly eliminate this glitch. For Vizaline — with its blunt invocation of the Supreme Court’s 2018 decision in NIFLA v. Becerra — reinforces the notion that real limits exist on the ability of the state to regulate the speech of licensed professionals. And this is so important because a distressingly large number of lawyers and judges — who really should know better — seem to get First Amendment amnesia when it comes to this area.
NIFLA v. Becerra was the Supreme Court’s first-ever decision directly addressing the concept of professional speech regulation writ large. And in that decision, the Court summarized the state of play:
The Court has afforded less protection for professional speech in two circumstances — where a law requires professionals to disclose factual, noncontroversial information in their “commercial speech” . . . and where States regulate professional conduct that incidentally involves speech. 1
The NIFLA opinion goes on to note that the Supreme Court has had several occasions to reinforce that the full protection of the First Amendment applies to most professional speech:
“The Court has applied strict scrutiny to content-based laws regulating the noncommercial speech of lawyers, professional fundraisers, and organizations providing specialized advice on international law. And it has stressed the danger of content-based regulations “in the fields of medicine and public health, where information can save lives.” Sorrell v. IMS Health Inc., 564 U.S. 552, 566.” (internal cites omitted)
And lest we forget, when professionals speak in a non-professional context — even when talking about their licensed profession — THAT speech is as fully First Amendment-protected as it would be if uttered by a non-licensed citizen.
There likely ARE other areas of professional speech where regulation can meet a lesser standard — or can simply clear the bar of strict scrutiny. There’s a lot more work for the courts to do before we arrive at an appropriately narrowed professional speech doctrine.
In fairness, I think what the court is saying here is that these are the two situations where it has applied the lowest protection for professional speech; regulation in these areas need only meet the rational basis test. It has also applied lower protection — the intermediate scrutiny test — to regulation of all commercial speech other than basic disclosures. ↩
In that post — one of many in my long string of railings against the inanity of the attorney ad rules — I made my pitch plain:
So let’s gut the Rules. We can start by just flat-out eliminating – entirely – Rules 7.2, 7.4, & 7.5. I’ve never heard a remotely compelling argument for the continued existence of these Rules; they are all just sub-variations on the theme of Rule 7.1.
I would be lying if I said that at the time I wrote those words I had any optimism that a state would take this suggestion seriously — at least within my lifetime. But shockingly, this is almost exactly what the Utah Supreme Court has proposed doing: the complete elimination of Rules 7.2 – 7.5. 1
In that post I also proposed that Bars adopt controlled regulatory tests:
I know that we as lawyers are trained to “spot issues,” but this training drives way too much tentativeness. Instead of applying the precautionary principle – REGULATE NOW, IN CASE THE BAD THINGS HAPPEN – Bars could try controlled tests.
Say a Bar has gotten a question about an innovative product like Avvo Legal Services. Instead of agonizing for 6-12 months over the potential RPC implications, the Bar could – gasp – have a quick talk with the provider and make a deal: the Bar would explicitly let attorneys know it’s OK to participate, if the provider agrees to feed the Bar data on engagement, complaints, etc.
There would also be the understanding that it would be a time-limited test (enough time to get data sufficient to understand consumer impact) and that the Bar could pull the plug early if results looked super-ugly. A process like this would actually IMPROVE the Bar’s ability to address real consumer harm, while smoothing the road to innovation.
Utah? They’ve created a “regulatory sandbox” — where innovations in legal services delivery, including partnerships with non-lawyers, can be tested empirically rather than being squelched out of the gate. That’s exactly what I had in mind.
Just three years ago, Utah was part of the chorus of head-in-the-sand Bars reflexively telling their members that the modest, consumer-friendly innovation that was Avvo Legal Services couldn’t possibly comply with their Rules. 2
Now they’re leading the charge on making real change to the regulations that are holding back consumer access to justice.
While I’d like to think that Utah was persuaded by my advocacy in this area, what’s really important is that a state is actually doing something about the very real problem of regulatory cruft.
In fact, Utah is going even further than what I had proposed, eliminating Rule 7.3 (attorney solicitation) and incorporating its central tenets into a new section of Rule 7.1 prohibiting coercion, duress, or harassment when interacting with prospective clients. ↩
Utah State Bar Ethics Advisory Opinion No. 17-05. ↩
It’s been all COVID-19 for the last couple of months, so I’m taking a break to take a look at a new professional speech case that I missed when it dropped in late February.
The case is Vizaline v. Tracy, out of the Fifth Circuit. And the thing I love about this case is that it takes on, directly, the fundamental issue I have with so many of the earlier professional speech cases: the idea that the gating function of professional licensing itself is somehow magically immune from First Amendment issues.
Here’s what I mean. There’s little question that when it comes to the speech of professionals, the First Amendment applies. For example, there’s a well-established body of law relating to professional marketing speech, and an (admittedly underdeveloped) body of law when it comes to the speech professionals engage in with their clients. But at least the parameters are understood — the First Amendment applies, and we’re just negotiating about which standard of review the state has to live up to.
But something quirky happens when it comes to entry to the professions. In these cases, courts routinely handwave away the First Amendment issue, despite the fact that entry restrictions are sweeping: they keep the vast majority of the public from engaging in certain types of speech.
So, Vizaline. This company converts existing metes-and-bounds descriptions of real property — the raw data you’d find if you looked up property records at the county Recorder’s office — into simple maps. It sells these maps to community banks who would otherwise have to obtain surveys (from licensed surveyors, of course) on less-expensive properties used as collateral for mortgages.
It isn’t like Vizaline is passing itself off as something it isn’t. Vizaline does simple maps, and discloses that what it offers is “not a Legal Survey or intended to replace a Legal Survey.” That didn’t stop Mississippi’s surveyor-licensing board from coming after the company.
What for? “Surveying without a license,” that’s what for. Which honestly . . . doesn’t sound that awful, but which turns out to be both a civil and criminal offense in the Magnolia State.
Represented by The Institute for Justice (which, along with the R Street Institute, is one of the few groups focused on the excesses of professional licensure), Vizaline contended that its maps are speech, and as such are entitled to First Amendment protection.
The District Court wasn’t having it. That court found no First Amendment issue, on the remarkable theory that the requirement of a license only “incidentally infringes” on Vizaline’s speech because the licensing requirement merely determines who can speak. [ed. note: LOL]
On appeal, the Fifth Circuit went straight to the Supreme Court’s 2018 NIFLA v. Becerra decision, noting that case had eviscerated the concept that the gatekeeping function of licensing acts like some sort of First Amendment get-out-of-jail-free card:
The district court’s holding that occupational-licensing provisions “do not trigger First Amendment scrutiny” is contrary to the Supreme Court’s decision in NIFLA. NIFLA makes clear that occupational-licensing provisions are entitled to no special exception from otherwise-applicable First Amendment protections.
Bam. For as the Supreme Court had noted in NIFLA:
All that is required to make something a “profession,” according to these courts, is that it involves personalized services and requires a professional license from the State. But that gives the States unfettered power to reduce a group’s First Amendment rights by simply imposing a licensing requirement. States cannot choose the protection that speech receives under the First Amendment, as that would give them a powerful tool to impose “invidious discrimination of disfavored subjects.”
The Fifth Circuit panel described NIFLA as essentially eliminating the “professional speech doctrine” — described in this case as a doctrine excepting professional speech from ANY First Amendment scrutiny — and remanded to the District Court to determine whether Mississippi’s licensing requirements implicate speech or non-expressive professional conduct.
While the remand is understandable, it leaves one a little wanting, as the licensing requirements at issue seem to plainly implicate expressive conduct. But overall, Vizaline points in a hopeful direction, one where the “professional speech doctrine” takes on a new understanding as protecting both the First Amendment rights of professionals and those who approach the murky bounds of licensed professional activity.
I’ve been talking about CDA 230, so let’s explore a case of how “the law that makes the internet go” interfaces with the Rules of Professional Conduct governing the practice of law.
Scintillating, right? Stick with me here . . .
We’re talking online client reviews. As we all surely know, EVERYTHING is reviewed online – even lawyers.
It’s strange that the Texas Bar is wading into this one in 2020, given that client reviews have been around for a couple of decades now. But hey – the law moves slowly and deliberately, right?
In Opinion 685, issued crisply in January, 2020, the Texas Bar determined that Texas lawyers CAN ask their clients to leave online reviews. Hell, they can even ask their clients to leave positive reviews!
What interests me about the conclusion in this otherwise-blindingly-obvious opinion is this little tidbit, tacked on near the end:
“But, if a lawyer becomes aware that a client made a favorable false or misleading statement or a statement that the client has no factual basis for making, the lawyer should take reasonable steps to see that such statements are corrected or removed in order to avoid violating Rules 7.02(a) and 8.04(a)(3).”
Would an attorney on the receiving end of such a review be doing the right thing to ask the client to take the review down or pull back on the praise? Of course.
But is doing so a requirement? Like, a must-do-on-pain-of-being-disciplined requirement?
I’d argue no. And not just because of the uncertainty and vagueness involved in making this a hard requirement. No, rather because 47 USC 230(c)(1) dictates:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
While typically thought of in the defamation context — a forum site is not responsible for defamatory content posted by its users — it would apply equally here: CDA 230 prevents the application of state licensing rules to hold attorneys responsible for reviews posted by their clients. 1
Despite the heavily-litigated history of CDA 230, it’s unlikely this particular issue will ever see the courts. Attorneys are too cautious about threats to their licenses, the steps required to comply are minimal, and bar counsel usually have bigger fish to fry anyway. Still, it illustrates the genius of this little law, and how far it reaches to remove friction from the free flow of information online.
That is, provided the attorney didn’t write the review for the client, or create the client and review from whole cloth. CDA 230 doesn’t give businesses a pass for astroturfing. ↩
Continuing my content moderation story from Part 2:
Despite the blustering and threats, most lawyers understood that the First Amendment protected Avvo’s right to publish its lawyer profiles and ratings, and that it would be pointless to go through with a lawsuit. The published decision in Browne v. Avvo helped as well. The guardrails of the First Amendment kept them from filing, and so they’d eventually go away.
And at this point, after 20+ years of extensive litigation, CDA 230 operates in a similar fashion. Sure, there continue to be disputes along the frontier, and there’s always going to be a certain background level of utterly frivolous and SLAPP actions. But for the most part, things operate in a fashion where the rules are settled and suing is pointless.
Which is why Avvo didn’t get sued for posting third-party content — despite how exercised some lawyers would get over negative reviews.
But what if these lawyers had an argument they could leverage? Like, that Avvo’s content moderation had to be “reasonable,” or “neutral?” Or that Avvo could be liable for not adhering to its published content moderation standards?
Had there been ANYTHING that would make them think, “well, I’ve got a chance at this thing,” Avvo would have been buried in lawsuits. And even if we’d been able to turn these suits away on the pleadings, doing so would have been super-expensive.
How expensive? Tech policy shop Engine, in an indispensable primer on the value of CDA 230, estimates that disposing of a frivolous lawsuit on a preliminary motion to dismiss can cost $80,000. And in my experience, it can cost LOTS more than that if the issues are complicated, the plaintiff is proceeding in bad faith, you draw a bad judge, etc., etc.
Now, some internet commenters would say that the way to avoid this risk is to just not do the bad things. But here in the real world, the way companies will avoid this risk (at least until they get big enough to absorb the costs) will be to either not moderate content (thus destroying the user experience) or simply not post third party content at all.
So, a cesspool of a user experience on the one hand; a much-lessened interactive internet on the other. Take your pick.
Bottom line — the clarity that CDA 230 provides is super-valuable in shutting down, at the get-go, anyone who wants to roll the dice on taking your startup out with a little lawfare. And the genius of CDA 230 is that it provides the breathing room for sites to set their own rules, and moderate content using their own discretion, without fear of being punished by the government or subjected to ruinous litigation for so doing.
Perversely, while all of the noise about limiting/eliminating CDA 230 is driven by frustration at Facebook, Google, and other giant platforms, it’s not like neutering the law would even really impact those guys. They’ve got the scale to absorb the cost of regulation.
But smaller, newer services? No way. They’d be forced into the loser of a choice I’ve described above: cesspool or wasteland.
Policymakers should think long and hard about the implications for the wider world of innovative online services before even thinking about “tweaks” to CDA 230.