Summer of CDA 230

Since my last series of posts on this topic, there has been an epic surge of hot nonsense government proposals on CDA 230.

First up: Donald F. Trump’s executive order purporting to take social media companies to task for, well . . . see for yourself:

Don’s Executive Order in response to this injustice is performative nonsense (read Eric Goldman’s comprehensive overview for more detail), but it is passing strange how “conservatives” have suddenly embraced government control of private speech. I’m old enough to remember when Republicans actually understood how the First Amendment works. Citizens United, anyone? 

Attorney General Bill Barr dutifully followed this up with a set of “Recommendations for Section 230 Reform.” These include broadening takedown requirements, adding disclosures, narrowing CDA 230 immunity for takedown decisions, and specifying that CDA 230 immunity doesn’t apply to antitrust enforcement (I don’t know that anyone thought that it did, but whatever). If you are looking for conservative, small-government principles . . . you aren’t going to find them here.

Next up was the Bad Josh, the would-be “Product Manager for the Internet,” Missouri Senator Josh Hawley. Hawley and his crowd of boob-baiting fellow Republican Senators (all of whom are smart enough to know better) introduced an absolute turd of a bill designed to undercut CDA 230. While it has the virtue of only applying to the largest social media sites (thanks for that, I guess), it also doesn’t appear to do anything other than create a cottage industry of nuisance lawsuits. On the bright side, that creates a nice contingency plan for me, should I decide on a late-career switch back to being a litigator.

Closely following Hawley’s offering was the bipartisan PACT Act, sponsored by Hawaii Senator Brian Schatz and South Dakota Senator John Thune. While this bill includes some generally unobjectionable disclosure standards, it carves back CDA 230 immunity, creates a notice-and-takedown regime, and imposes some truly laughable requirements (like a live telephone call center to respond to inquiries about moderation decisions).

Oh, and at the same time there’s ALSO the bipartisan, oh-so-self-righteously-named “EARN IT” Act which would, that’s right, make platforms earn their CDA 230 immunity by satisfying a federal commission that they are doing all they can to prevent online sexual exploitation of children. The devil is always in the details when it comes to “think of the children” legislation like this, but it’s a safe bet that putting the thick, sweaty thumb of government on site operations in such a manner doesn’t bode well for innovation or privacy.

All in all, it’s already been a VERY busy summer for CDA 230, and it isn’t even July yet. These proposals are driven by everything from legitimate concern over edge-case issues to frustration at the reach of the First Amendment to bad-faith posturing to find any sort of brickbat to take to “Big Tech.” 

But we must keep in mind that the simplicity of CDA 230 — a too-rare example of government regulation getting out of the way and providing breathing room for new technology — has been a massive factor in enabling the growth of the internet as we know it over the last 24 years. Just because a multiplicity of voices are calling for change doesn’t mean that change is necessary or wise.   

“Strongly Regulate,” Social Media Edition

So earlier this week the President of the United States took to Twitter to baselessly accuse a private citizen of murder (as one does, amidst a pandemic that has claimed 100,000 American lives and throttled the economy). 

The calls for Twitter to ban/censure/otherwise punish Trump came swiftly. And in true Twitter fashion, the company did . . . something entirely different. It tentatively waded into ANOTHER Trump Twitter Shitshow — this one involving the Umber Menace inveighing about the perils of voting by mail. Twitter decided THIS was the fight it was going to pick, appending a little linked tooltip disclaimer to the offending tweets:

This is a form of content moderation, and obviously far short of a takedown or account suspension. And as “flags for offensive content” go, this is pretty mild. Hell, squint at it just a little and it looks like an endorsement of Trump’s claim.

But Trump, naturally, took offense: 

“Strongly regulate or close them down?” What even IS this noise? 

At one time, Republicans cared about the First Amendment. They noted, rightly, that corporations have free speech rights. And you know what? If social media platforms were actually silencing conservative voices, REAL conservatives would say “So what? The government’s got no role to play there.”

Instead, we’ve now got Trump doing his snowflake-swagger routine, echoing the bad faith nonsense that Republican Senators have been spewing for months:

I’ve written at length about why this argument is abject nonsense, and why threats to change CDA 230 aren’t going to achieve the ends Republicans like Rubio purport to be seeking. But here we are — it looks to be a long summer of escalating stupidity.

“Nice Little Internet You’ve Got There – Be a Shame if Anything Happened to It.”

There’s another shift that comes amid the whining about “tech censorship” and the “silencing of conservative voices online,” and it’s a purely transactional one: put up our content or lose the immunity that CDA 230 provides. Here’s a concise recent formulation of the bargain on offer:

This argument seems to at least implicitly acknowledge that the First Amendment protects the right of online forums to choose the content that appears on their sites. It just demands that sites surrender that expressive freedom as the price of continuing to enjoy CDA 230 immunity.

So: either moderate your site to allow all First Amendment-protected speech, or lose your immunity with respect to third party postings. Your choice. 

Now, putting aside some obvious questions (Which sites are “market-dominant?” Could a scheme like this actually pass First Amendment scrutiny? How do we determine whether a site has lived up to this 1A standard in a given case?), it’s worth asking: would any market-dominant site actually agree to this bargain? 

OK, that’s a rhetorical question. 

Because the answer is that there is absolutely no chance that any such site would take this deal.

The First Amendment allows for a stunningly wide range of noisy, messy, irrelevant, dishonest, and offensive expression. And even if this deal allows sites to impose reasonable content-neutral restrictions (like post length, duplication, formatting, etc.), it would unleash a torrent of vitriol, abuse, pornography, and abject nonsense. Sites would be helpless to shape a desired user experience or serve a particular audience. They’d just be bulletin boards for unstructured, unfocused expression.

And what would these successful sites get for this deal? The right to be immune from liability for third party postings and their curatorial decisions? 

Sure, that’s nice and all, but it’s not an existential risk to any site that’s reached market dominance. CDA 230 immunity is most important to sites that are just getting started, who can’t bear the cost of fending off suits over user-generated content. Big, established sites? They’ve got the resources to absorb that cost. And faced with a no-win choice like this, giving up the immunity is certainly a better alternative than turning one’s site into an unusable cesspool.

What the market-dominant firms WOULD do in response to this ultimatum is pretty much the polar opposite of what the conservatives claim to be advocating for: they’d become much, much more aggressive about policing the sort of content they allow to be posted. 

Why? Because decisions to take content down, de-emphasize posts, or suspend posting privileges are protected by the First Amendment in a way that decisions to post content are not. CDA 230 provides a procedural benefit in the former case; in the latter it offers an important substantive right. Thus, while losing CDA 230 would marginally increase the risk of taking user postings down, it would greatly increase the risk of leaving postings up. 

So, if conservatives succeed in forcing the government to offer this bargain, no eligible site is going to take it. And if the hammer then comes down, and CDA 230 immunity is taken away, look for the likes of Google, Facebook, and Twitter to take a much, much heavier hand with the “delete content” switch.

Now, maybe “conservatives” calling for this bargain just really don’t like the interactive web, and would be happy to see it stifled in this way. But if they really believe that there’s a deal to be had here that will lead to a more useful, robust, or “politically neutral” internet, they’re sorely mistaken. 

Why Content Moderation Codes Are More Guidelines Than Rules

Also, following on my last post: since the First Amendment protects site moderation & curation decisions, why all the calls to get rid of CDA 230’s content moderation immunity?

Having listened carefully and at length to the GOP Senators and law professors pitching this, the position seems to be a mix of bad faith soapboxing (“look at us take on these tech libs!”) and the idea that sites could be better held to account — contractually, via their moderation codes — if the immunity wasn’t there.

This is because the First Amendment doesn’t necessarily bar claims that various forms of “deplatforming” — like taking down a piece of content, or suspending a user account — violate a site’s Terms of Use, Acceptable Use Policy, or the like. That’s the power of CDA 230(c)(2); it lets sites be flexible, experiment, and treat their moderation policies more as guidelines than rules.

Putting aside the modesty of this argument (rallying cry: “let’s juice breach-of-contract lawsuits against tech companies”) and the irony of “conservatives” arguing for fuller employment of trial attorneys, I’ll make two observations:

First of all, giving people a slightly-easier way to sue over a given content moderation decision isn’t going to lead to sites implementing a “First Amendment standard.” Doing so — which would entail allowing posts containing all manner of lies, propaganda, hate speech, and terrorist content — would make any such site choosing this route an utter cesspool. 

Secondly, what sites WOULD do in response to losing immunity for content moderation decisions is adopt much more rigid content moderation policies. These policies would have less play in them, less room for exceptions, for change, for context. 

Don’t like our content moderation decision? Too bad; it complies with our policy. 

You want an exception? Sorry; we don’t make exceptions to the policy. 

Why not? Because some asshole will sue us for doing that, that’s why not. 

Have a nice day.

CDA 230’s content moderation immunity was intended to give online forums the freedom to curate content without worrying about this kind of claim. In this way, it operates somewhat like an anti-SLAPP law, by providing the means for quickly disposing of meritless claims.

Though unlike a strong anti-SLAPP law, CDA 230(c)(2) doesn’t require that those bringing such claims pay the defendant’s attorney fees.

Hey, now THERE’s an idea for an amendment to CDA 230 I could get behind!

Let’s Talk About “Neutrality” – and How Math Works

So if the First Amendment protects site moderation & curation decisions, why are we even talking about “neutrality?” 

It’s because some of the bigger tech companies — I’m looking at you, Google and Facebook — naively assumed good faith when asked about “neutrality” by congressional committees. They took the question as inquiring whether they apply neutral content moderation principles, rather than as Act I in a Kabuki play where bad-faith politicians and pundits would twist this as meaning that the tech companies had promised “scrupulous adherence to political neutrality” (and that Act II, as described below, would involve cherry-picking anecdotes to try to show that Google and Facebook were lying, and are actually bastions of conservative-hating liberaldom).

And here’s the thing — Google, Twitter, and Facebook probably ARE pretty damn scrupulously neutral when it comes to political content (not that it matters, because THE FIRST AMENDMENT, but bear with me for a little diversion here). These are big platforms, serving billions of people. They’ve got a vested interest in making their platforms as usable and attractive to as many people as possible. Nudging the world toward a particular political orthodoxy? Not so much. 

But that doesn’t stop Act II of the bad faith play. Let’s look at how unmoored from reality it is.

Anecdotes Aren’t Data

Anecdotes — even if they involve multiple examples — are meaningless when talking about content moderation at scale. Google processes 3.5 billion searches per day. Facebook has over 1.5 billion people looking at its newsfeed daily. Twitter suspends as many as a million accounts a day.

In the face of those numbers, the fact that one user or piece of content was banned tells us absolutely nothing about content moderation practices. Every example offered up — from Diamond & Silk to PragerU — is but one little greasy, meaningless mote in the vastness of the content moderation universe. 
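The arithmetic here is easy to sketch. Here's a minimal back-of-the-envelope calculation in Python, using the rough daily volumes cited above (approximations, not official platform figures) and a hypothetical pile of 500 cherry-picked anecdotes:

```python
# Back-of-the-envelope: what share of the content moderation universe
# does a compilation of anecdotes actually represent? Daily volumes are
# the rough public figures cited above, not precise platform data.
DAILY_DECISIONS = {
    "Google searches": 3_500_000_000,
    "Facebook news feed views": 1_500_000_000,
    "Twitter account suspensions": 1_000_000,
}

ANECDOTES = 500  # a generously large pile of cherry-picked examples

for platform, daily in DAILY_DECISIONS.items():
    share = ANECDOTES / daily
    print(f"{platform}: {ANECDOTES} anecdotes = {share:.10%} of ONE day")
```

Even spotting the critics hundreds of examples, the share of a single day's decisions rounds to a rounding error — and that's before multiplying the denominator by the months or years over which those anecdotes were collected.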

“‘Neutrality?’ You keep using that word . . .”

One obvious reason that any individual content moderation decision is irrelevant is simple numbers: a decision representing 0.00000001 of all decisions made is of absolutely no statistical significance. Random noise — plain content moderation mistakes — will account for exponentially more takedowns and suspensions than even a compilation of hundreds of anecdotes could capture. And mistakes and edge cases are inevitable when dealing with decision-making at scale.

But there’s more. Cases of so-called “political bias” are, if that’s even possible, even less probative, given the amount of subjectivity involved. If you look at the right-wing whining and whinging about their “voices being censored” by the socialist techlords, don’t expect to see any numeracy or application of basic logic.

Is there any examination of whether those on “the other side” of the political divide are being treated similarly? That perhaps some sites know their audiences don’t want a bunch of over-the-top political content, and thus take it down with abandon, regardless of which political perspective it’s coming from? 

Or how about acknowledging the possibility that sites might actually be applying their content moderation rules neutrally — but that nutbaggery and offensive content isn’t evenly distributed across the political spectrum? And that there just might be, on balance, more of it coming from “the right?” 

But of course there’s not going to be any such acknowledgement. It’s just one-way bitching and moaning all the way down, accompanied by mewling about “other side” content that remains posted.

Which is, of course, also merely anecdotal.

No, CDA 230(c)(2) Isn’t The Only Thing Keeping Conservatives Off YouTube

Over the last year or so, there’s been a surge of claims that Google, Twitter, YouTube, etc. are “biased against conservatives.” 

The starting point of this bad faith argument is a presumption that sites should be “neutral” about their content moderation decisions — decisions like which accounts Twitter suspends, how Google or Facebook rank content in search results or news feeds, or how YouTube promotes or obfuscates videos.

More about this “neutrality” nonsense in a later post, but let’s move on with how this performative mewling works. 

So after setting up the strawman standard of “neutrality,” these self-styled “conservatives” turn to anecdotes showing that their online postings were unpublished, de-monetized, shadow-banned, or otherwise not made available to the widest audience possible. 

These anecdotes are, of course, offered as evidence that sites haven’t been “neutral.”

And it’s not just some unfocused wingnut whining. This attitude is also driving a number of legislative proposals to amend and scale back CDA 230 — the law that makes the internet go.

Conservative Senators like Josh Hawley, Ted Cruz, and Lindsey Graham — lawyers all, who surely know better — bitch and moan about CDA 230’s content moderation immunity. If only sites didn’t have this freebie, they say — well, then, we’d see some neutrality and fair treatment, yessiree.  

This is total bullshit. 

Sure, CDA 230(c)(2) makes sites immune from being sued for their content moderation decisions. But that’s only important to the extent it keeps people from treating “community guidelines” and “acceptable use policies” as matters of contract that can be sued over. 

Moderation? Curation? Promotion? All of that stuff is fully protected by the First Amendment. 

Really, I can’t stress this enough: 

CONTENT MODERATION DECISIONS ARE PROTECTED BY THE FIRST AMENDMENT. 

Eliminating content moderation protections from CDA 230 doesn’t change this fact. 

It can’t change this fact. Because CDA 230 is a statute and not the FIRST AMENDMENT.

So why all the arguing for CDA 230 to be carved back? Some of it is surely just bad-faith angst about “big tech,” misplaced in a way that would unduly harm small, innovative sites. But a lot of it is just knee-jerk reaction from those who actually think that removing the immunity-for-moderation found in CDA 230(c)(2) will usher in a glorious new world where sites will have to publish everything.

Which, by the way, would be awful. Any site that just published virtually everything users posted (that’s the true “First Amendment standard”) would be an unusable hellhole. No site is going to do that — and, again . . .

They don’t have to BECAUSE THE FIRST AMENDMENT PROTECTS CONTENT MODERATION DECISIONS.

Further Thoughts on Professional Speech Regulation

OK, so in my post on the Vizaline case, I explored the bizarre idea that regulators can keep people from speaking on certain topics, just by requiring a license to talk about those things — and that the decision to require such a license-to-speak can be supported by little more than caprice.

Hopefully other courts will follow the Fifth Circuit and swiftly eliminate this glitch. For Vizaline — with its blunt invocation of the Supreme Court’s 2018 decision in NIFLA v. Becerra — reinforces the notion that real limits exist on the ability of the state to regulate the speech of licensed professionals. And this is so important because a distressingly large number of lawyers and judges — who really should know better — seem to get First Amendment amnesia when it comes to this area.

NIFLA v. Becerra was the Supreme Court’s first-ever decision directly addressing the concept of professional speech regulation writ large. And in that decision, the Court summarized the state of play:

The Court has afforded less protection for professional speech in two circumstances— where a law requires professionals to disclose factual, noncontroversial information in their “commercial speech” . . . and where States regulate professional conduct that incidentally involves speech. 1

The NIFLA opinion goes on to note that the Supreme Court has had several occasions to reinforce that the full protection of the First Amendment applies to most professional speech:

“The Court has applied strict scrutiny to content-based laws regulating the noncommercial speech of lawyers, professional fundraisers, and organizations providing specialized advice on international law. And it has stressed the danger of content-based regulations “in the fields of medicine and public health, where information can save lives.” Sorrell v. IMS Health Inc., 564 U.S. 552, 566.” (internal cites omitted)

And lest we forget, when professionals speak in a non-professional context — even when talking about their licensed profession — THAT speech is as fully First Amendment-protected as it would be if uttered by a non-licensed citizen.

There likely ARE other areas of professional speech where regulation can meet a lesser standard — or can simply clear the bar of strict scrutiny. There’s a lot more work for the courts to do before we arrive at an appropriately narrowed professional speech doctrine.

Notes:

  1. In fairness, I think what the court is saying here is that these are the two situations where it has applied the lowest protection for professional speech; regulation in these areas need only meet the rational basis test. It has also applied lower protection — the intermediate scrutiny test — to regulation of all commercial speech other than basic disclosures.

Yes, I’d Like to Think Utah is Taking My Advice

Back in January 2018, I wrote a post titled “What SHOULD Attorney Advertising Regulation Look Like?”

In that post — one of many in my long string of railings against the inanity of the attorney ad rules — I made my pitch plain:

So let’s gut the Rules. We can start by just flat-out eliminating – entirely – Rules 7.2, 7.4, & 7.5. I’ve never heard a remotely compelling argument for the continued existence of these Rules; they are all just sub-variations on the theme of Rule 7.1.

I would be lying if I said that at the time I wrote those words I had any optimism that a state would take this suggestion seriously — at least within my lifetime. But shockingly, this is almost exactly what the Utah Supreme Court has proposed doing: the complete elimination of Rules 7.2 – 7.5. 1

In that post I also proposed that Bars adopt controlled regulatory tests:

I know that we as lawyers are trained to “spot issues,” but this training drives way too much tentativeness. Instead of applying the precautionary principle – REGULATE NOW, IN CASE THE BAD THINGS HAPPEN – Bars could try controlled tests.

Say a Bar has gotten a question about an innovative product like Avvo Legal Services. Instead of agonizing for 6-12 months over the potential RPC implications, the Bar could – gasp – have a quick talk with the provider and make a deal: the Bar would explicitly let attorneys know it’s OK to participate, if the provider agrees to feed the Bar data on engagement, complaints, etc.

There would also be the understanding that it would be a time-limited test (enough time to get data sufficient to understand consumer impact) and that the Bar could pull the plug early if results looked super-ugly. A process like this would actually IMPROVE the Bar’s ability to address real consumer harm, while smoothing the road to innovation.

Utah? They’ve created a “regulatory sandbox” — where innovations in legal services delivery, including partnerships with non-lawyers, can be tested empirically rather than being squelched out of the gate. That’s exactly what I had in mind.

Just three years ago, Utah was part of the chorus of head-in-the-sand Bars reflexively telling their members that the modest, consumer-friendly innovation that was Avvo Legal Services couldn’t possibly comply with their Rules. 2

Now they’re leading the charge on making real change to the regulations that are holding back consumer access to justice.

While I’d like to think that Utah was persuaded by my advocacy in this area, what’s really important is that a state is actually doing something about the very real problem of regulatory cruft. 

Comments on these proposed changes are open until July 23, 2020. The usual reactionary voices will surely weigh in with their “sky is falling” rhetoric — it would be great if those who support this kind of meaningful regulatory change let the Utah Supreme Court know they are absolutely on the right track.

Notes:

  1. In fact, Utah is going even further than what I had proposed, eliminating Rule 7.3 (attorney solicitation) and incorporating its central tenets into a new section of Rule 7.1 prohibiting coercion, duress, or harassment when interacting with prospective clients.
  2. Utah State Bar Ethics Advisory Opinion No. 17-05.

Is the Ice Breaking for Professional Speech?

It’s been all COVID-19 for the last couple of months, so I’m taking a break to take a look at a new professional speech case that I missed when it dropped in late February.

The case is Vizaline v. Tracy, out of the Fifth Circuit. And the thing I love about this case is that it takes on, directly, the fundamental issue I have with so many of the earlier professional speech cases: the idea that the gating function of professional licensing itself is somehow magically immune from First Amendment issues.  

Here’s what I mean. There’s little question that when it comes to the speech of professionals, the First Amendment applies. For example, there’s a well-established body of law relating to professional marketing speech, and an (admittedly underdeveloped) body of law when it comes to the speech professionals engage in with their clients. But at least the parameters are understood — the First Amendment applies, and we’re just negotiating about which standard of review the state has to live up to.

But something quirky happens when it comes to entry to the professions. In these cases, courts routinely handwave away the First Amendment issue, despite the fact that entry restrictions are sweeping: they keep the vast majority of the public from engaging in certain types of speech.

Weird, right?

Background

So, Vizaline. This company converts existing metes-and-bounds descriptions of real property — the raw data you’d find if you looked up property records at the county Recorder’s office — into simple maps. It sells these maps to community banks who would otherwise have to obtain surveys (from licensed surveyors, of course) on less-expensive properties used as collateral for mortgages.

It isn’t like Vizaline is passing itself off as something it isn’t. Vizaline does simple maps, and discloses that what they offer is “not a Legal Survey or intended to replace a Legal Survey.”

So naturally the Mississippi Board of Licensure for Professional Engineers and Surveyors got the state attorney general to sue Vizaline on its behalf. 

What for? “Surveying without a license,” that’s what for. Which honestly . . . doesn’t sound that awful, but which turns out to be both a civil and criminal offense in the Magnolia State. 

The Case

Represented by the Institute for Justice (which, along with the R Street Institute, is one of the few groups focused on the excesses of professional licensure), Vizaline contended that its maps are speech, and as such are entitled to First Amendment protection.

The District Court wasn’t having it. That court found no First Amendment issue, on the remarkable theory that the requirement of a license only “incidentally infringes” on Vizaline’s speech because the licensing requirement merely determines who can speak. [ed. note: LOL]

On appeal, the Fifth Circuit went straight to the Supreme Court’s 2018 NIFLA v. Becerra decision, noting that case had eviscerated the concept that the gatekeeping function of licensing acts like some sort of First Amendment get-out-of-jail-free card:

The district court’s holding that occupational-licensing provisions “do not trigger First Amendment scrutiny” is contrary to the Supreme Court’s decision in NIFLA. NIFLA makes clear that occupational-licensing provisions are entitled to no special exception from otherwise-applicable First Amendment protections.

Bam. For as the Supreme Court had noted in NIFLA: 

“All that is required to make something a “profession,” according to these courts, is that it involves personalized services and requires a professional license from the State. But that gives the States unfettered power to reduce a group’s First Amendment rights by simply imposing a licensing requirement. States cannot choose the protection that speech receives under the First Amendment, as that would give them a powerful tool to impose “invidious discrimination of disfavored subjects.”

The Fifth Circuit panel described NIFLA as essentially eliminating the “professional speech doctrine” — described in this case as a doctrine excepting professional speech from ANY First Amendment scrutiny — and remanded to the District Court to determine whether Mississippi’s licensing requirements implicate speech or non-expressive professional conduct.

While the remand is understandable, it leaves one a little wanting, as the licensing requirements at issue seem to plainly implicate expressive conduct. But overall, Vizaline points in a hopeful direction, one where the “professional speech doctrine” takes on a new understanding as protecting both the First Amendment rights of professionals and those who approach the murky bounds of licensed professional activity. 

Lawyer Ethics & CDA 230

I’ve been talking about CDA 230, so let’s explore a case of how “the law that makes the internet go” interfaces with the Rules of Professional Conduct governing the practice of law. 

Scintillating, right? Stick with me here . . .

We’re talking online client reviews. As we all surely know, EVERYTHING is reviewed online – even lawyers.

It’s strange that the Texas Bar is wading into this one in 2020, given that client reviews have been around for a couple of decades now. But hey – the law moves slowly and deliberately, right?

In Opinion 685, issued crisply in January 2020, the Texas Bar determined that Texas lawyers CAN ask their clients to leave online reviews. Hell, they can even ask their clients to leave positive reviews!

Thanks, guys!

What interests me about the conclusion in this otherwise-blindingly-obvious opinion is this little tidbit, tacked on near the end:

But, if a lawyer becomes aware that a client made a favorable false or misleading statement or a statement that the client has no factual basis for making, the lawyer should take reasonable steps to see that such statements are corrected or removed in order to avoid violating Rules 7.02(a) and 8.04(a)(3).

Huh.

Would an attorney on the receiving end of such a review be doing the right thing to ask the client to take the review down or pull back on the praise? Of course. 

But is doing so a requirement? Like, a must-do-on-pain-of-being-disciplined requirement? 

Hell no. 

And not just because of the uncertainty and vagueness involved in making this a hard requirement. No, rather because 47 USC 230(c)(1) dictates: 

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

While typically thought of in the defamation context — a forum site is not responsible for defamatory content posted by its users — it would apply equally here: CDA 230 prevents the application of state licensing rules to hold attorneys responsible for reviews posted by their clients. 1

Despite the heavily-litigated history of CDA 230, it’s unlikely this particular issue will ever see the courts. Attorneys are too cautious about threats to their licenses, the steps required to comply are too minimal to be worth fighting over, and bar counsel usually have bigger fish to fry anyway. Still, it’s illustrative of the genius of this little law, and how far it reaches to remove friction from the free flow of information online.

Notes:

  1. That is, provided the attorney didn’t write the review for the client, or create the client and review from whole cloth. CDA 230 doesn’t give businesses a pass for astroturfing.