Let’s Talk About “Neutrality” – and How Math Works

So if the First Amendment protects site moderation & curation decisions, why are we even talking about “neutrality?” 

It’s because some of the bigger tech companies — I’m looking at you, Google and Facebook — naively assumed good faith when asked about “neutrality” by congressional committees. They took the question as inquiring whether they apply neutral content moderation principles, rather than as Act I in a Kabuki play where bad-faith politicians and pundits would twist this as meaning that the tech companies promised “scrupulous adherence to political neutrality” (and that Act II, as described below, would involve cherry-picking anecdotes to try to show that Google and Facebook were lying, and are actually bastions of conservative-hating liberaldom).

And here’s the thing — Google, Twitter, and Facebook probably ARE pretty damn scrupulously neutral when it comes to political content (not that it matters, because THE FIRST AMENDMENT, but bear with me for a little diversion here). These are big platforms, serving billions of people. They’ve got a vested interest in making their platforms as usable and attractive to as many people as possible. Nudging the world toward a particular political orthodoxy? Not so much. 

But that doesn’t stop Act II of the bad faith play. Let’s look at how unmoored from reality it is.

Anecdotes Aren’t Data

Anecdotes — even if they involve multiple examples — are meaningless when talking about content moderation at scale. Google processes 3.5 billion searches per day. Facebook has over 1.5 billion people looking at its newsfeed daily. Twitter suspends as many as a million accounts a day.

In the face of those numbers, the fact that one user or piece of content was banned tells us absolutely nothing about content moderation practices. Every example offered up — from Diamond & Silk to PragerU — is but one little greasy, meaningless mote in the vastness of the content moderation universe. 

“‘Neutrality?’ You keep using that word . . .”

One obvious reason that any individual content moderation decision is irrelevant is simple numbers: a decision representing 0.00000001 of all decisions made is of absolutely no statistical significance. Random mutations — content moderation mistakes — are going to produce exponentially more erroneous postings or deletions than even a compilation of hundreds of anecdotes can capture. And mistakes and edge cases are inevitable when dealing with decision-making at scale.
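A quick back-of-the-envelope calculation makes the point. The daily volume is the one cited above for Google; the 99.9% accuracy rate is a purely hypothetical assumption for illustration, not platform data:

```python
# Why anecdotes can't reveal bias at content-moderation scale.
# The error rate here is an illustrative assumption, not a real figure.

daily_decisions = 3_500_000_000   # ~3.5 billion searches/day (Google, per above)
error_rate = 0.001                # assume moderation is 99.9% accurate

# Even near-perfect accuracy yields millions of mistakes every day:
expected_mistakes = daily_decisions * error_rate
print(f"Expected mistakes/day: {expected_mistakes:,.0f}")  # 3,500,000

# Hundreds of cherry-picked anecdotes are a vanishing fraction of one day:
anecdotes = 300
share = anecdotes / daily_decisions
print(f"Share of one day's decisions: {share:.10f}")  # 0.0000000857
```

In other words, routine error alone would swamp any pile of anecdotes by four orders of magnitude — you simply cannot infer a policy of bias from a handful of examples.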

But there’s more. Cases of so-called “political bias” are, if it is even possible, even less determinative, given the amount of subjectivity involved. If you look at the right-wing whining and whinging about their “voices being censored” by the socialist techlords, don’t expect to see any numerosity or application of basic logic. 

Is there any examination of whether those on “the other side” of the political divide are being treated similarly? That perhaps some sites know their audiences don’t want a bunch of over-the-top political content, and thus take it down with abandon, regardless of which political perspective it’s coming from? 

Or how about acknowledging the possibility that sites might actually be applying their content moderation rules neutrally — but that nutbaggery and offensive content isn’t evenly distributed across the political spectrum? And that there just might be, on balance, more of it coming from “the right?” 

But of course there’s not going to be any such acknowledgement. It’s just one-way bitching and moaning all the way down, accompanied with mewling about “other side” content that remains posted.

Which is, of course, also merely anecdotal.

Yes, I’d Like to Think Utah is Taking My Advice

Back in January 2018, I wrote a post titled “What SHOULD Attorney Advertising Regulation Look Like?”

In that post — one of many in my long string of railings against the inanity of the attorney ad rules — I made my pitch plain:

So let’s gut the Rules. We can start by just flat-out eliminating – entirely – Rules 7.2, 7.4, & 7.5. I’ve never heard a remotely compelling argument for the continued existence of these Rules; they are all just sub-variations on the theme of Rule 7.1.

I would be lying if I said that at the time I wrote those words I had any optimism that a state would take this suggestion seriously — at least within my lifetime. But shockingly, this is almost exactly what the Utah Supreme Court has proposed doing: the complete elimination of Rules 7.2 – 7.5. 1

In that post I also proposed that Bars adopt controlled regulatory tests:

I know that we as lawyers are trained to “spot issues,” but this training drives way too much tentativeness. Instead of applying the precautionary principle – REGULATE NOW, IN CASE THE BAD THINGS HAPPEN – Bars could try controlled tests.

Say a Bar has gotten a question about an innovative product like Avvo Legal Services. Instead of agonizing for 6-12 months over the potential RPC implications, the Bar could – gasp – have a quick talk with the provider and make a deal: the Bar would explicitly let attorneys know it’s OK to participate, if the provider agrees to feed the Bar data on engagement, complaints, etc.

There would also be the understanding that it would be a time-limited test (enough time to get data sufficient to understand consumer impact) and that the Bar could pull the plug early if results looked super-ugly. A process like this would actually IMPROVE the Bar’s ability to address real consumer harm, while smoothing the road to innovation.

Utah? They’ve created a “regulatory sandbox,” where innovations in legal services delivery (including partnerships with non-lawyers) can be tested empirically rather than being squelched out of the gate. That’s exactly what I had in mind.

Just three years ago, Utah was part of the chorus of head-in-the-sand Bars reflexively telling their members that the modest, consumer-friendly innovation that was Avvo Legal Services couldn’t possibly comply with their Rules. 2

Now they’re leading the charge on making real change to the regulations that are holding back consumer access to justice.

While I’d like to think that Utah was persuaded by my advocacy in this area, what’s really important is that a state is actually doing something about the very real problem of regulatory cruft. 

Comments on these proposed changes are open until July 23, 2020. The usual reactionary voices will surely weigh in with their “sky is falling” rhetoric — it would be great if those who support this kind of meaningful regulatory change let the Utah Supreme Court know they are absolutely on the right track.

Notes:

  1. In fact, Utah is going even further than what I had proposed, eliminating Rule 7.3 (attorney solicitation) and incorporating its central tenets into a new section of Rule 7.1 prohibiting coercion, duress, or harassment when interacting with prospective clients.
  2. Utah State Bar Ethics Advisory Opinion No. 17-05.

Lawyer Ethics & CDA 230

I’ve been talking about CDA 230, so let’s explore a case of how “the law that makes the internet go” interfaces with the Rules of Professional Conduct governing the practice of law. 

Scintillating, right? Stick with me here . . .

We’re talking online client reviews. As we all surely know, EVERYTHING is reviewed online – even lawyers.

It’s strange that the Texas Bar is wading into this one in 2020, given that client reviews have been around for a couple of decades now. But hey – the law moves slowly and deliberately, right?

In Opinion 685, issued crisply in January 2020, the Texas Bar determined that Texas lawyers CAN ask their clients to leave online reviews. Hell, they can even ask their clients to leave positive reviews!

Thanks, guys!

What interests me about the conclusion in this otherwise-blindingly-obvious opinion is this little tidbit, tacked on near the end:

But, if a lawyer becomes aware that a client made a favorable false or misleading statement or a statement that the client has no factual basis for making, the lawyer should take reasonable steps to see that such statements are corrected or removed in order to avoid violating Rules 7.02(a) and 8.04(a)(3).

Huh.

Would an attorney on the receiving end of such a review be doing the right thing to ask the client to take the review down or pull back on the praise? Of course. 

But is doing so a requirement? Like, a must-do-on-pain-of-being-disciplined requirement? 

Hell no. 

And not just because of the uncertainty and vagueness involved in making this a hard requirement. No, rather because 47 USC 230(c)(1) dictates: 

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

While typically thought of in the defamation context — a forum site is not responsible for defamatory content posted by its users — it would apply equally here: CDA 230 prevents the application of state licensing rules to hold attorneys responsible for reviews posted by their clients. 1

Despite the heavily-litigated history of CDA 230, it’s unlikely this particular issue will ever see the courts. Attorneys are too cautious about threats to their licenses, the steps required to comply are so minimal, and bar counsel usually have bigger fish to fry anyway. Still, it’s instructive of the genius of this little law, and how far it reaches to remove friction from the free flow of information online.

Notes:

  1. That is, provided the attorney didn’t write the review for the client, or create the client and review from whole cloth. CDA 230 doesn’t give businesses a pass for astroturfing.

On the Perils of Content Moderation (Part 3 of 3)

Continuing my content moderation story from Part 2:

Despite the blustering and threats, most lawyers understood that the First Amendment protected Avvo’s right to publish its lawyer profiles and ratings, and that it would be pointless to go through with a lawsuit. The published decision in Browne v. Avvo helped as well. So they’d eventually go away. The guardrails of the First Amendment kept them from filing.

And at this point, after 20+ years of extensive litigation, CDA 230 operates in a similar fashion. Sure, there continue to be disputes along the frontier, and there’s always going to be a certain background level of utterly frivolous and SLAPP actions. But for the most part, things operate in a fashion where the rules are settled and suing is pointless. 

Which is why Avvo didn’t get sued for posting third-party content — despite how exercised some lawyers would get over negative reviews.

But what if these lawyers had an argument they could leverage? Like, that Avvo’s content moderation had to be “reasonable,” or “neutral?” Or that Avvo could be liable for not adhering to its published content moderation standards?

Were there ANYTHING that would make them think, “well, I’ve got a chance at this thing,” Avvo would have been buried in lawsuits. And even if we’d been able to turn these suits away on the pleadings, doing so would have been super-expensive.

How expensive? Tech policy shop Engine, in an indispensable primer on the value of CDA 230, estimates that disposing of a frivolous lawsuit on a preliminary motion to dismiss can cost $80,000. And in my experience, it can cost LOTS more than that if the issues are complicated, the plaintiff is proceeding in bad faith, you draw a bad judge, etc, etc.

Now, some internet commenters would say that the way to avoid this risk is to just not do the bad things. But here in the real world, the way companies will avoid this risk (at least until they get big enough to take the costs) will be to either not moderate content (thus destroying the user experience) or simply not post third party content at all.

So, a cesspool of a user experience on the one hand; a much-lessened interactive internet on the other. Take your pick. 

Bottom line — the clarity that CDA 230 provides is super-valuable in shutting down, at the get-go, anyone who wants to roll the dice on taking your startup out with a little lawfare. And the genius of CDA 230 is that it provides the breathing room for sites to set their own rules, and moderate content using their own discretion, without fear of being punished by the government or subjected to ruinous litigation for so doing.

Perversely, while all of the noise about limiting/eliminating CDA 230 is driven by frustration at Facebook, Google, and other giant platforms, it’s not like neutering the law would even really impact those guys. They’ve got the scale to take the cost of regulation. 

But smaller, newer services? No way. They’d be forced into the loser of a choice I’ve described above: cesspool or wasteland.

Policymakers should think long and hard about the implications for the wider world of innovative online services before even thinking about “tweaks” to CDA 230.

(Part 1, Part 2)

On the Perils of Regulating Content Moderation (Part 2 of 3)

In the first post in this series, I went through the background on CDA 230’s protection for the content moderation decisions of site operators. Today, a story about the implications of adding greater liability in this area — and why exposing sites to liability for their moderation decisions would render unviable most online services that rely on third party content.

From 2007 to 2018 I was general counsel for Avvo, an online resource for people to research legal issues and find lawyers. One of Avvo’s innovations (and the reason it needed a GC from its earliest stages) is that it published a profile of every lawyer in the country — whether lawyers liked it or not. As those profiles included disciplinary history, Avvo’s rating of the lawyer’s background, and client reviews . . . well, some lawyers didn’t like it.

The week it launched, Avvo was sued in a nationwide class action alleging that Avvo’s editorial rating of attorneys was defamatory. And although that case was thrown out on the pleadings, getting such a result was expensive. In the years that followed, Avvo grew and became more important to consumers and lawyers alike. Despite this, lawyers – often sanctioned lawyers, who disliked the fact that Avvo exposed disciplinary history far more effectively than the websites of the state Bars – tried other vectors of attack. These included consumer fraud, publicity rights, etc. None of these cases survived the pleadings. But pushing back on them wasn’t without cost. These were largely unexplored areas at the intersection of publishing, public records, and commercial speech. Fortunately, Avvo had the resources and was able to aggressively fight back.

But client reviews? For the most part, nobody sued over those. 1

Oh, it wasn’t that every attorney loved their client reviews, or believed that they accurately reflected the services provided. Far from it. For while most reviews ran positive – it turns out people appreciate being gotten out of a jam – some, inevitably, did not. It’s a result dictated by the law of large numbers; clients were posting thousands of reviews on Avvo every week. 2

And lawyers certainly threatened to sue Avvo over those reviews. Hundreds and hundreds of times. But CDA 230’s broad and straightforward language — and its hard-fought litigation history — ensured that the threateners scuttled away, often not even bothering to leave a “SEE YOU IN COURT!” in their wake. 

I sometimes felt like a part-time CDA 230 instructor, educating my fellow members of the bar, one at a time, on the simple brilliance of the 26 words that created the internet.

But what if CDA 230’s protections were hedged? What if Avvo had some obligation to moderate content “reasonably,” or take content down on affidavit or “notice of falsity,” or any of the many other suggested tweaks to the statute?

It would have been game over. 

More on that in the final post in this series.

Notes:

  1. Or, at least they didn’t until very late in Avvo’s run as an independent company, when an attorney tried the angle that California’s unfair trade practices law required some sort of judicially-imposed review moderation regime quite at odds with CDA 230. We got the complaint stricken under California’s stellar anti-SLAPP law.
  2. I do recall a single occasion when an attorney – the attorney behind this video, in fact – readily conceded that he’d earned a poor review. Much respect.

On the Perils of Regulating Content Moderation (Part 1 of 3)

You’ll have to forgive social media companies for feeling whiplashed on the policy front.

Should they be forced to determine the truth or falsity of all political ads?

Or should they be forced to carry ALL political ads, regardless of truthfulness?

Should they have to post everything their users post, in the quest for “balance?”

Or should they be forced to eliminate hate speech from their platforms – on pain of jail time?

These proposals are all, to varying degrees, misguided, incoherent, impossible, and just plain bad policy. 

Yet they’re being promoted by lots of influential people. There are bad-faithers in Congress — like Senators Josh Hawley and Ted Cruz — calling for online companies to be neutral platforms, to stop “determining falsity” and “censoring conservatives” and, I guess, just mindlessly publish whatever their users decide to post or include in advertisements.

And on the other hand, you’ve got people like Sacha Baron Cohen — who famously abused the trust and good nature of lots of ordinary people in service of his comedy — calling for social media companies to do ever-more, to moderate more fastidiously, within a set of rules and guidelines the deviation from which may lead to criminal sanctions. 

Obviously, you can’t have it both ways, people.

But here’s the thing: “you must publish it all” and “you must moderate better” are both horrible attempts to constrain online platforms. Besides the glaring First Amendment problems, these types of suggestions carry the not-insignificant consequences of either shutting these platforms down or turning them into absolute sewers.

The beauty of existing law, in the US at least, is that sites are free to make their own determinations about what gets published on their platforms. This is thanks to the “good faith content moderation” section of 47 USC 230(c), otherwise known as “CDA 230.” It’s the companion to CDA 230’s more widely-known feature: immunity from liability for hosting third-party content. 1

CDA 230 provides the breathing room for sites to host third party content to serve their audiences. And while “you’ve got to be a neutral platform” is not a serious objection (if sites were required to publish everything that users throw at them, they would be utterly useless), the idea that sites should be held to government-imposed content moderation rules is more pernicious by virtue of its surface appeal. Why not require that sites moderate away objectionable content?

The rejoinder should be obvious: who gets to determine what’s “objectionable?” 

Unfortunately, there’s never a shortage of people stepping up to take on the censor’s role. 

Or people who blindly react to the fresh outrage of today, forgetting that today’s “cut off the hurtful speech” is tomorrow’s censorship of the powerless.

Or people who pooh-pooh the problem, confidently stating that surely — SURELY — guardrails can be built to require just the right amount of content moderation.

Calling for greater regulation of online content moderation is a recipe for First Amendment violations and unintended consequences. The specter of liability for “getting content moderation wrong” is spectacularly under-appreciated. In Part 2, I’ll get into the detail about how this plays out in practice.

Notes:

  1. Not without limitation; CDA 230 immunity doesn’t apply to federal crimes or intellectual property claims.

Will SCOTUS Address Professional Speech?

Watching with interest: whether the Supreme Court grants cert in Capital Associated Industries v. Stein, a 4th Circuit decision out of North Carolina addressing the interplay between legal licensing and the First Amendment.

While the Stein decision ultimately decides that regulation of the unlicensed practice of law is subject to intermediate scrutiny – and that the North Carolina UPL regulation meets that standard – the opinion suggests that restricting the provision of legal advice is merely conduct regulation, not speech regulation.

That doesn’t seem remotely right. Legal advice . . . is conduct?

Yes, yes – speech can be conduct under certain limited circumstances. But normally when we’re talking about speech-as-conduct we’re talking about consumer disclosure requirements or the things physicians have to say in order to obtain informed consent from their patients. The speech in such cases is treated as conduct because it’s incidental to the good or service at issue.

But legal advice? That’s speech, and, really, nothing BUT speech.

This is not to say that providing legal advice can’t be regulated — or even that intermediate scrutiny isn’t the right standard by which to judge such regulation (though Widener Law Dean Rodney Smolla makes a compelling case for strict scrutiny).

But it’s sloppy and unhelpful for courts to futz around and conflate concepts like “incidental effects on speech,” “speech-as-conduct,” and “bona fide licensing requirements” when talking about government restrictions on the content of speech. That’s going to continue to happen without a coherent approach to professional speech regulation. It would be great if the Supreme Court took this opportunity to finally sort things out on this, one of the least-explored frontiers of First Amendment law.

Updated: Nope; cert denied. A shame.

Spineless FTC Goes Weak on Astroturfing

Writing astroturf reviews is WRONG, people. Like, OBVIOUSLY wrong.

So you’d think that if you got caught instructing your employees, in sunny yet oh-so-detailed ways, on how to leave fake positive reviews for your products, you would get more than just a slap on the wrist.

Right?

I mean, that’s what happened to the sorry bastards running Lifestyle Lift, who got smashed by the NY AG’s office to the tune of $300K after creating an elaborate scheme of fake microsites and reviews. Dozens of other companies have also paid 5- and 6-figure sums to settle astroturfing complaints brought by regulators.

But for shlepper of high-end beauty products Sunday Riley? Who gave employees a nine-step guide to writing fake reviews on Sephora’s website (you can read it in all of its hyper-specific, fraud-tastic glory here)? They’ve earned merely a stern talking-to and a “please don’t be naughty again” from the FTC.

Timothy Geigner at Techdirt put it best, describing Sunday Riley’s practices as:

really blatant, really fake, and really shady. This was a coordinated attempt to falsely manipulate the review system of Sephora for the purposes of fooling the public into buying more product.

This wasn’t a foot-fault, a naive error, or a single instance of wrongdoing. It was a calculated effort to fool a public that already has a super-hard time staying informed about the rapidly-evolving skincare industry.

But instead of stomping on this, the FTC basically greenlit further wrongdoing. Its settlement with Sunday Riley doesn’t require payment of any money, agreement to any kind of oversight, or even an admission of wrongdoing. While I’m no fan of agency overreach, this is the kind of factual record that screams out for significant punishment – not this kind of “tsk-tsk” nonsense.

Another “Abortion Counseling” Law Knocked Back

It seems to be an equal opportunity area, the fight to control speech around abortion. Blue states want to force churchy “crisis pregnancy centers” to inform people about its availability, while red states want to force doctors to scare patients away from it.

Thankfully, at least the courts are still thinking about the First Amendment.

Last year, we saw the beatdown of California’s mandatory pregnancy center notification requirement in NIFLA v. Becerra (a case that noted the First Amendment right of the centers to not have to carry the state’s message, but which is also notable for FINALLY opening the door for SCOTUS to flesh out a “professional speech” doctrine).

And today, we’ve got a federal district court in North Dakota blocking a law that would have forced doctors to advise patients about, well, all sorts of nonsense in a transparent attempt to make them fear ending their pregnancies.

Other states have similar laws; expect them to see similar fates.

It’s ironic that the strongest precedent for striking these laws is a Supreme Court case nixing a law where the shoe was solidly on the other foot. But far from surprising — too many policymakers are only opposed to speech restrictions when they’re imposed on the other team.

Are ALL Licensing Restrictions OK Now?

I missed this when it was issued last month, but I was struck by the result in the del Castillo v. Philip case, challenging the application of Florida’s licensing law for dietitians to prevent the sale of diet coaching services by a non-licensee.

While the court is foreclosed from asking the obvious question (“do we really need so many god damn occupational licensing laws?”), it could have, you know, paid a little deference to the First Amendment on its way to depriving Heather Kokesch del Castillo of her right to earn an honest living.

Because maybe I’m reading this wrong, but it seems like the court is saying that ANY entry-to-the-profession licensing requirement inherently does not raise First Amendment issues — even if the profession is fundamentally centered on speech.

And even if the licensing requirement involves having a college degree and at least 6 months of relevant experience.

Look, I understand if the state wants to require a business license and the payment of a nominal fee before someone starts selling services to clients. That seems generally applicable, not speech-impacting, and relevant to prosaic matters like being able to hold businesses accountable for fraud and crappy service.

But it’s another thing entirely when those licensing requirements are extensive – and instead of merely giving the licensees the right to advertise their services as having met a state-sanctioned level of putative quality, prohibit non-licensees from providing any sort of advice and counsel in an incredibly broad area like “diet and nutrition.”

Shouldn’t the court have run this through something like intermediate scrutiny analysis – which likely would have found that the state could have achieved its desired objective through a less-speech-impacting means, such as certification?

I mean, there’s nothing keeping Florida from setting up a fancy “certified dietitian” program with these educational and experience requirements. Ms. del Castillo couldn’t call herself one of those, but she would still be free to sell her services. And consumers could choose for themselves. Is there some consumer protection need here that is SO pressing we need to keep diet-interested bloggers from sharing their thoughts on a paid basis?

Here’s hoping the Supreme Court takes this case, and provides some much-needed clarity to the nascent professional speech doctrine.

[and yes, the implications for legal licensing should be obvious]