Category Archives: Critical Thinking

Intermediate Scrutiny as a Policymaker’s Touchstone

As I go on (and on, and on) about, speech regulation must meet a higher bar than ordinary regulation. Outside of a handful of relatively narrow categories, content-based speech regulation must survive “strict scrutiny.” That’s the most exacting standard; most such regulation can’t get there, and is thus struck down when challenged in court.

Some content-based speech regulation – like commercial speech, and maybe professional speech – is subject to the lesser “intermediate scrutiny” standard. Content-neutral regulation of speech (the familiar “time, place, and manner” restrictions) is also subject to intermediate scrutiny. Regulation that does not impact speech? Unless another limiting principle applies (e.g., other constitutional rights; antitrust law), such regulation is subject to “rational basis” review – which means it’s going to be upheld by a court so long as the regulation passes the laugh test.

While it’s understandable that courts would show deference to legislative and executive bodies in this way, policymakers shouldn’t hesitate to hold themselves to a higher standard. And the intermediate scrutiny standard, even if not legally binding on their actions, is an excellent way to discipline the policymaking process.

But before I get to that, it’s helpful to think of the ways that policymaking can fail. These fall into three broad categories:

  1. Regulating Things that Aren’t Actually Problems

Making rules is expensive and time-consuming. Every new rule adds to the cognitive burden on those expected to comply with – and enforce – that rule. So it seems fair to expect that any proposed rule be designed to address a real problem. 1

Example: Voter ID requirements

Bad Argument for the Rule: “What’s the big deal? It’s easy to comply.”

  2. Collateral Damage

Many rules are well-intentioned, but end up creating so many ancillary problems that their cost – often unanticipated – greatly exceeds their benefit.

Example: HIPAA.

Bad Argument for the Rule: “We’re not worried about those other things – THIS thing is the only thing that’s important.”

  3. Ineffectiveness

Some rules are all sound and fury, signifying nothing. They claim to solve a problem, but don’t actually advance the cause. They often come in an emotional wrapper.

Example: Assault weapon bans.

Bad Argument for the Rule: “Think of the children.”

And, of course, many rules display characteristics of two or even all three of these markers of policy failure.

So how can the intermediate scrutiny standard help? Easy – it provides a disciplined mental framework for evaluating the effectiveness of policy. The standard requires that rules:

  • address substantial government interests;
  • directly advance those interests; and
  • do so in a reasonably narrow fashion.

That’s actually a great way of thinking about ALL policy. Because assuming we want effective policies, and aren’t just proposing rules for short-term political gain, tribal belief, or sheer contrariness, 2 we should rightly reject rules if they can’t meet this test.

Substantial government interest? That’s asking the question of whether a rule is actually addressing a real concern. Is there a serious enough problem that we need a rule, and all of the attendant costs and implications of government power that come with it? 3

Directly advancing the interest? That’s getting to whether the proposed rule actually has a chance of working. Does it actually dig away at the problem, or is it just window dressing?

Is it narrowly applied? That’s focused on collateral damage. Does our proposed rule create all sorts of other costs and externalities, quite apart from the issue the rule is trying to address?

Of course, this isn’t always easy. It may be hard to tell whether a potential rule will actually work. Ancillary consequences may not be seen until a rule is already in place. And motivated reasoning can lead advocates of a rule to ignore all evidence and argument against their baby.

But assuming we want to enact rules that actually work? There’s a lot to be said for using the intermediate scrutiny standard as our analytical framework whenever evaluating policy.


  1. If the objection is that this formulation stacks the deck against rules, well, yes. Rules should have the burden of justifying their own existence. And in close cases, the rule should lose in favor of greater freedom.
  2. A big assumption, I know.
  3. This is particularly key when dealing with rules that carry criminal sanctions. Given the realities of how enforcement of such rules works, some have re-styled the “substantial” element of this prong as “don’t support any criminal laws that you aren’t willing to kill to enforce.”

Credibility in Business and Government

[This post was inspired by an email discussion after my last CLE webinar, “Lawyers & Lies.”]

Prior to the 2016 Presidential election, it wasn’t exactly a secret that Donald Trump is a less-than-effective businessman. Those familiar with how businesses operate and grow know that building a business empire through real estate and one’s gilded name, on the back of inherited wealth, is no marker of an excellent business operator.

The signs were all there: the bankruptcies, the reliance on a small coterie of loyalists and family, the rumors of shady dealings, the repeated stories about screwing over counterparties (often tradespeople and small-scale vendors), and the lack of any business vision other than the simple logic of commercial real estate: leveraging other people’s money and hoping that rents rise and assets inflate fast enough to outpace debt service.

So I doubt that any serious business person voted for Trump on the commonly held assumption that he would wield his “business acumen” to bring the problems of unruly government to heel. 1 But plenty of people less familiar with business surely bought into this pipe dream, assuming that Trump’s gold-festooned lifestyle was a proxy for serious mastery of all things business.

It should now be obvious – to all but the willfully blind – that the assumption of Trump’s business skills has been put soundly and firmly to bed. The failures of operational discipline were apparent from the start, from the disorganized transition to the complete goat rodeo of Trump’s first travel ban rollout – an egregious bit of policy that nonetheless could easily have been landed successfully had a defter hand been at the helm. But beyond this lack of operating chops, we are now seeing the impact of another, even more critical form of business currency that Trump seems devoid of: credibility.

Presidents and politicians, even more than business leaders, are notorious for spin, overstatement, and failed predictions. But this doesn’t mean they don’t retain – and rely on – some reservoir of core credibility: the sense that much of what they say, particularly statements of fact and important or personal commitments, can be fundamentally trusted.

Trump is something else entirely. In the New York Times yesterday, David Leonhardt offered a good overview of the President’s many, many confabulations. Yet while the man is clearly a liar (in the sense that so many of his untruthful utterances, unlike the spin or failed predictions of other politicians, are clearly intentional), he is also something more: a bullshitter. The bullshitter will lie, certainly. But more fundamentally, the bullshitter doesn’t care about the truth. Whatever he says is whatever he says – it’s just a means to an end. He says whatever he needs to say to get to where he wants to be.

And the thing is, being a bullshitter probably worked pretty well in Trump’s sad, shoddy, little business empire. You can shine on lenders, as long as they get paid or you’ve got an escape hatch via bankruptcy. You can stiff your “little guy” vendors, because what are they going to do, sue? And you can take advantage of the star-struck and gullible, because suckers abound when celebrity (even of the tarnished, C-list variety) is around.

But lacking credibility doesn’t work in real business, or – as Trump is learning – in government. First, there’s transparency: people start checking things out. They follow up to see if you did what you said you were going to do. And they call you on your bullshit when you lie or fail to follow through.

Even worse for the bullshitter who finds himself out of his depth is the fact that the loss of credibility makes it really hard to get things done. While our society has lots of contracts, laws, and verification procedures, there are myriad points where we invest – time, money, effort – based on our trust of another person. Imagine if you didn’t trust a counterparty not to retrade or willfully breach an agreement. Would you invest the time to negotiate a deal with them anyway? Of course not. The same goes for government – the bullshitter’s got no ability to cajole, persuade, or incentivize. His bullshit has cost him any room to negotiate, because his counterparties don’t believe what he’s saying, and don’t trust that he will meet any commitments he makes. He’s stuck with nothing but punitive measures.

The punitive-and-petulant approach may have worked passably well in the gaudy corridors of Trump Tower. But as our 45th President is discovering, it’s not remotely enough to meet the challenge of running the country. Credibility must come first.


  1. Although surely plenty voted for him on the assumption that he would adopt business-friendly policies, either hoping that collateral damage (to democratic institutions, national security, minority rights, etc.) would be minimized or out of a willingness to ignore such concerns. The first part of that seems to be working out so far.

Hyperbole and Confirmation Bias

So the other day I saw this tweet from Melinda Gates:


This struck me as an extraordinary claim. Not because I don’t believe that gender-blind applications might make a difference in tech job callback rates, but because of the claim that it makes a one thousand percent difference.

Some context: the first entities to experiment in a large way with gender-blind application screening were symphony orchestras (applicants would literally sit for their auditions behind a screen). This was at a time – the 1970s and 1980s – when the musical directors running symphonies were openly dismissive of female musicians.

The results? Blind screening increased the probability a woman would move past the initial interview by about 50%.

A fifty percent increase in interview success is a massive change. It unambiguously shows the value of the change in screening procedure. And by extension, it reveals the biases that were previously holding talented female musicians back. It’s why such screenings are now the industry norm.

So what would a study that produced a result twenty times higher tell us? Keeping in mind that the symphony study covered a time period where there was far less equality? When symphony directors would proudly and openly express sentiments about the inferiority of women? And the 20x-higher study was presumably done in 2016, where companies compete in an ever-more-desperate race for tech talent?

I’ll tell you what it would tell us: that the tech industry, for all its veneer of respectability and progressiveness, is in fact led by a pack of misogynistic sociopaths, driven by the single-minded goal of preventing women from being hired.

That would be an extraordinary claim – and extraordinary claims require extraordinary evidence. However, all that seems to be available here is the barest of summaries:

  • The study was done by a recruiting agency known as Speak with a Geek (“SWAG”; cute).
  • SWAG says it presented a group of employers with 5,000 candidates, first with full identifying details and later in a gender-blinded fashion.
  • SWAG says that in the first instance, only 5% of those selected for interviews were women. Once gender-blinded, that number went up to 54%.
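The arithmetic implied by those two summary numbers can be checked directly (a minimal sketch; the 5% and 54% figures are the only inputs, taken from SWAG’s summary above):

```python
# Sanity-check the jump SWAG reports: women were 5% of interview
# picks before gender-blinding, and 54% after.
before = 0.05
after = 0.54

ratio = after / before                          # how many times larger
pct_increase = (after - before) / before * 100  # relative increase

print(f"{ratio:.1f}x as many women selected")    # 10.8x
print(f"relative increase: {pct_increase:.0f}%")  # 980%
```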

So that’s actually close to a one thousand percent increase (going from 5% to 54% is a 980% jump). Boo, evil male tech leaders!

I strongly suspect, however, that one of three things is actually going on here:

  • The SWAG study was very poorly designed and/or implemented.
  • The summary of the SWAG study misinterprets the data. 1
  • The study is fabricated – either entirely, or by stitching together a few semi-coherent data points and anecdotes into an amalgamation that doesn’t make any sense.

I don’t know which; SWAG never responded to my request for more info on the study.

But why do I care about this?

First, as a male who has worked in tech for 20+ years – including a stint running HR and recruiting – I find appalling the implication that my colleagues and I constitute some sort of cabal working fiendishly to keep women out of our playground.

Second, seeing statistics batted around in a non-critical way really gets under my skin. We’ve all got our cognitive biases, and many of us have a tendency to embrace “data” that supports our positions without doing adequate diligence on whether that data is, in fact, remotely accurate. It’s a bummer to see Melinda Gates, who should know better, fall into this same trap.

Finally, there’s this: those of us working in tech know that the industry has gender issues. The number of women pursuing CS degrees has plummeted over the last generation (although there are signs this is turning around). The culture in some tech companies can feel less-than-welcoming for a lot of women. And there are surely biases against women, implicit or overt, that male hiring managers bring to the table.

But promoting hyperbolic garbage is wholly counterproductive. Tech is a data-driven business, with a lot of smart people working in it. It betrays a certain unseriousness to promote wild-ass claims with no substance to back them up. It keeps people from engaging. And it makes them not trust anything you say – even when it’s true. After all, if proponents of a particular point of view are cool with just making shit up, why should someone take them seriously?

So let’s be critical and always ask to see the data before accepting the conclusion – particularly when the claim is extreme.


  1. For example, it would be far more plausible that gender-blinding improved the interview rate of a small cohort of female tech candidates by 54%. That could still be a very significant finding.