Hyperbole and Confirmation Bias

So the other day I saw this tweet from Melinda Gates:

[Embedded tweet from Melinda Gates]

This struck me as an extraordinary claim. Not because I don’t believe that gender-blind applications might make a difference in tech job callback rates, but because of the claim that it makes a one thousand percent difference.

Some context: the first entities to experiment at scale with gender-blind application screening were symphony orchestras (applicants would literally audition from behind a screen). This was at a time – the 1970s and 1980s – when the music directors running symphonies were openly dismissive of female musicians.

The results? Blind screening increased the probability a woman would move past the initial interview by about 50%.

A fifty percent increase in interview success is a massive change. It unambiguously shows the value of the change in screening procedure. And by extension, it reveals the biases that were previously holding talented female musicians back. It’s why such screenings are now the industry norm.

So what would a study that produced a result twenty times higher tell us? Keeping in mind that the symphony study covered a period when there was far less equality? When symphony directors would proudly and openly express sentiments about the inferiority of women? And the 20x-higher study was presumably done in 2016, when companies compete in an ever-more-desperate race for tech talent?

I’ll tell you what it would tell us: that the tech industry, for all its veneer of respectability and progressiveness, is in fact led by a pack of misogynistic sociopaths, driven by the single-minded goal of preventing women from being hired.

That would be an extraordinary claim – and extraordinary claims require extraordinary evidence. However, all that seems to be available here is the barest of summaries:

  • The study was done by a recruiting agency known as Speak with a Geek (“SWAG”; cute).
  • SWAG says it presented a group of employers with 5,000 candidates, first with full identifying details and later in a gender-blinded fashion.
  • SWAG says that in the first instance, only 5% of those selected for interviews were women. Once gender-blinded, that number went up to 54%.

So that’s nearly an elevenfold jump – roughly a one thousand percent increase. Boo, evil male tech leaders!
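For concreteness, here’s the arithmetic behind that figure – a quick sketch of my own, with the 5% and 54% inputs taken from the SWAG summary above:

```python
# Back-of-envelope check of the figures in the SWAG summary:
# women got 5% of interview slots before blinding and 54% after.

def pct_increase(before, after):
    """Relative increase from `before` to `after`, as a percentage."""
    return (after - before) / before * 100

before, after = 5.0, 54.0

print(after / before)                         # 10.8 -> roughly an elevenfold jump
print(round(pct_increase(before, after), 1))  # 980.0 -> a ~980% relative increase
```

Read loosely as a multiplier, that 10.8x is where the “thousand percent” framing comes from; set against the orchestras’ 50% improvement, it’s also the source of the roughly 20x gap.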

I strongly suspect, however, that one of three things is actually going on here:

  • The SWAG study was very poorly designed and/or implemented.
  • The summary of the SWAG study misinterprets the data. [1]
  • The study is fabricated – either entirely, or by stitching together a few semi-coherent data points and anecdotes into an amalgamation that doesn’t make any sense.

I don’t know which; SWAG never responded to my request for more info on the study.

But why do I care about this?

First, as a male who has worked in tech for 20+ years – including a stint running HR and recruiting – I find appalling the implication that my colleagues and I comprise some sort of cabal working fiendishly to keep women out of our playground.

Second, seeing statistics batted around uncritically really gets under my skin. We’ve all got our cognitive biases, and many of us have a tendency to embrace “data” that supports our positions without doing adequate diligence on whether that data is, in fact, remotely accurate. It’s a bummer to see Melinda Gates, who should know better, fall into this same trap.

Finally, there’s this: those of us working in tech know that the industry has gender issues. The number of women pursuing CS degrees has plummeted over the last generation (although there are signs this is turning around). The culture in some tech companies can feel less-than-welcoming for a lot of women. And there are surely biases against women, implicit or overt, that male hiring managers bring to the table.

But promoting hyperbolic garbage is wholly counterproductive. Tech is a data-driven business, with a lot of smart people working in it. It betrays a certain unseriousness to promote wild-ass claims with no substance to back them up. It keeps people from engaging. And it makes them not trust anything you say – even when it’s true. After all, if proponents of a particular point of view are cool with just making shit up, why should someone take them seriously?

So let’s be critical and always ask to see the data before accepting the conclusion – particularly when the claim is extreme.

Notes:

  1. For example, it would be far more plausible that gender-blinding improved the interview rate of a small cohort of female tech candidates by 54%. That could still be a very significant finding.
