The Averages Fallacy
Reporting on big trends means reporting on averages, leading to headlines like this:
- 95% of AI initiatives fail
- Average AI productivity gains are modest (single-digit percentages)
- Developers feel 20% more productive but are 19% less productive with AI
Without disputing the accuracy of any of these, there's a sneaky assumption that slips past our guard: that most data points sit close to the reported average. "Average" quietly becomes "typical." That's not always a given.
Sometimes it is. The average American male is around 5'11" tall. Some are taller, some shorter, and if you chart it, you get the iconic bell curve.
But it's deceptively false in many situations. Take wealth. In the US, the 2019 median household net worth was $97,300, while the average was $692,100. How can both be true? The median is the middle value: exactly half of households fall above it and half below, no matter how far above. The average gets pulled upward by a handful of extremely rich households. (Source: Wikipedia, Affluence in the United States.)
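A tiny sketch makes the skew concrete. The net-worth figures below are invented for illustration: nine modest households plus one very wealthy outlier. The median barely notices the outlier; the mean is dragged far above what any "typical" household has.

```python
from statistics import mean, median

# Hypothetical household net worths in $1,000s (made-up numbers):
# nine modest households and one very wealthy outlier.
net_worths = [40, 55, 70, 85, 97, 110, 130, 160, 220, 9_000]

print(f"median: {median(net_worths):,.1f}")  # 103.5 -- the middle of the pack
print(f"mean:   {mean(net_worths):,.1f}")    # 996.7 -- dragged up by one outlier
```

Remove the outlier and the mean drops back near the median; the median itself barely moves. That asymmetry is the whole story behind the wealth statistics above.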
Then there's the population problem. The average American male might measure 5'11", but the average NBA player measures 6'7": which population you average over changes the answer. This matters enormously for AI productivity reporting because, as we've written about countless times, there are prerequisites that need to be in place before AI delivers gains. Broad statistics lump everyone together and fail to capture this. Wouldn't it be much more interesting to know what gains are reported by organizations that have the prerequisites in place?
Finally, we can, and should, strive to be the outliers. The 5% of AI initiatives that didn't fail: what did they do right? The outlier companies and developers seeing immense productivity gains: what are they doing that others aren't? With a new technology where everybody is in exploration mode, some will hit on something that works. But it'll get lost in the noise if we don't pay attention.
Averages describe populations. You're not a population.
