Weapons of Math Destruction
I finally got my hands on a copy of Cathy O’Neil’s book, Weapons of Math Destruction. I haven’t even finished it yet, but it’s already given me lots to think about. Cathy is a mathematician turned quant turned data scientist. In the book, she explains and demonstrates how machine learning models can be, and have been, used and abused, sometimes with catastrophic consequences. The book was written in 2016, almost ten years ago as of this writing, and since that time the power and prevalence of AI have only increased.
Cathy defines a Weapon of Math Destruction as a mathematical model or algorithm that causes large-scale harm due to three characteristics:
Opacity. It’s a black box making inscrutable decisions: Why was your application for a loan rejected? Computer says no. 🤷‍♂️
Scale. It operates on a massive scale, impacting millions of people (e.g. credit scores, hiring filters, policing tools).
Damage. It reinforces inequality or injustice, often punishing the poor and vulnerable while rewarding those already privileged.
Two further issues with WMDs are that they often create feedback loops where they reinforce their own biases, and that they offer no recourse for those harmed.
The book was written when deep learning was just about to take off, with image recognition as the first big use case. A decade later, we find ourselves in a situation where these WMDs are ever more powerful. If the machine-learning algorithms of 2016 were atomic bombs, the LLM-powered algorithms of today are hydrogen bombs, with an order of magnitude more destructive power.
It doesn’t have to be this way. Working backwards from the criteria of what makes a model a WMD, we can turn the situation on its head:
Transparency. Its design, data, and decision logic are explainable and auditable by those affected.
Proportionality. It’s applied at an appropriate scale, with oversight matching its potential impact.
Fairness & Accountability. It reduces bias, includes feedback to correct errors, and provides recourse for those affected.
Bonus: it promotes positive feedback loops (improving equity and outcomes over time) and supports human agency rather than replacing it.
With the right architecture, an AI tool can ground its decisions in an explainable way. The rest comes down to how it gets deployed. Think hard about the feedback loops and accountability your AI solution creates: if your awesome automated job-application review AI rejects someone who would have been a great hire, would you ever know? Don’t trust, but verify.
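To make the “verify” part concrete, here’s a minimal sketch in Python of two habits that help: recording a human-readable rationale alongside every decision, and routing a random slice of rejections to a human reviewer so the false negatives have a chance of being noticed. All names here (screen_application, ScreeningResult, the 5% sample rate) are hypothetical illustrations, not from the book or any particular library.

```python
import random
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    """One decision, with enough context to audit it later."""
    applicant_id: str
    accepted: bool
    reasons: list[str]                # human-readable rationale, kept for auditability
    flagged_for_review: bool = False  # True if a human should double-check this rejection


# Hypothetical: fraction of rejections routed to a human reviewer.
AUDIT_SAMPLE_RATE = 0.05


def screen_application(applicant_id: str, score: float, threshold: float = 0.7) -> ScreeningResult:
    """Score-based screen that records *why* it decided, and randomly sends a
    slice of rejections to human review so systematic errors (and the good
    candidates the model misses) can actually be noticed."""
    accepted = score >= threshold
    reasons = [f"model score {score:.2f} vs threshold {threshold:.2f}"]
    flagged = (not accepted) and (random.random() < AUDIT_SAMPLE_RATE)
    return ScreeningResult(applicant_id, accepted, reasons, flagged)


# Usage: log every result; the flagged ones form the feedback loop back to humans.
result = screen_application("applicant-123", score=0.62)
print(result)
```

The sample rate isn’t about statistics so much as accountability: a small but steady stream of rejected applications gets a second look, which is exactly the recourse and error-correction the list above asks for.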
