AI in Medicine - Promise and Peril

I recently came across Rachel Thomas's talk on AI in medicine. She co-founded fast.ai (a great learning resource for all things deep learning) and has recently taken a strong interest in the intersection of AI and the medical field.

One line from the talk that hit me hard:

"It can be exciting to hear about AI that can read MRIs accurately, but that’s not going to help patients whose doctors won’t take their symptoms seriously enough to order an MRI in the first place."

AI in medicine has so much promise; it's worth getting it right. There are the obvious technical pitfalls, such as data quality issues. But as the quote points out, there are also more pernicious, inherent biases that AI will perpetuate rather than alleviate if we aren't careful.

Here are some points on how we would tackle an AI project in healthcare:

  • Relentless focus on the final desired outcome, which in my book means better patient health outcomes.

  • Brutal honesty about which metrics (accuracy, recall, F1 score, etc.) actually matter toward that end goal, and at what threshold, so we're not just chasing metrics for their own sake (see the sketch after this list).

  • Before writing any code or putting together any wireframes or prototypes, understand exactly the current workflow: how the new product will be integrated into it, by whom, and what conflicting incentives those people might have.

  • Conduct a pre-mortem: Ask, "It's now six months later and the project was a total disaster. What happened?" Then develop defensively to prevent those failure modes.

  • In particular, consider the following hypothetical future scenario: "Our well-meaning AI product had this terrible unintended side effect. What could we have done to prevent it?"
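
To make the metrics point concrete, here is a minimal sketch (mine, not from the talk) using scikit-learn on synthetic data. On an imbalanced screening problem, a model that calls everyone "healthy" scores high accuracy while helping no one, and the recall you actually get depends on where you set the decision threshold. The dataset, model, and thresholds are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Illustrative, imbalanced "screening" data: ~5% of patients have the condition.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# A trivial rule that labels everyone "healthy" already scores ~95% accuracy,
# with 0% recall: a great-looking metric, and not a single patient helped.
always_healthy = np.zeros_like(y_test)
print(f"always-healthy: accuracy={accuracy_score(y_test, always_healthy):.2f}, "
      f"recall={recall_score(y_test, always_healthy, zero_division=0):.2f}")

# A real model's recall depends on the decision threshold we choose;
# picking that threshold is a clinical trade-off, not a library default.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
for threshold in (0.5, 0.2, 0.1):
    preds = (probs >= threshold).astype(int)
    print(f"threshold={threshold:.1f}: "
          f"accuracy={accuracy_score(y_test, preds):.2f}, "
          f"recall={recall_score(y_test, preds):.2f}")
```

Lowering the threshold catches more true cases at the cost of more false alarms; which point on that curve serves patients best is exactly the kind of question that has to be answered before chasing any single number.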

None of this guarantees that AI gets used for good, but it is the non-negotiable groundwork for a successful initiative.
