We've been here before

With all the recent news about the disillusionment setting in around generative AI, I'm wondering how AI initiatives compare to other technology initiatives. I'm sure those experience plenty of failure, too; AI isn't that special.

There are undoubtedly several failure modes unique to AI work. In the past, the culprit was often a lack of high-quality data. For generative AI, where data requirements can be significantly lower, it could be the lack of good, unambiguous evaluations.

But often, even when data is plentiful and evaluations are good, an AI initiative can stall and end up in what I'll call proof-of-concept purgatory if it just doesn't turn out to be all that useful. Now, why would that happen? Plenty of reasons:

  • The problem shouldn't have been tackled with AI in the first place.

  • The "problem" is non-existent, so nobody will use the solution.

  • The solution wasn't built with a tight user-focused feedback loop, so while it's generally going in the right direction, it still misses the mark.

  • The solution wasn't integrated into the larger ecosystem of where it's being deployed.

These reasons are not unique to AI. To avoid these issues, follow these two principles:

  1. Begin with the end in mind.

  2. Release early and iterate with lots of feedback rather than planning everything out from the beginning.

That might sound contradictory: How can we start with the end in mind and still iterate and adjust? The key is that beginning with the end in mind means clearly understanding what success looks like, not prescribing in much detail how we'll get there. The clear vision of the final outcome guides the iterations (and keeps them from running in circles).

With just those two simple-to-understand (but hard-to-put-into-practice) principles, your project, whether it uses AI or not, has a much higher chance of success.
