Can AI Even Do This? (Make AI Work - Part 4)
Part 4. We've covered picking the right problem, making sure it's worth solving, and checking that your organization is ready. Today: can AI actually do the thing you want it to do, and can you scope it so you find out fast?
The Right Task for the Right Tool
Not everything that sounds like an AI use case is one. And not everything that is one is equally suited.
Some tasks are a natural fit: processing, extracting, or summarizing information. Repetitive decisions with clear (if complex) rules. Tedious manual work following known patterns. These are AI's sweet spots, because they involve pattern recognition over large volumes, which is exactly what the technology is good at.
Other tasks are a stretch: complex judgment calls that even experts struggle with, or creative and strategic decisions where the "right answer" depends on context that's hard to capture. AI can sometimes assist here, but it's a different game. You're no longer automating; you're augmenting. And augmentation is trickier to get right, because the human-AI handoff becomes the design challenge.
A useful litmus test: Is someone skilled currently spending significant time on work that's below their capability? A senior engineer reviewing every document. A specialist manually checking compliance on every transaction. A doctor doing paperwork instead of seeing patients. If the answer is yes, you've likely found a place where AI can free experts to do expert-level work. That's where the leverage is.
The Verification Question
Here's one that doesn't show up on enough checklists: How easy is it to verify whether the AI's output is correct?
This matters more than most people think. If verification takes the same effort as doing the task manually, you haven't saved any time. You've just shifted the work from "doing" to "checking," and you've added a new failure mode: rubber-stamping AI output because checking it is tedious.
The ideal: verification is trivially easy. A quick spot-check, a glance at a dashboard, a simple comparison. The further you get from that, the more carefully you need to think about whether AI actually helps or whether it just creates a more convoluted version of the same workload.
I wrote about this before in the context of AI coding: having AI generate output that takes an expert as long to review as it would to create saves no time. It just leads to exasperated experts. The same principle applies to any AI use case.
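One way to make the verification question concrete is a back-of-the-envelope time model. This is purely an illustrative sketch; the function, parameter names, and numbers are my assumptions, not figures from this series:

```python
def net_hours_saved(tasks, manual_hours, verify_hours, accuracy, rework_hours):
    """Rough expected time saved by delegating a task to AI.

    tasks:        number of task instances per period
    manual_hours: hours for a person to do one task from scratch
    verify_hours: hours to check one AI output
    accuracy:     fraction of AI outputs that pass verification
    rework_hours: extra hours to fix one failed output
    """
    # With AI, each task costs a verification pass plus expected rework.
    cost_with_ai = verify_hours + (1 - accuracy) * rework_hours
    return tasks * (manual_hours - cost_with_ai)

# Cheap verification (a spot-check instead of an hour of work):
# large positive savings.
print(net_hours_saved(1000, 1.0, 0.1, 0.9, 0.5))

# Verification as expensive as doing the task manually:
# savings vanish, and go negative once rework is counted.
print(net_hours_saved(1000, 1.0, 1.0, 0.9, 0.5))
```

The model oversimplifies (it ignores the rubber-stamping failure mode, where verification is skipped rather than done), but it makes the core point visible: the whole case for automation hinges on `verify_hours` being much smaller than `manual_hours`.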
Start Small or Don't Start
You've heard me say this before and I'll say it again. If you can't identify a small, self-contained first version of your AI initiative, that's a red flag.
"All or nothing" projects are where budgets go to die. The beauty of starting small is that you learn fast and cheaply. Can you scope a meaningful pilot that covers just one part of the process, one document type, one team, one region? If not, ask yourself why. Often the reason is that the problem is poorly understood, which brings us right back to the first email in this series.
A related question: Could you see value from solving just 20% of the problem? If partial progress is meaningless, you're looking at a very risky bet. If even a partial solution would be meaningful, because you've identified the 20% that delivers 80% of the value, you've got something workable.
And finally, consider the blast radius. If this project fails, what breaks? If the answer is "critical operations across many teams," you probably want to pick a different starting point. Start with something isolated, something you can roll back without drama. Build confidence and evidence before you tackle the high-stakes stuff.
The best AI projects I've seen didn't start with a grand vision. They started with a single painful task and proved that it could be done better. Everything else grew from there.
