Are You Ready? (Make AI Work - Part 3)

Part 3 of the mini-series. We've covered picking the right problem and making sure it's worth solving. Today: the unglamorous stuff that determines whether your AI initiative has a foundation to stand on.

Tribal Knowledge Is Not a Foundation

Here's the scenario. You've identified a process you want to improve with AI. You know it's important, you know where the freed-up time would go, you've got metrics in mind. Great. Now, quick question: how does that process actually work?

"Oh, ask Linda. She's been doing it for twelve years."

That's tribal knowledge. It lives in Linda's head, maybe in a few emails, a half-finished wiki page from 2019, and a spreadsheet that only makes sense if you squint. This isn't unique to AI. Any attempt to improve a process that isn't documented is going to struggle. You can't improve what you can't describe.

The good news: you don't need perfect process maps. "Reasonably documented with known variations" is a great place to be. You're not producing a 200-page operations manual before you start. You're getting enough clarity to point at specific steps and say, "This is the part we want to improve, and here's what the inputs and outputs look like."

If your process documentation is severely outdated or nonexistent, that's a valuable discovery. Fix it first. Documenting what people actually do (as opposed to what a process diagram from five years ago says they do) will surface inefficiencies you didn't know existed. Sometimes the best outcome of an AI exploration is that you fixed the process before the AI was even involved.

The Data Question

I've written about the five tiers of data readiness before, from paper documents all the way up to structured databases. Where your data sits on that spectrum tells you a lot about how much work lies between you and a working AI solution.

But format isn't the whole story. The more fundamental questions:

  • Does the data exist at all?

  • Can you get to it, or is it locked in a system that doesn't talk to anything?

  • Is it reasonably clean, or full of inconsistencies, duplicates, and gaps?

"Data exists but is scattered across systems" is a surprisingly common answer, and not a deal-breaker. It means there's integration work ahead, and you should budget for it. What is a deal-breaker is pretending the data situation is better than it is. I've seen AI projects kick off with a cheerful "we have tons of data!" only to discover, three months in, that most of it is unusable.

Be honest about where you stand. If the data needs cleanup, fine. If it doesn't exist yet, fine too, but that changes the scope and timeline significantly.
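Those three questions (existence, access, cleanliness) are easy to check at a small scale before committing to anything. As a minimal sketch, here's what a first-pass audit of an exported data sample might look like; the record shape and field names are hypothetical, and a real audit would go deeper than missing values and exact duplicates:

```python
# Quick data-readiness audit: a minimal sketch, not a full profiling tool.
# Field names and sample records are illustrative, not from any real system.

def audit(records, required_fields):
    """Count missing values and exact duplicate rows in a list of dicts."""
    missing = {f: 0 for f in required_fields}
    seen = set()
    duplicates = 0
    for rec in records:
        for f in required_fields:
            value = rec.get(f)
            if value is None or str(value).strip() == "":
                missing[f] += 1
        key = tuple(sorted(rec.items()))  # exact-match dedup only
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(records), "missing": missing, "duplicates": duplicates}

# Deliberately messy sample data:
customers = [
    {"id": "1", "email": "a@example.com"},
    {"id": "2", "email": ""},               # gap
    {"id": "1", "email": "a@example.com"},  # duplicate
]
print(audit(customers, ["id", "email"]))
# → {'rows': 3, 'missing': {'id': 0, 'email': 1}, 'duplicates': 1}
```

Ten minutes of this on a real export tells you more about your data situation than a month of optimistic slide decks.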

Who Owns This Thing?

Every successful AI initiative I've seen has one thing in common: clear ownership by someone who understands the business problem, not the technology.

"IT owns it" sounds reasonable, and IT needs to be involved. But if the initiative is driven purely by the technical team, without a business stakeholder who feels the pain of the current process and can make decisions about trade-offs, you end up building technically impressive solutions to problems nobody has.

The strongest setup: a business stakeholder who owns the problem and the outcomes, with technical support for implementation. Even better if there's executive sponsorship to remove roadblocks. There will be roadblocks.

No clear owner means no one to make the tough calls when priorities conflict. And they will. "Should we optimize for accuracy or speed?" "Should we handle the edge cases now or later?" "This integration is harder than expected; do we simplify or push through?" These aren't technical decisions. They're business decisions that need someone with context and authority.

If your AI initiative is an orphan bouncing between departments with no single person accountable for its success, that's your most important problem to solve. Fix that, and everything else follows.
