Will Anyone Use It? (Make AI Work - Part 5)

Last part of the mini-series. We've covered problem selection, business impact, organizational readiness, technical fit, and scope. All the "hard" stuff. Today: the human side, which is where plenty of otherwise solid AI initiatives quietly go to die.

How Do People Feel About This?

You can have the perfect problem, the right data, a well-scoped pilot, and a clear owner. If the people who are supposed to use the thing don't want it, none of that matters.

There's a spectrum here. On the good end: people are actively requesting or championing the initiative. They feel the pain of the current process and they're eager for help. You'll know this when you see it, because they'll be the ones asking you when the solution is ready.

On the bad end: the affected teams don't even know it's happening. Surprise AI rollouts rarely go well. People who weren't consulted about a change to their workflow tend to resist it, even if the change is objectively beneficial. Especially if it's being done to them rather than with them.

The middle ground, "mixed or skeptical reactions," is actually workable. Skepticism can be healthy. It means people are paying attention and thinking critically. The question is whether the skepticism comes from informed concern ("I don't think AI can handle the edge cases in our process") or from fear ("Are they trying to replace me?"). The first kind is useful. The second kind needs to be addressed head-on.

The Framing Trap

How is AI being talked about in your organization? This matters more than you might think.

If the message, stated or implied, is "AI will help us do more with fewer people," you're going to have a rough time getting buy-in from the very people whose expertise you need to make the initiative work. Remember the earlier email about who owns the initiative: you need senior operators who live the problem every day. If they think they're building the tool that replaces them, they're not going to give you their best ideas.

"AI as a productivity tool" is better, but still a bit vague. The framing that works best in practice: AI frees people for higher-value work. Not because it's a clever spin, but because, when done right, that's what actually happens. The specialist who used to spend half their day on manual data entry can now spend that time on the analysis work they were actually trained for. That's not a threat. That's a promotion in disguise.

Getting the framing right isn't about marketing. It's about telling the truth in a way that makes people want to be part of the change rather than fight it.

AI Theatre

I wrote about this one back in June, but it bears repeating in this context. There's a specific risk where the pressure to "show AI adoption" creates initiatives that look impressive in a demo but never deliver production value. Press-release-driven development. Flashy proofs of concept that were never meant to survive contact with real users and real data.

If the main motivation for your AI initiative is that someone important wants to see AI on a slide deck, that's a problem. Not because visibility is bad, but because optics as the primary driver leads to optimizing for the wrong thing. You end up with something that demos well at the quarterly review but doesn't actually help anyone do their job.

The antidote is simple: focus on outcomes. Does the thing get the job done? Does it save time, reduce errors, free up capacity? If yes, the demos will take care of themselves. If no, no amount of polish will save it.

So, that's a wrap for the mini-series. Stay tuned for a recap next week.
