Faulty AI Strategies
Overheard on LinkedIn: "Large language models haven't shown any return on investment in the enterprise on any type of important task."
We've already talked about the MIT study finding that 95% of AI pilots fail, so there may well be some truth to that observation. On the other hand, this is an incredibly powerful technology. So why does ROI prove so elusive?
Here's a collection of faulty AI strategies:
Fire and Forget
The top brass enables Copilot for everyone on their enterprise Microsoft subscription and calls it a day. Bonus points for a vague mandate of "use AI or you're fired!"
Why it doesn't work: Enhancing individual performance is a noble goal. But if you expect significant ROI from having your workers write their emails a tad faster, you're in for disappointment. Typing the email isn't the hard part; thinking clearly about what should be communicated in the first place is.
What to do instead: Identify clearly how your organization creates value and where that flow of value hits a snag, then figure out what sort of automation, tooling, or agent can help.
Technological Readiness Leapfrog
"We want to move our processes to an agentic AI, preferably on the blockchain in the metaverse." Okay, and where do your processes live right now? "Split between paper printouts in some forgotten filing cabinet and Bob's brain."
Why it doesn't work: You can't leapfrog too far, or you lose track of what you need to keep. Some skipping of intermediate technology is fine—we learn how to drive without first learning how to ride a horse. But going straight from total chaos to AI automation can be a step too far. You're just encoding the chaos.
What to do instead: Crawl before you walk, walk before you run. The work of clarifying existing processes in a systematic way is valuable even without AI on top of it.
Addition Bias
One of the most interesting cognitive biases is our tendency not to even consider subtraction as a viable strategy. We problem-solve primarily through addition: more tools, more steps, more controls.
Why it doesn't work: Overwhelm, confusion, and ultimately a slowdown of delivering value, because now AI isn't a help—it's "one more thing to do" on top of everything else.
What to do instead: The best process is the one you don't need. Don't ask, "How can I add AI to this process to make it faster?" Ask, "How can I remove this process thanks to AI?"
The Intern's Job
"Let's give the AI project to that smart junior person who's good with ChatGPT."
Why it doesn't work: Knowing how to prompt ChatGPT is not the same as knowing which process to redesign. The people who understand where the real friction lives are the senior operators who've been doing the work for years. They're also the ones least likely to volunteer for "the AI thing." So the project ends up technically competent but strategically irrelevant.
What to do instead: An automation that doesn't draw on the experience of the people who live the problem every day is doomed to fail. Those senior operators need to be on the team, with the space and freedom to explore what works and what doesn't.
Boiling the Ocean
"We're building an enterprise-wide AI transformation roadmap across all 14 business units."
Why it doesn't work: Everybody has an opinion, everybody has requirements, and of course they all need to "circle back," "find alignment," or deploy whatever other corporate speak stands in for "nothing will ever get done." By the time you've finished the roadmap, the technology has moved on, the executive sponsor has changed roles, and half the stakeholders have lost interest.
What to do instead: Be the team down the hall that quietly automates one painful report and starts saving 20 hours a week. It doesn't even need to be fancy LLM stuff.
Conclusion
I'm sure you could identify many more of these flawed strategies, but this sampling should already give you some ideas on how to tackle AI in your organization. The ROI isn't found in your strategy deck; it's in that one annoying process nobody wants to touch.
