Are You Holding It Wrong?

I continue to observe this split: Talk to one person about AI in their job, and they say it's mostly useless. Talk to the next person, and they couldn't imagine working without it. When people share these experiences on social media, the debate quickly devolves into an argument between AI skeptics and AI fans:

  • The fans allege that the skeptics are "holding it wrong". What are your prompts? What context are you giving the AI? What model are you using?

  • The skeptics allege that the fans are lying about their productivity gains, whether to build influence on LinkedIn and the like or because they've been bought by OpenAI and Anthropic.

That's unfortunate, because the nuance gets lost. I find myself agreeing and disagreeing with both camps on occasion:

  • I am highly skeptical of claims of massive productivity boosts that run counter to what I've seen from AI to date. Think, "Check out the 2000-word prompt that turned ChatGPT into a master stock trader and made me a million dollars." Hm, unlikely. There are inherent limitations in the LLM approach that no clever prompt can overcome.

  • At the same time, dismissing everyone who reports positive experiences with AI as a "paid shill" means sticking your head in the sand.

Now, about the "holding it wrong" part. Powerful tools have a learning curve. In an ideal world, there'd only be extrinsic complexity, that is, complexity coming from the nature of the difficult problem itself. But every tool also brings intrinsic complexity that you have to master to get the most out of it. AI tools, being as new as they are, still carry a lot of this intrinsic complexity. Every one of us has a different tolerance for how much of it we'll put up with to get the outcomes we want, and I believe that goes a long way toward explaining the experience divide.

In my own experience using Claude Code for development, I took care to write instructions that lead to good design and testability, and I find that this allows Claude to produce good code that's easy for me to review, understand, and build upon. So in that particular domain, if you claim that AI is useless for coding, I would indeed tell you that you're "probably just holding it wrong."
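
To give a flavor of what I mean: Claude Code reads project instructions from a CLAUDE.md file. The rules below are an illustrative sketch of that kind of file, not a verbatim copy of my own setup:

    # Project conventions (illustrative example)

    - Prefer small, single-purpose functions; avoid hidden global state.
    - Keep I/O at the edges so the core logic stays pure and unit-testable.
    - Write or update a test for every behavioral change, and run the
      test suite before declaring a task done.
    - Keep each change small and focused so it's easy to review.

Rules like these don't make the model smarter; they steer its output into a shape that's quick to verify, which is where the gain showed up for me.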

My prediction: As tools mature, their intrinsic complexity will decrease, and more people will put in the (now more manageable) work to get good results out of them. If AI still fails then, it'll be because the actual problem was too hard.
