Using AI for Feedback Without Fooling Yourself

AI expert Andrej Karpathy recently wrote this on X:

Don't think of LLMs as entities but as simulators. For example, when exploring a topic, don't ask: "What do you think about xyz"? There is no "you". Next time try: "What would be a good group of people to explore xyz? What would they say?"

This becomes especially important when you want AI to explore something where opinion and nuance matter—and where it'd be easy to fool yourself:

  • Is this a good business idea?

  • Should we go for PostgreSQL or MongoDB?

  • Tailwind or Bootstrap? Rust or C++? Dogs or cats?

If you put the question directly to ChatGPT or Claude, they'll answer with whatever personality the training data happened to produce. Instead, try:

  • "What would be a good group of people to critique this business idea? What holes would they poke in my thesis?"

  • "What group of practitioners should chime in on our choice of database?"

  • "Who would have a nuanced take on which CSS framework, systems programming language, or pet to choose?"

This isn't far from classic prompting advice about adopting personas—with the twist of asking the AI to generate the list of personas first.
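If you want to wire this into a script rather than a chat window, the pattern is just two calls: one to assemble the panel, one to run the critique. Here's a minimal sketch assuming the OpenAI Python SDK; the model name, prompt wording, and the example business idea are all placeholders, not a prescription.

```python
# Two-step "panel of personas" prompt, sketched with the OpenAI Python SDK.
# Model name, prompts, and the sample idea are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

idea = "a subscription service for office plant care"  # placeholder example

def ask(prompt: str) -> str:
    """Send a single user message and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: have the model assemble the panel instead of answering directly.
panel = ask(
    f"What would be a good group of people to critique this business idea: "
    f"{idea}? List 4-5 roles and why each one's perspective matters."
)

# Step 2: ask the simulated panel to poke holes in the thesis.
critique = ask(
    f"Here is a panel of critics:\n{panel}\n\n"
    f"For the business idea '{idea}', what holes would each of them poke "
    f"in the thesis? Be specific and skeptical."
)

print(critique)
```

The two-call structure matters: if you collapse it into one prompt, the model tends to skip straight to a generic pros-and-cons list instead of committing to distinct perspectives.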

One caveat: even these answers are simulations. Taking the business idea example—yes, it's better than assuming you know what people would think. But if the AI's simulated experts love your idea, that doesn't mean real experts would. The AI is guessing what a VC or a seasoned operator might say, based on patterns in text. It has no actual model of your market, your timing, or the dozen ways your thesis could be wrong that haven't made it into training data.

Simulated criticism is a useful stress test. It's not validation.
