The AI-Human Feedback Loop
You might have already seen Andrej Karpathy's recent keynote at the 2025 AI Startup School (if not, catch it here). What stood out for me is his strong emphasis on partial autonomy: the AI does some of the work while the human provides verification and guidance.
I've written before about the types of tasks that are good for AI automation, with *Guided Intelligence*, a task that is highly dependent on specific context *and* very open-ended, being the toughest of them.
High-context, open-ended tasks are precisely those where, with current state-of-the-art models, we can at best hope for this partial autonomy. To make such a symbiotic relationship between human and AI work, Karpathy points out that the cycle between prompting the AI and getting outputs must be short.
A bad example is an AI coding agent that, after a single prompt, drops 1,000 lines of code on you to review and verify. That creates a lot of friction and slows you down immensely. It also gives the agent ample time to run at top speed in the wrong direction.
A good example is the way Claude Code splits a prompt into several small tasks, asks if those make sense, then presents small code snippets to accept or refine. Instead of firing and forgetting, we are rapid-firing.
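To make that loop concrete, here is a minimal Python sketch of a rapid-fire session. Everything in it is hypothetical: `agent.propose_plan` and `agent.propose_patch` stand in for whatever interface your tool actually exposes. The point is the shape of the loop, not the API: small units of work, with a human checkpoint after each one.

```python
# A minimal sketch of a rapid-fire review loop.
# `agent` is a hypothetical object; propose_plan/propose_patch are
# placeholders for your tool's real interface.

def run_guided_session(agent, prompt: str) -> list[str]:
    """Drive the agent in small, human-verified steps instead of one big shot."""
    accepted: list[str] = []

    # Step 1: the agent breaks the prompt into small tasks; the human vets the plan.
    plan = agent.propose_plan(prompt)
    plan = [task for task in plan if input(f"Keep task '{task}'? [y/n] ") == "y"]

    # Step 2: one small patch per task, each individually accepted, refined, or skipped.
    for task in plan:
        patch = agent.propose_patch(task)
        while True:
            print(patch)
            answer = input("Accept, refine, or skip? [a/r/s] ")
            if answer == "a":
                accepted.append(patch)
                break
            if answer == "s":
                break
            # A short correction here keeps the agent from sprinting
            # in the wrong direction for long.
            patch = agent.propose_patch(task, feedback=input("What should change? "))

    return accepted
```

The key design choice is that no single step produces more output than a human can verify in seconds, which keeps the cost of each correction low.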
So, when designing an AI tool meant to assist human workers with a complex task, don't ask "How can we get everything right in one shot?" Don't even ask "How can we do as much work as possible in one shot and then iterate?" Instead, ask "How can we build a tool that affords a low-friction, fast feedback cycle?"