A Knowledge Worker's Guide to AI-Assisted Research
What actually works, what doesn't, and how to tell the difference.
AI is remarkably good at explaining concepts, making connections, and sounding confident. It's also remarkably good at being wrong without telling you.
This guide is for you if you've ever asked ChatGPT for references and gotten citations that don't exist, or noticed it making subtle conceptual errors that only an expert would catch.
What's inside:
Why "hallucination" is the wrong word, and the better mental model that changes how you use these tools
Five practical principles for using AI in research without getting burned
Where AI's blind spots actually are (Swiss cheese capabilities, interpolation vs. extrapolation, and why it doesn't learn from your corrections)
The difference between treating AI like a fact-checking treadmill and working with it as an amplified expert
Why your domain expertise is the multiplier, not a consolation prize
No email required. Share it with anyone who'd find it useful.
