Will AI Put Researchers Out of Their Jobs?
A friend recently asked me whether AI is going to put theoretical physicists out of work. His fear: theoretical physics will "become primarily about fact-checking and fleshing out information provided by AI."
I don't think that's what happens. But the concern is worth unpacking, because it reveals something important about what AI can and can't do, well beyond physics.
Large language models are interpolation machines. They're trained on existing text to predict what comes next, which makes them astonishingly good at recombining, rephrasing, and synthesizing ideas that already exist. But they can't reliably extrapolate. They can't make the leap to something genuinely new unless each step can be verified immediately and cheaply, the way automated tests verify code in programming.
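To make the "interpolation" framing concrete, here's a toy sketch of next-token prediction at inference time. Everything in it is invented for illustration: a real LLM computes its scores with a neural network over a vocabulary of tens of thousands of tokens, not a lookup table.

```python
import math

# Toy next-token predictor: the "model" is a hand-written table of logits
# (unnormalized scores) for what might follow a given context. A real LLM
# computes these logits with a neural network; the contexts and scores
# below are made up purely for illustration.
TOY_LOGITS = {
    "the speed of": {"light": 4.0, "sound": 2.5, "gravity": 0.5},
}

def softmax(scores):
    """Turn logits into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def predict_next(context):
    """Return the most probable next token for a known context."""
    probs = softmax(TOY_LOGITS[context])
    return max(probs, key=probs.get), probs

token, probs = predict_next("the speed of")
print(token)   # "light" -- the statistically likeliest continuation
print(probs)   # the model can only rank tokens it already has scores for
```

Note what the toy cannot do: it ranks continuations it already has scores for, and nothing in the mechanism produces a candidate from outside that set. That, in miniature, is the interpolation boundary.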
An LLM can beautifully explain existing theories, help you work through known derivations, and connect ideas across papers. But the next breakthrough in quantum gravity isn't coming from recombining tokens in the training data.
So no, AI isn't replacing theoretical physicists. But it will change the job, and that's where the real risk hides: in how the work gets divided between researcher and AI.
I once heard a professor lament that she felt like "the world's highest-paid administrative assistant" because of all the non-research tasks she had to handle. Every hour spent filling out forms was an hour not spent on the research the university hired her to do.
AI could go either way here. In the good scenario, it handles the tedious parts: literature reviews, first-draft summaries, routine calculations, formatting, correspondence. The physicist spends more time on actual physics. In the bad scenario, the one my friend described, the physicist becomes a full-time fact-checker, spending their days verifying AI output instead of doing original work. Same administrative assistant problem, new coat of paint.
This is the fork in the road. If you use AI as a generator that you supervise, you're on the fact-checking treadmill. The AI produces, you verify. The more it produces, the more you verify. You've given yourself a new job. If you use AI as an accelerator for work you're already doing, the dynamic flips. You're the one doing the physics. The AI handles the parts that don't require your expertise. You stay in the driver's seat.
As for "learning to ask the right questions," my friend is spot on. That's always been the hard part of research, and it's the part AI is least equipped to help with. The right question is, by definition, one that the existing body of knowledge hasn't answered yet. Asking it requires domain expertise, intuition, and taste.
And AI doesn't have taste. It has statistics.
PS: This wraps up our mini-series on AI for research. Check out the free PDF guide summarizing these points in the resources section of our website, https://www.aicelabs.com
