The first three years of AI were about discovering what it can do.
The next three will be about what humans no longer need to do.
The terminal phase will ask a harder question: what will we choose to do ourselves, even when AI can outperform us?
Capability
Programmers have long known about rubber duck debugging — the practice of explaining your problem to an inanimate object, which forces you to articulate your assumptions and often reveals the solution. LLMs are rubber ducks that talk back.
When I'm stuck on a problem, I'll often write it out in conversation form. Not because I expect the model to solve it, but because the act of explaining activates a different mode of thinking. The model's response, even when it's wrong or off-base, gives me something to push against.
The most valuable thing an LLM can do isn't answer your question correctly — it's help you ask a better question.
Substitution
Another pattern: I use LLMs to rapidly scaffold a conceptual space before I go deep. If I'm learning a new domain, I'll have a long conversation to build up a rough map of the territory — what are the key tensions? What do experts disagree about? What questions are considered solved vs. open?
This is distinct from reading about a topic. Reading is passive. The conversational format forces you to steer, to say "but wait, what about...", which engages more active cognition.
Choice
There's a failure mode too. If you outsource too much of your thinking, you can end up with fluent-sounding outputs that aren't really yours. The model generates the reasoning; you just review it.
I think the key is keeping the model in the role of interlocutor, not author. You drive. It responds. You decide.
I'm still figuring out what the long-term cognitive effects of this kind of tool use look like. But I think the framing of "AI replaces human thinking" is wrong. The better frame might be: it's a new medium for thinking, like writing was.