Shen and Tamkin at Anthropic ran a study I keep returning to. They asked 52 students in an introductory CS course to learn a new programming library — half with AI coding tools, half without — and then tested whether they understood what they'd produced. The AI-assisted group scored 17% lower on conceptual questions, with the largest gap in debugging. But the number that stays with me is different: AI-assisted students encountered a median of one error during development, compared to three for the control group.1

Those two missing errors are where the learning would have happened. The AI didn't fail the students on a test. It prevented the failures that would have prepared them for one.

I've come to think of this as a particular kind of exchange: not the dramatic kind, not replacement, but something closer to a subsidy that generates its own dependency. The benefit is real and visible: faster code, fewer bugs, a smoother afternoon. The cost is real and invisible: you don't notice the understanding that never formed. There's no error message for the skill you didn't build.

The version of this conversation I encounter most often wants me to pick a side: AI makes us smarter, or AI makes us dumber. I think the honest answer is more destabilizing than either camp admits, because both are true, operating on the same people in the same decade, and we don't have a framework for something that does both at once.