You're probably reading this because an algorithm surfaced it. You might summarize it later with AI. And if you disagree with something in here, there's a nonzero chance you'll ask ChatGPT to articulate why.

None of that is a criticism. It's just the water we swim in now. But it's also the subject of what might be the most important question nobody's really sitting with: as AI gets smarter, are we — quietly, imperceptibly — getting dumber? And if we are, is that actually fine?

Welcome to the cognitive bargain. You're already a party to it. Let's talk about what you're trading away.

The Case That You Should Relax

The panic over AI and cognitive decline follows a script that's been performed, almost word for word, for about 2,400 years.

In the Phaedrus, written by Plato around 370 BCE, Socrates warns that writing would "introduce forgetfulness into the soul" and give people "the appearance of wisdom, not its reality." He was technically right: writing did reduce the need for memorization. But the only reason we know he said it is that Plato wrote it down. The irony is perfect, and it repeats.

When Gutenberg's press arrived, the monks whose entire lives revolved around copying manuscripts were rendered obsolete. A craft that took decades to master was replaced by a machine. European literacy at the time sat around 5-10%. Within 50 years, 15-20 million books were in circulation and the Renaissance was underway. The "deskilling" was real. So was the civilizational leap that followed.

Calculators in the 1970s triggered the same fears. Kids will forget how to do math! Ellington's 2003 meta-analysis found the opposite: calculator use did no harm to students' computational skills, and when calculators were integrated into instruction, skills actually improved. The scare was a false alarm.

When Google arrived, Columbia researcher Betsy Sparrow showed that people stopped memorizing facts and started memorizing where to find them. Our memory didn't disappear; it restructured into what psychologists call "transactive memory," a kind of cognitive partnership with the internet.

The pattern is almost suspiciously consistent: technology offloads a cognitive task, people panic, humanity adapts, and the net result is a leap forward that would've been impossible without the trade-off.

The monks lost their craft. The world got the Scientific Revolution.

And the pro-AI case doesn't stop at historical analogy. It's got hard data.

A 2023 study in PNAS tracked 5.8 million decisions by professional Go players over 71 years. After AlphaGo's 2016 victory — the moment a machine proved superhuman at humanity's most complex board game — something unexpected happened. Human decision quality improved. Novelty increased. Players started making historically unprecedented moves, and those moves were more effective. They weren't copying the AI. They were thinking in ways they never had before.

Andy Clark, the philosopher who co-originated the Extended Mind thesis, published a 2025 paper in Nature Communications using this data to argue that AI doesn't replace creativity — it catalyzes it. "We have not achieved this by becoming dumber brains," he writes, "but by becoming smarter and smarter hybrid thinking systems."

Then there's the equalizer effect, which might be the most underappreciated finding in the whole debate. A Harvard/BCG study of 758 consultants found that GPT-4 boosted task quality by 40% overall — but the distribution was wildly asymmetric. Bottom performers improved by 43%. Top performers? Only 17%. AI compressed the skill distribution, lifting the floor dramatically. The same pattern appeared in a Stanford/MIT study of 5,000+ customer service agents: a 35% throughput lift for bottom-quartile workers, almost nothing for veterans.
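
To see what "compressing the skill distribution" means concretely, here's a toy calculation. The baseline scores are invented for illustration; only the 43% and 17% uplifts come from the Harvard/BCG figures above.

```python
# Toy illustration of the "equalizer effect": apply the Harvard/BCG
# uplift figures (43% for bottom performers, 17% for top performers)
# to hypothetical baseline quality scores and watch the gap shrink.
# The baseline scores (50 and 90) are invented, not from the study.

bottom_before, top_before = 50.0, 90.0   # hypothetical baseline quality
bottom_after = bottom_before * 1.43      # +43% uplift (study figure)
top_after = top_before * 1.17            # +17% uplift (study figure)

gap_before = top_before / bottom_before  # 1.80x: top vs. bottom, pre-AI
gap_after = top_after / bottom_after     # ~1.47x: the gap compresses

print(f"Pre-AI ratio:  {gap_before:.2f}x")
print(f"Post-AI ratio: {gap_after:.2f}x")
```

On these made-up baselines, the absolute gap narrows from 40 points to about 34, and the relative gap from 1.8x to roughly 1.5x. The floor rises much faster than the ceiling.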

If "cognitive decline" means the average person can now perform at the level of an expert, you have to ask: decline for whom? The people most worried tend to already be cognitively privileged. For billions of people worldwide — the Kenyan farmer accessing specialist medical knowledge, the 3 million researchers in 190+ countries using AlphaFold's protein database — AI isn't a crutch. It's the first ladder they've ever been handed.

And there's one more uncomfortable truth the "pro" camp has in its arsenal: the Reverse Flynn Effect. IQ scores in developed nations have been declining since the 1990s — well before ChatGPT. Studies from Norway, Denmark, Finland, the UK, and the US all show statistically significant drops. The causes? Processed food, sleep deprivation, social media, reduced reading. Environmental factors, not AI. If anything, blaming AI for cognitive decline is like blaming the ambulance for the car accident.

So that's the case for relaxing. It's historically grounded, empirically supported, and philosophically coherent. It's also, I think, incomplete in a way that should bother you.

The Case That You Should Be Terrified

Everything above is true, and none of it accounts for the thing that makes AI fundamentally different from every previous tool.

Calculators offload arithmetic. GPS offloads navigation. The printing press offloaded copying. Each of those tools replaced a specific, bounded cognitive function. You could point to the skill being outsourced, measure it, and evaluate the trade.

AI offloads thinking itself. Reasoning, writing, analysis, creativity, judgment, moral deliberation. There is no fixed domain. The calculator fills a computational gap. AI fills a cognitive gap. And that distinction isn't pedantic — it's the whole ballgame.

Let's start with the brain scans. The MIT Media Lab's 2025 study followed 54 participants over four months, recording EEG while they wrote essays in one of three groups: brain-only, search engine, and LLM-assisted. The LLM group showed the weakest neural connectivity across the board, with reduced activity in the alpha and beta bands associated with attention and problem-solving. 78% of LLM users couldn't quote a single passage from their own essays. And here's the kicker: when LLM users were later forced to write without AI, they showed weaker neural connectivity than people who had never used AI at all. They didn't just fail to improve. They regressed past baseline.

The researchers coined a term for this: cognitive debt. Like financial debt, it accumulates invisibly. You feel productive in the moment. The bill comes later — diminished critical thinking, reduced creativity, shallower information processing.

But that's adults. For kids, the picture is categorically worse.

Timothy Cook's 2026 piece in Psychology Today draws a distinction that should keep any parent up at night: cognitive atrophy vs. cognitive foreclosure. Adults who offload tasks to AI lose skills they once had. That's atrophy — bad, but potentially reversible. Children who grow up with AI never build those skills in the first place. That's foreclosure. And foreclosure, Cook argues, may not be reversible the way atrophy is.

The analogy: if you stop going to the gym, your muscles weaken. That's atrophy. You can rebuild them. But if a child never walks, the neural pathways for walking don't develop on schedule. That's not weakness. That's absence. A child offloading reasoning to AI isn't making a trade — they're skipping a developmental stage they don't even know exists.

Gerlich's 2025 study found a significant negative correlation between AI usage and critical thinking (r = -0.68), with younger participants showing the strongest dependence and the lowest critical thinking scores. The relationship was non-linear — moderate use showed minimal impact, but past a threshold, the decline was steep.

Now layer on the Bainbridge Paradox.

In 1983, automation researcher Lisanne Bainbridge identified an irony that has since been cited over 4,700 times: the more you automate a task, the less the human operator practices the manual skill. The less they practice, the worse they get. But the whole point of keeping a human in the loop is for those rare moments when automation fails — precisely when you need the skill most.

Air France Flight 447 is the textbook case. In 2009, when ice crystals blocked the pitot tubes and the autopilot disconnected — a recoverable event lasting less than a minute — the pilots couldn't fly the plane manually. They'd been deskilled by the very automation designed to protect them. The stall warning sounded 75 times. They held the plane in a stall for three minutes and thirty seconds. 228 people died. The aircraft was fully functional.

Now apply this to thinking. When AI hallucinates — confidently, fluently, in ways that are often undetectable without the very expertise being offloaded — will users have the cognitive capacity to notice? You need to already know the answer to evaluate the answer. And if you've been letting AI do the knowing for you, you're the pilot who can't recognize a stall.

There's biological precedent for this beyond aviation. A 2020 study in Scientific Reports found that greater lifetime GPS experience predicted worse spatial memory when navigating without GPS, and a steeper decline in hippocampal-dependent spatial memory over time. London taxi drivers, who train on "the Knowledge" rather than GPS, have measurably larger posterior hippocampi. And a population study of 8.9 million US death records found that taxi and ambulance drivers, two professions built on intensive real-time spatial navigation, had the lowest Alzheimer's-related mortality rates.

GPS offloaded one cognitive function and produced measurable brain changes. AI is offloading reasoning, analysis, creativity, and judgment simultaneously. If one narrow offload can measurably remodel the hippocampus, what does comprehensive cognitive outsourcing do?

And there's a dimension that almost nobody talks about: thought homogenization. A 2026 study in Trends in Cognitive Sciences analyzed 130+ studies and found that despite pulling from enormous datasets, LLMs consistently produce outputs less varied than human thought. When people use LLMs to polish writing, stylistic individuality is lost. Groups using LLMs generate fewer ideas than groups collaborating without AI. And AI suggestions systematically shift non-Western writing styles toward American/Western norms — a form of cognitive imperialism baked into the training data.

The mechanism is architectural: next-token prediction favors high-probability continuations, smoothing over outliers and reinforcing dominant patterns. At civilizational scale, this doesn't just affect content quality. It narrows the range of human thought itself.
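
A minimal sketch of that mechanism. The token distribution below is invented, but the math is standard: sampling from a softmax at lower temperature piles probability mass onto the most likely continuation, and the distribution's entropy (a rough proxy for output diversity) collapses.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; lower temperature sharpens the peak."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy in bits: higher means more varied sampling."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented logits for five candidate next tokens: one common phrasing,
# four rarer (more individual) alternatives.
logits = [3.0, 1.5, 1.0, 0.5, 0.2]

for t in (1.5, 1.0, 0.5):
    probs = softmax(logits, temperature=t)
    print(f"T={t}: top-token mass={probs[0]:.2f}, "
          f"entropy={entropy(probs):.2f} bits")
```

As temperature drops from 1.5 to 0.5, the most common continuation's share rises from about 51% to about 93%, and entropy falls from roughly 1.9 bits to under 0.5. The pressure exists even before any decoding tricks: next-token training itself rewards high-probability continuations, so the model leans toward the statistical center of its training data. Low temperature just makes the effect easy to see.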

The Part Where Both Sides Are Right and That's the Problem

So here's where it gets genuinely uncomfortable, because both arguments are correct and they don't resolve.

The historical precedent is real. Humanity has traded specific cognitive skills for collective capability at every major technological inflection, and it has always been worth it. The printing press killed manuscript copying and birthed the Enlightenment. The calculator killed mental arithmetic and nobody misses it. This pattern is so consistent across millennia that dismissing it requires dismissing the entire arc of human progress.

And the neuroscience is also real. AI is measurably reducing neural connectivity. Children are experiencing cognitive foreclosure. The Bainbridge Paradox applies with terrifying precision. The "this time it's different" argument has genuine structural merit — AI isn't offloading a task, it's offloading the capacity for tasks.

Both things are true. And the tension between them isn't something you resolve with a hot take. It's a genuine paradox at the center of where we're headed.

David Krakauer at the Santa Fe Institute offers the most useful framework I've found. He distinguishes between complementary cognitive artifacts — tools that make you smarter even after you put them down, like an abacus — and competitive cognitive artifacts — tools that make you less capable without them, like GPS. His concern is that AI is the ultimate competitive artifact.

Every major competitive artifact in history turned out to be worth it. Writing is a competitive artifact. Take away books and we're worse off than pre-literate oral cultures. We've been accumulating dependency on competitive artifacts for millennia, and each one made civilization more capable, not less.

So the trade has always been worth it. Except the terms of this trade are unprecedented. Previous trades involved offloading specific skills in exchange for specific capabilities. This trade involves offloading the meta-skill — the ability to think, reason, and judge — in exchange for... what, exactly? Convenience? Speed? A 43% boost for bottom performers that might reverse once AI surpasses top performers too?

The Harvard/BCG equalizer data is powerful today. But researchers at Harvard's D3 Institute have already warned that once AI surpasses even top performers, "inequality will likely increase." The equalizing effect may be a temporary phase. And the centaur model — Kasparov's insight that "weak human + machine + better process" beats strong AI alone — is already crumbling. In chess, today's top engines beat all human-machine teams. The centaur window may close for cognitive work too.

What This Actually Means for You

There's a concept buried in the MIT study that I keep coming back to: you can't feel cognitive debt accumulating. That's the whole problem. Every individual interaction with AI feels like a win. You're faster. You're more productive. Your output looks better. The debt is invisible until you try to think without it — and by then, like the Air France pilots, you're in freefall and you don't know why.

The researchers who study this aren't saying "stop using AI." They're saying something more nuanced and harder to act on: the trade is only worth it if you maintain what Andy Clark calls "extended cognitive hygiene" — the meta-skill of knowing when to lean on AI and when to struggle through it yourself.

But here's the tension even within that advice: the people best equipped to practice cognitive hygiene are those who already have deep knowledge and strong metacognitive skills. The people most vulnerable to cognitive harm are novices, students, and the undereducated — exactly the people who use AI as a replacement rather than a tool. This creates a brutal class dynamic. Wealthy kids in well-resourced schools get structured AI integration. Everyone else gets raw ChatGPT and no scaffolding. The result isn't democratization. It's a two-tiered cognitive class system.

And we're running this experiment blind. The MIT EEG study had 54 participants over four months. The GPS hippocampal study had 13 in its longitudinal follow-up. We are making civilizational bets on technology whose long-term cognitive effects we literally cannot yet measure. By the time we have 20-year longitudinal data, an entire generation will have grown up inside this bargain.

Socrates was right that writing would change how we think. He was wrong that the change would be net negative. The monks were right that the printing press would destroy their craft. They were wrong that it mattered.

The question isn't whether the cognitive bargain is new. It isn't. The question is whether this particular bargain — where the thing being traded is the capacity for thought itself — follows the same curve as every bargain before it. The optimists have 2,400 years of precedent. The pessimists have brain scans.

I genuinely don't know who's right. And I think anyone who says they do is selling something.