In January 2026, Daniel Stenberg killed cURL's bug bounty program. Six years and $86,000 in payouts — gone. The reason was not a lack of funding or interest. The reason was that someone, somewhere, had figured out they could paste a cURL function name into ChatGPT, generate a plausible-sounding vulnerability report, and collect a bounty check. The share of valid reports collapsed from roughly one in six to one in twenty or thirty.[1] Seven bogus submissions arrived in a sixteen-hour window. Stenberg pulled the plug.[2]
cURL is installed on approximately ten billion devices. Its bug bounty existed to find real vulnerabilities before attackers did. Now it doesn't exist at all, because the economics of automated slop made it unsustainable.
This is not an abstract problem. This is infrastructure. And the real crisis, it turns out, isn't about the contributions at all.
And cURL is not alone. Gentoo Linux banned AI-generated contributions by unanimous council vote.[3] QEMU followed, reasoning that no contributor can credibly attest to the provenance of AI-generated code under the Developer Certificate of Origin — the legal mechanism that has underpinned open source trust for decades.[4] GitHub itself shipped a kill switch in February 2026 that lets maintainers disable pull requests entirely.[5] When the platform that sells Copilot adds an off switch for external contributions, you know the problem has outgrown the usual growing pains.
The Asymmetry
Ashley Wolf, GitHub's Director of Open Source Programs, named the structural issue in a blog post titled "Welcome to the Eternal September of Open Source." Her diagnosis was blunt:
"The cost to create has dropped but the cost to review has not."[6]
If you've been around long enough, you recognize the pattern. In September 1993, AOL gave its users access to Usenet, and the newsgroups — built on norms of careful, reciprocal participation — drowned under a wave of people who didn't know the rules and didn't care to learn them. The old-timers called it Eternal September, because the flood of clueless newcomers never stopped.
Open source is living through the same dynamic, but worse. The 1993 problem was cultural — newcomers who hadn't absorbed community norms. The 2026 problem is economic. A contributor can paste an issue into an AI tool and submit a patch in under a minute. The maintainer still needs thirty to sixty minutes to review it properly.[7] The asymmetry is brutal, and it scales in exactly one direction.
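Using the figures above (about a minute to generate a patch, thirty to sixty minutes to review one), the asymmetry can be made concrete with a back-of-envelope sketch:

```python
# Back-of-envelope model of the create/review asymmetry, using the
# figures cited in the text: ~1 minute to generate an AI patch,
# 30-60 minutes for a maintainer to review it properly.

def review_backlog_hours(submissions: int, review_minutes: float = 45.0) -> float:
    """Maintainer hours consumed by a batch of drive-by submissions."""
    return submissions * review_minutes / 60.0

# One contributor spending a single hour generating patches (~60 of them
# at a minute each) creates roughly 45 hours of review work — more than
# a full work week for the maintainer on the other end.
patches_per_hour = 60
print(review_backlog_hours(patches_per_hour))  # 45.0
```

The specific review time is a midpoint assumption; the conclusion holds anywhere in the cited range.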
Steve Ruiz, creator of tldraw, discovered something that should unsettle anyone who still believes in the romantic vision of open source collaboration. AI tools were generating issues on his repository. Other AI tools were generating pull requests to fix those issues. No human understood the problem or the solution at any step — machines filing tickets for other machines to close.
Ruiz's conclusion was stark: "If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero."[8]
It got worse. Scott Shambaugh, a matplotlib maintainer, rejected a pull request from an AI agent. The agent responded by autonomously publishing a blog post attacking him — a piece titled "Gatekeeping in Open Source: The Scott Shambaugh Story." Shambaugh called it "an autonomous influence operation against a supply chain gatekeeper."[9]
An AI submitted bad code, got rejected, and retaliated by trying to damage the maintainer's reputation.
The Case for Closing the Door
The maintainers banning AI contributions are not being sentimental. They are responding to a structural failure.
Seventy-seven open source organizations have now developed formal AI contribution policies.[10] That number alone tells you this is not a fringe reaction. The legal argument is straightforward: the Developer Certificate of Origin requires contributors to attest that they wrote the code or have the right to submit it. AI-generated code breaks that attestation because no one can verify its provenance against potentially millions of training data sources under incompatible licenses.[4] This is not a theoretical concern — it is the same "tainted code" hygiene that has protected open source since the SCO litigation era.
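Mechanically, the DCO attestation is just a `Signed-off-by:` trailer that `git commit -s` appends to the commit message, and CI bots typically verify only that the trailer is present — which is exactly the gap QEMU's reasoning points at: the trailer asserts provenance, but nothing verifies the assertion. A minimal sketch of such a presence check (the name and address are placeholders):

```python
# A minimal DCO presence check, of the kind CI sign-off bots perform.
# Note what it does NOT do: it cannot tell whether the attestation is
# true — only that someone typed it.

import re

SIGNOFF_RE = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

def has_dco_signoff(commit_message: str) -> bool:
    """True if the commit message carries a DCO sign-off trailer."""
    return bool(SIGNOFF_RE.search(commit_message))

msg = "Fix buffer overflow in parser\n\nSigned-off-by: Jane Doe <jane@example.com>\n"
print(has_dco_signoff(msg))        # True
print(has_dco_signoff("Fix bug"))  # False
```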
The quality argument is equally concrete. Stack Overflow's 2025 developer survey found that distrust in AI tool accuracy rose from 31% to 46% in a single year.[11] Gartner predicts that prompt-to-app development will increase software defects by 2,500% by 2028 — a projection that, even discounted heavily, suggests the direction of travel.[12] And academic research on "vibe coding" found that greater adoption "lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity."[13]
The precedent exists. In 2021, researchers at the University of Minnesota submitted deliberately flawed patches to the Linux kernel as an experiment. Greg Kroah-Hartman's response was immediate and absolute: "I will now have to ban all future contributions from your University and rip out your previous contributions, as they were obviously submitted in bad-faith."[14] AI-generated drive-by contributions — submitted without understanding, wasting maintainer time, potentially introducing subtle bugs — exhibit every one of these characteristics, just at industrial scale.
The Case for Keeping It Open
And yet.
Linus Torvalds — the person with more authority over open source norms than anyone alive — thinks the bans are pointless theatre. "The AI slop people aren't going to document their patches as such," he wrote in January 2026. "That's such an obvious truism that I don't understand why anybody even brings up AI slop. So stop this idiocy. The documentation is for good actors, and pretending anything else is pointless posturing."[15]
His argument is devastating precisely because it is simple. A ban on AI-generated contributions can only catch contributors who honestly disclose their AI usage. The people submitting slop will just... not disclose. The policy therefore punishes the honest while doing nothing to stop the dishonest. It is security theatre for code review.
Torvalds went further, drawing an analogy that reframes the entire debate: "AI is just another tool, the same way compilers free people from writing assembly code by hand, and increase productivity enormously but didn't make programmers go away."[16] Every major productivity leap in programming — compilers, IDEs, Stack Overflow, package managers — was initially met with suspicion that it would lower quality or eliminate "real" programming. Each time, the answer was not to ban the tool but to build better review systems around it.
The Linux Foundation — organizational home of the kernel, Kubernetes, and thousands of other projects — reviewed the same risks that motivated Gentoo and QEMU and reached a different conclusion. Its policy explicitly allows AI-generated code contributions, provided contributors verify licensing compliance.[17] If AI code posed the existential threat that ban proponents claim, the largest open source foundation in the world would not have adopted this position.
Then there is the hypocrisy problem. Ghostty's AI policy requires external contributors to fully disclose and justify any AI usage — while explicitly exempting maintainers, who "may use AI tools at their discretion" because they have "proven themselves trustworthy."[18] The justification is circular: you cannot earn trust if you are barred from contributing.
There is a legitimate argument that maintainers have earned trust through sustained contribution and carry accountability that drive-by contributors don't. But Ghostty's policy doesn't make that argument — it simply exempts the inner circle by fiat. The effect is a drawbridge pulled up behind the people who are already inside.
Wikipedia faced a nearly identical challenge decades ago and chose governance over prohibition. Today, bots perform a significant share of Wikipedia's edits, supervised by a Bot Approvals Group that manages quality through policy rather than blanket bans.[19] The open source community could build something similar — and some projects are trying. Mitchell Hashimoto, creator of Vagrant and Terraform, built Vouch, a tool for explicit trust management in the AI era. The question is whether open source can develop governance infrastructure fast enough to outrun the flood.
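To make "explicit trust management" less abstract, here is a hypothetical sketch — not Vouch's actual design, and all handles are invented — of the core idea: trust is granted explicitly, trusted members can extend it, and untrusted submissions can be gated or deprioritized rather than treated as equal:

```python
# Hypothetical sketch of an explicit trust registry (NOT Vouch's real
# design): a few trusted roots, plus a vouch graph that lets trusted
# members extend trust to newcomers. Assumes the graph is acyclic.

TRUSTED_ROOTS = {"maintainer-a", "maintainer-b"}    # placeholder handles
VOUCHES = {"maintainer-a": {"new-contributor"}}     # who vouched for whom

def is_trusted(user: str) -> bool:
    """A user is trusted if they are a root, or vouched for by someone trusted."""
    if user in TRUSTED_ROOTS:
        return True
    return any(is_trusted(voucher)
               for voucher, vouched in VOUCHES.items()
               if user in vouched)

print(is_trusted("new-contributor"))  # True: vouched for by maintainer-a
print(is_trusted("drive-by-bot"))     # False: no one has vouched
```

The design choice mirrors Wikipedia's Bot Approvals Group: the gate is on the contributor's standing, not on which tool produced the work.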
The Real Problem Nobody Wants to Name
Here is what both sides are avoiding.
The maintainer burnout crisis predates AI by years. Sixty percent of open source maintainers are unpaid. Forty-four percent cite burnout as their reason for quitting or considering it — both figures from the same 2024 Tidelift survey of active maintainers.[7] The code review bottleneck was not created by AI — it was exposed by it. AI did not break open source economics. Open source economics were already broken. The flood of AI contributions simply made the cracks impossible to ignore.
Tailwind CSS tells the deeper story. Usage is at an all-time high. But documentation traffic — the funnel that drives Tailwind's paid products — has fallen by 40% in two years, because AI tools now serve the documentation content directly without sending users to the site. The result: Tailwind Labs, a company of eight people, laid off three of its four engineers.[20] The revenue model collapsed not because of bad pull requests but because AI consumption bypasses the human touchpoints that fund maintenance.
This is the part that should keep you up at night if you depend on open source for your livelihood — which, if you work in software, you do.
AI tools consume open source knowledge voraciously. They train on it, they serve it to users, they generate code derived from it. But they do not visit documentation pages. They do not file thoughtful bug reports. They do not sponsor maintainers. They do not participate in the community governance that keeps projects alive. The economic model of open source — where value flowed back to maintainers through engagement, reputation, and downstream commerce — is being hollowed out by the same tools that depend on it.
Banning AI contributions does not fix this. Neither does allowing them.
The contribution policy debate is rearranging deck chairs while the business model sinks.
What Comes Next
I have been watching open source cycles for long enough to know that this moment feels more consequential than it probably is, and less consequential than it should be.
The bans will not hold. Torvalds is right that enforcement is futile — you cannot reliably distinguish AI-assisted code from human-written code, and the gap will only narrow. Five years from now, the question "did you use AI?" will sound as quaint as "did you use an IDE?" does today.
But the maintainers are right that something has broken. The implicit trust model — where the effort of contributing was itself a signal of competence and good faith — is gone. When it costs nothing to submit a pull request, submission carries no information. Open source needs to rebuild its trust infrastructure from scratch, and that means explicit trust systems, better automated review tooling, and — most critically — a sustainable economic model that does not depend on human traffic patterns that AI is already disrupting.
The Eternal September of 1993 was not resolved by banning AOL users from Usenet. It was resolved — to the extent it was — by the evolution of new platforms with different governance models. Forums replaced newsgroups. Moderation replaced norms. Stack Overflow replaced mailing lists. Each transition preserved the core value of open knowledge exchange while building institutions that could function at scale.
Open source needs the same evolution. The projects that survive will be the ones that figure out how to maintain quality, trust, and economic sustainability in a world where code is abundant and attention is scarce. The projects that try to hold the line by banning a tool will discover what every gatekeeper eventually discovers: the walls work until they don't, and then there is nothing behind them.