cURL’s Daniel Stenberg: AI slop is DDoSing open source
Summary
cURL creator Daniel Stenberg says AI is a double-edged sword: it floods projects with bogus bug reports but also finds real, deep vulnerabilities that other tools miss.
cURL creator says AI is both a security threat and a vital tool
Daniel Stenberg, the creator of the widely used data transfer tool cURL, says AI is a double-edged sword for open source. At FOSDEM 2026, he argued the technology is simultaneously flooding projects with fake bug reports and uncovering deep, critical vulnerabilities that older tools missed.
The flood of AI-generated slop
Stenberg was forced to shut down cURL’s bug bounty program on HackerOne in late 2025. The program, which offered up to $10,000 for critical bugs, had become a perverse incentive for low-effort, AI-generated reports.
“Before, in the old days, someone actually invested a lot of time [in] the security report. There was a built-in friction,” Stenberg said. “Now there’s no effort at all. The floodgates are open.” He described one fabricated report about a non-existent “HTTP/3 stream dependency cycle exploit” that included detailed but entirely fake debugger sessions.
The volume of these poor-quality reports overwhelmed the project’s small security team:
- Before early 2025, roughly one in six security reports was legitimate.
- By late 2025, that rate had plummeted to roughly one in 20 or 30.
Stenberg calls the triage process “terror reporting” that drains time and the “will to live.” He warned that this noise puts the software supply chain at risk, as overwhelmed maintainers might miss real vulnerabilities.
AI as a powerful analysis tool
Despite the spam, Stenberg is a proponent of using AI as an analysis tool. He said cURL now works with several AI-powered analyzers that have found “more than 100 bugs” missed by years of fuzzing, static analysis, and human audits.
“They certainly find a lot of things no other tools previously found, and in ways no other tools previously could find,” he said. These tools can reason across protocols and specifications in “almost magical” ways, such as catching an invalid octet in a Telnet implementation against a spec “no one has read since 2012.”
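To make the Telnet example concrete, here is a minimal sketch in C (cURL’s implementation language) of the kind of spec-conformance check such an analyzer reasons about. The constants and function below are purely illustrative, not cURL’s actual code, and the specific bug Stenberg described has not been published in detail; the sketch only shows what validating a command octet against RFC 854 looks like.

```c
/* Illustrative only: NOT cURL's code. A spec-aware analyzer can
 * cross-check a parser like this against RFC 854 and flag octets
 * the code accepts but the protocol forbids. */
#include <stdio.h>

#define TELNET_SE  240  /* lowest command code defined by RFC 854 */
#define TELNET_IAC 255  /* "Interpret As Command" escape octet */

/* Return 1 if the octet following an IAC is a valid RFC 854 command
 * code (SE..IAC, i.e. 240..255), 0 otherwise. */
static int telnet_valid_command(unsigned char octet)
{
    return octet >= TELNET_SE; /* unsigned char caps the range at 255 */
}

int main(void)
{
    /* IAC WILL ECHO: a well-formed option negotiation sequence */
    unsigned char stream[] = { TELNET_IAC, 251, 1 };
    printf("command octet valid: %d\n", telnet_valid_command(stream[1]));
    return 0;
}
```

The point is not the check itself, which is trivial, but that an AI analyzer can connect code like this to the text of a decades-old specification that human reviewers and conventional static analysis rarely consult.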
Stenberg also uses three different AI review bots on pull requests. They run overnight, catching distinct classes of issues like missing tests or flawed assumptions about library behavior.
Skepticism on AI-generated code
Stenberg draws a firm line at using AI to write production code. He does not use AI coding assistants and said no one on the cURL project relies on them for serious development.
He is similarly unimpressed with AI-suggested code fixes, which he says are “never good” enough to accept blindly. He likens them to an “eager junior” whose ideas must be carefully vetted and tested. On legal concerns, he said AI-generated contributions don’t fundamentally change cURL’s risk model, as the project has always had to trust contributors’ rights to submit code, regardless of its origin.
The broader open source burden
Stenberg connected cURL’s experience to a wider crisis in open source maintenance. He referenced, without naming them, the recent conflict between FFmpeg and Google, in which a large company used AI and vast resources to find many real bugs and then pressured a small volunteer project to fix them under strict deadlines without providing patches or funding.
This dynamic creates an unsustainable burden. “A giant company can now use AI and large security departments to find many real bugs, then pressure tiny volunteer projects to fix them,” he said.
Choosing a path forward
Looking ahead, Stenberg expects AI to continue “augmenting everything we do in different directions.” He urges projects to experiment with defenses, such as creating vetted-reporter “secret clubs” or implementing stricter submission requirements, even if these measures challenge traditional open source openness.
His core message is that the impact of AI is a human choice. The same technology enabling “terror reporting” and bandwidth-hogging scrapers is also making critical software like cURL safer. The outcome depends on how the community decides to wield it.
