The case for gatekeeping, or: why medieval guilds had it figured out
Summary
AI-generated PRs are flooding open source, overwhelming maintainers and degrading code quality. A "guild system" (web of trust) is proposed to verify contributors and filter out bot-generated spam.
AI tools flood GitHub with slop
Open source maintainers report a massive surge in AI-generated pull requests (PRs) that are clogging repositories with low-quality "slop." These mass-produced submissions often follow templates and reference real open issues, making them look legitimate at a glance. Developers say these contributions force them to act as manual filters for bot-driven garbage rather than focusing on actual software development.
A high PR count once signaled a healthy, popular project where strangers fixed edge cases. Today, a high volume of activity often indicates a repository has become a target for resume-padding bots and AI-assisted "contribution farmers." These users aim to turn their GitHub activity graphs green to impress recruiters, or they run automated scripts to hunt for vulnerabilities.
This "open slop" problem has turned many repositories into slush piles. Maintainers spend hours reviewing code that changes variable names to slightly worse alternatives or introduces subtle bugs. The sheer volume of these requests threatens to break the decentralized model that powers most of the modern internet.
A lesson from medieval weavers
The solution to this modern crisis mirrors a system used by 13th-century Florentine weavers. Medieval guilds like the Arte della Lana faced a similar challenge in maintaining quality standards across decentralized production. They relied on a "web of trust" where masters vouched for the competence of their peers and subordinates.
A master weaver in Florence could not personally inspect every bolt of cloth produced in the city. Instead, the guild verified that every producer had spent years as an apprentice and passed through the journeyman stage. This system created a network of accountability where every member had "skin in the game."
The guild functioned as a reputation-based filter. If a master vouched for a fraud, that master’s own reputation and standing suffered within the organization. This historical model provides a blueprint for managing a flood of low-quality contributions in a decentralized environment.
The cost of open access
The open source ecosystem originally relied on an organic version of this trust model. In the early days of Linux development, contributors would lurk on mailing lists and file bug reports before submitting patches. Linus Torvalds did not need a formal credentialing system because the community was small enough for reputations to emerge through repeated interactions.
Modern open source culture has developed an allergy to "gatekeeping" that makes it difficult to implement quality controls. This shift occurred because developers feared that any bar for entry would exclude talented individuals based on arbitrary social dynamics. However, this lack of friction now allows LLM-generated spam to overwhelm human maintainers.
The original vision for free software, championed by Richard Stallman, focused on user freedom. Eric Raymond later argued in The Cathedral and the Bazaar that "given enough eyeballs, all bugs are shallow." Neither predicted a future where those "eyeballs" would be automated scripts producing thousands of meaningless changes per hour.
Building a modern software guild
The industry needs a decentralized, reputation-based mechanism to distinguish human contributors from bot-driven farmers. This system would function as a modern guild, where existing trusted contributors vouch for new ones. A contributor’s ability to vouch for others would remain proportional to their own established standing in the community.
The Debian project has successfully used a similar "Web of Trust" model for decades. Debian developers verify each other’s identities and technical competence through a network of cryptographic keys and personal vouching. This creates a legible reputation system that avoids the bureaucracy of a centralized certification body.
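As a rough illustration, the core mechanics fit in a few dozen lines. The sketch below is hypothetical Python, not any platform's or Debian's actual API: it assumes a vouch carries weight proportional to the voucher's standing, and that vouching for someone who is later flagged as a spammer costs the voucher reputation, the "skin in the game" the Florentine guilds relied on.

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    """A contributor in the hypothetical guild graph."""
    name: str
    standing: float = 0.0                        # earned reputation
    vouched_by: set = field(default_factory=set)

class Guild:
    """Toy web of trust: a vouch is only as strong as the voucher."""

    TRUST_THRESHOLD = 1.0    # assumed: combined weight needed to be trusted
    BAD_VOUCH_PENALTY = 0.5  # assumed: standing lost when a vouchee is flagged

    def __init__(self) -> None:
        self.members: dict[str, Member] = {}

    def add_member(self, name: str, standing: float = 0.0) -> Member:
        self.members[name] = Member(name, standing)
        return self.members[name]

    def vouch(self, voucher: str, vouchee: str) -> None:
        self.members[vouchee].vouched_by.add(voucher)

    def trust_weight(self, name: str) -> float:
        # Sum the current standing of everyone who vouched for this member,
        # so a vouch from an established master counts more than one from
        # a newcomer.
        return sum(self.members[v].standing
                   for v in self.members[name].vouched_by)

    def is_trusted(self, name: str) -> bool:
        member = self.members[name]
        return (member.standing >= self.TRUST_THRESHOLD
                or self.trust_weight(name) >= self.TRUST_THRESHOLD)

    def flag_as_spammer(self, name: str) -> None:
        # Skin in the game: everyone who vouched for a fraud loses standing.
        for voucher in self.members[name].vouched_by:
            backer = self.members[voucher]
            backer.standing = max(0.0, backer.standing - self.BAD_VOUCH_PENALTY)
        del self.members[name]

guild = Guild()
guild.add_member("master", standing=2.0)
guild.add_member("newcomer")
guild.vouch("master", "newcomer")
assert guild.is_trusted("newcomer")   # the master's standing carries them
guild.flag_as_spammer("newcomer")     # ...and costs the master 0.5 standing
```

The threshold and penalty values are arbitrary; the point is structural. Trust is transitive but bounded by the voucher's own standing, and a bad vouch is never free.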
A modern implementation could offer several benefits for the open source ecosystem:
- Maintainers could filter incoming PRs by trust level to prioritize human-led reviews.
- Recruiters would have a more reliable signal of talent than a simple "green square" activity graph.
- New contributors would have a clear path for building a verified reputation through mentorship.
- Repositories would see a reduction in the volume of automated "slop" that requires manual rejection.
Filtering for human contributors
Platforms like GitHub or GitLab could implement "contributor rings" with relatively modest infrastructure. Inner-ring members would consist of people vouched for by other established developers. This would not prevent anyone from forking a project or submitting code, but it would give maintainers a way to sort the signal from the noise.
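Here is one sketch of what that sorting might look like, reusing the hypothetical Guild graph above. The ring names and cutoffs are invented for illustration, and the PRs are plain dicts rather than calls to any real GitHub or GitLab API.

```python
from enum import Enum

class Ring(Enum):
    INNER = "inner"      # vouched for by established developers
    OUTER = "outer"      # known to the guild but not yet trusted
    UNKNOWN = "unknown"  # no reputation signal at all

def ring_for(guild: Guild, author: str) -> Ring:
    if author in guild.members and guild.is_trusted(author):
        return Ring.INNER
    if author in guild.members:
        return Ring.OUTER
    return Ring.UNKNOWN

def triage(guild: Guild, pull_requests: list[dict]) -> dict[Ring, list[dict]]:
    """Bucket open PRs so maintainers can review trusted authors first.

    Each PR is a plain dict like {"id": 42, "author": "alice"}.
    """
    queues: dict[Ring, list[dict]] = {ring: [] for ring in Ring}
    for pr in pull_requests:
        queues[ring_for(guild, pr["author"])].append(pr)
    return queues

queues = triage(guild, [{"id": 101, "author": "master"},
                        {"id": 102, "author": "driveby-bot"}])
# queues[Ring.INNER]   -> PR 101, reviewed first
# queues[Ring.UNKNOWN] -> PR 102, still reviewable, just deprioritized
```

Nothing here rejects the unknown-ring PR; it simply lands at the back of the queue, preserving the open-submission model while giving maintainers their filter.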
Every mass-generated PR a maintainer has to review is time stolen from actual development. Every fake contribution that gets merged into a project degrades the codebase and increases technical debt. If the contribution graph continues to lose its value as a signal, the incentive for "farming" will eventually disappear, but the damage to the projects may be permanent.
A guild system would not be perfect and would likely create new forms of exclusion. However, the current model of total openness is failing under the pressure of generative AI. By returning to a reputation-based web of trust, the open source community can protect its maintainers from burnout and preserve the quality of the world's most critical software.
Maintainers currently face a choice between strict gatekeeping or total exhaustion. The historical precedent of the Florentine guilds suggests that trust, backed by accountability, is the only way to manage decentralized production at scale. Without a way to verify "not-shit" people, the open source movement risks being buried under its own accessibility.