
How can open source maintainers stop AI-generated pull request spam on GitHub without shutting down contributions?


You want open source to stay open, but many maintainers are running out of both time and patience with AI-assisted pull request spam. The pattern looks the same across projects: a PR reads well, formats cleanly, and appears complete, yet fails basic review because the code is wrong, the tests do not pass, or the change does not match the issue it claims to solve. The result is not "free help." It is review debt.

Maintainers describe the cost in minutes that become hours: reproducing the bug, checking edge cases, asking follow-up questions, and explaining why the change cannot merge. When that cycle repeats, triage starts to replace real maintenance work like fixing bugs, improving security, and shipping releases. That tradeoff matters to everyone who depends on open source, because slower reviews mean slower fixes and more fragile dependencies.

A clear example comes from tldraw. The project announced it would automatically close pull requests from external contributors, even though the team said they did not like taking that step. A follow-up from Steve Ruiz connected the decision to an influx of low-quality AI PRs and argued that public repositories need better controls so maintainers can protect focus and time. The point is not that outsiders are unwelcome. The point is that “open” does not mean “unfiltered,” especially when submissions can be produced at scale.
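tldraw's actual automation has not been published, but the policy it describes can be sketched as a small decision function over GitHub's PR payloads, which report an `author_association` field (values like `OWNER`, `MEMBER`, `COLLABORATOR`, `FIRST_TIME_CONTRIBUTOR`, `NONE`). The helper names and the action labels below are hypothetical; only the field and its values come from GitHub's API.

```python
# A minimal sketch of an "auto-close external PRs" policy, assuming
# GitHub webhook/REST payloads where "author_association" describes the
# author's relationship to the repository. Function names and action
# labels are hypothetical, not tldraw's actual implementation.

# Associations that indicate a standing relationship with the project.
TRUSTED = {"OWNER", "MEMBER", "COLLABORATOR"}

def should_auto_close(pr: dict) -> bool:
    """Return True if the PR author is outside the trusted circle."""
    return pr.get("author_association", "NONE") not in TRUSTED

def triage(pr: dict) -> str:
    """Map a PR payload to a coarse triage action."""
    return "close-with-note" if should_auto_close(pr) else "queue-for-review"
```

The note left on close matters as much as the close itself: it is what distinguishes "we cannot review this right now" from "you are unwelcome."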

The same complaint shows up in GitHub discussions: maintainers ask for a way to block, label, or opt out of Copilot-generated issues and PRs. They describe machine-generated submissions as a time sink that nudges them toward default-closed workflows: closing issues quickly, limiting PR intake, or restricting who can open what. That shift changes the social contract of open source: fewer casual drive-by contributions, more gates, and more friction for legitimate newcomers.
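Since GitHub offers no built-in opt-out, some of what maintainers are asking for can be approximated in repo-side automation: flag bot-authored submissions for a triage label instead of letting them blend into the human queue. The sketch below assumes GitHub payloads where `user` carries `login` and `type` (`"Bot"` for bot accounts); the deny-list of logins and the label names are assumptions, not an official registry.

```python
# A sketch of a repo-side filter for machine-generated submissions,
# assuming GitHub issue/PR payloads with a "user" object containing
# "login" and "type". BOT_LOGINS is a hypothetical deny-list.

BOT_LOGINS = {"copilot", "github-actions[bot]"}

def wants_bot_label(submission: dict) -> bool:
    """Flag issues/PRs opened by a bot account for triage labeling."""
    user = submission.get("user", {})
    login = user.get("login", "").lower()
    return user.get("type") == "Bot" or login in BOT_LOGINS

def labels_for(submission: dict) -> list:
    """Labels to apply so humans can filter their review queue."""
    if wants_bot_label(submission):
        return ["machine-generated", "needs-human-review"]
    return []
```

Labeling rather than closing keeps the door ajar: a maintainer can still review flagged submissions in a batch, on their own schedule.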

Outside GitHub, the debate turns sharper. On Hacker News, developers argue about whether the classic PR workflow still works when low-effort contributions can be mass-produced and dropped into maintainers’ queues. On Reddit, the situation gets framed as open source being “DDoSed” by low-quality submissions, with maintainers comparing notes on how review work is crowding out building work.

Maintainer write-ups add a practical detail: the main damage often comes from review and communication, not from merges. A spam PR does not need to land to cause harm. It only needs to look plausible enough to demand careful attention. That is why this is not a cosmetic annoyance; it is an operational load problem.

Not every project is choosing hard bans. Some are moving toward stricter contribution norms that keep the door open while raising the cost of low-quality input. Ghostty's contributing guide, for example, allows AI-assisted work but requires contributors to disclose that AI was used and how much of the change it produced. That approach treats AI as a tool, not a credential. It signals that maintainers value accountability: if a contributor uses AI, they still own the reasoning, the tests, and the outcome.

If you mostly use GitHub to download code, this can sound like a niche workflow fight. It is not. Maintainer bandwidth influences how quickly vulnerabilities get patched, how reliably dependencies behave, and whether experienced maintainers stay engaged. When maintainers start restricting PRs, it usually reflects capacity limits, not hostility. The review queue has started to feel like a second job, and the fastest way to survive a second job is to stop accepting more work.