
The team behind the popular PlayStation 3 emulator RPCS3 have seen a rise in what they say are "AI slop code pull requests".

We've seen other open source projects face similar issues: the rise of AI bots and coding agents has caused waves of people submitting all sorts of random junk everywhere they possibly can. The Godot Engine team previously complained about the same issue, as have many others.

Writing on X, the RPCS3 team said:

Please stop submitting AI slop code pull requests to RPCS3. We will start banning those who do without disclosing. There are plenty of resources online to learn how to debug and code instead of generating slop that you don't understand and that doesn't work.

In a follow-up post, they added:

Our guidelines for submitting AI-generated code are now up in our repository!

As for all the AI bros seething on our socials, we're simply blocking you.

Learn how to debug, code, and leave behind something useful to humanity when you're gone, instead of peddling slop.

They're not against AI-assisted code as such, but against people submitting pull requests that are likely 100% AI generated without the submitter understanding what the code does - or even worse, code that is simply useless, untested junk. Their new guidelines, as stated on GitHub:

Use of AI tools for research and reverse engineering purposes is permitted. However, contributors are expected to fully own and understand all code they submit. Any communication with the team — including code, code comments, and GitHub comments — must come from the human contributor, not an AI agent acting autonomously.

We have unfortunately seen a rise in untested and unverified AI-generated slop being submitted to this project. This wastes maintainer time and, in worse cases, such changes get merged and break functionality for all users. Repeated violations will result in a ban from the repository. Please be respectful of everyone's time.

Pull requests opened by AI agents or automated tools must include a disclosure in the PR description stating the scope of AI involvement — which parts were AI-generated and what human testing or review was performed prior to submission. PRs that omit this disclosure may be closed without review.

If you are unsure about your work, open a discussion issue to talk it through with the team, or reach out to a maintainer on Discord.

Article taken from GamingOnLinux.com.
About the author -
I am the owner of GamingOnLinux. After discovering Linux back in the days of Mandrake in 2003, I constantly checked on the progress of Linux until Ubuntu appeared on the scene and it helped me to really love it. You can reach me easily by emailing GamingOnLinux directly. You can follow me personally on Mastodon.
6 comments

pb 11 hours ago
I've said it before and I'll say it again: we need a three-strike system for accounts contributing to open source projects. First slop submission to any project = warning, second = red flag, third = ban from opening any pull requests. Project maintainers would be able to opt in or out of allowing these accounts to contribute to their projects. Of course, I realise it's trivial to create a new account, but the main motivation behind such contributions is building "reputation" and filling in those green squares, so such a system should work fine as a deterrent.
jeisom 11 hours ago
I agree with @pb, with the added point that such submissions could be used to increase the success rate of social engineering attempts. The code "looks right" and the AI verifier says it is "safe", so "I'll merge it". That's on top of people "writing code" they don't understand in the first place.
PlayingOnLinuxphone
That idea has existed for decades, and every time it was tried it ended up banning potentially serious developers. I mean, just look at how Reddit moderators do their job. We've all heard the horror stories of people getting banned for harmless content that contained even a minimum of criticism of a topic. If I can ban you network-wide, you are banned everywhere, not just from my project; even if I didn't ban you for AI usage, I just need to say you used AI.

Not to mention situations like GitHub writing "co-authored by Copilot" on non-LLM PRs.

The thinking behind this idea is good and I would support it, but the real-world situation shows it harms more than it actually helps. The FOSS community needs to think about a better solution.

Last edited by PlayingOnLinuxphone on 12 May 2026 at 2:01 pm UTC
awfulsauce 6 hours ago
It appears one of my worst fears when it comes to FOSS and AI is coming to fruition.
ToddL 2 hours ago
Quoting: awfulsauce
It appears one of my worst fears when it comes to FOSS and AI is coming to fruition.
AI makes a lot of wannabe developers lazier, because they don't try to understand the results they're getting from it and think that's the be-all and end-all of solving the problem.
scaine 3 minutes ago
Quoting: PlayingOnLinuxphone
That idea has existed for decades, and every time it was tried it ended up banning potentially serious developers. I mean, just look at how Reddit moderators do their job. We've all heard the horror stories of people getting banned for harmless content that contained even a minimum of criticism of a topic. If I can ban you network-wide, you are banned everywhere, not just from my project; even if I didn't ban you for AI usage, I just need to say you used AI.

Not to mention situations like GitHub writing "co-authored by Copilot" on non-LLM PRs.

The thinking behind this idea is good and I would support it, but the real-world situation shows it harms more than it actually helps. The FOSS community needs to think about a better solution.
That was then, this is now. I think the whole situation needs a rethink. It's not a coincidence that multiple projects have had to create AI policies due to slop. Tools like Openclaw make this kind of thing very hard to counteract, because the AI won't give up - ban it, and it'll just create a new account and re-submit, over and over.

I suspect the only way to truly police this, ultimately, is a system like Stack Exchange's, where pull/merge requests simply aren't allowed at first - you climb a step-ladder of ever-so-slightly-increasing rights as your reputation rises until, eventually, you are trusted to contribute within the boundaries of that project's policies.

Sadly the infrastructure for this doesn't exist. Yet.