There's a bit of drama going on with the popular game manager Lutris right now, with users pointing out that the developer is using AI-generated code via Anthropic's Claude.
Seems like something relevant to talk about, with AI tools being a huge cause of problems in the hardware industry. Like how the Steam Deck is constantly sold out, and Valve can't even give us a price or release date for their upcoming hardware, all because these AI companies are sucking up all the component manufacturing for their data centres. Every extra person using all these AI tools is only adding to the issue.
A user asked on the official Lutris GitHub two weeks ago "is lutris slop now" and noted an increasing amount of "LLM generated commits", to which the Lutris creator replied:
It's only slop if you don't know what you're doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn't able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn't have been implemented in a worse way, but it wasn't AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn't AI that laid off thousands of employees, it's deluded executives who don't understand that this tool is an augmentation, not a replacement for humans.
I'm not a big fan of having to pay a monthly sub to Anthropic, I don't like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I'm not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this "issue" might come up so I've removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what's generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.
Emphasis on that last part is ours. Emphasised because it's a clear issue for a lot of people, so the developer has chosen to simply hide what is and isn't AI-generated code. The Lutris creator expanded on that in a follow-up post elsewhere, further defending their use of it to another user unhappy about the situation.
The real problem, as pointed out in the comments, is that part of the point of open source is trust. This is not a way to build that. Not just that, but copyright becomes an issue too. Who actually owns the generated code? And now that it's being hidden, how can anyone tell? Can you even truly claim it's open source when it's using AI-generated code?
If the code was crap or buggy and you could reasonably sniff it out that would make sense. But if the only way for you to tell if it is AI generated is the person behind it announces it, then the AI code is literally indistinguishable from the human generated code.
If he never labeled the stuff in the first place, no one would have ever noticed or cared.
Quoting: eggrole — "If the AI generated code can't be found without labelling it, what is the problem?"

Quality of code is only one of the concerns. It being generated using the "Torment Nexus" is another issue in itself, even if the code were immaculate.
Lutris Patreon currently has 281 paying members. Well, 280 in a few moments.
Let's say it is okay to use an unethical tool to catch up with work, and let's say it really is no slop, because every single line is verified by that person to work as intended. Even so, the worst of all is the reaction here, and so I no longer accept this project, even if the AI is removed.
Anyway, I don't want to use any kind of unethical AI code; whether it's slop or not does not matter. If the model is trained 100% open source (respecting licenses, all training sources public) and the usage is 100% transparent, while we have enough green energy to waste some on huge LLMs, we can talk about it again. I'm not against LLMs in general, but they have to meet the minimum requirements for ethical usage and in addition have to be handled in a trustworthy way.
Right now we are years away, and so I am looking for another "game-installer". Is there anything besides Heroic to look at? Would love to have something in Qt style (like KDE applications).
As a software developer working with "high" tech to provide solutions for a non-tech company (we are even "forced" to work with multiple AIs to test how much they can improve our efficiency), I must say... Yes, AI helps A LOT when used correctly while developing software, but...
I think, having mixed feelings about using it in my work, that right now is not the moment to use it, or maybe ever... We, as consumers, as people, can only vote with our wallet, our time, and our public opinion. If you, like me, don't agree with the current situation around AI (companies driven by crazy people destroying everything in their way just to milk money, shoving AI into absolutely everything, even your alarm clock, and that kind of stuff you already know), you MUST not use AI at all. This is our way of saying "I don't want this. I don't agree with this. Lose a bit of money to change your mind." I, even being "forced" to use it, try to use it as little as possible. Can I save 2 months of work with 2 days of using Claude? Yes... Should I? Probably not... And every time my company asks me for feedback, I just say the words I'm writing here: it is amazing, it will become much more expensive to use, and right now it is a tool that people are trying to use like crazy, and we must use it as little as possible.
In this case, in which the Lutris creator claims AI helped a lot because he has health issues... Maybe you should stop working and pushing to have features by a date you want, instead of by a date you can. If you are okay with all the problems and worries that surround AI right now... That's fine, everyone has the right to their own opinion and their own morals.
To end:
Humanity doesn't need to be fast and efficient; humanity needs to be patient.
... oh, and some people are just not prepared to use certain technologies (just a reminder, nothing related to this directly).
Sorry for my bad English, gonna read your comments.
Anyway, I was suspecting that this "issue" might come up so I've removed Lutris from my computer a few days ago. So good luck figuring out what's installed and what is not. Whether or not I use Lutris is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current developers.
Quoting: eggrole — "If the AI generated code can't be found without labelling it, what is the problem?"

LLMs, by definition, model language. They're designed to look right. Whether they are right is purely circumstantial, and that requires the code to be deliberately differentiated from hand-written code. It requires greater scrutiny, because it hides its mistakes behind language that we're susceptible to. It's called the Eliza effect: people have a bad habit of attributing human traits to things.
"Can you even truly claim it's open source when it's using AI generated code?"

That's debatable. Open source works because copyright is what enforces the licence, and AI-generated content doesn't have copyright.
So all the AI-generated code has no licence. That is like having a very permissive licence, like MIT without even the attribution requirement.
Quoting: federico_cba

Here's a counterpoint that I found to be a really interesting read:

Quoting: scaine — "I'm doing a thing people hate, so instead of not doing that, I'll continue to do it, but hide it better"

You can use it in a non-slop way, carefully reviewing the generated code and making adjustments and refactors. Something he can do as an experienced developer. It would be different if he was blindly accepting the AI-generated code.
That's a bold position to take in any project, let alone a FOSS project. And just because he can use it in a non-slop manner (maybe? hopefully?), it doesn't absolve him of the ethical concerns many of us have about genAI.
https://blog.glyph.im/2026/03/what-is-code-review-for.html
But I do agree with you somewhat. If you're a truly excellent, experienced developer, then you might be able to use genAI in a non-slop way - keep the generated code to snippets and run extensive unit/UI tests on the outputs, etc. But if you're that experienced, you probably rarely need to lean on genAI in the first place.
And running all those unit/UI tests for the generated code will probably mean that, overall, you haven't saved any time anyway. Which is what they found in this study a year ago.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Maybe (probably even) genAI code is much better than it was a year ago? But still, ethically bereft nonetheless.
I find the sentiment shared by the developer at the end of his response highly concerning, and as such I think I'm going to uninstall Lutris. The concept of hiding things in what's supposedly an open and transparent environment is making me lose trust in the program.
The author's rebuttal is blunt? Possibly a bit, but having to deal with all this "want want want complain want" must be exhausting AF...
"Every extra person using all these AI tools is only adding to the issue."

The more abstraction exists between the problem and those perpetuating it, the less likely you are to get the average person to recognize the problem, let alone understand it.
It is exactly the same kind of issue we've been fighting in the OS space "Every extra person using [Windows] is only adding to the issue." When people use something, anything, it legitimizes it.
One thing I've learned is that the masses will always make the wrong choice. Large populations of humans do not make decisions that collectively help the best interests of themselves or others.
I definitely get too much secondhand propaganda due to a family member in enterprise software, but the quote (emphasized bit aside) seems broadly correct to me.
I still hate generative AI; it still seems to be the culprit behind half the problems I see in the world. I do at least understand the perspective of people who are trying to make the best of it.
And, sort of relevant to all of us is that Linus Torvalds seems to at least selectively approve of AI tools, even though he also condemns slop, etc. Not saying how anyone should feel about that, but I feel it bears acknowledging in this context.
If it wasn't hidden, then anyone who didn't want to use the AI contributions could have forked the project.
The forked project could then just weed out the AI contributions in a simple manner without having to go through the code line-by-line to hopefully spot whatever vulnerabilities or bugs the AI might have added.
Start fresh with just the human-made code and fix whatever was wrong with that.
Wherever the rights of AI, or of projects using AI, end up, it would have secured the potential for a project like Lutris to exist without it.
Lutris' maintainers could have just said "Yep, we're using AI. Fork it if you're not happy with it."
Done.
No knee-jerk reaction. They could have kept up complaining about not having enough contributors as they have for years now.
It's the assholery of it which is egregious and a point of concern for many beyond the use of AI.
Having said that, I vehemently disagree with the use of AI for any FOSS projects. Please keep it far away from me.
Quoting: GustyGhost — "One thing I've learned is that the masses will always make the wrong choice. Large populations of humans do not make decisions that collectively help the best interests of themselves or others."

I think that's a very anti-democratic stance to take.
I argue that people do tend to make the right choice, so long as they are informed and not under duress. The problem is that people are subject to a great deal of propaganda and hardship to keep them from making better choices.
I have friends who want to switch to things like Linux who are just so burnt out that they don't have the mental energy to make the change. Even if it turned out to be easy for them, making that choice and starting the process of doing that is not nothing.
The world would be a better place if everyone had access to good journalism, healthcare, and a full night of sleep.
I like Heroic, but I like to use Lutris for old abandonware games because it has community scripts that automatically handle patches, fixes, etc.