
Lutris now being built with Claude AI, developer decides to hide it after backlash

Last updated: 13 Mar 2026 at 11:10 am UTC

There's a bit of drama going on with the popular game manager Lutris right now, with users pointing out that the developer is using AI-generated code via Anthropic's Claude.

Seems like something relevant to talk about, with AI tools being a huge cause of problems in the hardware industry. Like how the Steam Deck is constantly sold out and Valve can't even give us a price or release date of their upcoming hardware. All because these AI companies are sucking up all component manufacturing for their data centres. Every extra person using all these AI tools is only adding to the issue.

A user asked on the official Lutris GitHub two weeks ago "is lutris slop now" and noted an increasing amount of "LLM generated commits". To which the Lutris creator replied:

It's only slop if you don't know what you're doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn't able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn't have been implemented in a worse way, but it wasn't AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn't AI that laid off thousands of employees, it's deluded executives who don't understand that this tool is an augmentation, not a replacement for humans.

I'm not a big fan of having to pay a monthly sub to Anthropic, I don't like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I'm not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this "issue" might come up so I've removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what's generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

Emphasis on that last part is ours, because it's a clear issue for a lot of people: the developer has chosen to simply hide what is and isn't AI-generated code. The Lutris creator expanded on that in a follow-up post elsewhere, further defending their use of it to another user unhappy about the situation.
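For context, the "co-authorship" in question is a trailer line in git commit messages, which Claude's coding tools typically append to commits they contribute to. A minimal sketch of how such a trailer lands in history and how one could search for it (the repo, author and commit message below are hypothetical):

```shell
# Create a throwaway repo and a commit carrying a Claude co-author trailer.
# Author details and message are made up for illustration.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
  -m 'Fix runner detection' \
  -m 'Co-Authored-By: Claude <noreply@anthropic.com>'

# Commits with the trailer can be found by searching commit messages:
git log --grep='Co-Authored-By: Claude' --oneline
```

Stripping the trailer from future commits is trivial, which is why, once it's removed, there's no reliable way to tell generated commits apart.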

The real problem, as pointed out in the comments, is that part of the point of open source is trust, and this is no way to build it. Beyond that, copyright becomes an issue. Who actually owns the generated code? And now that it's being hidden, how can anyone tell? Can you even truly claim it's open source when it's using AI-generated code?


Update - 13/03/2026 11:10 UTC - The Lutris creator has restored the Claude attribution, with a comment noting "Since it's such a big fuss, I'm putting the Claude attribution back".

Article taken from GamingOnLinux.com.
Tags: AI, Apps, Misc, Open Source
About the author -
I am the owner of GamingOnLinux. After discovering Linux back in the days of Mandrake in 2003, I constantly checked on the progress of Linux until Ubuntu appeared on the scene and it helped me to really love it. You can reach me easily by emailing GamingOnLinux directly.
56 comments

einherjar 15 hours ago
Everyone who thinks he should not reduce his workload by using AI: step in and do the work yourself, or STFU!
Cloversheen 15 hours ago
This is not the first time he's been in a controversy because of how he handles critique (valid or otherwise), and he reacts pretty much the same way each time.
t3g 14 hours ago
The developer of Lutris has always been a jerk, so I just use Heroic + Steam and forget that Lutris exists
The_Real_Bitterman 14 hours ago
Can you even truly claim it's open source when it's using AI generated code?
Of course it still is.

What you cannot tell is whether it violates the copyrights / copyleft licenses of OTHER projects the LLM was trained on. It's more of a legal minefield than a question of it not being open source. For example, if it [the LLM] spat out GPL-3 code, they would have to attribute it. But since it's LLM code, the developers themselves don't know.

So in any case, IF someone can find "their" code in Lutris, they can probably sue.

I can understand them [the developer], especially for a FOSS project this big. Users often just request random features or bring up issues that one single person is barely able to fix.

If we're being honest, most FOSS projects are maintained by one or maybe two people, while tons of others made maybe one commit and never show up again to further support the project, neither by bug fixing, testing nor financially, yet they expect the core devs to do all of it.

So yes, there is no issue in using LLMs to make those two core developers more productive and able to keep up with the pressure, while entitled users argue about LLM code and try to "cancel" the developers.

As long as Lutris overall does not decrease in quality, which is often the pitfall for people who do NOT know how to code and just vibe-code in the hope it works, there is no issue here.

Also, we don't know how many PRs by other people merged into Lutris are actually LLM code themselves. Yet people hate on Mathieu?

PS.: I would not call this "AI" as none of these tools are truly intelligent.

Last edited by The_Real_Bitterman on 12 Mar 2026 at 9:53 pm UTC
Sakuretsu 14 hours ago
I haven't used Lutris for some time now because I just prefer Steam and Heroic, but I surely have nothing against it.
Let's wait and see if the launcher is going to start falling apart because of bad AI-generated code and if the project is going to survive after what the maintainer did.
Corben 14 hours ago
Imo he is not wrong. As an experienced dev, getting assistance from Claude Pro especially is really helpful. He knows what he wants, he prompts it, he doesn't have to type it. It saves an insane amount of time. Reading code and verifying it is what you requested is way less time-consuming than typing it on your own.

Most people's stance against LLM-generated code comes from people who tried it and created stuff with it that was bad, because they were not experienced devs. LLMs are a new tool; you need to learn how to use them, as in any other field. If you master them, they help. Experienced devs master them far more easily and better than beginners.

I'm specifically not saying AI, as there is no intelligence. They don't "think". They are just good at giving the impression that they are intelligent or think. They are just incredibly good at guessing, and have an enormous data pool to draw their info from and base their guesses on.

And claude is impressively good at coding.

I also think he should not hide it. But I can understand his reasons. Social pressure is real. So what to choose? Getting depressed because people are not well enough informed and are targeting you, or deciding to keep your mental peace? This is a personal thing, not everybody has the power to withstand the pressure of so many people.
StenPett 13 hours ago
I've been a Patreon supporter of Lutris since 2019. That ended the minute I read his reply. I'm instead going to spend that money on some other project. One that doesn't use generative AI...
Mountain Man 13 hours ago
...we all know that nothing is going to improve with the current US administration.
I see, so it's somehow Trump's fault that this developer started using AI and feels the need to obfuscate it after facing criticism.

Last edited by Mountain Man on 12 Mar 2026 at 10:46 pm UTC
Drawing Pixels 13 hours ago
My concern is that the AIs were probably trained on code without respect for its licenses. So tagging the code is kind of important for legal purposes, in case the landscape around it changes.

I personally don't use AI to generate code, but I might sometimes use it as a fuzzy search, or for language or visual processing functions. The reason being that whatever I gain in efficiency, I lose in opportunities to learn and improve as a developer.

Setting that aside, code architecture is extremely important with regard to long-term maintenance, robustness and security. My experience is that AI is just generally terrible at that, as I've had them fail to correctly architect something extremely basic and small after nearly 8 hours of prompting, across many of the top-performing models.

They seem to only excel at smaller functions or things that are easily templateable, which makes sense when you understand that they just duplicate what they have been trained on.

There are plenty of reasons why I favor self-hosting AIs and don't fund these corporations; the ethical issues are endless. But overall I just find the developer's response here reason enough not to trust the long-term health of the project going forward.
HendrinMckay 11 hours ago
I use a local LLM to help plan out stuff I'm going to code, or when I run into an issue I can't solve and need help figuring out a way to implement it that I might not have considered. But I avoid agentic AI like Claude Code, as I find that more often than not it just ends up producing garbage, because you run out of context very quickly even with the big cloud models.

I have no problem if someone wants to use it in their application, but you have to be open about it and be willing to deal with the fact that some folks won't use your program and will look down on your code, for obvious reasons.

Also, I find that very quickly you can get to the point where the AI has generated so much code that even an experienced programmer can lose track of how the codebase interacts with everything, especially in the case of something like Lutris with so many moving parts (Wine, DXVK, etc.).
RevenantDak 7 hours ago
I never liked Lutris; Heroic worked way better for me anyway. I never wanted to be negative about it, but now I can just say it's AI slop.
Gerarderloper 6 hours ago
Meh, results matter. But also, if stolen code ends up appearing, you don't want that happening.

There is a risk when using AI coding tools that the project eventually turns into bloatware. So hopefully that isn't what ends up happening a couple of years down the line.
ZeroPointEnergy 3 hours ago
If you already understand the problems with capitalism, but then use the tools of a capitalist company that will atrophy your programming skills and basically take away your means of production, then I don't know what to tell you anymore...
octarine_dream 2 hours ago
While I realise the morality of models trained on stolen code is, as yet, unsettled, to me that's one of the smallest issues. From a pragmatic point of view, we're all trained on "stolen" code when we study and gain experience as developers. For me, unethical use of AI comes down to the specifics of how it's used. For example, re-licensing copyleft software is genuinely easy now, and a huge threat to open source. I think vibe-coding is a fast way to an unmaintainable project that nosedives after a month or so, but as an aid to an experienced programmer who thoroughly understands everything that is being output, it will be a boon, only enhancing the project. While I understand you could say "How are we supposed to vet responsible use, how about we just say no across the board?", this developer is demonstrably proficient with this codebase already, so what's the big deal?
doragasu 1 hour ago
There are several critiques that can be made from different angles.

* From the "I just want to get things done" perspective, using these tools might or might not make sense. All rigorous studies I have seen state that the productivity increase for experienced devs is negligible, if not negative, and it comes at a high cost (loss of codebase understanding, cognitive decline...). But then there are lots of people saying it helps them a lot, that other people don't know how to use it, that Claude 4.6 is really the thing and earlier models were shit, etc. So I will give it the benefit of the doubt, even if I'm not convinced at all.
* From the code license/copyright perspective, using these tools is a legal minefield. There's already a US court ruling saying generative AI outputs cannot be copyrighted. And on top of that, there are several studies demonstrating how all the LLM "frontier models" can output entire copyrighted books almost verbatim. How can you guarantee Claude is not outputting code from another software project with an incompatible license?
* From the ethical perspective, using these tools is without a doubt very, very wrong. I have said it several times and will repeat it again: **there is no ethical use case for generative AI**. For these tools to be somehow effective at what they do, they need to suck up tons of resources (energy, chips, raw materials) and data, most of it copyrighted/licensed. The generative AI craze is causing massive resource hoarding by a handful of companies (that want you to own nothing and rent everything from them), is feeding a gargantuan bubble that is leading to an economic collapse, is accelerating climate collapse (with so many countries backing off their environmental commitments because of the AI craze), and is actively being used to kill people (yes, Anthropic tools are actively used for this too; Amodei has always said he is OK with using Claude for war, he only backed off on two very specific points, completely autonomous weapons and mass surveillance, but is completely OK with everything else, including using Claude to select targets to bomb).

You might want to get things done, risk the legal issues, not mind the ethical aspects, and use these tools anyway. But don't try to argue these tools are ethical. They are not. And Lutris is a tool for GNU/Linux users. Maybe most Windows/macOS users don't mind that much, but many of us in the GNU/Linux community are here because we care about ethics. So it should not be that difficult to understand that you will get pushback when writing GNU/Linux applications using unethical tools and procedures.
Liam Dawe 46 minutes ago
Article updated. See the bottom note.