I'm having to be steadily more careful about what I choose to cover here on GamingOnLinux, as more games release using generative AI.
To start things off, to be clear: my current stance is generally to ignore games that use generative AI. My whole point is to support creative people, not machines. That's quickly getting more difficult, because some developers won't disclose it at all (even though they're supposed to), it's steadily being used by more studios for all sorts of things big and small, and the AI notice on Steam is buried at the bottom of pages, so sometimes I forget to check before getting into a game and end up wasting my time when I spot the notice later.
Which brings me to the latest, where the developers are completely brazen about how they're using generative AI. The game is called GUG, and the thing is, the idea sounds really cool, but their full statement on their generative AI use really rubbed me the wrong way.
What is it?
GUG is a sandbox simulation roguelike where you generate monsters—”gugs”—by typing any word or phrase. Your choice of words determines how they look, behave and battle. Type “banana flamethrower” and you might get a spiky yellow pyromaniac. Type “emotional support gremlin” and get… well, who really knows?
Sounds cool, right? Then you see the Steam page AI notice:
The developers describe how their game uses AI Generated Content like this:
This game relies on AI to underpin its procedural image and functionality generation systems (the systems that allow for personalised creatures and their effects).
I looked a bit deeper and found their full FAQ, which they've linked in a Steam community post. This is where it really irked me. Have a little read of the full quote:
Does GUG use ethical AI? What is the company’s stance on AI ethics?
As a company, we are frank and open about our usage of AI. To put it simply: we use language models and image models that are trained on content from the internet that’s been extracted without consideration for its copyright or licensing. Models of this size are huge for a reason. They must be able to store enormous amounts of information on the internet. When you type in “clam” or “bucket of milk” or “martian lawyers club” or “henry kissinger” you have a gug that (likely) incorporates content from that prompt — we can rely on the model’s vast understanding of content to help it synthesize knowledge into a usable format to inject into the game. The *same* is true of our image generation model. Our image model is fine-tuned on a collection of our own images. This allows the image model to faithfully utilize the game’s intended art direction, while incorporating the domain knowledge of the user’s request.

It is important to understand that the game is running on contentious technology. It is also important to understand that, at the current state of knowledge in machine learning, it is impossible to achieve the flexibility of the gug generation system in any other way. Anyone who claims that you can “train” your own model from scratch solely on your own data is either misinformed, is conflating the concepts of “training” and “finetuning”, or has enormous amounts of cash and time.
Of course we want to emphasize: as active members of the machine learning community and longtime video game developers, we are internally working on alternatives and planning to move ourselves further and further from unethical and unfair uses of AI. For example, there are open-source projects trying to replicate the capabilities of current models, while using fair-licensed datasets. We believe that what we have built is interesting and fun enough to our players and represents a valid use of this technology in its nascent state.
Emphasis in the above quote is mine.
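Their phrase "synthesize knowledge into a usable format to inject into the game" is doing a lot of work there, so for anyone curious, here's a rough sketch in Python of what that kind of pipeline generally looks like. To be clear, this is my own illustration, not GUG's actual code: the schema, prompt wording and function names are all assumptions, and a canned response stands in for the real model call.

```python
import json

# Hypothetical sketch of the pipeline the GUG developers describe:
# 1) a language model turns the player's phrase into structured creature data,
# 2) an image model (fine-tuned on the studio's own art) renders it.
# The schema, prompts and function names here are guesses, not GUG's code.

CREATURE_SCHEMA = {
    "name": "string",
    "colour": "string",
    "behaviour": "string",
    "attack": "string",
}

def build_llm_prompt(player_phrase: str) -> str:
    """Ask the language model for game-ready JSON instead of free text."""
    return (
        "You are generating a monster for a game. Respond with JSON only, "
        f"using these fields: {json.dumps(CREATURE_SCHEMA)}.\n"
        f"Player phrase: {player_phrase!r}"
    )

def parse_creature(llm_response: str) -> dict:
    """Validate the model's output before injecting it into the game."""
    data = json.loads(llm_response)
    missing = CREATURE_SCHEMA.keys() - data.keys()
    if missing:
        raise ValueError(f"LLM response missing fields: {sorted(missing)}")
    return data

# A canned response standing in for a real model call:
fake_response = json.dumps({
    "name": "Banana Flamethrower",
    "colour": "spiky yellow",
    "behaviour": "pyromaniac",
    "attack": "cone of flaming peel",
})
creature = parse_creature(fake_response)
print(creature["name"])  # the image model would then render this creature
```

The image model step would then take that structured description and render it in the studio's art style, which is the part they say is fine-tuned on their own images, on top of a base model trained on scraped content.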
With that in mind then, they're fully aware that the generative AI models they're using are trained on others' work, scraped from all corners of the internet. But it's made their game GUG "interesting and fun enough" right now, so who cares, right? Why put in the hard work to make your own stuff when the totally intelligent machines can do it for you?
When we have hundreds of games releasing on Steam constantly, why am I going to pick one where the developers aren't even doing a lot of the work? From artwork to story text: why would I play or read something that's just made by a machine? What's the point? The answer is: I'm not. I'm going to continue clicking ignore on them. And I hope to be able to stick by that stance for a long time.
Looking over on SteamDB, there are now over 10,000 games with an AI disclosure notice. Of that number, 3,957 have a 2025 release date. It's only going to keep increasing.
I suppose it's yet another good thing about GamingOnLinux remaining 100% independent: I don't have to cover anything. And at times I can just write an article like this, pointing out how the industry is changing and throwing some thoughts up on it. I don't like where this is going, but automation is inevitable in everything, isn't it? Higher-ups always want to maximise everything, and some people are just lazy. What bugs me the most, though, is that the more you rely on AI generation, the less you learn and improve.
"Why put in the hard work to make your own stuff when the totally intelligent machines can do it for you?" - some game devs, apparently
"It is important to understand..."
We do.

Can't make your game idea without a well-trained AI and can't afford to source an ethical one? Maybe don't make that game then; make another one that you can afford to make.
Having the right to compete in a market does not mean having a right to financial success in that market.

It's a ray of clean, uncorrupted sunshine on the Internet.
we use language models and image models that are trained on content from the internet that’s been extracted without consideration for its copyright or licensing
That's the case for anyone doing anything with LLMs and the current AI cheerleading. I dearly wish to see some of the big pushers go bankrupt from copyright damages.
"That's the case for anyone doing anything with LLMs and the current AI cheerleading."
There are a few folks who are not doing things that way ...
https://www.engadget.com/ai/it-turns-out-you-can-train-ai-models-without-copyrighted-material-174016619.html
https://www.scaine.net/site/2025/06/the-ethics-of-ai-june-2025
Interesting take from these devs, basically admitting that their game is an unabashed cash-in on unethical technology. Uh... thanks?
planning to move ourselves further and further from unethical and unfair uses of AI.
Sure, sure. They know it's unethical and supposedly they don't want to do it that way, but they did, lol.
And the game doesn't even look good, all of the monsters are just different colored blobs with different numbers of holes.
They needed all of the stolen knowledge of the internet to create this garbage? I guess this is an extreme example of AI use making people dumber.
We do not care if they use AI, and won't even check. We don't see any big ethical concerns.
"To be honest, like most people"
Most children, perhaps. None of the adults I know tolerate any AI bullshit in their lives whatsoever.
Beyond the vast ethical, legal and environmental problems that have already been pointed out, I'd also like to highlight a large issue with this technology that is particularly relevant for this and other communities adjacent to Linux and Free Software - its intrinsic ties to big tech as a whole.
I think the largest issue with generative AI today is the way in which it is pushed as a to-be-essential part of much of day to day life, without allowing pause to think about who actually runs these services. We more or less missed the boat in this regard through the advent of social media and smartphones, and assuming this bubble does not burst for some time it feels like the same mistake is about to be repeated once more.
I wrote a short piece on this back in 2023, which honestly I think has only gotten more relevant over time:
https://ivarhill.com/its-not-about-ai/
All this being said, I personally believe (or at least hope!) that the bubble will indeed burst - or maybe more accurately that the use for these technologies will plateau to a less speculative level which at least leaves some breathing room. Time will tell, but for now I'm certainly firmly in the "not interested in using generative AI unless it's both ethical and libre" camp!
"All this being said, I personally believe (or at least hope!) that the bubble will indeed burst - or maybe more accurately that the use for these technologies will plateau to a less speculative level which at least leaves some breathing room"
It will for sure. The AI bubble (like a lot of bubbles) seems to mainly be targeted towards attracting investor money to pay off the previous round of investors.
As far as I know, there is no way to make these big generic LLMs and actually be profitable; they are just way too expensive to produce and run (and then there are the potential lawsuits).
AI will stay, as it should, but it will be focused towards actual useful stuff in specific areas. Like finding interesting patterns in the night sky, folding proteins, automated translation etc. And the big boys, who are not profitable, will run to the next bubble to continue their pyramid scheme.
First thing - at least they were honest. You can't really say they weren't. If people dislike AI use, they can make a conscious decision not to buy that game. As is proper. I have boycotted games for personal reasons, too.
In the end, this debate is way more heated than it should be. Like using the word "stolen" in the context of AI training, when nothing ever gets stolen anywhere in the process. No trace of the source material ever remains in the trained model, and people know that. As of today, the legality of using copyrighted material is disputed and there are no laws in effect in any jurisdiction I'd know of that would explicitly ban AI training with copyrighted material. It is even explicitly allowed in some places and for some purposes. So far, every single lawsuit filed against use of copyrighted material in AI training has been struck down. That leaves ethical considerations, but ethics are the most subjective thing known to mankind, and people will have differing views on what's ethical and what's not. Throwing around insults, death threats and charged phrases like "it's theft!" will not help in finding a constructive solution.
Compensation of creators is something that needs to be, and probably will be, resolved in the near future. Personally, I'd love providers of commercial (not open source!) AI models to have to pay a share of their revenue to fund compensation payments towards creators. The idea of having to seek permission for every single piece used in training is pretty impractical, at least. But that's just my opinion.
But some of the argument reminds me of how people went ballistic over how pocket calculators would make people dumber at math, or how Photoshop would kill all art. People just hate change, particularly when they feel threatened by it. AI is a tool, nothing more, nothing less. It can make games better if used right, just like Photoshop can be used to enhance images with tools creators didn't have at their disposal prior to its existence. Sure, AI also has the potential to produce a lot of unimaginative slop, but it's not as if Steam wasn't getting flooded with low-effort garbage prior to AI.
Particularly looking at small-budget or solo game projects, AI use can make projects possible that otherwise would never get realized. How many artists are there who can't code and can't afford to hire a coder? Or how many coders can't do art and can't afford to hire an artist? Maybe a writer has an awesome story in mind, but can't paint the pictures for that visual novel, and doesn't want to take the financial risk that comes with outsourcing it? These projects normally would never have gotten done. Now they can. People can make their dream game, and AI can help them with the skills they don't have. I fail to see anything wrong with that, even if I can see some of you reaching for the pitchforks now.
I am a coder, and these days I use AI to help me code in some situations. Shocking, I know. But you know what? Most coders these days do it. The reason is that I can use my time for more interesting things than writing the 10,000th variant of some standard problem.
AI will never replace human creativity. These models are stochastic parrots after all; they can't really invent anything new and exciting. There will always be demand for human art, human writing and human coding. Maybe people will realize that one day, and then we can have a sane discussion about how to compensate creators.
AI will stay, as it should, but it will be focused towards actual useful stuff in specific areas. Like finding interesting patterns in the night sky, folding proteins, automated translation etc. And the big boys, who are not profitable, will run to the next bubble to continue their pyramid scheme.
Some head of research at Google said AI should rather do the things people are bad at - like folding proteins - instead of the things people are good at - writing texts, making music, ... (*)
As far as I know, there is no way to make these big generic LLMs and actually be profitable; they are just way too expensive to produce and run (and then there are the potential lawsuits).
Well, some already have been produced, and they're not that bad at writing texts, and people are using them, and will pay some money for it. So, this is here to stay, whatever we think of it.
(*) paywalled, but here's the link https://www.heise.de/select/ct/2025/2/2433008341903287854