What's bad about generative AI on the Internet and beyond, exactl…
Secondly, AI is an extremely blunt instrument, basically the most brute-force solution you can apply to a problem. It should only be applied to problems for which there is no better solution, because the cost is simply too high to justify anything else.
And now we have this extremely unhealthy business that is losing billions upon billions of dollars on a promise that is unattainable.
So, my opinion: the current AI business needs to die, and it needs to die quickly, because the longer it takes, the bigger the damage will be. So I do what I must: ignore this entire business and everything created to benefit it.
AI requires massive data centers to run, which are both environmentally damaging and resource-hungry. Electricity costs, drinking water, buying up all the RAM, buying up all the hard drives: these things are detrimental to the general public.
Owing to the unstable nature of the AI industry, you can use AI affordably one day, and at some later stage the AI company revises its pricing and suddenly everything costs orders of magnitude more.
Owing to the nature of AI code generation, it produces code quickly, but that code can be flawed, with unintentional bugs and inefficiency in its design. A skilled developer will produce code slowly and steadily (compared to AI), but it will have been built with real thought applied to the objectives, and bugs and efficiency concerns will potentially be ironed out earlier. Code written by AI likely does not have good security designed into it.
Of particular concern to me is the fact that AI companies are profiling their users and collecting all the queries that users make. If you are using AI for corporate or private coding projects, the AI company potentially gets access to your private code. There are massive privacy concerns in signing into an online AI account to ask AI questions: all those questions get linked back to you as a user. [Local LLMs hosted on your own computer are the only privacy-respecting way to do it.]
Big tech, defense and capital love GLMLM, because development takes massive amounts of resources, so any advantage it gives its developer will stay exclusive to those with massive resources.
Its function is to deepen monopolies by consuming everything we hold dear.
It's also a useful tool and a technology which has shown potential for some great tricks.
Ever thought about hiding a message in chatbot output?
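One simple version of that trick is an acrostic: the first letter of each sentence in the output spells the secret. The sketch below is a minimal illustration with hand-written sentences standing in for model output; the sentence bank and helper names are invented for the example.

```python
def encode_acrostic(secret: str, sentence_bank: dict) -> str:
    """Pick, for each secret letter, a cover sentence starting with it."""
    lines = [sentence_bank[ch.upper()] for ch in secret]
    return " ".join(lines)

def decode_acrostic(text: str) -> str:
    """Recover the secret from the first letter of each sentence."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return "".join(s[0].upper() for s in sentences)

# Invented cover sentences; a real chatbot would generate these on demand.
bank = {
    "H": "Horses were domesticated thousands of years ago.",
    "I": "In many regions they still work farmland today.",
}

stego = encode_acrostic("HI", bank)
print(stego)
print(decode_acrostic(stego))  # HI
```

To a casual reader the output is just two plausible sentences; only someone who knows the scheme extracts the message.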
I hope big tech will be wrong, fail at locking in current GLMLM users, and be burned to such an unimaginable extent that competition gets a chance again.
The copyright issues are just a scaled-up, overwhelming version of communication tech's deeper copyright issues.
On hallucination: yes, that's an effect of using a probabilistic technology in a deterministic environment. Can be useful, can be not.
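That mismatch can be shown with a toy model: the question has exactly one right answer, but sampling from a probability distribution sometimes returns a lower-probability wrong one. The distribution below is invented for the illustration, not taken from any real model.

```python
import random

# Hypothetical model output for "capital of France?": one correct answer,
# two wrong ones that still carry probability mass.
answers = {"Paris": 0.70, "Lyon": 0.20, "Berlin": 0.10}

def greedy(dist):
    """Deterministic decoding: always the highest-probability answer."""
    return max(dist, key=dist.get)

def sample(dist, rng):
    """Probabilistic decoding: draw an answer proportional to its mass."""
    r = rng.random()
    acc = 0.0
    for token, p in dist.items():
        acc += p
        if r < acc:
            return token
    return token  # numerical fallback

rng = random.Random(0)  # seeded so the demo is reproducible
draws = [sample(answers, rng) for _ in range(1000)]

print(greedy(answers))                   # always "Paris"
print(any(d != "Paris" for d in draws))  # True: some draws hallucinate
```

Greedy decoding is deterministic but dull; sampling is what gives chatbots variety, and the wrong answers ride along with it.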
On GLMLM safety: 😆 good luck trying to develop a technology that can't be used for evil, or that can express intelligent thought without disagreement.
On GLMLM security: you're trying to create a mixed instruction-and-data pipeline without security issues for a context-sensitive language. You'll need at least a recursively enumerable ruleset. We currently fail at implementing a context-sensitive ruleset for a context-free language (SQL injection attacks) with the available theory, so good luck, suckers. (Also, I hope some papers get written about this, because I'm crazy and want to implement a recursively non-enumerable ruleset for a recursively enumerable language.)
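The SQL injection case mentioned above is worth seeing concretely, because it shows the one defense SQL has that prompts lack: a parameter placeholder that keeps data out of the instruction channel. A minimal sketch using Python's stdlib sqlite3; the table and values are made up.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("alice", "a1"), ("bob", "b2")])

attacker_input = "nobody' OR '1'='1"

# Vulnerable: input concatenated into the instruction stream, so the
# attacker's quotes and OR clause become part of the query itself.
leaked = db.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(leaked)  # both secrets leak

# Safe: the ? placeholder keeps the input in the data channel.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # [] - the input is treated as a literal name, matching nobody
```

An LLM prompt has no equivalent of that `?`: instructions and untrusted data travel through the same token stream, which is why prompt injection is so hard to rule out.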
On AI bias: all useful technology has a bias and assumptions about what you desire. What can be done is to modify its biases until you consider them helpful, so although the problem is generally wrongly identified, it can be solved with enough usability testing.
On AI transparency: Won't happen, which is sad.
On the scorching of websites: tragedy of the commons.
On AI profitability: Google has already been doing it for more than 20 years, and this also reveals another secret of the AI boom: it's actually a "steal Google's lunch" boom. To which I say: I would love to see you try. When the big fight, the small benefit.
Last edited by LoudTechie on 13 May 2026 at 10:08 am UTC
AI, particularly LLMs, is a consensus machine. Let's imagine that LLMs and their surrounding tech had existed in the 16th century. Almost all of the material that existed at the time, on which an LLM could have been trained, held the perspective of geocentrism. Any resulting model would therefore adamantly perpetuate a falsity like geocentrism.
Let's come back to the 21st century. What materials are largely used to train models? You can see where I'm going with this? LLMs are a technology which stands completely antithetical to critical thinking and to diversity of perspective.
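The "consensus machine" point can be made with a toy continuation model: it answers with whatever completion is most frequent in its corpus, so a majority-geocentric corpus yields a geocentric model. The corpus below is invented for the illustration.

```python
from collections import Counter

# A tiny "16th-century" corpus: the majority view dominates the data.
corpus = [
    "the sun orbits the earth",   # majority (geocentric) view
    "the sun orbits the earth",
    "the sun orbits the earth",
    "the earth orbits the sun",   # the minority, correct view
]

# Count how each text continues the prompt "the sun".
continuations = Counter()
for text in corpus:
    words = text.split()
    if words[:2] == ["the", "sun"]:
        continuations[" ".join(words[2:])] += 1

# The model's "answer" is simply the most frequent continuation.
print(continuations.most_common(1)[0][0])  # orbits the earth
```

Real LLMs are vastly more sophisticated, but the training objective still rewards reproducing the dominant pattern in the data, which is the commenter's point about consensus crowding out minority (and sometimes correct) perspectives.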
Tangential thought:
Why am I saying all of this? It explains that all this hype is based on files saving data, just like MP4, DOCX, OGG or whatever. The difference is the save and read method, and also the size and the required data. To make a little specialized "book", not much data is required to save it into ALA. Text-to-speech for a single language is relatively easy to realize and was also one of the first things made in the 2010s. It does not require large-scale data centers, nor stolen data. But if you want to replace Wikipedia and beyond, carrying the knowledge of humanity, you have to make larger archives than anyone has ever made.

And where does the knowledge come from? From everyone participating on the internet. It is not just stolen, but also saved and shared around the world. Licenses get removed (even those protecting free knowledge), authors no longer appear on any credits list, and while we have to pay a lot for a little piracy, companies like NVidia steal from the whole world without consequences, other than becoming the most valuable company in the world. Who has to pay for it? Everyone who was stolen from. And even if something cannot be reconstructed from this save-file, it is still part of it somehow. Even if it is 99.99% damaged, it helped to form the archive, and we just don't know whether it is damaged or whether we simply haven't found the right prompt. But a bad MP3 rip with artifacts is still a pirated MP3. Why should that change for LLMs?
And as said earlier: compressing files costs a lot of energy, and the same goes for decompression. The energy and money aspects have been covered by others above, but this explains why it is so costly. When I say "costs", I also mean the impact on nature. It increases global warming, probably past the point of no return. RAM shortages and power bills should be the least of our fears. Because of the Iran war, 15% more people will starve this year. What do you think happens when less food can be produced because of a worse climate, even within our lifetime, not to speak of beyond?
When we go further down big tech's route, everything becomes smart (toasters, cars, fridges, door locks, ...) and smart devices do not get long-term support. After some years nobody cares about these proprietary systems any longer. Except automated LLM hacking tools that attack all the infrastructure. Oops, car crash; oops, no power at the hospital; oops, the toaster catches fire; oops, the light flickers and a person has epilepsy. We are building a world-sized robot that can be attacked by a five-year-old child playing with a teddy that has speech modules and an internet connection to an LLM. Sounds crazy, but that's the potential roadmap we are on.
You have also probably heard about Grok undressing people. There is a whole porn industry around LLM services to undress people and create porn material, child abuse included, where age verification does not help. Others build surveillance technologies on top of "age" (identity) verification, or snoop on all our messages, letting an LLM "summarize" our political interests, sexual preferences and illnesses and delivering them to regimes that may kill you (say being gay is forbidden, and you are not gay, but the LLM prints out this garbage because you talk to a friend who is gay, which you don't even know about...).
I could continue all night. It is just insane in what bad ways "we" use this technology. And I am looking at this technology with a science-oriented mind, which means I can also tell you a lot of positive things many people do not know about. The negative impact just outweighs anything positive at the moment (and 5 years ago I did not believe it would get this bad). We can really only hope that the bubble crash corrects at least the worst consequences a bit.