Title: What's bad about generative AI on the Internet and beyond, exactly?
RubyRose136 21 hours ago
I have seen multiple criticisms regarding generative AI, like how they infringe on copyrights on training data & output, how they cause "brain rot" on humans, and how they cause software to have bugs & vulnerabilities if used during software development. What are your opinions on this?
Ehvis 20 hours ago
Apart from the things you mentioned: neural networks / machine learning models aren't actually intelligent. They basically spit out the most likely/generic patterns based on an input. That is why AI-generated art looks so homogeneous. Now you can imagine what's going to happen to industries that are supposed to be making art, but are already willing to let that go in favour of higher margins.
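A toy sketch of my own (not from this thread) of why "most likely pattern" output comes out homogeneous: greedy decoding always picks the single highest-probability option, so every run produces the same generic choice, while sampling would at least vary. The word distribution here is made up for illustration.

```python
import random

# Hypothetical next-word distribution after "the sky was a shade of ..."
next_word_probs = {"blue": 0.55, "grey": 0.25, "turquoise": 0.15, "vermilion": 0.05}

def greedy(probs):
    """Always return the single most likely word (temperature-0 decoding)."""
    return max(probs, key=probs.get)

def sample(probs):
    """Draw a word at random, weighted by probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

# Ten greedy "generations" all collapse to the same word.
print({greedy(next_word_probs) for _ in range(10)})  # {'blue'}
```

Sampling with `sample()` occasionally surfaces the rarer words, which is roughly why raising the "temperature" makes output less samey, at the cost of more nonsense.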

Secondly, AI is an extremely blunt instrument. Basically the most brute force solution you can apply to a problem. It should only be applied to problems for which there is no better solution because the cost is just too high.

And now we have this extremely unhealthy business that is losing billions upon billions of dollars on a promise that is unattainable.

So, my opinion: the current AI business needs to die, and it needs to die quickly, because the longer it takes, the bigger the damage will be. So I do what I must: ignore this entire business and everything created to benefit it.
g000h 17 hours ago
There are plenty of concerns, for instance:

AI requires massive data centres to run, which are both environmentally damaging and resource-hungry: electricity costs, drinking water, buying up all the RAM, buying up all the hard drives. These things are detrimental to the general public.

Owing to the unstable nature of the AI industry, one day you can use AI affordably, and at some later stage the AI company revises its pricing and suddenly everything costs orders of magnitude more.

Owing to the nature of AI code generation, it produces code quickly, but that code can be flawed, with unintentional bugs and inefficiency in the design. A skilled developer will produce code slowly and steadily (compared to AI), but it will have been built with real thought applied to the objectives, and bugs and efficiency concerns will potentially be ironed out earlier. Code written by AI likely does not have good security designed into it.

Of particular concern to me is the fact that AI companies are profiling their users and collecting all the queries that users make. If you are using AI for corporate or private coding projects, the AI company potentially gets access to your private code. There are massive privacy concerns in signing into an online AI account to ask questions: all those questions get linked back to you as a user. [Local LLMs hosted on your own computer are the only privacy-respecting way to do it.]
LoudTechie 8 hours ago
Quoting: RubyRose136I have seen multiple criticisms regarding generative AI, like how they infringe on copyrights on training data & output, how they cause "brain rot" on humans, and how they cause software to have bugs & vulnerabilities if used during software development. What are your opinions on this?
Generative AI doesn't have a useful definition (a fractal and a blender pass the Wikipedia test), so I will stick to generative Large Machine Learning Models, or GLMLM for short.

Big tech, defense and capital love GLMLM, because development takes massive amounts of resources, so any advantage it gives its developer will stay exclusive to those with massive resources.
Its function is to deepen monopolies by consuming everything we hold dear.
It's also a useful tool and a technology which has shown potential for some great tricks.
Ever thought about hiding a message in chatbot output?
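For anyone wondering what that trick could look like, here is a toy sketch of my own (the commenter doesn't spell out a scheme): text steganography that hides bits in the choice between equally plausible synonyms. The synonym slots are invented for illustration; a real scheme would steer a model's token choices the same way.

```python
# Each slot offers two interchangeable words; choosing candidate 0 or 1
# encodes one bit of the secret message in otherwise innocent-looking text.
SYNONYM_SLOTS = [
    ("big", "large"),
    ("quick", "fast"),
    ("said", "stated"),
    ("happy", "glad"),
]

def encode(bits):
    """Pick one word per slot according to the bit stream."""
    return [pair[bit] for pair, bit in zip(SYNONYM_SLOTS, bits)]

def decode(words):
    """Recover the bits by checking which candidate was chosen."""
    return [pair.index(word) for pair, word in zip(SYNONYM_SLOTS, words)]

secret = [1, 0, 1, 1]
cover_words = encode(secret)
print(cover_words)              # ['large', 'quick', 'stated', 'glad']
assert decode(cover_words) == secret
```

A reader sees ordinary word choices; only someone who knows the slot table can read the bits back out.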
I hope big tech will be wrong, fail to lock in current GLMLM users, and get burned to such an unimaginable extent that competition gets a chance again.
The copyright issues are just the scaled-up and overwhelmed version of communication tech's deeper copyright issues.
On hallucination: yeah, that's an effect of using probabilistic technology in a deterministic environment. It can be useful, or not.
On GLMLM safety: 😆 good luck trying to develop a technology that can't be used for evil, or one that expresses intelligent thought without ever causing disagreement.
On GLMLM security: you're trying to create a mixed instruction-and-data pipeline without security issues for a context-sensitive language. You'll need at least a recursively enumerable ruleset. We currently fail at implementing a context-sensitive ruleset for a context-free language (SQL injection attacks) with the available theory, so good luck, suckers. (Also, I hope some papers get written about this, because I'm crazy and want to implement a recursively non-enumerable ruleset for a recursively enumerable language.)
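The SQL injection comparison is worth unpacking: SQL at least has a mechanism that separates instructions from data (parameterized queries), which prompts to an LLM lack entirely. A minimal sketch of that separation, using Python's standard `sqlite3` module with an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attacker_input = "x' OR '1'='1"

# Unsafe: the input is spliced into the instruction stream, so SQL
# metacharacters inside the "data" get executed as SQL.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % attacker_input
).fetchall()
print(len(unsafe))  # 1 -- the injected OR '1'='1' matched every row

# Safe: the ? placeholder keeps the input in a separate data channel;
# the driver never parses it as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(len(safe))    # 0 -- no user is literally named "x' OR '1'='1"
```

An LLM prompt has no equivalent of the `?` placeholder: instructions and untrusted text travel down the same channel, which is why prompt injection is such a hard problem.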
On AI bias: all useful technology has biases and assumptions about what you desire. What one can do is modify a tool's biases so that you consider them helpful; so although the problem is generally wrongly identified, it can be solved with enough usability testing.
On AI transparency: Won't happen, which is sad.
On the scorching of websites: tragedy of the commons.
On AI profitability: Google has already been doing it for more than 20 years, and this also reveals another secret of the AI boom: it's actually a "steal Google's lunch" boom. To which I say: I would love to see you try. When the big fight, the small benefit.

Last edited by LoudTechie on 13 May 2026 at 10:08 am UTC
GustyGhost 1 hour ago
Here's a criticism that isn't often voiced.

AI, particularly the LLM, is a consensus machine. Let's imagine that LLMs and their surrounding tech had existed in the 16th century. Almost all of the material that existed at the time, on which an LLM could have been trained, held the perspective of geocentrism. Any resulting model would therefore adamantly perpetuate that falsity.

Let's come back to the 21st century. What materials are largely used to train models? You can see where I'm going with this. LLMs are a technology that stands completely antithetical to critical thinking and to diversity of perspective.

Tangential thought:
If I were still in university, I would love to run a blind study: groups of users are presented with a chat prompt. 50% of the time the chat is with a reddit user; the other 50% of the time it is with a popular LLM chatbot. After a brief conversation, the users are informed of this and surveyed on whether they think they were connected to a redditor or to an LLM chatbot. Would you be able to tell the difference?