Title: What's bad about generative AI on the Internet and beyond, exactly?
RubyRose136 2 days ago
I have seen multiple criticisms of generative AI, like how it infringes copyright on both training data & output, how it causes "brain rot" in humans, and how it introduces bugs & vulnerabilities when used during software development. What are your opinions on this?
Ehvis 2 days ago
Apart from the things you mentioned: neural networks / machine learning models aren't actually intelligent. They basically spit out the most likely/generic patterns for a given input. That is why AI-generated art looks so homogeneous. Now you can imagine what's going to happen to industries that are supposed to be making art, but are already willing to let that go in favour of higher margins.

Secondly, AI is an extremely blunt instrument - basically the most brute-force solution you can apply to a problem. It should only be applied to problems for which there is no better solution, because the cost is just too high.

And now we have this extremely unhealthy business that is losing billions upon billions of dollars on a promise that is unattainable.

So my opinion: the current AI business needs to die, and it needs to die quickly, because the longer it takes, the bigger the damage will be. So I do what I must: ignore this entire business and everything that is created to benefit it.
g000h 2 days ago
There are plenty of concerns, for instance:

AI requires massive data centers to run, which are environmentally damaging and use up resources. Electricity costs, drinking water, buying up all the RAM, buying up all the hard drives - these things are detrimental to the general public.

Owing to the unstable nature of the AI industry, one day you can use AI affordably, and at some later stage the AI company revises its pricing and suddenly everything costs orders of magnitude more.

Owing to the nature of AI code generation, it produces code quickly, but that code can be flawed, with unintentional bugs and inefficiency in the design. A skilled developer will produce code slowly and steadily (compared to AI), but it will have been built with real thought applied to the objectives, and bugs and efficiency concerns will potentially be ironed out earlier. Code written by AI likely does not have good security designed into it.

Of particular concern to me is the fact that AI companies are profiling their users and collecting all the queries that users make. If you are using AI for corporate or private coding projects, the AI company potentially gets access to your private code. There are generally massive privacy concerns with signing into an online AI account to ask AI questions - all those questions get linked back to you as a user. [Local LLMs hosted on your own computer are the only privacy-respecting way to do it.]
LoudTechie a day ago
Quoting: RubyRose136: I have seen multiple criticisms regarding generative AI, like how they infringe on copyrights on training data & output, how they cause "brain rot" on humans, and how they cause software to have bugs & vulnerabilities if used during software development. What are your opinions on this?
Generative AI doesn't have a useful definition (a fractal and a blender pass the Wikipedia test), so I will stick to generative Large Machine Learning Models, or GLMLM for short.

Big tech, defense and capital love GLMLM, because development takes massive amounts of resources, so any advantage it gives its developer will stay exclusive to those with massive resources.
Its function is to deepen monopolies by consuming everything we hold dear.
It's also a useful tool and a technology which has shown potential for some great tricks.
Ever thought about hiding a message in chatbot output?
I hope big tech will be wrong, fail to lock in current GLMLM users, and get burned to such an unimaginable extent that competition gets a chance again.
The copyright issues are just the scaled up and overwhelmed version of communication tech's deeper copyright issues.
On hallucination: yeah, that's an effect of using probabilistic technology in a deterministic environment. Can be useful, can be not.
On GLMLM safety: 😆 good luck trying to develop a technology that can't be used for evil or expressing intelligent thought without disagreement.
On GLMLM security: You're trying to create a mixed instruction-and-data pipeline without security issues for a context-sensitive language. You'll need at least a recursively enumerable ruleset. With the available theory we already fail at implementing a context-sensitive ruleset for a context-free language (SQL-injection attacks), so good luck, suckers. (Also, I hope some papers get written about this, because I'm crazy and want to implement a recursively non-enumerable ruleset for a recursively enumerable language.)
On AI bias: all useful technology has biases and assumptions about what you desire. What one can do is modify its biases so that you consider them helpful, so although the problem is generally wrongly identified, it can be solved with enough usability testing.
On AI transparency: Won't happen, which is sad.
On the scorching of websites: tragedy of the commons.
On AI profitability: Google has already been doing it for more than 20 years, and this also reveals another secret of the AI boom. It's actually a "steal Google's lunch" boom. To which I say: I would love to see you try. When the big fight, the small benefit.
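The SQL-injection parallel drawn above is worth making concrete. Here is a minimal Python sketch (my own illustration, not from the thread) of the mixed instruction/data failure: when user data is spliced into the instruction stream, the data can act as instructions - the same shape of problem that prompt injection recreates for LLMs, only in natural language instead of SQL.

```python
# Illustrative sketch: the table name, column names and payload are all
# made up for this example. Uses only the standard library (sqlite3).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")
conn.execute("INSERT INTO users VALUES ('bob', 'swordfish')")

def lookup_unsafe(name: str):
    # Data is concatenated into the instruction stream: injectable.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver keeps data and instructions apart.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "nobody' OR '1'='1"
leaked = lookup_unsafe(payload)  # the payload becomes an instruction: leaks every row
safe = lookup_safe(payload)      # the payload stays data: matches nothing
```

For SQL the fix (parameterization) exists because the instruction grammar is fixed and machine-checkable; the comment's point is that no equivalent separation is currently known for free-form natural-language prompts.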

Last edited by LoudTechie on 13 May 2026 at 10:08 am UTC
GustyGhost a day ago
Here's a criticism that isn't often voiced.

AI, particularly LLMs, is a consensus machine. Let's imagine that LLMs and their surrounding tech existed in the 16th century. Almost all of the material that existed at the time, on which an LLM could have been trained, held the perspective of geocentrism. Any resulting model would therefore adamantly perpetuate the falsehood of geocentrism.

Let's come back to the 21st century. What materials are largely used to train models? You can see where I'm going with this. LLMs are a technology that stands completely antithetical to critical thinking and to diversity of perspective.

Tangential thought:
If I were still in university, I would love to conduct a blind study: groups of users are presented with a chat prompt. 50% of the time the chat connects them to just another Reddit user; the other 50% of the time, to a popular LLM chatbot. After a brief conversation, the users are informed of this and surveyed on whether they think they were talking to a redditor or to an LLM chatbot. Would you be able to tell the difference?
RubyRose136 5 hours ago
Thank you everyone for your comments. I do believe the AI craze will end eventually, as nothing lasts forever (not even our universe 😅) and none of the AI companies that exist today are profitable. I think the major AI tools we use (ChatGPT, Claude, Gemini, etc.) won't be around 8 years from now. (Especially Gemini, given that Google is known for shutting down perfectly working services - look at Stadia, for example.)
I call it Algorithmic Lossy Archives (ALA) and remove the "human being" aspect. What are neural network models, if we look deeper into it? Data files, where the "training" process is nothing other than saving data. It is probably easier to think about by comparison with compression tools such as 7Z, XZ, ZIP or whatever: the more you compress, the more energy you have to put into the compression. LLMs are super huge archives of data, extremely well pressed together, but also losing data, just like MP3 files. And everything that was saved can also be read. In a typical media file you have a timestamp that tells your program where to read from; in an ALA you have a prompt. A given prompt produces exactly one very specific output. You can repeat that process as often as you want and nothing changes (which shows the data is deterministically saved). To get different answers, the prompts are changed, either with random numbers (which you have no control over on cloud services) or by adding more and more words over the chat session, forming a longer and longer prompt (and therefore producing different output).
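The determinism claim above can be sketched in a few lines. This is my own toy illustration, not from the thread, and the hash-based "model" is a made-up stand-in, not how any real LLM scores tokens: greedy decoding is a pure function of the prompt, so the same prompt always yields the same token, while the apparent variation of chat services comes from injected randomness (a sampling seed you don't control).

```python
# Toy stand-in for a "model": deterministic pseudo-scores over a tiny
# vocabulary, derived from a hash of the prompt. All names are invented
# for this sketch.
import hashlib
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def scores(prompt: str) -> list[float]:
    # Same prompt -> same digest -> same scores, every time.
    digest = hashlib.sha256(prompt.encode()).digest()
    return [(digest[i] + 1) / 256 for i in range(len(VOCAB))]

def greedy_next(prompt: str) -> str:
    # Greedy decoding (temperature 0): a pure function of the prompt.
    s = scores(prompt)
    return VOCAB[s.index(max(s))]

def sampled_next(prompt: str, seed: int) -> str:
    # Seeded sampling: the scores are still fixed; only the injected
    # randomness makes the output vary between calls.
    rng = random.Random(seed)
    return rng.choices(VOCAB, weights=scores(prompt), k=1)[0]

a = greedy_next("the cat")          # identical on every call
b = greedy_next("the cat")
x = sampled_next("the cat", seed=1) # can differ from seed=2, but is
y = sampled_next("the cat", seed=1) # reproducible for a fixed seed
```

The same structure holds for real LLM inference: with the weights fixed and sampling disabled (or the seed pinned), the output for a given prompt is reproducible.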

Why am I saying all of this? It shows that all our hype is based on files saving data, just like MP4, DOCX, OGG or whatever. The difference is the save and read method, and also the size of the required data. To make a little specialized "book", not much data is required to save it into an ALA. Text-to-speech for a single language is relatively easy to realize and was also one of the first things made in the 2010s. It does not require large-scale data centers, and not even stolen data. But if you want to replace Wikipedia and beyond, carrying the knowledge of humanity, you have to make larger archives than anyone has ever made. And where does the knowledge come from? From everyone participating on the internet. It is not just stolen, but also saved and shared around the world. Licenses are removed (even those protecting free knowledge), authors are no longer on any credits list, and while we have to pay a lot for a little piracy, companies like NVidia steal from the whole world without consequences other than becoming the most valuable company in the world. Who has to pay for it? Everyone who was stolen from. And even if something cannot be reconstructed from this save file, it is still part of it somehow. Even if it is 99.99% damaged, it helped to form the archive, and we just don't know whether it is damaged or whether we simply haven't found the right prompt. But a bad MP3 rip with artifacts is still a pirated MP3. Why should that change for LLMs?

And as said earlier: compressing files costs a lot of energy, and the same goes for decompression. The energy and money aspects were covered by others above, but this explains why it is so costly. When I say "costs", I also mean the impact on nature. It accelerates global warming, probably past the point of no return. RAM shortages and power bills should be our least fear. Because of the Iran war, 15% more people will starve this year. What do you think happens when less food can be produced due to a worse climate, even within our lifetime, not even speaking of beyond?

And if we go the route of big tech even further: everything becomes smart (toasters, cars, fridges, door locks, ...) and smart devices do not get long-term support. After some years nobody cares about these proprietary systems any longer - except automated LLM hacking tools that attack all the infrastructure. Oops, car crash; oops, no power at the hospital; oops, toaster catches fire; oops, light is flickering and a person has epilepsy. We are building a world-sized robot that can be attacked by a 5-year-old child playing around with a teddy that has speech modules and an internet connection to an LLM. Sounds crazy, but that's the potential roadmap we are on.

You have also probably heard about Grok undressing people. There is a whole porn industry around LLM services for undressing people and creating porn material, child abuse material included, where age verification does not help. Others build surveillance technologies on top of "age" (identity) verification, or snoop on all our messages, letting an LLM "summarize" our political interests, sexual preferences and illnesses and delivering it to regimes that may kill you (say being gay is forbidden and you are not gay, but the LLM prints out this garbage because you talk to a friend who is gay, which you don't even know...).

I could continue the whole night. It is just insane in what bad ways "we" use this technology. And I am looking at this technology with a science-oriented mind, which means I can also tell you a lot of positive things many people do not know about. The negative impact just outweighs anything positive at the moment (five years ago I did not believe it would get this bad). We can really only hope that the bubble crash corrects at least the worst consequences a bit.