Latest Comments by Purple Library Guy
Battlestar Galactica Deadlock is getting delisted starting November 15
7 Nov 2025 at 3:49 pm UTC Likes: 2
Ahhh, hardly anyone remembers the original TV show . . . and deservedly so! After the original movie, which at the time seemed almost as big as the original Star Wars, they did a TV show, which was actually not bad for a while but sank deeper and deeper into cheese as it went along, until I gave up. I don't know how it ended, or if it really ended at all as opposed to getting a sudden axe.
Scribbly, open-world action adventure Scrabdackle arrives December 2
5 Nov 2025 at 5:58 pm UTC Likes: 1
I feel like I've been hearing about this game for a long time.
Linux gamers on Steam finally cross over the 3% mark
3 Nov 2025 at 5:41 am UTC Likes: 5
So at current pace it looks like 4% in a bit less than 2 years, maybe 5% in 3 years and a bit. But, a bit of acceleration is looking quite possible. In a world where I'm pessimistic about a whole lot of stuff, Linux progress is a welcome spot of optimism. Let's go!
The extraction shooter ARC Raiders is out and appears to work on Linux
30 Oct 2025 at 7:44 pm UTC Likes: 1
Is text-to-speech actually the same technology as generative AI? I feel like it should be a different thing, but I don't know.
Fedora Linux project agrees to allow AI-assisted contributions with a new policy
30 Oct 2025 at 5:46 pm UTC Likes: 1
I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"
Fedora Linux 43 has officially arrived
30 Oct 2025 at 5:43 pm UTC Likes: 1
I really think you're misunderstanding the technology (specifically Large Language Model generative "AI") and what it can do. The thing is that while there's a real thing there, and it can do some interesting things, it cannot actually do most of the transformational things that are claimed about it, and some of the key stuff it supposedly does, it actually kind of doesn't. And while proponents will say sure, but it's a technology in its infancy . . . it actually isn't, and it seems to have plateaued in its capabilities.
So like, if it could actually be used to vastly increase coding productivity, then it would be here to stay in that function and perhaps nobody would be able to do anything about it. Firms that use it would outcompete firms that didn't and so on. But if it doesn't increase coding productivity, and there is significant evidence that actually it does not and may even reduce it, then it's mostly just a fad in that space.
Similarly for jobs--if the job is something like making third-rate rehashed articles for publication, then yes, AI is disrupting that space. But most jobs require making decisions--not necessarily important decisions, but all those little day-to-day decisions, often in chains where making one little decision leads to the next one. And the AI people are touting AI "agents" to do this. If those worked, then there are a lot of jobs generative AI would be able to disrupt. But they don't; they're complete and utter crap at that stuff. And we're not talking about the kind of failure rate you get when asking AI questions, where it will sometimes hallucinate and give a nonsense answer. We're talking tiny percentages of success, pretty much random level. Agents just don't work.
The implication of that is that AI just can't replace most jobs. Companies jumping on the hype wagon and doing it anyway will have problems. So there again, it's probably not "here to stay" in the "replacing ordinary jobs in organizations that do real things" sector.
Again, as for its continuing to improve . . . the thing is that what they did to make the big LLMs is not really based on very new ideas. People have been studying this stuff for a long time, and as far as I can make out LLMs are based on a fairly normal line of research that computer scientists had been thinking about for a while. It's just the first time someone really threw the big money at them and made the Language Models really Large. So it's not as infant a technology as it seems. Further, it shares a difficulty with all these sorts of software-evolution approaches: You can't iteratively improve them in the normal way because nobody wrote the program and nobody understands it. So you can't just go "I'll clean up this bit of code so it runs better", "I'll add this feature, it will be fairly easy because it hooks into the API I made just so that I'd be able to add features", "I'll fix this error" or that sort of thing, because you don't know what's in there. All you can do is train again with slightly different parameters and hope something good comes out. Or scrape even more data to make the Language Model even Larger, or curate it a bit differently. But they're about at the limits of size already. And they also seem to have hit the limits of what kind of thing LLMs are able to do. They have an elephant; it is fine at walking, but to get it to do the transformative things they want, they need it to be able to climb, and that's not what elephants do. Even the hallucinations seem to be kind of baked into what gives the LLMs the ability to say lots of different things in the first place. At this point I think LLMs are a surprisingly mature technology for their apparent age, one that has hit a development plateau.
So bottom line, I think you're just simply wrong. Whether I wanted generative AI to replace everyone's job or not, it is not going to, and it may well not be "here to stay" even in some roles it's being used for at the moment. It's being used in those roles not because it is good at them, but because of the hype; if and when the hype goes away, its tide will recede.
Secondarily, generative AI as done by the big hyped Western companies is a bubble. Its costs are far greater than its revenue, and nobody seems to have any plans to change that very much. The key AI companies seem to be run by a bunch of Sam Bankman-Frieds: hucksters and grifters who keep changing their stories to whatever sounds impressive in the moment. So those companies will go under, and when they go under and stop paying their staff and utility bills, all their server farms will stop answering queries. And when their server farms stop answering queries, the companies that had been relying on them won't be able to make queries. At that point, for those companies, generative AI will not be here to stay even if it was actually working for them. So in that limited sense, generative AI will not be here to stay. Although the Chinese stuff will still be going.
I expect in the future some other AI technology will come along that does more things and impacts more areas. But it will be another technology, not this one.
Self-driving cars are also a different technology entirely. Yes, they're both called "AI" even though they aren't really, and they're both kind of black boxes that get "trained" rather than being actual programs that anyone really understands or can maintain, but beyond that I don't think there's a ton of similarity. Self-driving cars also seemed to have a lot of promise, also turn out to actually kind of suck and also seem to have run into a technological plateau, so the grand plans for them have also stalled out rather, but they're a separate technology doing that pattern for separate reasons.
Fedora Linux 43 has officially arrived
29 Oct 2025 at 11:22 pm UTC Likes: 2
Well, I will say one thing: The bit about automation increasing people's free time is a bit of a mirage. In practice, that doesn't happen. At the local level, if automation allows for greater productivity, workers are expected to produce more. At the global level, if automation allows for greater productivity, you don't get a shorter work week or more money for employees. You get some combination of higher unemployment, creation of demand such as by marketing more luxuries or planned obsolescence, and bullshit jobs. More broadly still, creation of more surplus normally just means that more surplus goes to the rich. Productivity has been decoupled from income or leisure since at least 1980--basically, since the fall of "new deal" and "social democratic" thinking. I mean, when most women in North America joined the workforce and so typical nuclear families went from one "breadwinner" to two, in theory that should have meant both could work half time. Instead, the cost of living was changed so that maintaining a half decent middle class household took two incomes.
Maybe with some kind of bottom-up socialism, automation would result in broadly shared prosperity and leisure. But with the system we have, not so much. This is why unions so often end up opposing tech change--their experience is that it leads to speedup and layoffs, while the productivity increase does nothing for them. Sure, it may make the firm more competitive . . . but that's a Red Queen's race for workers: They run faster and faster just to stay in the same place and shovel a few more billion to Jeff Bezos or whoever.
As Amazon cut thousands of jobs, New World: Aeternum will see no more updates
29 Oct 2025 at 2:26 pm UTC Likes: 7
I don't think that's going to work out the way top executives think it will. My prediction: Managers and executives will have the clout to avoid being laid off and will instead lay off people who do the real work. Then, they will find out that AI can't do most of that real work, and a lot of stuff that is supposed to be getting done will not.
Fedora Linux 43 has officially arrived
29 Oct 2025 at 2:03 pm UTC Likes: 3
As to productivity, AI seems to be one of those things like multitasking, where people think it makes them productive but it doesn't. There was a study done that found it seemed to make people slower at coding . . . but they thought it made them faster.
Fedora Linux 43 has officially arrived
28 Oct 2025 at 11:11 pm UTC Likes: 1
"Wasn't Fedora always the testbed for everything new?"
Not every new thing is the same. It would be new to have the default desktop do a constant strobe effect at the best frequency for inducing seizures. And yet, new though the idea is, I suspect some would have quibbles about it. Some people just don't understand progress.