Latest Comments by Purple Library Guy
Fedora Linux project agrees to allow AI-assisted contributions with a new policy
30 Oct 2025 at 5:46 pm UTC Likes: 1
I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"
Fedora Linux 43 has officially arrived
30 Oct 2025 at 5:43 pm UTC Likes: 1
I really think you're misunderstanding the technology (specifically Large Language Model, generative "AI") and what it can do. The thing is that while there's a real thing there, and it can do some interesting things, it cannot actually do most of the transformational things that are claimed about it, and some of the key stuff that it supposedly does, it actually kind of doesn't. And while proponents will say sure, but it's a technology in its infancy . . . it actually isn't, and it seems to have plateaued in its capabilities.
So like, if it could actually be used to vastly increase coding productivity, then it would be here to stay in that function and perhaps nobody would be able to do anything about it. Firms that use it would outcompete firms that didn't and so on. But if it doesn't increase coding productivity, and there is significant evidence that actually it does not and may even reduce it, then it's mostly just a fad in that space.
Similarly for jobs--if the job is something like making third-rate rehashed articles for publication, then yes, AI is disrupting that space. But most jobs require making decisions--not necessarily important decisions, but all those little day-to-day decisions, often in chains where making one little decision leads to the next one. And the AI people are touting AI "agents" to do this. If those worked, then there are a lot of jobs generative AI would be able to disrupt. But they don't; they're complete and utter crap at that stuff. And we're not talking about the way asking AI questions sometimes gets you a hallucinated nonsense answer. We're talking tiny percentages of success, pretty much random level. Agents just don't work.
The implication of that is that AI just can't replace most jobs. Companies jumping on the hype wagon and doing it anyway will have problems. So there again, it's probably not "here to stay" in the "replacing ordinary jobs in organizations that do real things" sector.
Again, as for its continuing to improve . . . the thing is that what they did to make the big LLMs is not really based on very new ideas. People have been studying this stuff for a long time, and as far as I can make out LLMs are based on a fairly normal line of research that computer scientists had been thinking about for a while. It's just the first time someone really threw the big money at them and made the Language Models really Large. So it's not as infant a technology as it seems.

Further, it shares a difficulty with all these sorts of software-evolution approaches: You can't iteratively improve them in the normal way because nobody wrote the program and nobody understands it. So you can't just go "I'll clean up this bit of code so it runs better", "I'll add this feature, it will be fairly easy because it hooks into the API I made just so that I'd be able to add features", "I'll fix this error" or that sort of thing, because you don't know what's in there. All you can do is train again with slightly different parameters and hope something good comes out. Or scrape even more data to make the Language Model even Larger, or curate it a bit differently. But they're about at the limits of size already.

And they also seem to have hit the limits of what kind of thing LLMs are willing to do. They have an elephant; it is fine at walking, but to get it to do the transformative things they want, they need it to be able to climb, and that's not what elephants do. Even the hallucinations seem to be kind of baked into what gives the LLMs the ability to say lots of different things in the first place. At this point I think LLMs are a surprisingly mature technology for their apparent age, one that has hit a development plateau.
So bottom line, I think you're just simply wrong. Whether I wanted generative AI to replace everyone's job or not, it is not going to, and it may well not be "here to stay" even in some roles it's being used for at the moment. It's being used in those roles not because it is good at them, but because of the hype; if and when the hype goes away, its tide will recede.
Secondarily, generative AI as done by the big hyped Western companies is a bubble. Its costs are far greater than its revenue, and nobody seems to have any plans to change that very much. The key AI companies seem to be run by a bunch of Sam Bankman-Frieds: hucksters and grifters who keep changing their stories to whatever sounds impressive in the moment. So those companies will go under, and when they go under and stop paying their staff and utility bills, all their server farms will stop answering queries. And when their server farms stop answering queries, the companies that had been using them won't be able to make queries. At that point, for those companies, generative AI will not be here to stay even if it was actually working for them. So in that limited sense, generative AI will not be here to stay. Although the Chinese stuff will still be going.
I expect in the future some other AI technology will come along that does more things and impacts more areas. But it will be another technology, not this one.
Self-driving cars are also a different technology entirely. Yes, they're both called "AI" even though they aren't really, and they're both kind of black boxes that get "trained" rather than being actual programs that anyone really understands or can maintain, but beyond that I don't think there's a ton of similarity. Self-driving cars also seemed to have a lot of promise, also turn out to actually kind of suck and also seem to have run into a technological plateau, so the grand plans for them have also stalled out rather, but they're a separate technology doing that pattern for separate reasons.
Fedora Linux 43 has officially arrived
29 Oct 2025 at 11:22 pm UTC Likes: 2
Well, I will say one thing: The bit about automation increasing people's free time is a bit of a mirage. In practice, that doesn't happen. At the local level, if automation allows for greater productivity, workers are expected to produce more. At the global level, if automation allows for greater productivity, you don't get a shorter work week or more money for employees. You get some combination of higher unemployment, creation of demand such as by marketing more luxuries or planned obsolescence, and bullshit jobs. More broadly still, creation of more surplus normally just means that more surplus goes to the rich. Productivity has been decoupled from income or leisure since at least 1980--basically, since the fall of "new deal" and "social democratic" thinking. I mean, when most women in North America joined the workforce and so typical nuclear families went from one "breadwinner" to two, in theory that should have meant both could work half time. Instead, the cost of living was changed so that maintaining a half decent middle class household took two incomes.
Maybe with some kind of bottom-up socialism, automation would result in broadly shared prosperity and leisure. But with the system we have, not so much. This is why unions so often end up opposing tech change--their experience is that it leads to speedup and layoffs, while the productivity increase does nothing for them. Sure, it may make the firm more competitive . . . but that's a Red Queen's race for workers: They run faster and faster just to stay in the same place and shovel a few more billion to Jeff Bezos or whoever.
As Amazon cut thousands of jobs, New World: Aeternum will see no more updates
29 Oct 2025 at 2:26 pm UTC Likes: 7
I don't think that's going to work out the way top executives think it will. My prediction: Managers and executives will have the clout to avoid being laid off and will instead lay off people who do the real work. Then, they will find out that AI can't do most of that real work, and a lot of stuff that is supposed to be getting done will not.
Fedora Linux 43 has officially arrived
29 Oct 2025 at 2:03 pm UTC Likes: 3
As to productivity, AI seems to be one of those things like multitasking, where people think it makes them productive but it doesn't. There was a study done that found it seemed to make people slower at coding . . . but they thought it made them faster.
Fedora Linux 43 has officially arrived
28 Oct 2025 at 11:11 pm UTC Likes: 1
Wasn't Fedora always the testbed for everything new?
Not every new thing is the same.
It would be new to have the default desktop do a constant strobe effect at the best frequency for inducing seizures. And yet, new though the idea is, I suspect some would have quibbles about it. Some people just don't understand progress.
Gaijin Entertainment announced EdenSpark, an open source "AI-assisted" platform for making games
28 Oct 2025 at 3:37 am UTC Likes: 1
NFTs died. Crypto's still griftin' away pretty hard, especially in the US--buying up politicians and deregulation like anything.
GOG asking for more donations from gamers with the new GOG Patrons program
25 Oct 2025 at 7:33 pm UTC Likes: 7
The whole idea that a for-profit company that frequently shifts its strategies, or gets new executives with different ideas about how to make a profit, is doing preservation has always seemed a bit underwhelming to me. If there is ever a conflict between "preservation" and "making or saving money" . . . which seems fairly likely . . . "preservation" is not the primary mission, "making money" is. And while some executives may value the idea that a good reputation may lead to more revenue, others pretty much do not. Preservation is a long term thing, and corporations are not entities that keep doing the same stuff in the long term.
I do also have an instinctive "So, this company wants to make more money by just having people . . . give it to them for free?" If I give money it's gonna be to a charity. GOG may like trying to blur the line to pretend to be sort of partly a charity . . . but they aren't. They're a thing that's there to feed money to the shareholders or CEO or whatever. And again, I don't see why I should be giving those guys my money for nothing.
This isn't about not liking GOG in particular or anything. Valve are one of my least-unfavourite companies, but I still wouldn't donate to them on Patreon either. If a company wants my money, it can offer me a product in return.
Krafton (PUBG, Subnautica, inZOI) becoming an "AI-First" company
24 Oct 2025 at 3:01 pm UTC Likes: 1
They're going to fire the managers and the devs will be given even more space to be creative!
A lot of managerial/exec jobs are among the few that probably COULD be automated with AI. I mean, if there's one thing AI can probably do fine, it's belt out a bunch of buzzword bingo that doesn't mean anything in particular, suitable for deployment at a pointless meeting. It probably wouldn't be able to make actual executive decisions, but that's a bonus, because when that kind of executive decides the buzzwords should have real-world impacts is when things start going really badly.
Right?!?
Amazon Luna cloud gaming relaunched, with Prime Gaming merged in and a new AI game
23 Oct 2025 at 7:30 pm UTC Likes: 1
Snoop Dogg lends his voice, looks, and personality
I'm not really up on pop culture . . . does Snoop Dogg have a personality?