Latest Comments by Purple Library Guy
Valve reveal the new Steam Frame, Steam Controller and Steam Machine with SteamOS
14 Nov 2025 at 5:53 pm UTC
I didn't even know the mouse had an orientation. Goes to show what kinds of games I (don't) play. 'Cause like, in normal use, it doesn't matter how you twist the mouse around, the cursor arrow or whatever still points the same way. I didn't know the verticality of the mouse ever mattered at all for anything, or was even detected.
Anti-cheat will still be one of the biggest problems for the new Steam Machine
14 Nov 2025 at 5:10 pm UTC Likes: 1
Say, by modern standards of file sizes, the kernel isn't actually very big, is it? So OK, this is kind of ludicrous, but . . . imagine that the Linux version of Easy Anti-Cheat or whatever DOWNLOADED A CUSTOM KERNEL every time you logged into the game, and you played the game (and only the game) on that kernel, probably in a sandbox of some sort, and it got deleted after your session was over. And the custom kernel would be constantly changing so they could tell whether you were using the latest one (and then, yeah, EAC's servers got hacked and malware got put in, but that was only that one time :grin:).
I mean, clearly it would turn the "rootkit" problem up to the max, but from the game developers' perspective it would be the most trustable anti-cheat in town. And you'd have to wait for the download every damn time you wanted to play the game, but . . . if the kernel is pretty small, it wouldn't be that bad.
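Just to make the hand-waving a bit more concrete, the client side of that idea would roughly be "fetch today's kernel, check it against what the server says it should be, then boot into it for the session". Here's a sketch of only the fetch-and-verify part; the vendor URL, filenames and endpoint are entirely made up, and the actual boot-into-sandbox step is left as a comment because that's where all the real work (and the rootkit worry) lives:

```python
# Hypothetical sketch: download the anti-cheat vendor's per-session kernel
# image, verify its SHA-256 digest against what the server publishes, and
# stash it in a throwaway file. Everything about the vendor is invented.
import hashlib
import tempfile
import urllib.request

VENDOR = "https://anticheat.example.com"  # made-up vendor endpoint


def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()


def download_session_kernel() -> str:
    # The server would publish the current kernel image plus its digest.
    expected = fetch(f"{VENDOR}/session-kernel.sha256").decode().strip()
    image = fetch(f"{VENDOR}/session-kernel.img")
    actual = hashlib.sha256(image).hexdigest()
    if actual != expected:
        raise RuntimeError("session kernel failed integrity check")
    # Keep it somewhere temporary; it gets thrown away after the session.
    with tempfile.NamedTemporaryFile(suffix=".img", delete=False) as f:
        f.write(image)
        return f.name


if __name__ == "__main__":
    path = download_session_kernel()
    print(f"verified session kernel at {path}; hand off to the sandbox/VM here")
```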
Valve reveal the new Steam Frame, Steam Controller and Steam Machine with SteamOS
13 Nov 2025 at 4:07 am UTC Likes: 4
As to Steam Machine performance, I guess my question is: What's the resolution on most living room TVs?
Final Sentence is a unique horror battle royale where you're all on typewriters
11 Nov 2025 at 9:45 pm UTC Likes: 4
I might not do too bad at this. I'm a sort of mediocre-to-decent touch typist, but that probably puts me leagues ahead of the hunt-and-peck-with-two-thumbs phone kids. Of course they probably won't play this game . . .
Battlestar Galactica Deadlock is getting delisted starting November 15
7 Nov 2025 at 3:49 pm UTC Likes: 2
Ahhh, hardly anyone remembers the original TV show . . . and deservedly so! After the original movie, which at the time seemed almost as big as the original Star Wars, they did a TV show, which was actually not bad for a while, but sank deeper and deeper in cheese as it went along until I gave up. I don't know how it ended, or if it really ended at all as opposed to getting a sudden axe.
Scribbly, open-world action adventure Scrabdackle arrives December 2
5 Nov 2025 at 5:58 pm UTC Likes: 1
I feel like I've been hearing about this game for a long time.
Linux gamers on Steam finally cross over the 3% mark
3 Nov 2025 at 5:41 am UTC Likes: 5
So at current pace it looks like 4% in a bit less than 2 years, maybe 5% in 3 years and a bit. But a bit of acceleration is looking quite possible. In a world where I'm pessimistic about a whole lot of stuff, Linux progress is a welcome spot of optimism. Let's go!
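(That's just a straight-line extrapolation. Here's the back-of-envelope arithmetic behind it, where both the current share and the monthly gain are rough guesses of mine rather than official survey figures:)

```python
# Back-of-envelope linear extrapolation of the Steam survey Linux share.
# Both numbers below are rough guesses, not official figures.
current_share = 3.05   # percent, roughly where the survey sits now (assumption)
monthly_gain = 0.045   # percentage points gained per month (assumption)

for target in (4.0, 5.0):
    months = (target - current_share) / monthly_gain
    print(f"{target:.0f}%: about {months:.0f} months (~{months / 12:.1f} years)")
```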
The extraction shooter ARC Raiders is out and appears to work on Linux
30 Oct 2025 at 7:44 pm UTC Likes: 1
Is text-to-speech actually the same technology as generative AI? I feel like it should be a different thing, but I don't know.
Fedora Linux project agrees to allow AI-assisted contributions with a new policy
30 Oct 2025 at 5:46 pm UTC Likes: 1
I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"
Fedora Linux 43 has officially arrived
30 Oct 2025 at 5:43 pm UTC Likes: 1
I really think you're misunderstanding the technology (specifically Large Language Model generative "AI") and what it can do. The thing is that while there's a real thing there, and it can do some interesting things, it cannot actually do most of the transformational things that are claimed about it, and some of the key stuff that it supposedly does, it actually kind of doesn't. And while proponents will say sure, but it's a technology in its infancy . . . it actually isn't, and it seems to have plateaued in its capabilities.
So like, if it could actually be used to vastly increase coding productivity, then it would be here to stay in that function and perhaps nobody would be able to do anything about it. Firms that use it would outcompete firms that didn't and so on. But if it doesn't increase coding productivity, and there is significant evidence that actually it does not and may even reduce it, then it's mostly just a fad in that space.
Similarly for jobs--if the job is something like making third-rate rehashed articles for publication, then yes, AI is disrupting that space. But most jobs require making decisions--not necessarily important decisions, but all those little day-to-day decisions, often in chains where making one little decision leads to the next one. And the AI people are touting AI "agents" to do this. If those worked, there are a lot of jobs generative AI would be able to disrupt. But they don't; they're complete and utter crap at that stuff. And we're not talking about the kind of failure rate you get when you ask an AI a question and it sometimes hallucinates a nonsense answer. We're talking tiny percentages of success, pretty much random level. Agents just don't work.
The implication of that is that AI just can't replace most jobs. Companies jumping on the hype wagon and doing it anyway will have problems. So there again, it's probably not "here to stay" in the "replacing ordinary jobs in organizations that do real things" sector.
Again, as for its continuing to improve . . . the thing is that what they did to make the big LLMs is not really based on very new ideas. People have been studying this stuff for a long time, and as far as I can make out LLMs are based on a fairly normal line of research that computer scientists had been thinking about for a while. It's just the first time someone really threw the big money at them and made the Language Models really Large. So it's not as infant a technology as it seems.

Further, it shares a difficulty with all these sorts of software-evolution approaches: you can't iteratively improve them in the normal way, because nobody wrote the program and nobody understands it. So you can't just go "I'll clean up this bit of code so it runs better", "I'll add this feature, it will be fairly easy because it hooks into the API I made just so that I'd be able to add features", "I'll fix this error" or that sort of thing, because you don't know what's in there. All you can do is train again with slightly different parameters and hope something good comes out. Or scrape even more data to make the Language Model even Larger, or curate it a bit differently. But they're about at the limits of size already.

And they also seem to have hit the limits of what kind of thing LLMs are able to do. They have an elephant; it is fine at walking, but to get it to do the transformative things they want, they need it to be able to climb, and that's not what elephants do. Even the hallucinations seem to be kind of baked into what gives the LLMs the ability to say lots of different things in the first place. At this point I think LLMs are a surprisingly mature technology for their apparent age, one that has hit a development plateau.
So bottom line, I think you're simply wrong. Whether I wanted generative AI to replace everyone's job or not, it is not going to, and it may well not be "here to stay" even in some roles it's being used for at the moment. It's being used in those roles not because it is good at them, but because of the hype; if and when the hype goes away, its tide will recede.
Secondarily, generative AI as done by the big hyped Western companies is a bubble. Its costs are far greater than its revenue, and nobody seems to have any plans to change that very much. The key AI companies seem to be run by a bunch of Sam Bankman-Frieds: hucksters and grifters who keep changing their stories to whatever sounds impressive in the moment. So those companies will go under, and when they go under and stop paying their staff and utility bills, all their server farms will stop answering queries. And when their server farms stop answering queries, the companies that had been using them won't be able to make queries. At that point, for those companies, generative AI will not be here to stay even if it was actually working for them. So in that limited sense, generative AI will not be here to stay. Although the Chinese stuff will still be going.
I expect in the future some other AI technology will come along that does more things and impacts more areas. But it will be another technology, not this one.
Self-driving cars are also a different technology entirely. Yes, they're both called "AI" even though they aren't really, and they're both kind of black boxes that get "trained" rather than being actual programs that anyone really understands or can maintain, but beyond that I don't think there's a ton of similarity. Self-driving cars also seemed to have a lot of promise, also turned out to actually kind of suck, and also seem to have run into a technological plateau, so the grand plans for them have also rather stalled out; but they're a separate technology following that pattern for separate reasons.