Latest Comments by ivarhill
It just keeps getting worse - Firefox to "evolve into a modern AI browser"
17 Dec 2025 at 7:47 pm UTC Likes: 2
I've been using LibreWolf for the past year or so and have absolutely no complaints - definitely a strong recommend!
Humble Choice for November 2025 has Total War: WARHAMMER III
5 Nov 2025 at 6:13 pm UTC Likes: 2
Really been enjoying TW:WH quite a lot over the years, though I really wish they had some smaller-scale campaigns. The Immortal Empires mode can be great fun, but it's also a massive map with many hundreds of factions, which places big demands both on mental focus and on patience with longer turn times. The story campaigns in both Warhammer 2 and 3 are nice enough, but they're a lot more directly narrative-driven, and even they are quite huge in scope.
I would really love the ability to just pick a smaller sub-region of the world to play an IE-style freeform campaign in. A smaller world would mean better performance, shorter turn times, and a more focused style of play, so it'd be a really nice complement to the larger Immortal Empires - but I'm not keeping my hopes up...
Still, the desire for a smaller, more compact campaign notwithstanding, it's a great game and has lots of really fun parts to it! :grin:
Fedora Linux project agrees to allow AI-assisted contributions with a new policy
30 Oct 2025 at 5:50 pm UTC Likes: 2
"I am honestly not sure what about MIT-licensed (Deepseek) or Apache 2.0 (Qwen) isn't free enough. Even OpenAI has an OSS model now, if you absolutely insist on it being Western-made (it's garbage, though)."
I completely agree, within the context of the models themselves and the licenses they use.
There's way more to this though, both in terms of free software ideals and in terms of how to define LLMs. I think it would be fair to compare this to Microsoft's recent efforts in advancing WSL and OSS more broadly (very intentionally leaving out the FL there!) - after all, Microsoft has a lot of projects out there that theoretically adhere to open licenses and, in a purely practical sense, support the free software community.
However, if anyone within said community says "I'm choosing not to engage with any Microsoft-developed projects" I think almost everyone would understand why and find that reasonable even if one can find some projects that technically adhere to certain standards.
Within the LLM space, OpenAI is a good example of this as well. Sure, they provide models that by a particular definition are "open", but engaging with these models ignores the bigger context of how they came to be developed, who is furthering their development and through what means, and whether they actively strive to maximize user freedom.
And they absolutely do not - which is fine; this is more or less representative of the distinction between open source and free/libre software - but that is the metric by which I'm arguing here. I don't think it's enough to see "open source" LLMs, since that definition is purely practical in nature and ignores the bigger picture. What is really necessary is:
- Technology that has been developed through free software standards from a foundational level. This includes not only where the technology comes from and how it is controlled, but also addressing environmental concerns! An 'open source' project can ignore these things, but an honestly libre LLM technology has to address this before anything else.
- Models that have been developed entirely on top of these foundations, and through fully consensual use of data. Like the point before, this matter has to be resolved before moving forward.
- And finally, open distribution where anyone is free to adapt, use, develop on and further these technologies. This is the step that I believe you are addressing, and it is very important - but far from the whole picture.
I'm of course not trying to just reiterate FSF speaking points here :grin: - but in all honesty, this rise in LLMs and how they have been developed thus far I think really illustrates why it's important to draw a distinction between open source and free software, and why it matters to take a more holistic view.
By definition, a free/libre software approach implies caring about the user above the code, and there can be no free users if the code (directly or indirectly) contributes to a technocratic oligarchy, or if there is no livable planet left for those users. I get that this may seem a bit out of left field, but this has to be the main metric by which we look at LLMs, or very soon it will be too late to even attempt any genuinely libre approaches to this entire category of technology. These are the points that companies such as OpenAI, Microsoft or Google could never make their top priority, and why, even if they use open licenses, that well is poisoned by its very definition.
Fedora Linux project agrees to allow AI-assisted contributions with a new policy
30 Oct 2025 at 1:36 am UTC Likes: 3
"I like your hammer analogy. And I would still argue that a blanket ban on hammers and forcing people to drive nails with rocks is barking up the wrong tree."
Absolutely, fair enough! :grin:
In general, I think there's more to be gained by working towards solutions compatible with free software ideals than by forcing big tech to operate in any particular way - and of course LLMs do have very real practical uses (if far narrower than big tech makes them out to be!), which would be genuinely useful if their downsides were fully addressed.
My main argument would be that projects such as Fedora, which by their own mission statement exist to further free software ideals, ought to approach these issues from that angle. That doesn't mean prohibiting LLMs forever. But rather than saying "You can use LLM-generated code under these criteria", a far more reasonable approach would be to say "You cannot use LLM-generated code" for now, and to consider assisting other projects up- and downstream that seek to advance new and free technologies around LLMs and generative AI that actually respect these ideals - if that is something the Fedora project wants to actively help make happen faster.
After all, this is the huge strength of communally developed software - there are no profits to chase, no need to be first to market. The rise of LLMs and generative AI is really the perfect example of how projects such as Fedora can take the much wiser approach of helping to build up free and respectful foundations for these technologies, and integrating them into actual development only once that point has been reached.
Fedora Linux project agrees to allow AI-assisted contributions with a new policy
29 Oct 2025 at 5:54 pm UTC Likes: 4
The "LLMs are just tools" argument is one that seems to go around a lot, but rarely with any context or further explanation. A hammer is a tool, but if companies started selling hammers built through unethical labor, using materials that destroy the planet, and directly funnel money into megacorporations, it doesn't really matter if the hammer in a vacuum is just a tool or not. The context matters, and in this situation it really is impossible to separate the product from the process of creating the product and the immense harm it is causing to the planet and society at large.
Even this is ignoring the biggest issue of LLMs however, which is that we are inviting these technologies to become essential to day-to-day life and work, ignoring the fact that this puts our lives in the hands of a few companies who do not have our best interests at heart. Even if there were no ethical concerns regarding LLMs whatsoever, it is still incredibly dangerous to embrace commercial products as public services, as we have seen again and again through the advance of Big Tech.
To be fair, a lot of these problems have more to do with the underlying fabric of Big Tech more so than LLMs specifically. In that sense LLMs really are just a tool, but a tool towards an end purely benefiting Big Tech and not those who actually use them.
29 Oct 2025 at 5:54 pm UTC Likes: 4
"I'd much rather trust a project that acknowledges LLM-based tools and handles them appropriately than one which pretends they don't exist and nobody uses them."
This seems like a false dichotomy. Surely a project can acknowledge that LLM-based tools exist, and then choose not to use them on practical or ideological grounds (or both) - one doesn't really exclude the other.
The "LLMs are just tools" argument is one that seems to go around a lot, but rarely with any context or further explanation. A hammer is a tool, but if companies started selling hammers built through unethical labor, using materials that destroy the planet, and directly funnel money into megacorporations, it doesn't really matter if the hammer in a vacuum is just a tool or not. The context matters, and in this situation it really is impossible to separate the product from the process of creating the product and the immense harm it is causing to the planet and society at large.
Even this ignores the biggest issue with LLMs, however, which is that we are inviting these technologies to become essential to day-to-day life and work while overlooking the fact that this puts our lives in the hands of a few companies who do not have our best interests at heart. Even if there were no ethical concerns regarding LLMs whatsoever, it would still be incredibly dangerous to embrace commercial products as public services, as we have seen again and again through the advance of Big Tech.
To be fair, a lot of these problems have more to do with the underlying fabric of Big Tech more so than LLMs specifically. In that sense LLMs really are just a tool, but a tool towards an end purely benefiting Big Tech and not those who actually use them.
The new survival game VEIN looks awesome with intelligent AI and interactions with nearly everything
28 Oct 2025 at 5:04 pm UTC Likes: 6
I'm running into this frustration on a near-daily basis as a game dev! :grin:
What I've personally settled on is to just use more specific descriptors, which is honestly a good thing since it's less ambiguous anyway. Mostly this means referring to "NPC behavior", since that's the most common use case. Internally, "AI" is actually far from a standard term in most game dev contexts, with terms like state machines, nav systems, etc. being used instead, since they are a lot more specific about what is actually being talked about.
Of course plenty of games aren't really focused around "NPCs" as a concept in that way, for instance strategy games - but there too it's pretty easy to just describe what's being talked about ("This RTS has really advanced computer opponents") so really, I think almost all of this comes down to just inserting a little specificity to things :grin:
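To make that concrete, here's a minimal sketch of the kind of thing "NPC behavior" usually means under the hood - a tiny finite state machine, written in Python purely for illustration (all the names here, like Guard and State.PATROL, and the thresholds are invented for the example; real engines have their own state machine or behavior tree systems):
```python
# A minimal sketch of NPC "AI" as a finite state machine.
# Everything here is invented for illustration - it just shows the shape
# of the idea: explicit hand-written rules, nothing learned or generative.
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    SEARCH = auto()

class Guard:
    def __init__(self, sight_range: float = 10.0, search_time: int = 5):
        self.state = State.PATROL
        self.sight_range = sight_range
        self.search_time = search_time  # ticks spent searching before giving up
        self.search_ticks = 0

    def update(self, distance_to_player: float, player_visible: bool) -> None:
        # One tick of "NPC behavior": transition between states based on
        # simple sensory checks.
        if self.state is State.PATROL:
            if player_visible and distance_to_player <= self.sight_range:
                self.state = State.CHASE
        elif self.state is State.CHASE:
            if not player_visible:
                self.state = State.SEARCH
                self.search_ticks = 0
        elif self.state is State.SEARCH:
            if player_visible:
                self.state = State.CHASE
            else:
                self.search_ticks += 1
                if self.search_ticks >= self.search_time:
                    self.state = State.PATROL  # lost the player, back to patrolling

guard = Guard()
guard.update(distance_to_player=5.0, player_visible=True)
print(guard.state)  # State.CHASE
```
Nothing about this is "intelligent" in the generative-AI sense - it's just explicit rules evaluated every tick, which is exactly why the more specific terminology is clearer.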
Fedora Linux project agrees to allow AI-assisted contributions with a new policy
23 Oct 2025 at 11:09 pm UTC Likes: 4
"It just seems to invite an unnecessary risk for Fedora at this point. Humans breaking copyright is something that courts have dealt with, and we have a good idea of how that will work out, but AI? The wise and responsible thing to do is sit quietly in the boat until courts have decided how to classify AI-generated content and who is legally responsible for the output."
I entirely agree, and the copyright aspect of things would on its own be enough of a reason for this policy to cause a lot of issues.
To be clear though - while the comparison is apt when taken at face value, I also think it's a good comparison in a more abstract way. More specifically, I think the FOSS community really ought to be reacting to this kind of policy the same way it would to a policy that overtly allowed proprietary code to be copied into Fedora - not just because it's similar in practice, but more importantly because it is an equal violation of Free Software principles and ideals.
Beyond the matter of copyright, there's a huge array of issues with LLMs - their development and history, their application in practice - and much of this is tied directly to things antithetical to libre software development. One can discuss the technological, functional, ethical, sociopolitical etc. aspects of generative AI, and there are clearly nuanced arguments in places, but when looking at it specifically through the lens of Free Software it really should be a complete non-issue.
Of course it isn't, and I understand why, but it's a shame.
Fedora Linux project agrees to allow AI-assisted contributions with a new policy
23 Oct 2025 at 10:40 pm UTC Likes: 10
This shouldn't even be a contentious matter, and the Fedora Project's take is honestly mind-boggling.
This policy is comparable to saying "You're allowed to copy-paste parts of proprietary codebases into Fedora contributions, as long as you leave a comment saying which proprietary product you are copying from" - which is clearly ridiculous. That's not to say the use of LLMs (and other generative AI technologies) is literally the same thing, but it absolutely violates Free Software principles in a comparable way.
Amazingly, the policy even partially addresses this, in that it specifies that assistance such as grammar corrections does not need disclosure. This part feels reasonable, and somewhat comparable to, say, allowing the use of proprietary IDEs to write the code that is being contributed to the project. Obviously some ground has to be ceded, and while some (myself included) might take issue even with this usage of LLMs, this nonetheless feels like where it is realistic to draw the line.
If this policy had ended things there, and simply stated "any contribution of LLM-generated code is prohibited, except for grammar corrections and similar clarifications", that would have been an unfortunate but understandable compromise for the project to make. Looking at the actual policy, though, it's legitimately hard for me to grasp how it was approached from a Free Software perspective.
Of course the Fedora Project is not, say, the FSF, and decisions have to be practical as well as ideological. To so openly invite technologies so hostile to the fundamentals of Free Software, however - heck, going as far as specifically suggesting OpenAI's ChatGPT, a proprietary product from an extremely user-hostile corporation, as an example in the policy - makes me think we likely haven't seen the last of the conversations around this change.
Hopefully some degree of sanity will prevail.
Some game developers are far too shameless about generative AI use
14 Aug 2025 at 4:41 pm UTC Likes: 11
I also want to repeat what others have said - thank you Liam for taking a reasonable stance on this!
Beyond the vast ethical, legal and environmental problems that have already been pointed out, I'd also like to highlight a large issue with this technology that is particularly relevant for this and other communities adjacent to Linux and Free Software - its intrinsic ties to big tech as a whole.
I think the largest issue with generative AI today is the way in which it is pushed as a to-be-essential part of much of day-to-day life, without allowing pause to think about who actually runs these services. We more or less missed the boat in this regard with the advent of social media and smartphones, and assuming this bubble does not burst for some time, it feels like the same mistake is about to be repeated once more.
I wrote a short piece on this back in 2023, which honestly I think has only gotten more relevant over time:
https://ivarhill.com/its-not-about-ai/
All this being said, I personally believe (or at least hope!) that the bubble will indeed burst - or maybe more accurately that the use for these technologies will plateau to a less speculative level which at least leaves some breathing room. Time will tell, but for now I'm certainly firmly in the "not interested in using generative AI unless it's both ethical and libre" camp!
March 25 reminder - GamingOnLinux needs your support
2 Mar 2025 at 8:37 pm UTC Likes: 4
In the same boat here as well, though at least I always enjoy reading the comments on each article! Hopefully at some point in the future it will once again be legally viable to host an independent forum. Wishing you all the best, Liam! :)