
AI generation, it's everywhere! And now it's being formally accepted into Fedora Linux, with the latest approved change introducing a new policy.

The announcement came on the Fedora discussion board from Aoife Moloney, noting that "the Fedora Council has approved the latest version of the AI-Assisted Contributions policy formally". The post links to what will be the final version of the policy, and it seems at least reasonable, with whoever contributes the code required to be fully transparent about which AI tool was used for it.

Copied below is the approved policy wording:

Fedora AI-Assisted Contributions Policy
You MAY use AI assistance for contributing to Fedora, as long as you follow the principles described below.

Accountability: You MUST take the responsibility for your contribution: Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for the entirety of these contributions.

Transparency: You MUST disclose the use of AI tools when the significant part of the contribution is taken from a tool without changes. You SHOULD disclose the other uses of AI tools, where it might be useful. Routine use of assistive tools for correcting grammar and spelling, or for clarifying language, does not require disclosure.

Information about the use of AI tools will help us evaluate their impact, build new best practices and adjust existing processes.

Disclosures are made where authorship is normally indicated. For contributions tracked in git, the recommended method is an Assisted-by: commit message trailer. For other contributions, disclosure may include document preambles, design file metadata, or translation notes.

Examples:
Assisted-by: generic LLM chatbot
Assisted-by: ChatGPTv5
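
As a purely illustrative sketch (not part of the policy text itself): the trailer is simply the last line of the commit message, and recent versions of Git can append it for you at commit time, for example:

git commit -m "Fix example packaging bug" --trailer "Assisted-by: generic LLM chatbot"

The commit message here is hypothetical; the important part is the Assisted-by: line, which Git stores as a standard trailer at the end of the message. The --trailer option needs Git 2.32 or newer, and simply typing the line manually at the end of the message works just as well.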

Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.

Large scale initiatives: The policy doesn’t cover the large scale initiatives which may significantly change the ways the project operates or lead to exponential growth in contributions in some parts of the project. Such initiatives need to be discussed separately with the Fedora Council.

Concerns about possible policy violations should be reported via private tickets to Fedora Council.

This also follows a recent change to the Mesa graphics drivers contributor guidelines, which now includes a note about AI as well. Discussions on whether Mesa will actually allow AI contributions still seem to be ongoing, though.

What are your thoughts on this?

32 comments (page 2/2)

dpanter 6 days ago
I'm utterly disgusted with every new low point Glorious Eggroll is hitting. Can't wait to hear what controversial issue they will sweep under the carpet next since apparently no rules or boundaries need be respected any more.

Promoting cryptobro scam web browser? Check. Fostering a toxic Discord community where it's OK to harass people who dare disagree with GE? Check. Pretending AI code is fine? Check. Being an arrogant jerkface in public spaces? Double check.
CyborgZeta 6 days ago
I'm not at a point where I'm ready to quit Bazzite and Aurora over this, but I'm not sure I like this. Between this and the earlier 32-bit fiasco, it saddens me that Fedora keeps doing stuff to attract controversy after I found what I thought would be a good home on Linux. Maybe I should consider migrating to KDE's new distro once it becomes stable.


Last edited by CyborgZeta on 24 Oct 2025 at 4:02 pm UTC
tmtvl 6 days ago
Fedora can allow or disallow whatever they want. Personally I'd sooner make a list of which models are OK to use (open code, open weights, ethically sourced training material) and which models aren't, but I suppose that's a rather tall task. I'd take a contribution assisted by StarCoder2 or Granite over a ChatGPT-assisted one any day of the week.
Grishnakh 6 days ago
This is the correct move. I hope Fedora keep it. Opinions from non-developers and people not understanding how AIs work can be actively disregarded.
Without even touching the AI-good, AI-bad argument, I can only point to individuals, companies, and government entities who self-certify that they are good actors. To bad actors, the opportunity to abuse that trust is irresistible. When it comes to code check-ins and the AI/no AI declaration is appended, I would want to know:

a) What third-party tests determine what is or isn't AI generated
b) Did the submitter follow the tests
c) Can the same tests be confirmed independently

This is a huge barrier. Human nature being what it is, few submitters would be willing to do those steps, fewer able to test and verify the declaration. Assuming that all self-certified AI submissions will be bad is wasteful; yet assuming that all self-certified AI submissions will be good is unreasonably risky. The recent `xz` and `npm` poisonings show that blanket trust in a system allows bad actors to do bad things.

So knowing the system is inherently flawed means that opportunities for abuse exist. I freely admit I don't have deep insight into "how AI works." But I do have pretty good insight into history, human nature, and working with flawed processes. Pragmatic checks and balances on AI generation and use are needed, so maybe we can work together to build them.
hell0 2 days ago
I hate the AI circlejerk: the models are built by greedy corporations appropriating the work of thousands without a second thought. Then they sell it to the masses (keeping the money for themselves ofc). Then most people proceed to use "AI" to generate a flood of approximate junk they smear all over the world. The noise makes it harder to reach quality information and reinforces the use of LLM models.

But with that said, LLMs are just tools. Whether they're used for good or bad is in the hands of the wielder.

For example, Daniel Stenberg publicly (and rightfully) denounced a flood of "ai slop" security reports to cURL [External Link]. But the same person also acknowledged [External Link] the relevance of a massive report openly built using "AI-based" tools [External Link].

If anything, open source projects are the most rightful beneficiaries of LLM-powered improvements since most available models heavily leverage open source code.

I'd much rather trust a project that acknowledges LLM-based tools and handles them appropriately than one which pretends they don't exist and nobody uses them.
ivarhill a day ago
I'd much rather trust a project that acknowledges LLM-based tools and handles them appropriately than one which pretends they don't exist and nobody uses them.

This seems like a false dichotomy. Surely a project can acknowledge that LLM-based tools exist, and then choose not to use them on practical or ideological grounds (or both) - one doesn't really exclude the other.

The "LLMs are just tools" argument is one that seems to go around a lot, but rarely with any context or further explanation. A hammer is a tool, but if companies started selling hammers built through unethical labor, using materials that destroy the planet, and directly funnel money into megacorporations, it doesn't really matter if the hammer in a vacuum is just a tool or not. The context matters, and in this situation it really is impossible to separate the product from the process of creating the product and the immense harm it is causing to the planet and society at large.

Even this is ignoring the biggest issue of LLMs however, which is that we are inviting these technologies to become essential to day-to-day life and work, ignoring the fact that this puts our lives in the hands of a few companies who do not have our best interests at heart. Even if there were no ethical concerns regarding LLMs whatsoever, it is still incredibly dangerous to embrace commercial products as public services, as we have seen again and again through the advance of Big Tech.

To be fair, a lot of these problems have more to do with the underlying fabric of Big Tech more so than LLMs specifically. In that sense LLMs really are just a tool, but a tool towards an end purely benefiting Big Tech and not those who actually use them.
hell0 24 hours ago
This seems like a false dichotomy. Surely a project can acknowledge that LLM-based tools exist, and then choose not to use them on practical or ideological grounds (or both) - one doesn't really exclude the other.

Indeed, I might have worded that poorly, "handle [LLM tools] appropriately" was not meant to equal allowing their use. A ban or restriction is also an appropriate way to handle the issue.

I like your hammer analogy. And I would still argue that a blanket ban on hammers and forcing people to drive nails with rocks is barking up the wrong tree.

On the other hand, we are definitely seeing a lot of people bashing wood screws in concrete beams with their newfound unethically-made hammers. A scene which doesn't give hammers the best reputation for sure.
ivarhill 19 hours ago
I like your hammer analogy. And I would still argue that a blanket ban on hammers and forcing people to drive nails with rocks is barking up the wrong tree.

Absolutely, fair enough!

In general, I think there's more to be gained by working towards solutions compatible with free software ideals than by forcing big tech to operate in any particular way - and of course LLMs do have very real practical uses (if far more narrow than big tech makes it out to be!) which would be very useful if their downsides were addressed in whole.

My main argument would be that projects such as Fedora, which by their own mission statement exist to further free software ideals, ought to approach these issues from that angle. That doesn't mean prohibiting LLMs forever, but rather than say "You can use LLM-generated code under these criteria", a far more reasonable approach would be to say "You cannot use LLM-generated code" for now, and consider assisting other projects up- and downstream that seek to advance new and free technologies around LLMs and generative AI that actually respect these ideals, if that is something the Fedora project wants to actively help make happen faster.

After all, this is the huge strength of communally developed software - there are no profits to chase, no need to be first to market. The rise of LLMs and generative AI is really the perfect example of how projects such as Fedora can take the much wiser approach of helping to build up free and respectful foundations for these technologies and integrate them into actual development only once this point has been reached.


Last edited by ivarhill on 30 Oct 2025 at 1:52 am UTC
Kimyrielle 4 hours ago
to advance new and free technologies around LLMs and generative AI that actually respects these ideals

I am honestly not sure what about MIT-licensed (Deepseek) or Apache 2.0 (Qwen) isn't free enough. Even OpenAI has an OSS model now, if you absolutely insist on it being Western-made (it's garbage, though).
Purple Library Guy 3 hours ago
I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"
ivarhill 3 hours ago
I am honestly not sure what about MIT-licensed (Deepseek) or Apache 2.0 (Qwen) isn't free enough. Even OpenAI has an OSS model now, if you absolutely insist on it being Western-made (it's garbage, though).
I completely agree, within the context of the models themselves and the licenses they use.

There's way more to this though, both in terms of free software ideals and in terms of how to define LLMs. I think it would be fair to compare this to Microsoft's recent efforts in advancing WSL and OSS more broadly (very intentionally leaving out the FL there!) - after all, Microsoft has a lot of projects out there that theoretically adhere to open licenses and in a purely practical sense support the free software community.

However, if anyone within said community says "I'm choosing not to engage with any Microsoft-developed projects" I think almost everyone would understand why and find that reasonable even if one can find some projects that technically adhere to certain standards.

Within the LLM space, OpenAI is a good example of this as well. Sure, they provide models that by a particular definition are "open", but engaging with these models ignores the bigger context of how they came to be developed, who is furthering their development and through what means, and whether they actively strive to maximize user freedom.

And they absolutely do not - which is fine, this is more or less representative of the distinction between open source and free/libre software - but that is the metric by which I'm arguing here. I don't think it's enough to see "open source" LLMs, since that definition is purely practical in nature and ignores the bigger picture. What is really necessary is:

  • Technology that has been developed through free software standards from a foundational level. This includes not only where the technology comes from and how it is controlled, but also addressing environmental concerns! An 'open source' project can ignore these things, but an honestly libre LLM technology has to address this before anything else.

  • Models that have been developed entirely on top of these foundations, and through fully consenting use of data. Like the point before, this last matter has to be resolved before moving forward.

  • And finally, open distribution where anyone is free to adapt, use, develop on and further these technologies. This is the step that I believe you are addressing, and it is very important - but far from the whole picture.

I'm of course not trying to just reiterate FSF speaking points here - but in all honesty, this rise in LLMs and how they have been developed thus far I think really illustrates why it's important to draw a distinction between open source and free software, and why it matters to take a more holistic view.

By definition, a free/libre software approach implies caring about the user above the code, and there can be no free users if the code (directly or indirectly) contributes to a technocratic oligarchy or if there is no livable planet for users to live on. I get that this may seem a bit out of left field, but this has to be the main metric by which we look at LLMs or very soon it will be too late to even attempt any genuinely libre approaches to this entire category of technology. These are the points that companies such as OpenAI, Microsoft or Google could never make the top priority, and why even if they use open licenses, that well is poisoned by its very definition.


Last edited by ivarhill on 30 Oct 2025 at 6:00 pm UTC
Kimyrielle 2 hours ago
I think the question is more "When you ask a Large Language Model to 'write' you some code, where did that code come from and whose copyrights is it infringing?"

Well, from a purely technical point of view, the question is easy to answer: It made the code up, based on knowledge it gained from looking at other people's code. That's really all there is to it.

The legality of doing that is murky, as of today. Mostly because traditional copyright law wasn't designed with AI training in mind. Keep in mind that no trace of source material is left in the trained model, which puts the model weights outside of copyright law's reach. Several lawsuits have been filed, arguing AI training with copyrighted material to be illegal. Every single one of them so far has been tossed out by courts. In case you wonder, yes, Meta was found guilty of copyright infringement, but that wasn't about the training, it was about them torrenting books they used for the training.

Unless copyright law is getting updated (I am not seeing anything in the pipes in any relevant jurisdiction), that leaves ethical considerations. And as we know, these are very much subjective.

Same applies to the actual output. To be copyrightable, a work needs to be human created (anyone remember the famous case about the monkey selfie?). AI output is clearly not human made, so the output is not copyrightable - and thus cannot be affected or bound by any kind of license. It's legally public domain.
The one issue is if the model accidentally or on purpose produces full replicas of copyrighted/trademarked material. Queen Elsa doesn't stop being copyrighted just because an AI model drew her. Which is behind the case of Disney vs Midjourney - their model is trained on Disney's work and can reproduce it on prompt. Which - since the outputs are technically distributed when the customer downloads them - could be a copyright violation. I do actually expect Disney to win this case, but let's see. In the end, it looks like a bigger issue than it is. People could make a replica of Disney IP by copy/pasting it, without the AI detour. The result will probably be API model providers having to block people from generating copyrighted/trademarked material. Most newer models I am aware of already aren't trained on specific artists to prevent these issues.