AI generation, it's everywhere! And now it's being formally accepted into Fedora Linux, with a newly approved change bringing in a policy on AI-assisted contributions.
The announcement came on the Fedora discussion board from Aoife Moloney, who notes that "the Fedora Council has approved the latest version of the AI-Assisted Contributions policy formally". The post links to what will be the final version of the policy, and it seems at least reasonable: whoever contributes the code is required to be fully transparent about what AI tool was used for it.
Copied below is the approved policy wording:
Fedora AI-Assisted Contributions Policy
You MAY use AI assistance for contributing to Fedora, as long as you follow the principles described below.

Accountability: You MUST take the responsibility for your contribution: Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for the entirety of these contributions.
Transparency: You MUST disclose the use of AI tools when the significant part of the contribution is taken from a tool without changes. You SHOULD disclose the other uses of AI tools, where it might be useful. Routine use of assistive tools for correcting grammar and spelling, or for clarifying language, does not require disclosure.
Information about the use of AI tools will help us evaluate their impact, build new best practices and adjust existing processes.
Disclosures are made where authorship is normally indicated. For contributions tracked in git, the recommended method is an Assisted-by: commit message trailer. For other contributions, disclosure may include document preambles, design file metadata, or translation notes.
Examples:
Assisted-by: generic LLM chatbot
Assisted-by: ChatGPTv5

Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.
Large scale initiatives: The policy doesn’t cover the large scale initiatives which may significantly change the ways the project operates or lead to exponential growth in contributions in some parts of the project. Such initiatives need to be discussed separately with the Fedora Council.
Concerns about possible policy violations should be reported via private tickets to Fedora Council.
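For anyone unfamiliar with commit trailers, they are simply "Key: value" lines at the very end of a commit message, in the same place a Signed-off-by line would normally go. A rough sketch of what a disclosed commit message might look like (the commit itself is made up, not taken from the policy):

    pkg-foo: fix version parsing for pre-release builds

    The parser assumed upstream versions always have three components.

    Assisted-by: generic LLM chatbot
    Signed-off-by: Jane Example <jane@example.org>

Newer git releases also offer a --trailer option for git commit, which can append a line like this without typing it into the message by hand.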
This also follows a recent change to the Mesa graphics drivers contributor guidelines, which now include a note about AI as well. Discussions on whether Mesa will actually allow AI contributions still appear to be ongoing though.
What are your thoughts on this?
Promoting cryptobro scam web browser? Check. Fostering a toxic Discord community where it's OK to harass people who dare disagree with GE? Check. Pretending AI code is fine? Check. Being an arrogant jerkface in public spaces? Double check.
Last edited by CyborgZeta on 24 Oct 2025 at 4:02 pm UTC
This is the correct move. I hope Fedora keep it. Opinions from non-developers and people not understanding how AIs work can be actively disregarded.

Without even touching the AI-good, AI-bad argument, I can only point to individuals, companies, and government entities who self-certify that they are good actors. To bad actors, the opportunity to abuse that trust is irresistible. When it comes to code check-ins with the AI / no AI declaration appended, I would want to know:
a) What third-party tests determine what is or isn't AI generated
b) Did the submitter follow the tests
c) Can the same tests be confirmed independently
This is a huge barrier. Human nature being what it is, few submitters would be willing to do those steps, fewer able to test and verify the declaration. Assuming that all self-certified AI submissions will be bad is wasteful; yet assuming that all self-certified AI submissions will be good is unreasonably risky. The recent `xz` and `npm` poisonings show that blanket trust in a system allows bad actors to do bad things.
So knowing the system is inherently flawed means that opportunities for abuse exist. I freely admit I don't have deep insight into "how AI works." But I do have pretty good insight into history, human nature, and working with flawed processes. Pragmatic checks and balances on AI generation and use are needed, so maybe we can work together to build them.
But with that said, LLMs are just tools. Whether they're used for good or bad is in the hands of the wielder.
For example, Daniel Stenberg publicly (and rightfully) denounced a flood of "ai slop" security reports to cURL [External Link]. But the same person also acknowledged [External Link] the relevance of a massive report openly built using "AI-based" tools [External Link].
If anything, open source projects are the most rightful beneficiaries of LLM-powered improvements since most available models heavily leverage open source code.
I'd much rather trust a project that acknowledges LLM-based tools and handles them appropriately than one which pretends they don't exist and nobody uses them.
I'd much rather trust a project that acknowledges LLM-based tools and handles them appropriately than one which pretends they don't exist and nobody uses them.
This seems like a false dichotomy. Surely a project can acknowledge that LLM-based tools exist, and then choose not to use them on practical or ideological grounds (or both) - one doesn't really exclude the other.
The "LLMs are just tools" argument is one that seems to go around a lot, but rarely with any context or further explanation. A hammer is a tool, but if companies started selling hammers built through unethical labor, using materials that destroy the planet, and directly funnel money into megacorporations, it doesn't really matter if the hammer in a vacuum is just a tool or not. The context matters, and in this situation it really is impossible to separate the product from the process of creating the product and the immense harm it is causing to the planet and society at large.
Even this is ignoring the biggest issue of LLMs however, which is that we are inviting these technologies to become essential to day-to-day life and work, ignoring the fact that this puts our lives in the hands of a few companies who do not have our best interests at heart. Even if there were no ethical concerns regarding LLMs whatsoever, it is still incredibly dangerous to embrace commercial products as public services, as we have seen again and again through the advance of Big Tech.
To be fair, a lot of these problems have more to do with the underlying fabric of Big Tech more so than LLMs specifically. In that sense LLMs really are just a tool, but a tool towards an end purely benefiting Big Tech and not those who actually use them.
This seems like a false dichotomy. Surely a project can acknowledge that LLM-based tools exist, and then choose not to use them on practical or ideological grounds (or both) - one doesn't really exclude the other.
Indeed, I might have worded that poorly, "handle [LLM tools] appropriately" was not meant to equal allowing their use. A ban or restriction is also an appropriate way to handle the issue.
I like your hammer analogy. And I would still argue that a blanket ban on hammers and forcing people to drive nails with rocks is barking up the wrong tree.
On the other hand, we are definitely seeing a lot of people bashing wood screws in concrete beams with their newfound unethically-made hammers. A scene which doesn't give hammers the best reputation for sure.
I like your hammer analogy. And I would still argue that a blanket ban on hammers and forcing people to drive nails with rocks is barking up the wrong tree.
Absolutely, fair enough!
In general, I think there's more to be gained by working towards solutions compatible with free software ideals than by forcing big tech to operate in any particular way - and of course LLMs do have very real practical uses (if far narrower than big tech makes them out to be!) which would be very useful if their downsides were addressed in full.
My main argument would be that projects such as Fedora, which by their own mission statement exist to further free software ideals, ought to approach these issues from that angle. That doesn't mean prohibiting LLMs forever. But rather than saying "You can use LLM-generated code under these criteria", a far more reasonable approach for now would be "You cannot use LLM-generated code", while assisting other projects up- and downstream that seek to advance new and free technologies around LLMs and generative AI which actually respect these ideals - if that is something the Fedora project wants to actively help make happen faster.
After all, this is the huge strength of communally developed software - there are no profits to chase, no need to be first to market. The rise of LLMs and generative AI is really the perfect example of how projects such as Fedora can take the much wiser approach of helping to build up free and respectful foundations for these technologies, and only integrating them into actual development once that point has been reached.
Last edited by ivarhill on 30 Oct 2025 at 1:52 am UTC