AI generation, it's everywhere! And now it's been formally accepted into Fedora Linux, with the latest approved change bringing in a new policy.
The announcement came on the Fedora discussion board from Aoife Moloney, who notes that "the Fedora Council has approved the latest version of the AI-Assisted Contributions policy formally". The post links to what will be the final version of the policy, and it seems reasonable enough, with whoever contributes the code required to be fully transparent about which AI tool was used.
Copied below is the approved policy wording:
Fedora AI-Assisted Contributions Policy
You MAY use AI assistance for contributing to Fedora, as long as you follow the principles described below.

Accountability: You MUST take the responsibility for your contribution: Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for the entirety of these contributions.
Transparency: You MUST disclose the use of AI tools when a significant part of the contribution is taken from a tool without changes. You SHOULD disclose other uses of AI tools where it might be useful. Routine use of assistive tools for correcting grammar and spelling, or for clarifying language, does not require disclosure.
Information about the use of AI tools will help us evaluate their impact, build new best practices and adjust existing processes.
Disclosures are made where authorship is normally indicated. For contributions tracked in git, the recommended method is an Assisted-by: commit message trailer. For other contributions, disclosure may include document preambles, design file metadata, or translation notes.
Examples:
Assisted-by: generic LLM chatbot
Assisted-by: ChatGPTv5

Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.
Large scale initiatives: The policy doesn’t cover large-scale initiatives which may significantly change the ways the project operates or lead to exponential growth in contributions in some parts of the project. Such initiatives need to be discussed separately with the Fedora Council.
Concerns about possible policy violations should be reported via private tickets to Fedora Council.
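To picture what that disclosure looks like in practice, here's a minimal sketch of a disclosed commit. The subject line and tool name here are hypothetical, and it assumes Git 2.32 or newer, where the `--trailer` flag can append trailers to the commit message for you:

```
# Hypothetical example: disclosing AI assistance via a commit trailer.
# Trailers sit at the end of the commit message, like Signed-off-by.
git commit -m "docs: clean up packaging guide wording" \
    --trailer "Assisted-by: generic LLM chatbot"
```

The resulting commit message simply ends with the Assisted-by: line, so reviewers and anyone browsing git log can see at a glance which contributions were machine-assisted.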
This also follows a recent change to the Mesa graphics drivers' contributor guidelines, which now include a note about AI as well. Discussions on whether Mesa will actually allow AI contributions still seem to be ongoing though.
What are your thoughts on this?
Promoting cryptobro scam web browser? Check. Fostering a toxic Discord community where it's OK to harass people who dare disagree with GE? Check. Pretending AI code is fine? Check. Being an arrogant jerkface in public spaces? Double check.
Last edited by CyborgZeta on 24 Oct 2025 at 4:02 pm UTC
This is the correct move. I hope Fedora keeps it. Opinions from non-developers and people who don't understand how AIs work can be actively disregarded.

Without even touching the AI-good, AI-bad argument, I can only point to individuals, companies, and government entities who self-certify that they are good actors. To bad actors, the opportunity to abuse that trust is irresistible. When it comes to code check-ins where the AI / no-AI declaration is appended, I would want to know:
a) What third-party tests determine what is or isn't AI generated
b) Did the submitter follow the tests
c) Can the same tests be confirmed independently
This is a huge barrier. Human nature being what it is, few submitters would be willing to do those steps, and fewer would be able to test and verify the declaration. Assuming that all self-certified AI submissions will be bad is wasteful; yet assuming that all self-certified AI submissions will be good is unreasonably risky. The recent `xz` and `npm` poisonings show that blanket trust in a system allows bad actors to do bad things.
So knowing the system is inherently flawed means that opportunities for abuse exist. I freely admit I don't have deep insight into "how AI works." But I do have pretty good insight into history, human nature, and working with flawed processes. Pragmatic checks and balances on AI generation and use are needed, so maybe we can work together to build them.



