
AI generation, it's everywhere! And now it's formally accepted into Fedora Linux, with the project approving a new policy on AI-assisted contributions.

The announcement came on the Fedora discussion board from Aoife Moloney, noting that "the Fedora Council has approved the latest version of the AI-Assisted Contributions policy formally". The post links to what will be the final version of the policy, and it seems at least reasonable: whoever contributes the code is required to be fully transparent about which AI tool was used for it.

Copied below is the approved policy wording:

Fedora AI-Assisted Contributions Policy
You MAY use AI assistance for contributing to Fedora, as long as you follow the principles described below.

Accountability: You MUST take the responsibility for your contribution: Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for the entirety of these contributions.

Transparency: You MUST disclose the use of AI tools when the significant part of the contribution is taken from a tool without changes. You SHOULD disclose the other uses of AI tools, where it might be useful. Routine use of assistive tools for correcting grammar and spelling, or for clarifying language, does not require disclosure.

Information about the use of AI tools will help us evaluate their impact, build new best practices and adjust existing processes.

Disclosures are made where authorship is normally indicated. For contributions tracked in git, the recommended method is an Assisted-by: commit message trailer. For other contributions, disclosure may include document preambles, design file metadata, or translation notes.

Examples:
Assisted-by: generic LLM chatbot
Assisted-by: ChatGPTv5

Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.

Large scale initiatives: The policy doesn’t cover the large scale initiatives which may significantly change the ways the project operates or lead to exponential growth in contributions in some parts of the project. Such initiatives need to be discussed separately with the Fedora Council.

Concerns about possible policy violations should be reported via private tickets to Fedora Council.
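
For anyone unfamiliar with git trailers, the disclosure mentioned above can be attached at commit time. A minimal sketch, assuming git 2.32 or newer (which added the --trailer option to git commit) and a made-up commit message purely for illustration:

git commit -m "Fix panel alignment in the settings dialog" --trailer "Assisted-by: generic LLM chatbot"

On older versions of git, simply writing the trailer as the final line of the commit message works just as well.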

This also follows a recent change to the Mesa graphics drivers' contributor guidelines, which added a note about AI as well. Discussions on whether Mesa will actually allow AI contributions still appear to be ongoing, though.

What are your thoughts on this?

24 comments

dpanter 8 hours ago
I'm utterly disgusted with every new low point Glorious Eggroll is hitting. Can't wait to hear what controversial issue they will sweep under the carpet next since apparently no rules or boundaries need be respected any more.

Promoting cryptobro scam web browser? Check. Fostering a toxic Discord community where it's OK to harass people who dare disagree with GE? Check. Pretending AI code is fine? Check. Being an arrogant jerkface in public spaces? Double check.
CyborgZeta 6 hours ago
I'm not at a point where I'm ready to quit Bazzite and Aurora over this, but I'm not sure I like this. Between this and the earlier 32-bit fiasco, it saddens me that Fedora keeps doing stuff to attract controversy after I found what I thought would be a good home on Linux. Maybe I should consider migrating to KDE's new distro once it becomes stable.


Last edited by CyborgZeta on 24 Oct 2025 at 4:02 pm UTC
tmtvl 5 hours ago
Fedora can allow or disallow whatever they want. Personally I'd sooner make a list of which models are OK to use (open code, open weights, ethically sourced training material) and which models aren't, but I suppose that's a rather tall task. I'd take a contribution assisted by StarCoder2 or Granite over a ChatGPT-assisted one any day of the week.
Grishnakh 5 hours ago
This is the correct move. I hope Fedora keep it. Opinions from non-developers and people not understanding how AIs work can be actively disregarded.
Without even touching the AI-good, AI-bad argument, I can only point to individuals, companies, and government entities who self-certify that they are good actors. To bad actors, the opportunity to abuse that trust is irresistible. When it comes to code check-ins where an AI/no-AI declaration is appended, I would want to know:

a) What third-party tests determine what is or isn't AI generated
b) Did the submitter follow the tests
c) Can the same tests be confirmed independently

This is a huge barrier. Human nature being what it is, few submitters would be willing to do those steps, fewer able to test and verify the declaration. Assuming that all self-certified AI submissions will be bad is wasteful; yet assuming that all self-certified AI submissions will be good is unreasonably risky. The recent `xz` and `npm` poisonings show that blanket trust in a system allows bad actors to do bad things.

So knowing the system is inherently flawed means that opportunities for abuse exist. I freely admit I don't have deep insight into "how AI works." But I do have pretty good insight into history, human nature, and working with flawed processes. Pragmatic checks and balances on AI generation and use are needed, so maybe we can work together to build them.