Recently GamingOnLinux highlighted Canonical's plans for adding AI features into Ubuntu Linux, and naturally this has caused plenty of concern.
People are right to be concerned, as we've all seen the horror stories about AI going rogue and doing all sorts of stupid things. From deleting entire databases, to security issues - there's a lot to think about. Providing some clarifications on what the plans are going forward, Jon Seager, the VP Engineering for Canonical, replied to the original post noting some important bullet points about it:
- On the idea of a kill switch: while I said that we won’t add a “global kill switch”, all of these capabilities will be delivered as Snaps to the OS, layered on top of the existing Ubuntu stack. That means there will always be the option of removing those Snaps - which I suppose acts as a sort of kill switch for the features we’re planning on shipping.
- Opt-in vs Opt-out: my plan is to introduce AI-backed features as a “preview” on a strictly opt-in basis in 26.10. In subsequent releases, my plan is to have a step in the initial setup wizard that allows the user to choose whether or not they’d like the AI-native features enabled. Because of the size of most LLMs, we simply couldn’t ship them in the installer anyway, so opting out at first run is simple: they just won’t be there.
- On cloud providers: there appears to be some concern about “sending logs to the cloud” and such. To be clear, this will not be part of our plans. Default configurations of these tools will always be to use local inference against local models. In order to use cloud-based inference, you would need to explicitly configure that, and provide an API token or other credential
At least this seems like a sane plan, if there are going to be AI features available. Not there by default, fully opt-in and easy to remove. Making it easy for both sides to deal with: those who want it, and those who do not.
What about Canonical shipping code that was created or co-authored by AI? That will be a thing in Ubuntu. Seager mentioned "in reality we will be doing this", backing it up by mentioning how "Even foundational projects in the ecosystem like the Kernel itself now have policies around how to govern this, and will accept tasteful, correct contributions that have been authored with AI".
For a Linux distribution, getting away from AI code is going to be increasingly impossible it seems.
They also noted how none of this is in the recent Ubuntu 26.04 LTS release, these are just future plans for Ubuntu 26.10 and beyond.
But never before that.
*ahem*
More seriously, if they stick to their word about opt-in, visible onboarding, and all that, there's little room to complain for. Sure, it's another piece in the "AI everywhere", but the option is likely to always have materialized somewhere; better have it with full human control.
At least, as long as they stick to their words, which is not granted.
Yes, I can see how these tidbits of information were impossible to incorporate in the original press release and had to be moved to a separate damage control “clarification”. They are just too big. /s
I agree with @syylk: it is silly to observe that these two words were left out of the original release. Not that it would make any difference to me, I won’t touch their distribution with a 10-foot-pole.
Last edited by benstor214 on 28 Apr 2026 at 10:28 am UTC
So there's a massive disconnect between these higher-management decision-makers, and the people who actually use the product that the decision-makers influence. Even the slightest awareness from these board execs would have prevented the need for Seager's hurried clarifications. He genuinely must have thought - "whoa boy, people are gonna LOVE this!" and out comes the press release.
Then, suddenly, bafflement from Seager/Canoncial, and damage control.
I'd say it's embarrassing, but that makes it sound like a little "whoopsadasie, sorry about that". Instead, this is deeply disrespectful. They're making decisions about a well-loved project, but without any awareness of a) the people that use it, or b) the complete shitstorm that MS went through just a handful of months ago by doing a very, very similar thing (yeah, yeah, it's local models, blah blah).
And I know this... that they have no awareness... because otherwise these simple clarifications wouldn't have been necessary at all - they'd have been explicitly mentioned in the initial release.
Because of the size of most LLMs, we simply couldn’t ship them in the installer anyway, so opting out at first run is simple: they just won’t be there.I was also thinking this yesterday. Especially if they’re going to support any language other than English, including the models in the installer directly would create a huge amount of bloat. It wouldn’t have made any sense for it to be opt-out.
For me, I only want to run models using my own choices of software. For instance, I would not be happy with Ubuntu or some corporation pushing some agentic features alongside the LLM. The agentic side is where A.I. gets into mischief - It exposes security holes, e.g. prompt attacks which can affect backend systems.
Also, agentic features enable Big Tech corporations (Microsoft, Google, OpenAI) to conceal telemetry and other self-serving capabilities (backdoors, etc). Naturally, any built-in agentic capabilities would be the first place hackers will be aiming to exploit.
Quoting: g000hYou can fit a small local model in 500MB, although naturally its accuracy will suffer.For camera focus, which is one of the features evoked by Seager, Canon, Sony, Nikon, Olympus, Panasonic and Pentax had integrated neural network trained using Deep Learning for autofocus in their DSLR and Miroless for around 15 years. This is notably how they greatly increased the number of species on which focusing using the "Focus on eyes" feature works.
For me, I only want to run models using my own choices of software. For instance, I would not be happy with Ubuntu or some corporation pushing some agentic features alongside the LLM. The agentic side is where A.I. gets into mischief - It exposes security holes, e.g. prompt attacks which can affect backend systems.
Also, agentic features enable Big Tech corporations (Microsoft, Google, OpenAI) to conceal telemetry and other self-serving capabilities (backdoors, etc). Naturally, any built-in agentic capabilities would be the first place hackers will be aiming to exploit.
So "AI powered" camera focus can use very small models depending of what you call AI powered.




How to setup OpenMW for modern Morrowind on Linux / SteamOS and Steam Deck
How to install Hollow Knight: Silksong mods on Linux, SteamOS and Steam Deck