
The recent Firefox 150 release includes fixes for 271 vulnerabilities identified using Claude Mythos Preview AI. Mozilla revealed the information in a new blog post, detailing how they've been using an early preview of the unreleased AI model as part of Project Glasswing.

Anthropic have been talking up their new Claude Mythos, which is apparently so powerful and dangerous at finding security vulnerabilities that they've chosen to keep it private for now, with only a select few companies and organisations getting access through Project Glasswing.

We can expect to see much more of this as these AI models expand and improve. While some of them let anyone spam projects with completely junk reports, Claude Mythos at least seems to be helping identify some legitimate, serious issues.

For all the hype and fear surrounding it though, Mozilla do state that "Encouragingly, we also haven’t seen any bugs that couldn’t have been found by an elite human researcher", and they don't believe these AI models will find "entirely new forms of vulnerabilities that defy our current comprehension".

It seems like every bit of major software will be using some form of AI for something, and at least checking code over for security issues feels like a sane use for it. Even the Linux kernel recently introduced documentation allowing AI code helpers, under certain conditions.

Tags: Security, AI, Apps, Misc
13 comments

pb a day ago
Anthropic have been talking up their new Claude Mythos, which is apparently so powerful and dangerous at finding security vulnerabilities that they've chosen to keep it private for now, with only a select few companies and organisations getting access through Project Glasswing.
in other news:

Anthropic confirms a breach of the Claude Mythos model and has launched an investigation into the unauthorized access, per Bloomberg. Attackers used contractor login credentials from a compromised third-party vendor and guessed internal URLs to access multiple unreleased models.
AlveKatt a day ago
I have a feeling this isn't an LLM but an actual specialized AI system. The companies keep conflating different machine learning cases to drive their AGI narrative.
doragasu a day ago
Quoting: AlveKattI have a feeling this isn't an LLM but an actual specialized AI system. The companies keep conflating different machine learning cases to drive their AGI narrative.
It's an LLM, and it might be nothing special, just marketing as usual: https://www.flyingpenguin.com/the-boy-that-cried-mythos-verification-is-collapsing-trust-in-anthropic/
LLM just means "large language model", and programming languages are very well-structured languages. It seems that Claude is specialised (or rather "optimised") for programming languages, but it can also handle human languages, and even has to (see commit messages and issue reports).

Of course it is first and foremost a PR campaign, but at least it is somewhat helpful for finding vulnerabilities. We still should not replace researchers; such LLMs should just be an additional layer, in the spirit of "four eyes see more than two".

There are still a lot of problems with these LLMs, even in this fully legitimate use case. There is the "endless" amount of energy consumed in times of climate change, LLMs trained on stolen data (so not GPL-2.0 compatible - hello Linux kernel team), and these companies becoming the new gatekeepers of these super expensive technologies, at least for the next decades: how can I access Claude Mythos to find vulnerabilities in my own software that is not developed by such big players (maybe I am the solo developer maintaining one keystone of the internet's software infrastructure)? And once Mythos becomes public, companies will push serious [codemaxxing](https://github.com/jshchnz/codemaxxing) even further.

For all the good use cases, we should not forget the whole picture around LLMs.
hardpenguin 23 hours ago
Using AI for any kind of recognition is good, actually (and often helps with accessibility).

Using AI for creation is not good.
stormtux 23 hours ago
Not exactly a revolutionary technology. Ever heard of "static program analysis"?

https://en.wikipedia.org/wiki/Static_program_analysis

Are they surprised that a new approach to the problem has uncovered new vulnerabilities? Who would have thought! 🤔

How many vulnerabilities have been systematically fixed by traditional code analyzers throughout the entire history of Firefox without all this marketing hype?
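For anyone who hasn't seen one in action, here's a minimal sketch of the idea (a toy checker, purely illustrative and nowhere near the depth of real analysers like Coverity or clang-tidy): it flags dangerous calls by inspecting the syntax tree, without ever running the code.

```python
import ast

# Toy static analyser: walk the syntax tree of some source code and
# flag calls to eval/exec without executing anything. Real analysers
# do far deeper data-flow and taint analysis than this.
DANGEROUS = {"eval", "exec"}

def find_dangerous_calls(source: str, filename: str = "<input>"):
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS):
            findings.append((filename, node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    sample = "user_input = input()\nresult = eval(user_input)\n"
    for fname, line, call in find_dangerous_calls(sample):
        print(f"{fname}:{line}: suspicious call to {call}()")
```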
scaine 21 hours ago
Quoting: stormtuxNot exactly a revolutionary technology. Ever heard of "static program analysis"?

https://en.wikipedia.org/wiki/Static_program_analysis

Are they surprised that a new approach to the problem has uncovered new vulnerabilities? Who would have thought! 🤔

How many vulnerabilities have been systematically fixed by traditional code analyzers throughout the entire history of Firefox without all this marketing hype?
Yeah, these tools have been around for years (by which I mean "before the genAI hype cycle began in late 2022"), but they've typically only been affordable to large enterprises. With their desperate need to overhype genAI, the AI companies are loss-leading access to these models to drive adoption, so we're seeing lots of weird hype for things that have existed for years but were priced... well, if not fairly, then at least "normally", within the industry. So now smaller outfits like Mozilla have access to these capabilities, and we're seeing headlines like this.
Mountain Man 21 hours ago
Using AI to fix security holes in software is like using a window screen to repair a hole in a bucket.
elmapul 19 hours ago
Quoting: Mountain ManUsing AI to fix security holes in software is like using a window screen to repair a hole in a bucket.
It's not to fix, but to find; then a human being can confirm the issue is real (not a hallucination) and fix it.
Caldathras 16 hours ago
For those who might not know (myself included):

AGI = Artificial General Intelligence
PlayingOnLinuxphone 15 hours ago
Quoting: CaldathrasAGI = Artificial General Intelligence
And that term means something with real intelligence, which none of these AI models will ever achieve (not even in 10, 20 or 100 years, if based on similar technology), because an LLM is just a super complex deterministic program where a specific input gives a specific output (if you control all the input parameters, which are not exposed on cloud services but are on some local models). That companies speak about AGI is a pure marketing lie to make their bubble grow further.
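A minimal sketch of that determinism (a toy stand-in for a sampling loop, hypothetical and not any real Claude or LLM API): once every input parameter is fixed, including the random seed, the same call always returns the same tokens.

```python
import random

# Toy "next-token sampler": the randomness in generation is ordinary
# pseudo-randomness, so if you control every input parameter (here
# just the seed), the same input always produces the same output.
VOCAB = ["the", "quick", "fox", "jumps", "over", "lazy", "dog"]

def sample_tokens(seed: int, n_tokens: int = 5) -> list[str]:
    rng = random.Random(seed)  # fixed seed = fully controlled "randomness"
    return [rng.choice(VOCAB) for _ in range(n_tokens)]

# Identical inputs give identical "generations", every time.
assert sample_tokens(seed=42) == sample_tokens(seed=42)
print(sample_tokens(seed=42))
```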
Seegras 5 hours ago
Quoting: hardpenguinUsing AI for any kind of recognition is good, actually (and often helps with accessibility).

Using AI for creation is not good.
Yeah, on a related note:
https://www.forbes.com/sites/the-wiretap/2026/04/22/anthropics-claude-is-pumping-out-vulnerable-code-cyber-experts-warn/
devland 1 hour ago
This is a marketing stunt.

The "too dangerous too release" and the "access leak" from yesterday is all hidden behind NDAs that everyone who's been given access has to sign. There's no way to verify any of those claims.

The only ones that are pushing this besides anthropic are the corpo hats from mozilla. And even then, if you look at the much hailed release 150 patch log you'll only see a handful of fixes coengineered via AI which is far behind the much touted 250+ fixes figure that is being publicly passed around in press releases.

The whole point of all this fearmongering is to drive anthropic IPO valuation up. And it's working.