The recent Firefox 150 release includes fixes for 271 vulnerabilities identified using Claude Mythos Preview AI. Mozilla revealed the information in a new blog post, detailing how they've been using an early preview of the unreleased AI model as part of Project Glasswing.
Anthropic have been talking up their new Claude Mythos, which is apparently so powerful at finding security vulnerabilities that they've chosen to keep it private for now, with only a few select companies and organisations given access through Project Glasswing.
We can expect to see much more of this as these AI models expand and improve. Even though some of them enable anyone to spam projects with completely junk reports, it seems Claude Mythos is actually helping identify some legitimate, serious issues.
For all the hype and fear surrounding it though, Mozilla do state that "Encouragingly, we also haven’t seen any bugs that couldn’t have been found by an elite human researcher", and they don't believe these AI models will find "entirely new forms of vulnerabilities that defy our current comprehension".
It seems like every bit of major software will be using some form of AI for something; at least checking over code for security issues feels like a sane use for it. Even the Linux kernel recently introduced documentation allowing AI code helpers, under certain conditions.
In other news:
Anthropic has confirmed a breach of the Claude Mythos model and launched an investigation into the unauthorized access, per Bloomberg. Attackers used contractor login credentials from a compromised third-party vendor and guessed internal URLs to access multiple unreleased models.
Quoting: AlveKatt
I have a feeling this isn't an LLM but an actual specialized AI system. The companies keep conflating different machine learning cases to drive their AGI narrative.

It's an LLM, and it might be nothing special, just marketing as usual: https://www.flyingpenguin.com/the-boy-that-cried-mythos-verification-is-collapsing-trust-in-anthropic/
Of course it is first and foremost a PR campaign, but at least it turns out to be somewhat helpful for finding vulnerabilities. We still should not replace researchers; such LLMs should only be an additional layer, in the spirit of "four eyes see more than two".
There are still a lot of problems with these LLMs, even in this fully legitimate use case. There is the "endless" amount of energy used in times of climate change, LLMs trained on stolen data (so not GPL-2.0 compatible, hello Linux kernel team), and these companies becoming the new gatekeepers of super expensive technology, at least for the next decades: how can I access Claude Mythos to find vulnerabilities in my own software, which is not developed by such big players (maybe I am a solo developer maintaining one keystone of the internet's software infrastructure)? And once Mythos becomes public, companies will push serious [codemaxxing](https://github.com/jshchnz/codemaxxing) even further.
For all the good use cases, we should not forget the whole picture around LLMs.
https://en.wikipedia.org/wiki/Static_program_analysis
Are they surprised that a new approach to the problem has uncovered new vulnerabilities? Who would have thought! 🤔
How many vulnerabilities have been systematically fixed by traditional code analyzers throughout the entire history of Firefox without all this marketing hype?
Quoting: stormtux
Not exactly a revolutionary technology, ever heard about "static program analysis"?

Yeah, these tools have been around for years (by which I mean "before the genAI hype cycle began in late 2022"), but were typically only affordable to large enterprises. With their desperate need to overhype genAI, the AI companies are loss-leading access to these models to drive adoption, so we're seeing lots of weird hype for capabilities that have existed for years but were priced... well, if not fairly, then at least "normally", within the industry. So now smaller outfits like Mozilla have access to these capabilities, and we're seeing headlines like this.
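For anyone unfamiliar with the term being thrown around above, here is a toy sketch of what static program analysis means: inspecting code for problems without executing it. This is a deliberately minimal Python example using the standard `ast` module to flag `eval()` calls; it is purely illustrative and has nothing to do with Firefox's actual tooling or Claude Mythos.

```python
import ast

# Example source to analyse. It is never executed; we only parse it.
SOURCE = """
user_input = input()
result = eval(user_input)  # dangerous: arbitrary code execution
"""

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of eval() calls, found purely by
    walking the abstract syntax tree (i.e. static analysis)."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

print(find_eval_calls(SOURCE))  # → [3]
```

Real analysers (Coverity, Clang Static Analyzer, CodeQL and friends) do vastly more sophisticated reasoning over data flow and memory, but the principle is the same: find bugs by inspecting the code, not by running it.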
Quoting: Mountain Man
Using AI to fix security holes in software is like using window screen to repair a hole in a bucket.

It's not used to fix, but to find; then a human being can confirm the issue is real (not a hallucination) and fix it.
AGI = Artificial General Intelligence
Quoting: Caldathras
AGI = Artificial General Intelligence

And that term means something with real intelligence, which none of these AI models will ever achieve (not even in 10, 20 or 100 years, if based on similar technology), because each is just a super complex deterministic program, where a specific input gives a specific output (if you control all input parameters, which are not exposed on cloud services, but are on some local models). That companies speak about AGI is a pure marketing lie to make their bubble grow further.
Quoting: hardpenguin
Using AI for any kind of recognition is good, actually (and often helps with accessibility).

Yeah, on a related note:
Using AI for creation is not good.
https://www.forbes.com/sites/the-wiretap/2026/04/22/anthropics-claude-is-pumping-out-vulnerable-code-cyber-experts-warn/