The recent Firefox 150 release includes fixes for 271 vulnerabilities identified using Claude Mythos Preview AI. Mozilla revealed the information in a new blog post, detailing how they've been using an early preview of the unreleased AI model as part of Project Glasswing.
Anthropic have been talking up their new Claude Mythos, which is apparently so powerful and dangerous at finding security vulnerabilities in software that they've chosen to keep it private for now, with only a few select companies and organisations given access through Project Glasswing.
We can expect to see much more of this as these AI models expand and improve. Even though some of them enable anyone to spam projects with completely junk reports, it seems Claude Mythos is actually helping identify some legitimate, serious issues.
For all the hype and fear surrounding it though, Mozilla do state that "Encouragingly, we also haven’t seen any bugs that couldn’t have been found by an elite human researcher", and they don't believe that these AI models will find "entirely new forms of vulnerabilities that defy our current comprehension".
It seems like every bit of major software will be using some form of AI for something, and at least checking over code for security issues feels like a sane use for it. Even the Linux kernel recently introduced documentation allowing AI code helpers, under certain conditions.
In other news:
Anthropic have confirmed a breach involving the Claude Mythos model and launched an investigation into the unauthorized access, per Bloomberg. Attackers used contractor login credentials from a compromised third-party vendor and guessed internal URLs to access multiple unreleased models.
Quoting AlveKatt: "I have a feeling this isn't an LLM but an actual specialized AI system. The companies keep conflating different machine learning cases to drive their AGI narrative."

It's an LLM, and might be nothing special, just marketing as usual: https://www.flyingpenguin.com/the-boy-that-cried-mythos-verification-is-collapsing-trust-in-anthropic/
Of course it is first and foremost a PR campaign, but at least it is somewhat helpful for finding vulnerabilities. We still should not replace researchers, and should use such LLMs only as an additional layer, in the spirit of "four eyes see more than two".
There are still a lot of problems with these LLMs, even in this fully legitimate use case. They still consume an "endless" amount of energy in times of climate change, they are trained on stolen data (so not GPL-2.0 compatible - hello Linux kernel team), and these companies become the new gatekeepers to these super expensive technologies, at least for the next decades. How can I access Claude Mythos to find vulnerabilities in my own software that is not developed by such big players (maybe I am a solo developer maintaining one keystone of the internet's software infrastructure)? And once Mythos becomes public, companies will push serious [codemaxxing](https://github.com/jshchnz/codemaxxing) even further.
For all the good use cases, we should not forget the whole picture around LLMs.



