Support us on Patreon to keep GamingOnLinux alive. This ensures all of our main content remains free for everyone. Just good, fresh content! Alternatively, you can donate through PayPal. You can also buy games using our partner links for GOG and Humble Store.
We use affiliate links to earn us some pennies. Learn more.

The recent Firefox 150 release includes fixes for 271 vulnerabilities identified using Claude Mythos Preview AI. Mozilla revealed the information in a new blog post, detailing how they've been using an early preview of the unreleased AI model as part of Project Glasswing.

Anthropic have been talking up their new Claude Mythos, which is apparently so powerful at finding security vulnerabilities in various software that they've chosen, for now, to keep it private, so only a few select companies and organisations have access to it through Project Glasswing.

We can expect to see much more of this as these AI models expand and improve. While some of them enable anyone to spam projects with completely junk reports, Claude Mythos does seem to be helping identify some legitimate, serious issues.

For all the hype and fear surrounding it though, Mozilla do state that "Encouragingly, we also haven’t seen any bugs that couldn’t have been found by an elite human researcher", and they don't believe these AI models will find "entirely new forms of vulnerabilities that defy our current comprehension".

It seems like every bit of major software will end up using some form of AI for something, and at least checking over code for security issues feels like a sane use for it. Even the Linux kernel recently introduced documentation to allow AI code helpers, under certain conditions.

Article taken from GamingOnLinux.com.
Tags: Security, AI, Apps, Misc
About the author -
I am the owner of GamingOnLinux. After discovering Linux back in the days of Mandrake in 2003, I constantly checked on the progress of Linux until Ubuntu appeared on the scene and it helped me to really love it. You can reach me easily by emailing GamingOnLinux directly. You can follow me personally on Mastodon.
All posts need to follow our rules. Please hit the Report Flag icon on any post that breaks the rules or contains illegal / harmful content. Readers can also email us for any issues or concerns.
5 comments

pb 3 hours ago
Anthropic have been talking up their new Claude Mythos, which is apparently so powerful and dangerous at finding security vulnerabilities in various software that they've currently chosen to keep it private so only a few select companies and organisations have access to it in Project Glasswing.
in other news:

Anthropic confirms a breach of the Claude Mythos model and has launched an investigation into the unauthorized access, per Bloomberg. Attackers used contractor login credentials from a compromised third-party vendor and guessed internal URLs to access multiple unreleased models.
AlveKatt 3 hours ago
I have a feeling this isn't an LLM but an actual specialized AI system. The companies keep conflating different machine learning cases to drive their AGI narrative.
doragasu 3 hours ago
Quoting: AlveKatt
I have a feeling this isn't an LLM but an actual specialized AI system. The companies keep conflating different machine learning cases to drive their AGI narrative.
It's an LLM, and it might be nothing special, just marketing as usual: https://www.flyingpenguin.com/the-boy-that-cried-mythos-verification-is-collapsing-trust-in-anthropic/
LLM just means "large language model", and programming languages are very well structured languages. Claude seems specialized (or rather "optimized") for programming languages, but it can also handle human languages, and it even has to (see commit messages and issue reports).

Of course it is first and foremost a PR campaign, but at least it is somewhat helpful for finding vulnerabilities. We still should not replace researchers, and should use such LLMs only as an additional layer, in the spirit of "four eyes see more than two".

There are still a lot of problems with these LLMs, even in this fully legitimate use case. They still consume an "endless" amount of energy in times of climate change, they are trained on stolen data (so not GPL-2.0 compatible - hello Linux kernel team), and these companies become new gatekeepers to these super expensive technologies, at least for the next decades: how can I access Claude Mythos to find vulnerabilities in my own software that is not developed by such big players (maybe I am the solo developer maintaining one keystone of internet software infrastructure)? And once Mythos becomes public, companies will push serious [codemaxxing](https://github.com/jshchnz/codemaxxing) even further.

For all the good use cases, we should not forget the whole picture around LLMs.
hardpenguin 40 minutes ago
Using AI for any kind of recognition is good, actually (and often helps with accessibility).

Using AI for creation is not good.