Days since AI last hallucinated and suggested something terrible: 0
Liam Dawe May 24
*This counter will never need to be incremented or reset.

I am very much enjoying seeing all these utterly ridiculous AI things from Google. Arse Technica did a nice overview.

Last edited by Liam Dawe on 24 May 2024 at 2:42 pm UTC
Pengling May 24
Did you see the thing where it was advising people to drink pee for 24 hours in order to pass kidney stones?

Unrelated to Google's AI, but related to AI in general, the results of this AI-written cake recipe were quite funny.
Liam Dawe May 24
And they just keep coming, how about some spicy gasoline spaghetti lol
Linux_Rocks May 24
Quoting: Pengling
Did you see the thing where it was advising people to drink pee for 24 hours in order to pass kidney stones?
eldaking May 24
Quoting: Liam Dawe
*This counter will never need to be incremented or reset.

I am very much enjoying seeing all these utterly ridiculous AI things from Google. Arse Technica did a nice overview.

The overview was pretty good until the end, where they tacked on some platitudes about how it is "improving all the time", diminishing these serious concerns in a way that could dangerously mislead the general public.

Most people without a somewhat technical understanding of machine learning and LLMs see these mistakes as "bugs" or "growing pains" while the tech is being perfected... when they are in fact fundamental limitations of the approach. It isn't a few implementation errors you can debug and fix; to fix the general problem, all they can do is research and hope someone discovers a totally new technique, which might not even exist to be discovered.

What those scammy AI companies do is add "filters" to deal with those particular edge cases (which is why you sometimes can't replicate the problems people reported on social media), but they do nothing for all the other bullshit it will spew in the future. We keep seeing people "trick" LLMs into assuming personas and talking about stuff they were not supposed to talk about, or into slipping up on hypotheticals and double negatives. Sometimes these filters even overcompensate and insist on a pre-programmed answer to one of those detected flaws, even when that answer isn't really relevant (but the question is similar enough).

It all showcases how this approach is never foolproof: you need filters to protect your filters, and you need your heuristics to cover every possibility. Instead of programming all the correct answers into a program, they have to program all the wrong answers so that their bullshit generator doesn't make up stuff that looks too bad.
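To make that concrete, here is a minimal sketch in Python of the kind of post-hoc blocklist filter described above. Everything in it is hypothetical (the phrases, the function name, and the canned refusal are made up for illustration, not any vendor's actual code), but it shows how patching a reported phrasing catches the viral screenshot and nothing else:

# Hypothetical post-generation filter: phrases get added to the
# blocklist one at a time as embarrassing outputs go viral.
BLOCKED_PHRASES = [
    "drink pee",      # patched after the kidney-stone screenshots
    "add gasoline",   # patched after the spicy spaghetti one
]

def filter_output(model_output: str) -> str:
    """Return the model's text, or a canned refusal if it matches a known bad case."""
    lowered = model_output.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return "Sorry, I can't help with that."
    return model_output

# The exact phrasing that went viral is now caught...
print(filter_output("To pass a kidney stone, drink pee for 24 hours."))
# ...but the same bad advice, trivially reworded, sails straight through:
print(filter_output("To pass a kidney stone, consume your own urine for a day."))

The blocklist only ever encodes wrong answers somebody has already reported, which is exactly the "program all the wrong answers" problem: the space of harmful rephrasings is open-ended, so no finite list of patches can close it.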

Last edited by eldaking on 24 May 2024 at 5:36 pm UTC