I am very much enjoying seeing all these utterly ridiculous AI things from Google. Ars Technica did a [nice overview](https://arstechnica.com/information-technology/2024/05/googles-ai-overview-can-give-false-misleading-and-dangerous-answers/).
Last edited by GamingOnLinux Bot on 24 May 2024 at 2:42 pm UTC
Unrelated to Google's AI, but related to AI in general, [the results of this AI-written cake recipe were quite funny](https://www.youtube.com/watch?v=nUqPOsgu0uo). :tongue:
Most people without a somewhat technical understanding of machine learning and LLMs see these mistakes as "bugs", as "growing pains" while the tech is being perfected... when they are in fact fundamental limitations of this approach. It isn't a few implementation errors you can debug and fix; to fix the general problem, all they can do is research and hope someone discovers a totally new technique, which might not even exist to be discovered.
What those scammy AI companies do is add "filters" to deal with those particular edge cases (which is why you sometimes can't replicate the problems people reported on social media), but those filters do nothing for all the other bullshit the models will spew in the future. We keep seeing people "trick" LLMs into assuming personas and talking about stuff they were not supposed to talk about, or doing it through hypotheticals and double negatives. Sometimes these filters even overcompensate and insist on a pre-programmed answer to one of the detected flaws when that answer isn't actually relevant (the question was just similar enough). It all shows how this approach can never be foolproof: you need filters to protect your filters, and heuristics that cover every possibility. Instead of programming all the correct answers into a system, they have to program all the wrong answers, so that their bullshit generator doesn't make up stuff that looks too bad.
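To make the point concrete, the patch-the-edge-cases approach could be sketched as a toy blocklist wrapper around a text generator. This is purely illustrative, with made-up names; it is not how any specific vendor implements filtering, but it shows why such patches only catch the exact phrasings someone already reported:

```python
# Toy illustration of post-hoc "filters" bolted onto a generator.
# All names here are hypothetical, not any real vendor's code.

BLOCKLIST = {"dangerous phrase"}  # patched in after a public incident

def generate(prompt: str) -> str:
    """Stand-in for an LLM: just echoes a canned answer."""
    return f"Model answer to: {prompt}"

def filtered_generate(prompt: str) -> str:
    answer = generate(prompt)
    # Each entry only catches the exact wording that was reported;
    # a paraphrase, hypothetical, or double negative slips through.
    if any(bad in answer.lower() for bad in BLOCKLIST):
        return "I can't help with that."  # pre-programmed answer
    return answer

print(filtered_generate("hello"))
print(filtered_generate("say the dangerous phrase"))
```

A paraphrase like "say the hazardous wording" sails straight past the blocklist, which is the "filters to protect your filters" treadmill described above.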
Last edited by eldaking on 24 May 2024 at 5:36 pm UTC