AI is new, and AI developers are doing things right now that can only be done by keeping their sources secret, and even then only temporarily.
People keep demanding safe AI: AI that won't do certain unethical things.
They demand this because they want nobody to be able to do these things.
This is the digital equivalent of demanding that individual hammer manufacturers ensure nobody can use hammers to smash windows. They might be able to design hammers that can't smash windows, but they'll never be able to keep people from making their own window-smashing hammers.
This is what proprietary AI is achieving.
By making certain they're the only ones who can build good AI, and by only serving its end products, they ensure that making their own product behave safely makes all AI behave safely, because their AI is effectively all the AI there is.
This works under the assumption that nobody outside a limited set of sanctioned parties can make AI that's worth squat. Open source AI violates that assumption.
You know that even before ChatGPT came out, there already existed AI to nudify pictures of women.
Is this unethical? Yes.
Can OpenAI do anything about it? A little: they can make certain they outcompete all the smaller AI models on everything except unethical behavior (including marketing), so that the unethical AI providers go out of business.
It's like keeping non-disabled people from competing in the Paralympics by training handicapped athletes beyond the level of non-handicapped ones.
Does this work forever?
No.
What allows them to do this? Their head start and their secrecy, and those advantages are all deteriorating.
Consumer devices now also include NPUs.
In the end, not only will open source AI be competitive, but its development will become so easy that making criminal models based on it will be feasible, and all we can do to fight that is armor up and make it so that the criminal models can't affect us.
I imagine a market for AI-fooling clothing springing up.
There already is a market for AI generated content detection.
Last edited by LoudTechie on 25 May 2024 at 2:32 pm UTC
They can make AI that does unethical things illegal (the EU AI Act does), but they would still need to enforce it.
The internet allows people to hide behind the most permissive government in the world, and even to combine several of them.
I fear individuals because that's what happened with piracy, DDoS, ransomware, and criminal marketplaces.
The technology becomes (or already is) so easy that a single bored, opinionated or ethically challenged specialist can make it available to everybody, part-time or even voluntarily, behind a bunch of privacy measures, and since it's software on the internet, it can be used to affect anybody.
I can hope it ends up like ransomware development and DDoS, where it gets pushed back into dark web marketplaces and falls into the hands of only a few gangs.