Latest Comments by aluminumgriffin
NVIDIA GeForce NOW will soon limit your hours per month but some memberships get upgraded
8 Nov 2024 at 9:05 am UTC Likes: 3
Quoting: BlackBloodRum: "Why pay someone else for something you can do yourself? :huh:"
To answer "why?", for me:
1) Doesn't require me to have a beefy GPU (hello sweet, sweet silence; also, since this lets me get away with NUCs I can have the computer VESA-mounted to the monitor, hello very clean desktop).
2) Works pretty well on a Chromecast (gen 4) (handy for quick "grab the living room controller and sit on the sofa" gaming; however, headset mics do not work on the Chromecast, or at least didn't six months ago).
3) Works on pretty much any device with a browser (great when visiting friends).
4) DRM now lives on someone else's machines (drastically lowers my security concerns).
4b) Some companies are willing to work with cloud providers but not with Linux (Fortnite being an example here).
5) Installs/game sizes. There is something to be said for having a fresh game up and running in the time it takes to grab snacks, not to mention not having to care about free disk space.
6) Cost. Cloud gaming costs about $120-150/year; an RTX 3060* about $350-450* (mid-tier, both; pulled from my local prices, incl. tax). But this really comes down to your upgrade cycle and whether you have non-gaming needs for your dGPU.
And yes, it works perfectly fine on a circa 2017 iGPU (NUC7i5), or at least did as of last night.
The same also holds true for Boosteroid (I've used both quite a bit; I mainly prefer GeForce NOW, but Boosteroid is my fallback when GeForce NOW isn't viable (RDR2 for example, or I guess post-Jan-2026 gaming)).
* = Edits: seems a "friendly" search swapped it for a 4070, *sighs* (previously listed as an RTX 3070 at $550-700).
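A rough break-even sketch for point 6, using the midpoints of the prices quoted above (illustrative numbers only; your own local prices and upgrade cycle will shift the result):

```python
# Rough break-even: cloud subscription vs buying a mid-tier dGPU.
# Prices are the ballpark figures from the comment above, not quotes.
cloud_per_year = 135        # midpoint of $120-150/year
gpu_price = 400             # midpoint of $350-450 for a mid-tier card

years_to_break_even = gpu_price / cloud_per_year
print(f"{years_to_break_even:.1f} years")  # prints "3.0 years"
```

So at those numbers the card pays for itself in roughly three years of subscription fees, ignoring electricity, resale value, and any non-gaming use of the dGPU.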
APT 2.3.12 package manager released, will no longer let you break everything
19 Nov 2021 at 8:37 am UTC
I kinda wish they would use the error message "Please make my system unusable" (or similar) that you got (still?) on some versions of apt-get when you try to uninstall libc6, that one really drove home the point.
(for reasons: NEVER remove libc unless you want the system ruined, really)
Debian Linux is planning a gaming-focused event online in November
3 Oct 2020 at 5:40 pm UTC
Quoting: aluminumgriffin: "Nice, however I really wish they would keep mesa somewhat up to date without forcing one into a FrankenDebian [...] (would also be a good place to put things like fresh OBS)."
Quoting: Purple Library Guy: "IMO Debian Stable is mostly for servers and stuff . . . things that are doing basic workloads and you want them to just keep doing it and never die. If you're going to be playing non-ancient games on a machine, it should probably be using at least Testing and maybe Unstable, which is still about as stable as most up-to-date distros."
I fully agree, however some things have a tendency to break surprisingly often in testing (QEMU in particular in my case; iwlwifi also tends to get really messed up by two out of three kernel upgrades in testing, not to mention the entire USB-audio-on-Logitech-webcams mess) - and I kinda like the "calmness" of the infrequent updates in stable, hence it would be nice to have a "sub-section" one could enable when one wanted to jump ahead while still mainly remaining on stable.
(Mesa in Debian/Stable is at 18.3.6; the iris drivers (which matter if you use an Intel iGPU) only became good quite a bit after that, in the 19.x series. To make it all that much funnier, Debian Stable ships with libdrm 2.4.97, while building 19.x mesa and later needs at least libdrm 2.4.100. In Debian/Testing it is mesa 20.1.8 and libdrm 2.4.102, so there is a night-and-day difference between the performance you get on stable and on testing.)
The reason why I don't run testing (or a FrankenDebian) and pull in stable packages as an override is that it handles it somewhat badly, even over time, when stable catches up - if one then decides to go stable (my current system, from this summer, is quite frankly my first pure stable since I started using Debian back in 2000, with potato). Historically I've mainly used testing, but recently (the last two years) I've found it far too unstable for my liking.
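For what it's worth, apt pinning is the usual mechanism for pulling a handful of newer packages (such as mesa) from testing while otherwise staying on stable - this is exactly the FrankenDebian territory discussed above, so treat this as an illustrative sketch (the package names and suite labels here are assumptions to check against your own system), not a recommendation:

```text
# /etc/apt/preferences.d/mesa  (illustrative; requires testing in sources.list)
# Keep everything on stable by default...
Package: *
Pin: release a=stable
Pin-Priority: 900

# ...but allow selected mesa packages in from testing on explicit request
# (priority < 500 means they are installable with -t testing but never
#  pulled in automatically).
Package: libgl1-mesa-dri mesa-vulkan-drivers libglx-mesa0
Pin: release a=testing
Pin-Priority: 400
```

The catch is precisely what the comment describes: mixed suites drift apart over time (libdrm being the classic dependency snag), which is why pinning individual graphics packages tends to age badly.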
Debian Linux is planning a gaming-focused event online in November
3 Oct 2020 at 4:37 pm UTC
Nice, however I really wish they would keep mesa somewhat up to date without forcing one into a FrankenDebian. Maybe start with yet another "distro sub-section" (akin to non-free) called "gaming", with the note that it slightly sacrifices stability for the sake of being more bleeding edge (it would also be a good place to put things like a fresh OBS).
(Mesa in Debian/Stable is at 18.3.6; the iris drivers (which matter if you use an Intel iGPU) only became good quite a bit after that, in the 19.x series. To make it all that much funnier, Debian Stable ships with libdrm 2.4.97, while building 19.x mesa and later needs at least libdrm 2.4.100. In Debian/Testing it is mesa 20.1.8 and libdrm 2.4.102, so there is a night-and-day difference between the performance you get on stable and on testing.)
OpenTTD, the open source simulation game based on Transport Tycoon Deluxe has a new release
5 Apr 2019 at 9:25 am UTC
Changing the fonts and the UI separately has been available for at least a couple of years if you are up to editing the config file (~/.config/openttd/openttd.cfg). You can even choose which font you want from within the config file, and since the small, normal, big, and mono fonts are controlled separately, you can pick different fonts and sizes for each of them individually.
Mainly intended as a "do read the config file for fine-tuning" note more than anything else.
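As a sketch of the kind of thing you can set there (key names as found in the [misc] section of openttd.cfg in recent versions - verify against your own file; the font names and sizes below are just examples):

```text
# ~/.config/openttd/openttd.cfg -- font settings live under [misc].
[misc]
small_font  = DejaVu Sans
small_size  = 10
medium_font = DejaVu Serif
medium_size = 12
large_font  = DejaVu Sans
large_size  = 18
mono_font   = DejaVu Sans Mono
mono_size   = 12
```

Each of the four UI font roles (small, normal/medium, big/large, mono) gets its own font and size pair, which is what makes the per-role customisation mentioned above possible.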
There's a new release candidate of OBS Studio out with a VAAPI video encoder on Linux
11 Feb 2019 at 7:32 am UTC Likes: 1
This one is pretty awesome on Intel CPUs; their integrated GPUs do a fairly good job, and offloading the work from the CPU is very nice.
For Intel iGPUs, one thing to note is that (in Debian/testing at least) the open source driver (i965-va-driver) does not fully support this; instead you need the non-free package i965-va-driver-shaders, which also includes the encode shaders. Or at least that was the case about two months ago.
(I'm on an Intel NUC7i5, which means Intel Iris Plus Graphics 640 (GT3e).)
The developer of Smith and Winston made an interesting blog post about supporting multiple platforms
10 Jan 2019 at 10:48 pm UTC Likes: 2
Quoting: Beamboom: "I don't understand how different compilers can expose different bugs in the same(?) code. I mean, a bug is a bug, isn't it? Or is it because the use of different libraries exposes bugs caused by those particular libraries/APIs? If so, how will the code run smoother on a different set of libraries if the bug is related to that other library? I don't get this?"
Each dialect, compiler, and platform behaves somewhat differently - this is pretty much why every coder has their favorite compiler.
Three things that tend to bite people hard if they don't regularly jump platforms are memory alignment, the size of datatypes, and bit-order/byte-order.
A bit more in-depth explanation of two of those issues:
Datatypes:
The "int" datatype varies in size depending on which compiler, dialect, language version, and platform you're on. Historically it was "the native word size of the platform", meaning that on 16-bit machines it should be 16-bit, on 32-bit platforms 32-bit, and on 64-bit platforms 64-bit. However, this does not hold true today, since it is now defined only to be guaranteed to hold -32768 to 32767 (16-bit, signed).
Note that this already makes it weird for machines with a word size smaller than 16 bits; to make it even funnier, on 32-bit machines it can be either 16-bit or 32-bit, and on 64-bit machines it is normally (almost - but not quite - always) 32-bit as well.
So an "int" can only be assumed to be "at least 16-bit", and on embedded you really should read the datasheets and specs anyway.
Now add to that that you can often also change the compiler's behaviour by selecting different methods of packing.
And yes, "int" is the most common datatype.
Memory alignment:
Take the simple declaration "int a, b;" and tell me how it is arranged in memory. Is it 4 bytes (16-bit * 2), 8 bytes (32-bit * 2), or 16 bytes (64-bit * 2)? (The last might happen either if the int is 64-bit or if each is aligned to match memory boundaries.) Also: does 'a' come before 'b' in physical memory? Is there something between them (padding and such)? And if 'a' overflows in a way that is not caught, will that alter 'b' or will it cause a memory access violation?
Even funnier is when the runtime of your compiler does not exactly match the settings of the specific build of the libraries you're using (so yes, you can end up with b=a+a; working, while calling a function that does b=a+a; crashes - even when fed the exact same datatypes and values).
Long story short: every place where you've made an assumption can bite you when you jump platforms (this is why you often see a "datatypes.h" in multi-platform projects).
How this makes stuff run smoother: assume an overflowing variable isn't caught by the runtime of one compiler but instead overwrites whatever is next in memory. With this compiler it will corrupt the data in the following variable, and that corruption can cause undesired behaviour somewhere that isn't even near the code that overflowed (even worse, the undesired behaviour can show up in correct code). But if you try the same code with a compiler that has a stricter runtime, it will crash at the overflow itself.
SC Controller, the driver and UI for the Steam Controller is being rewritten to be more portable
26 Nov 2018 at 12:34 pm UTC Likes: 4
Quick thing about C vs Python for portability.
A Python interpreter is quite a hefty penalty, and there are C compilers for _far_ more target systems than there are platforms with Python interpreters.
Also, with C you basically "only" need to port the libs in use (with shims for everything you don't care about) when moving to an esoteric platform (this was kinda the purpose of C to begin with), while with Python you need to port the entire interpreter.
For a slightly extreme case that is still easy to imagine, consider an Arduino (say, for a homebrew arcade machine). Which would you find easier: porting a C program to fit into a couple of kB of memory, or wedging an entire interpreter into those same kB?
And as an aside, when working from another language it is a lot easier to interface with C than with Python (the former is often fairly automated as well as being common; not so the latter).
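As a small illustration of that last point, calling a plain C library function from a high-level language takes only a few lines - here Python's standard ctypes module (assuming a POSIX system where the C library is already loaded in-process):

```python
# Minimal illustration: calling a plain C function from Python via ctypes.
import ctypes

libc = ctypes.CDLL(None)            # handle to the already-loaded C library
libc.abs.argtypes = [ctypes.c_int]  # declare the C signature: int abs(int)
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))                # prints 42
```

Doing the reverse - embedding a Python function so it is callable from, say, C or Lua - means shipping and initialising the whole interpreter, which is the asymmetry the comment is getting at.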
Village building god sim 'Rise to Ruins' had an absolutely massive update
20 Sep 2018 at 6:45 am UTC
One bit of warning, however: if you are into city-builders/god-simulators then this game is addictive, and I don't mean in the sense of "oh, it might be a fun way to kill a few hours" but rather "oh look, where did 650 hours of my life go" (seriously, that is my Steam time on this game for this year).
One major feature of this game is that each region gets progressively nastier the longer you stay in it, but the longer you stay in a region, the more resources you can hand off to the nearby regions you expand to.
And the "survival" mode is aptly named; I have yet to try out nightmare...
What are you playing this weekend?
2 Sep 2018 at 7:00 am UTC
I'm playing the same things as I've been playing for the last year.
* Rise to Ruins (it just released 31 unstable5c if you do the beta) (steam: 529h)
* Dwarf Fortress (still the July release)
* Battlevoid:Harbinger. (steam: 139h)
* Settlers II (relaxing)
* Turmoil (great while you wait for dinner to finish cooking) (steam: 79h)
And yes, Liam, you are responsible for three of those habits ;)