Latest Comments by 3zekiel
Amazon announces 'Luna', their own take on cloud game streaming
25 Sep 2020 at 7:35 am UTC
Quoting: Liam Dawe
Not that it matters to the vast majority (us Linux fans don't count for much...) it's based on Windows https://twitter.com/JeffGrubb/status/1309271277325049856?s=20 [External Link]

I wonder, should we jump on the Stadia train for now? I mean, they are the only ones who kind of support us at the moment... Do you think a strong Stadia would encourage more devs to go for Linux?
Amazon announces 'Luna', their own take on cloud game streaming
24 Sep 2020 at 10:05 pm UTC Likes: 2
Quoting: Purple Library Guy
What I keep wondering is, OK, we got these streaming game services. A couple already, plus Amazon now. Soooo . . . what's their market share like? There's plenty of hype, and some people clearly are playing them. But are they catching on? Are they eating anyone's lunch? Or are they currently a relatively fringe thing, for all the talk? Is it a bandwagon Valve had better jump on, or just mostly a money sink? Note that I'm willing to buy either possibility, I'm just complaining I don't have the data to judge from. It's the same thing I've been wondering about the Epic store, which I haven't heard as much about lately. Everyone was talking about how Valve needed to do various things to meet this threat, they mostly didn't, and it's still unclear whether they had any need to.

Hmmm, for me, I'd just repeat the good old industry wisdom I was taught: don't be the first one to do it, be the first one to do it at the right time.
I'd say game streaming is very promising, BUT:
- First, latency and bandwidth issues are not yet solved.
- Second, I doubt "Game Pass-like" services are sustainable in the long term. The cost of making a modern game is very high, so if you want to recoup it through a subscription that gives away a few games (day 1) every month, you either need a very high price or have to cut development costs. Choose the first and you lose users who just can't pay that price (not even counting that you won't have a monopoly, etc.). Choose the second and people will prefer to buy games that are better realized/more interesting. You can see that with Xbox vs PS5, where the PS5 still seems to lead, mostly because of the exclusives.
- This brings me to say that Valve should not rush too much yet, and should come to market with a clean solution for both price and latency. For now, even in-home streaming is far from perfect. But if they can solve that and make the experience hassle-free for most people, then they should jump. And then use their own leverage to create their own subscription model: keep the games you already own, and add some games every month to a pool, plus some discounts, seems the most reasonable to me.
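The arithmetic behind that second point can be sketched quickly. Every figure below (budget, lineup size, subscription price) is a made-up round number for illustration, not real Game Pass economics:

```python
# Back-of-envelope: how many subscribers a "day 1" subscription service
# would need just to cover development costs. All numbers are assumptions.
dev_cost_per_game = 100_000_000   # assumed AAA budget, in dollars
games_per_year = 12               # assumed one day-1 release per month
revenue_per_sub = 15 * 12         # assumed $15/month, paid all year

yearly_cost = dev_cost_per_game * games_per_year
break_even_subs = yearly_cost / revenue_per_sub
print(f"{break_even_subs:,.0f} subscribers just to cover development")
```

Whatever exact numbers you plug in, the shape of the result is the same: at mainstream prices the service needs many millions of subscribers before infrastructure, marketing, and platform cut even enter the picture.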
A new security flaw is revealed with 'BlindSide' on Linux affecting Intel and AMD
13 Sep 2020 at 11:13 am UTC
Quoting: GustyGhost
Quoting: 3zekiel
Actually, can you run Steam games on PowerPC with qemu user mode?
Or a Wine accompaniment program called Hangover, IIRC. Although I don't personally have any interest in running proprietary gaming software.

Yeah, Hangover is also based on qemu user mode. If it works that's very cool :)
For me, games are art and, as such, they don't really enter the "proprietary software" category. For the privacy part, I run Steam in a sandbox.
I'm way more concerned with what "ring -1" stuff my CPU runs behind my back, because no sandbox and pretty much no analysis can save you from that ...
I'm just concerned by how usable the platform would be (Power based I mean).
A new security flaw is revealed with 'BlindSide' on Linux affecting Intel and AMD
12 Sep 2020 at 8:29 pm UTC Likes: 6
Quoting: denyasis
Thanks, I tried to read a bit, but it was way over my head. I'm seriously jealous of those of you who can program and understand this stuff.

To make it simple, all these attacks are based on finding bits and pieces of information left in the CPU state.
This information can be various things: the presence (or absence) of something in your L1/L2/L3 cache, the state of the branch predictor in some Spectre attacks, or even the state of a specific cache (the TLB). All these caches exist to accelerate your CPU: whenever you do something costly, such as translating a virtual to a physical address, you put the result in a cache in the hope that you will reuse it later (preferably soon).
Now, the problem is that, to go as fast as possible, when a program that does not have the right to access the data in these caches tries to access it anyway, the CPU still proceeds almost exactly as if it did have the right. The only difference is that at the very end an error is raised. But the speed at which this error occurs is faster if the data was there, or if the branch would have been taken, etc.
So now, imagine that you execute a program on the same core as the program you want to attack; you would indeed share all those caches. As such, you can measure the time it takes to be refused access to various data, with different types of accesses (direct access, branches, etc.).
The result is that you can then infer a lot about the program. For branches in particular, let's say you write code like "if (data != 0) then something else another thing". If you can recover the "state" of that branch (what the branch predictor was trained with), then you can deduce whether data is 0 or not.
With other attacks, if the victim branches depending on the value of private data, you might be able to deduce the value too. For example, you target an if (forbidden_data == 0), then you measure the execution time, or you find the state of the branch predictor afterwards (by trying to take the same branch and seeing whether the "taken" or "not taken" path is faster), and you can deduce whether forbidden_data is equal to 0 or not. If you proceed byte by byte, you should only have to test 255 values at worst for each byte. If you search for a 256-bit AES key, that makes 32 bytes, so 32 times 255 tests. Even if you need multiple tries in reality, that's really not a lot.
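That byte-by-byte search can be simulated in a few lines. This is a toy model, not an attack: the oracle function below is a stand-in for a real branch-predictor timing measurement, and the secret is just a random 32-byte string:

```python
import secrets

SECRET = secrets.token_bytes(32)   # stands in for a 256-bit AES key

def oracle(index: int, guess: int) -> bool:
    # In a real attack this answer would come from timing the
    # "taken"/"not taken" paths of the victim's branch; here we cheat.
    return SECRET[index] == guess

recovered = bytearray()
queries = 0
for i in range(len(SECRET)):       # recover one byte at a time
    for guess in range(255):       # 255 tests at worst per byte...
        queries += 1
        if oracle(i, guess):
            recovered.append(guess)
            break
    else:
        recovered.append(255)      # ...if all failed, it must be 255

assert bytes(recovered) == SECRET
print(queries, "oracle queries, bounded by", 32 * 255)
```

The worst case is exactly the 32 × 255 from the text; with a noisy real-world timer each query would need to be repeated a few times, but the total stays tiny compared to brute-forcing the key.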
For one of the Spectre attacks, what you essentially did was just make an access like this one:
read(array[forbidden_data]), where forbidden_data is a byte. The access will be refused, but the value at array[forbidden_data] will be cached by the CPU. Then you just read back every index of the array, and the index corresponding to forbidden_data will have an access time slightly faster than the rest of the array, telling you the exact value of forbidden_data. You do need a bit of setup between runs of the attack (you need to load a large piece of data to fill the cache with something else), but it's actually fairly efficient: a rogue JavaScript script could start guessing at what other scripts (such as the one where you type your passwords / credit card number...) had in memory.
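The reload phase of that attack can be illustrated with a small simulation. Everything here is a stand-in: the cache set plays the role of the real data cache after the faulting access, and the fake latency constants replace an actual rdtsc-style timer:

```python
import random

secret_byte = random.randrange(256)   # the value the attacker wants

# After the refused access read(array[secret_byte]), exactly one
# cache line is "hot" in our toy model of the data cache.
cache = {secret_byte}

FAST, SLOW = 10, 100                  # made-up access latencies

def reload_time(index: int) -> int:
    # Simulated probe: cached indices answer fast, the rest miss.
    return FAST if index in cache else SLOW

# Probe all 256 possible byte values; the fastest one leaks the secret.
timings = [reload_time(i) for i in range(256)]
leaked = min(range(256), key=lambda i: timings[i])

assert leaked == secret_byte
```

On real hardware the hit/miss difference is tens of cycles and noisy, so attackers repeat the probe and average; the principle, though, is exactly this one-line min over timings.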
Hope this clarifies things a little. But yeah, this is not so much programmer stuff as CPU architect / OS engineer stuff.
A new security flaw is revealed with 'BlindSide' on Linux affecting Intel and AMD
12 Sep 2020 at 8:04 pm UTC
Quoting: GustyGhost
Actually, can you run Steam games on PowerPC with qemu user mode?
To be fair, there are probably a shit ton of undiscovered vulns for Power9.
AMD tease two dates in October for Zen 3 and RDNA 2
10 Sep 2020 at 11:54 am UTC
I'm very excited about the Zen 3 announcement; I want to be rid of my 9900k (crazy temps & power consumption making my life a misery in summer, although it warms my home office in winter :) ). My main hope is that they drop the motherboard fan and hit 5GHz. With 10 or 12 core parts needing no more than a reasonable 130W, that would be awesome (since when is 130W reasonable? :p ).
For RDNA 2, I just can't get excited. Somehow I feel it won't be any kind of real competitor to Ampere; wait and see, I guess. But I don't think there's much hope with a 7nm to "7nm+" transition: to get better perf than the 5700XT they will have to raise power like crazy, which will probably bottleneck before they reach the 2080 Ti equivalent that the 3070 is, or at least not at the same power consumption at all.
NVIDIA announce the RTX 3090, RTX 3080, RTX 3070 with 2nd generation RTX
2 Sep 2020 at 8:04 am UTC Likes: 2
Quoting: dubigrasu
Interesting bit from Nvidia labs: (from the video posted in op)

Actually, you touch on something I see all the time in the silicon industry: all the internal work runs on Linux, or at least on open source RTOSes when it comes to smaller targets - although in that case the SDK is usually developed on Linux too. I don't know many OS/driver/runtime devs who enjoy working on something other than Linux. But there is pressure from above to support losedow$ and such... In some companies it reaches the ridiculous point where you must have an "official" losedow$ PC because those are the rules, plus a grey-zone Linux one (still paid for by the company though) to actually work. In others you can only work in a VM, which is even worse.
Lenovo begins rollout of Fedora Linux on their laptops, Ubuntu systems due soon
30 Aug 2020 at 9:21 pm UTC Likes: 1
I think that's actually the first vendor to offer Fedora by default, cool! :) I tend to much prefer Fedora to Ubuntu, as it manages to be both more bleeding edge and much more stable for my taste.
NVIDIA GeForce NOW adds Chromebook support, so you can run it on Linux too
18 Aug 2020 at 5:04 pm UTC Likes: 1
Quoting: Shmerl
Obviously, at this point NVIDIA are not supporting the Linux desktop with GeForce NOW in any way and it could break any time - so keep that in mind. Keep in mind also that GeForce Now is using Windows on the server. So it's not any better than dual booting or running Windows in a VM locally. You just get a longer cable for it. It's essentially a glorified remote Windows VM.

I hope they could add a Linux VM option, potentially with a small price reduction since there is no need to pay for a licence on their side. Even without a reduction, that would be quite nice: I could still use the Windows VM for the rare games that can't run even on Proton, and use the Linux VM most of the time for my laptop.
Looks like the recent upwards trend of the Linux market share has calmed down
3 Aug 2020 at 11:35 am UTC
Quoting: Liam Dawe
Quoting: 3zekiel
quite a few of the mainstream tech medias are now regularly talking about proton/lutris & co
Well, hopefully a few of their readers will continue to point out GOL is here and covers Linux all the time ;)

Haha yes, although I'm not sure how many of their readers can read English; French people tend to not be so good at learning languages (or more likely at teaching them). It has to do with thinking our country is the only one in the world, I guess. If you want to really conquer that market, I guess you'll have to translate :p