
NVIDIA to launch DLSS support for Proton on Linux tomorrow (June 22)


While DLSS has technically been available in the NVIDIA drivers for Linux for some time now, the missing piece was support for Proton, which will be landing tomorrow, June 22.

In one of their GeForce blog posts, NVIDIA made it very clear:

Today we’re announcing DLSS is coming to Facepunch Studios’ massively popular multiplayer survival game, Rust, on July 1st, and is available now in Necromunda: Hired Gun and Chernobylite. Tomorrow, with a Linux graphics driver update, we’ll also be adding support for Vulkan API DLSS games on Proton.

This was originally revealed on June 1 along with the GeForce RTX 3080 Ti and GeForce RTX 3070 Ti announcements, so at least now we have a date for part of this extra support for Linux and DLSS. As stated, it will be limited to games that natively use Vulkan as their graphics API, which for now is a short list including DOOM Eternal, No Man's Sky, and Wolfenstein: Youngblood. Support for running Windows games that use DirectX with DLSS in Proton will arrive "this Fall".

With that in mind, it's likely we'll see the 470 driver land tomorrow, unless NVIDIA have a smaller driver coming first with this added in. We're excited for the 470 driver as a whole, since it will include support for async reprojection to help VR on Linux, plus hardware-accelerated GL and Vulkan rendering with Xwayland.

47 comments

x_wing Jun 22, 2021
Quoting: 3zekielCUDA did/does work, PhysX for a long time too. When you invest so much R&D in something cutting edge, you will try to monetize it to death, and if a method worked before, you will try again.

PhysX was on life support for many years; in the end it was the same as their tessellation strategy. The discussion here is about bringing solutions, not gimmick features, which is what any user should look at. Unless you're a shareholder of Nvidia, this strategy cannot be appreciated (mainly from a Linux user's PoV).

Quoting: 3zekielNow, they did open source the APIs as far as I can tell, so everyone should be able to implement a source-compatible solution. I agree they could have made some standard APIs from the start though; the best would be to make it a Vulkan extension.

Link? Unless you mean the SDK.


Last edited by x_wing on 22 June 2021 at 1:47 pm UTC
3zekiel Jun 22, 2021
Quoting: x_wingPhysX was on life support for many years; in the end it was the same as their tessellation strategy. The discussion here is about bringing solutions, not gimmick features, which is what any user should look at. Unless you're a shareholder of Nvidia, this strategy cannot be appreciated (mainly from a Linux user's PoV).

PhysX was a success for a long time, tessellation also made a lot of noise for them, and did give them an edge. OF COURSE it does not last forever - as long as there is competition - (CUDA has lasted for a very long time though). I am not appreciating it, I am being purely realistic. I don't particularly like it, I don't encourage it as a consumer, but I do understand the rationale from their PoV. And wishing them to do otherwise in their position is, well, wishful thinking.
We can wish all that we want, but R&D costs money, a lot of money. So companies want some payback for it. Nvidia is already making the effort of supporting most features faster and faster on Linux. And now contributing directly to Proton too, so we get even more. So from our point of view, it is a clear win.

As for an open-standard DLSS, it would be useless as of now, and while it might help get more games with XeSS if Intel does make it good, it would not change much anyway as long as they do not open up what's behind it, which they won't for the very obvious reasons I already pointed out in another message.

To this day, AMD still has no real support for RT on Linux (except in the proprietary driver that no one uses and no developers target). Also, they have a very bad track record in terms of day-one support for GPUs themselves (yes, they tend to boot now, clap clap, well done, thanks for allowing us to boot your GPU, now also give us all the features and a stable driver). Nvidia has lagged behind on Wayland support (but honestly, from a user perspective this does not matter one bit).
Who even knows when/if FSR will have (good) support on Linux, and even more so on Proton. It might work in ReShade though, according to GN's video.

Quoting: x_wingLink? Unless you mean the SDK.
They open sourced the headers (of NVAPI), it was in the news here multiple times. So I would guess you can do the plumbing behind that. Obviously there is no implementation behind it, just headers. I am not saying it became an "open standard" per se either. It has no frozen version for others to implement etc. It might come, who knows.

Now, ignoring all that, FSR might still help a little with subpar configs, and it is always nice to have. But it does not seem like support is too hot either - the Metro devs said they wouldn't, and the games which do are not so hot either. Maybe ReShade will save it... And even on consoles, I doubt it does much better than checkerboarding.
Shmerl Jun 22, 2021
Quoting: 3zekielNow the main issue is, with HW-accelerated inference, you tend to need to fine-tune the network for each accelerator architecture. So it is unlikely you will have a one-size-fits-all network you can deploy everywhere directly.

That's why cramming AI ASICs into a GPU isn't necessarily a good idea. If you really need an ASIC, it's better to just add another card.
3zekiel Jun 22, 2021
Quoting: Shmerl
Quoting: 3zekielNow the main issue is, with HW-accelerated inference, you tend to need to fine-tune the network for each accelerator architecture. So it is unlikely you will have a one-size-fits-all network you can deploy everywhere directly.

That's why cramming AI ASICs into a GPU isn't necessarily a good idea. If you really need an ASIC, it's better to just add another card.

Hmmmm, yes and no. First, dedicated HW in ML can still bring performance improvements which are dramatic enough that you cannot say no to them, especially to keep power consumption in check (in a datacenter, you will not double the number of GPUs without a crazy electricity bill...).
Secondly, even without talking about full-blown ASICs, you will use specific AI instructions to get a better result (unless you really do not care about efficiency). So your quantization will not be the same, and maybe you will need to swap some layers for better perf/results too. No magic there.
No real alternative either, especially if you want something real-time. In time, things will likely standardize more too, as acceleration is easier with better hw (fewer choices to be made if you have more transistors, you can just accelerate everything, or just run at a higher frequency with the same power consumption and accelerate less stuff anyway). But we are not quite there yet.
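To make the quantization point concrete, here is a minimal sketch of what per-backend tuning looks like in practice, assuming PyTorch's post-training dynamic quantization API (the model, sizes and backend choice are purely illustrative):

```python
import torch
import torch.nn as nn

# Tiny float model standing in for the inference network.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Pick the quantized kernel backend for the target hardware:
# "fbgemm" targets x86 servers, "qnnpack" targets ARM mobile chips.
torch.backends.quantized.engine = "fbgemm"

# Post-training dynamic quantization: weights become int8 and the Linear
# layers are swapped for quantized equivalents using the chosen backend.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Same float input, but the heavy matmuls now run through int8 kernels.
print(quantized(torch.randn(1, 64)).shape)
```

Retargeting the same network to a different accelerator generally means redoing this step (and any calibration) for that hardware, which is exactly the tuning cost described above.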
Shmerl Jun 22, 2021
What I mean is that it's better to have a separate ASIC card just for that AI work, instead of stuffing every new ASIC idea into the GPU, making it bloated and less useful for actual tasks like graphics.


Last edited by Shmerl on 22 June 2021 at 3:27 pm UTC
3zekiel Jun 22, 2021
Quoting: ShmerlWhat I mean is that it's better to have a separate ASIC card just for that AI work, instead of stuffing every new ASIC idea into the GPU, making it bloated and less useful for actual tasks like graphics.

Well, GPUs are an ASIC theme park anyway :)

More seriously, I guess die size is dominated by IO, so you have room to add some stuff. Also, more CUDA cores have diminishing returns, so why not give some other cool stuff to the buyer, be it RT or Tensor cores (which you can use for cool stuff other than DLSS/games if you would like to try your hand at the ML field). I think it is quite nice to give access to this kind of hw in consumer products.

Even for data center GPUs, you get better locality that way if you need to run more general computation on your CUDA cores on the side, so I guess this is good for them too (otherwise they would probably have pushed for a change).
x_wing Jun 22, 2021
Quoting: 3zekielPhysX was a success for a long time, tessellation also made a lot of noise for them, and did give them an edge. OF COURSE it does not last forever - as long as there is competition - (CUDA has lasted for a very long time though). I am not appreciating it, I am being purely realistic. I don't particularly like it, I don't encourage it as a consumer, but I do understand the rationale from their PoV. And wishing them to do otherwise in their position is, well, wishful thinking.
We can wish all that we want, but R&D costs money, a lot of money. So companies want some payback for it. Nvidia is already making the effort of supporting most features faster and faster on Linux. And now contributing directly to Proton too, so we get even more. So from our point of view, it is a clear win.

https://en.wikipedia.org/wiki/List_of_games_with_hardware-accelerated_PhysX_support

40 games in ten years... I call that far from a success.

You don't encourage it, but you see it as a win. Idk, for me it's clear that the best that can happen is that DLSS meets the same fate as PhysX, which is quite probable as their implementation requires a lot of resources from Nvidia.

Quoting: 3zekielAs for an open-standard DLSS, it would be useless as of now, and while it might help get more games with XeSS if Intel does make it good, it would not change much anyway as long as they do not open up what's behind it, which they won't for the very obvious reasons I already pointed out in another message.

To this day, AMD still has no real support for RT on Linux (except in the proprietary driver that no one uses and no developers target). Also, they have a very bad track record in terms of day-one support for GPUs themselves (yes, they tend to boot now, clap clap, well done, thanks for allowing us to boot your GPU, now also give us all the features and a stable driver). Nvidia has lagged behind on Wayland support (but honestly, from a user perspective this does not matter one bit).
Who even knows when/if FSR will have (good) support on Linux, and even more so on Proton. It might work in ReShade though, according to GN's video.

And somehow you end up with a rant against AMD, using arguments that apply to past releases of Nvidia hw as well...

All I have to say is that any AMD problems of the past won't change the fact that Nvidia's practices are anti-competitive. You may like them from a corporate point of view, but as an end user you should definitely find them despicable.

Quoting: 3zekielThey open sourced the headers (of NVAPI), it was in the news here multiple times. So I would guess you can do the plumbing behind that. Obviously there is no implementation behind it, just headers. I am not saying it became an "open standard" per se either. It has no frozen version for others to implement etc. It might come, who knows.

Now, ignoring all that, FSR might still help a little with subpar configs, and it is always nice to have. But it does not seem like support is too hot either - the Metro devs said they wouldn't, and the games which do are not so hot either. Maybe ReShade will save it... And even on consoles, I doubt it does much better than checkerboarding.

Correct me if I'm wrong, but I always understood that DLSS was part of NGX, not NVAPI.


Last edited by x_wing on 22 June 2021 at 4:34 pm UTC
slaapliedje Jun 22, 2021
Quoting: CatKiller
Quoting: slaapliedjeGranted, nvidia doesn't support Mac at all, which I still find amusing.
They can't. On Windows and Linux, the GPU vendor provides the API implementation. On Macs, Apple do. Apple and Nvidia had a falling out, so no more Nvidia hardware in Macs, so no support from Apple for Nvidia hardware in Macs.
Yup, that's what I find amusing. Like two people who used to be best buds let a woman get between them or something. Though to be fair (to be fai-uh), it isn't like nvidia is hurting for money because of it.
slaapliedje Jun 22, 2021
Quoting: x_winghttps://en.wikipedia.org/wiki/List_of_games_with_hardware-accelerated_PhysX_support

40 games in ten years... I call that far from a success.
This is kind of a false premise. The PhysX engines have been built into GPUs for years now, so special support for it is no longer a thing. So 40 sounds about right. New games for the most part just use the hardware if they need/want to.
x_wing Jun 22, 2021
Quoting: slaapliedje
Quoting: x_winghttps://en.wikipedia.org/wiki/List_of_games_with_hardware-accelerated_PhysX_support

40 games in ten years... I call that far from a success.
This is kind of a false premise. The PhysX engines have been built into GPUs for years now, so special support for it is no longer a thing. So 40 sounds about right. New games for the most part just use the hardware if they need/want to.

40 games in ten years, and most of them (if not all of them) sponsored by Nvidia. And as far as I know, most game physics nowadays still runs on the CPU. So the idea was to accelerate physics execution using the GPU, but their reluctance to make a standard made them fail, and 15 years after the first release of PhysX we are still using the CPU. IMO, that's a failure.


Last edited by x_wing on 22 June 2021 at 5:30 pm UTC