
NVIDIA talk up bringing DirectX Ray Tracing to Vulkan


With Ray Tracing becoming ever more popular, NVIDIA have written up a technical post on bringing DirectX Ray Tracing to Vulkan to encourage more developers to do it.

The blog post, titled "Bringing HLSL Ray Tracing to Vulkan", mentions that porting content requires translating both the API calls (DirectX to Vulkan) and the shaders (HLSL to SPIR-V), something that's not so difficult now thanks to the SPIR-V backend in Microsoft's open source DirectXCompiler (DXC).

Since last year, NVIDIA have also added ray tracing support to DXC's SPIR-V back-end using their SPV_NV_ray_tracing extension, and there are already titles shipping with it like Quake II RTX and Wolfenstein: Youngblood. While this is all NVIDIA-only for now, The Khronos Group is having discussions to get a cross-vendor version of the Vulkan ray tracing extension implemented, and NVIDIA expect the work already done can be used with it, which does sound good.
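To give a rough idea of the workflow (a sketch for illustration, not code from NVIDIA's post): the shader side is an offline compile with DXC, and the resulting SPIR-V then feeds into Vulkan like any other shader. The file names below are placeholders:

    // Hypothetical offline compile step (file names are placeholders):
    //   dxc -T lib_6_3 -spirv -fspv-target-env=vulkan1.1 \
    //       -fspv-extension=SPV_NV_ray_tracing raygen.hlsl -Fo raygen.spv
    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <fstream>
    #include <stdexcept>
    #include <vector>

    // Load the DXC-produced SPIR-V and wrap it in a VkShaderModule,
    // exactly as you would for any other SPIR-V shader.
    VkShaderModule loadSpirvModule(VkDevice device, const char* path) {
        std::ifstream file(path, std::ios::binary | std::ios::ate);
        if (!file) throw std::runtime_error("cannot open SPIR-V file");
        std::vector<char> bytes(static_cast<size_t>(file.tellg()));
        file.seekg(0);
        file.read(bytes.data(), static_cast<std::streamsize>(bytes.size()));

        VkShaderModuleCreateInfo info{};
        info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
        info.codeSize = bytes.size();
        info.pCode    = reinterpret_cast<const uint32_t*>(bytes.data());

        VkShaderModule module = VK_NULL_HANDLE;
        if (vkCreateShaderModule(device, &info, nullptr, &module) != VK_SUCCESS)
            throw std::runtime_error("vkCreateShaderModule failed");
        return module;
    }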

NVIDIA go on to give an example and sum it all up with this:

The NVIDIA VKRay extension, with the DXC compiler and SPIR-V backend, provides the same level of ray tracing functionality in Vulkan through HLSL as is currently available in DXR. You can now develop ray-tracing applications using DXR or NVIDIA VKRay with minimized shader re-writing to deploy to either the DirectX or Vulkan APIs.
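For the host-side half of that claim, here's roughly what creating a pipeline against NVIDIA's VKRay extension (VK_NV_ray_tracing) looks like. This is a sketch under assumptions (a device with the extension enabled, an existing pipeline layout, and an entry point name matching your HLSL [shader("raygeneration")] function), not code from NVIDIA's post:

    #include <vulkan/vulkan.h>

    // Build a minimal ray tracing pipeline holding a single ray
    // generation shader, using the VK_NV_ray_tracing entry points.
    VkPipeline createRaygenPipeline(VkDevice device, VkShaderModule raygen,
                                    VkPipelineLayout layout, const char* entry) {
        VkPipelineShaderStageCreateInfo stage{};
        stage.sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
        stage.stage  = VK_SHADER_STAGE_RAYGEN_BIT_NV;
        stage.module = raygen;
        stage.pName  = entry; // the HLSL raygeneration entry name (assumption)

        VkRayTracingShaderGroupCreateInfoNV group{};
        group.sType = VK_STRUCTURE_TYPE_RAY_TRACING_SHADER_GROUP_CREATE_INFO_NV;
        group.type  = VK_RAY_TRACING_SHADER_GROUP_TYPE_GENERAL_NV;
        group.generalShader      = 0; // index into the stage array above
        group.closestHitShader   = VK_SHADER_UNUSED_NV;
        group.anyHitShader       = VK_SHADER_UNUSED_NV;
        group.intersectionShader = VK_SHADER_UNUSED_NV;

        VkRayTracingPipelineCreateInfoNV info{};
        info.sType             = VK_STRUCTURE_TYPE_RAY_TRACING_PIPELINE_CREATE_INFO_NV;
        info.stageCount        = 1;
        info.pStages           = &stage;
        info.groupCount        = 1;
        info.pGroups           = &group;
        info.maxRecursionDepth = 1;
        info.layout            = layout;

        // Extension function, so it has to be fetched at runtime.
        auto createRtPipelines = reinterpret_cast<PFN_vkCreateRayTracingPipelinesNV>(
            vkGetDeviceProcAddr(device, "vkCreateRayTracingPipelinesNV"));

        VkPipeline pipeline = VK_NULL_HANDLE;
        createRtPipelines(device, VK_NULL_HANDLE, 1, &info, nullptr, &pipeline);
        return pipeline;
    }

A real renderer would go on to build acceleration structures and a shader binding table, but the point stands: the same HLSL source can target either API.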

See the full post here.

Eventually, with efforts like this and once Vulkan has proper cross-vendor ray tracing support all wired up, it should be easier for developers to get Vulkan ports looking just as good as their DirectX counterparts. This makes the future of the Vulkan API sound ever more exciting.

44 comments
tuubi 23 February 2020 at 6:36 pm UTC
1xok: How is raytracing actually implemented in the Linux version of Quake 2, or is it switched off there? Can anyone comment on this?

I cannot try it, I only have a GTX 970.

Here's some reading for you:
https://www.gamingonlinux.com/index.php?module=search&q=quake+II+rtx
Shmerl 23 February 2020 at 7:12 pm UTC
ElectroDD: Last example, G-Sync... They went as far as manipulating the branding from monitor manufacturers when they lost the battle against AMD.

I'd say they failed overall. Example: https://www.lg.com/us/monitors/lg-27GL850-gaming-monitor

Quote:
* NVIDIA® G-SYNC® Compatible
* Adaptive-Sync (FreeSync™)

Adaptive sync is mentioned.
appetrosyan 23 February 2020 at 11:16 pm UTC
Liam Dawe:
appetrosyan: Don't quite get what they gain from this. Still, this means that we could (in theory) have RTX accelerated Quake 2 on Linux.

We already do. That's the point. Quake II RTX is out and supports Linux.

Thanks! I had no idea. I would like to say that I'd give it a try, but I have an old RX 480.
appetrosyan 23 February 2020 at 11:17 pm UTC
mirv:
Shmerl:
mirv: There's actually quite a lot of a video card that isn't used at any given time, so while adding some dedicated raytracing pathways may reduce the area dedicated to other features, I don't think the impact is of the magnitude you might be thinking.

If general GPU compute units can handle ray tracing, then fine, but apparently they aren't good enough for it (yet).

Indeed, and different vendor approaches to their compute units will definitely be worth keeping an eye on.

I'm of the opinion myself that despite NVIDIA pushing their own RTX extensions, it will all eventually collapse back into generic compute units, maybe with some differences from current designs to make them more efficient for raytracing-type work, but compute units nonetheless. That will make raytracing just another software package, like Radeon Rays.

Another possibility is that the tensor cores become the new CUDA cores.
elmapul 24 February 2020 at 5:03 am UTC
Shmerl:
Eike: It's just a matter of time.

That said, I avoided buying a GTX 2000, because at the moment, it feels more like an expensive gimmick.

It is a gimmick. More of a marketing tool than a really useful feature. To achieve good quality real time ray tracing, you need really powerful hardware. And what can fit in a single GPU gives at best some minor enhancement to the lighting, and as I said above, it naturally comes at the cost of everything else.

wtf?
Ray tracing is the holy grail of computer graphics. Maybe RTX, their dedicated cores, may be a gimmick, but ray tracing? That is precisely why the computer graphics industry had to use countless other tricks: because they didn't have real time ray tracing. What NVIDIA did was a miracle that was later followed by others. Sure, it's not as good as ray tracing the entire frame, the same way that Eevee (in Blender) is not as good as Cycles, but it's close enough.

Rendering in 16ms what usually takes hours on a much better machine is no small deal. Sure, it's not as good, but it's impressive nonetheless.

One thing that I hate about gamers in general is how clueless they are. I don't give a fuck about 4K; raytracing is a serious technology, 4K is just a gimmick. But when they realized that they would have to give up on 4K to play with raytracing, what did they do? They trash-talked the technology, and that is the reason why it didn't sell as it should have. Sure, there are other factors too, like games that aren't really optimized for it, but seeing the reception this technology got just disgusts me.
Shmerl 24 February 2020 at 5:11 am UTC
elmapul: wtf?
Ray tracing is the holy grail of computer graphics. Maybe RTX, their dedicated cores, may be a gimmick, but ray tracing?

We aren't talking about ray tracing, we are talking about NVIDIA's implementation. See my post above. What they did is not a miracle, it's a gimmick. Once someone makes serious real time ray tracing work on commodity hardware, you can call it a miracle. NVIDIA did nothing of the sort.

Last edited by Shmerl on 24 February 2020 at 5:13 am UTC
TemplarGR 24 February 2020 at 8:33 am UTC
And like I was saying two years ago on various sites, ray tracing (at least in this form) is just a slightly different type of shader.... I mean, in typical NVIDIA fashion, when RTX 2000 was released they presented RTX like it is some type of new fixed function hardware that "AMD does not have", as if their hardware supports ray tracing while AMD's does not.... This was bullshit, plain and simple, for anyone with even a mediocre knowledge of how GPU pipelines work.

In reality, any modern GPU from the last decade could perform ray tracing. The issue was never whether they could, the issue was how fast. AMD already has quite decent hardware to pull off ray tracing; the issue is that there is no standard solution for implementing it yet. NVIDIA, like with CUDA, just offered a better software solution to game developers first. That's all. AMD's issue has always been in software, not in hardware. But RTX will probably never dominate the market, because developers aren't going to invest in using RTX while open solutions and next gen consoles are around the corner....

And the other, more important thing about ray tracing: it can be done efficiently on CPU cores! This means that a new ray tracing API that uses both CPU and GPU resources would be ideal, exploiting the 8+ cores on modern systems for superior graphics quality. I don't think NVIDIA's current solution does that. So we need to wait for the next consoles/big Navi/Intel's GPU and see. They will probably announce some kind of open standard framework for raytracing this year or the next.
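To illustrate the point that tracing a ray is ordinary arithmetic any CPU core can execute, here's a toy ray-sphere intersection test in C++. This is the textbook quadratic formula, not tied to any vendor's hardware or API:

    #include <cmath>
    #include <optional>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Solve |o + t*d - c|^2 = r^2 for the nearest positive t.
    // Returns the hit distance along the ray, or nothing on a miss.
    std::optional<double> raySphere(const Vec3& o, const Vec3& d,
                                    const Vec3& c, double r) {
        Vec3 oc{o.x - c.x, o.y - c.y, o.z - c.z};
        double a = dot(d, d);
        double b = 2.0 * dot(oc, d);
        double k = dot(oc, oc) - r * r;
        double disc = b * b - 4.0 * a * k;
        if (disc < 0.0) return std::nullopt;           // ray misses the sphere
        double t = (-b - std::sqrt(disc)) / (2.0 * a); // nearer root
        return (t > 0.0) ? std::optional<double>(t) : std::nullopt;
    }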
TemplarGR 24 February 2020 at 8:45 am UTC
elmapul:
Shmerl:
Eike: It's just a matter of time.

That said, I avoided buying a GTX 2000, because at the moment, it feels more like an expensive gimmick.

It is a gimmick. More of a marketing tool than a really useful feature. To achieve good quality real time ray tracing, you need really powerful hardware. And what can fit in a single GPU gives at best some minor enhancement to the lighting, and as I said above, it naturally comes at the cost of everything else.

wtf?
Ray tracing is the holy grail of computer graphics. Maybe RTX, their dedicated cores, may be a gimmick, but ray tracing? That is precisely why the computer graphics industry had to use countless other tricks: because they didn't have real time ray tracing. What NVIDIA did was a miracle that was later followed by others. Sure, it's not as good as ray tracing the entire frame, the same way that Eevee (in Blender) is not as good as Cycles, but it's close enough.

Rendering in 16ms what usually takes hours on a much better machine is no small deal. Sure, it's not as good, but it's impressive nonetheless.

One thing that I hate about gamers in general is how clueless they are. I don't give a fuck about 4K; raytracing is a serious technology, 4K is just a gimmick. But when they realized that they would have to give up on 4K to play with raytracing, what did they do? They trash-talked the technology, and that is the reason why it didn't sell as it should have. Sure, there are other factors too, like games that aren't really optimized for it, but seeing the reception this technology got just disgusts me.

I always love it when clueless people call others clueless.... It is funny.

No, NVIDIA performed no such "miracle". NVIDIA just caught up with AMD's hardware architecture after many years. NVIDIA's GPUs for the better part of this decade were lagging behind in technology. They lacked async compute (VERY important, and if games actually utilized it we would be seeing superior games), they lacked shading power and relied on geometry and brute force, they over-tessellated everything just to win in benchmarks, etc.

RTX is just some CUDA shaders that perform raytracing effects. That is why NVIDIA, after some months, enabled the 1000 series to have RTX too.... It was just software. And guess what, architecturally Vega and Navi from AMD could run RTX just as efficiently, if NVIDIA legally allowed their shaders to be translated to AMD hardware.... Oops, I guess now they did.

4K is not a "gimmick". Alongside HDR, it can enhance graphical fidelity considerably. They do cost a lot of resources, but if I had the choice between 4K/HDR and RTX at 1080p, I would pick 4K every single time. Why? Because most effects RTX performs can be done with traditional techniques and look quite good, while 4K and HDR literally upgrade the detail level of the whole screen. So yeah, RTX is a gimmick.
mo0n_sniper 24 February 2020 at 9:00 am UTC
RTX on GTX 1000 series cards doesn't run anywhere near as fast as on RTX 2000 cards. I ran Quake II on my GTX 1600 and got 3 FPS.
TemplarGR 24 February 2020 at 9:02 am UTC
Shmerl:
Eike: To turn it downside up: Do you know another promising way to go to make graphics rendered in realtime "photorealistic"(*)?

Make some kind of LPU (Lighting Processing Unit) that only has ray tracing ASICs and can work in parallel with everything else without hindering regular GPU performance.

There is no need for dedicated ray tracing hardware. Raytracing can run on CPU cores and can run on GPGPUs. GPGPUs are already pretty much vector CPUs for most intents and purposes, so what is the point of creating another vector CPU just for ray tracing?

There are certain things that are missing for 100% raytraced video games:

1) More CPU cores: more CPU cores can calculate raytracing quite efficiently in software.

2) GPGPUs with better FP16 performance and more efficient pipelines. We pretty much have that already; all new architectures have FP16 capabilities and async compute.

3) Elimination of PCIe latency. That is the elephant in the room that no one talks about. In order for CPU cores to be utilized better for raytracing, you ideally need lightning fast CPU<->GPU communication, just like with GPGPU. How can we achieve that? AMD Fusion.... AMD's early vision that they couldn't pull off because they didn't have the resources to market it. Intel is on to this; they clearly want to get "Intel Fusion" before AMD.

The future (though many years from now) is pure raytraced video games (no rasterization) running on pure APU hardware in multiple sockets, each with its own HBM RAM. With raytracing you can easily cut the picture into 2, 4, or 8 portions and dedicate each part to a different APU socket (sketched below). This way you get superior performance, with PCIe latency affecting nothing. This would also revolutionize non-game computing, since it would allow OpenCL-next to have extreme levels of performance.


Last edited by TemplarGR on 24 February 2020 at 9:03 am UTC
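That image-splitting idea is easy to picture in code. Below is a minimal sketch of dividing a frame into independent bands with plain CPU threads; tracePixel is a hypothetical stand-in for whatever does the per-pixel ray tracing, and nothing here comes from any particular API:

    #include <cstdint>
    #include <thread>
    #include <vector>

    // Hypothetical per-pixel tracer; a stub standing in for the real work.
    uint32_t tracePixel(int x, int y) { return uint32_t((x ^ y) & 0xFF); }

    // Split the image into horizontal bands, one per worker. Rays don't
    // depend on each other, so each band renders fully independently;
    // the same property would let separate sockets or APUs share a frame.
    void renderParallel(std::vector<uint32_t>& image, int width, int height,
                        int workers) {
        std::vector<std::thread> pool;
        for (int w = 0; w < workers; ++w) {
            pool.emplace_back([&, w] {
                int y0 = height * w / workers;
                int y1 = height * (w + 1) / workers;
                for (int y = y0; y < y1; ++y)
                    for (int x = 0; x < width; ++x)
                        image[y * width + x] = tracePixel(x, y);
            });
        }
        for (auto& t : pool) t.join();
    }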