
AMD FidelityFX Super Resolution 2.0 announced


AMD has today revealed AMD FidelityFX Super Resolution 2.0, the next-generation version of their impressive upscaling tech (now temporal rather than spatial) that can really help improve performance.

For those who don't use it and are confused: the whole idea is that it produces high-resolution output from lower-resolution input. It's one way to get good performance at 4K, for example, for games that are a bit too resource-intensive. It can work with many resolutions, and the Steam Deck has FSR built-in.

There are limitations of course, and AMD gave these examples for FSR 1.0:

  • FSR 1.0 requires a high quality anti-aliased source image, which is not always available without making further changes to code and/or the engine.
  • Upscaling quality is unavoidably a function of the source resolution input. So with a low resolution source, there is just not enough information with a spatial upscaler for thin detail.

Bring on FSR 2.0 then! Which continues to be open source.

"FSR 2.0 is the result of years of research from AMD, and is developed from the ground up. It uses cutting-edge temporal algorithms to reconstruct fine geometric and texture detail in the upscaled image, along with high-quality anti-aliasing."

Some of what's new in FSR 2.0:

  • Delivers image quality similar to or better than native rendering, using temporal data.
  • Includes high-quality anti-aliasing.
  • Higher image quality than FSR 1.0 at all quality presets/resolutions.
  • Does not require dedicated Machine Learning (ML) hardware.
  • Boosts framerates in supported games across a wide range of products and platforms, both AMD and select competitors.

It will continue to work across all vendors too so NVIDIA and Intel will also benefit from this. Since it's open source, any developer can just pick it up and use it.

FSR 2.0 temporal upscaling uses frame color, depth, and motion vectors in the rendering pipeline, and leverages information from past frames to create very high-quality upscaled output; it also includes optimized high-quality anti-aliasing. Spatial upscaling solutions like FSR 1.0 use data from only the current frame to create the upscaled output, and rely on the separate anti-aliasing incorporated into a game’s rendering pipeline. Because of these differences, FidelityFX Super Resolution 2.0 delivers significantly higher image quality than FSR 1.0 at all quality mode presets and screen resolutions.
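As a rough illustration of that temporal pipeline (a toy sketch with assumed function names and a simple exponential blend, not AMD's actual algorithm): each frame, the previous output is reprojected using per-pixel motion vectors and blended with the new sample, accumulating detail over time.

```python
def reproject(history, motion):
    """Fetch each pixel of the history buffer from where it was last frame,
    using per-pixel (dy, dx) motion vectors, clamped at the image border."""
    h, w = len(history), len(history[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y][x]
            sy = min(max(y - dy, 0), h - 1)
            sx = min(max(x - dx, 0), w - 1)
            out[y][x] = history[sy][sx]
    return out

def temporal_blend(current, history, motion, alpha=0.1):
    """Blend the new frame into the reprojected history (exponential
    moving average) - this is what accumulates sub-pixel detail."""
    reproj = reproject(history, motion)
    return [[alpha * c + (1 - alpha) * r
             for c, r in zip(crow, rrow)]
            for crow, rrow in zip(current, reproj)]
```

Real implementations add rejection heuristics on top of this (to discard stale history and avoid ghosting), but the accumulate-and-reproject loop is the core idea.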

An example AMD included was DEATHLOOP which is adding support for it:


When will it actually be available? They're not saying, other than a vague "Q2 2022". They will be attending GDC next week, though, to give a talk on it.

Article taken from GamingOnLinux.com.
About the author -
I am the owner of GamingOnLinux. After discovering Linux back in the days of Mandrake in 2003, I constantly came back to check on the progress of Linux until Ubuntu appeared on the scene and it helped me to really love it. You can reach me easily by emailing GamingOnLinux directly. Find me on Mastodon.
The comments on this article are closed.

Doc Angelo Mar 18, 2022
Quoting: axredneckHow can i use them just for antialiasing (without upscaling) ?
(Especially for games that have neither DLSS/FSR nor antialiasing built in)

I'm not sure; I don't have an RTX card, and never tried FSR. But in theory, you could run your native resolution (for example 1080p), let it upscale to 1440p or 2160p, and then configure your display to downscale that back to 1080p. That should result in a high-quality anti-aliasing effect.

But I never tried that.
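The render-high-then-downscale trick described in this comment is essentially supersampling; the downscale step could be sketched like this (a toy illustration assuming a simple box filter, which is what smooths jagged edges by averaging several samples into one output pixel):

```python
def box_downscale(image, factor):
    """Average factor x factor blocks of a high-resolution image into a
    single output pixel. Each output pixel is the mean of its block, so
    hard edges in the source become smooth gradients in the result."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [image[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

For example, a 2x2 checkerboard of pure black and white pixels downscaled by a factor of 2 becomes a single mid-grey pixel.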
elmapul Mar 18, 2022
"e "Q2 2022". They will be attending GDC though next week to give a talk on it."
you know what also will happen q2 2022? second batch of steam decks...
elmapul Mar 18, 2022
Quoting: gradyvuckovicI don't see how something like Gamescope could have access to that data.
Gamescope doesn't, but Proton can, I guess.
elmapul Mar 18, 2022
Quoting: Doc AngeloRTX cards simply execute the process that was the result of the machine learning that was done on different hardware - which also happened to be regular hardware.
Isn't that the definition of an ASIC?
An ASIC is much more efficient at what it does, so I don't see the issue here.
If you know you're going to need more multiplications than sums, subtractions and divisions, there's no reason to care about add/sub/div as much as you do about the multiplication operation.
elmapul Mar 18, 2022
Quoting: somebody1121I hope they release the source code soon so godot 4 can add this (Although it need some TAA implementation first for the motion vectors)

I don't think they have enough manpower for that; it would only delay Vulkan. I think they will focus on delivering Vulkan in 4.0, bringing back OpenGL in 4.1, and those extra features will land in 4.2 or later.

One thing I think they could implement, though, that shouldn't require too much effort: render the text in a different render context, so FSR 1.0 could work on everything else in the game; then render the text as the last step at the higher resolution instead of upscaling it.
(Something like a text overlay on top of the game render.)

I think it's doable.
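The overlay idea in this comment could be sketched like this (a hypothetical illustration with made-up function names, not Godot or FSR API; nearest-neighbour stands in for the real upscaler): the 3D scene is upscaled from a low render resolution, then the UI/text layer, drawn at native resolution, is composited on top so it stays sharp.

```python
def upscale_nearest(image, factor):
    """Cheap stand-in for the real upscaler: nearest-neighbour upscale
    of the low-resolution 3D scene."""
    return [[image[y // factor][x // factor]
             for x in range(len(image[0]) * factor)]
            for y in range(len(image) * factor)]

def composite_ui(scene_hi, ui, ui_mask):
    """Draw the native-resolution UI layer over the upscaled scene:
    wherever the mask is set, take the UI pixel instead of the scene."""
    return [[ui[y][x] if ui_mask[y][x] else scene_hi[y][x]
             for x in range(len(scene_hi[0]))]
            for y in range(len(scene_hi))]
```

This mirrors how engines integrating FSR are advised to do it in general: upscale the scene first, then render UI and text at display resolution as a final pass.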
elmapul Mar 18, 2022
Quoting: iskaputt
Quoting: Doc AngeloThe meaning of the terms around "artificial intelligence" are weird

"Machine Learning" is a huge pile of buzzword/b*llsh*t bingo. That's just how it is.

Sigh, no it isn't.
Go learn how it works and what is possible with it; you have no idea what you're talking about.

On a side note, I can agree that the term "learn" is misleading. It's like calling an AI an intelligent system, as if the machine could "think". That, I agree, is bullshit.
iskaputt Mar 18, 2022
Quoting: elmapul
Quoting: iskaputt
Quoting: Doc AngeloThe meaning of the terms around "artificial intelligence" are weird

"Machine Learning" is a huge pile of buzzword/b*llsh*t bingo. That's just how it is.

Sigh, no it isn't.
Go learn how it works and what is possible with it; you have no idea what you're talking about.

On a side note, I can agree that the term "learn" is misleading. It's like calling an AI an intelligent system, as if the machine could "think". That, I agree, is bullshit.

I'm strictly speaking about the rhetoric used around ML, to be fair that wasn't clear in my comment. What's coming from that field in terms of results is impressive at times.

As you seem to be knowledgeable about the topic, how far has it come on the "explainability"? Last time I dived deeper into ML (like a couple years ago) the "why" was pretty sketchy (coming from a more classic statistical modelling POV) and there was that whole issue about decisions made with these models no one could really understand (like ML-based credit ratings). I know there have been some advancements, but I'm not following ML closely. But I'm watching the occasional talk in academia about ML being used for some specific problem and that didn't bode well for the answer.
furaxhornyx Mar 18, 2022
Quoting: Doc Angelo
Quoting: axredneckHow can i use them just for antialiasing (without upscaling) ?
(Especially for games that have neither DLSS/FSR nor antialiasing built in)

I'm not sure; I don't have an RTX card, and never tried FSR. But in theory, you could run your native resolution (for example 1080p), let it upscale to 1440p or 2160p, and then configure your display to downscale that back to 1080p. That should result in a high-quality anti-aliasing effect.

But I never tried that.

I have tried that in the past, on Windows, on some older games that didn't have built-in AA.
Scaling up to 4K, then downscaling to 1080p (the max resolution supported by my monitor), the edges did look better, as well as some other details.
But it was a bit clunky to use (you had to enable it manually for each .exe).

I found the link again that explains how it works (with the dot grids): https://www.nvidia.com/en-us/geforce/news/dynamic-super-resolution-instantly-improves-your-games-with-4k-quality-graphics/
soulsource Mar 18, 2022
Quoting: kokoko3kI'm curious to see how it will work with static frames.
I imagine that temporal AA induces some sort of subtle movement to the camera (?).
If this is true, then it would not be possible to implement such a thing on the compositor side, like with FSR today.
Somebody has some insight on TAA?
I've been playing with Unreal's temporal anti-aliasing some time ago. It indeed moves the camera very slightly every frame, using the FViewMatrices::HackAddTemporalAAProjectionJitter(const FVector2D& offset) method. The offset is taken from one of several hardcoded sample patterns, which can be selected using the r.TemporalAASamples setting.
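For the curious: such hardcoded sample patterns are commonly low-discrepancy sequences, and a frequent choice in TAA implementations is Halton(2,3). A sketch of the general idea (not necessarily Unreal's exact pattern):

```python
def halton(index, base):
    """Radical-inverse (Halton) sequence element in [0, 1): reverse the
    digits of `index` in the given base and place them after the point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offsets(samples):
    """Sub-pixel camera offsets in [-0.5, 0.5), one per frame, cycled.
    Bases 2 and 3 give well-distributed x/y pairs."""
    return [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5)
            for i in range(samples)]
```

Each frame the projection matrix is nudged by the next offset, so over a cycle the camera effectively samples many sub-pixel positions, which the temporal accumulation then averages.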

Quoting: denyasis
QuoteDelivers similar or better than native image quality using temporal data
Wait, so it can make an image that's better than the original??
Yes. That's because the camera is moved sub-pixel distances between frames, giving you an effective resolution that can be above the original one. It's similar to Multisampling, but does not have the same performance overhead, as it re-uses images from previous frames, instead of doing multiple samples of the same frame.

Since it's using previous images, it cannot account for objects that were not visible in those frames or that change their movement speed. This causes ghosting and is the main drawback of temporal anti-aliasing over regular multisampling.


Last edited by soulsource on 18 March 2022 at 10:34 am UTC
elmapul Mar 18, 2022
Quoting: iskaputt
Quoting: elmapul
Quoting: iskaputt
Quoting: Doc AngeloThe meaning of the terms around "artificial intelligence" are weird

"Machine Learning" is a huge pile of buzzword/b*llsh*t bingo. That's just how it is.

Sigh, no it isn't.
Go learn how it works and what is possible with it; you have no idea what you're talking about.

On a side note, I can agree that the term "learn" is misleading. It's like calling an AI an intelligent system, as if the machine could "think". That, I agree, is bullshit.

I'm strictly speaking about the rhetoric used around ML, to be fair that wasn't clear in my comment. What's coming from that field in terms of results is impressive at times.

As you seem to be knowledgeable about the topic, how far has it come on the "explainability"? Last time I dived deeper into ML (like a couple years ago) the "why" was pretty sketchy (coming from a more classic statistical modelling POV) and there was that whole issue about decisions made with these models no one could really understand (like ML-based credit ratings). I know there have been some advancements, but I'm not following ML closely. But I'm watching the occasional talk in academia about ML being used for some specific problem and that didn't bode well for the answer.

I'm not an expert; I saw a few videos explaining the process, but I'm not a data scientist or anything, and I don't have a good video to recommend off the top of my head.
I was just a bit pissed off by what you said, but as you explained, it was just an unfortunate choice of words, so let's ignore that.

I agree that a LOT of companies are putting "machine learning" out as a buzzword to market their tech to investors or end users, but the field as a whole isn't limited to that.
Unlike metaverse...