Zimerius: amongst other items yes, AMD is about to destroy both Intel AND Nvidia
:cough:... nVidia's ARM acquisition... :cough:

nVidia is about to branch out into the CPUs of the future while AMD's PC tech is essentially a dead end (at least in terms of growth).
AMD will mostly survive by cannibalizing Intel's corpse.
So just to get this straight, are DLSS and FSR just stopgap technologies, about which I will not care in the future as I will be able to play all (current 2020-2021) games without either with good framerates at 4K or 8K or 256K resolutions?

So games look better without DLSS and FSR, and they are just fake methods to make games run at higher resolutions by not calculating (or rendering) everything at the higher resolution?

So it is not like missing Hairworks in The Witcher 3, or some Geforce-specific lighting method in Splinter Cell games, or the Matrox-exclusive bump mapping in some game (was it possibly Dungeon Keeper Gold or Dungeon Keeper 2? Not sure...), where missing any of those features will basically make the game look worse or "wrong", also with modern GPUs?

StingingVelvet: I feel like I'm in an alternative universe for thinking DLSS looks pretty crappy. I mean I used it in Cyberpunk because there was no way to get 60fps without it really, but it definitely looks far worse than native resolution. All the people who think it looks as good or better and call it a "free framerate boost" just baffle me.
Are you comparing it to how it looks with DLSS disabled, playing at the higher resolution with a lower framerate?

I was under the impression that this is its whole purpose: it looks better than running the game at e.g. 1920x1080 (or even lower), though not necessarily quite as good as running at 4K or 8K or whatever, while offering much better performance than running at 4K or 8K without it?

So a trade-off of more performance for slight visual degradation? A bit like encoding an audio file with mp3 loses some fidelity, but offers much smaller file sizes in the process (and takes more processing power to play, I presume)? No one is claiming that mp3 would sound better, or even as good as, an uncompressed flac file, but many may consider the audio difference negligible.
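To put rough numbers on that trade-off, here's a quick sketch of the pixel-count maths (illustrative only; the 1440p-internal-to-4K-output mapping is just an assumed example of a "quality" upscaling mode, not any vendor's exact spec):

def pixels(width, height):
    return width * height

# Pixels shaded per frame at native 4K vs. a 1440p internal resolution
# that an upscaler would then blow up to 4K.
native_4k = pixels(3840, 2160)   # 8,294,400
internal  = pixels(2560, 1440)   # 3,686,400

print(f"Rendered pixels: {internal / native_4k:.0%} of native 4K")  # ~44%

# Per-pixel shading cost scales roughly with pixel count, so the GPU does
# well under half the work and the upscaler "guesses" the rest -- hence a
# big framerate gain for a (hopefully) small fidelity loss, mp3-style.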
Post edited June 06, 2021 by timppu
StingingVelvet: I feel like I'm in an alternative universe for thinking DLSS looks pretty crappy. I mean I used it in Cyberpunk because there was no way to get 60fps without it really, but it definitely looks far worse than native resolution. All the people who think it looks as good or better and call it a "free framerate boost" just baffle me.
You're far from alone in thinking that. Native resolution is always best, and there are plenty of examples of DLSS just looking off even with dedicated Tensor Cores:

Example 1 - See the "moving shirt" textures at 1:08.

Example 2 - Control. Native 4K vs DLSS (1440p upscale to 4K) vs Normal 1440p upscale to 4K. For the most part I can't see any difference other than DLSS washing out the detail on the floor texture even more than regular upscale.

StingingVelvet: That said, a lot of games use bad TAA that barely looks any better even at native resolution.
Totally this. I am regularly shocked by the number of people who complain about FXAA being blurry whilst TAA gets a free pass for being even more blurry... far worse than SMAA was (all the performance benefits of FXAA with way less blur). Example - Terminator Resistance: the stairs, the grating, etc, are abnormally smeary (this is with Depth of Field turned OFF). Meanwhile, here's a reminder of how sharp games looked 20 years ago with standard MSAA x4. I give up trying to make sense of where 20+ years of gaming "evolution" has taken us.
timppu: So just to get this straight, are DLSS and FSR just stopgap technologies, about which I will not care in the future as I will be able to play all (current 2020-2021) games without either with good framerates at 4K or 8K or 256K resolutions? So games look better without DLSS and FSR, and they are just fake methods to make games run at higher resolutions by not calculating (or rendering) everything at the higher resolution?

I was under the impression that this is its whole purpose: it looks better than running the game at e.g. 1920x1080 (or even lower), though not necessarily quite as good as running at 4K or 8K or whatever, while offering much better performance than running at 4K or 8K without it?
Yes, yes and yes. They are fancy upscalers using a similar kind of AI as some "AI enhanced" HD texture packs, but having to "guess" the missing pixels means they're not without visible artefacts (see Brian's post above for examples). The two biggest issues I have are that 1. support has to be individually added per game (so only a tiny fraction of games will ever be supported), and 2. game devs will become ever lazier, use "play your games at 1600x900 then fancy upscale" as a crutch rather than an enhancement, and optimize their games even less than they are doing now.
Post edited June 06, 2021 by AB2012
Enabling huge performance gains with a minor visual fidelity hit for Nvidia GeForce 1xxx cards too? Sign me up.

Nvidia: You need a RTX card for DLSS.
AMD: Hold my beer.
timppu: So just to get this straight, are DLSS and FSR just stopgap technologies, about which I will not care in the future as I will be able to play all (current 2020-2021) games without either with good framerates at 4K or 8K or 256K resolutions? So games look better without DLSS and FSR, and they are just fake methods to make games run at higher resolutions by not calculating (or rendering) everything at the higher resolution?

I was under the impression that this is its whole purpose: it looks better than running the game at e.g. 1920x1080 (or even lower), though not necessarily quite as good as running at 4K or 8K or whatever, while offering much better performance than running at 4K or 8K without it?
AB2012: Yes, yes and yes. They are fancy upscalers using a similar kind of AI as some "AI enhanced" HD texture packs, but having to "guess" the missing pixels means they're not without visible artefacts (see Brian's post above for examples). The two biggest issues I have are that 1. support has to be individually added per game (so only a tiny fraction of games will ever be supported), and 2. game devs will become ever lazier, use "play your games at 1600x900 then fancy upscale" as a crutch rather than an enhancement, and optimize their games even less than they are doing now.
Except, unless I'm missing something, FSR doesn't actually use any AI, ML or DL. It's basically a software upscaler. DLSS isn't perfect, but the technology does offer significantly reduced fidelity loss. It's a shame, as it would have been good to see a genuine alternative to DLSS.
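For what it's worth, "software upscaler" in that sense just means a plain spatial resampling filter with no learned component. A toy sketch of the idea in Python/NumPy (my own illustration; FSR's actual filter is more sophisticated than plain bilinear):

import numpy as np

def bilinear_upscale(img, out_h, out_w):
    # Toy spatial upscaler: every output pixel is a weighted blend of the
    # four nearest input pixels. No AI/ML anywhere, which is also why fine
    # detail can only be smoothed over, never recovered.
    in_h, in_w, _ = img.shape
    ys = np.clip((np.arange(out_h) + 0.5) * in_h / out_h - 0.5, 0, in_h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * in_w / out_w - 0.5, 0, in_w - 1)
    y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 2)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x0 + 1] * wx
    bot = img[y0 + 1][:, x0] * (1 - wx) + img[y0 + 1][:, x0 + 1] * wx
    return top * (1 - wy) + bot * wy

frame = np.random.rand(540, 960, 3)               # a 960x540 frame
print(bilinear_upscale(frame, 1080, 1920).shape)  # (1080, 1920, 3)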
The framerate boost with DLSS is so big that a small graphics downgrade is a very low price to pay for using it. This is especially true if you use ray tracing as well. Games such as Cyberpunk 2077, Control and Metro Exodus benefit greatly from turning ray tracing on, and DLSS is basically the only way to make them run at satisfying framerates (unless you're using something like an RTX 3090 paired with 1080p, that is). I bet that the AMD software upscaling will offer a much worse result. Maybe the performance will be quite good, but the quality of the graphics will suffer greatly. It's always good to have an alternative though, especially given that it'll support Nvidia's GPUs as well.
Post edited June 06, 2021 by Sarafan
Strijkbout: :cough:... nVidia's ARM acquisition... :cough:
...Has not actually happened yet and ongoing government investigations could still prevent it from happening.
timppu: Are you comparing it to how it looks with DLSS disabled, playing at the higher resolution with a lower framerate?

I was under the impression that this is its whole purpose: it looks better than running the game at e.g. 1920x1080 (or even lower), though not necessarily quite as good as running at 4K or 8K or whatever, while offering much better performance than running at 4K or 8K without it?

So a trade-off of more performance for slight visual degradation? A bit like encoding an audio file with mp3 loses some fidelity, but offers much smaller file sizes in the process (and takes more processing power to play, I presume)? No one is claiming that mp3 would sound better, or even as good as, an uncompressed flac file, but many may consider the audio difference negligible.
If you have a 1440p or 4K monitor but can't quite run the game at that resolution, then DLSS will obviously have a benefit in that it can upscale (theoretically) better than your monitor could. This is why I used it in Cyberpunk: to get 60fps at 1440p I had to use low settings. With DLSS I could use high settings, and the trade-off was worth it.

It's solely the "wow DLSS looks just as good as native and gives you a massive framerate boost!!!" people I am baffled by. It obviously makes the image look worse than native, it just might be worth it in certain scenarios. It's an alternative to putting other settings down.
There are plenty of techs that look kind of odd but people will defend to their deaths. The one apart from DLSS that I think of most recently was SSAO- I mean, I don't see black outlines around everything in reality, maybe others do, but that seems to be the sum total of what SSAO adds to games most of the time and at a decent overhead too. You still get people defending it though (and earlier; thermonuclear bloom, motion blur, lens flare etc, and some of those still get defended)

I get why people want to think that DLSS is uniquely great though and better than native: the RTX series are all massively overpriced compared to their predecessors and people want to think that the 3090 they've dropped... 2000 USD? on is value for money. The raytracing hardware is certainly required to get a decent framerate when raytracing so you can justify that (at least in theory AMD's hardware doesn't have added cost from added raytracing hardware though, albeit it's hard to tell with the GPU bubble, but RTX2000 was overpriced before that anyway), but the other addition that justified raising the cost - tensors - is way more of a hard sell.

The thing is, we've had a DLSS version that doesn't use the tensors, but uses bog-standard CUDA (DLSS 1.9). The current implementation doesn't have per-game learning any more, which was another big selling point, and that means it's hard to argue that any implementation by AMD has to be (far) worse due to not having tensors or per-game training. There isn't exactly a wealth of information and detail on how FSR works, but working similarly to DLSS on standard Radeon CUs, nVidia CUDA or Intel's EUs is feasible, with the irony being that we know it is from nVidia itself.

(Slightly off the direct topic, but there's no reason why AMD couldn't add tensors themselves to future cards like they added ray accelerators to RDNA2, if they wanted to. They're not a specifically nVidia tech and are used in Google's ASICs and tensorflow)
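As an aside on what a "tensor core" actually is: it's essentially a fixed-function unit for small matrix multiply-accumulates, which is the bread-and-butter operation of neural networks. A toy NumPy equivalent of one such operation (the 4x4, FP16-in / FP32-accumulate shape matches Nvidia's published description of the original units; take it as a sketch, not hardware documentation):

import numpy as np

# The operation a tensor core accelerates: D = A @ B + C on a small tile,
# with half-precision inputs and single-precision accumulation.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # (4, 4)

# A neural-network layer is just many of these tiles stitched together,
# which is why the same maths can also run (more slowly) on ordinary
# shader/CUDA cores -- as DLSS 1.9 showed.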
Why is it that these technological advances always come with buzzwords and such, but no actual explanation of what's changing? "This'll improve 2k games!" In what way? How does it accomplish this (at least from the perspective of someone who codes)? And how is this different from what came before (without redefining what came before, like with the godforsaken "enhanced sync" shit)?
Phasmid: The thing is, we've had a DLSS version that doesn't use the tensors, but uses bog-standard CUDA (DLSS 1.9). The current implementation doesn't have per-game learning any more, which was another big selling point, and that means it's hard to argue that any implementation by AMD has to be (far) worse due to not having tensors or per-game training. There isn't exactly a wealth of information and detail on how FSR works, but working similarly to DLSS on standard Radeon CUs, nVidia CUDA or Intel's EUs is feasible, with the irony being that we know it is from nVidia itself.
In theory, you could have textures using DLSS that look better than native, if you're using a higher quality texture than the game to train the AI - theoretically, the tech could return details that don't exist in the original texture (imagine a book cover - the original game texture might not have that much detail on it, but if you train the system with a highly detailed cover, you could get a better texture in your game). However, I'd argue this isn't likely to happen much in reality.

Regarding not using tensors - yes, in the same way you can ray-trace without the RT cores (I recall a video showing the Star Wars ray tracing demo on the 1080 - it was VERY slow). The compute efficiency won't be great though - so you get a benefit by using the specialist cores.

The point in my post was that in theory, ATI could run a genuine competitor to DLSS by using the compute cores (in the absence of specialist hardware). However, based on what I've seen, FSR is a simple upscaler, rather than using any actual DL/ML to boost quality.

What I expect to happen longer term, as RTX technology gets cheaper and faster, is that DLSS reverts to its original intention - where the AI upscales from native resolution and the result is then scaled back down to native. In other words, it becomes an option for AA.
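To picture that last point: supersample-then-downscale anti-aliasing in its simplest form is just rendering above the output resolution and averaging blocks back down. A toy 2x2 box-filter sketch (my illustration of the general pipeline shape, not DLSS's actual method):

import numpy as np

def ssaa_2x_box(frame_2x):
    # Toy supersampling AA: average every 2x2 block of a frame rendered at
    # twice the output resolution in each axis. "AI upscale above native,
    # then shrink back to native" is the same pipeline shape, just with a
    # smarter step in front of the downsample.
    h, w, c = frame_2x.shape
    return frame_2x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

hi_res = np.random.rand(1440, 2560, 3)   # rendered at 2x a 720p output
print(ssaa_2x_box(hi_res).shape)         # (720, 1280, 3)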
I do agree with most of that, certainly in theory - it's how the practical applications work that is the problem*. With DLSS having moved to a generic training model, it's hard to envisage it being able to add 'real' detail by training on mega-resolution textures or whatever, because it's a generic model rather than a per-title one. That suggests they're probably now using an AI-enhanced version of something equivalent to the old checkerboard upscaling that consoles used, rather than what was originally planned. Which is fine, and many on the consoles couldn't easily tell the difference even with their basic upscaler model.

We simply don't know enough about FSR yet to know how it works practically, since the presentation was very much results rather than methods, but... AMD already has a simple upscaler and has had for some time; one of the criticisms of DLSS 1.0 was that AMD's software upscaler (combined w/ RIS, Radeon Image Sharpening) was actually better than it in some cases, despite not using specialist hardware or even having been extensively worked on.
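(For the curious: the "image sharpening" half of that combo is essentially an unsharp-mask style pass over the upscaled frame - subtract a blurred copy to isolate edges, then add some of that detail back. A toy version below; AMD's actual RIS/CAS kernel is contrast-adaptive and smarter than this.)

import numpy as np

def unsharp_mask(img, amount=0.5):
    # Toy post-upscale sharpener: 3x3 box blur via shifted copies (edges
    # clamped), then push the image away from its blurred version.
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w, _ = img.shape
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

frame = np.random.rand(720, 1280, 3)   # e.g. an already-upscaled frame
print(unsharp_mask(frame).shape)       # (720, 1280, 3)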

We do know that there is at least one FSR patent extant for an AI based solution which is more or less directly equivalent to the non per title machine learning model that DLSS currently utilises.

*Ultimately, the problem seems to be that nVidia decided to add tensors to consumer cards without a really pressing reason for doing so. The uplift in frame rates from RT cores (or RAs on AMD) is extremely significant, on the order of 5 to 10 fold over generic hardware measured practically, while even nVidia's PR only claims a 2x improvement for tensors over generic.
Phasmid: There are plenty of techs that look kind of odd but people will defend to their deaths. The one apart from DLSS that I think of most recently was SSAO- I mean, I don't see black outlines around everything in reality, maybe others do, but that seems to be the sum total of what SSAO adds to games most of the time and at a decent overhead too. You still get people defending it though (and earlier; thermonuclear bloom, motion blur, lens flare etc, and some of those still get defended)

I get why people want to think that DLSS is uniquely great though and better than native: the RTX series are all massively overpriced compared to their predecessors and people want to think that the 3090 they've dropped... 2000 USD? on is value for money. The raytracing hardware is certainly required to get a decent framerate when raytracing so you can justify that (at least in theory AMD's hardware doesn't have added cost from added raytracing hardware though, albeit it's hard to tell with the GPU bubble, but RTX2000 was overpriced before that anyway), but the other addition that justified raising the cost - tensors - is way more of a hard sell.

The thing is, we've had a DLSS version that doesn't use the tensors, but uses bog-standard CUDA (DLSS 1.9). The current implementation doesn't have per-game learning any more, which was another big selling point, and that means it's hard to argue that any implementation by AMD has to be (far) worse due to not having tensors or per-game training. There isn't exactly a wealth of information and detail on how FSR works, but working similarly to DLSS on standard Radeon CUs, nVidia CUDA or Intel's EUs is feasible, with the irony being that we know it is from nVidia itself.

(Slightly off the direct topic, but there's no reason why AMD couldn't add tensors themselves to future cards like they added ray accelerators to RDNA2, if they wanted to. They're not a specifically nVidia tech and are used in Google's ASICs and tensorflow)
Ho there wanderer, SSAO is my golden cow, especially in Warhammer 2. While I can approach the same shading results by decreasing the clarity of the screen - you have to forgive me for not remembering the right term... or was it gamma? << checks in-game, finds the right answer >> I mean brightness, of course - the subtle shading, not to mention the added depth, both as an illusion and in terms of colour richness, is so different anyway... I will even configure the settings so SSAO can be enabled, like, all the time.

Another consideration for DLSS might be that it actually manages to upscale a lower resolution into a higher one without all the troubles you will find when you use settings such as fit-to-screen and/or screen sharpening.
As always, wait and see reviews based on thorough tests before buying into this. AMD's marketing is known to twist the truth, as are Nvidia's and Intel's.