you make one big mistake, you always say dlss 3 vs dlss 2, BUT you can use dlss 3 without the generated frames, so maybe you should say generated vs normal renderer or something. still a great video
True. Also 120fps/120Hz is plenty fast for 99% of gamers. So if you're already at that level, it actually doesn't make sense to use Frame-Insertion. Most would argue the experience is worse due to the lag and artifacts. Just like how Max-Ultra Graphics are mostly pointless, since you can have a game looking great with High Settings, and choosing certain elements carefully. Or having beyond-4K Graphics are overkill, when you have 1900p resolution intelligently upscaled. Or even using Raypath Tracing at this point in modern technology. Not even the best RTX-4090 is capable of such feats (2200p/145fps/Ultra Settings/RTX-On), but you will get something visually competitive with significantly cheaper hardware like RX-6800 (1900p/FSR-2/120fps/High Settings/No RT). I suspect that DLSS v3 is in "beta testing". Just like how DLSS v1 finally had its quality and performance issues fixed with DLSS v2. We should see similar thing with DLSS v4 launching in 2-years. The question is, does AMD-ATi have something in the works as well?
Don't listen to this dude people like him compromise themselves for 13k likes of slack people who cant get a job or work harder for their dreams lol For DLSS3.0 the artifact is truly unnoticeable and Also with RTX4090 on every game you get at least 110fps 4k max with DLSS3.0 quality mode so HUD problem is unconceivable. lol
Wow! Bravo for this video. It is amazing how you broke this down to a very understandable level. It is also very hard to put a video like this out that is fair pointing out the good and the bad and allowing the audience to make a determination how much any of the information may or may not matter to them. I am beyond impressed. Thank you!
This has to be some of the best investigative hardware journalism I have ever seen done. I can't believe you got this depth of analysis in such a short time. You deserve a nap, Tim.
This is no different from all the DLSS 1 reviews way back when 2XXX series came out, making it like useless tech that makes games look worse, and than DLSS 1.1 came out, 1.2...2.2 and so on and now its perfectly find and technology most people wont turn off [i had 3090 and always used DLSS] Its too early to give deep analysis for DLSS3, we need to wait for them to fix all the bugs, make it out of beta and see that games implemented it right Now its just bunch of negativity
@@NoBodysGamer This is nothing like original DLSS. watch the video lmao. DLSS original was a the right product not doing what dlss3 is doing. Its just marketing crap. DLSS 1 just needed time. This is already doing what its suppose too do and just for small level of users.
Nvidia did AMD a favor by releasing DLSS 3. FSR may have never caught dlss 2.x but dlss 3 is definitely within reach. Now FSR 2.1 and DLSS 3.0 has image artifacts. One with greatly improved fps, the other with fps improved to a lesser degree but improved latency.
Maybe this is another reason for releasing the 4090 first ? If its worse when your base fps is lower, the other gpus probably benefit less and thus the initial impression of dlss 3 is better when used with the 4090.
@@PrefoX nope. Way to be confident in you're misinformation tho. It's logical. The more/faster "real" frames you feed an algorithm the "better" it can predict the motion of the next AI generated frame. If you took an algorithm to predict frames between a GPU operating at 1hz that's A LOT of motion to fill in via prediction.
@@FaridRudiansyah that's not true. The 30 series was the only gen to have a "90 or titan" card on release. Previously the 80 was upper gaming tier and released usually with the 70 then a "Titan" was released 6mos or a year later.
The UI artifacting and increased input latency are a huge dealbreaker for me. I'll stick to DLSS 2.x and FSR 2.0 (luckily they are seperate options in the menu)
Yeah i don't care about frame count, i care about latency. Its just that normally more frames = lower latency. DLSS 3 frames do nothing to lower latency, in fact it makes it bigger.
@@PadaV4 Yeah, I don't even mind that much how 30 fps looks ingame, it's just the moment I start playing it it's like a big fat turd through a funnel with how sluggish it is is what I can't stand about it. People don't mind low fps video playback because it looks good enough and latency and responsiveness is not an issue unlike with videogames.
I think frame generation would be much better used to maintain a minimum framerate, as artifacts in those scenarios may be preferable to stuttering. As a way to improve average framerate it seems worse than lowering image quality in its current state.
@@thezeke1984 Wouldn't work. Most open world games these days have highly variable fps based on scenario. Crowded city, tons of NPC? drop below 50fps. Out in the open desert? 70+ fps. If you want stable FPS, the only way to achieve that is through dynamic scaling techniques, such as DLSS, and other dynamic techniques.
@@thezeke1984 Again, you can't aim for stable FPS no matter what settings you choose. Decrease shadow settings to get 60fps in city = you'll have 80+ fps out in world no matter what settings you choose. Pick almost any game, that's the case.
@@hagaiak I think that with time and a MUCH more strategic use of frame gen could offer complete frame stability. Imagine. Youre getting your desired fps out in the wild, youre getting 90- 100 fps. Now you trek back to the city where ordinarily you'd drop to 70-80 fps. So instead of DLSS 3 rendering every other frame willy nilly and trying to achieve the highest possible fps, it aims to maintain the fps of environments where traditional raster is only needed. So instead of generating every other frame, it could generate 1 every 10 frames; or, in a crazy hectic scene it could temporarily boost to its highest frame gen to maintain frames. I think it best case use would be the game using internal benchmarks to see where youre getting the best FPS and progressively turning frame gen on and off in relation to your 10% highs. It could be a seamless experience aimed to decrease the amount of perceivable artifacting whilst maintaining average frame rate and as low as possible latency. So imagine, you're in a city and youre in a gunfight. All of sudden, a massive enemy crashes through the wall leaving a ton of debris and particle effects flying through the air. Ordinarily, your 90 fps experience just dropped to 50 fps in traditional raster. But aha, frame gen kicked on for that split second encounter smoothing out the entire encounter to that buttery 90fps, then going back to traditional raster. The perfect support system.
@@bryzeer5075 defending nvidia there, I see? Go on mate, have fun. At best it offers little over 1.5x the performance. At 2x the price. DLSS nonsense is irrelevant. You're a clown 🤡🤡🤡
@@bryzeer5075 Nobody cares about proprietary features like this or PhysX or GSync or whatever especially since the major consoles are on the AMD platform. Only the Ngreedia fangheys are spooging over this.
its really hard because video encoding would probably hide or exacerbate most of the AI artifacts, it would not be representative of the real experience
It was really bugging me that other content creators weren't really mentioning how DLSS 3.0 works, as it's really quite misleading how the extra FPS comes to be. Thanks for the detailed video! Really interesting stuff.
DLSS 3.0 which boosts framerate, but is only viable when you're already generating alot of frames but the 4090 doesn't support higher refresh rates like 4k240 because Nvidia is the only one who skimped out on dp 2.0 (even intel has it), and in lower refresh rates the 4090 is bottleknecked and already maxes out most games, while for esport player it increases latency. Tell me who's this feature for exactly?? P.S also breaks vsync currently and causes tearing when frame rates exceed refresh rate
@Doranvel except 4090 doesn't support over 4k120 it doesn't even reach 144hz. And it's such a pointless restriction as well for a buyer of a 1600$ card. Like if intel can put display port 2.0 on its a380 msrp 140$ and probably it's a310 which might be sub 100, it's a really bad look when nvidia skimps on it
I'd love to see a video of extracted dlss3 frames with no traditional frames. If every other frame in a dlss3 video is AI generated, perhaps it's possible to resample a screen capture video to only show the AI frames by skipping every other frame.
Nvidia will maybe send you a letter say how dare that you are not supporting their narrative, how could you do that, as an independent tester. :-) what a great bit of great information that I had no idea of from the advertising, this is why I love your videos, and always use it to determine my purchases.
So, in summary: DLSS 3.0 is perfect for people who are playing non-competitive titles and already achieving 120FPS on 144+ Hz adaptive sync monitors as long as the game is not input sensitive and does not have much moving UI text and the DLSS 3.0 is not pushing their frame rate above their monitor frequency. Glad to hear that this is not a feature for a niche market.
Don't forget not only do you not want to push over the frame rate but the game needs to have an even frame-rate by itself. If looking at an empty part of the city doubles your frame rate you'll not be able to be over 120 during the majority of the game and not over 240hz with DLSS3 when in the lighter parts of the game.
It's kinda like Nvidia asked: "How do we take this technology we developed that really helps people with low end hardware only benefit you if you buy high end hardware..."
I think its the opposite. It still looks bad at times, so I dont wanna use it and definetly would not pay $2k to use it lol, but its in games like msfs, and other slow paced low fps games where it is needed most, and there the frames being "fake" does not matter, the smoother image is whats wanted
Yeah it's really odd isn't it? You only benefit from it if you already have a good frame rate, but turning it on then penelizes you with screen tearing as there really aren't any 240hz 4K monitors. It also doesn't have DP 2.0 so getting monitors like that still seems unlikely unless they use HDMI.
@@MaTaMaDia Everything is inherently biased. It's more about how much bias there is and whether or not you want or can sift through the bias to find the data and context around the data.
This actually makes me more interested in RDNA 3, because the 4090 has solid generational uplift, so if Nvidia thinks they need to push this, the competition might be very close otherwise. Wait and see I guess.
This might be true, but I think it is more likely that NVidia is trying to make the second hand market GPUs obsolete by pushing the next gen as far as they can. So they can continue selling new GPUs.
Great video explaining how the extra DLSS3 fps is not going to improve imput latency. I was surprised about how much of a negative effect it has on latency. 👍
I'm seeing this 'feature' trick a lot of people who don't understand much beyond FPS numbers. Now with the launch of the 4060ti being no better than the 3060ti without frame generation it's become even more obvious how much Nvidia is leaning on this feature. My favorite thing to say is 'why not just enable your TV's motion smoothing instead' since they're functionally not that different.
My take from this is that DLSS 3 is similar to a high overdrive(or whatever the manufacturer calls it) mode on a monitor; It looks nice and fancy in the numbers, but the visual experience suffers to reach a better number for marketing. Yes I know the analogy doesn't hold weight once you get to around 120fps native, but at that point is there really any need to enable DLSS3 over DLSS2 or FSR2?
Agreed. Personally I play on a 75 inch TV so DLSS even when working I generally don’t enjoy visually since I notice it doesn’t look as good as native. If I can already get 1440p - 4K at 120FPS I see no need for it really. If I ever get a 360hz monitor one day then sure, but I’m not a competitive player anyways so I doubt I’d ever need it. Still, I love the idea of the technology and hope to continue see it evolve.
About the use cases: a 2070 owner already bypassed the DLSS 3 restriction getting double the framerate but at a unstable framepacing in Cyberpunk w/ RT. So, there's benefits for a high end 30 series cards running a game already at a high fps with ultra high end monitors.
Been watching several recent videos on this channel recently... I always discounted it because of the name, not really being interested in unboxings. How wrong I was on that count, these videos are excellent :) As for DLSS3, it's basically as I feared once it was explained that the tech needed to keep a frame in buffer for it to work. Basically you don't want to use it for the same reason that you don't want to have frame smoothing techniques turned on on a tv if you game on it. In the scenario described where you could use it (singleplayer, high framerate, not sensitive to inputlag...), that's the scenario where I personally wouldn't really value the result anyway. Maybe some people can really tell 200 fps (and Hz) over 100 fps in a lower pace singleplayer game, but I certainly can't.
It started as an unboxing channel, but changed its direction pretty early on (but not early enough to start fresh with a new name, I guess). Tim and Steve are providing some of the best hardware analysis in this space, imho - very competent, very thorough, and quite well explained and presented. I remember when DLSS was launched with the 20 series - almost every other reviewer just repeated nVidia's claims about the underlying AI technology. Tim seemed to be the only reviewer at that time who actually understood AI and neural networks on a technical level, and he was therefore able to quickly identify and demonstrate problems that most other reviewers missed (despite being glaringly obvious to anyone with experience in neural network design).
Frame generation should be implemented by game engines because they are aware of the user's input, in-game events etc. and can strategically generate frames when conditions are ideal. Nvidia should have exposed accelerated frame-blending APIs for developers. Slapping it on at the end of the rendering pipeline just creates nice performance graphs for marketing purposes. It's a bit like fitting a muffler tip at the end of a small engine so it makes a louder noise.
If Nvidia would give game dev they API, devs should ignore it, and use something open source, because this is not some kind of magic, because many TVs have this. And devs should think twice before getting any black box from Nvidia, or they ends as EVGA, they get it for free know, because there is AMD, but if not... they will have to paid for it, and when you become dependent on someone technology... oh Adobe :D
Close sourced narrow usecase even if it was better wouldn't make sense with DLSS 2.0 being a thing unless NVIDIA pays these developers. That's a lot of work for developers to put in for a very small group who have the latest gen GPUs but want frame interpolation that has worse latency.
Excellent video, Thanks guys!! You asked for our input? DLSS 3.0 is mostly a gimmick IMO and the conspiracy theorist in me believes it was conjured up solely as a vehicle of misinformation - as a graphic artist I would even go so far as to call their graphs fraudulent! The 4090 and 4080/16 are okay-ish cards as a next generation over the 3000 series but the prices are inspired by greed and blatant disregard for their customers. Additionally, the idea that we have gone from 240W max in the 2080Ti to 480W or so in the 4090 while just barely (and in some cases not) doubling performance and frame rates speaks volumes to me. Volumes of shame that is... If the 4090 were _between_ $800 and $900 with the _real_ 4080 being $100 to $150 less I would be singing NVidia's praises. As it is however, I think I will be looking much harder at whatever AMD/ATI brings to the table!
I was thinking the same. Based on the information provided, it's basically a fancier version of LG's TV true motion, clear motion or whatever other TVs use to display artificial high frame rate when watching movies. It even does the same thing with adding high input lag and even some artifacts. Which is why those TVs have "game mode", which basically turns off that tech and other post processing effects to achieve lower input latency when playing a game.
So from programmers perspective input lag results from bad code base. Every game that has input lag from dlss 3 or other technologies have a game loop that waits for completing rendering to receive new input. Some games are developed in a more modern way to receive input while rendering. Modern gameengines should not suffer from input lags (modern game != modern engine).
Great break down as always Tim. I think the other issue with DLSS 3.0 working better for high frame rates is simply the amount of interpolation that has to occur. The higher the native frame rate the less information change that's occurred in between frames. So not only are these interpolated frames on the screen for shorter intervals, the artifacting in them should also be reduced as the model has to infer less information to fill in the gaps. It's a substantial challenge to get this kind of technology to work well for lower frame rates, where you'd want it the most.
I applaud to the professionalism of HU! In my opinion, you guys are, by far, the most level-headed and objective hardware reviewers on YT. You consistently offer detailed, yet easy to understand explanations and you are refusing to fall for the hype. This is why HU is my absolute first choice and most trusted source for hardware reviews .👍👍🍻
@@pokealong GN is also great, as I said. But DF....While they are excellent in providing technical details, they're on Nvidia's payroll, usually shilling and overselling Nvidia's tech.
@@RandomlyDrumming They literally have a video very similar to this one ripping DLSS 3 to shreds. They list point by point what's wrong with it with countless examples of where it's flawed and how vsync hurts it. They're no more in Nvidia's payroll than Gamer's Nexus is for having an entire video praising the cooling of the 4090 WITH NVIDIA. Maybe you all missed every GPU video they've done in the last 3-4 years where they praise AMDs value.
It's such a weird technology as it's made to make a game run at higher frame rates than normal, but it doesn't work well when the GPU doesn't naturally output that many frames. It really seems its main goal is dealing with CPU bottlenecks but I also heard CPU bottlenecks also limit the bonus of it. It's really confusing. It basically like it was only designed with the idea of using a 4090 with the highest end CPUs. It's so odd. I was honestly expecting it to be a lot better for lower end cards. It's also kinda unfortunate as being that it brings games over 120fps at 4K, it means that it will almost always hit latency by being over the monitor's max refresh rate. It's incredibly rare to find higher than 144hz 4K monitors and the new GPUs still only have displayport 1.4. That means going higher is still unlikely.
That's my impression as well. Just like RT back on RTX 2xxx series was pretty much useless on the low end cards, this will also be another something people at the lower end (4060 and bellow) will pay premium without being able to use.
@@masteroak9724 Responding to all 3 comments. To be fair on 3050 and 3060 RT (except minimal RT settings) is still hard to use (DLSS 2 is fine thankfully). 40 series might be the first generation where it will be useful on lower end with decent RT settings, but not with DLSS 3. Also CPU bottlenecks that DLSS 3 is supposed to override are also causes of additional increased latency often, so it might hurt even more when CPU bound (though you'd have to be very CPU bound for that). As for Flight Sim and Civ - it would depend on how annoying the flickering of UI is. Since honestly in Civ you don't need that high FPS to play the game.
I'm not sure why you think it is weird, it's plainly obvious that the results of interpolation will be better the lower the interpolation window so it's naturally geared to boosting high frame rates. Most games already feel awful to play at 60 fps, so holding an additional frame in order to perform the interpolation is always going to feel bad. Probably a significant amount of non esports orientated titles can't be run over 150-200 fps whether it be via cpu bottlenecks or other engine limitations, so being able to double that to 300-400 fps for the next generation of high refresh rate monitors is a welcome addition imo, assuming the major kinks like ui elements and scene transitions causing problems get resolved.
Not quite. Motion interpolation purely processes the image. If someone moves a hand when it blends the frames together it will look more like the hand is fading out of position 1 and fading into the other position. With DLSS 3, the GPU KNOWS it's the hand that's moving, so it calculates the midpoint between the two positions and puts the hand there. Then use the backgrounds of each image to draw the background behind it. Frame interpolation has often been called the "soap opera effect" because it makes everyone's skin look super soft. Especially if there's big contrasts to their skin like a guy with stubble for example.
It would certainly be nice to have a setting, how many real frames are rendered per "fake" frame, so that the visual & latency compromise can be adjusted to personal liking
In terms of latency, the way it works now is the best case scenario. If you were able to configure it to generate a fake frame for every 3rd frame instead of every other frame then you would need to start triple buffering frames which would actually increase the latency between when a frame is generated and when it is displayed even further. Frame interpolation is a marketing gimmick that has no place in real high end hardware.
Let me *TRY* imagine this with my undervolted brain. If native fps is 120 fps and after turning on dlss 3 fps goes to 200fps then there are 90 *fake* frames, which means 1 fake frame between 1.33 real frame. I'm just imagining this so could be wrong.
I was noticing this on some of the first demonstrations. Reviewers have been saying it’s only a little when to me it looked like a lot was going off in the distance. We’re just trying to hit frame numbers while ignoring quality with the image it’s like coming full circle with low frames with good quality imaging that gets chopped up.
well look at dlss 1 and what we got now, 2.4.9 is mostly better than native. give frame generation a bit time. and you have to get it into the market at some point for future generations, like with RT. RT is going the last keystone for real gfx
@@PrefoX no it’s not better than native. The game’s taa implementation has to be really poor for dlss to look better. N DF is going to hide these things as they are nvidia shills.
@@PrefoX at 1080p/1440p,its better than native but at 4k its slightly worse and at 8k its garbage compared to native. Dlss 3.0 looks like dlss 2.0 perf mode at quality its bad visually at all those resolutions
@@anuzahyder7185 There have absolutely been games where its better than native as its able to infer finer details that were lost in native, such as making small lines complete, reducing aliasing. Its a better solution than TAA in those cases.
Here's the way I look at DLSS 3. It's a really cool piece of technology and a great proof of concept. But the entire point of getting higher framerates is to get lower latency and make the game feel more responsive. Turning on DLSS 3's frame generation either doesn't improve the latency at all or it makes it significantly worse. Meaning all you're getting with it on is higher framerate numbers to make your (and Nvidia's) benchmarks look better, while getting artifacts you weren't getting with DLSS 2 / 3 without frame generation. If you're not getting less input latency with the higher framerates from DLSS 3, then what's the point? It's a gimmick.
That's not the entire point though, higher frame rates "feel" more true to life, they are more immersive even in slower paced games and make the image less blurry. I had this very problem with Forza Horizon where at 60fps it looked more photo realistic but felt more fake due to the frame rate, but at 120fps it felt more realistic but looked worse due to having to reduce settings. I've spent hours in games like that just driving around enjoying the scenery, unlocking the roads, where the input latency being worse but the experience more pleasant is a night/day difference to me. Sure, a 4090 is already going to run that game at 4K 120fps fine, but this is about future games where they are pushing the graphics even harder, which is why NVIDIA used the example of Cyberpunk fully path traced. There are plenty of games where the frame rate looking better is more benefit than latency, just because you don't care about them doesn't make it a gimmick.
@@alexatkin Like the video said, using frame gen at low frame rates is going to increase the latency even more. If Cyberpunk path traced runs at 30-40fps on a 4080 12gb then turning on frame gen would increase the latency to unplayable territory, then there is also the question about how frame gen performs on slower GPUs, what if it takes even longer to generate frames? You really don't want to play a 1st person perspective game at 100ms+. It is a bit of a gimmick. Maybe DLSS 4 will be better.
Oh god, high latency is a big no no- I don't care how smooth it looks, if it feels sluggish I'll never use it -it's the Pc version of fondant for cakes (looks good, tastes bad). Great vid as always.
9:33 "We're inside the cockpit of an aircraft here, we'll also find details: Smaller knobs, bigger knobs, and even some large shafts to look at." Nice one, Tim xD
Here's what I think, motion on screen is a trick of fast frames moving, and while adding a generated frame between tricks your brain into interpreting a 'smoother" motion, I don't think that's it's a valid FPS benchmark to include because it isn't indicative of rasterization performance. However, It's a neat technology, but it probably shouldn't be used under an certain FPS threshold because it makes artifacts more visible. It reminds me of those TVs that would smooth the 24/29/30 fps video to 60fps, but at 4k it was easy to spot artifacts and blurs and to me, made it unwatchable. Where this tech excels is at frame rates past that the human brain already comprehends as motion but it doesn't add any perceivable difference to the gamer, only the benchmark.
Fair point, but what you have to realise is that the artefacts are less visible than TV interpolation because a lot of the places that falls down the motion vectors are able to fix it. Many years ago there I used to game with TV interpolation on because I hit on a TV that just seemed to do an insanely good job and the TV was so sluggish I didn't notice the lag difference. Granted I wouldn't do that now as I found I enjoyed a lot more games now I have a TV with decent latency and the difference is HUGE with interpolation on, but I will absolutely use DLSS 3.0 interpolation if it means I can have better graphics on modern titles with a native frame rate between 40-60fps interpolated to 80-120.
Wanted to comment on the UI element thing... While there can be some slight variances on the exact method games use to do this, each UI element is 3D geometry... Generally two triangles with a texture on it that is typically dynamically generated each frame. With text, the game dev can either bake the text directly into the texture, or make each letter a "3d object" and layer it on top of the background UI elements. I've used both approaches back in my indie game dev days. Because of this, I'd imagine it'd be extremely difficult for the AI to differentiate between UI elements and the rest of the frame, and handle them differently to eachother
So what is difference between this box which is same place of frame, and car which is same place of frame (as in CP2077)? Because it looks like they train this algorithm on texture data and not on fain detal data, and this is why it have so many artefacts on this elements. Something as using JPEG for UI and not using PNG. JPEG is better quality/size for photos, but is bad for gui type elements.
@@termitreter6545 so they do something really wrong, because when on screen eg in Flight Simulator you have this bubble with distance, if don't change place and it corrupt it most, and not side of it, but text in it. BTW AI is part of ML and for algorithm to name as ML you need only one number to change to fit model. Even we looking at NN it is enough to have perceptron, one neuron to name it NN (maybe 2 neurons but...), and perception is use by eg AMD in CPU in branch predict so is so fast, but it adapt so it is AI lol BTW2 most would think that if someone is talking about AI we are talking about DL = deep learning, which are deep neural networks (more then 1 hidden layer)
@@m_sedziwoj Idk, maybe its a bug, who knows. As for AI. Thing with neural networks is, they are made at nvidia with an "AI", but at home you just get the output of the NN at your PC. Youre not actually making one, your GPU is just applying the result. Tbf its a pointless detail, but it just bugs me sometimes.
@@termitreter6545 I making NN in home, but not this one used by Nvidia ;) So training is hard, using is relatively easy. Maybe so many people make this mistake, because everyone is comparing NN to human brain, but they works completely in different way. It is more as air plane is inspired by birds, but we know how much they are working similarly ;)
DLSS3 looks very cool. My main concerns are UI, a lack of framerate caps/vsync, and the fact that there’s simply not enough bandwidth to push anything higher than 4K 120hz through HDMI 2.1. In some cases it feels like it’s too fast or too good when it comes to the 4090. It’ll be interesting to see how DLSS3 evolves and scales on lower end hardware.
Hey Hardware Unboxed, fantastic video as always! I just wanted to add an idea into the mix. In the beginning of the video you talked about how showing high FPS scenarios with DLSS 3 is difficult with TH-cam's 60 FPS limitations. I was wondering if you could potentially provide a download of the footage or even just this whole video as a demonstration of what this new technology looks like. That would be really cool to see!
@@Quizack That is true. Maybe this is too much to ask, but as much as I personally would be open to subscribing as I would really like to see DLSS 3 with my own eyes before I purchase an NVIDIA card, I think the PC Building community would really appreciate some footage (maybe not as much as their floatplane members) as a sort of "see for yourself" approach to a buyer's guide. Even with just a couple scenes with some varying framerates would go a long way in helping regular consumers understand what they are actually buying.
Imagine all these Tests are without Frame Generation or aka DLSS3.0 with the power of DLSS3.0 quality mode you could get at least 120 fps on every game 4k max settings and picture are crispier and artifacts are not visible to human eyes ,damn RTX4090 is a beast ,True you cant use Frame Generation on competitive games in regard to latency but you don't even need that ,even without FG you could get at least 360fps over competitive games with Nvidia reflex you get lowest latency. lol
The biggest deal breaker for me isn't the image quality problem of DLSS 3, nor is the latency problem, but the fact if you force vsync in DLSS3 latency become completely unplayable, no vsync and you get tearing, only in one scenario where your DLSS 3 result is within VRR range of your monitor then you might get a better experience.
yeah, that's what dldsr is for. gotta find that sweet spot though. but agreed, latency is a non-issue. Most people don't even enable reflex. and the artifacts he was pointing out I couldn't see nearly any of them. gonna try it out myself later today
Amazing In-Depth analysis as always, results are as most people expected tbh, hopefully frame generation is gonna be a much more useful technology in the future for lower end cards.
Frame generation would be useless for first person shooters or quick twitch games as the latency can be felt as mentioned by optimum tech. If you only play slow games then it's okay.
I thought frame generation sounded like a gimmick, until I tried it. Using DLSS 2 is awesome. But remember, DLSS put more load on a cpu, and takes some load off the gpu. Poorly optimized games that are released too early, causes gamers to need more computer power to compensate for poor programming. After using frame geneartion for a few months, Frame generation is actually great. Games run more smoothly.
it's exactly what i expected, the technology is the same as on TVs for years same story artefacts and latency increased. But the numbers looks good and that's what marketing folks like - to fool people. Great work on you testing it so thoroughly!
Although ngl, I usually keep frame smoothing on on my tv all the time. Yes, I know it’s not doing me favors in terms of latency, but for me, it by far outweighs having to look at a somewhat 30fps game like bloodborne, especially the larger the tv gets. The jumps in gaps between frames at that tv size are just too bad. But that’s with a controller, with a mouse I never have these things on, and just lower settings to get a higher fps.
@@granatengeorg good for you if you can stand the artefacts :) i have tried it many times with consoles with different tvs and never liked it mostly hated it + latency sometimes i prefer to settle with "cinematic experience" if the title is worth it
@@RawbLV nothing wrong to bring it as bonus for those that does not have it in their tvs, whats wrong is bragging about it while it is so crippled and making whole release about it. Thankfully the raw performance is also good so rtx4090 is worth buying. But that shows direction, hey in future instead of making our products better we will slap on features that bump up stats but are horrible and nobody will use it instead of focusing on actual product development if you know what i mean. So calling deficencies in DLSS 3 Frame gen is for our own good the louder we do it the better. IMHO its not playable that way The release was not about gen over gen improvement but about improvements made in DLSS 3 which are fake and miserable
Also understand this tech is crappy on TVs where input lag does not matter you have 25/30 fps per second you can delay video and audio how you want to make sure those generated frames are great and aligned and yet still you will see artefacts. So no wonder it would be the same or worse when input lag matters and you have a lot less time to do the same thing. To be fair i expected even worse results so they did good work on this feature though this should not be selling point.
All I know is when I use DLSS in war thunder and escape from tarkov I get ghosting on things that are moving on the screen such as a gun barrel or gun. It makes the games look absolutely hideous to play so I always have it turned off. I game at 1440p.
Why would anyone want DLSS fake frames? they are obvious at low fps and not needed at high fps. I'm glad you made this video , very informative, but DLSS fake frames are useless to me and I suspect most others. Native frame rates is where its at.
If they were just generated off of an algorithm that uses only the previous frame (rather than generating two frames then generating the fake frame to fit between them) they might have something. There might be benefit once they introduce v-sync support, which could potentially only generate frames whenever there's frame drops.
Rasterized performance is always my preference when it comes to gaming. I’m not a big fan of the sometimes gimmicky tech, but I understand it’s a necessary thing to allow for innovation and not just linear progression because as a creator, chasing that goal would become stall. Thanks for making this video and taking such a deep dive into the tech. Thorough, concise, and informative as always.
@@fpsshani not really 4k 8k requires lot of vram than gpu computational power no matter how much cores you increase if it doesn't have enough or good vram the card will always underperformed similarly 4090 cant hit 60 fps in cyberpunk 4k rt on because of lack of vram and 4080 12 gb will be worse .
With DLSS 3 I can just "sense" that when objects are passing in front of each other that things just aren't right at the edges and I don't like it. It just feels off to me all the time. I don't like it and I don't like how it's being pushed. The added latency kills it for me too.
to me, it feels like a better version of the smoothing feature tvs have for years now. where it introduces a generated frame in between and sometimes produces artifacts. I used to use that on console when I played turn-based rpg to have a 60 fps feel because it does add latency
@@hitman480channel8 I use it sometimes with RPGs especially the ones with third person cameras where you effectively spin the whole world around every time you're trying to turn
For a moment I thought did I write this comment :D But yea I agree with you. Its obvious marketing gimmick to force high framerates. I dont see whats the point of high framerates if that is not going to decrease latency. Most important point of high framerates is more responsive control.
@@jcdenton868 I disagree on this last point. Displays require more frames to increase clarity when things move. If you remember there used to be a big push for gray to gray switching speed. Now this is very fast in gaming displays and oleds. The problem is that instant switching is frames still looks choppy, unless you have more frames. So fps starts to become more important than resolution, unless your game does not usually involve motion somehow.
Few months and they have alternative, because it is not magic, bo this is how you brainwash people to think that new GPU is 2-4x faster. They don't understand technical aspect, so...
RTX remix isn't really AI pioneering. AI pioneering seems to mostly have come out of academia and FAANG. Nvidia built fast matrix multipliers which is great. But Intel is carrying the torch now with open cross platform tools.
The disadvantages of frame generation are way too big for low to mid fps which leaves the question who is supposed to use it. The amount of people who target 120+ 4k gaming is pretty small.
At present this is hardware locked to NVIDIAs newest cards, and for those there is no point to be running at anything below 4k anymore. Of course as newer and newer games come out something is eventually going to push the limits, but that is not here yet.
dlss 3 is also useful for 1080p and 1440p higj refresh rate for lower tier cards as 4080 4070 and 4060. One of the bigger advantages of dlss 3 is that it improves smothness even when cpu is the bottleneck, so in lower resolutions it will be useful.
I wonder if DLSS 3.0 could be implemented with a frame cap so it's only interpolates inbetween late frames so you only lose image quality when youd experience a frame drop. So you can effectively get much better minimum frame times at the expense of latency. Especially if you have a cpu bottleneck
I'm certain Nvidia only truly cares about a fat fps figure it can slap on their newest flagship $1.6K GPU. Marketing "our newest generation can generate some frames here and there" to keep a solid fps isn't as sexy as saying our new alien tech creates 300% framerate magically out of thin air. So, in essence you won't see Nvidia helping out the gamers struggling with low framerates, if that was the case, they wouldn't have forced RT down the throats of gamers when clearly it was designed to create an artificial edge over their competition while allowing some additional compute headroom in serverspace applications. In reality something like UE's Lumen is much more beneficial to gamers.
@@ufukpolat3480 That is an incredibly weird take. Nvidia definitely cares about latency, there's nothing suggesting they don't. And they're not "forcing RT down the throats of gamers", they introduced RT and are pushing for devs to implement this entirely optional feature. In the few games where RT is implemented right, it looks phenomenal.
It's also important to note that DLSS 3.0 artifacts will be much more visible with the emergence of OLED displays in the next 1-2 years. With an almost instant pixel response time you lack the natural blurring effect of an LCD display that would additionally help to hide those artifacts. I'd be curious if you could just give us a subjective view of how DLSS 3.0 compares on OLED vs LCD screens in your opinion.
LG is releasing there in a few months 27-32 inch 1440p 240hz oled monitors. they are slated to start production in late octorber early nov and will be shown off at ces 2023 so almost 1st week of 2023, atleast according to the leaks
Thanks for the video. Basically confirmed my suspicions on DLSS 3.0. It’s definitely something I won’t use until they improve it with the inevitable DLSS 4.0
What makes me so sad about this implementation of framerate amplification is that Nvidia did not apply any of the things they already know full well to be useful from virtual reality research. ASW 2.0 used in Oculus and steamvr can interpolate double the frames, making 45 FPS feel like 90 FPS. It doesn't have near the same visual drawbacks because the VR headset is fed data directly from the game engine, and latency is considered and reduced at every point in the pipeline. Nvidia using an optical flow generator basically means that we have the same technology that exists inside of your television set and gives you the soap opera effect, but a version of that that is an order of magnitude improved. It Just sucks knowing that this could be better, because VR has been doing a better job with frame amplification since 2014. I don't know why they can't feed engine data in. If the optical flow generator was aware of where the camera is, and was fed data about areas with occlusion issues, half of the artifacts in these videos wouldn't even exist. It's a little insane.
Very good points. - For UI elements the game engine could specify areas to ignore to eliminate the UI issue entirely. - I totally agree with you . Nvidia should have simply accelerated the AWS 2.0 approach and that would have improved VR also while giving a better result for flat screen games.
To put it bluntly all they do is hardware comparisons and reviews at a really high quality, usually with little fluff humor unlike Canucks and such (all the better imo) There's a reason unboxing channels and LTT have more subs and views, it's because it's not just tech, it's technology with a spin Like that gameshow LTT did, the tech was a sidepiece. Not to diss HU, they are amazing and better than GN but that's just how it is.
I don't think it's a complex answer, or requires a conspiracy. The vast majority of people want their tech information with a side of entertainment. The presentations on this channel are no bullshit, with a constant stream of facts. This type of presentation just doesn't appeal to the same number of people.
This was an excellent review of DLSS 3. Tim explained the reality and facts about adopting this early technology. He provided his opinion and concerns about what Nvidia hasn't made clear, which is fantastic, and will help people avoid the pitfalls of using this technology incorrectly given their needs and expectations. Of course it is early days and we will have to see how developments improve this technology - I for one am excited about it and still feel very positive about DLSS 3 after this review. People can just mindlessly believe all the Nvidia marketing hype, but I'd rather know what is realistic and useful, so I can adjust my expectations and get the best out of what is currently working well, while keeping an eye out for fixes/updates that improve its limitations. Thanks Tim, informative and insightful as always!
The biggest problem with generated frames is that it will take away some of the "feel" even if the game is smooth. This is particularly noticeable in fast paced racing games ( like Rally games ) and fast shooters, it just feels unnatural, regardless of how Nvidia is inserting frames, you will notice that effect as they are not real frames. Probably not that important in slower games, however the "small flickering" in F1 reminds me of memory corruption on a GPU, would totally stick out for me at least, it's nothing you get used to.
This has been my argument against DLSS since it launched. It is nice as long as it is perfect, but it is distractive in competitive settings and downright immersion breaking where it fails. Plus the "best in slow games" argument is a bad one imho, since most slow games could just be run at native resolution instead.
@@andreewert6576 No, Hitman is a slow game and I recently had a one-off issue with it running at a low frame rate, nothing about that low rate was good. 'Slow' games still benefit from a higher frame rate.
Gents... absolutely amazing data and analysis of DLSS 3.0! That was top notch stuff. Much appreciated. Looks like NVIDIA is trying mostly to fake the frames. Looks like I will be leaning on getting an AMD card now when it comes up in few weeks.
@@bryzeer5075 I'm talking about DLSS3. What this review shows is that for the most part it should be ignored in your purchasing decision. Sites like PC Gamer who aren't able to look at it objectively were calling it black magic and dark wizardry. F that. It's an extremely niche feature based upon hiding lesser quality frames among normal quality frames.
In the end, what NVIDIA did was putting motion interpolation techniques present in TVs since 15 years into the rendering path of the GPU. They did apparently not solve the issues coming with these techniques, which is always when the algorithm needs to interpolate content which has been occluded / revealed between frames. Not really impressed, and far from being a "game changer" this technique has been labeled.
not only that but it is not rendering "actual" game play ... only an interpretation of what game play might be.. is has also been called "padding" the stats to make sales
Great review though in your excellent latency analysis graphs I would have liked to see more cases of reflex off. 'Reflex On' does reduce latency by nearly 40% which is significant. I know you were trying to isolate the effect of frame generation, but in my view there should be more comparisons between DLSS frame generation and settings where reflex is off. Because that is what gamers have been used to until now, and it is arguably new tech added in conjunction with DLSS 3. More DLSS 3 supported games means more games with reflex, which is a win for gamers even if they don't choose to enable frame generation.
The first time I heard about DLSS 3 I was already worried about the input latency, because usually the latency is much more noticeable then the frame rate it self, which - with DLSS 3 enabled - leads to this weird feeling of more frames but sluggish input that was explained in this video. But apparently DLSS 3 is even worse then I initially thought, having noticeable artifacts and even higher input latency. Thanks for the great video!
Can't find the video but I saw a blind test and no body could tell the difference between native / DLSS 2 or DLSS 3 - I bet you couldn't too if you didn't know whether it's on or not... :)
Really interesting stuff. Based on the increased latency I’m guessing they’re using two frames and interpolating between the two. At first I guessed they would only use previous frames to create the interpolated image and some fancy algorithm to guess when to display the interpolated frame.
Yes that's exactly what it is doing. And explains why it increases the latency. It renders frame 1, then 2. But instead of showing frame 2 when it is ready, it interpolates a frame 1.5 and shows that first. It increases the render latency by 50%, not considering other system latencies. In the graphs shown the total latency got worse by 25-35%.
How is nobody talking about the knobs and shafts in the cockpit joke? :D Anyway, awesome video, thanks! Another approach you might try for reducing 120fps to 60fps is frame averaging (basically cheap 2-sample motion blur)
History repeats itself. 4090 blows away all current gen parts in raster and has no competition in RT. Giant leap in production workloads. It's the successor to the GTX 1080Ti.
Thanks for laying NVidia's DLSS 3.0 bare naked for us consumers, Tim! Thanks to your hard work, we can see through the smoke and mirrors, so we can make an informed opinion on the technology.
DLSS 3 on a monitor appears analogous to Motion Smoothing (aka Motion Compensation, or ASW) on an HMD in VR. It extrapolates extra frames in order to increase perceived FPS without much CPU/GPU loading cost. But in VR the artifacts are much more noticeable for most people, and therefore I leave it turned off usually.
It’s a fancy motion interpolation, improving temporal resolution to something that isn’t pre-rendered is either guesswork or will require everything to run behind.. I imagine the generated frame here is using a partially rendered future frame. If all it does is push already high frame rates faster it’s hard to see the benefit beyond looking good on charts. Whereas dlss 2 is a bit like reverse MSAA this is quite different
Knew it was smoke and mirrors, awesome investigation Tim! Your level of detail and reviews keep me coming back time and time again. I can't tell you how many times I have recommended people to view your monitor reviews. Thank you for the insite into dlss3, I have no wish to introduce more latency into my games (not that I was going to buy a 40 series anyways). It's nice to see honest reviews.
@@Maseinface let's call it what it is , fake frames that offer no performance benefits what so ever.... what is the point then... you want more FPS so you increase your performance that's the whole point. This offers fake frames with a performance degradation... hence smoke and mirrors.
Conclusion: you need to already be at 120 fps to be able to use DLSS3 without drawbacks. But if you're not a pro gamer, you almost never need to be above 120 fps. The difference between 120 and 160 or even 200 fps is really small (to me anyway). It's really diminishing returns at this point. So DLSS3 is not all that useful. In cyberpunk if it brings you from 80 to 120 fps and the resulting lag and artifacts are acceptable, it could be a good use case, as that is an FPS increase that makes a difference. But that's basically only with 4090s and only in a few select situations. RT and DLSS2 made tremendous improvements from the 20 series (when they were clearly gimmicky) to the 30 series. I guess DLSS3 will really shine with the 50 series. For Nvidia though, the battle is already won. They innovate, they bring new technologies and occupy the discussions whilst AMD is still busy trying to follow in rasterization alone (they still don't have a gpu matching the 4090 on this), thus reinforcing the large perception that Nvidia is THE GPU company and AMD is more of a market filler.
Think it really would have also helped if they didn't call it DLSS 3 especially when DF said you can use it without DLSS on. It's it's own toggle in the games anyways so it should have had it's own marketing name instead of tarnishing the DLSS name. Especially weird as it isn't even super sampling, it's not an antialiasing technique so I'm just baffled why they call it that.
@@PrefoX we do, doesn't mean most people will. Plus usually having an acronym that means something can marginally make it easier to explain. I might not know MSAA means, but when they say "multi sample antialiasing" you now know it takes multiple of something for antialising. But DLSS 3 has nothing to do with DLSS, in fact Nvidia constantly labels it as "frame generation instead which just makes it more confusing as in some games it's literally called just frame generation making it even more confusing to people.
@@kejarebi I mean that's basically what DLSS ultra performance mode is. But he did show it is about like lowering DLSS from quality to performance mode so I guess it's similar.
I suppose one question to ask is whether it's Deep Learning based. If it is, then call it DLFG (Deep learning frame generation). Then it sounds like a good complement to the existing tech and shows both how it's related and how it's different well. If it isn't deep learning based then the naming is even worse!
I did not expect UI to be part of the frame interpolation. If the game has to support it, it could also work together with DLSS3 to create the new frame and then paint over the UI.
I think it can be solved it but the developer has to do code it in manually. It's like enabling SR from AMD control panel vs in-game FSR. I suspect the developers probably just hacked this in quickly for the launch demo and that's why it's interpolating the entire game.
@@monotoneone When drawing frames, the GPU must know some information about which elements are UI, as it won't cast shadows etc. based on them. It would need to interpolate everything, but could it interpolate two layers and then composite them together?
Neural net derived imagery in general does a poor job with text generation, unless specifically trained on that. Nvidia could thus update DLS3.0 with results from text-specific training data and improve the result that way if it is not possible to split the rendering pathways for scene and text.
@@Bayonet1809 I don't know what they use, but Avisynth had this plugin called mvtools, almost 20 years old, and it can do the same thing with only motion vectors. See a few examples I made fixing badly produced scenes from the series called Lexx, th-cam.com/video/uz0waWcsEzU/w-d-xo.html, th-cam.com/video/NhAoEesDF0k/w-d-xo.html. In the top right corner the missing frames were regenerated, similar to DLSS3's frame interpolation. Ignore that the source is interlaced, there are several missing frames in the source, even if you look at individual fields.
I really wonder how many people "really saw" these "fake frames". Cause when I've tried DLLS 3 on Cyberpunk, I genuinely couldn't tell the difference and I'm pretty sure 99% or the other human beings around here didn't either. Funny how everyone starts to see something as soon as they're being told that there is something to see
I find the issues really hard to spot too (except for some of them shown in this video, like the flickering UI elements). I probably won't ever notice these while playing myself. It is still interesting to know. I wonder if I'm able to tell the differences concerning input latency once i get my rtx 4070 delivered.
PC Building Simulator 2 is out now on Epic Games Store: store.epicgames.com/p/pc-building-simulator-2
ok
Not egg!
Epic...
on what? lol
you make one big mistake, you always say dlss 3 vs dlss 2, BUT you can use dlss 3 without the generated frames, so maybe you should say generated vs normal renderer or something. still a great video
Placing that 4090 so close to the edge of the table and then waving the hands around was quite the thrill
thats just what in the industry is called edging
Omg I can't unsee it now
yeah, he could hurt himself
with the size of that thing i think he would fall before the gpu does !!
his linus is locked out
'Frame hallucination' is my favourite description of what NV is trying to do here
native to quality has great increases though
They're sharing what Papa Jensen is smoking...
DLSD
We used to call this "smoke and mirrors" but this time around, smoke has hallucinogenic properties, it seems. And mirrors have imperfections.
In the future entire games will be hallucinated with NN's
This is why independent reviewers are invaluable to the industry and consumers alike!
True.
Also 120fps/120Hz is plenty fast for 99% of gamers. So if you're already at that level, it actually doesn't make sense to use Frame-Insertion.
Most would argue the experience is worse due to the lag and artifacts. Just like how Max-Ultra Graphics are mostly pointless, since you can have a game looking great with High Settings, and choosing certain elements carefully. Or having beyond-4K Graphics are overkill, when you have 1900p resolution intelligently upscaled. Or even using Raypath Tracing at this point in modern technology. Not even the best RTX-4090 is capable of such feats (2200p/145fps/Ultra Settings/RTX-On), but you will get something visually competitive with significantly cheaper hardware like RX-6800 (1900p/FSR-2/120fps/High Settings/No RT).
I suspect that DLSS v3 is in "beta testing". Just like how DLSS v1 finally had its quality and performance issues fixed with DLSS v2. We should see similar thing with DLSS v4 launching in 2-years.
The question is, does AMD-ATi have something in the works as well?
@@ekinteko 4k ultra are not pointless.....thats where i wanna play at 60 or even 70 fps or more
I agree.
@@nephilimslayer ultra is kinda pointless, it's a lot of GPU resources for not that much gain in visual fidelity compared to high settings
Don't listen to this dude people like him compromise themselves for 13k likes of slack people who cant get a job or work harder for their dreams lol
For DLSS3.0 the artifact is truly unnoticeable and Also with RTX4090 on every game you get at least 110fps 4k max with DLSS3.0 quality mode so HUD problem is unconceivable. lol
Wow! Bravo for this video. It is amazing how you broke this down to a very understandable level. It is also very hard to put a video like this out that is fair pointing out the good and the bad and allowing the audience to make a determination how much any of the information may or may not matter to them. I am beyond impressed. Thank you!
This has to be some of the best investigative hardware journalism I have ever seen done. I can't believe you got this depth of analysis in such a short time. You deserve a nap, Tim.
This is no different from all the DLSS 1 reviews way back when 2XXX series came out, making it like useless tech that makes games look worse, and than DLSS 1.1 came out, 1.2...2.2 and so on and now its perfectly find and technology most people wont turn off [i had 3090 and always used DLSS]
Its too early to give deep analysis for DLSS3, we need to wait for them to fix all the bugs, make it out of beta and see that games implemented it right
Now its just bunch of negativity
@@NoBodysGamer This is nothing like original DLSS. watch the video lmao. DLSS original was a the right product not doing what dlss3 is doing. Its just marketing crap. DLSS 1 just needed time. This is already doing what its suppose too do and just for small level of users.
They've been my go to for analysis and reviews.
Nvidia did AMD a favor by releasing DLSS 3. FSR may have never caught dlss 2.x but dlss 3 is definitely within reach.
Now FSR 2.1 and DLSS 3.0 has image artifacts. One with greatly improved fps, the other with fps improved to a lesser degree but improved latency.
Yes well done - I need a nap too from listening
Maybe this is another reason for releasing the 4090 first ? If its worse when your base fps is lower, the other gpus probably benefit less and thus the initial impression of dlss 3 is better when used with the 4090.
no, it is known that all Ada GPUs have the same power for dlss' frame generation.
@@PrefoX watch the video. Dlss 3 is better with higher framrates => its better with 4090 other the other cards
@@PrefoX nope. Way to be confident in you're misinformation tho. It's logical. The more/faster "real" frames you feed an algorithm the "better" it can predict the motion of the next AI generated frame. If you took an algorithm to predict frames between a GPU operating at 1hz that's A LOT of motion to fill in via prediction.
Not really, many high end card (
Or product in general always to be released first
@@FaridRudiansyah that's not true. The 30 series was the only gen to have a "90 or titan" card on release. Previously the 80 was upper gaming tier and released usually with the 70 then a "Titan" was released 6mos or a year later.
The UI artifacting and increased input latency are a huge dealbreaker for me. I'll stick to DLSS 2.x and FSR 2.0 (luckily they are seperate options in the menu)
Yeah i don't care about frame count, i care about latency. Its just that normally more frames = lower latency. DLSS 3 frames do nothing to lower latency, in fact it makes it bigger.
@@PadaV4 Yeah, I don't even mind that much how 30 fps looks ingame, it's just the moment I start playing it it's like a big fat turd through a funnel with how sluggish it is is what I can't stand about it. People don't mind low fps video playback because it looks good enough and latency and responsiveness is not an issue unlike with videogames.
@@PadaV4 With reflex, there is basically no difference in latency
@@bb5307 Reflex does iron that out to some degree. 7 millisecond in latency is not bad with DLSS 3.
you can use dlss 3 without frame generation, but it is still going to be dlss 3
I think frame generation would be much better used to maintain a minimum framerate, as artifacts in those scenarios may be preferable to stuttering. As a way to improve average framerate it seems worse than lowering image quality in its current state.
To get my 60 fps i prefer to lower shadow and grass rendering than getting weird artifacts like that.
@@thezeke1984 Wouldn't work. Most open world games these days have highly variable fps based on scenario. Crowded city, tons of NPC? drop below 50fps. Out in the open desert? 70+ fps.
If you want stable FPS, the only way to achieve that is through dynamic scaling techniques, such as DLSS, and other dynamic techniques.
@@hagaiak open world -> mass grass
crowed city -> lots of shadow.
@@thezeke1984 Again, you can't aim for stable FPS no matter what settings you choose. Decrease shadow settings to get 60fps in city = you'll have 80+ fps out in world no matter what settings you choose. Pick almost any game, that's the case.
@@hagaiak I think that with time and a MUCH more strategic use of frame gen could offer complete frame stability. Imagine. Youre getting your desired fps out in the wild, youre getting 90- 100 fps. Now you trek back to the city where ordinarily you'd drop to 70-80 fps. So instead of DLSS 3 rendering every other frame willy nilly and trying to achieve the highest possible fps, it aims to maintain the fps of environments where traditional raster is only needed. So instead of generating every other frame, it could generate 1 every 10 frames; or, in a crazy hectic scene it could temporarily boost to its highest frame gen to maintain frames. I think it best case use would be the game using internal benchmarks to see where youre getting the best FPS and progressively turning frame gen on and off in relation to your 10% highs. It could be a seamless experience aimed to decrease the amount of perceivable artifacting whilst maintaining average frame rate and as low as possible latency. So imagine, you're in a city and youre in a gunfight. All of sudden, a massive enemy crashes through the wall leaving a ton of debris and particle effects flying through the air. Ordinarily, your 90 fps experience just dropped to 50 fps in traditional raster. But aha, frame gen kicked on for that split second encounter smoothing out the entire encounter to that buttery 90fps, then going back to traditional raster. The perfect support system.
Great analysis!
So DLSS 3.0 gives you frames when you don't need it and kicks you in the face when you do.
Basically. 😂
And offers at least 2x performance over the 3090ti without DLSS and about 4x with DLSS.
@@bryzeer5075 defending nvidia there, I see? Go on mate, have fun. At best it offers little over 1.5x the performance. At 2x the price. DLSS nonsense is irrelevant. You're a clown 🤡🤡🤡
@@bryzeer5075 *40-50% rasterization, x2 with dlss in like 10 games.
@@bryzeer5075 Nobody cares about proprietary features like this or PhysX or GSync or whatever especially since the major consoles are on the AMD platform. Only the Ngreedia fangheys are spooging over this.
I would love to see someone compile 60 FPS footage of AI only frames.
th-cam.com/video/Co3nS39Yz9k/w-d-xo.html
That's would honestly be interesting to see
2kliksphillip plays with this idea in the video “upscaling” go check it out
its really hard because video encoding would probably hide or exacerbate most of the AI artifacts, it would not be representative of the real experience
@@elektromagnetik2786 Before the YT encode step, one could perhaps capture uncompressed.
It was really bugging me that other content creators weren't really mentioning how DLSS 3.0 works, as it's really quite misleading how the extra FPS comes to be.
Thanks for the detailed video! Really interesting stuff.
it has to do with availability, several have allready stated it would come in a separate vid, much like HU
DLSS 3.0 which boosts framerate, but is only viable when you're already generating alot of frames but the 4090 doesn't support higher refresh rates like 4k240 because Nvidia is the only one who skimped out on dp 2.0 (even intel has it), and in lower refresh rates the 4090 is bottleknecked and already maxes out most games, while for esport player it increases latency.
Tell me who's this feature for exactly??
P.S also breaks vsync currently and causes tearing when frame rates exceed refresh rate
@Doranvel except 4090 doesn't support over 4k120 it doesn't even reach 144hz. And it's such a pointless restriction as well for a buyer of a 1600$ card. Like if intel can put display port 2.0 on its a380 msrp 140$ and probably it's a310 which might be sub 100, it's a really bad look when nvidia skimps on it
Who hasn't been mentioning it? I've not seen a single review not talk about it and explain it
@Doranvel problem I see with dlss 3 is like here its 119fps had input delay similar to a native 40fps and thats a pretty bad experience
I'd love to see a video of extracted dlss3 frames with no traditional frames. If every other frame in a dlss3 video is AI generated, perhaps it's possible to resample a screen capture video to only show the AI frames by skipping every other frame.
That would be magnificent. 👌
Nvidia will maybe send you a letter say how dare that you are not supporting their narrative, how could you do that, as an independent tester. :-)
what a great bit of great information that I had no idea of from the advertising, this is why I love your videos, and always use it to determine my purchases.
you missspelled it, its Ngreedia.
now with magic shitty and softlooking extra frames.
Big companies don't do that ;-)
They would just silently blacklist him and NOT send anything interesting anymore.
Seeing as he had to slow down everything a stupid amount with the only really noticeable thing at full speed being some ui elements I doubt it
So, in summary: DLSS 3.0 is perfect for people who are playing non-competitive titles and already achieving 120FPS on 144+ Hz adaptive sync monitors as long as the game is not input sensitive and does not have much moving UI text and the DLSS 3.0 is not pushing their frame rate above their monitor frequency. Glad to hear that this is not a feature for a niche market.
Well, why should they limit you to sketchy interpolation artifacts? You want that screen tearing, too, so consider it an added feature.
@@xwize Half of which are in nVidia's marketing department
I think 60 to 90 fps is a good spot to enable DLSS3, should boost you to about 90 to 140 fps.
Don't forget not only do you not want to push over the frame rate but the game needs to have an even frame-rate by itself. If looking at an empty part of the city doubles your frame rate you'll not be able to be over 120 during the majority of the game and not over 240hz with DLSS3 when in the lighter parts of the game.
It's kinda like Nvidia asked: "How do we take this technology we developed that really helps people with low end hardware only benefit you if you buy high end hardware..."
So to summarise DLSS 3 is only useful when you just do not need it.
Yup, I thought it that when he was discussing 60fps rates.
I think its the opposite. It still looks bad at times, so I dont wanna use it and definetly would not pay $2k to use it lol, but its in games like msfs, and other slow paced low fps games where it is needed most, and there the frames being "fake" does not matter, the smoother image is whats wanted
Yeah it's really odd isn't it? You only benefit from it if you already have a good frame rate, but turning it on then penelizes you with screen tearing as there really aren't any 240hz 4K monitors. It also doesn't have DP 2.0 so getting monitors like that still seems unlikely unless they use HDMI.
The use case is rare
You do need it though, how else are you going to generate a graph that shows Nvidia is totally 3x faster than anybody else?
I love this kinds of deep dive analysis. I really appreciate this kind of awesome and unique work.
Love that Tim is paying attention to small and big knobs … and all those shafts !!!
You sir are a legend!!!
"While looking around the cockpit" no less
I'd love to know if that was a one-take, or if it took him a few tries to keep it straight.
I'm amazed how serious he made it sound.
Reported
@@stickboy8219 whats reported ......
we are so lucky to have Hardware unboxed. you guys are mad lads x
Never follow the official narrative, the mainstream is misleading... kudos to Hardware Unboxed for standing with the people
Sometimes I wonder if life is even worth living, and then I remember that we have Hardware Unboxed and I feel a little better.
@@jeremyphillips3087 Hey man get help...
Wait till you see their "Fans Only" site ...
@@jeremyphillips3087 It isn't.
'Small knobs, big knobs and large shafts' I've never been so eager to check out Flight Simulator before 😂
And all that in the cockpit
The was fucking deliberate 😆
This video is full of tips.
Excellent, unbiased analysis, as always. Thanks Tim and Steve for all the hard work you do.
@@MaTaMaDia Everything is inherently biased. It's more about how much bias there is and whether or not you want or can sift through the bias to find the data and context around the data.
It's not unbiased - they heavily favour AMD, always have, which is understandable but sadly Nvidia just make the best card.
They favor value over everything. Like everyone does. Doesn't make them biased. They still recognize a good product when they see it.
@@jlmonterrosa14 Nah, they are biased, everyone is, it's cool, they still do good stuff.
"unbiased" xDD This channel has had a massive anti Nvidia bias since I can remember.
This actually makes me more interested in RDNA 3, because the 4090 has solid generational uplift, so if Nvidia thinks they need to push this, the competition might be very close otherwise. Wait and see I guess.
might be another 1080 ti vs vega situation again
@@d9zirable I expect it to be more of a 30 series vs 6000 series, but with great RT support from AMD this time.
This might be true, but I think it is more likely that NVidia is trying to make the second hand market GPUs obsolete by pushing the next gen as far as they can. So they can continue selling new GPUs.
Very close? There is a reason Nvidia is prepering a 4090Ti, because the 4090 will be beaten, and not by a small margin.
4090 is decent (but too expensive), 4080's both suck (plus way too expensive)
This is the DLSS 3 video that we were all waiting for. Excellent work!
Way better than digital foundry's one
@@HorribleNimbo Shill foundry
@@HorribleNimbo Big surprise the guys who get nonstop exclusive nvidia content never say a bad word about nvidia.
Yes. DLSS3.0 is not good atm.
@@angrygreek1985 Shillital Foundry.
Great video explaining how the extra DLSS3 fps is not going to improve imput latency. I was surprised about how much of a negative effect it has on latency. 👍
Which means I don't need it 🤣
Yea this was always a big asterisk with the technology. sure stuff will be smoother but... what if you wiggle your mouse fast?
Not only that but the image generated is even worse.
Still better than native, wth are these reviewers complaining about when its better than native???
@@isaiya6799 the AI frames *_are not better than native_* . There's plenty examples of that in this video alone.
I'm seeing this 'feature' trick a lot of people who don't understand much beyond FPS numbers. Now with the launch of the 4060ti being no better than the 3060ti without frame generation it's become even more obvious how much Nvidia is leaning on this feature. My favorite thing to say is 'why not just enable your TV's motion smoothing instead' since they're functionally not that different.
Thanks Tim. Great research done. I learned a bunch about DLSS and ai image generation.
My take from this is that DLSS 3 is similar to a high overdrive(or whatever the manufacturer calls it) mode on a monitor; It looks nice and fancy in the numbers, but the visual experience suffers to reach a better number for marketing. Yes I know the analogy doesn't hold weight once you get to around 120fps native, but at that point is there really any need to enable DLSS3 over DLSS2 or FSR2?
The need is there for a small portion of users only which is sad but it is what it is i guess.
Trying playing a game like cyberpunk with ray tracing and you do need DLSS even with 4090.
Agreed. Personally I play on a 75 inch TV so DLSS even when working I generally don’t enjoy visually since I notice it doesn’t look as good as native. If I can already get 1440p - 4K at 120FPS I see no need for it really. If I ever get a 360hz monitor one day then sure, but I’m not a competitive player anyways so I doubt I’d ever need it. Still, I love the idea of the technology and hope to continue see it evolve.
@@TheEPICskwock 😕 I guess I will wait till I can play CP2077 with RTX 10090ti.
About the use cases: a 2070 owner already bypassed the DLSS 3 restriction getting double the framerate but at a unstable framepacing in Cyberpunk w/ RT. So, there's benefits for a high end 30 series cards running a game already at a high fps with ultra high end monitors.
Stellar work from Tim, clearing the fog for the masses and validating objectively everything we enthusiasts target for in performance PC gaming.
Been watching several recent videos on this channel recently... I always discounted it because of the name, not really being interested in unboxings. How wrong I was on that count, these videos are excellent :)
As for DLSS3, it's basically as I feared once it was explained that the tech needed to keep a frame in buffer for it to work. Basically you don't want to use it for the same reason that you don't want to have frame smoothing techniques turned on on a tv if you game on it. In the scenario described where you could use it (singleplayer, high framerate, not sensitive to inputlag...), that's the scenario where I personally wouldn't really value the result anyway. Maybe some people can really tell 200 fps (and Hz) over 100 fps in a lower pace singleplayer game, but I certainly can't.
It started as an unboxing channel, but changed its direction pretty early on (but not early enough to start fresh with a new name, I guess). Tim and Steve are providing some of the best hardware analysis in this space, imho - very competent, very thorough, and quite well explained and presented. I remember when DLSS was launched with the 20 series - almost every other reviewer just repeated nVidia's claims about the underlying AI technology. Tim seemed to be the only reviewer at that time who actually understood AI and neural networks on a technical level, and he was therefore able to quickly identify and demonstrate problems that most other reviewers missed (despite being glaringly obvious to anyone with experience in neural network design).
I think it could be used on single player games where you usually get 30fps, but want a 60+fps smooth image.
Thanks Tim for another great job covering DLSS, it's good when someone brings down to earth features that are sold as premium.
Frame generation should be implemented by game engines because they are aware of the user's input, in-game events etc. and can strategically generate frames when conditions are ideal. Nvidia should have exposed accelerated frame-blending APIs for developers.
Slapping it on at the end of the rendering pipeline just creates nice performance graphs for marketing purposes. It's a bit like fitting a muffler tip at the end of a small engine so it makes a louder noise.
If Nvidia would give game dev they API, devs should ignore it, and use something open source, because this is not some kind of magic, because many TVs have this.
And devs should think twice before getting any black box from Nvidia, or they ends as EVGA, they get it for free know, because there is AMD, but if not... they will have to paid for it, and when you become dependent on someone technology... oh Adobe :D
Close sourced narrow usecase even if it was better wouldn't make sense with DLSS 2.0 being a thing unless NVIDIA pays these developers. That's a lot of work for developers to put in for a very small group who have the latest gen GPUs but want frame interpolation that has worse latency.
Excellent video, Thanks guys!!
You asked for our input? DLSS 3.0 is mostly a gimmick IMO and the conspiracy theorist in me believes it was conjured up solely as a vehicle of misinformation - as a graphic artist I would even go so far as to call their graphs fraudulent! The 4090 and 4080/16 are okay-ish cards as a next generation over the 3000 series but the prices are inspired by greed and blatant disregard for their customers. Additionally, the idea that we have gone from 240W max in the 2080Ti to 480W or so in the 4090 while just barely (and in some cases not) doubling performance and frame rates speaks volumes to me. Volumes of shame that is...
If the 4090 were _between_ $800 and $900 with the _real_ 4080 being $100 to $150 less I would be singing NVidia's praises. As it is however, I think I will be looking much harder at whatever AMD/ATI brings to the table!
Agreed
I was thinking the same. Based on the information provided, it's basically a fancier version of LG's TV true motion, clear motion or whatever other TVs use to display artificial high frame rate when watching movies. It even does the same thing with adding high input lag and even some artifacts. Which is why those TVs have "game mode", which basically turns off that tech and other post processing effects to achieve lower input latency when playing a game.
So from programmers perspective input lag results from bad code base. Every game that has input lag from dlss 3 or other technologies have a game loop that waits for completing rendering to receive new input. Some games are developed in a more modern way to receive input while rendering. Modern gameengines should not suffer from input lags (modern game != modern engine).
4080 is more efficient than any 3000 card
@@harryhelliar1474 It is, but not $1200+ more efficient.
Great break down as always Tim.
I think the other issue with DLSS 3.0 working better for high frame rates is simply the amount of interpolation that has to occur. The higher the native frame rate the less information change that's occurred in between frames. So not only are these interpolated frames on the screen for shorter intervals, the artifacting in them should also be reduced as the model has to infer less information to fill in the gaps. It's a substantial challenge to get this kind of technology to work well for lower frame rates, where you'd want it the most.
I applaud to the professionalism of HU! In my opinion, you guys are, by far, the most level-headed and objective hardware reviewers on YT. You consistently offer detailed, yet easy to understand explanations and you are refusing to fall for the hype. This is why HU is my absolute first choice and most trusted source for hardware reviews .👍👍🍻
Gamer's Nexus is pretty good too
@@ofon2000 Yes, they are, too, without a doubt. However, HU tends to provide simpler, yet as detailed explanations of discussed topics.
The video isn't bad. But I'll take Gamers Nexus or Digital Foundry any day.
@@pokealong GN is also great, as I said. But DF....While they are excellent in providing technical details, they're on Nvidia's payroll, usually shilling and overselling Nvidia's tech.
@@RandomlyDrumming They literally have a video very similar to this one ripping DLSS 3 to shreds. They list point by point what's wrong with it with countless examples of where it's flawed and how vsync hurts it. They're no more in Nvidia's payroll than Gamer's Nexus is for having an entire video praising the cooling of the 4090 WITH NVIDIA.
Maybe you all missed every GPU video they've done in the last 3-4 years where they praise AMDs value.
It's such a weird technology as it's made to make a game run at higher frame rates than normal, but it doesn't work well when the GPU doesn't naturally output that many frames. It really seems its main goal is dealing with CPU bottlenecks but I also heard CPU bottlenecks also limit the bonus of it. It's really confusing. It basically like it was only designed with the idea of using a 4090 with the highest end CPUs. It's so odd. I was honestly expecting it to be a lot better for lower end cards.
It's also kinda unfortunate as being that it brings games over 120fps at 4K, it means that it will almost always hit latency by being over the monitor's max refresh rate. It's incredibly rare to find higher than 144hz 4K monitors and the new GPUs still only have displayport 1.4. That means going higher is still unlikely.
It is very strange. I think flight sim and civ are like the only games I could see myself using it on.
That's my impression as well. Just like RT back on RTX 2xxx series was pretty much useless on the low end cards, this will also be another something people at the lower end (4060 and bellow) will pay premium without being able to use.
Designed to sell at higher prices than it's actually worth.
@@masteroak9724 Responding to all 3 comments. To be fair on 3050 and 3060 RT (except minimal RT settings) is still hard to use (DLSS 2 is fine thankfully). 40 series might be the first generation where it will be useful on lower end with decent RT settings, but not with DLSS 3. Also CPU bottlenecks that DLSS 3 is supposed to override are also causes of additional increased latency often, so it might hurt even more when CPU bound (though you'd have to be very CPU bound for that).
As for Flight Sim and Civ - it would depend on how annoying the flickering of UI is. Since honestly in Civ you don't need that high FPS to play the game.
I'm not sure why you think it is weird, it's plainly obvious that the results of interpolation will be better the lower the interpolation window so it's naturally geared to boosting high frame rates. Most games already feel awful to play at 60 fps, so holding an additional frame in order to perform the interpolation is always going to feel bad. Probably a significant amount of non esports orientated titles can't be run over 150-200 fps whether it be via cpu bottlenecks or other engine limitations, so being able to double that to 300-400 fps for the next generation of high refresh rate monitors is a welcome addition imo, assuming the major kinks like ui elements and scene transitions causing problems get resolved.
As a latency nut this is crucial information. Really great work
Agreed! great video
Imagine calling yourself a latency nut...
Yeah, doesn't appeal to me either. It will likely get fixed, but by then it will probably have been hacked to work with the rtx30XX series anyway.😂
This is the best DLSS 3 analysis I've seen so far! Thank you for all the hard work
So basically this is really a similar experience to using a modern tv with frame interpolation.
To a degree. DLSS 3 will do a better job because it has access to motion vectors.
@@Jonatan606 unless a direct side to side comparison is available, the improvement here is just speculation.
@@Jonatan606 But you don't get the artifacts in the UI like with DLSS 3.
Not quite. Motion interpolation purely processes the image.
If someone moves a hand when it blends the frames together it will look more like the hand is fading out of position 1 and fading into the other position.
With DLSS 3, the GPU KNOWS it's the hand that's moving, so it calculates the midpoint between the two positions and puts the hand there. Then use the backgrounds of each image to draw the background behind it.
Frame interpolation has often been called the "soap opera effect" because it makes everyone's skin look super soft. Especially if there's big contrasts to their skin like a guy with stubble for example.
@@liuby33 Digital Foundry already compared it with other interpolation methods. Spoiler: DLSS 3 does a much better job.
It would certainly be nice to have a setting, how many real frames are rendered per "fake" frame, so that the visual & latency compromise can be adjusted to personal liking
great point!
In terms of latency, the way it works now is the best case scenario. If you were able to configure it to generate a fake frame for every 3rd frame instead of every other frame then you would need to start triple buffering frames which would actually increase the latency between when a frame is generated and when it is displayed even further. Frame interpolation is a marketing gimmick that has no place in real high end hardware.
Let me *TRY* imagine this with my undervolted brain. If native fps is 120 fps and after turning on dlss 3 fps goes to 200fps then there are 90 *fake* frames, which means 1 fake frame between 1.33 real frame. I'm just imagining this so could be wrong.
@@bingchilling1771 "fake" Frame inserted into two "real" frames iirc
This is one of the best and most objective videos on DLSS 3.0 I have seen so far. Thank you!
You and gamers nexus are doing wonders for PC journalism. Thank you
I was noticing this on some of the first demonstrations. Reviewers have been saying it’s only a little when to me it looked like a lot was going off in the distance. We’re just trying to hit frame numbers while ignoring quality with the image it’s like coming full circle with low frames with good quality imaging that gets chopped up.
well look at dlss 1 and what we got now, 2.4.9 is mostly better than native. give frame generation a bit time. and you have to get it into the market at some point for future generations, like with RT. RT is going the last keystone for real gfx
@@PrefoX no it’s not better than native. The game’s taa implementation has to be really poor for dlss to look better.
N DF is going to hide these things as they are nvidia shills.
@@PrefoX at 1080p/1440p,its better than native but at 4k its slightly worse and at 8k its garbage compared to native. Dlss 3.0 looks like dlss 2.0 perf mode at quality its bad visually at all those resolutions
@@anuzahyder7185 There have absolutely been games where its better than native as its able to infer finer details that were lost in native, such as making small lines complete, reducing aliasing. Its a better solution than TAA in those cases.
@@alexatkin yes resolving those fine details doesnt make it better overall. There are other things to consider as well
Here's the way I look at DLSS 3.
It's a really cool piece of technology and a great proof of concept. But the entire point of getting higher framerates is to get lower latency and make the game feel more responsive.
Turning on DLSS 3's frame generation either doesn't improve the latency at all or it makes it significantly worse. Meaning all you're getting with it on is higher framerate numbers to make your (and Nvidia's) benchmarks look better, while getting artifacts you weren't getting with DLSS 2 / 3 without frame generation.
If you're not getting less input latency with the higher framerates from DLSS 3, then what's the point? It's a gimmick.
That's not the entire point though, higher frame rates "feel" more true to life, they are more immersive even in slower paced games and make the image less blurry. I had this very problem with Forza Horizon where at 60fps it looked more photo realistic but felt more fake due to the frame rate, but at 120fps it felt more realistic but looked worse due to having to reduce settings. I've spent hours in games like that just driving around enjoying the scenery, unlocking the roads, where the input latency being worse but the experience more pleasant is a night/day difference to me.
Sure, a 4090 is already going to run that game at 4K 120fps fine, but this is about future games where they are pushing the graphics even harder, which is why NVIDIA used the example of Cyberpunk fully path traced. There are plenty of games where the frame rate looking better is more benefit than latency, just because you don't care about them doesn't make it a gimmick.
@@alexatkin Like the video said, using frame gen at low frame rates is going to increase the latency even more. If Cyberpunk path traced runs at 30-40fps on a 4080 12gb then turning on frame gen would increase the latency to unplayable territory, then there is also the question about how frame gen performs on slower GPUs, what if it takes even longer to generate frames? You really don't want to play a 1st person perspective game at 100ms+. It is a bit of a gimmick. Maybe DLSS 4 will be better.
Oh god, high latency is a big no no- I don't care how smooth it looks, if it feels sluggish I'll never use it -it's the Pc version of fondant for cakes (looks good, tastes bad). Great vid as always.
9:33 "We're inside the cockpit of an aircraft here, we'll also find details: Smaller knobs, bigger knobs, and even some large shafts to look at." Nice one, Tim xD
😂😂😂
I only caught it the second time around
Here's what I think, motion on screen is a trick of fast frames moving, and while adding a generated frame between tricks your brain into interpreting a 'smoother" motion, I don't think that's it's a valid FPS benchmark to include because it isn't indicative of rasterization performance. However, It's a neat technology, but it probably shouldn't be used under an certain FPS threshold because it makes artifacts more visible. It reminds me of those TVs that would smooth the 24/29/30 fps video to 60fps, but at 4k it was easy to spot artifacts and blurs and to me, made it unwatchable. Where this tech excels is at frame rates past that the human brain already comprehends as motion but it doesn't add any perceivable difference to the gamer, only the benchmark.
Exactly this. It's just motion interpolation, not real rendered frames.
Fair point, but what you have to realise is that the artefacts are less visible than TV interpolation because a lot of the places that falls down the motion vectors are able to fix it. Many years ago there I used to game with TV interpolation on because I hit on a TV that just seemed to do an insanely good job and the TV was so sluggish I didn't notice the lag difference.
Granted I wouldn't do that now as I found I enjoyed a lot more games now I have a TV with decent latency and the difference is HUGE with interpolation on, but I will absolutely use DLSS 3.0 interpolation if it means I can have better graphics on modern titles with a native frame rate between 40-60fps interpolated to 80-120.
Wanted to comment on the UI element thing... While there can be some slight variances on the exact method games use to do this, each UI element is 3D geometry... Generally two triangles with a texture on it that is typically dynamically generated each frame. With text, the game dev can either bake the text directly into the texture, or make each letter a "3d object" and layer it on top of the background UI elements. I've used both approaches back in my indie game dev days.
Because of this, I'd imagine it'd be extremely difficult for the AI to differentiate between UI elements and the rest of the frame, and handle them differently to eachother
So what is difference between this box which is same place of frame, and car which is same place of frame (as in CP2077)? Because it looks like they train this algorithm on texture data and not on fain detal data, and this is why it have so many artefacts on this elements. Something as using JPEG for UI and not using PNG. JPEG is better quality/size for photos, but is bad for gui type elements.
I feel like it really depends on the rendering method.
Btw, there is no AI running tho. its just interpolation.
@@termitreter6545 so they do something really wrong, because when on screen eg in Flight Simulator you have this bubble with distance, if don't change place and it corrupt it most, and not side of it, but text in it. BTW AI is part of ML and for algorithm to name as ML you need only one number to change to fit model. Even we looking at NN it is enough to have perceptron, one neuron to name it NN (maybe 2 neurons but...), and perception is use by eg AMD in CPU in branch predict so is so fast, but it adapt so it is AI lol
BTW2 most would think that if someone is talking about AI we are talking about DL = deep learning, which are deep neural networks (more then 1 hidden layer)
@@m_sedziwoj Idk, maybe its a bug, who knows.
As for AI. Thing with neural networks is, they are made at nvidia with an "AI", but at home you just get the output of the NN at your PC. Youre not actually making one, your GPU is just applying the result.
Tbf its a pointless detail, but it just bugs me sometimes.
@@termitreter6545 I making NN in home, but not this one used by Nvidia ;)
So training is hard, using is relatively easy.
Maybe so many people make this mistake, because everyone is comparing NN to human brain, but they works completely in different way. It is more as air plane is inspired by birds, but we know how much they are working similarly ;)
DLSS3 looks very cool. My main concerns are UI, a lack of framerate caps/vsync, and the fact that there’s simply not enough bandwidth to push anything higher than 4K 120hz through HDMI 2.1.
In some cases it feels like it’s too fast or too good when it comes to the 4090. It’ll be interesting to see how DLSS3 evolves and scales on lower end hardware.
Well, watching this at 30 Hz was surely a fun experience! Still a lot of amazingly useful info, thanks.
Hey Hardware Unboxed, fantastic video as always! I just wanted to add an idea into the mix. In the beginning of the video you talked about how showing high FPS scenarios with DLSS 3 is difficult with TH-cam's 60 FPS limitations. I was wondering if you could potentially provide a download of the footage or even just this whole video as a demonstration of what this new technology looks like. That would be really cool to see!
They do have a Floatplane, so it’s possible they provide it there if you sign up to support them. If not, I’m sure they’d be open to it
@@Quizack That is true. Maybe this is too much to ask, but as much as I personally would be open to subscribing as I would really like to see DLSS 3 with my own eyes before I purchase an NVIDIA card, I think the PC Building community would really appreciate some footage (maybe not as much as their floatplane members) as a sort of "see for yourself" approach to a buyer's guide. Even with just a couple scenes with some varying framerates would go a long way in helping regular consumers understand what they are actually buying.
@@jai2628 Yeah I get ya. It would be nice to see. It’s hard to tell with TH-cam’s compression.
really appreciated this breakdown and analysis, especially regarding latency. thanks for the time you put into making this!
What an amazing video. Just wanted to say thanks for all your hard work and effort into videos like these. Really appreciate this channel.
Imagine all these Tests are without Frame Generation or aka DLSS3.0 with the power of DLSS3.0 quality mode you could get at least 120 fps on every game 4k max settings and picture are crispier and artifacts are not visible to human eyes ,damn RTX4090 is a beast ,True you cant use Frame Generation on competitive games in regard to latency but you don't even need that ,even without FG you could get at least 360fps over competitive games with Nvidia reflex you get lowest latency. lol
LOL Imagine donating to a yout*ber
@@AbhorrentRed so what?
@@AbhorrentRed whats so wrong with that? everyone needs to make money and hes supporting somebody so whats so wrong with that?
@@AbhorrentRed did u just censor youtuber?
The biggest deal breaker for me isn't the image quality problem of DLSS 3, nor is the latency problem, but the fact if you force vsync in DLSS3 latency become completely unplayable, no vsync and you get tearing, only in one scenario where your DLSS 3 result is within VRR range of your monitor then you might get a better experience.
Do u have this issue with g sync monitor ?
Nobody use vsync, it's 2022, you either use freesync or vsync, at least there are fast sync available so vsync is literally the worset option
yeah, that's what dldsr is for. gotta find that sweet spot though. but agreed, latency is a non-issue. Most people don't even enable reflex. and the artifacts he was pointing out I couldn't see nearly any of them. gonna try it out myself later today
If it isn't in the VRR range then your frame rate is too low to benefit from the feature anyway.
@@Simon-tr9hv You need to use v-sync to eleminate tearing with a variable refresh rate display.
Amazing In-Depth analysis as always, results are as most people expected tbh, hopefully frame generation is gonna be a much more useful technology in the future for lower end cards.
Frame generation would be useless for first person shooters or quick twitch games as the latency can be felt as mentioned by optimum tech. If you only play slow games then it's okay.
I thought frame generation sounded like a gimmick, until I tried it. Using DLSS 2 is awesome. But remember, DLSS put more load on a cpu, and takes some load off the gpu. Poorly optimized games that are released too early, causes gamers to need more computer power to compensate for poor programming. After using frame geneartion for a few months, Frame generation is actually great. Games run more smoothly.
it's exactly what i expected, the technology is the same as on TVs for years same story artefacts and latency increased. But the numbers looks good and that's what marketing folks like - to fool people. Great work on you testing it so thoroughly!
Although ngl, I usually keep frame smoothing on on my tv all the time. Yes, I know it’s not doing me favors in terms of latency, but for me, it by far outweighs having to look at a somewhat 30fps game like bloodborne, especially the larger the tv gets. The jumps in gaps between frames at that tv size are just too bad. But that’s with a controller, with a mouse I never have these things on, and just lower settings to get a higher fps.
@@granatengeorg good for you if you can stand the artefacts :) i have tried it many times with consoles with different tvs and never liked it mostly hated it + latency sometimes i prefer to settle with "cinematic experience" if the title is worth it
I don't understand what's wrong with bringing an old technology that works to gaming market...
@@RawbLV nothing wrong to bring it as bonus for those that does not have it in their tvs, whats wrong is bragging about it while it is so crippled and making whole release about it. Thankfully the raw performance is also good so rtx4090 is worth buying. But that shows direction, hey in future instead of making our products better we will slap on features that bump up stats but are horrible and nobody will use it instead of focusing on actual product development if you know what i mean. So calling deficencies in DLSS 3 Frame gen is for our own good the louder we do it the better. IMHO its not playable that way
The release was not about gen over gen improvement but about improvements made in DLSS 3 which are fake and miserable
Also understand this tech is crappy on TVs where input lag does not matter you have 25/30 fps per second you can delay video and audio how you want to make sure those generated frames are great and aligned and yet still you will see artefacts. So no wonder it would be the same or worse when input lag matters and you have a lot less time to do the same thing. To be fair i expected even worse results so they did good work on this feature though this should not be selling point.
Nobody breaks it down like you guys. Very unbiased channel 👏
Perfectly Honest too.
Phrases that can be used for HU and during sex.
All I know is when I use DLSS in war thunder and escape from tarkov I get ghosting on things that are moving on the screen such as a gun barrel or gun.
It makes the games look absolutely hideous to play so I always have it turned off. I game at 1440p.
Why would anyone want DLSS fake frames? they are obvious at low fps and not needed at high fps. I'm glad you made this video , very informative, but DLSS fake frames are useless to me and I suspect most others. Native frame rates is where its at.
If they were just generated off of an algorithm that uses only the previous frame (rather than generating two frames then generating the fake frame to fit between them) they might have something. There might be benefit once they introduce v-sync support, which could potentially only generate frames whenever there's frame drops.
Thank you, you basically confirmed both my immediate concerns when I heard how DLSS3 frame insertion works. (input lag, artifacts)
Rasterized performance is always my preference when it comes to gaming. I’m not a big fan of the sometimes gimmicky tech, but I understand it’s a necessary thing to allow for innovation and not just linear progression because as a creator, chasing that goal would become stall. Thanks for making this video and taking such a deep dive into the tech. Thorough, concise, and informative as always.
You guys should do a CPU bottleneck test for the 4090
Every single CPU on the market bottlenecks a 4090
@@bryzeer5075 yea but how hard? There is execptable level of bottleneck
4090 is a 4k/8k gpu every cpu will be a bottleneck at those resolutions so it really doesnt matter
@@fpsshani not really 4k 8k requires lot of vram than gpu computational power no matter how much cores you increase if it doesn't have enough or good vram the card will always underperformed similarly 4090 cant hit 60 fps in cyberpunk 4k rt on because of lack of vram and 4080 12 gb will be worse .
@@fpsshani the cpu bottlenecks at lower resolutions not at higher. on 8k the 4090 is still the bottleneck
With DLSS 3 I can just "sense" that when objects are passing in front of each other that things just aren't right at the edges and I don't like it. It just feels off to me all the time. I don't like it and I don't like how it's being pushed. The added latency kills it for me too.
to me, it feels like a better version of the smoothing feature tvs have for years now. where it introduces a generated frame in between and sometimes produces artifacts. I used to use that on console when I played turn-based rpg to have a 60 fps feel because it does add latency
@@hitman480channel8 I use it sometimes with RPGs especially the ones with third person cameras where you effectively spin the whole world around every time you're trying to turn
For a moment I thought did I write this comment :D But yea I agree with you. Its obvious marketing gimmick to force high framerates. I dont see whats the point of high framerates if that is not going to decrease latency. Most important point of high framerates is more responsive control.
@@jcdenton868 I disagree on this last point. Displays require more frames to increase clarity when things move. If you remember there used to be a big push for gray to gray switching speed. Now this is very fast in gaming displays and oleds. The problem is that instant switching is frames still looks choppy, unless you have more frames. So fps starts to become more important than resolution, unless your game does not usually involve motion somehow.
@@splashmaker2 Yeah sure! What im saying is I would prefer to get more frames "naturally" without latency inducing effects like DLSS3.
Thank you, I am so glad I waited for you guy's review. Ill wait to see what AMD is going to do.
Few months and they have alternative, because it is not magic, bo this is how you brainwash people to think that new GPU is 2-4x faster. They don't understand technical aspect, so...
Nvidia's AI pioneering is pretty impressive. Especially RTX Remix.
Will be neat to see where this technology takes us in 10 years.
RTX remix isn't really AI pioneering. AI pioneering seems to mostly have come out of academia and FAANG. Nvidia built fast matrix multipliers which is great. But Intel is carrying the torch now with open cross platform tools.
I'm sure, retrospectively, some of these features will be akin to how we look at Nvidia Hairworks in 2023. Lol
The disadvantages of frame generation are way too big for low to mid fps which leaves the question who is supposed to use it. The amount of people who target 120+ 4k gaming is pretty small.
So is the amount of 2k GPU buyers tho
At present this is hardware locked to NVIDIAs newest cards, and for those there is no point to be running at anything below 4k anymore. Of course as newer and newer games come out something is eventually going to push the limits, but that is not here yet.
dlss 3 is also useful for 1080p and 1440p higj refresh rate for lower tier cards as 4080 4070 and 4060. One of the bigger advantages of dlss 3 is that it improves smothness even when cpu is the bottleneck, so in lower resolutions it will be useful.
@@darkshadow6556 3090ti gets 60fps to 90 fps in most modern titles in 4K, so no you are hitting 120hz unless you dial down settings.
The people that buy a GPU for $1699+ probably have 4K monitors at 120hz lmao
I wonder if DLSS 3.0 could be implemented with a frame cap so it's only interpolates inbetween late frames so you only lose image quality when youd experience a frame drop. So you can effectively get much better minimum frame times at the expense of latency. Especially if you have a cpu bottleneck
I'm certain Nvidia only truly cares about a fat fps figure it can slap on their newest flagship $1.6K GPU. Marketing "our newest generation can generate some frames here and there" to keep a solid fps isn't as sexy as saying our new alien tech creates 300% framerate magically out of thin air. So, in essence you won't see Nvidia helping out the gamers struggling with low framerates, if that was the case, they wouldn't have forced RT down the throats of gamers when clearly it was designed to create an artificial edge over their competition while allowing some additional compute headroom in serverspace applications. In reality something like UE's Lumen is much more beneficial to gamers.
@@ufukpolat3480 That is an incredibly weird take. Nvidia definitely cares about latency, there's nothing suggesting they don't. And they're not "forcing RT down the throats of gamers", they introduced RT and are pushing for devs to implement this entirely optional feature. In the few games where RT is implemented right, it looks phenomenal.
It's also important to note that DLSS 3.0 artifacts will be much more visible with the emergence of OLED displays in the next 1-2 years. With an almost instant pixel response time you lack the natural blurring effect of an LCD display that would additionally help to hide those artifacts.
I'd be curious if you could just give us a subjective view of how DLSS 3.0 compares on OLED vs LCD screens in your opinion.
LG is releasing there in a few months 27-32 inch 1440p 240hz oled monitors. they are slated to start production in late octorber early nov and will be shown off at ces 2023 so almost 1st week of 2023, atleast according to the leaks
Could you try and make a capture video of only the fake frames (drop the regular ones)? That would nicely reveal what the technology does.
I'm not surprised someone with that username in on this video. LMAO
I'd go my own way if I was Czech too, don't worry.
Thanks for the video. Basically confirmed my suspicions on DLSS 3.0. It’s definitely something I won’t use until they improve it with the inevitable DLSS 4.0
What makes me so sad about this implementation of framerate amplification is that Nvidia did not apply any of the things they already know full well to be useful from virtual reality research.
ASW 2.0 used in Oculus and steamvr can interpolate double the frames, making 45 FPS feel like 90 FPS.
It doesn't have near the same visual drawbacks because the VR headset is fed data directly from the game engine, and latency is considered and reduced at every point in the pipeline.
Nvidia using an optical flow generator basically means that we have the same technology that exists inside of your television set and gives you the soap opera effect, but a version of that that is an order of magnitude improved.
It Just sucks knowing that this could be better, because VR has been doing a better job with frame amplification since 2014.
I don't know why they can't feed engine data in. If the optical flow generator was aware of where the camera is, and was fed data about areas with occlusion issues, half of the artifacts in these videos wouldn't even exist. It's a little insane.
Very good points.
- For UI elements the game engine could specify areas to ignore to eliminate the UI issue entirely.
- I totally agree with you . Nvidia should have simply accelerated the AWS 2.0 approach and that would have improved VR also while giving a better result for flat screen games.
How these guys don't have more subscriber's is beyond me, as always great hardware and features review
To put it bluntly all they do is hardware comparisons and reviews at a really high quality, usually with little fluff humor unlike Canucks and such (all the better imo)
There's a reason unboxing channels and LTT have more subs and views, it's because it's not just tech, it's technology with a spin
Like that gameshow LTT did, the tech was a sidepiece. Not to diss HU, they are amazing and better than GN but that's just how it is.
Maybe corporations can pay TH-cam to reduce interest in certain channels. I don't see why that wouldn't be an option.
I don't think it's a complex answer, or requires a conspiracy. The vast majority of people want their tech information with a side of entertainment. The presentations on this channel are no bullshit, with a constant stream of facts. This type of presentation just doesn't appeal to the same number of people.
This was an excellent review of DLSS 3. Tim explained the reality and facts about adopting this early technology. He provided his opinion and concerns about what Nvidia hasn't made clear, which is fantastic, and will help people avoid the pitfalls of using this technology incorrectly given their needs and expectations. Of course it is early days and we will have to see how developments improve this technology - I for one am excited about it and still feel very positive about DLSS 3 after this review. People can just mindlessly believe all the Nvidia marketing hype, but I'd rather know what is realistic and useful, so I can adjust my expectations and get the best out of what is currently working well, while keeping an eye out for fixes/updates that improve its limitations. Thanks Tim, informative and insightful as always!
The biggest problem with generated frames is that it will take away some of the "feel" even if the game is smooth.
This is particularly noticeable in fast paced racing games ( like Rally games ) and fast shooters, it just feels unnatural, regardless of how Nvidia is inserting frames, you will notice that effect as they are not real frames.
Probably not that important in slower games, however the "small flickering" in F1 reminds me of memory corruption on a GPU, would totally stick out for me at least, it's nothing you get used to.
This has been my argument against DLSS since it launched. It is nice as long as it is perfect, but it is distractive in competitive settings and downright immersion breaking where it fails. Plus the "best in slow games" argument is a bad one imho, since most slow games could just be run at native resolution instead.
@@andreewert6576 No, Hitman is a slow game and I recently had a one-off issue with it running at a low frame rate, nothing about that low rate was good. 'Slow' games still benefit from a higher frame rate.
Gents... absolutely amazing data and analysis of DLSS 3.0!
That was top notch stuff. Much appreciated.
Looks like NVIDIA is trying mostly to fake the frames. Looks like I will be leaning on getting an AMD card now when it comes up in few weeks.
Seems like a catch 22: DLSS 3 works best at very high frame rates, but it would be most useful at low frame rates.
that is not what a catch 22 is
that's ironic
This is exactly why I always say "wait for the reviews". Excellent work from HUB as usual.
And the reviews all say at least 2x without DLSS and 4x with DLSS and RT vs. The 3090ti and runs much cooler and uses about the same power
@@bryzeer5075 shill.
@@bryzeer5075 I'm talking about DLSS3. What this review shows is that for the most part it should be ignored in your purchasing decision. Sites like PC Gamer who aren't able to look at it objectively were calling it black magic and dark wizardry. F that. It's an extremely niche feature based upon hiding lesser quality frames among normal quality frames.
@@campersruincod6134 Can't handle facts or what?
Aha.... and then buy an AMD Card, right?? 🤣
absolutely enjoying this detailed look at DLSS in the real world, thanks!
Great review of DLSS3, never shying away from saying it as it is. Keep up the awesome work.
I really, really hope we don't lose DLSS 2, it's so awesome.
we wont dlss3 is working on the base of dlss2 so dlss3 titles will have 2 available
@@duevi4916 It's called DLSS Super Resolution now.
@@grantusthighs9417 DLSSSR because a 4 letter acronym wasn't long enough.
In the end, what NVIDIA did was putting motion interpolation techniques present in TVs since 15 years into the rendering path of the GPU. They did apparently not solve the issues coming with these techniques, which is always when the algorithm needs to interpolate content which has been occluded / revealed between frames. Not really impressed, and far from being a "game changer" this technique has been labeled.
Irrespective of how good or bad frame insertion is, it shouldn't be a part of DLSS as there is no upscaling or super sampling happening here.
not only that but it is not rendering "actual" game play ... only an interpretation of what game play might be.. is has also been called "padding" the stats to make sales
@@surrealkiller7828 it’s not perfect rn, but as long as it looks like normal. I’m fine with it
Great review though in your excellent latency analysis graphs I would have liked to see more cases of reflex off. 'Reflex On' does reduce latency by nearly 40% which is significant. I know you were trying to isolate the effect of frame generation, but in my view there should be more comparisons between DLSS frame generation and settings where reflex is off. Because that is what gamers have been used to until now, and it is arguably new tech added in conjunction with DLSS 3. More DLSS 3 supported games means more games with reflex, which is a win for gamers even if they don't choose to enable frame generation.
Nvidia forces reflex on when dlss 3 is turned on. There might be a way to turn it off but I don’t think it’s a thing yet.
Great analysis HU! Your explanations and examples were really easy to understand and see. Seems like DLSS 3 is just smoke and mirrors for now.
The first time I heard about DLSS 3 I was already worried about the input latency, because usually the latency is much more noticeable then the frame rate it self, which - with DLSS 3 enabled - leads to this weird feeling of more frames but sluggish input that was explained in this video. But apparently DLSS 3 is even worse then I initially thought, having noticeable artifacts and even higher input latency. Thanks for the great video!
using fg on spiderman. its amazing.
@@jergernice1 That's one of the games where the issues are really noticeable..
Can't find the video but I saw a blind test and no body could tell the difference between native / DLSS 2 or DLSS 3 - I bet you couldn't too if you didn't know whether it's on or not... :)
@@ErrrorWayz DLSS 2, maybe. DLSS 3, no way.
10ms more latency is unnoticeable, but 100% more framerate is really noticeable. DLSS is a really powerful tech. In FS2020 it's incredible.
Really interesting stuff. Based on the increased latency I’m guessing they’re using two frames and interpolating between the two. At first I guessed they would only use previous frames to create the interpolated image and some fancy algorithm to guess when to display the interpolated frame.
Yes that's exactly what it is doing. And explains why it increases the latency. It renders frame 1, then 2. But instead of showing frame 2 when it is ready, it interpolates a frame 1.5 and shows that first. It increases the render latency by 50%, not considering other system latencies. In the graphs shown the total latency got worse by 25-35%.
great content as usual. thanks for the deep dive on DLSS 3!
How is nobody talking about the knobs and shafts in the cockpit joke? :D
Anyway, awesome video, thanks! Another approach you might try for reducing 120fps to 60fps is frame averaging (basically cheap 2-sample motion blur)
Cockpit...smaller knobs.... bigger knobs.... even a few large shafts. LOL! well played.
NVIDIA using dodgy rendering to try and keep pace? Guess history does repeat itself.
Gaysex
History repeats itself. 4090 blows away all current gen parts in raster and has no competition in RT.
Giant leap in production workloads.
It's the successor to the GTX 1080Ti.
@@Flex-cx7uj 4x performance in raster. 10x performance in applications like Blender.
@@DeepteshLovesTECH I paid £620 for my 1080ti it was stella value I'm not even thinking of dropping £1600-£1700 on a 4090
Keep pace with what?
4090 blows everything out of the water in every aspect, including traditional raster.
Thanks for laying NVidia's DLSS 3.0 bare naked for us consumers, Tim! Thanks to your hard work, we can see through the smoke and mirrors, so we can make an informed opinion on the technology.
Thank you E5rael, your support is much appreciated.
DLSS 3 on a monitor appears analogous to Motion Smoothing (aka Motion Compensation, or ASW) on an HMD in VR. It extrapolates extra frames in order to increase perceived FPS without much CPU/GPU loading cost. But in VR the artifacts are much more noticeable for most people, and therefore I leave it turned off usually.
Yeah its fairly sickening there, smears everything in my experience, better to just get the raw frames.
Great analysis, thank you. And yes, Nvidia made triple and quadruple lies about FFDLSS 0.1 aka FakeFrameDLSS beta version 0.1.
Thanks Tim for all the time and effort you put into this detailed video! Cheers! 🍻
It’s a fancy motion interpolation, improving temporal resolution to something that isn’t pre-rendered is either guesswork or will require everything to run behind.. I imagine the generated frame here is using a partially rendered future frame.
If all it does is push already high frame rates faster it’s hard to see the benefit beyond looking good on charts.
Whereas dlss 2 is a bit like reverse MSAA this is quite different
Knew it was smoke and mirrors, awesome investigation Tim! Your level of detail and reviews keep me coming back time and time again. I can't tell you how many times I have recommended people to view your monitor reviews. Thank you for the insite into dlss3, I have no wish to introduce more latency into my games (not that I was going to buy a 40 series anyways). It's nice to see honest reviews.
Smoke and mirrors is a bit harsh, the tech genuinely provides some benefits and it's the first iteration of frame generation.
@@Maseinface let's call it what it is , fake frames that offer no performance benefits what so ever.... what is the point then... you want more FPS so you increase your performance that's the whole point. This offers fake frames with a performance degradation... hence smoke and mirrors.
Conclusion: you need to already be at 120 fps to be able to use DLSS3 without drawbacks. But if you're not a pro gamer, you almost never need to be above 120 fps. The difference between 120 and 160 or even 200 fps is really small (to me anyway). It's really diminishing returns at this point. So DLSS3 is not all that useful.
In cyberpunk if it brings you from 80 to 120 fps and the resulting lag and artifacts are acceptable, it could be a good use case, as that is an FPS increase that makes a difference. But that's basically only with 4090s and only in a few select situations.
RT and DLSS2 made tremendous improvements from the 20 series (when they were clearly gimmicky) to the 30 series. I guess DLSS3 will really shine with the 50 series.
For Nvidia though, the battle is already won. They innovate, they bring new technologies and occupy the discussions whilst AMD is still busy trying to follow in rasterization alone (they still don't have a gpu matching the 4090 on this), thus reinforcing the large perception that Nvidia is THE GPU company and AMD is more of a market filler.
The fps difference is massive lol, 165 to 240fps is bigger then 60 to 165 lmao
Think it really would have also helped if they didn't call it DLSS 3 especially when DF said you can use it without DLSS on. It's it's own toggle in the games anyways so it should have had it's own marketing name instead of tarnishing the DLSS name. Especially weird as it isn't even super sampling, it's not an antialiasing technique so I'm just baffled why they call it that.
In my opinion it's just a really experimental motion blur.
who cares for names eh? we know the difference.
@@PrefoX we do, doesn't mean most people will. Plus usually having an acronym that means something can marginally make it easier to explain. I might not know MSAA means, but when they say "multi sample antialiasing" you now know it takes multiple of something for antialising. But DLSS 3 has nothing to do with DLSS, in fact Nvidia constantly labels it as "frame generation instead which just makes it more confusing as in some games it's literally called just frame generation making it even more confusing to people.
@@kejarebi I mean that's basically what DLSS ultra performance mode is. But he did show it is about like lowering DLSS from quality to performance mode so I guess it's similar.
I suppose one question to ask is whether it's Deep Learning based. If it is, then call it DLFG (Deep learning frame generation). Then it sounds like a good complement to the existing tech and shows both how it's related and how it's different well. If it isn't deep learning based then the naming is even worse!
I did not expect UI to be part of the frame interpolation. If the game has to support it, it could also work together with DLSS3 to create the new frame and then paint over the UI.
Unfortunately the frame generation happens solely on the GPU side.
I think it can be solved it but the developer has to do code it in manually. It's like enabling SR from AMD control panel vs in-game FSR. I suspect the developers probably just hacked this in quickly for the launch demo and that's why it's interpolating the entire game.
@@monotoneone When drawing frames, the GPU must know some information about which elements are UI, as it won't cast shadows etc. based on them. It would need to interpolate everything, but could it interpolate two layers and then composite them together?
Neural net derived imagery in general does a poor job with text generation, unless specifically trained on that. Nvidia could thus update DLS3.0 with results from text-specific training data and improve the result that way if it is not possible to split the rendering pathways for scene and text.
@@Bayonet1809 I don't know what they use, but Avisynth had this plugin called mvtools, almost 20 years old, and it can do the same thing with only motion vectors. See a few examples I made fixing badly produced scenes from the series called Lexx, th-cam.com/video/uz0waWcsEzU/w-d-xo.html, th-cam.com/video/NhAoEesDF0k/w-d-xo.html. In the top right corner the missing frames were regenerated, similar to DLSS3's frame interpolation. Ignore that the source is interlaced, there are several missing frames in the source, even if you look at individual fields.
I really wonder how many people "really saw" these "fake frames". Cause when I've tried DLLS 3 on Cyberpunk, I genuinely couldn't tell the difference and I'm pretty sure 99% or the other human beings around here didn't either. Funny how everyone starts to see something as soon as they're being told that there is something to see
I find the issues really hard to spot too (except for some of them shown in this video, like the flickering UI elements). I probably won't ever notice these while playing myself. It is still interesting to know.
I wonder if I'm able to tell the differences concerning input latency once i get my rtx 4070 delivered.