DLSS 3 Explained: How NVIDIA's RTX-4090 Uses AI to Increase Frame Rates

  • Published 31 May 2024
  • Support AI and Games on Patreon:
    / ai_and_games
    --
    With the impending release of the new 4000 series of @NVIDIAGeForce GPUs, NVIDIA has unveiled the latest version of their super sampling technology, DLSS 3.
    The folks over at @NVIDIA reached out to offer me a chance to not only try out the RTX-4090 GPU but also to read up on the DLSS 3 technology and get my questions answered on how it all works. It improves on DLSS 2 with new AI frame generation techniques that can nearly double a game's frame rate while maintaining full 4K high-fidelity graphics. This is achieved by using a frame interpolation technique that calculates intermediate frames between those received from the game engine.
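
    To make the idea concrete, here is a minimal, illustrative Python sketch of optical-flow-based frame interpolation, the general family of techniques the description above refers to. It is emphatically not NVIDIA's implementation: the function names (warp_half, interpolate), the nearest-neighbour warp, and the 50/50 blend are all simplifying assumptions.

        import numpy as np

        def warp_half(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
            """Push each pixel halfway along its motion vector (nearest-neighbour splat)."""
            h, w = frame.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            # flow[..., 0] is per-pixel x motion, flow[..., 1] is y motion,
            # measured between the two engine frames.
            x2 = np.clip((xs + 0.5 * flow[..., 0]).astype(int), 0, w - 1)
            y2 = np.clip((ys + 0.5 * flow[..., 1]).astype(int), 0, h - 1)
            out = np.zeros_like(frame)
            out[y2, x2] = frame[ys, xs]
            return out

        def interpolate(frame_a: np.ndarray, frame_b: np.ndarray, flow_ab: np.ndarray) -> np.ndarray:
            """Guess the frame halfway between A and B by blending a forward-warped A with B."""
            mid = 0.5 * warp_half(frame_a, flow_ab).astype(float) + 0.5 * frame_b.astype(float)
            return mid.astype(frame_a.dtype)
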
    Special thanks to the folks at NVIDIA's UK team for their support in making this video happen. I hope you all enjoy it.
    Chapters
    [00:00] Intro
    [01:35] DLSS Basics
    [04:19] What DLSS 3 Does
    [06:32] How DLSS 3 Actually Works
    [13:36] NVIDIA Reflex
    [15:22] Closing
    --
    If you want to learn more about NVIDIA's DLSS tech, be sure to check out my previous video on the subject from 2021:
    • AI in Your Graphics Ca...
    --
    AI and Games is a TH-cam series on research and applications of Artificial Intelligence in video games. It's supported by, and wouldn't be possible without, the wonderful people who back it via Patreon and TH-cam memberships.
    / ai_and_games
    / @aiandgames
    --
    You can follow AI and Games on Twitter:
    / aiandgames
    Join me for live streams over on Twitch:
    / aiandgames
    Join our Discord Community:
    bit.ly/AIandGamesDiscord
    #nvidia #dlss #rtx4090
  • Games

Comments • 113

  • @AIandGames  1 year ago +20

    Special thanks to the team at NVIDIA for giving me a chance to not only try out the RTX-4090, but also for letting me read up on how DLSS 3 works, *and* answering all of my niggling little questions afterwards! I hope you found it interesting to learn a little bit more about how this technology works. It's really impressive and feels great to play. I'm keen to try it out on even more games as the feature rolls out.
    In the meantime, a new episode of AI and Games is coming soon - with something extra weird coming up next! Plus, if you haven't seen it already, be sure to catch that older video on DLSS 1 and 2 that I made back in 2021.
    th-cam.com/video/ccuh5MNiLHE/w-d-xo.html

    • @isaacphoenix9200  1 year ago +2

      I work with A.I. in narrative so I am always happy to learn more in this space.

    • @AIandGames  1 year ago +2

      @@isaacphoenix9200 Oh well I think you're really gonna enjoy the next episode

    • @isaacphoenix9200  1 year ago +2

      @@AIandGames I enjoy all of your videos. I'm on the art side of games so coming here for the technical side in a way I understand is amazing. However, I am so excited now. I'll also have to keep you updated as I progress through my system.

  • @synthetic240  1 year ago +14

    Maybe it's just me, but I did notice some slight warping around some of the straight edges in the Cyberpunk footage (and not just from AA artefacts). But yeah, looks pretty amazing from my non-4K screen lol.

    • @AIandGames  1 year ago +11

      As mentioned in another thread, Cyberpunk did have some issues at times using the DLSS.
      Though I will say the biggest problem in playing Cyberpunk was figuring out which rendering issues were from DLSS and which were from the game itself. It's still pretty glitchy at times. 😅

  • @GoatmanSam  1 year ago +14

    Excellent. Love the way you break things down, Tommy

  • @CyberBlaed  1 year ago

    Sweet video! Something other than the usual deep dive into the card! :D Brilliant work!

  • @floatingpoints1507  1 year ago +2

    Fascinating and pretty cool to know how it all works. Thanks!

  • @theblowupdollsmusic  1 year ago +2

    Fantastic breakdown of DLSS3. Thank you for taking the time to make this easily digestible content. I hope DLSS3 technology makes it into future Nintendo products.

  • @justintime5021  1 year ago

    Thanks for this video. It was extremely well made and easy to follow. I learned a ton

  • @rogeriojunior9459  1 year ago +2

    These motion vectors and optical flow fields remind me of calculus 3

  • @i_love_hitagi  1 year ago +1

    Thank You for the hard work! ^o^

  • @dreddeus  1 year ago

    Great vid, mate!

  • @fuseinabowl  1 year ago +2

    Thanks for the clear explanation! This looks great for the games in the video like Plague Tale and Flight Sim, but I was wondering how applicable it is to FPSes.
    If the DLSS needs the new frame to generate the in-between frame, then it must add an extra half-native-frametime of input lag, and the perceived input lag would have an extra frame of variability. Is that right? DLSS is clearly great for the games you showed in the video; it'll look way smoother, and the weighty controls won't highlight any extended input lag. But did you notice any DLSS-specific input lag issues with the first-person shooter parts of Cyberpunk?

    • @AIandGames  1 year ago +2

      Yeah, that's the crux of it, isn't it? Given it needs to have rendered the two frames in advance, DLSS 3 calculates the intermediate while it shows the first frame to the user.
      I can't say I noticed any input lag on any of the games I tried it on. One demo I did run that I didn't show is the UE5 Lyra demo (which is a third-person online shooter), and that worked fine. Though I decided not to use the footage, as the AI bots were pretty OP and I struggled to stay alive for more than 5 seconds.
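
      For anyone wanting to put rough numbers on that question, here is a tiny back-of-envelope Python sketch. The 60 fps figure and the assumption that the newer engine frame is buffered for half a native frametime are illustrative guesses, not NVIDIA's published numbers.

          native_fps = 60
          native_frametime_ms = 1000 / native_fps      # ~16.7 ms between engine frames

          # If engine frame N+1 must wait while the generated frame is shown,
          # its on-screen arrival slips by about half the native frametime:
          added_latency_ms = native_frametime_ms / 2   # ~8.3 ms
          output_fps = native_fps * 2                  # interpolation doubles the frame rate

          print(f"~{added_latency_ms:.1f} ms extra latency for {output_fps} fps output")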

    • @fuseinabowl  1 year ago

      @@AIandGames Haha, yes I can imagine! Thanks, that's good to know

  • @GordonGordon  1 year ago

    Nice explainer!

  • @skupipup  1 year ago +1

    Dear Santa, I only have one wish for this Christmas. Please bring peace on Earth. Should that not be possible I'd settle for a 4090.

    • @OhNoBsod  28 days ago

      I ask a year later: did you get the 4090?

  • @Xxnightwalk1  1 year ago

    Awesome video, as usual :)
    Burned my brain again, I've got to go back to Uni I guess X)

  • @urlauber2884  1 year ago

    I may have understood it wrong, but won't there be a good amount of additional latency? If DLSS AI-renders a frame in between two already engine-rendered frames, you have to delay the second engine frame to show the AI one first, right?

    • @AIandGames  1 year ago

      Not really, no. If we assume the second engine frame would have shown the input from the user, then the intermediate frame will begin to show it as well, given it's figuring out what the frame looked like in between. In theory, you should actually begin to see the response one frame earlier, albeit not entirely. Plus don't forget that this is all still happening within a unit of time (i.e. one second), so in theory you should be seeing it slightly faster.

    • @urlauber2884  1 year ago

      @@AIandGames That's precisely my problem:
      'it's figuring out what the frame looked like in between'.
      So it generates the in-between after the second frame is already made, so why not show the newest frame, i.e. the second one, instead of the in-between?

    • @Rabid_Mushroom  1 year ago +2

      @@urlauber2884 As far as I am aware, "in between" is a somewhat inaccurate way to describe what's happening. The technique isn't waiting for a new frame to come out before it makes an in-between stage; it's looking at previous frames to guess at what comes next, then putting its guess on your screen while the game is still working on making the next "real" frame

    • @urlauber2884  1 year ago +3

      @@Rabid_Mushroom That would make a lot more sense, but it's not how it's described in the video 🤷‍♂️.

    • @TheInevitableHulk  1 year ago

      @@urlauber2884 It's only explained once in the video, but what's actually going on is that it takes the last two uprezzed real game-engine frames (the previous classically rendered frame and the just-completed current frame as ground truth), plus the motion vectors provided by the game engine (used in DLSS 2 to create more accurate upscaling, now reused to create a more accurate flow field), and feeds them into another special AI to guess the future frame.
      This is essentially knowing two positions in time to guess a third, future position: 2, 4... 6?
      [Though more accurately, you're assuming the next real frame is 6, so it's actually more like:
      2, (3), 4, (5)
      Where 2 and 4 are the last real frames, 3 is the last guessed frame, and 5 is the current guessed frame based on 2 and 4.]

  • @Sh4un1r1k  1 year ago +3

    I wonder how the 4060 and 4070 will perform with DLSS 3 with fewer OFA cores (probably matching the numbers of the 3070 and 80 tiers)

  • @WunderWulfe  1 year ago +2

    So does this mean that if your input lands before the generation of an AI frame, you will always have a one-frame delay before visual feedback?

    • @AIandGames  1 year ago

      In theory, yes? Assuming the time taken from input to visual feedback is two frames, then the DLSS is interpolating an extra frame in between.
      But then you would have had to wait for that second engine frame for that visual feedback anyway.
      This, I suspect, is where NVIDIA Reflex comes in handy.

    • @WunderWulfe  1 year ago

      @@AIandGames So if I were to shoot while a real frame was rendering, it wouldn't actually fire until the next real frame? Or would Reflex take user input and predict the shooting effect?

    • @AIandGames  1 year ago

      @@WunderWulfe So this is what Reflex is trying to do: reduce that latency as much as possible. I'd argue that for a lot of games there is already going to be a frame or two of delay for that input to be processed and output, due to the render queue for the GPU being a couple of frames long. So with Reflex the idea is to shorten that, so you see the game react faster.
      But all that said, the important thing is the game was *always* going to show you that input feedback on frame X, given that's when the game generated it. Given we're increasing frames per second, it's not going to change how soon that feedback is presented. It's just adding an extra frame in between, and the frame interpolation is going to spot what changed between engine frames and try to present that better.
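
      As a rough model of that render-queue point: input-to-photon delay scales with how many frames sit in the queue times the frametime. A minimal Python sketch, with queue depths that are assumptions for illustration rather than measured values:

          def input_latency_ms(fps: float, queued_frames: float) -> float:
              """Approximate input lag contributed by the GPU render queue."""
              return queued_frames * 1000 / fps

          print(input_latency_ms(60, 3))  # ~50 ms with a 3-frame queue
          print(input_latency_ms(60, 1))  # ~17 ms if Reflex trims the queue to 1 frame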

  • @DouglasHollingsworth1  1 year ago +4

    Loved this - I'd love it if you did something like this but covered all those options in your methodical approach and voice (vsync, ray tracing, etc.).
    Not gonna lie, I haven't had to bother much with PC graphics settings because I only buy 1080p60 monitors, set my games to that resolution, lock in the frame rate with vsync, then lower graphics settings until the frame rate never dips below 55 fps... but I'm likely due for a new chassis and motherboard after 6 years, so it would be nice to learn more so I can make informed decisions on new components.

    • @AIandGames  1 year ago +3

      I'm tempted to put out a video on my second channel of me running things like Cyberpunk and Flight Simulator and playing around with the settings.

    • @DouglasHollingsworth1  1 year ago

      @@AIandGames That would be very cool, because Cyberpunk is why I'm upgrading hahah

  • @ToxicXelta  1 year ago

    So what is better for image quality: frame generation (DLSS 3) or DLSS Quality?
    Got to say, being able to run A Plague Tale: Requiem at 4K maxed out with either DLSS 3 or DLSS Quality at 80+ fps and below 65 degrees is glorious on my 4080.
    My 3080 used to run at 72 degrees +, capped at 60 on most games.

  • @RedSaint83  9 months ago

    I suppose it was just a matter of time before frame interpolation came to gaming. I've been using Smooth Video Project for a long while, and I know about the artifacts that can happen when just pure math is used for guessing the frames in between. I hope onboard tech can improve the quality for this as well.

    • @RedSaint83  9 months ago

      lol, maybe it's just a matter of time before graphics cards essentially "dream up" the game for us based on instructions, like a puppet play. We can only hope, because games are getting close to hitting the 200 GB mark after all. Imagine a Farbrausch-like program (fr-08: .the .product, 64 KB, my favorite), tiny in comparison to today's games, that is pretty much just AI looking at a blueprint the game devs encoded with AI on the other end, and that your graphics card is then able to build from scratch.
      It wouldn't be a dumb tech if it meant less bandwidth used overall.

  • @FastFSharp  1 year ago

    Love this content!

  • @JohnKuhles1966  1 year ago +1

    DLSS 3 + A.I. + Unreal Engine 5 Games = One Super Big Step towards People Plugged in to "The Matrix" For Real!

  • @SolidReader  1 year ago +4

    What I actually want to know is whether DLSS 3 has artifacts or ghosting. And whether the 4070 and 4060 will have DLSS 3 that's just as good.

    • @AIandGames  1 year ago +5

      Playing through the games available to me, I spotted some slight artefacts a couple of times in Cyberpunk. Typically in areas with multiple light sources and reflective surfaces. That said, it was quite minimal and didn't detract from my experience all that much.

  • @filiplaskovski9993  9 months ago

    I gotta say I'm impressed with the frame generation!!

  • @mohammedosman4902  1 year ago +2

    Any chance DLSS 3.0 can be done by the 30 series?

    • @seamon9732  1 year ago +4

      Nvidia said: "No, fork up the $$$ for the 40 series, peasant."
      Paraphrasing here^^

    • @joey199412  1 year ago +5

      Too few OFA cores in the 30 series for DLSS 3 to work in real time. That's the real reason.

    • @AIandGames  1 year ago +1

      @@joey199412 That's what I was told by the NVIDIA team. Though I'm waiting for someone to prove them wrong. :D

  • @SmoothGaming63  1 year ago

    Will it make stable 1% lows?

  • @OscarRiba  1 year ago +2

    Aw yiss, thanks

  • @carlosnumbertwo  5 months ago

    We are no longer paying for raster performance. We are paying for this AI wizardry! :D

  • @Keepit10011  1 year ago

    So if I wiggle the thumbstick, making the camera go crazy, will DLSS 3 break? 😂

  • @vegitoblue2187  1 year ago +3

    All of this with 1000W power demands. Makes me wonder what NVIDIA plans to do with future models increasing power demands like crazy.

    • @AIandGames  1 year ago +1

      Yeah so from the press documentation, the 4090 needs an 850W PSU at MINIMUM. I'm going to be eating beans for a month to make up for testing this thing.

    • @vegitoblue2187  1 year ago +1

      @@AIandGames I won't be surprised if we need a mini nuclear reactor to keep the future gens powered. Plus the costs are absurdly high.

    • @sanyanders  1 year ago

      Gaming will become less and less eco-friendly with each generation, and that worries me. Visual experience is easier to sell than interactivity, even if both can give the same experience, and NVIDIA gotta sell hardware >_

  • @brillvideogames  1 year ago +6

    One day we won't even need GPU cores for rendering, just an AI that generates a game for us. The traditional renderer will get smaller and smaller over time, while the AI component of the GPU takes on more of the workload.

    • @shashank1630  1 year ago

      That’s not how that works…

    • @starofeden9277  1 year ago

      @@shashank1630 Why do you say this? I think he is correct. Matter of fact, if you watched the 4000-series announcement by NVIDIA, the reveal video, you would have seen how they are working on a global AI system integration network. It's really happening: a massive network where games and real life meet, where companies use it to generate work and machines do the work. It's scary; it looks like Skynet in the making, and NVIDIA and other companies are developing it. It has already started. AI is taking over. It's like a real-life sim: it's a game, and it's companies doing real-life work in it. It will probably be the biggest next VR experience.

    • @KapinKrunch  10 days ago

      😂😂😂😂😂

  • @userDFboeing  1 year ago

    It is incredibly complex.

  • @Petch85  1 year ago +1

    In other places they say the AI frame is between the two GPU frames. This would increase the latency, and that is also what they see when using frame generation? Thus I am confused.

    • @AIandGames  1 year ago +1

      Yes, every other frame is made by the AI. Hence the need for NVIDIA Reflex to minimise the latency. DLSS 3 games have to implement Reflex in order for it to work.

    • @Petch85  1 year ago +3

      @@AIandGames Yes... I get that part.
      What I am trying to ask is if the AI frames are extrapolated or interpolated from the "GPU frames".
      In the video at 6:16 it sounds like it is extrapolating the next frame, but at 12:15 it sounds like interpolation.
      In other videos I have seen, it looks like the AI frame is made from the GPU frame before and the GPU frame after the AI frame (interpolated), and the frames are then buffered, resulting in more latency.
      I find this hard to explain with text only. Maybe this will help.
      GPU frame = GF
      AI frame = AF
      frame shown on screen = 0 (-1 the frame before, +1 the frame after)
      Extrapolating: GF-2, AF-1, GF 0, AF+1 (AF+1 is made using data from GF-2 and GF-1)
      Interpolating: GF 0, AF+1, GF+2 (AF+1 is made using data from GF 0 and GF+2. GF+2 is the newest rendered frame, but it is still in the buffer. GF 0 is shown while creating AF+1)
      I do not know if this makes sense to anybody.
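
      Petch85's orderings can be made concrete with a tiny Python sketch of the interpolating case, the one that implies buffering. The GF/AF naming follows the comment above; everything else is an illustrative assumption:

          def interpolated_display_order(engine_frames):
              """Each AI frame (AF) is built from the engine frame (GF) on either
              side of it, so the newer GF waits in a buffer while the AF is shown."""
              shown = []
              for older, newer in zip(engine_frames, engine_frames[1:]):
                  shown += [older, f"AF({older},{newer})"]  # newer GF is buffered here
              return shown + [engine_frames[-1]]

          print(interpolated_display_order(["GF0", "GF2", "GF4"]))
          # ['GF0', 'AF(GF0,GF2)', 'GF2', 'AF(GF2,GF4)', 'GF4']
          # GF2 reaches the screen one generated frame later than it would natively,
          # which is exactly the extra latency being asked about.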

    • @NigelTolley  1 year ago

      No, it doesn't increase latency. Imagine you could react instantly to a frame. You're playing at 40fps, and the change happens at frame 4 of that second. That's as fast as it would be possible to react, obviously. Nothing in 1, 2, 3, change in 4.
      Now, the GPU does some massive magic and you get 80fps. You still react at frame 4, but you got to see frames 1, 1.5, 2, 2.5, 3, 3.5 before you saw 4, which happened at the same time it would have without the half frames.
      Also, 0.2 seconds, or 8+ frames, is the lower limit of reaction time to unexpected events.

    • @Petch85  1 year ago

      @@NigelTolley Sorry, I cannot understand your explanation. To me that just looks like having the GPU run at double the frame rate, thus halving the GPU part of the latency.
      I have the latency part from this video:
      th-cam.com/video/kWGQ432O3Z4/w-d-xo.html (Optimum Tech - Beast Mode - NVIDIA RTX 4090 Review, about 6 min in)

  • @jeanultra7939  1 year ago

    Atrioc reference!!!!

  • @AgentOffice  1 year ago

    Does it make mistakes?

  • @AgentOffice  1 year ago

    Hate when vsync is off

  • @caio5987  1 year ago

    It’s funny how all of a sudden the 30 series cards sound like crap 😂

    • @AIandGames  1 year ago +1

      So I *just* upgraded to a 30-series not that long ago and it's still an amazing card. Had DOOM Eternal running full 4K with ray tracing and it looked gorgeous.

  • @zeyogoat  1 year ago

    Great video as always, but these features are kind of depressing - only available on new (thousand-dollar, thousand-watt) cards, must be developed/trained for by game designers, etc. Maybe in-engine rendering has reached a plateau, but will this discourage non-DLSS advances? I wonder what a more critical take on NVIDIA's latest tech would sound like. Aside from kids whining on Reddit.
    Also, just can't be happy w/ EVGA dipping out, shit is a bad omen man smh

    • @AIandGames  1 year ago

      I think traditional rendering is going to continue to see improvements, albeit at a slower, more gradual pace. As cool as any neural rendering technique is, it's still unreliable given it doesn't rely on the engine data. So the traditional rendering pipeline is a bottleneck that merits further work.

  • @AgentOffice  1 year ago

    Fun

  • @tbird81  1 year ago

    Why would they even use Cyberpunk as an example?!

    • @AIandGames  1 year ago +3

      The simple answer is that a lot of the games that will ship with DLSS 3 aren't ready to show yet. Hence the reliance on older games. Opinions aside, Cyberpunk is a good tech demo for showcasing this, given it can't run at smooth native 4K without DLSS.
      You might have clocked that I showed footage of A Plague Tale: Requiem, which isn't out for another week or so. It's the only pre-release game the PR team provided. I didn't show much of that game, as I was limited in what I can show at this stage; it has an additional press embargo restriction on it. And I don't wanna spoil that for anyone excited to play it.
      I suspect they'll be plugging how this looks for new games in the coming weeks.

    • @LutraLovegood  1 year ago

      @@AIandGames Oh, it's the new Plague Tale? Looking nice.

  • @user-bt7pu7qs2f  1 year ago

    Makes me so sad I got a 3080 ti last year

  • @choo_choo_  1 year ago +2

    The thing I don't like about these comparisons of native 4K and DLSS 3 4K is that the native is performing considerably worse. I mean, I get that's the point, to show off DLSS 3, but if we're showing the difference between native and upscaled on the SAME GPU (i.e. let's say a 4080), doesn't that just mean that the GPU itself is still trash at rendering native 4K?
    If that's the case, and the only improvement is more or less something software-based (let's be honest, DLSS 3 could definitely run on the 30 series, but they're locking it behind the 40 to get money), what exactly are people paying for? It certainly isn't power. Well, I guess they are paying for power, but not how they imagine, considering how much energy it sucks out of the wall.
    So again, why show native vs DLSS and then show your GPU in both a positive and a negative light? It just seems counterintuitive.
    "Our GPUs aren't any better, but we gussied up the image a little bit with made-up frames. Most people can't tell the difference, and neither will you, Joe Consumer. Now shell out that hard-earned $800."
    Don't get me wrong, the tech is novel and impressive, but as many people often point out, NVIDIA is slowly backing themselves up against a wall with each generation. Improvements are marginal at best, but with skyrocketing power demands.

  • @TheRumpletiltskin  1 year ago +3

    I think the DLSS 3 "guessing" for frame rates is OK for flight sims / single-player games, but for twitch shooters or MOBAs, "guessing" what comes next just adds more inconsistency, like lag does.

    • @cluckendip  1 year ago +2

      We're talking about an incredibly minuscule fraction of a second difference here. For a lot of players, being able to play at a higher resolution allows them to play better than before, even with that latency hit. It's up to the player to try the option out for themselves for each game to see if it works for them.

    • @TheRumpletiltskin  1 year ago

      @@cluckendip The main issue is that game makers already use "prediction" models to clean up online play, and for spaghetti-code games like League, that can cause issues where you think the opponent is at point A when they are at point C, and this gets cleared up when the next server cycle hits. Let's say the game's prediction is wrong; that causes the graphical prediction to be wrong. This lengthens the time you're getting incorrect information, and with the game types I'm talking about, those fractions of a second matter.

    • @LutraLovegood  1 year ago +5

      The frame generation isn't guessing what's coming next (extrapolating); it's interpolating what's in between two frames, and the higher the original frame rate, the more accurate said interpolation will be.

    • @cluckendip  1 year ago +1

      @@LutraLovegood Ah, I thought he was talking about the time it takes, but you're right - the generated images are pretty faithful and more accurate than other means. It obviously can't beat the image detail of the real frames, but it's not as much of a handicap for competitive players as one might think

    • @AIandGames  1 year ago +3

      Yeah, as stated above, it's adding intermediate frames (interpolation), so it always knows what the next frame looks like. Sorry that wasn't clear.
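
      One way to see why the base frame rate matters, as LutraLovegood notes above: the motion an interpolated frame has to bridge shrinks as the engine frame rate rises. A quick Python illustration, where the on-screen object speed is an assumed example value:

          speed_px_per_s = 1200  # assumed speed of a fast-moving on-screen object

          for fps in (30, 60, 120):
              gap_px = speed_px_per_s / fps  # displacement between consecutive engine frames
              print(f"{fps:3d} fps engine output -> {gap_px:4.0f} px to bridge per generated frame")
          # Smaller gaps leave less for the flow field to guess, so fewer artefacts.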

  • @anteshell  1 year ago +1

    This looks great, but it's still in its infancy. There is a lot of temporal incoherence where the AI fails to predict correctly. However, I cannot blame Nvidia for it, because nobody else has managed to solve that problem either.

    • @AIandGames  1 year ago +3

      I felt it performed better than any of the original DLSS games I've played against. Not as many artefacts or glitched frames. It's definitely the best version they've put out thus far.
      As mentioned in another thread, Cyberpunk had issues in well lit and reflective scenes, but that doesn't surprise me as I honestly don't know how their prediction can handle that.
      But as you say, it's still in its infancy. Very interested to see how this improves in the next year or two.

    • @anteshell  1 year ago +2

      @@AIandGames Yes, it was most noticeable in Cyberpunk.
      Sadly I'm not in a position to have first-hand experience with any of this, so I'm only talking from what I've seen, including the research on the subject. Thus, I can only talk about how it looks.
      But I too am figuratively jizzing my pants waiting to see how this and similar technologies will develop.

  • @AgentOffice  1 year ago

    We won't need a game engine soon; just type in a title

  • @linaskvedaras  1 year ago

    So this is the image-smoothing "soap opera" setting on new TV sets that we love to hate, on steroids. Found to be beneficial in video games. Who would've thought?

    • @LutraLovegood  1 year ago +1

      The big difference is this wasn't made to run on an underpowered SoC for TVs that can't even do 2 GHz.

  • @Semperverus0  1 year ago

    "Looked just as good if not better than native" - I think the technology is cool and all, but you have got to be smoking some crazy shit to think that the AI-generated stuff looks *better.* It looks good, but there are severe degradations in every shot you showed, from lighting being dimmed to aggressive shimmering in tree leaves. I'll certainly take something like this knowing it's a sacrifice in order to get a higher frame rate in something like VR, but native is still far superior in appearance just by the nature of what it is.

    • @filiplaskovski9993  9 months ago

      Man, you know what shocked me? I played System Shock recently, and I've got powerful hardware: a 4090 and a 13900K. Natively I could run the game at 120 fps, max settings, at 4K! I tried to see how DLSS would behave, and I was shocked to see it looked better than native! The image was crisp and refined compared to native!! :/ Weird, tbh

  • @jameswredfoxslaughter2913  1 year ago

    Fantastic sales pitch!
    "I'll wait for the 5090." ;}

  • @quaked2023  1 year ago

    The real question is: Will it run DOOM?

    • @AIandGames  1 year ago +1

      ... let me go and ask

    • @synthetic240  1 year ago +1

      What about Crysis?

    • @AIandGames  1 year ago +1

      @@synthetic240 Oh god no, I mean nothing can, right?

  • @tomfillot5453  1 year ago +1

    I absolutely love the technology, but I'm so bored of what it's being used for. I feel an intense disconnect from a seemingly large part of the gaming community that values frame rate and 4K. To me this adds nothing of value to a game; these things are only pursued because we fixate on metrics as if they show something objective about the game. As if game A being 4K/120fps, in comparison to game B being 1080p/30fps, helps in any way to compare games A and B. Instead of, you know, gameplay, level design, system design.
    I would much prefer the technology being used to reduce the engine load and spend the CPU cycle budget on better AI systems, better music engines, and other typically underdeveloped parts of modern games. If the neural net handles upscaling, maybe you can lower the level of detail in your assets and load more stuff.

    • @TheOrian34  1 year ago

      But muh graphics.

  • @SoCalFreelance  1 year ago +2

    Is it real or is it Memorex? Highly skeptical. Visions of the horrible TV motion-interpolation 'soap opera effect' come to mind.

    • @RedSaint83  9 months ago

      I mean, they're using "AI" and a separate chip on the board to do it, so I'm hoping it's a lot better than the frame interpolation of old, which was just based on pure math.