NVIDIA’s DLSS 3.5: This Should Be Impossible!

  • Published on May 15, 2024
  • ❤️ Check out Weights & Biases and sign up for a free demo here: wandb.me/2mp
    📝 My paper on neural rendering:
    users.cg.tuwien.ac.at/zsolnai...
    My latest paper on simulations that look almost like reality is available for free here:
    rdcu.be/cWPfD
    Or this is the orig. Nature Physics link with clickable citations:
    www.nature.com/articles/s4156...
    Digital Trends on DLSS 3: www.digitaltrends.com/computi...
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Gaston Ingaramo, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
    If you wish to appear here or pick up other perks, click here: / twominutepapers
    Chapters:
    0:00 What is DLSS?
    0:58 Neural rendering
    1:50 DLSS enters the scene
    2:39 Step 1: Super resolution
    2:56 Step 2: Optical flow
    3:41 Ray Reconstruction
    5:02 Results
    6:09 Who gets this?
    6:59 Not perfect
    7:36 First Law of Papers
    Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
    Károly Zsolnai-Fehér's links:
    Twitter: / twominutepapers
    Web: cg.tuwien.ac.at/~zsolnai/
    #nvidia #dlss3
  • Science & Technology

Comments • 1.1K

  • @brintparis7374 • 7 months ago • +478

    I love watching how incredible the AI developments have been. It's already mind-blowing and going to get so, so much better in the next few years.

    • @bansheezs • 7 months ago • +4

      Meh, AI has been around for 30 years; it's about as good as it ever was, they just never used it for gaming image processing.

    • @rockdude1122 • 7 months ago • +19

      ​@@bansheezs smartest youtube commenter:

    • @Lucas-po6mn • 7 months ago • +13

      @@bansheezs as someone studying machine learning, it's impressive how wrong this comment is

    • @bansheezs • 7 months ago • +2

      @@Lucas-po6mn Pff, machine learning has been around practically since computers started. It's not a new technique or method. It's not like it's some kind of revolutionary idea.

    • @Lucas-po6mn • 7 months ago • +13

      @@bansheezs That's like saying planes have been around for a while and there's no difference between the A-20 and an F-22.
      Algorithms used in ML before and now are completely different.
      I made a digit recognition AI, nothing complicated, and the techniques and methods used for other things are entirely different.

  • @pandoraeeris7860 • 7 months ago • +1516

    What a time to be AI!

    • @jogo798 • 7 months ago • +15

      Lol😂

    • @urkururear • 7 months ago • +15

      Best comment ever!😂

    • @furdiburd • 7 months ago • +10

      Pin this man!

    • @aleksitjvladica. • 7 months ago • +9

      How many times must people comment this same thing?! Annoyance!

    • @LC-pp4bb • 7 months ago • +6

      Until it takes over

  • @jadonguthrie453 • 7 months ago • +379

    Actually insane that we can simulate lighting in real time as we run games. Cannot believe how fast the technology has improved. Can't wait to see what other places AI can improve gaming outside of graphics, like believable NPCs, better physics, etc.

    • @aerostorm_ • 7 months ago • +41

      We are getting to the point where we are not just generating information for the final output, but also the intermediate information required to improve those outputs.
      Physics calculations would be another amazing opportunity. Nvidia used to talk about "HairWorks" and "PhysX" as computational models it had built into its GPUs.
      Now, think about fluid, hair, and cloth physics where these computational models could be improved through generated information. Very bright future for these kinds of applications.

    • @PuppetMasterdaath144 • 7 months ago

      yes we are creating a new artificial moon projection system from saturn

    • @loki76 • 7 months ago • +6

      Just 6 years ago this was only done with supercomputer server farms, rendering outside real time, for the movie industry etc.
      Crazy to think that because of AI and machine learning, my 4090 likely has the same real-time rendering performance as a whole server farm costing hundreds of thousands in servers.

    • @VariantAEC • 7 months ago • +1

      Now also on *PHONES* that top out at 3.5 Watts!

    • @najeebshah. • 7 months ago • +1

      It's not AI lol, it's predetermined data sets; AI means something completely different.

  • @sapphyrus • 7 months ago • +477

    DLSS is basically what those old CSI tech guys used when their boss said "ENHANCE! ENHANCE!" to them. Finally, Hollywood CSI tech at our fingertips!

    • @martins7236 • 7 months ago • +77

      Yes and I was always annoyed how stupid and unrealistic this was xD
      What a time to be alive!

    • @markjacobs1086 • 7 months ago • +65

      The only problem we'd have with it in reality is that it's fictional (hallucinated) data 😆

    • @PapaP86 • 7 months ago • +29

      Not quite. With DLSS, it's trained on super high resolution images of a given game so that DLSS can emulate that and use that data when the game is running at lower resolution, so it knows how to reconstruct and generate relevant data. So, the higher-end data has to have already been provided (and for that specific game). Not going to be possible when it comes to unique photographs where the subject wasn't already known and the data captured beforehand.
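
      A minimal sketch of the paired-data idea described in the comment above, assuming ground-truth frames already exist offline; shapes, the downsampling factor, and the random stand-in frame are all illustrative:

        import numpy as np

        def make_training_pair(high_res, factor=4):
            """high_res: (H, W, 3) float array from an offline ground-truth render.
               Returns (low_res, high_res); an upscaler learns the mapping low -> high."""
            H, W, _ = high_res.shape
            low = high_res.reshape(H // factor, factor, W // factor, factor, 3).mean(axis=(1, 3))
            return low, high_res

        frame = np.random.rand(2160, 3840, 3)   # stand-in for one ground-truth frame
        low, high = make_training_pair(frame)
        print(low.shape, high.shape)            # (540, 960, 3) (2160, 3840, 3)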

    • @Wobbothe3rd • 7 months ago • +10

      @@PapaP86 True, but eventually a more complex underlying understanding of materials and light transport (i.e. Omniverse) could theoretically help with making good guesses about real-world images, especially where some human-made data is known (like knowledge of how license plates are made helping decipher an actual real-world license plate number from a low quality image). If you think about it, most of the things an investigator would want to look into are human-made objects. You can't just extract that info from the image directly, but if the model understands more about how those objects (like license plates) are generally made and behave... ENHANCE!

    • @markjacobs1086 • 7 months ago • +33

      @@PapaP86 DLSS 2 uses a generalised neural network. It doesn't actually do specific game data training anymore (deprecated since they shelved DLSS 1.x).

  • @theftking • 7 months ago • +89

    Even outside real-time use cases, RTX-accelerated RIFE frame generation is super impressive. When you aren't constrained by having to run in real time, it can draw something like 3 interpolated frames for every real one.
    You can render Blender animations at half FPS, interpolate them 2x in a few minutes, and it's... almost indistinguishable. For YouTube-destined content that's going to be compressed anyway, it's potentially incredibly useful.
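
    To illustrate the offline-interpolation workflow, here is a toy 2x interpolator using classic dense optical flow rather than RIFE itself; the file names are hypothetical and the halfway backward warp is a rough approximation of what a learned interpolator does far better:

      import cv2
      import numpy as np

      a = cv2.imread("frame_0001.png")   # two consecutive rendered frames (assumed to exist)
      b = cv2.imread("frame_0002.png")
      ga = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
      gb = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)

      # Dense per-pixel motion from frame A to frame B.
      flow = cv2.calcOpticalFlowFarneback(ga, gb, None, 0.5, 3, 15, 3, 5, 1.2, 0)

      h, w = ga.shape
      gx, gy = np.meshgrid(np.arange(w), np.arange(h))
      # Sample A halfway along the motion vectors to approximate the in-between frame.
      map_x = (gx + 0.5 * flow[..., 0]).astype(np.float32)
      map_y = (gy + 0.5 * flow[..., 1]).astype(np.float32)
      mid = cv2.remap(a, map_x, map_y, cv2.INTER_LINEAR)
      cv2.imwrite("frame_0001_5.png", mid)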

  • @TashiRogo • 7 months ago • +18

    This is the most unusual speech pattern I've ever heard in my life.

  • @vaendryl • 7 months ago • +248

    this tech makes me most excited in the context of VR headsets. when you want to render a game at 8k x 4k resolution with a minimum of 90fps we're going to need tech like this even with top of the line video cards.
    anyone who knows what asynchronous reprojection is and sees this tech and how it works should get excited about the possibilities.

    • @RasmusSchultz • 7 months ago • +35

      Nvidia should partner with VR companies and put a DLSS capable chip in the headset - you could easily run-length compress the 5% pixels, it would probably be so fast you could do 4K wirelessly 😳

    • @Wobbothe3rd • 7 months ago • +33

      Imagine combining eye tracking with just-in-time raytracing! Only trace the extra rays you need for the foveal region, and cache the rest. The possibilities for VR are extreme. Lots of ray traced lighting effects are about adding DEPTH to a scene, and you tend to take that for granted looking at a flat screen, but with stereoscopic vision all those lighting details become much more important.

    • @rayujohnson1302 • 7 months ago • +19

      You wouldn't need it to run at 8K if you tracked the eyes. Only a narrow area of our vision is high resolution; our peripheral vision is blurry. The AI could track our eye movement, and it would only need to render the pixels that are directly in the line of sight at high definition.
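
      A minimal sketch of that idea: a per-pixel shading-rate map that falls off with distance from the gaze point. The radii and rate values are made-up illustrations, not anything a real foveated renderer prescribes:

        import numpy as np

        def shading_rate_map(width, height, gaze_x, gaze_y, fovea_radius=200.0):
            ys, xs = np.mgrid[0:height, 0:width]
            dist = np.hypot(xs - gaze_x, ys - gaze_y)
            # 1 = shade every pixel in the fovea, 4 = shade every 4th pixel in the periphery.
            return np.where(dist < fovea_radius, 1,
                   np.where(dist < 2 * fovea_radius, 2, 4)).astype(np.uint8)

        rates = shading_rate_map(3840, 2160, gaze_x=1920, gaze_y=1080)
        print((rates == 1).mean())   # fraction of the frame shaded at full rate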

    • @Cross-xm2fr • 7 months ago • +1

      Fovea bro

    • @vaendryl • 7 months ago • +2

      @@rayujohnson1302 Foveated rendering is another big part of the puzzle, but it isn't relevant to the tech discussed in this video. Also, it seems Apple has been making great strides in this field.

  • @vexmyth0clast • 7 months ago • +75

    This technology is so damn impressive. It’s crazy how far it’s come in such little time.

    • @codymadison9993 • 7 months ago • +3

      Just running native resolutions is more impressive lol

    • @Impala1439 • 7 months ago

      @@codymadison9993 This!

  • @DeadlyDragon935 • 7 months ago • +9

    Holy shit your voice is dlss 1 ai generated.

  • @sharkinahat • 7 months ago • +45

    Not so long ago we used to render 16x samples to get an on-screen pixel; now we render 1/8 of a sample per pixel(?) and the results look better. One more paper down the line and we'll be quoting Arthur C. Clarke.
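
    Taking the commenter's figures at face value (a back-of-the-envelope check, not numbers from the video):

      samples_then = 16      # supersamples per output pixel
      samples_now = 1 / 8    # traced samples per output pixel with reconstruction
      print(samples_then / samples_now)   # -> 128.0, roughly 128x fewer samples per pixel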

    • @tuseroni6085 • 7 months ago • +4

      i've been quoting him for years.

    • @MementoTurtle • 7 months ago

      What are his quotes you would be willing to share?

    • @tuseroni6085 • 7 months ago • +9

      @@MementoTurtle think he is referring to the quote "any sufficiently advanced technology is indistinguishable from magic"

    • @MementoTurtle • 7 months ago

      @@tuseroni6085 hmm, well you got my curiosity, I'm going to check out his other quotes
      Thank you for your time stranger

    • @VRchitecture • 7 months ago • +1

      Not really, from a consumer's point of view. I notice that people who know nothing about what's going on under the hood of advanced technologies (like AI-related stuff) take everything for granted and even complain about it not being "good enough". But for me this is still a kind of magic, though I know how it works 🤷🏻‍♂️

  • @paxdriver • 7 months ago • +17

    3 milliseconds?! That's a huge performance jump for a personal project. Even if it only worked in that one scene, it's so neat you got it to work like a proof of concept!

  • @Wip3ou7 • 7 months ago • +43

    The problem with frame generation is that you get the fluidity of whatever framerate you're targeting but the input delay of whatever the actual framerate is.

    • @JacobAsmuth-jw8uc • 7 months ago • +1

      Not with frame warping :) DLSS 4.0 is going to blow your mind

    • @ProPuke • 7 months ago • +31

      @@JacobAsmuth-jw8uc It can predict the future for inputs that haven't happened yet?

    • @Wip3ou7 • 7 months ago • +8

      How so? In order to generate frames, the game engine has to execute its instructions for anything to happen. Once the frame is generated, the AI compares the new frame with the previous frame and then generates the frames in between those two slices of time. A GPU frame generator cannot predict the behavior of a game before the game engine does. The GPU doesn't know what your buttons do. The GPU doesn't know anything about what's happening in the game.
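
      A toy timing model of the point above (the numbers are illustrative, not measurements, and real pipelines add Reflex and generation cost on top):

        real_fps = 60
        real_frame_ms = 1000 / real_fps
        # The in-between frame (for time N + 0.5) can only be built once real frame
        # N + 1 exists, and frame N + 1 is then shown about half a real frame later
        # than it otherwise would be.
        added_delay_ms = real_frame_ms / 2
        print(f"~{added_delay_ms:.1f} ms extra presentation delay in this simplified model")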

    • @Wobbothe3rd • 7 months ago • +7

      NO! This is about RAY RECONSTRUCTION, NOT frame generation. STOP SPREADING this inaccurate BS!!! You can use ray reconstruction without super resolution OR frame generation entirely! VR games have had asynchronous time warp and space warp for DECADES and nobody complains about it, and that's basically the same thing but with lower quality.

    • @Wip3ou7 • 7 months ago • +4

      @Wobbothe3rd He says it himself about 3 minutes into the video. It has to look at adjacent frames; it will increase fluidity, but notice how he did NOT say that it will improve game responsiveness.

  • @KingGJT • 7 months ago • +11

    Incredible. Great to see all the old papers being combined in one new video! Can't wait to get my hands on this technology.

    • @G0A7 • 7 months ago • +5

      U can use it right now lol

    • @G0A7 • 7 months ago • +1

      @@JohnnyWednesday Expensive? DLSS can be used with a 2000 series card; you can buy one for $100-200 or less.

  • @LetsGenocide • 7 months ago • +8

    Your explanations are the best! Really easy to understand very complex topics

  • @Kazutopia808 • 7 months ago • +10

    From the Atari 2600 to this. I remember playing a game with my brother called Tank Battle. It was top-down and the obstacles were just random shapes. The tank turrets were 1 pixel wide, firing 1-pixel tank rounds. Never thought we'd be seeing something like this. I feel privileged to have witnessed this evolution.

    • @shaolin95 • 5 months ago • +1

      Combat?

    • @Kazutopia808 • 5 months ago • +1

      @shaolin95 Yes, that's it!! Combat. Loved that game. Thanks.

  • @nebuchadnezzar916 • 7 months ago • +20

    DLSS 3.5 appears to improve lighting response times and hopefully reduce or eliminate temporal ghosting artifacts too.

    • @MDwesLARL • 6 months ago

      Despite preferring DLSS on, those ghosting artifacts are what drive me insane. It would actually be insane if they were eliminated!

    • @Wylie288 • 5 months ago

      @@MDwesLARL I mean, imo sub-pixel elements are FAR more distracting. TAA is a must with native rendering. I hardly notice TXAA ghosting on native, and DLSS makes it less noticeable. Since DLSS 2.2 and on, I have never seen a case where I think DLSS artifacts are worse than any of native rendering's artifacts.

  • @nickalfonso8616 • 7 months ago • +8

    This is so cool. It reminds me of a chapter I read from a book whose name I can't remember, about the concept of AI supersampling algorithms becoming so advanced that, in combination with an affordable VR headset, you could give it prompts and have it generate locations at playable framerates in real time. Generating entirely playable game levels just based on input from the end user. The author was talking about this tech as something further in the future, but it seems like we're a lot closer to something like that than they thought. Very interested to see its evolution going forward.

  • @j_m_b_1914 • 6 months ago • +4

    I remember back in 2003 having a conversation with a professor at MIT about the future of graphics and how someday we might be able to play games like "Toy Story" but instead of rendering each frame over hours, playing it in real time. Obviously today GPUs do a ton of "cheating" but the end result is still the same -- we're finally able to basically render Toy Story in real-time.
    It just blows my mind.

  • @ViniciusNegrao_ • 7 months ago • +74

    When physics and hardware start becoming a problem, it's up to software to bridge the gaps. We've had huge strides in technology, but it seems to be plateauing; AI, on the other hand, is skyrocketing. Can't wait to see how AI and supercomputers will work together in the near future.

    • @andrasbiro3007 • 7 months ago • +17

      AI hardware is just starting. You've seen nothing yet.
      And we are also extremely far from the theoretical limits of traditional computing, only silicon is reaching the end. There are other potential technologies that have been researched for decades now, eventually one or more will take over.

    • @ViniciusNegrao_ • 7 months ago • +1

      @@andrasbiro3007 I've never heard of AI hardware; sounds really cool and should present really interesting challenges for the engineers.

    • @loki76 • 7 months ago • +5

      The beauty of AI is that it can, as you say, really do so much more with the hardware. So even if the hardware only improved by 50%, the AI could make it 200% because of how it can utilize it.
      Nvidia really went down the right path; they totally dominate AI now.

    • @Real_MisterSir • 7 months ago • +7

      @@ViniciusNegrao_ Look into the exclusive deals Nvidia has made with TSMC, where they will use Nvidia powered AI to sample laser engraving on TSMC's chip manufacturing lenses in order to enable sub nanometer node precision. It's wild, and considering TSMC has traditionally been a heavy AMD supporter due to their deals, this partnership with Nvidia indicates the significance of AI accelerated hardware production. Hardware powering software to improve future hardware. Quite a cycle.

    • @JL-pc2eh • 7 months ago

      @@ViniciusNegrao_ I think some call them TPU. Just like CPU and GPU became a thing^^

  • @TheAkdzyn • 7 months ago • +2

    It's so humbling when you discuss your work. Thank you for all your wonderful insights and explanations!

  • @ImiiVy • 7 months ago • +8

    When I heard "how much has it improved in the last *5 years"* my papers were CLUTCHED. That's a lot of time for improvement

  • @TheApc95uk • 7 months ago • +10

    Hi Károly, great video as usual. Just wondering if you could do a comparison video between DLSS and AMD's FSR, weighing up the pros and cons of both.

  • @mikehibbett3301 • 7 months ago • +1

    Your videos are so inspiring. I love your phrase "just imagine where we will be two papers down the line". Going to print that on a tee shirt :)

  • @iSkobeyYT • 7 months ago • +1

    This video was so captivating that 8 minutes felt like only 30 seconds.

  • @Natural_Mindset • 7 months ago • +5

    Comments:
    1% : what a time to be alive!
    99% : aaand---soo---welll---naoow----meet---leeets

  • @TheAero • 7 months ago • +1

    This is one of the best technologies out there. It makes creating new GPUs less important while also advancing everything decades faster. We can already use 10x the power of current GPUs when using DLSS. New GPUs will be able to make better use of DLSS-style technologies, leading to even greater advancements.

    • @TheAero • 7 months ago

      Imagine if companies allowed you to fine-tune the diffusion model using a list of assets - sprites. Then we could make this 100x faster and more accurate by just letting the PC run for an hour to optimize the DLSS engine on the game we want to play.

  • @DanFrederiksen • 7 months ago • +11

    Because our reality has a lot of continuity, even if not strictly continuous like pebbles and trees, it stands to reason that it should be possible to render images largely resolution-independently. Barely any difference between 1K and 4K. And there is continuity over time as well that can be utilized, which is what UE Lumen already does.
    It's interesting to consider just how good renderers could have been back in the 486 days just with what we know today. And taken to the extreme, how fantastic could the images a 486 produces be with optimal software in practical render times? If God sat down and wrote a few lines of code, it would have to be nothing short of astonishing, despite only having 4MB of RAM.

    • @daveloomis • 7 months ago • +2

      This is largely possible because so much research has been done in massive parallelization and huge memory bandwidth so they can share info. Modern graphics cards are made to do many tiny jobs all at once. Classic CPU architecture couldn't take advantage of that since they were fast at single linear tasks and memory bandwidth was extremely limited. But I do like it as a sort of "what if." We could handwave and miniaturize 1000 486s or something and target DOS-like graphics, etc. : D

  • @Wyntrfang • 7 months ago • +4

    I'd imagine this being wonky in PvP-based games due to crazy random variables, but I'm more excited about this cutting down render times in offline render engines than in gameplay. Guessing that since the information is linear in a rendered animation, for instance, DLSS should have more than sufficient data to pull from, right?

  • @jacksonyan7346 • 7 months ago

    Is this general or do you need to fine tune the model on each game? It doesn’t feel impossible for it to be general but also kind of tricky given how varied game art styles can be.

  • @mariovelez578 • 7 months ago • +1

    So happy to have seen the live reveal of DLSS 3.5 at SIGGRAPH 2023. I couldn't believe it when I saw it, and I still can't believe it! Truly a time to be alive!

  • @juhor.7594 • 7 months ago • +11

    Hmm. It would be interesting to see what the rendered input looks like before dlss completes the frames.

    • @markjacobs1086 • 7 months ago • +5

      Essentially low resolution frames, together with motion vectors to indicate which way objects in a scene move.
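
      A hedged sketch of that temporal-upscaling idea: reproject last frame's high-res history with the motion vectors, then blend in the newly rendered low-res sample. Real DLSS replaces the fixed blend below with a neural network; shapes and the blend factor are illustrative:

        import numpy as np

        def temporal_upscale(history, low_res, motion, blend=0.1):
            """history: (H, W, 3) previous high-res output.
               low_res: (H//2, W//2, 3) new frame rendered at half resolution per axis.
               motion:  (H, W, 2) per-pixel motion vectors in high-res pixels."""
            H, W = history.shape[:2]
            ys, xs = np.mgrid[0:H, 0:W]
            # Reproject: fetch where each pixel was in the previous frame.
            px = np.clip((xs - motion[..., 0]).astype(int), 0, W - 1)
            py = np.clip((ys - motion[..., 1]).astype(int), 0, H - 1)
            reprojected = history[py, px]
            # Nearest-neighbour upsample of the new sample (a toy stand-in for jittered sampling).
            upsampled = low_res.repeat(2, axis=0).repeat(2, axis=1)[:H, :W]
            # Exponential blend of history and new information.
            return (1 - blend) * reprojected + blend * upsampled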

    • @Wobbothe3rd • 7 months ago • +4

      Try it in Quake 2 RTX right now; you don't even need an Nvidia card to see it. I actually kinda LIKE how it looks without denoising, but obviously you couldn't ship a game like that. Unfortunately YouTube compression makes it too hard to see (except maybe in still images); you have to run it on your own computer to see it in motion. Runs fast too.

  • @zeogamingmc • 7 months ago • +20

    Huge drawback to this is the increased input latency.

    • @Suthriel • 7 months ago • +10

      Luckily that's only if you use frame generation. If you don't use it and just use Ray Reconstruction, then you should still get the better quality at your normal framerate.

    • @leoterss • 7 months ago • +1

      Adding to what the commenter said: less competitive games, PvE, or strategy games benefit the most from all of it. But parts are still usable in competitive games.

    • @jadonguthrie453 • 7 months ago • +2

      Thanks to Reflex, the increased latency of frame gen is not at all noticeable. Cyberpunk 2077 with frame gen and path tracing plays like a dream.

    • @luminert2287 • 7 months ago • +2

      Yeah, not worth it if you don't have at least 60-70 fps already.

    • @erisium6988 • 7 months ago

      @@JohnnyWednesday Only if you use it to go from 20-30 fps to 50-70 fps. If you have 60 fps and use it to get 120 fps, it's impossible to notice. And I played Cyberpunk for quite a long time on a 390 Hz panel (sadly, because OLEDs are so much better now; since I got my LG TV I can't play those games on IPS), so I know what's better. And thinking that playing on a CRT 20 years ago somehow makes any difference to how LCD input lag feels is just cringe; if anything, you are older now and your reflexes aren't so sharp that you can see the difference between 240 and 360 Hz panels.

  • @owainwilliams9109 • 7 months ago

    If the optical flow method requires two frames to render intermediate frames, does this mean that the render-latency will be worse since we need to wait for an additional frame to be rendered? Or is there a method of using previous frames to calculate a "momentum" of sorts?

    • @KiWeWi • 7 months ago

      Optical flow does momentum. Moving pixels have a tendency to keep moving in the same direction, so given 2 frames, we can get "information" about the third.
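
      A minimal sketch of that "momentum" idea, extrapolating instead of waiting for the next frame; a toy constant-velocity model that ignores occlusions and disocclusions:

        import numpy as np

        def extrapolate_next(curr_frame, flow):
            """flow[y, x] = (dx, dy) motion from the previous frame to curr_frame.
               Assume each pixel keeps the same velocity for one more frame."""
            h, w = flow.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            src_x = np.clip((xs - flow[..., 0]).astype(int), 0, w - 1)
            src_y = np.clip((ys - flow[..., 1]).astype(int), 0, h - 1)
            return curr_frame[src_y, src_x]   # predicted next frame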

  • @gtamot0 • 6 months ago

    Your explanation of this topic is so great. Thanks for being an effective and appealing educator!

  • @anonymouse740 • 7 months ago • +3

    I think technologies like DLSS 3.5 will be vital for future VR applications. Can't wait to see this technology used in that space.

    • @toututu2993 • 4 months ago

      Yeah, VR is the only tech in the 21st century that has actually impressed me, not just in how the tech works but in how it completely changes gameplay and can do 100,000x more than a flatscreen game ever could. To be honest, VR is the only reason I upgraded my PC specs.

  • @fakepng1 • 7 months ago • +3

    Finally, I can now experience gaming like a fever dream I once had about AI generating every frame in the right context. Trippy.

  • @DHPshow • 7 months ago

    Any way I could try this upscaling stuff on a custom picture (without writing software)?

  • @DeFaulty101 • 7 months ago

    I wonder if there is some optimal strategy - as yet undiscovered - for deciding which pixels to remove so that we can have an optimal algorithmic reconstruction with minimal data usage.
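
    One toy heuristic along those lines (purely illustrative, not a known-optimal strategy): keep only the budgeted fraction of pixels with the largest local gradient, on the assumption that flat regions are easiest to reconstruct:

      import numpy as np

      def select_pixels(image_gray, keep_fraction=0.125):
          gy, gx = np.gradient(image_gray.astype(float))
          importance = np.hypot(gx, gy)
          threshold = np.quantile(importance, 1.0 - keep_fraction)
          return importance >= threshold    # boolean mask of pixels to actually render

      img = np.random.rand(1080, 1920)      # stand-in for a luminance buffer
      mask = select_pixels(img)
      print(mask.mean())                    # roughly 0.125 of pixels kept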

  • @itsd0nk • 7 months ago • +3

    The Frame Generation function of DLSS 3 and 3.5 is only available on RTX 40XX series cards. The other functions work on the 20XX, 30XX, and 40XX series cards. Just wanted to clarify that.

    • @sc0rpi0n0 • 7 months ago

      DLSS 3.5 supersampling is available from RTX 20XX but frame generation requires RTX 40XX. Frame generation is like when you want to double the FPS further after DLSS supersampling.

    • @itsd0nk • 7 months ago • +2

      @@sc0rpi0n0 That’s exactly what I said, but less correct lol. The supersampling/AA works on the 20XX and up, as well as the new Ray Reconstruction function. The motion-flow frame generation is exclusively only available on the 40XX series cards. They need to change their naming convention at Nvidia to be more clear, especially now that the newest 3.5 feature is available on older cards, while the 3.0 feature isn’t. Basically DLSS 2 is supersampling/AA, DLSS 3 is frame generation/motion-flow/smooth motion, and DLSS 3.5 is DLSS 2’s stuff but now with Ray Reconstruction built into a unified model. It’s possibly the most confusing naming convention of any recent modern products lol.

    • @Wylie288 • 5 months ago

      And it's always been the case. No idea why people spread this misinformation so much.

  • @123FireSnake • 7 months ago • +3

    I went from a GTX 970 to an RTX 4080 recently. Let me tell you, DLSS is actually everything it's made out to be; it is truly amazing technology, and I haven't seen any artifacts or issues related to using it in the games that have it.

    • @iikatinggangsengii2471 • 7 months ago • +1

      Even changing the quality setting is easily noticeable.

  • @JoeBob79569 • 7 months ago

    I remember a friend loaning me a demo disc from a PC magazine about 20 years ago (that's how we rolled back then!) and it had a ray tracing demo program on it that would procedurally render an image of an object and add ray tracing.
    It was pretty cool back in the era when GPUs didn't even have fans on them, but creating the image was about as slow as downloading an HD image from the internet on a 56k modem back then - it took a few minutes. But I remember us discussing how cool it would be when games had that technology.

  • @perfectionbox • 7 months ago

    I wonder if it would help the AI if the low-res scene included a "context buffer": an 8-bit bitmap that stores a content code per pixel. E.g. a green pixel could be associated with the code for "leaf" or "guacamole", a brown pixel with "sand" or "tree bark" or "chocolate". Then the AI would have more knowledge about what it's trying to draw with more detail.
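
    A minimal sketch of that "context buffer" idea: an 8-bit content ID per pixel stacked onto the color input a network would consume. The ID table and shapes are invented for illustration:

      import numpy as np

      MATERIAL_IDS = {"leaf": 1, "sand": 2, "tree_bark": 3, "chocolate": 4}   # hypothetical codes

      def pack_inputs(low_res_rgb, context_ids):
          """low_res_rgb: (H, W, 3) uint8, context_ids: (H, W) uint8 content codes.
             Returns an (H, W, 4) array an upscaler could take as input."""
          return np.dstack([low_res_rgb, context_ids])

      rgb = np.zeros((540, 960, 3), dtype=np.uint8)
      ctx = np.full((540, 960), MATERIAL_IDS["leaf"], dtype=np.uint8)
      print(pack_inputs(rgb, ctx).shape)   # (540, 960, 4)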

  • @dingozi3428 • 7 months ago • +10

    A wonderful video. It's a good day when Two Minute Papers uploads a video. I love watching how incredible the AI developments have been. It's already mind-blowing and going to get so, so much better in the next few years.

  • @anno222 • 7 months ago • +18

    was this video narrated by an AI as well?

  • @TheRealityWarper08 • 7 months ago • +1

    Two more papers down the line, and we'll be able to run the highest quality game of DOOM the world has ever seen!

  • @pand1024 • 7 months ago

    The frame interpolation could certainly help with making things smooth but, at least as is, it sounds like it won't actually address latency in a way that will be helpful for fast paced twitchy gaming. Maybe if in the future there are other signals that the interpolation algorithm has access to such as button inputs or enemy locations?

    • @Wobbothe3rd • 7 months ago

      Reflex addresses latency. Using Frame Generation at 120 Hz adds no more latency than 60 Hz native.

    • @OGPatriot03 • 7 months ago

      I don't care much about frame generation currently (although it would make sense that it would be more tolerable on high refresh rate displays, which I have), but the ray reconstruction that improves RT quality? That's epic.

  • @xpdatabase1197 • 7 months ago • +3

    FSR 3.0 is supported on pretty much any GPU from any manufacturer, it's open source as well, and it also runs in pretty much any game of any year. It does everything but the ray reconstruction for ray tracing. This technology is good, but I see too many developers starting to rely on it to give people OK performance in games; it's already happening, and the quality of programming is going to decrease drastically over the coming years as people rely on technology like this to make simple games run normally. The best part comes when you combine this technology with very competent and passionate programmers who can truly squeeze as much quality and performance out of a game as possible.

    • @Wylie288 • 5 months ago

      And it doesn't even remotely come close. The image quality sucks at best. It's a great FPS boost, but for those of us using DLSS to improve image quality and reduce rendering artifacts, it's awful.

    • @MrVidification • 5 months ago

      Combine that with AI to fix programming flaws, to balance things out and auto-optimise.

    • @xpdatabase1197 • 5 months ago

      @@MrVidification Programming is super complex; AI would have to be sentient to be able to optimize programs of such high complexity as game engines. A game engine such as Unreal Engine has millions of lines of code; AI can barely handle a few thousand, and on top of that it can barely write simple programs. Game engines are complex, and their code and overall logic can be incredibly hard to decipher. Doing this in real time as well is impossible; right now it sometimes takes a few minutes for the AI to somewhat figure out how to fix a simple 100-line program, and most of the time you have to guide the AI and keep reminding it what you and the AI are currently working on.

  • @gio3061 • 7 months ago • +24

    Gamers: "yay, so this means we'll get better performing games!"
    Publishers: "ok so let's not polish the game, and rely on DLSS to save money"

    • @StressKind • 7 months ago • +6

      True for AAA titles but on the other hand, indie games may well profit from it because there never would have been time to polish the graphics anyway...

    • @42ndMoose • 7 months ago • +7

      "minimum requirement: dlss"
      *game runs 25 fps on an average pc*
      fuck

    • @Wobbothe3rd • 7 months ago • +1

      You could say that about literally ANY technology that improves performance. YouTube commenters just love whining.

    • @gio3061 • 7 months ago

      ​@@Wobbothe3rd It'd be whining if it wasn't true, but Starfield proves me right.

    • @Wobbothe3rd • 7 months ago

      @@gio3061 DOES STARFIELD HAVE DLSS!?!? You just defeated your own argument: Starfield was NOT developed with DLSS in mind, so bringing up Starfield actually PROVES YOU WRONG, lol. There will be games that are well optimized or poorly optimized NO MATTER WHAT Nvidia does; YouTube commenters just love whining to cope with the fact that they don't have the latest hardware, lol.

  • @TrabberShir • 7 months ago • +1

    Your 2 papers down the line statement used to get me excited, now it terrifies me. The last few years have been crazy.

  • @PabloHenrique-bq8si • 7 months ago

    Which game/video is this RC car racing scene from?

  • @4rtoria • 7 months ago • +8

    Can’t wait for it to be in a total of 3 poorly optimized games in the future ❤

  • @perplexedon9834 • 7 months ago • +7

    What's amazing is that these same techniques could be used to compress video data to 20% of its original size, and then traditional lossless compression techniques can be applied to the result! (Lossy compression already exploits redundancies within and between frames, so I think it may interact destructively with these techniques.)
    It'll be surreal to have entire movies taking up a couple of megabytes, with an AI "dreaming" almost the entire output based on a tiny reference file. For live action and "traditional" 3D animation this would be great!
    I suspect it wouldn't work as well with 2D animation or recent mixed 2D/3D animation like Spider-Verse, Puss in Boots and TMNT, where artists individually create frames. Part of the appeal of those is that you can pause at any moment and have a sharp, hand-crafted still, and I think AI compression of framerate and resolution would destroy that.

    • @VariantAEC • 7 months ago

      No, it can't. DLSS is trained on specific games. Each game has its own model. Imagine each video having its own AI upscaling model... yeah... that isn't happening.

    • @perplexedon9834 • 7 months ago

      @@VariantAEC Oh, that's interesting! I thought this was just a specific application of upscaling and interpolation. All the same, emerging AI upscaling and interpolation can do what I described, but it's good to know additional techniques are being used in games.

    • @VariantAEC • 7 months ago

      @@perplexedon9834
      No... they can't, I literally explained why they can't. DLSS, and any upscaling algorithm remotely like it, is trained on a specific set of data that already exists. If Puss in Boots or Into the Spider-Verse have to first be rendered out at 16K resolution... why not just release the films at 16K resolution on multipack UHD BD-ROMs?

  • @pedazodeboludo • 7 months ago • +4

    I'm currently trying Portal RTX. It's a shame though that the denoising is so aggressive that it removes a lot of the ambient shadows... but we'll get there :)

  • @Brian2532010 • 7 months ago

    Anyone got a good guide on exactly how to choose your in-game settings with DLSS in mind? I'm still confused. I have a 3080 Ti and a 1440p setup.

  • @adrianbiber5340 • 7 months ago

    😍that rain splashing on puddles is astonishing! v exciting stuff

  • @abeidiot • 7 months ago

    It's truly been a journey watching the foundational steps of this final product through the various papers presented on this channel

  • @DanielBrainbox • 7 months ago • +4

    I've been following you for a very long time and really appreciate your videos, as they are always well prepared and always show great content, but the way you talk sometimes makes it hard to follow. At least for me. It's not your voice, not the language, and not the accent - that's all great - but how dramatic and overexcited it sounds, with those gaps between the words. It feels so unnatural. I've never heard anyone else talk like this. I will get used to it, I guess. Please continue your great work. Always happy to see new content from you.

    • @RationalSlave • 6 months ago

      His voice is absolutely perfect. It is gentle, calming and relaxed. That's what I like about it. Not like others who just want to shove down ten paragraphs down our throats within the first 20 seconds.

  • @imjody • 7 months ago • +6

    If this was only deserving of a 0.5 increase in version number, to DLSS 3.5, I can only IMAGINE what they've got in the works for DLSS 4.0!!

  • @minarishell • 7 months ago • +1

    I can't wait to see how this can be integrated into VR. Photorealistic VR graphics demand such incredible technology, and I think we're finally getting there!

    • @megagreatgodlywhiteshark2929 • 6 months ago • +1

      AVR (augmented virtual realism RPGs) will probably come first, in 7+ years or somewhere around that era, etc., lol.

  • @nicollokhumalo2826 • 7 months ago

    Does this tech work for online gaming as well?

  • @StressKind • 7 months ago • +3

    So does 3.5 include frame generation? i.e. does my 3070ti get all the latest DLSS now??? that can't be true...

    • @backfischritter • 7 months ago • +6

      No, frame generation is still only available for 4000 series cards, as it requires the new optical flow architecture to run in real time without major input lag.

    • @StressKind • 7 months ago

      @@backfischritter I thought so... but this means that 3.5 is separate from 3.0?

    • @NyahNyum • 7 months ago

      @@StressKind Yes, 3.0 is for the 4000 series, 3.5 is for all RTX cards. They basically combine what used to be 3 or 4 denoising passes/filters for ray tracing into just one. It may give a little FPS boost compared to DLSS 2 without ray tracing, but the purpose is to denoise ray tracing with incredible quality, and more quickly. So it will give you better ray tracing quality and better FPS with DLSS 3.5, but I don't know if it will give better FPS compared to DLSS 2 without ray tracing enabled. We'll see! =)

    • @VolTheProducer • 7 months ago

      @@NyahNyum I remember from a video somewhere that Nvidia claims 3.5 has a very small impact on FPS. I definitely wouldn't expect it to be better than DLSS 2 on its own, but it won't be significantly worse either.

    • @Wobbothe3rd • 7 months ago

      Only Frame Generation is exclusive to RTX 40 cards; ray reconstruction will run on a 2060.

  • @OGPatriot03 • 7 months ago • +11

    I can't wait for DLSS 3.5's Ray Reconstruction. That's going to significantly enhance the quality of real-time ray tracing/path tracing.
    It's a great use of the AI processing; given how it can handle such practical tasks, it will certainly be interesting to see how this changes everything going forward.

  • @zazry86100 • 7 months ago

    Stimpy: Hey, Ren. What are you doing?
    Ren: Don't disturb me, Stimpy. I'm busy making a video about NVIDIA DLSS 3.5.

  • @manzell • 7 months ago

    I wonder if LOD systems can be reappropriated to define which "channel" of DLSS to use based on their importance. Big structural blocks - the roadway, buildings, etc - get one DLSS thread that's super speed optimized; the player model has a different DLSS tuned to preserve minute details, running as fully separate processes.
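
    A toy sketch of that idea; the profile names, thresholds, and the "per-LOD channel" mapping are all invented for illustration:

      def upscaler_profile(lod_level, is_player_model=False):
          if is_player_model:
              return "detail_preserving"   # slower path, keeps minute details
          if lod_level <= 1:               # hero / near geometry
              return "balanced"
          return "speed_optimized"         # distant structural blocks

      for lod in range(4):
          print(lod, upscaler_profile(lod), upscaler_profile(lod, is_player_model=True))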

  • @Exilum • 7 months ago • +5

    I've rarely been so early for a Two Minute Papers episode, but damn am I happy to watch it!

  • @Eldesinstalado • 7 months ago • +3

    The DLSS mod for Starfield doesn't work at all; Bethesda needs to add it natively.

  • @kinngrimm • 7 months ago

    Companies which improve products even long after they have been sold - those are the kinds of companies I can respect.

  • @flottenheimer • 7 months ago

    How about AMD Super Resolution (FSR)? Can you do a video on that since it supports older graphics cards as well?

  • @snowballeffect7812 • 7 months ago • +34

    That's cool and all, but their monopoly of the market is a huge problem. They already tried to abuse their position pretty egregiously this past generation; they totally fumbled it, but they could afford to. Hopefully not for much longer.

    • @RasmusSchultz • 7 months ago • +9

      and they pretty much own this channel. 😐

    • @ALFTHADRADDAD • 7 months ago • +5

      @@RasmusSchultz I mean, not without good reason

    • @Wobbothe3rd • 7 months ago • +13

      @@JohnnyWednesday AMD doesn't have any Machine Learning/AI "stuff" to cover! This channel is dedicated to AI, not just graphics or even Ray Tracing as such, and AMD is way behind at all of the above. Nvidia invested BILLIONS over DECADES into CUDA and Tensor acceleration when no other company was interested and when Wall Street gave them no credit whatsoever, now that they're reaping the rewards losers want to whine about it. Nvidia does NOT have a "monopoly" - both AMD and Intel directly compete with them in both graphics, gaming, and datacenters. It's actually AMD who have the completely unfair duopoly over BOTH major current-gen consoles and NO ONE SAID A WORD ABOUT IT for almost 15 years! The FCC, FTC, and the press COMPLETELY IGNORED the fact that AMD won the contract for two DIRECTLY COMPETING companies (Sony and Microsoft) and I never saw any youtube comments whining about that!

    • @ALFTHADRADDAD • 7 months ago • +3

      @@JohnnyWednesday a bit reductive innit

    • @thisguy3500 • 7 months ago

      @@Wobbothe3rd The government contracts Radeon and Boeing in the same manner; why would the FCC or FTC need to stop AMD from competing in a bidding war against Nvidia or Intel? They're all using TSMC these days, right?

  • @MrMartin627 • 7 months ago • +3

    The main problem I'm aware of is that it increases input latency. So any video you see of it will look like 'magic', but if you sit down and play, it will feel sluggish. This will mostly be fine for handheld/console gaming, but as someone who is quite susceptible to this, I feel like we're going backwards. Games used to be sharp and amazingly responsive (but non-realistic), and now they're getting unresponsive and blurry-looking because of all the approximations in an attempt to come visually close to reality.

    • @monkgogimasedi5265 • 7 months ago

      Frame gen has less latency than native on AMD GPUs because of reflex

    • @RadixRill • 7 months ago

      The difference between native and DLSS with FG is minuscule, especially if you're starting with a frame rate above 60 fps. It's negligible. Latency is only noticeable if you're starting with a very low frame rate.

    • @phizc • 7 months ago

      Ray Reconstruction, introduced with DLSS 3.5, doesn't use frame generation. Of course it _can be combined with_ frame generation, but only on the RTX 4000 series. So no sluggishness required, just better quality RT.

  • @Dimitris_of_Ark • 7 months ago

    Will dlss 3.5 frame generation work in vr?

  • @josiah42 • 7 months ago • +1

    This is also a demonstration of how smart the design of the human eye is. Rather than acting like a camera, our eyes constantly flutter and flick around a scene, gathering samples of high interest points. Then a neural reconstruction happens in our brains, not of an image or video, but an awareness of the spatial context we're in.
    DLSS is essentially biomimicry. We have a neural attention sampler, temporal movement tracking, and neural reconstruction of the scene. We also have tilt stabilization and liquid cooling built into our eyes.
    Now we just need to build cameras and graphics cards that will last for 100 years of continuous use.

  • @AugustoFnrd • 7 months ago • +3

    do you still record your voice for the videos or is it all deepfake?

  • @jackb4 • 7 months ago • +23

    One weakness of DLSS 3 is that it drastically increases input latency in game. Still an amazing technology, but that seems to be the main downside for now.

    • @SaltyDerps • 7 months ago • +9

      Unless you are playing competitive online games, it doesn't matter; also, Nvidia Reflex brings down the latency.

    • @OliM9595 • 7 months ago • +4

      @@SaltyDerps When running Dying Light 2 at 60, boosting it to 100 fps with frame gen is great, but using frame gen to go from 30 to 60 does not feel great imo, and I prefer the native 30 fps experience.

    • @Wobbothe3rd • 7 months ago • +3

      NO! This is about RAY RECONSTRUCTION, NOT frame generation. STOP SPREADING this inaccurate BS!!! You can use ray reconstruction without super resolution OR frame generation entirely!

    • @Relex_92 • 7 months ago • +3

      No, Reflex completely compensates for any input-latency disadvantage from frame generation. It often even overcompensates, so that the latency with Frame Generation + Reflex is even lower than without it.
      Reflex is so effective because latency in GPU-bound scenarios is usually pretty high. Even a simple framerate limiter can already drastically improve latency if GPU usage is kept below 99%. Reflex goes even further than that, but this is basically why it works at all and why Reflex can completely compensate for any latency disadvantage from frame generation. Reflex without frame generation obviously has even lower latency. But saying latency with FG + Reflex is bad is just completely wrong; latency is at the level we are used to or even better, as I just explained. Latency with FG is a non-issue, really.

  • @lortzo573 • 7 months ago

    when does it get released?

  • @AdiusOmega • 7 months ago

    With each and every video you release, the inflection of your voice gets more and more crazy.

  • @mubats • 7 months ago • +3

    Aaaaaand
    OOOOFF
    Throuuuugh
    nooOoww
    JuuUuust

  • @samourtr • 7 months ago • +27

    So no one will pay attention to the way he speaks in the video?

    • @cactus22minus1 • 6 months ago • +2

      It is extremely jarring

    • @mattwood1661 • 4 months ago

      It made me angry.

    • @MrPhyxsyus • 3 months ago

      You mean like an AI nvidia shill talking about how good AI is?

    • @jenot7164 • 3 months ago

      He is a robot. 🤖 Didn’t you know?

    • @emaadawazem6971 • 2 months ago

      You know, I checked out his oldest video, from 8 years ago, and he sounds completely normal; then one from 6 years ago, still normal. 3 years ago he sounds more like now, but still slightly natural; 1 year ago he really starts making odd pauses and pitch changes, and then there's now. I wonder if it's a condition, or if he somehow believes it keeps the audience engaged, idk 🤷🏻‍♂️

  • @DerdOn0ner • 7 months ago

    That's what I like to see. People with factual knowledge talking about DLSS.

  • @16kn • 7 months ago

    Does this DLSS 3.5 only work on the 30 series, or just the 40 series?

    • @XTHEJAGONX • 7 months ago

      Ray Reconstruction works on all RTX cards, but frame generation is only on the 40 series.

  • @VisoBoard • 5 months ago • +4

    I love the videos but my god is it exhausting to listen to constant pausing and emphasizing.

  • @Nick_With_A_Stick • 7 months ago • +6

    You should mention in a pinned comment that only the 40 series can do frame generation. Due to the “OpTiCaL fLoW AcCeLeRaTor”.

    • @FleaOnMyWiener • 7 months ago • +4

      20 and 30 series also have optical flow accelerators that are much slower than the ones found in 40 series cards. Framegen would technically be possible on these cards, but the performance would be so poor that it wouldn't be worth it at all.

    • @Nick_With_A_Stick • 7 months ago • +1

      @@FleaOnMyWiener There's a video on YouTube of a guy getting it to run on his (I think) 3080, and it gave like 30-40% more FPS even though it's not meant to run on it. So I would have to disagree with that. I didn't know that the 20 and 30 series also had the same hardware, just slower, because at first Nvidia said it was due to the faster tensor cores. If you work with machine learning or CUDA C++ programming, you would know tensor cores are practically instant; they wait more on memory latency than on actual compute. The 30 and 40 series would be so, so ridiculously fast if they had HBM memory.

    • @pk417 • 7 months ago • +2

      @@Nick_With_A_Stick Which video? Please share the link,
      or please don't spread lies.

    • @Mr.MasterOfTheMonsters • 7 months ago • +2

      @@FleaOnMyWiener Even a 5% improvement is highly welcome. Nvidia Resizable BAR and AMD SAM are a minimal 5-10% improvement at best and were still marketed as a game changer.

    • @Wobbothe3rd • 7 months ago

      @@JohnnyWednesday It's not a marketing term, it's a real concept; you're just ignorant and preaching to the choir of Nvidia haters on YouTube. Look it up.

  • @carvalhoribeiro • 7 months ago

    Absolutely incredible. Thanks for sharing this

  • @alh5470 • 7 months ago

    So how does the coarse-input-to-fine-output process work, and is it live now? Like, can you use it to see someone's face when it's blurred out to protect their identity?

  • @Exilum • 7 months ago • +8

    When NVIDIA started going all in on tensor cores and AI tools, people thought I was crazy when saying it would become a core part of their business. Every day we're getting closer to that future.

    • @itzhexen0 • 7 months ago • +1

      I didn't think you were crazy. Because I didn't talk to you. Nice generated story though.

    • @Exilum • 7 months ago

      @@itzhexen0 Not generated. Literally what I said and keep saying.

    • @itzhexen0 • 7 months ago

      @@Exilum Someone probably said it before you.

    • @itzhexen0 • 7 months ago

      @@Exilum everyone who has liked your post probably is like i've been saying this for years.

    • @Exilum • 7 months ago

      @@itzhexen0 It's even certain. Even with how lukewarm (if not cold) the 20 series was. I don't doubt I wasn't the first. What I'm saying is that literally every single person I talked about this subject with back then didn't believe me. I could've probably gotten some people agreeing in a comment section or on reddit, due to the greater sample size.

  • @loki76 • 7 months ago • +4

    The passion of this guy narrating is amazing, lol. He really sells it so well.
    Ray Reconstruction is pretty amazing. Frame generation is already improving a lot, and fast; within just 6 months Nvidia cleaned up a lot of the motion artifacts etc.
    Realize we have only had RT and DLSS since 2017. In just 6 years and 3 generations we went from that to Cyberpunk 2077 with Frame Generation, Ray Reconstruction, and path tracing.
    Another 6 years - can you imagine? The technology is advancing at an ever-increasing pace, so what took 6 years to get to now will take even less time.
    I can easily see indistinguishable frame generation and DLSS without artifacts, better than native, offering double to triple performance.
    AMD is so far behind it's not even funny. No hardware to speak of. All their FSR crap is just a software solution and looks bad. In RT performance, which will be STANDARD in the near future, they are like 2 years behind.

  • @aslanyureky • 7 months ago

    Is there any list of games that support 3.5?

  • @TorQueMoD • 7 months ago

    I seriously want Károly to do a short film or commercial of some kind with William Shatner! It would be too awesome with the way they both speak :)

  • @raducristiandimitrie • 7 months ago • +6

    *laughs in AMD FSR*

    • @sonacphotos • 7 months ago • +2

      At least AMD is trying, better than nothing. But yeah, not even close to what Nvidia can do.

    • @timhaldane7588 • 7 months ago

      🤣

    • @Wobbothe3rd • 7 months ago

      ...which doesn't exist yet. GOD I hate YouTube.

  • @AdorablBirds • 7 months ago • +4

    Please say more than two words at once.
    Listening to. Someone speak... Like this? Is extremely!! Distracting.

  • @kyrolazioko4783 • 7 months ago

    Ray tracing was the nail. We used our hands to pound that nail into the wood. DLSS 3 is the hammer to actually pound that nail in. And some good ol' chalk on the hands will be the precision aiming that DLSS 3 uses to get the finer details.

  • @jtjones4727 • 7 months ago

    I'm still blown away with how nice Flight Simulator looks. Asobo really did some nice work on that title.

  • @roman9979 • 7 months ago • +6

    That's creepy voice)))

  • @Jonathan.mp4 • 5 months ago • +3

    Bro speaks like an AI it's really hard to watch

  • @corahnihlost923 • 7 months ago

    I'm not sure if the game needs to natively support 3.5, I'm using DLSS swapper and it looks amazing.

  • @jmalmsten • 7 months ago • +2

    Really hope Cycles will get something like this going. :)

  • @bumate90 • 7 months ago • +4

    The content is great, but my ears hurt, please work on your English brosky.

    • @Natural_Mindset • 7 months ago

      He can't, he's secretly a robot

  • @Spliffey81 • 7 months ago • +3

    Your videos are great and informative. But you are difficult to listen to. Stop hyping up -every- sentence with an inflection - It feels like you are doing a presentation in front of a kindergarten class.

    • @neil1922 • 7 months ago • +1

      He is really awful to listen to, he doesn't have to stop at every single word. It's like. This. DLSS. IS.Good.Yes.Because.Raytracing.

  • @Zanaki113 • 7 months ago

    My question: if this makes running games so much more efficient (and it seems like it does, as my old computer runs games way better with DLSS), how will they monetize it? Feels like it'll be the answer to a lot of gaming requirements for a long time, if not forever.

  • @scorptice7938 • 7 months ago

    Finally, we will be able to see what's on those bank and train station cameras