New Super Resolution AI - Enhance ~10x Faster!

  • Published on 21 Dec 2024

Comments •

  • @Cl0udgard3n
    @Cl0udgard3n 1 day ago +123

    what a time to be alive!

    • @emircanerkul
      @emircanerkul 1 day ago +4

      We're just about two more papers away.

    • @mauricioalfaro9406
      @mauricioalfaro9406 20 hours ago

      Dear fellow scholars...

  • @cube2fox
    @cube2fox 20 hours ago +88

    To explain: Unlike current super resolution solutions like Nvidia DLSS or AMD FSR2, these new techniques (2023 FuseSR, this 2024 paper) do not just scale up the rendered frames a bit, from a medium resolution to a somewhat higher one, like 720p to 1080p. They instead use the full (target) resolution (unshaded) texture and geometry data from the G-buffer and combine it with a very low resolution, fully pixel-shaded frame. E.g. 1080p geometry + 1080p textures + 270p fully shaded frame = 1080p fully upscaled frame. So the AI doesn't have to "guess" texture and geometry edges while upscaling, only shading effects, like shadows.
    Since pixel shading is a very expensive step, but most high frequency details are in the textures and geometry, I think it makes sense to separate the scaling factors here.
    That being said, it seems these techniques are still somewhat slow (more than 10 milliseconds per frame apparently, though I don't know the evaluation conditions), so they probably won't replace DLSS & Co just yet.
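
To make @cube2fox's explanation concrete, here is a minimal sketch of that kind of G-buffer fusion: a toy network (hypothetical architecture and names, assuming PyTorch; not the paper's actual model) that takes full-resolution G-buffer channels plus a low-resolution shaded frame and predicts the full-resolution shaded image.

```python
# Minimal sketch of G-buffer-guided super resolution (illustrative only, not the
# paper's architecture). Assumes PyTorch is installed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GBufferFusionUpscaler(nn.Module):
    def __init__(self, gbuffer_channels=7, shaded_channels=3, hidden=32):
        super().__init__()
        # Encodes the full-resolution G-buffer (e.g. albedo 3 + normal 3 + depth 1).
        self.gbuffer_enc = nn.Conv2d(gbuffer_channels, hidden, 3, padding=1)
        # Encodes the bilinearly upsampled low-resolution shaded frame.
        self.shaded_enc = nn.Conv2d(shaded_channels, hidden, 3, padding=1)
        # Fuses both and predicts the full-resolution shaded image.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, shaded_channels, 3, padding=1),
        )

    def forward(self, gbuffer_hi, shaded_lo):
        # Bring the low-resolution shaded frame up to the G-buffer's grid first.
        shaded_up = F.interpolate(shaded_lo, size=gbuffer_hi.shape[-2:],
                                  mode="bilinear", align_corners=False)
        feats = torch.cat([self.gbuffer_enc(gbuffer_hi),
                           self.shaded_enc(shaded_up)], dim=1)
        return self.fuse(feats)

# Toy 4x example, scaled down from 1080p/270p to keep it light.
gbuffer = torch.rand(1, 7, 360, 640)   # full-resolution, unshaded scene data
shaded_lr = torch.rand(1, 3, 90, 160)  # low-resolution, fully shaded frame
print(GBufferFusionUpscaler()(gbuffer, shaded_lr).shape)  # (1, 3, 360, 640)
```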

    • @derbybOyzZ
      @derbybOyzZ 17 hours ago +1

      At 10 ms per frame, what hardware did they use? Will new hardware on future GPUs make this a piece of cake?

    • @ShadonicX7543
      @ShadonicX7543 16 hours ago +5

      Yeah, there's just no way it could have guessed what those things should have looked like at that low a resolution. I figured it had to be something like this.

    • @sizzlepants
      @sizzlepants 14 hours ago

      Ahh, very good explanation. I was hoping you could feed in 2k and get 16k output, but it sounds like the original assets are the limiting factor.

    • @x70235868
      @x70235868 10 hours ago

      Sometimes the comment section is very valuable.

    • @skaterkolesch
      @skaterkolesch 9 hours ago

      @derbybOyzZ I don't think so yet; DLSS, for example, cost 0.8 ms.

  • @Mkaltered
    @Mkaltered 1 day ago +111

    I’m watching on 144p and I’m impressed

  • @IAm-n7d
    @IAm-n7d 3 days ago +144

    No more blurry videos of UFOs and other strange phenomena, finally!

    • @emircanerkul
      @emircanerkul 1 day ago +18

      Nope! You didn't understand the concept of this. Upscalers have biases, just like us.

    • @shirowolff9147
      @shirowolff9147 1 day ago +34

      @emircanerkul The joke flew over your head, just like a UFO.

    • @sealwheel
      @sealwheel 1 day ago

      this made my morning

    • @stevenwessel9641
      @stevenwessel9641 1 day ago

      I promise, next year the graphic design trend will be that 270p look, and it will be hard as hell to accomplish (well, maybe not with AI, but y'all get it). We scale up to scale back 🔄

    • @DG_5856
      @DG_5856 7 hours ago

      This is for games, so it takes geometry data from the meshes into account.

  • @Napert
    @Napert 22 hours ago +68

    Now even more hardware requirements can be crammed into blurrier and lower resolution images with a ton of motion blur
    What a time to be alive!

    • @Napert
      @Napert 22 hours ago +18

      A "AAAA" game in a few years using this paper:
      Minimum system requirements:
      Intel Processor Ultra 9 9999X (or whatever they're calling it now)
      RTX 6090 SUPER Ti Ultimate Edition
      64 GB of RAM
      450 GB free on a 12,000 MB/s NVMe SSD
      Graphical settings:
      1080p
      Advanced AI upscaling (ultra performance setting (72p source))
      DLSS Frame Gen v7
      AdvancedTAA 2.6
      Graphical settings preset: minimum
      Target fps (not guaranteed): 25fps

    • @jvbfjvbf
      @jvbfjvbf 21 hours ago +13

      Minimum requirements:
      NVIDIA RTX 7060 Upscaling Edition
      Then the game renders natively at 270p 30 fps because the devs don't need to optimize anymore since AI upscales your game to its guesses about the pixels on your screen.

    • @natan_amorim_moraes
      @natan_amorim_moraes 15 hours ago +3

      Ah yes, gaming in 2025: games requiring a 5080 to run, using blurry TAA.
      What a time to be alive!

    • @Vorexia
      @Vorexia 15 hours ago

      Womp womp

  • @2Goood
    @2Goood 1 day ago +33

    Looks promising, Thanks Karen, Jonah and Fahir

    • @mrburns366
      @mrburns366 23 hours ago +4

      It's Karen Jonafa here! 😅

    • @davidl.e5203
      @davidl.e5203 11 hours ago +1

      You forgot et al

  • @meemdizer
    @meemdizer 22 hours ago +14

    Finally, the phrase “enhance the image” in movies will make sense.

    • @sizzlepants
      @sizzlepants 14 hours ago

      It does because, as @cube2fox states, the full texture and geometry detail is used. So, you are not enhancing anything except lighting and shading.

    • @x70235868
      @x70235868 10 hours ago

      All so we can find Mr Swallow's business address.

    • @EverRusting
      @EverRusting 2 hours ago +1

      Man you beat me to it

    • @meemdizer
      @meemdizer 1 hour ago

      @ 😂😂

  • @dlaiy
    @dlaiy 1 day ago +40

    Even less optimization for game performance due to lazy development options and blurry smeared games!
    What a time to be alive!!
    Impressive tech but not getting the desired result.

    • @cube2fox
      @cube2fox 20 hours ago +1

      You are getting it wrong. This very technique is a case of optimization: it allows you to render more efficiently, which can go into higher resolution, higher frame rate, or other effects (like ray tracing).

    • @aarrcchhoonntt
      @aarrcchhoonntt 20 hours ago +2

      @cube2fox And it remains strictly inferior to native frame generation, which you don't see because of the carefully selected scenes: the camera is moving very slowly and there is no vegetation. That isn't a dealbreaker until you assume the AI upscale/AA combo is mandatory and that your unprocessed effects contain flaws you expect the upscaler to smear out - which is exactly the issue being discussed.

    • @mirukuteea
      @mirukuteea 14 hours ago +1

      @cube2fox It's still 10 ms, probably on a super beefy GPU. The vertex-to-G-buffer pass is rendered at native resolution as well, so in its current state it doesn't seem like it will save much while achieving the same quality as ground truth.

    • @cube2fox
      @cube2fox 10 hours ago

      @@mirukuteea They say this in the paper:
      > We display runtime comparison of our method against other baseline methods with different scaling factors which target at a 1080P resolution in Fig. 7. NVIDIA TensorRT is used for acceleration and all neural networks are evaluated in FP16. LIIF [Chen et al. 2021b] is not shown in the figure as its inference cost is much higher than 100 ms. For the 4 × 4 scaling factor, our method (11.27 ms) is slightly slower than FuseSR (8.52 ms) while we show better performance than other baseline methods at 2 × 2 and 3 × 3. It is also worth noting that the performance of our method can be significantly improved with delicate engineering, such as quantizing the MLPs to INT8 or compute shaders.
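
As a rough illustration of the engineering headroom mentioned in that quote (the paper evaluates its networks in FP16 via TensorRT and suggests INT8 MLPs could help further), here is a small timing sketch; the MLP below is a stand-in, not the paper's model, and plain PyTorch is used instead of TensorRT.

```python
# Rough timing sketch: a stand-in per-pixel MLP in FP32 vs FP16 (plain PyTorch,
# not TensorRT; absolute numbers will differ a lot between GPUs and deployments).
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
mlp = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3)).to(device)
x = torch.rand(480 * 270, 64, device=device)  # one feature vector per 270p pixel

def bench(model, inp, iters=10):
    # Warm up, then report the average time per forward pass in milliseconds.
    for _ in range(3):
        model(inp)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(inp)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000

with torch.no_grad():
    print(f"FP32: {bench(mlp, x):.2f} ms per pass")
    if device == "cuda":
        # Half precision typically runs markedly faster on tensor-core GPUs.
        print(f"FP16: {bench(mlp.half(), x.half()):.2f} ms per pass")
```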

  • @RiPvI
    @RiPvI 1 day ago +15

    If I understood this correctly, this is not going to work on 2D images, but only on something that is rendered on a GPU, because it needs the G-buffer?

    • @InternetListener
      @InternetListener 22 hours ago

      You can record the video and 3D depth... like the Xbox Kinect...

    • @cube2fox
      @cube2fox 20 hours ago +1

      Exactly. It needs the full resolution data from the G-buffer.

  • @MrAndidorn
    @MrAndidorn 1 day ago +37

    Wow, it even knows where to place warning signs on the wall at 6:29 :)

    • @nephastgweiz1022
      @nephastgweiz1022 1 day ago +9

      Yes it's a bit weird. Look closely at 0:06, the signs are there but only appear when the camera gets closer to the wall, due to the low resolution.

    • @tim4375
      @tim4375 1 day ago +12

      I believe these models are trained specifically on particular games, hence they know where things should be. Tbh it's a good idea for handhelds, for example: you could render at 360p and neurally upscale to FHD/2K without strain.

    • @AngryApple
      @AngryApple 6 hours ago +1

      This technique only renders the shading pass at a lower resolution; the geometry, depth and texture buffers are at full native resolution. Shading is usually the most expensive step in computer graphics, so by rendering this pass at a lower resolution and then upscaling it, you could potentially save a lot of resources while not degrading the quality that much.
      Some effects like ambient occlusion are already upsampled to make them cheaper, but this technique basically does that for all of the shading. It's similar to adaptive shading, but for every pixel, not just specific parts that are recognised as less important by a magic algorithm that often doesn't work correctly and pixelates parts of the image. So maybe this technique could be combined with that to bring those lower-resolution shaded pixels to a near-native representation, saving somewhere around 10-15% in resources.
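
A back-of-the-envelope way to see why shading at a lower resolution helps, as described above (a toy calculation that assumes shading cost scales roughly with the number of shaded pixels; real savings depend on the rest of the pipeline):

```python
# Toy arithmetic: fraction of pixels that get fully shaded at various scale factors,
# assuming shading cost is roughly proportional to the shaded-pixel count.
target_w, target_h = 1920, 1080  # output resolution

for factor in (2, 3, 4):  # 2x2, 3x3 and 4x4 upscaling
    low_w, low_h = target_w // factor, target_h // factor
    fraction = (low_w * low_h) / (target_w * target_h)
    print(f"{factor}x{factor}: shade {low_w}x{low_h} ({fraction:.1%} of the pixels)")

# Note: the G-buffer pass and the upscaling network still run at full resolution,
# so the end-to-end saving is smaller than these fractions suggest.
```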

  • @Telhias
    @Telhias 1 day ago +7

    From the looks of the second scene, the technique seems to be creating extra detail based on external data. The warning sign on the second column started appearing (probably due to LOD) in the LR example, while the Ours example had it visible from the get-go. The third column did not have the sign present in the LR at all, yet the Ours version has it. The sign is either repeated from the sign on the first column, or its existence on the further columns is inferred from external data.
    Don't get me wrong, I believe that an upscaler that uses the uncompressed game assets to perform more accurate upscaling is a great idea; it just doesn't seem to be a pure upscaler.

    • @cube2fox
      @cube2fox 20 hours ago +2

      Exactly. The examples are misleading and honestly deceptive. It combines a (very) low resolution pixel shaded frame with high resolution frames from the G-buffer which already have the complete texture and geometry details. It just scales up the pixel shading step.

    • @joshuascholar3220
      @joshuascholar3220 5 hours ago +1

      Yeah it's bullshit. He should have noticed that there are details and coherencies that couldn't possibly be deduced from the pixels without also having the models and textures.

  • @τεσττεστ-υ5θ
    @τεσττεστ-υ5θ 20 hours ago +1

    I am going to point out that at 1:29 it is hallucinating the shape of the grass: it was short, and it changed it to be long.
    Also, the examples presented were all pretty flat scenes (the eastern village scene was the best, as you said, but it was also very boring texture-wise; it just contained some flat colors with no other real effects going on). This is to be expected, because if you had actually complicated materials with intricate textures you would be asking the AI to figure out information it has no access to.
    Don't get me wrong, the tech is really cool, but I don't really get it. This can only reliably work without artifacts for somewhat simple scenes where the AI can do less guesswork (if you are missing the pixels from a matte gray wall, you can figure out that they are also just gray). The issue is that we can already render such scenes with great ease. This would be useful for really complex scenes, I would imagine, but such scenes will probably always be artifact galore, because you are asking the AI to basically guess, and even if it is great at guessing it will make mistakes, because multiple options might be reasonable.

  • @8u88letea
    @8u88letea 17 hours ago +1

    Bro, AI is going way too fast; it's actually insane. Kinda glad I'm alive to see this, but boy, the future is very unpredictable now.

  • @emircanerkul
    @emircanerkul 1 day ago +15

    There could be different upscalers per game, each trained specifically on only that game.
    For example, GTA6 could train its model on 8K and 720p versions and map that data together. Consumer GPUs would only need to render at 720p, but the upscaler would upscale to 8K with perfect clarity and correctness without losing any details. This way GTA6 could achieve 8K resolution with just a 1060 Ti GPU (which would mostly be used for physics calculations).

    • @Steamrick
      @Steamrick 1 day ago +9

      DLSS 1.0 was trained per game. It was a shitshow, and Nvidia changed tracks to a general upscaler model as soon as they could. I suppose a big studio with an in-house game engine could do its own thing, but I think you're underestimating how much effort that would take a studio, especially since they'd have to start by hiring a dev team from scratch. Plus the expense of building your own datacenter or renting the GPU hours from Microsoft or Amazon.

  • @letsdragthecave2017
    @letsdragthecave2017 1 day ago +27

    the thing is to get these insane upscales (like the sub-1 pixel grass) you have to train the NN directly on the scenes in question. Otherwise it would have no idea what those single pixels are supposed to be. This is the CSI "enhance" meme - it's physically impossible without the NN knowing the ground truth for that specific case. You know what that means? Game devs would have to spend a bunch of $$$ and time training the upscaler on every little nook and cranny of their game, and every time they update the game they would have to re-train it!

    • @stevekitt52
      @stevekitt52 1 day ago

      Funnily enough, CSI came to my mind as well.

    • @Churdington
      @Churdington 1 day ago +6

      We're approaching a time where that may be viable, though. Hardware is still getting faster, as AI software also gets faster; it's compounding the performance, y'know. And the thing is... we're just recently starting to think of GPUs as something other than a graphics processing unit, and GPUs have exceeded Moore's law in their progression. With the way we currently design games, your GPU could be idle for half, or more, of your game's total runtime. Just sitting there doing nothing, waiting until the CPU is finished with your game's logic and it's time to render the frame. That's a HUGE amount of wasted capability, considering how fast GPUs can handle math in parallel.

    • @bananaboy482
      @bananaboy482 1 day ago +8

      Not necessarily. This is just another image reconstruction technique for real time rendering using temporal, spatial, and depth data. DLSS can achieve very similar results at a similar scale factor of 33%.

    • @Goodwin062
      @Goodwin062 21 hours ago

      What if the neural network had knowledge of all the assets used? Could that solve this issue, by training a neural network to essentially search through the image, find the assets and then predict the pixels?

    • @cowclucklater8448
      @cowclucklater8448 21 hours ago +1

      Using temporal data, it would have a much better chance of knowing without training, that is unless the scene moves too much.

  • @ShawnGoff
    @ShawnGoff 13 hours ago

    The advances in light transport rendering combined with new upscaling techniques mean we might have some astonishingly gorgeous interactive environments very soon

  • @emiliocespedes3685
    @emiliocespedes3685 21 hours ago +2

    It's funny how life is 🤯
    I have no background in graphics or computing or engineering in general.
    But I clearly remember one time back in med school (I graduated in 2018) when I was reading about how MRI scanners work and all the math behind it (in short, it's all Fourier transforms), and I wondered if it could maybe be used for this exact thing 😅

    • @kylek29
      @kylek29 20 hours ago +2

      To be fair, that is a logical assumption, given that Fourier transforms show up in so many things.
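
For anyone curious about the Fourier connection mentioned above: MRI effectively samples an image's frequency space, and a low-resolution acquisition roughly amounts to keeping only the low frequencies. A tiny NumPy sketch of that idea (illustrative only; real MRI reconstruction is far more involved):

```python
# Toy illustration: keep only an image's low spatial frequencies (roughly what a
# low-resolution acquisition does), then reconstruct with an inverse FFT.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))                 # stand-in for a ground-truth image

kspace = np.fft.fftshift(np.fft.fft2(image))   # frequency space ("k-space" in MRI terms)

mask = np.zeros_like(kspace)
c = kspace.shape[0] // 2
mask[c - 32:c + 32, c - 32:c + 32] = 1         # keep only the central (low) frequencies

lowres = np.real(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))  # blurry reconstruction
print("mean reconstruction error:", np.abs(image - lowres).mean())
```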

  • @maximilianmander2471
    @maximilianmander2471 1 day ago +1

    ahhh! another 7-minute-paper has arrived!

  • @ibgib
    @ibgib 5 hours ago

    Train video game to eye tracking -> model to prioritize focus -> super super resolution. I thought I remembered something like this on this channel before?

  • @Rollthered
    @Rollthered 22 hours ago +1

    I need this as a feature in Davinci Resolve when editing videos.

  • @MatthewSanders-l7k
    @MatthewSanders-l7k 22 hours ago

    Game changer! The new super resolution technique could really impact the gaming industry. Excited to see how this blend of Fourier-transform and deep learning progresses!

  • @prod.bykbrd5723
    @prod.bykbrd5723 16 hours ago

    What a time to be alive. This is awesome.

  • @RaspyYeti
    @RaspyYeti 1 day ago

    How does this compare to Sony's PSSR upscaling?

  • @thanatosor
    @thanatosor 13 hours ago

    It's FOSS work; I hope every vendor implements this in every game.

  • @jonred233
    @jonred233 21 hours ago

    I know this is cutting edge but I feel like something like this should have been out a long time ago. This will be a game changer for videographers!

  • @tomaszwaszka3394
    @tomaszwaszka3394 1 day ago +4

    0:15 wow :) I need it in the form of a box with HDMI ports for my console with 20,000 retro games... :) Where can I buy it? I'm joking of course, but maybe next year? Who knows?

    • @Wobbothe3rd
      @Wobbothe3rd 1 day ago

      Sooner than you think.

    • @tomaszwaszka3394
      @tomaszwaszka3394 1 day ago

      @@Wobbothe3rd I hope :)👀

  • @95TurboSol
    @95TurboSol 1 day ago

    Do you think AI can add information to old TV shows and movies to turn a 4:3 or 16:9 aspect ratio into 21:9? I was watching my Lost Blu-rays on an ultrawide monitor and had the thought that AI could fix it and fill in the rest of the screen! I sure hope it can someday.

  • @brololler
    @brololler 1 day ago +2

    enhance resolution: CSI is no longer a joke

    • @TheAilmam
      @TheAilmam 1 day ago

      CSI was ahead of its time

  • @DownwithEA1
    @DownwithEA1 6 hours ago

    This reminds me of the '90s/'00s, when TV shows would just "enhance" an image and it would magically look clear. Magic no more, I guess. Although this is missing all the beep-boop computer sounds.

  • @TropicalCoder
    @TropicalCoder 1 day ago

    This was a 7 minute bonus episode of two minute papers.

  • @RaaynML
    @RaaynML 7 hours ago

    I have been waiting for something like this since that one video about DLSS at "240p"

  • @tepafray
    @tepafray 19 hours ago

    I've always wondered if you could get around a lot of the problems of upscaling by just rendering at native resolution maybe once a second and using that to inform the subsequent upscales.
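
A toy sketch of the idea in the comment above (purely illustrative; render_native, render_lowres and upscale_with_reference are hypothetical stand-ins for whatever renderer and reconstruction model you would actually use):

```python
# Toy loop for keyframe-guided upscaling: render one native-resolution frame per
# second and reuse it as a reference for the cheap upscales in between.
FPS = 60

def run(frame_count, render_native, render_lowres, upscale_with_reference):
    reference = None
    for i in range(frame_count):
        if i % FPS == 0:
            # Once per second: pay for a full-resolution render and keep it around.
            reference = render_native(i)
            frame = reference
        else:
            # Other frames: render cheaply and let the upscaler lean on the reference.
            frame = upscale_with_reference(render_lowres(i), reference)
        yield frame
```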

  • @zander9486
    @zander9486 1 day ago

    Could we then watch streams at 144p, so we never have internet lag, while locally installed AI upscales them to 4K simultaneously? Or will rendering take too long, making the idea unfeasible?

    • @RyanGrissett
      @RyanGrissett 1 day ago

      I'm pretty sure Nvidia already allows you to do that with DLSS. If not, there is definitely a workaround to make it work. It would be much less resource-intensive than doing it while you are also rendering a game, so it is more than feasible. People already do this.

  • @guillermosanchez1224
    @guillermosanchez1224 13 hours ago

    So it's like a camrip-to-Blu-ray converter?

  • @stbodora
    @stbodora 1 day ago

    I wonder what the benefit is. Do I still need a 4090 to perform the upscaling of my 270p render in real time? Why not render at high resolution in the first place in that case?

  • @izored
    @izored 1 day ago

    Is it just me, or is that 270p the most pristine 270p they could feed the AI? Usually, in a practical, non-game scenario, a 270p/360p video comes with a truckload of grain and compression artifacts. Curious to see how this would upscale that kind of footage.

  • @AndrewShepherdLEGO
    @AndrewShepherdLEGO 1 day ago

    I've been seeing some videos of Runway Gen-3 video-to-video taking video game gameplay and turning it into real-life-looking videos. Those take about a minute to render, but a video like this makes me feel like real time might not be too far away.

  • @somezw
    @somezw 22 hours ago +1

    I feel like this video is very informative for people who already understand what's going on... but it can be a bit misleading, since this only works for 3D scenes; to the general public it easily comes across as a way to remove any sort of blur or censoring.

  • @AlexJohnson-g4n
    @AlexJohnson-g4n 14 hours ago

    Stunning! This could revolutionize gaming with low-res inputs. Excited about real-world performance.

  • @NoHandleToSpeakOf
    @NoHandleToSpeakOf 1 day ago

    It is also temporally coherent. I wonder if it works on one frame at a time or takes in several of them.

  • @germanc97-o6i
    @germanc97-o6i 1 day ago +14

    Great, now game developers can be even lazier when optimizing their games!

    • @Wobbothe3rd
      @Wobbothe3rd 1 day ago +1

      Game development is expensive enough, there were never enough programmers to go around.

    • @Giftless
      @Giftless 21 hours ago

      What, in your opinion, isn't a lazy optimization, and what, in your opinion, can game devs improve right now without touching upscalers and such?

  • @Steamrick
    @Steamrick 1 day ago

    I wonder if this model is compact enough to run natively on the 40 TOPS that the current-gen 'AI' PCs Microsoft is pushing have. That'd be great for watching old videos that are 720p or 480p or even lower. I hope something like this gets implemented in VLC.
    VLC has worked to include Nvidia RTX Super Resolution, but that puts a huge load on the GPU.

  • @sanketvaria9734
    @sanketvaria9734 1 day ago

    But will this be coming to DLSS, FSR, XeSS, PSSR and TSR?

  • @NaveenReddy-p5j
    @NaveenReddy-p5j 12 hours ago

    Really sci-fi stuff! This tech could drastically reduce costs for high-quality game development. Looking forward to more updates!

  • @eSKAone-
    @eSKAone- 1 day ago +2

    I need that for Deep Space 9

  • @Uhfgood
    @Uhfgood 16 hours ago

    Honestly, I don't think we'll even need low res versions in the future. It will just do it.

  • @davidm2.johnston684
    @davidm2.johnston684 21 hours ago

    Good job on this video! Thanks

  • @byGDur
    @byGDur 5 hours ago

    We need this for old youtube videos.

  • @Temac000
    @Temac000 1 day ago +20

    This is all amazing, but honestly, the gaming industry would've been in much better shape if no AI upscalers had ever been created. There are still optimised games, but nowadays, instead of making the game run at 60+ fps by itself, a lot of game devs just target 60 fps with upscalers; in other words, you'd at best have 30 fps native. Really sad to see that become the norm.

    • @Angel-Azrael
      @Angel-Azrael 1 day ago +5

      You don't have a clue how significant AI upscaling is.

    • @Steamrick
      @Steamrick 1 day ago +8

      That's rose-tinted glasses doing the talking. There have always been poorly optimized games. Consoles have been running below 'native' (or maybe call it target) resolution since long before DLSS and other AI upscalers became a thing.
      For example, the PS3 was marketed as a 1080p console, but games mostly ran at 720p or even 576p to hit 30 fps. Except it didn't have a good upscaler, so you had to hope your TV was up to the task of producing a decent image. Same for the Xbox 360. The next gen of consoles also often ran below 1080p, though usually 900p, 792p or 720p. I think that console generation is also when games started using dynamic resolution.

    • @Wobbothe3rd
      @Wobbothe3rd 1 day ago +6

      AI upscaling itself is not the culprit for poor optimization. AI upscaling is the scapegoat for budgets that can't hire programmers, Epic's lack of care for products other than Fortnite, and the structural dependency of developers on pre-made assets (especially the UE marketplace). Also, the RDNA2 GPUs in consoles are really weak compared to Nvidia GPUs; no amount of optimization will turn 10 teraflops into a PC-level GPU. AMD really is falling behind Nvidia.

    • @nephastgweiz1022
      @nephastgweiz1022 1 day ago +1

      True. I've been playing Marvel Rivals recently, and the frame interpolation is a mess. Granted, my computer is not that good anymore, but everything looks fuzzy to me.

    • @Temac000
      @Temac000 1 day ago +2

      @Wobbothe3rd While it is obvious that PC GPUs are much faster, 10 teraflops is plenty to make modern games run and look much better than past-gen games. And yes, AI upscalers are not the culprit, but they are the tool that allows game devs to disregard optimisation. Just look at GTA 5: it came out on the PS3 and Xbox 360, so it was limited to 512 MB of shared RAM/VRAM and a 0.2 TFLOPS GPU, yet it featured a massive open world with dynamic lighting etc. And nowadays there are games that require 10-20 times more processing power just to look the same or even worse.
      Even with previous console upscalers, developers had to implement them and still think about the limitations of the hardware, but now there's this magical button labelled "make the game look nice" with Lumen, ray tracing, Nanite etc. behind it, and another button labelled "make it run smooth" with DLSS, frame gen etc. behind it. AI upscalers are not the reason; they just allow the devs to take shortcuts for which the player pays (both literally and figuratively).

  • @Chef_PC
    @Chef_PC 12 hours ago

    Imagine asking your GPU-based AI to interpret Skyrim, but from an Old West theme...

  • @cyberhodl
    @cyberhodl 1 day ago +5

    a blurry mess. wow gameplay

  • @pengiswe
    @pengiswe 1 day ago +2

    This is cool. Would love to see it, or something similar, being used on old Monkey Island, or other scummvm-like games, and see how far it is possible to push it.

    • @DeltaNovum
      @DeltaNovum 1 day ago

      This one is meant for 3D scenes, since it looks at the G-buffer.

    • @pengiswe
      @pengiswe 1 day ago +1

      @DeltaNovum That's why I mentioned "or something similar". Innovation and development don't come from saying what's not possible, but from thinking about how to make it possible instead. Would still be cool to see :)

    • @DeltaNovum
      @DeltaNovum 22 hours ago +1

      @pengiswe Right... My bad, sorry 😅. That was me reacting too quickly while not feeling too well at that moment.
      These things already exist for 2D art (fast enough), but I don't know of an easy solution for running them with games in real time.
      I'd love to see it too, btw. If it's able to keep to the intended style, I do not mind enhancing original graphics. I listen to high-quality stereo signals processed by a Dolby Atmos chip, for example. Not original or the way it was intended to be listened to, but in most cases I enjoy the experience waaaaaaay more.

  • @sarwar.266
    @sarwar.266 22 hours ago

    Well, we are so close to those sci-fi movies where the security cameras could turn a blurry picture into a clean 4K picture.

  • @sinephase
    @sinephase 21 hours ago

    So you just lose fine details but get a sharper, smoother upscaled version. Nice for people with lower-end cards.

  • @feathersm7966
    @feathersm7966 1 day ago +4

    Looks terrible in fast motion though??? Literally all video games and most videos have a lot of motion.

    • @mrlightwriter
      @mrlightwriter 21 hours ago

      Does minesweeper have lots of motion?

  • @cptairwolf
    @cptairwolf 1 day ago

    This doesn't excite me that much for 240p to 1080p, but it's going to be awesome for upscaling 4K games up to 8K without taxing future GPUs as much as it otherwise would.

  • @3djramiclone
    @3djramiclone 1 day ago

    Can we test it or not ????

  • @CharlotteLopez-n3i
    @CharlotteLopez-n3i 22 hours ago

    Impressive super resolution tech for gaming! Speed and image quality are promising, though thin structures and fog/particles remain tricky. Future of gaming just got bright!

  • @Capeau
    @Capeau 18 hours ago

    It probably uses information like normals, motion vectors, the z-buffer, etc., not to mention it has multiple frames to extract data from. This makes it far 'easier' to get great results, since there is a lot more data to work with.

  • @eugeneponomarov7429
    @eugeneponomarov7429 5 hours ago

    That means even fewer optimisations from developers, more blur and smearing, less actual gameplay and more microtransactions - nice.

  • @BonBonBonni
    @BonBonBonni 21 hours ago

    We need this for VR Headsets like Quest 3!

  • @SuperFurias
    @SuperFurias 18 hours ago

    I don't understand how to use this thing, though. Can I use it on still images or videos outside of software like Blender? On the GitHub there is nothing at all, and the GitHub page is 7 months old, so why have they done nothing in 7 months? Is it even a real thing we can use?

  • @AshtonvanNiekerk
    @AshtonvanNiekerk 1 day ago

    Imagine doing something like this on old computer games made for Voodoo Graphics cards. Instant Definitive edition for all your favorite games.

  • @juanromano-chucalescu2858
    @juanromano-chucalescu2858 20 hours ago +1

    I don't see how this is an improvement if it is still blurry. The goal is to upscale video, not images.

  • @WinonaNagy
    @WinonaNagy 1 day ago

    Wow! From 270p to stunning visuals? This super resolution tech could totally change gaming. How real-time ready is it?

    • @GraveUypo
      @GraveUypo 21 hours ago

      More like 270p to 1080p through the RealVideo codec: it looks smeary and garbage. Better than 270p, but I'd take native 720p over this.

    • @dfhiklnoqr
      @dfhiklnoqr 20 hours ago

      @GraveUypo It's from 270p. I'm assuming it's better if you upscale from more usual resolutions like 720p or 1080p.

  • @danekincade6299
    @danekincade6299 17 hours ago

    Can you go over the Genesis AI simulation?

  • @kalidesu
    @kalidesu 20 hours ago

    I wonder how good it would be for low-information vector-based images (like SVGs); that would be a game changer.

  • @MrYerak5
    @MrYerak5 1 day ago

    Can it upscale PS1 games like Metal Gear?

  • @fackarov9412
    @fackarov9412 7 hours ago

    The first thing I do in video games is remove the motion blur, and with these techniques I will have to play with constant blur and the consequent eye strain. Let's hope for the best.

  • @monkeystealhead
    @monkeystealhead 20 hours ago

    - Why aren't false-color pictures used to show how well a super resolution method performs compared against ground truth?
    - Why is there no specular in the renderings?
    - It is impressive that it works with such a low resolution; I thought SR was more useful if you want to go from Full HD to 4K, for example.
    The need for powerful GPUs will never go away. If people have access to more resources, they will use them.

  •  1 day ago

    This is awesome! I will finally be able to use my S3 Trio64V+ to its full potential and render something at 12p while upscaling it to a bajillion pixels! What a time to be alive... 🤡

  • @punk3900
    @punk3900 1 day ago

    was it trained on the same game?

  • @NeroX-nh8se
    @NeroX-nh8se 1 day ago +3

    At 1:40 you are contradicting yourself... How could a cheapo GPU process this on the fly?

    • @MrNote-lz7lh
      @MrNote-lz7lh 1 day ago

      Sounds like that's just your presumption.

    • @NeroX-nh8se
      @NeroX-nh8se 20 hours ago

      @MrNote-lz7lh How is it my presumption? Cheap GPUs currently available can't perform AI tasks. Have you performed super resolution using LoRA models? Do you know what the minimum GPU spec is?
      And this video says it is 10x faster; that doesn't mean it is running in real time. If it is not real time, then it is not applicable to gaming, and if you want real time, the minimum GPU spec will be even higher.

  • @tirednsleepy44
    @tirednsleepy44 18 hours ago

    I can make out the low resolution, but my first computer was an Acorn Electron in the early '80s.

  • @xpdatabase1197
    @xpdatabase1197 23 hours ago

    Wow, can't wait for every company to misuse this again and again, to give us unoptimized experiences that look blurry on modern hardware. What a time to be alive. Fuck me.

  • @agustinpizarro
    @agustinpizarro 23 hours ago

    I was expecting your two minutes on the Genesis 4D simulation.

  • @_John_P
    @_John_P 15 hours ago

    To be fair, that's the best 270p I have ever seen.

  • @ShadonicX7543
    @ShadonicX7543 16 hours ago

    I wonder if things like this can combine nicely with the NVIDIA 5000 series' upcoming neural rendering (AI-compressed assets) to make it even faster.

  • @Sp0ntanCombust
    @Sp0ntanCombust 5 hours ago

    Cool! I can't wait for game developers to ignore optimization even further!

  • @torarinvik4920
    @torarinvik4920 1 day ago +2

    What I actually hope will be the future is something like GeForce NOW. Cloud gaming would make it possible to have a huge library of games in extreme fidelity running on super expensive hardware. It would be a much better use of resources, because most of the time we are not using our own private GPUs.

    • @juanromano-chucalescu2858
      @juanromano-chucalescu2858 20 hours ago

      You still have to pay for the hardware one way or another

    • @torarinvik4920
      @torarinvik4920 20 hours ago

      @juanromano-chucalescu2858 Yes, of course. I'm talking about a rental service, similar to GeForce NOW.

  • @aNewAgeWorld
    @aNewAgeWorld 5 hours ago

    Apparently AGI is a thing now! When will you do a video talking about it?

  • @e1622zelda
    @e1622zelda 17 hours ago

    I think the problem is that if it does not see the original image in perfect quality, it will always produce a great-looking version of a different image than the original. It's never going to be magic; you can't get something out of nothing.

  • @dantemeriere5890
    @dantemeriere5890 1 day ago

    Wow, it's just like that technology that already exists and is in widespread practical use, just worse! Two more papers down the line and we will finally get what NVidia gave us 5 years ago! What a time to be alive!

  • @skelun
    @skelun 5 hours ago

    Expectation: A 1060 running a next-gen game
    Reality: A 4090 struggling to run 4k 30fps because devs aren't optimizing anything anymore due to this

  • @berkeokur99
    @berkeokur99 1 day ago

    What if every game developer trained the super resolution model on their game's high-res assets and provided a custom super resolution model optimized for their game?

  • @Steamrick
    @Steamrick 1 day ago

    We're still a long way off from replacing high-resolution textures with super resolution ML. The method shown here might be better than all before it, but the textures are the most obvious weakness. It looks a lot more 'anime' compared to the original.

    • @Wobbothe3rd
      @Wobbothe3rd 1 day ago +1

      Also, this lacks motion vectors; it's guessing the optical flow from the motion. Obviously, that makes the model MORE impressive, though!

  • @CastleRene
    @CastleRene 1 day ago

    Sorry guys, I can barely even hate on this one. One proper use of AI on a smaller scale.

  • @Blubb5000
    @Blubb5000 19 minutes ago

    How exciting!

  • @AlucardNoir
    @AlucardNoir 3 hours ago

    16x from 270p to 1080p, or from 1080p to 8K, which this is actually intended for. But the 270p-to-1080p case is a lot more impressive.

  • @cbuchner1
    @cbuchner1 1 day ago

    This could be great for making classic games look great in emulators.

  • @SurirPi7
    @SurirPi7 13 hours ago

    The next generation of Nvidia GPUs might also have a new rendering technique. I think it was called neural rendering on the leaked shop page.

  • @EverRusting
    @EverRusting 2 hours ago

    CSI "zoom in" is becoming real

  • @mister_r447
    @mister_r447 5 hours ago

    Waiting for you to talk about the Genesis AI physics engine!
    Also o3 by OpenAI!

  • @TheAkdzyn
    @TheAkdzyn 1 day ago

    It would be great to finally compress games for smaller file sizes and less resource-intensive performance.

  • @cmdr.o7
    @cmdr.o7 16 hours ago

    While it's impressive to have this new research, what often ends up happening in reality is less crisp, more blurry and overall downgraded visual quality in games.
    Companies start to rely on it as a crutch. 'Almost good enough' is not good enough - it needs to be equal or better.
    It's great for small devices, phones and laptops, but it is a clear overall downgrade to rely on these upscalers and AA models.
    They essentially destroy fine details in graphics, perhaps unnoticeable to the average PhD researcher, but not to graphics artists.
    It may work fine on the average ubislop game with a lot of training data, but it could potentially ruin anything outside the normal distribution (which is where good graphics actually exist).

  • @briananeuraysem3321
    @briananeuraysem3321 22 hours ago

    Games are going to get so much more unoptimized...

  • @eSKAone-
    @eSKAone- 1 day ago +4

    Everything over 3 ms is way too long for a video game.

    • @briananeuraysem3321
      @briananeuraysem3321 22 hours ago

      Agree

    • @mrlightwriter
      @mrlightwriter 21 hours ago +1

      Are you saying that 80fps doesn't cut it?

    • @GraveUypo
      @GraveUypo 21 hours ago +1

      @mrlightwriter 80 fps only if the GPU takes literally zero time to render the entire image other than the upscaling.
      If you had, say, 60 fps on the original image, turning this on would send your frame rate to about 34 fps, and that's only if it didn't slow down the rendering of the original image, which it would. It would probably cut it down to something like 27 fps in the real world, at which point you're much better off upscaling from 720p with one of the existing upscalers.

    • @mrlightwriter
      @mrlightwriter 21 hours ago

      @@GraveUypo Ok, that makes sense.
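
The arithmetic behind @GraveUypo's estimate is easy to check (a quick sketch that assumes the roughly 11-12 ms per-frame cost quoted from the paper simply adds onto the existing frame time):

```python
# Quick frame-time check: a fixed upscaling cost added on top of an existing frame budget.
def fps_with_upscaler(base_fps, upscale_ms):
    frame_ms = 1000.0 / base_fps + upscale_ms
    return 1000.0 / frame_ms

print(round(fps_with_upscaler(10_000, 12.0)))  # upscaler alone: ~83 fps ceiling
print(round(fps_with_upscaler(60, 12.0)))      # a 60 fps game drops to ~35 fps
```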

  • @billyoung9538
    @billyoung9538 7 hours ago

    I think the subjective part really comes into play here, because many of the GT images look to be aliased game references with literal contrast cliffs that do not typically occur in things other than video games. For this reason I prefer this method over the GT references most of the time, because it adds just enough anti-aliasing to make the objects appear more photoreal. These pixel zoom-in comparisons, in my opinion, are not as valuable as showing an aliased GT next to an anti-aliased super sample and asking people, "Which looks more real?" I would put money on the majority of people outside of some gamers picking the anti-aliased image the majority of the time; however, without seeing more side-by-sides to do a thorough comparison, this is just a speculative opinion based only on what I'm seeing here.

  • @LacklusterFilms
    @LacklusterFilms 7 hours ago

    Very impressive smart technique

  • @ryanskelton9548
    @ryanskelton9548 1 day ago

    This is very cool, but at nearly 12 ms it is still too expensive to be practical, given it's a 1080p output and most people are looking for 1440p or 2160p. I can't find what GPU this is on, which means it could be a high-end one, even a 4090, which would mean it would take much longer on lower-end GPUs. DLSS costs less than 1 ms on a 4090, and even less for 1080p. If the input is 270p, that's a 4x4 increase in resolution, or 16 times the detail... but in a 16.7 ms (60 fps) frame you only have about 4.7 ms left to render the image, and given that's close to 3/4 of the time spent on upscaling and only 1/4 on the input resolution, I'd wager you would be better off increasing the input resolution as high as possible and then using a current technology like DLSS to get a better result. Still a cool technology, and if they can make it faster and improve its temporal artifacts it could replace DLSS.