Epic's Unreal Optimization Disaster | Why Nanite Tanks Performance!

  • Published Dec 20, 2024

Comments • 1.8K

  • @ThreatInteractive
    @ThreatInteractive  4 months ago +566

    PLEASE READ FOR UPDATES & RESPONSES:
    Thank you so much for your support!
    1. As always, watch our videos in 4K (in the streaming quality settings) so our comparison details survive YouTube compression.
    2. Please remember to *subscribe* so we can *socially compete* with leading tech influencers who push poor technology onto everyday consumers. Help us spread REAL data, empowering consumers to push back against studios that blame your perfectly sufficient hardware!
    * RESPONSE TO COMMUNITY QUESTIONS:
    1. We've repeatedly seen comments attempting to explain how Nanite works, arguing that quad overdraw isn't relevant to it. That's the whole point of the video: comparing Nanite (which doesn't rasterize in quads) against quad overdraw is the only contextually fair comparison.
    Additionally, many have claimed that Nanite has a "large but flat and consistent cost". This is utterly false. Nanite can and does suffer from its own form of overdraw (though not quad related). A major issue people are missing involves Virtual Shadow Maps, which are tied to Nanite.
    Nanite's shadow method not only re-renders your scenes at massive resolutions, but these maps are also re-drawn under basic scenarios typical in games, such as moving the CAMERA, shifting the SUN/MOON, or having moving objects or characters spread across your scene. Does that SOUND like good performance to you? News flash… it's not.
    Even Epic Games admitted VSMs performed terribly in Fortnite, but instead of accepting that they weren't fundamentally a good fit, they "bit the bullet" and used them anyway. Except they didn't really bite anything... consumers did.
    2. To those defending Nanite because it saves development time: we are fully aware of that. We have stated this constantly in previous videos and comments. We have also said this is a great thing to work towards.
    What these ignorant people fail to grasp is that Nanite is a FORCED alternative, born of a workflow deficiency in legitimate mesh optimization.
    3. Like we stated in our 'Fake Optimization Video', pro-Nanite users fail to recognize the CONTRADICTION Nanite causes in "visual fidelity".
    If a technology has such a massive domino effect on performance that you end up needing a blurry, detail-crushing temporal upscaler to fix the framerate, you smear away all that detail anyway and get a distorted presentation. And if you were to explore CHEAP deferred MSAA options instead, all the subpixel detail Nanite makes possible, plus VSMs' gross use of soft shadow sampling, promotes temporal aliasing and reliance on flawed TAA/SS.
    4. The test shown at 3:36 reveals a workflow deficiency rather than an implementation issue. Unreal *does support per-instance LOD selection, but the engine defaults to ISMs (Instanced Static Meshes), which don't support LODs.* UE5's HISMs (Hierarchical Instanced Static Meshes) do, but the developers have not made them as accessible and have not produced a system that combines all these meshes with precomputed asset-separation culling; see the sketch below. Before anyone complains about "duplicated" assets and increased file size, we encourage viewers to research how Spider-Man PS4's open worlds were managed.
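
    A minimal sketch of the HISM path we mean (the actor/method names are hypothetical; the component calls are standard UE C++), routing instances through UHierarchicalInstancedStaticMeshComponent so the cluster tree can cull them and pick LODs, instead of the LOD-less ISM default:

      #include "Components/HierarchicalInstancedStaticMeshComponent.h"

      void AMyRockField::BuildInstances(UStaticMesh* RockMesh)
      {
          // HISM builds a cluster tree: instances are culled and LOD-selected per
          // cluster, which a plain UInstancedStaticMeshComponent will not do.
          UHierarchicalInstancedStaticMeshComponent* Hism =
              NewObject<UHierarchicalInstancedStaticMeshComponent>(this);
          Hism->SetStaticMesh(RockMesh); // the mesh asset needs authored or generated LODs
          Hism->SetupAttachment(RootComponent);
          Hism->RegisterComponent();

          for (int32 i = 0; i < 1000; ++i)
          {
              const FTransform Xf(FRotator::ZeroRotator, FVector(i * 500.f, 0.f, 0.f));
              Hism->AddInstance(Xf);
          }
      }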

    • @QuakeProBro
      @QuakeProBro 4 months ago +124

      @@HANAKINskywoker "...to a problem that never NEEDED TO EXIST" is what he said. We would not need DLSS that badly if the industry pushed fundamental optimization methods instead of upscalers.

    • @QuakeProBro
      @QuakeProBro 4 months ago

      @@HANAKINskywoker Yes, it is true that as games get bigger and bigger, optimization gets harder. But this is why things like the proposed AI tools would really shine.
      As long as Nanite is "you can now render a rock with 10 million tris, and performance is (in the context of our other new features) better than before, except it really isn't, because the base cost is vastly higher. But hey, here is the magic fix: upscalers!" rather than "you can now render 10 million rocks with near-infinite draw distance, and the base cost is the same or even less than before", it creates a problem that shouldn't exist.
      There is much more work that needs to be done, and Epic should provide support for those who want a smooth transition, but instead they only really push the features that sound great in marketing.
      My own project went from 120fps in UE4 to about 40fps in UE5 on a map that only has a fucking cube and a floor.
      Upscalers are definitely not evil and can be super useful, but they hurt the visuals too much to be the only answer you get when performance is bad.

    • @V1vil
      @V1vil 4 months ago +7

      Comparison details were still noticeable while watching on an Xbox 360 in 720p. :)

    • @Navhkrin
      @Navhkrin 4 months ago

      @@RandoYouTubeAccount Google -> unreal.ShadowCacheInvalidationBehavior

    • @Navhkrin
      @Navhkrin 4 months ago

      @@RandoYouTubeAccount YouTube is removing my links, but simply search "shadow cache invalidation behaviour"
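
      (On recent engine versions this is also exposed as a per-primitive property; a rough sketch from memory, so verify the names against your engine source:)

        #include "Components/StaticMeshComponent.h"

        void ConfigureForVsmCaching(UStaticMeshComponent* Mesh)
        {
            // Tell Virtual Shadow Maps this primitive won't move, so its cached
            // shadow pages aren't invalidated every frame.
            Mesh->ShadowCacheInvalidationBehavior = EShadowCacheInvalidationBehavior::Static;
            Mesh->MarkRenderStateDirty(); // push the change to the render proxy
        }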

  • @pyrus2814
    @pyrus2814 3 months ago +1866

    DLSS was originally conceived as a means to make real-time raytracing possible. It's sad to see how many games today rely on it for stable frames with mere rasterization.

    • @gudeandi
      @gudeandi 3 months ago +63

      tbh 99% of all players (especially in single player) won't see a difference.. and that's the point.
      Get 90% of the result by investing 10%. The last 10% costs you sooo much more and often isn't worth it.
      imo

    • @beetheimmortal
      @beetheimmortal 3 months ago +370

      I absolutely hate the way modern rendering went. They switched from Forward to Deferred, but then broke anti-aliasing completely, introduced TAA which sucks, then introduced DLSS, which is now MANDATORY in any game to run at a sort-of acceptable framerate. Nothing is optimized, and everything is blurry and low-res. Truly pathetic.

    • @dawienel1142
      @dawienel1142 3 months ago +116

      @@beetheimmortal Agreed. Can't believe that high-end RTX 40 series GPUs really struggle to run the latest games at acceptable settings and performance, especially compared to 2010-2015 games, which generally looked good and ran well on the hardware of their time.
      I feel like we are at the point of gaining very slight graphical fidelity for way too much cost these days.

    • @techjunky9863
      @techjunky9863 3 months ago +24

      @@beetheimmortal The only way to play games at proper resolution now is to buy a 4K monitor and render at 4K. Then you get somewhat similar image quality to what we had with forward rendering

    • @kyberite
      @kyberite 3 months ago

      @@griffin5734 DLAA is amazing

  • @BunkerSquirrel
    @BunkerSquirrel 3 months ago +2584

    2d artists: worried about ai taking their jobs
    3d artists: worried ai will never be able to perform topology optimization

    • @とふこ
      @とふこ 3 months ago +76

      Me: want a robot to take my job 😂

    • @GeneralKenobi69420
      @GeneralKenobi69420 3 months ago +61

      Who let the furry out of the basement 💀

    • @dimmArtist
      @dimmArtist 3 months ago +27

      3D artists worried that hungry 2D artists will become better 3D artists and take their jobs

    • @mainaccount888
      @mainaccount888 3 months ago +129

      @@dimmArtist said no one ever

    • @roilo8560
      @roilo8560 3 months ago +83

      @@GeneralKenobi69420 bro has 69420 in his name in 2024 💀

  • @iansmith3301
    @iansmith3301 a month ago +277

    The fact that Epic is telling developers to go ahead and use 200GB 3D models without optimization because Nanite "can" render them in-engine should raise huge red flags, because artists will think it's fine not to optimize. Everyone's SSD/HDD is fked as well; 300GB game installs could become the norm.

    • @SubThoRed
      @SubThoRed 25 days ago +15

      You mean "everyone's SSD" :) It's almost mandatory nowadays. These games refuse to run smoothly on an HDD.

    • @jwhi419
      @jwhi419 23 days ago +6

      @@SubThoRed If a character alone is 10GB, then yeah, you'll need an HDD every time you want to swap which game is on the SSD. Games are going to be 200GB, or extremely CPU bound because the CPU is used to compress and decompress over and over and over.
      That's a problem because CPUs are not getting much better. We might never have something four times as strong as a 9800X3D, meaning 120fps will always be stuck there even when 4K 2000Hz displays exist

    • @bennybouken
      @bennybouken 20 days ago +3

      @@SubThoRed Wuthering Waves, a game using UE4, even struggled to run from an SSD at launch. It improved over time; now it runs fine on SSDs.

    • @PauLtus_B
      @PauLtus_B 17 days ago +5

      I think nanite is really amazing tech.
      …but it is essentially an incredibly expensive optimisation technique that's beneficial for horrendously unoptimised scenarios.
      There's something paradoxical about it: it seems designed to fix a problem that shouldn't exist in the first place.

    • @kennichdendenn
      @kennichdendenn 14 days ago +2

      Btw: I'd love to have the option to use higher-res textures, like way back with the original Skyrim, where 2K textures were packaged separately as a free DLC. If you had the graphical horsepower you could run them, but you didn't have to download them otherwise.

  • @SaltHuman
    @SaltHuman 4 months ago +2796

    Threat Interactive did not kill himself

    • @ook_3D
      @ook_3D 4 months ago +243

      For real, dude presents incredible information without bias; gotta piss a lot of AAA companies off

    • @bublybublybubly
      @bublybublybubly 3 months ago +109

      @@ook_3D Epic and some twitter nerds may not be happy. I don't think the game companies care 🤷‍♀
      He might just bring them a different solution for saving money on dev time & optimization. Why would they be mad about someone who is this motivated to give them another option in their corporate oppression toolbox for free?

    • @168original7
      @168original7 3 months ago +2

      Timmy isn’t that bad lol

    • @mercai
      @mercai 3 months ago

      @@bublybublybubly Cause this someone acts like a raging asshat, makes lots of factually wrong claims, offers no solution and then tries to crowdfund to "fix" something - all from this place of being a literal nobody with zero actual experience or influence?
      Yeah, it's not maddening, but quite annoying.

    • @gdog8170
      @gdog8170 3 months ago +16

      I can't lie, this is not funny. With him revealing info like this, he might well be targeted; hopefully that never happens

  • @sgredsch
    @sgredsch 3 months ago +630

    I'm a mod/game dev who worked with the Source engine. Looking back at how we optimized for performance manually with every mesh, texture, and shader, seeing how modern studios deliver the worst garbage that runs like a brick also makes me angry. Now we're throwing upscaling at games that should run 2x as fast at native resolution to begin with.
    We are subsidising sloppy or nonexistent game optimization by overspending on overbuilt hardware, which in return gets choked to death by terrible software born of cost-cutting measures / laziness / manipulation. Nvidia is really talented at finding non-issues, bloating them up, and then selling a proprietary solution to a problem that shouldn't have been one in the first place. They did it with PhysX, tessellation, GameWorks, raytracing, and upscaling. The best partner in crime is of course the engine vendor with the biggest market share: Epic, with their Unreal Engine.
    Fun fact: the overdraw bloat issue goes back to when Nvidia forced tessellation in everyone's face. Nvidia made their GPUs explicitly tolerant of sub-pixel geometry spam (starting with Thermi, I mean Fermi), while GCN couldn't handle that abuse. The Witcher 3's x64 tessellation HairWorks sends its regards, absolutely choking the R9 290X.
    It's a shame what we have come to.

    • @e.s.r5809
      @e.s.r5809 3 months ago +153

      I've got tired of seeing "if it's slow, your hardware isn't good enough" for graphics on par with decade-old releases, struggling on the same tech that ran those games like butter. You shouldn't *need* to spend half a year's rent on a gaming PC to compensate for memory leaks and poor optimisation. The only winners are studio execs and tech shareholders. It's convenient to call your customers poor instead of giving your developers time and resources to ship viable products.

    • @googIesux
      @googIesux 3 months ago +36

      Underrated comment. This has been the elephant in the room for so long

    • @T61APL89
      @T61APL89 3 months ago

      and yet people still buy the games in record numbers, this is what capitalism rewards. throwing shit at the wall and accepting the shit smeared remnants.

    • @Online-j8e
      @Online-j8e 3 months ago +17

      "starting with thermi" lol that's funny

    • @cptairwolf
      @cptairwolf 3 months ago +16

      You're not wrong that too many studios are taking the easiest way out and skipping any sort of optimization, but let's not blame new technology for that. I'd take well-optimized Nanite structures or micro-polygon tech over LODs any day. LODs are time-consuming to create, massively increase storage requirements, and just plain look ugly. I'm not sad to see them get phased out.

  • @unrealcreation07
    @unrealcreation07 3 months ago +201

    10 years Unreal developer here. I've never really been into deep low-level rendering stuff, but I just discovered your channel and I'm glad your videos answered some long-standing questions I had, like "why is this always ugly and/or blurry, no matter the options I select?? (or I drop to 20fps)".
    With time, I have developed a kind of 6th sense for which checkbox will visually destroy my game, or which option I should uncheck to "fix" some horrible glitches... but I still don't know why most of the time. In many cases, I just resign and have to choose which scenario I prefer being ugly, as I can never get a nice result in all situations. And it's kind of frustrating to constantly have to choose between very imperfect solutions or workarounds. I really hope you'll make standards change!

    • @fleaspoon
      @fleaspoon 2 months ago +14

      you can also learn what those checkboxes actually do and solve the issues by yourself for your specific needs

    • @papermartin879
      @papermartin879 13 days ago +2

      How are you a 10-year Unreal dev doing low-level rendering stuff yet couldn't figure that out yourself?

    • @vikan3842
      @vikan3842 13 days ago +17

      @@papermartin879 "10 years Unreal developer here. I'VE NEVER really been into deep low-level rendering stuff"
      Read it again, smart guy

    • @JesterFlemming
      @JesterFlemming 2 days ago +1

      It's kind of a joke that someone with over 10 years in an engine has no fcking idea how that stuff actually works.

    • @DemsW
      @DemsW 13 hours ago

      @@JesterFlemming Engine dev covers way more than low-level rendering; let's not act like douches because someone admits they don't know everything.

  • @sebbbi2
    @sebbbi2 3 months ago +685

    Nanite's software raster solves quad overdraw. The problem is that the software raster doesn't have HiZ culling. Nanite must lean purely on cluster culling, and its clusters are over 100 triangles each. This results in significant overdraw to the V-buffer with kitbashed content (such as their own demos). But a V-buffer entry is just a 64-bit triangle+instance ID; overdraw doesn't mean shading the pixel many times.
    While the V-buffer is fast to write, it's slow to resolve. Each pixel shader invocation needs to load the triangle and run code equivalent to a full vertex shader 3 times. The material resolve pass also needs to calculate analytic derivatives, and material binning has complexities (which manifest as potential performance cliffs).
    It's definitely possible to beat Nanite with the traditional pipeline if your content doesn't suffer much from overdraw or quad-efficiency issues, and you have good batching techniques for everything you render.
    However, it's worth noting that GPU-driven rendering doesn't mandate a V-buffer, SW rasterizer, or deferred material system the way Nanite does. Those techniques have advantages, but they have big performance implications too. When I was working at Ubisoft (almost 10 years ago) we shipped several games with GPU-driven rendering (and virtual shadow mapping): Assassin's Creed Unity with massive crowds in big city streets, Rainbow Six Siege with fully destructible environments, etc. These techniques were already usable on last-gen consoles (1.8 TFLOP/s GPU). Nanite is quite heavy in comparison. But they are targeting single-pixel triangles. We weren't.
    I am glad that we are having this conversation. Also, mesh shaders are a perfect fit for a GPU-driven render pipeline. AFAIK Nanite is using mesh shaders (primitive shaders) on consoles at least, unless they use the SW raster for big triangles there too. It's been a long time since I last analyzed Nanite (UE5 preview). Back then the PC version was using non-indexed geometry for big triangles, which is slow.
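
    A toy illustration of that 64-bit V-buffer entry (the bit split here is illustrative; Nanite additionally packs depth into the word so overdraw resolves with an atomic max):

      #include <cstdint>

      // The raster pass writes only a packed ID per pixel; all material work is
      // deferred to a resolve pass that reloads the triangle afterwards.
      static inline uint64_t PackVisibility(uint32_t instanceId, uint32_t triangleId)
      {
          return (uint64_t(instanceId) << 32) | triangleId;
      }

      static inline void UnpackVisibility(uint64_t v, uint32_t& instanceId, uint32_t& triangleId)
      {
          instanceId = uint32_t(v >> 32);
          triangleId = uint32_t(v & 0xFFFFFFFFu);
      }

      // "Overdraw" to this buffer is only a redundant 64-bit write; the expensive
      // part is the resolve pass, which refetches the three vertices and
      // reconstructs derivatives for every visible pixel.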

    • @user-bl1lh1xv1s
      @user-bl1lh1xv1s 3 months ago +32

      thanks for the insight

    • @tubaeseries5705
      @tubaeseries5705 3 months ago +48

      The issue is that quad overdraw is not a big deal; modern GPUs are rarely limited by the number of triangles they output, it's almost always shaders, and Nanite adds a lot of additional work to a shader pipeline that is already occupied as hell.
      For standard graphics with reasonable triangle counts Nanite just doesn't make any sense: it offers better fidelity than standard methods, but performance is not what it can offer

    • @minotaursgamezone
      @minotaursgamezone 3 months ago +6

      I am confused 💀💀💀

    • @torginus
      @torginus 3 months ago +1

      Not an Unreal expert, but from what I know of graphics, quad rasterization is unavoidable, since you need the derivatives for the pixel shader varyings used by things like texture sampling. Honestly, it might make sense to move beyond triangles to something like implicit surface rendering (think drawing that NURBS stuff directly) for what Nanite tries to accomplish.

    • @tubaeseries5705
      @tubaeseries5705 3 months ago +14

      @@torginus Rendering NURBS and other non-primitive types always comes down to rendering primitives anyway; CAD software has always processed NURBS into triangle meshes using various methods that produce a lot of overhead. GPUs are not capable of efficiently rendering anything other than primitives; we would need a new hardware standard to render them, and that's not really reasonable

  • @shamkraffl6050
    @shamkraffl6050 20 days ago +131

    I wouldn't be surprised if NVIDIA paid devs not to optimize games, so that they can sell their GPUs.

    • @S0up3rD0up3r99
      @S0up3rD0up3r99 14 days ago

      NVIDIA's biggest customer is in deep shit for fraud.

    • @jrodd13
      @jrodd13 11 days ago +22

      I feel like devs are just overworked, underpaid, and rushed nowadays.

    • @rakedos9057
      @rakedos9057 11 days ago +3

      It has been the case on all sides (not just NVIDIA) for decades. There is nothing new.

    • @deamooz9810
      @deamooz9810 6 days ago +1

      I like that theory lol

  • @doltBmB
    @doltBmB 3 months ago +178

    Key insight every optimizer should know: draw calls are not expensive, context-switching is. The more resources that a draw call shares with adjacent draw calls the cheaper it is to switch to it. If all draw calls share the same resources it's free(!). Don't worry about draw calls, worry about the order of the draw calls.
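
    A sketch of that idea (the Bind*/IssueDraw calls are placeholders; the point is the sort key, ordered from the most to the least expensive state to switch):

      #include <algorithm>
      #include <cstdint>
      #include <vector>

      struct Draw {
          uint16_t shaderId;    // most expensive to switch
          uint16_t materialId;  // textures / uniforms
          uint16_t meshId;      // vertex & index buffers
          uint64_t Key() const {
              return (uint64_t(shaderId) << 32) | (uint64_t(materialId) << 16) | meshId;
          }
      };

      void SubmitSorted(std::vector<Draw>& draws)
      {
          std::sort(draws.begin(), draws.end(),
                    [](const Draw& a, const Draw& b) { return a.Key() < b.Key(); });

          uint64_t lastKey = ~0ull;
          for (const Draw& d : draws) {
              if (d.Key() != lastKey) {  // rebind only when state actually changes
                  // BindShader(d.shaderId); BindMaterial(d.materialId); BindMesh(d.meshId);
                  lastKey = d.Key();
              }
              // IssueDraw(d);
          }
      }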

    • @h0bby23
      @h0bby23 3 months ago

      There is still the issue of being CPU bound when submitting many draw calls with few polygons each, but otherwise, on modern hardware, yes

    • @BlueBeam10
      @BlueBeam10 2 months ago +9

      So what you're saying is, if I spawn 10,000 of the same chair, I don't even need to have them instanced, or merged into one mesh, because the 10k draw calls will equate to one? I don't know man, I tend to disbelieve that...

    • @doltBmB
      @doltBmB 2 months ago +1

      @@BlueBeam10 Theoretically, if each chair can be rendered in a single draw call, I guess, which would be a very simple chair. Instancing is good for more complex renders.

    • @BlueBeam10
      @BlueBeam10 2 months ago +3

      @@doltBmB But I thought draw calls would happen for each non-instanced mesh regardless of complexity, right? Since that chair isn't instanced, why would the engine assume the same draw call can apply to those other meshes?

    • @doltBmB
      @doltBmB 2 months ago +2

      @@BlueBeam10 The engine doesn't assume anything; most engines are designed with zero regard for these facts. It is up to the graphics programmer to batch their draw calls appropriately (a sketch of the contrast follows below). Most engines implement some static batching at best, which is an absolutely ancient way to do batching: it requires complicated preprocessing and eats bandwidth and memory
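
      To make the chair example concrete, an illustrative GL-style sketch (buffer/shader setup omitted; setChairTransformUniform is a hypothetical helper): 10,000 separate draws stay cheap if they share all state, since only the per-draw uniform update remains, while one instanced call moves even that loop onto the GPU:

        // 10,000 non-instanced chairs; shader/material/buffers bound once above:
        for (int i = 0; i < 10000; ++i) {
            setChairTransformUniform(i); // hypothetical helper: upload one transform
            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
        }

        // One instanced call: the per-chair transform is fetched per instance on the GPU.
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, 10000);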

  • @annekedebruyn7797
    @annekedebruyn7797 a month ago +25

    Not supporting instancing is horrifying to hear.
    We do that even for traditional renders.

  • @hungryhedgehog4201
    @hungryhedgehog4201 3 months ago +129

    So if I understand correctly: Nanite results in a performance gain if you just drop in high poly 3D sculpts without any touchups, but results in a performance loss if you put in models designed with the industry standard and use industry standard workflow regarding performance optimization?

    • @AndyE7
      @AndyE7 3 months ago +65

      I swear the point of Nanite was to let artists and level designers just focus on high-quality assets and scenes, because Nanite would do the optimisation for them. It was to remove the need for them to think about optimisation, LODs, how much the console can render in a scene, etc., because it would handle all of that, letting them focus on the task at hand.
      In theory you could even develop with Nanite and then do proper optimisation afterwards.

    • @hungryhedgehog4201
      @hungryhedgehog4201 3 months ago +39

      @@AndyE7 tbf it seems to do that, but it means you move from the industry standard to a new approach that works in no other engine pipeline, which binds you to the Unreal Engine 5 ecosystem, which obviously benefits Epic. That's why they push it so hard.

    • @Noubers
      @Noubers 3 months ago +41

      The industry standard is significantly more labor intensive and restrictive. It's not a conspiracy to lock in people to Epic, it's just a better work flow overall that other engines should adopt because it is better.

    • @Armameteus
      @Armameteus 3 months ago +67

      @@Noubers Except it's not because that then off-loads the rendering overhead to the end-user. To render anything in nanite more quickly than through a traditional rendering pipeline would require the end-user to have the hardware necessary to accommodate it. This is simply unacceptable and shouldn't be the case. The end-user should, within reason, be able to expect their product (like a game) to be able to function on their hardware so long as it's within decent generational tolerances because the developers should have put in the effort to accommodate the end-user. That's part of the selling-point of video games. It's _their job_ to make a product worth our purchase.
      It would be like if Epic created an entirely new type of internal combustion engine that was incompatible with every single vehicle on earth, but was marketed as "easier" to build. Epic then forces factories they have contracts with to start manufacturing this new engine type because it's "easier" for _them_ to produce it - but it's still incompatible with all vehicles, everywhere. This then means the car manufacturers need to completely upend and rebuild their entire manufacturing process to accommodate this new engine design, which costs them a ton of money. The cost of this transition then gets off-loaded onto the regular customer - you and me - that's trying to buy a new car, because the cost of manufacturing these new cars is far higher now due to the alien design of their engine, which was the fault of Epic forcing this new engine on all of their manufacturers. And, even if you end up buying it, it will still run _worse_ than a comparable model car with a traditional engine design; it cost you more and you got a worse product out of it.
      Epic is forcing nanite as the default rendering pipeline going forward, meaning *you can't opt-out of it!* As a developer, this means the overhead of rendering your game falls to the end-user. As a result, your game can only be played by users with the absolute latest, bleeding-edge machines, because nanite only _works_ on those machines, due to its insane overhead. This instantly cuts your potential playerbase to a fraction of its original potential because the cost of purchasing a machine capable of rendering your game is going to be astronomical, not just immediately, but _exponentially_ into the future (as games attempt to render more and more complexity through nanite, chasing the dragon of "photo realism").
      The only parties benefiting from this are Epic (for obvious reasons) and large development companies that have _contracts_ with Epic (for the same reasons). But small studios or indie devs? You're screwed; optimising in nanite is currently impossible and, as an indie dev, you probably don't have the hardware necessary to even render your _own game._ And the end-user? You're _double-screwed;_ you have to front the cost of the hardware to run the game _and_ the exponentially-increased cost of developing that game that anyone outside of Epic's sphere of partners had to eat in developing their game! As a result, both games _and_ the hardware needed to render them will cost more for the end-user to purchase (unless you're best buddies with the executives at Epic).
      Nanite is a complete sham and a waste of effort and resources, built upon self-imposed problems that didn't need to exist, with shoddy "solutions" to those problems, which then create _new_ problems without even fixing the _old problems_ to begin with! It's just Epic's way of pushing out anyone that doesn't have contracts with them, practically requiring corporate nepotism in order to operate within their market. It exclusively benefits them and their friends and they know it.
      InB4: "StOp BeInG PoOr!!1" because that's the _only_ argument against this nonsense.

    • @TheReferrer72
      @TheReferrer72 2 months ago +14

      @@Armameteus History is going to prove you dead wrong, and it's a pity so many of you devs just don't get that hardware always, always catches up.
      Artists and designers should not be making LODs (a kludge); they should not be combining meshes (a kludge), baking lights, or doing whatever other tricks when algorithms can do the work.
      Artists and designers should be focusing on making their games look and play good.

  • @MondoMurderface
    @MondoMurderface 3 months ago +402

    Nanite isn't and shouldn't be a game developer tool. It is for movie and TV production. Unreal should be honest about this.

    • @Rev0verDrive
      @Rev0verDrive 3 months ago +38

      Been saying this since day one of release, with testing.

    • @marcinnawrocki1437
      @marcinnawrocki1437 3 months ago +60

      Most of the new "marketing-buzzword-friendly" stuff in Unreal is not for games. If you as a game dev want the average Steam PC to run your game, you will not use those new flashy systems.

    • @Ruleta_23
      @Ruleta_23 2 months ago +15

      Games are using nanite, don't talk if you don't know the topic, what you say is absurd anyway.

    • @0osk
      @0osk 2 months ago +61

      @@Ruleta_23 They didn't say it wasn't being used in games.

    • @Rev0verDrive
      @Rev0verDrive 2 months ago

      @@Ruleta_23 Every game I've seen released with it runs like shit on high-end systems. You have to use DLSS/FSR to get 60-70 FPS

  • @sideswipebl
    @sideswipebl 4 months ago +1224

    No wonder fidelity development has seemed so slow since around 2016. It's getting harder and harder to tell how old games are just by looking, because we already figured out idealized hyper-realism around 2010 and have just been floundering since.

    • @MrGamelover23
      @MrGamelover23 4 months ago +295

      Yeah, imagine the optimization of back then with ray tracing. It might actually be playable then.

    • @metacob
      @metacob 4 months ago +230

      GPU performance is still rising exponentially, but I literally can't tell a game made today from one made 5 years ago. As a kid my first console was a SNES, my second one an N64. That was THE generational leap. The closest to that experience we got in the last decade was VR, but that still wasn't quite the same.
      To be honest though, it's fine. I've heard the word "realism" a few too many times in my life. Now it's time for gameplay and style.

    • @Fearzzy
      @Fearzzy 4 months ago +68

      @@metacob If you want to see the potential of today's games, just look at the Bodycam game; you couldn't make that 5 years ago. But I see your point: RDR2 is still the most beautiful game I've played (other than faces etc.), but that's down more to its style and attention to detail than to "raw graphics"

    • @arkgaharandan5881
      @arkgaharandan5881 4 months ago +35

      @@metacob Well, I was playing Just Cause 3 recently; the lighting has some ambient occlusion and reflections, so it looks bad by modern standards, you can see the foliage LOD popping in in real time, and unless you are running at 4K, its anti-aliasing (SMAA 2x) is not good enough to hide the countless jaggies. I'd say a better comparison is with 2018 games. Also, the 4060 is barely better than the 3060 with more RAM; low-to-mid range needs to improve a lot.

    • @slyseal2091
      @slyseal2091 4 months ago

      @@arkgaharandan5881 Just cause 3 is 9 years old. "5 years ago" is 2019 buddy

  • @Bitshift1125
    @Bitshift1125 2 months ago +19

    9:20 This dithering is so, so common nowadays and it looks TERRIBLE! Hair especially just looks like a noisy mess in most new games, and there is no way to turn your settings up enough to make them look good due to dithering. Then everyone just tells you that TAA and upscaling fix it. I don't want either of those technologies active, because they ruin the final image no matter what you do. It drives me nuts that people say "Oh, there's a visual issue? Turn on a bigger one to fix it"

  • @gandev5285
    @gandev5285 4 months ago +354

    This may or may not be true, but I actually believe Unreal Engine's quad-overdraw viewmode has been broken for a long time (atleast beyond 4.21). In a Robo Recall talk they talk about the impact of AA on quad overdraw and show that if you enable MSAA your quad overdraw gets significantly worse. Now if you run the exact same test quad overdraw IMPROVES significantly. So unless Epic magically optimized AA to produce less overdraw, the overdraw viewmode is busted. I tested the same scenes in 5.2 and 4.21 and the overdraw view was much worse in 4.21 with it actually showing some red from opaque overdraw (all settings the same). I'm not even sure if opaque overdraw can get beyond green now. I would suspect the overdraw you show from extremely high poly meshes should actually be significantly worse and mostly red or white.

    • @tubaeseries5705
      @tubaeseries5705 4 months ago +37

      MSAA by nature causes more overdraw because it takes multiple samples for each pixel, the amount varying with settings; so where previously only 2 pixels of a quad were occupied, meaning 50% overdraw, with MSAA it could be, for example, 6 samples out of 16, meaning ~60% overdraw, etc.

    • @futuremapper_
      @futuremapper_ 4 months ago +10

      I assume that Epic cheats the values when MSAA is enabled, as MSAA causes overdraw by its nature

    • @JorgetePanete
      @JorgetePanete 3 months ago +1

      at least*

    • @neattricks7678
      @neattricks7678 3 months ago +5

      It is true. Unreal sucks and everything about it is fucked

    • @FourtyFifth
      @FourtyFifth 3 months ago +12

      ​@@neattricks7678 Sure it does buddy

  • @florianschmoldt8659
    @florianschmoldt8659 4 months ago +156

    The current state of "next gen visuals" vs fidelity is indeed questionable. Many effects are lower-res, stochastically rendered with low sample counts, dithered, upscaled. Compromises on top of compromises, with frame generation in between. Good enough at 4K and 60fps, but if your hardware can't handle it, you'll have to live with a blurry, pixelated, smeary mess. My guess is that Nvidia is fine with that. As much as I like Lumen & Nanite in theory, I'm not willing to pay the price.
    To be fair, it isn't the worst thing to have next-gen effects available, but there is a huge disconnect between what gamers expect a game to look like based on trailers and how it feels to play at 1080p and 20fps.
    JFZfilms defines himself as a filmmaker and isn't great at communicating that he and his 4090 don't care much about realtime visuals or optimization. Tons of path tracing vs Lumen videos, and a confused game-dev audience when they accidentally learn that both went through the render queue with maxed settings.

    • @snark567
      @snark567 3 months ago +24

      Gamers confuse realism and high fidelity visuals for good graphics. Meanwhile a lot of devs use realism and graphical advancements as a crutch because they lack the imagination to know how to make a game that looks good without these features.

    • @florianschmoldt8659
      @florianschmoldt8659 3 months ago +25

      @@snark567 I'm a graphic artist myself, and as much as I love good art direction, realism definitely has its place. But I get the point; it for sure shouldn't be the only definition of next-gen visuals.
      It has become much easier to gather free photoscanned models than to create your own in an interesting style. Devs rely on Nanite over optimized assets, and even games with mostly static light sources use Lumen over "good old" lightmaps, even where lightmaps could look just as good and make the difference between 120 and 30fps.
      And Nvidia is like "Here are some new problems...how about a 4090 to solve them?"

  • @GraveUypo
    @GraveUypo 4 months ago +430

    This is the type of content I wish people would watch, but they don't. I'm tired of hearing that Nanite is magic, that DLSS is "better than native", and whatnot. When I made a scene in Unreal 5, it ran like trash on my PC with a 6700 XT at the time, and there was almost nothing in the scene, even at 50% render scale. It was absurdly heavy for what it was.

    • @SuperXzm
      @SuperXzm 4 months ago +73

      Oh no! Think about shareholders! Please support corporations play pretend!

    • @chengong388
      @chengong388 4 months ago +59

      DLSS is better than native because it includes anti-aliasing and native does not…

    • @FlippingLuna
      @FlippingLuna 4 months ago +13

      Just return to forward shading, or even the mobile forward renderer. It's kinda good, and a pity that Epic did solid optimization work on the mobile forward renderer but still hasn't ported it to desktop forward

    • @Jakiyyyyy
      @Jakiyyyyy 4 months ago +65

      DLSS is INDEED sometimes better than native, because it uses its own anti-aliasing rather than the default, poorly implemented TAA that blurs games. If DLSS makes a game sharper and better, I'll take it over forced TAA. 🤷🏻‍♀️

    • @dingickso4098
      @dingickso4098 4 months ago +83

      DLSS is better than native, they think, because they tend to compare it to the blurfest TAA that is sometimes force enabled.

  • @IBMboy
    @IBMboy 4 months ago +367

    I think Nanite is innovative technology, but it shouldn't replace the old style of rendering for videogames, as it's still experimental in my opinion

    • @matiasinostroza1843
      @matiasinostroza1843 4 months ago +60

      Yeah, and it's not made for optimization but for the sake of looks. There is almost no transition between LODs: in normal games, when you get far enough away you can see the mesh change to lower-poly models, but with Nanite this is almost imperceptible, so in that sense it's "better". That's why it's being used for "realistic graphics".

    • @Cloroqx
      @Cloroqx 4 months ago +40

      Baseless opinions by non-developers. "Old style of rendering videogames".
      What is your take on the economy's sharp downturn due to a lack of rate cuts by the FED, opinionated one?

    • @vendetta1429
      @vendetta1429 4 months ago +86

      @@Cloroqx You're coming off as the opinionated one, if you didn't know.

    • @pizzaman11
      @pizzaman11 4 months ago +9

      Also, it has a lower memory footprint and you don't need to worry about creating LODs, which is perfect for large games with a large number of unique models.

    • @inkoalawetrust
      @inkoalawetrust 4 months ago +48

      @@Cloroqx What are you even talking about.

  • @GonziHere
    @GonziHere 4 months ago +104

    Interesting video, but isn't Nanite using its own software rasterizer for the small triangles exactly because of that issue? I'm pretty sure they said so when they started talking about it; it's in their tech talks, etc.
    This test feels somewhat fishy to me. Why not compare the normal scene instead of a captured frame, for example? That skips part of the work in one case while using the full pipeline in the other...

    • @AdamKiraly_3d
      @AdamKiraly_3d 4 months ago +56

      I would love to get my hands on that test scene to give it a spin. I've been working with Nanite in AA and AAA settings for almost 3 years now, and while it was an absolute pain to figure out the quirks and the correct workflows, it has overall been a positive thing for production.
      In my experience Nanite REALLY struggles with anything using masked materials or any sort of Pixel Depth edits.
      I've also seen perf issues when most meshes were "low poly" in the sense that the triangles are very large on screen; I vaguely remember the SIGGRAPH talk mentioning that larger triangles can rasterise slower because they take a different raster path.
      Nanite handling its own instancing and draws moves a lot of the cost off the CPU onto the GPU, so the base cost being higher should surprise no one. It is also very, VERY resolution dependent. The higher you go in res, the more extreme the cost of Nanite (and VSM, Lumen) gets, on top of the general cost increase of a larger res. I've grown to accept that the only way forward with these new bits of tech is upscaling. I'm not happy about that as a dev, but as a gamer I couldn't care less and use it everywhere.
      VSM has similar issues, since for shadows you effectively re-render the Nanite scene, but there are ways to optimise that with CVars (see the sketch below), and generally it has performed better than traditional shadow maps would when used with Nanite. There's a caveat there: if you bring foliage into the mix, Nanite can get silly expensive both in view and in shadows.
      Collision memory has also been a major concern, since for complex collision UE uses the Nanite fallback mesh by default, so in a completely Nanite scene you can end up with more collision memory than static mesh memory.
      I also feel like having a go at Epic for not maintaining "legacy" feature compatibility indefinitely is a bit unfair. Both VSM and Lumen rely on Nanite to render efficiently and were created as a package. Epic decided that is the direction they want to take the engine, an engine that is as complex as it is large; it is almost expected to lose some things along the way. That being said, I have run into many things that I wish they cared enough about to fix (static lighting, for example, has a ton of issues that will never be fixed because "just use dynamic lighting"), but at the same time I won't have a go at them for not supporting my very specific use case that doesn't follow their guidance.
      No part of this tech is perfect and I get the frustration, but it did unlock a level of quality for everyone to use that was only really available in proprietary engines before.
      Also, do we really think CDPR would drop their own insane engine for shits and giggles if they didn't think it was a better investment to switch to UE5 than to upgrade their own tech?
      Same with many other AAA companies, and you can bet your ass they took their sweet time evaluating whether it makes sense (not to mention they will inevitably contribute a TON to the engine that will make it back to Main eventually)
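
      For reference, the sort of CVar tuning I mean (CVar names as I remember them from engine source, so verify on your version), set from C++ in the standard way:

        #include "HAL/IConsoleManager.h"

        void TuneVirtualShadowMaps()
        {
            // Bias VSM page resolution down for directional lights (0 = default).
            if (IConsoleVariable* Bias = IConsoleManager::Get().FindConsoleVariable(
                    TEXT("r.Shadow.Virtual.ResolutionLodBiasDirectional")))
            {
                Bias->Set(1.0f);
            }
            // Keep page caching on so static geometry isn't re-rendered every frame.
            if (IConsoleVariable* Cache = IConsoleManager::Get().FindConsoleVariable(
                    TEXT("r.Shadow.Virtual.Cache")))
            {
                Cache->Set(1);
            }
        }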

    • @ThreatInteractive
      @ThreatInteractive  3 months ago +16

      re: Why not compare the normal scene, instead of captured frame, for example?
      You can compare yourself because we already analyzed the "normal" scene, same hardware, same resolution: th-cam.com/video/Te9xUNuR-U0/w-d-xo.html
      Watch the rest of our videos as everything we speak about is interconnected.

    • @SioxerNikita
      @SioxerNikita 3 months ago +21

      @@AdamKiraly_3d A thing a lot of non-devs forget is time...
      Time is the ultimate decider of quality: the more time spent on optimizing and polishing one thing, the less time is spent on the rest of the game. CDPR is dropping their own engine because, at this point, engine upgrades for performance or fidelity take exponentially longer with each feature, and they don't have one "specific" kind of game the engine needs to serve. If they only developed FPS games, continuously upgrading a single engine might make sense, but they want to make more than FPS games.
      Every second spent upgrading the engine or doing tedious graphical optimization on models is time that could've been spent fixing physics bugs, weird interactions, broken map geometry, and other things that in the end make the game feel a lot better... and they can also focus on larger-scale optimizations that might increase performance more overall: instead of doing 2-5 LOD models per model, you can focus on getting the poly count of the base model significantly down and increasing the perceived fidelity...
      The days of hyper-optimized games are over. Hardware is becoming ever more complex, and it is becoming infeasible to talk to hardware directly, as there is SOOOO!!! much different hardware to account for. Games are becoming prettier, and even the gamers saying "graphics don't matter" still won't buy a game that looks like a PS1 game, because it gives them a feeling of a shoddy product. Increasing poly counts, increasing player expectations, and an ever more competitive market... yeah, devs need tools to automate a lot of the process.

    • @satibel
      @satibel 3 months ago +4

      @@SioxerNikita imo "graphics don't matter" is more of a "yeah, realistic graphics are neat, but consistent graphics are better".
      I'll take cartoony graphics like Kirby's Epic Yarn over Battlefield graphics. Yes, realism looks good, but stylized does too, and a well-stylized game can be way more efficient and age way better, while still looking as good as or better than wannabe-photorealistic graphics.
      If we're talking about PS1, a game like Vib-Ribbon still looks good nowadays, and for a more well-known example, Crash Bandicoot.
      They look miles better than asset-flip-looking franchise games that have realistic graphics but aren't cohesive and have a meh game underneath.

    • @lycanthoss
      @lycanthoss 3 months ago +6

      @@satibel Realistic graphics clearly sell. Just look at Black Myth: Wukong.

  • @ThiagoVieira91
    @ThiagoVieira91 4 months ago +380

    Acquiring the same level of rage this guy has for bad optimization, but for putting effort into my SE career, I can single handedly lead us into the the singularity. MORE! MORE! MORE!

    • @JorgetePanete
      @JorgetePanete 3 months ago +7

      the the

    • @DarkSession6208
      @DarkSession6208 3 months ago +20

      I have 100 other topics around Unreal Engine that make me rage about misinformation, just like he rages about Nanite. I posted about the Nanite issue MULTIPLE times on forums and Reddit to warn people how not to use it and to show how it should be done. I have been researching this topic since version 5.0.2. Nobody cared. Like I said, there are 100 other similar topics (Blueprints, general functions, movement, prediction, etc.) which are simply explained wrong by Epic themselves and then accepted by users.
      If you google "Does culling really not work with nanite foliage?",
      the first comment is mine; I have been posting there for almost 3 YEARS, repeating myself, and people just don't listen.

  • @XCanG
    @XCanG 3 months ago +82

    I'm not a game developer, but there are some questions I have to ask, plus some personal opinion as a gamer.
    1. The main selling point I remember for introducing Nanite was that you can make models faster by not spending time creating LODs. So for the comparisons in your examples: how much time does it take to create the same model with Nanite vs. with LODs? Considering that all fresh games made with a bias toward realism require a lot of detail, I think at scale it will make a difference.
    2. Maybe because I'm not in this field I just haven't heard of it, but I really haven't heard of anyone rendering models with billions of triangles. The closest examples were Minecraft clones rewritten in C/Rust etc. that tried to achieve large render distances, or offline scenes where one frame takes hours to render, so not realtime. I can imagine you are a senior game developer with at least 6 years of experience, but how many game devs know about these optimizations? I can't imagine it's more than 25%.
    3. Let's assume you can optimize better by hand: does UE5 let you handle optimization yourself? I imagine UE4 does, but what about UE5? If yes, then this is really an argument about better defaults: manual optimization at the cost of your time vs. Nanite auto-optimization with faster creation time.
    4. As a player I want to point out that many recent titles ship with bad optimization, so much so that gamers are starting to hate their studios. My personal struggle was with Cities: Skylines 2. They use the Unity engine, where they definitely have all these optimization abilities, but somehow they released a lagging p* of s* where an air conditioner on a building that you see from flying height (far away) had 14k triangles and some pedestrians had 30k triangles. Considering they couldn't optimize properly, I believe that if incompetent devs like that had just used Nanite, the game wouldn't be this laggy. For me it is realistic to assume that a system that handles optimization automatically by default is far better than a manual one that only a few individuals can apply properly.

    • @XCanG
      @XCanG 3 months ago +1

      @@miskliy1 I see.

    • @MiniGui98
      @MiniGui98 3 months ago +14

      "For me it is realistic to assume that a system that handles optimization automatically by default is far better than a manual one that only a few individuals can apply properly"
      Fair point, although the "manual" technique has been used since basically the beginning of fully 3D games and had been mastered not by "a few individuals" but by the whole industry at some point. Throwing everything into the Nanite pipeline just because it's simpler and faster is a false excuse once you know that the manual, traditional techniques will get you better performance with little to no difference in visual fidelity. Even better, the extra performance you free up with manual LODs minimizes the need for FSR or DLSS, both of which indisputably degrade image fidelity.
      Performance tests with FSR/DLSS enabled are basically a lie about what the raw performance of the game is. Native resolution with more traditional anti-aliasing techniques should still be the norm, as it always has been.
      Big games relying on super-sampling are just a sign of badly optimized games, it's as simple as that.

    • @SioxerNikita
      @SioxerNikita 3 months ago +26

      @@MiniGui98 No, it has never been "the whole industry" at any point.
      Frankly, good optimizers have been few and far between since the first days of optimization in general (not just 3D). Beyond that, optimizations are not something you just apply every time; if they were, everything would already use them. An optimization that works in one product might not work in another, and might sometimes make it slower. Optimizations are not catch-all solutions, and each project needs different ones... so there isn't one "manual" technique that has been "mastered".
      Beyond that, you have a different problem. Optimizations can only really be applied when the product is mostly done and the rendering pipeline is final; if the rendering pipeline keeps being developed until the end, optimization work might be wasted, or worse, create a buggy mess, because the pipeline changes things that don't interact well with the optimizations.
      And then there's the most important thing... time... development time... If an automated pipeline performs, say, 30% better than shipping with no manual optimization, that's a ton of time you don't have to spend on optimization and can spend on improvements elsewhere. LOD models, for example, take time to make... sometimes a SIGNIFICANT amount of time. You create a model, then a lower-poly model, then an even lower-poly one, and if you ever need to change the model, you redo the lower-poly versions again. That is a LOT of time. Sure, you may get better performance, but if you can offload that work to, for example, Nanite and it gives acceptable performance, you can focus on optimizing the main model instead.
      Performance tests with FSR/DLSS are a lie? If the game is intended to be run with FSR/DLSS, then that is the performance test you should do.
      Frankly, focusing on manual optimization being better while ignoring ALL the other factors suggests you don't know much about optimization or game development... and then there's "games relying on super-sampling are just a sign of badly optimized games"...
      Even more interesting, super-sampling is not "DLSS" or similar... Do you even know what super-sampling is? Relying on super-sampling is the opposite of optimizing, because it is one of the more computationally expensive features: it renders a LARGER image than you need. It's something you use either because you optimized heavily and can afford it, or as an option for people with killer setups... Super-sampling is not an optimization technique at all.
      DLSS is kind of the opposite of super-sampling. It uses deep learning (or, as we'd call it today, "AI") to infer a lot of information, getting some of the effect of super-sampling without actually super-sampling. So saying "big games relying on super-sampling are just a sign of badly optimized games, it's as simple as that" means you have basically no clue what you are talking about...
      You start out saying "fair point" and then proceed to show you don't even understand the point.
      And relying completely on a single automated optimization technique is always bad; it's not just a "big games" thing... but almost no games do that. Optimization is hard to do, very hard... if it weren't, you wouldn't see so many badly optimized games.

    • @Megalomaniakaal
      @Megalomaniakaal 3 months ago +4

      @@MiniGui98 The majority of games out there were rather unoptimized messes that Nvidia and AMD cleaned up at the "game ready" driver level, back in the DX11 and older days.

    • @Jiffy360
      @Jiffy360 3 months ago +3

      About that final point about CS:2, that was exactly their plan. They were going to use Unity’s competitor to Nanite, but then Unity delayed or cancelled it, so they were stuck creating a brand new LOD system from scratch at the last moment.

  • @chriszuko
    @chriszuko 4 months ago +77

    As much as I think this is a good topic and the results seem genuinely well done, the solution proposed here (spending the time and money on an AI solution for LOD creation and a much faster, more seamless optimization workflow) is something companies have been trying to build for years, so I personally don't think it's a good way forward. To me it seems possible that Nanite + Lumen can evolve to become much friendlier to meshes produced in an already-optimized way and rely far less on lower resolutions. I DO think they are probably still pushing it a bit too early, and I also agree that DLSS, TSR, and any other upscaling/reconstruction technology is just not good to lean on. But to your point, companies continue to do so because they can say "look, everything runs faster!", so it is hard to envision a future without needing it.
    A side note: I don't think your particular attitude is warranted, and in my opinion it makes it much harder to have a constructive conversation about how to move forward. I'm not perfect in this regard either, but as this gains traction it's probably good to dial it back. In the first post of the megathread, for example, you show your results and then feel the need to say "Worse without LOD FPS LMAO!". This type of stuff is all over these threads, which to me just looks bad and makes it harder to take you and this topic seriously.

    • @wumi2419
      @wumi2419 3 months ago +8

      It ends up being a problem of "who needs to spend money", with choices being developers (on optimizing) or customers (on hardware that can run unoptimized software), and it's obvious which choice saves money for the company.
      Granted, it might result in lower sales due to higher hardware requirements, but that is a lot harder to prove than lowered dev costs.

    • @Viscte
      @Viscte 2 months ago +1

      This is a pretty down-to-earth take.

    • @exoqqen
      @exoqqen a month ago +7

      I agree on the attitude. I'm amazed by the depth of knowledge in these videos, but the aggressiveness makes me keep a distance and be wary. It's not pleasant or professional

    • @Viscte
      @Viscte a month ago

      @@exoqqen The guy comes across as a bitter asshole for some reason. It's important that people discuss topics like this, but the negativity is definitely off-putting

    • @jarrodkober
      @jarrodkober a month ago +3

      Yeah, he's coming in a little hot, and I'm curious whether everything is 100% factual. Some of his points seem credible and compelling.
      But he started to lose me when he said DLSS is overhyped and only compared it to TAA. You can compare native 4K with no AA to DLSS and it's pretty good. We can all agree TAA is too soft, but he creates a false dichotomy by quickly making that point (however grounded it may be) and then moving on.

  • @TenaciousDilos
    @TenaciousDilos 4 months ago +211

    I'm not disputing what is shown here, but I've had cases where Nanite in UE 5.1.1 increased framerates 4x (low-20s fps to mid-90s fps) in my projects, and the only difference was turning Nanite on (roughly the change sketched below). "Well, there's more to it than that!" Of course there is. But Nanite took hours of optimization work and turned it into enabling Nanite on a few meshes.
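
    For anyone curious, "turning Nanite on" for a mesh amounts to roughly this on the editor side (field and function names as I remember them from engine source, so verify on your version):

      #include "Engine/StaticMesh.h"

      void EnableNanite(UStaticMesh* Mesh)
      {
          Mesh->NaniteSettings.bEnabled = true; // FMeshNaniteSettings on the asset
          Mesh->Build();                        // editor-only: rebuilds render data
          Mesh->MarkPackageDirty();             // so the change saves with the asset
      }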

    • @forasago
      @forasago 3 หลายเดือนก่อน +115

      @@jcdentonunatco This is not true at all. When they showed the first demo with the pseudo-Tomb Raider scene, they explicitly said (paraphrasing) that having that many high-poly meshes in a scene would be impossible without Nanite. And they definitely weren't talking about "what if you just stopped using LODs" as the thing Nanite is competing with. Nanite was supposed to raise the limits on polycount in scenes, full stop. That equates to a claim of improved performance.

    • @MrSofazocker
      @MrSofazocker 3 หลายเดือนก่อน +128

      ​@@forasago "You can now render way denser meshes" does not equal "so less dense meshes now render faster"

    • @DeltaNovum
      @DeltaNovum 3 หลายเดือนก่อน +29

      Turn off virtual shadow maps and redo your test.

    • @lokosstratos7192
      @lokosstratos7192 3 หลายเดือนก่อน +1

      @@DeltaNovum YEAH!

    • @user-bl1lh1xv1s
      @user-bl1lh1xv1s 3 หลายเดือนก่อน +15

      ​@forasago "unlimited polycount" in scenes does not equate to a claim of improved performance over non-meshlet approaches. It merely states that poly count is not an issue... Which certainly has implications for asset pipelines, where full-quality meshes _could_ be brought into the scene without prior processing (except, of course, the nanite preprocessing step).

  • @YoutubePizzer
    @YoutubePizzer 4 หลายเดือนก่อน +162

    Here’s the question though: is this going to save significant enough resources in a development team to allow them to achieve more with less. If it’s not “optimal”, it’s fine, as long as the amount of dev time it saves is worth it. Ultimately, in an ideal world, we want the technology to improve so development can become easier

    • @wumi2419
      @wumi2419 3 หลายเดือนก่อน +99

      Cost of development doesn't disappear however, it's just transferred to customers. So they will have to either pay for a better GPU, run the game at lower quality, or will just "bounce off" and not purchase the game at all.

    • @blarghblargh
      @blarghblargh 3 หลายเดือนก่อน +3

      ​@wumi2419 they could just make the game look worse instead. The development style being done now is already burning something, and that something is developer talent. And there isn't an infinite pool of that.
      You also ignored that GPU performance increases over time. So it may be a rough tradeoff now, but the tech will continue to get better hardware over time.

    • @insentia8424
      @insentia8424 3 หลายเดือนก่อน +9

      I don't understand why you ask whether this allows development teams to be more efficient (achieve more with less), yet believe that in an ideal world tech would make things easier. Something becoming more efficient and something becoming easier are not the same thing.
      In fact, an increase in efficiency often causes things to become harder or more complex to do.

    • @snark567
      @snark567 3 หลายเดือนก่อน +11

      You can always just go for stylized visuals instead of hyper complex geometry and realism. This performance min maxing only becomes an issue of concern when your game is too detailed and complex but you still want it to run on a toaster.

    • @seeibe
      @seeibe 3 หลายเดือนก่อน +24

      The problem is when newer games look and perform worse than older games while still costing the same. If that saved developer time will be passed on as price reduction to the user, sure. But that's not what happens for most games.

  • @GugureSux
    @GugureSux 3 หลายเดือนก่อน +67

    This explains both why I generally despise the look of modern UE games (so much noise and blur) and why UE5 games especially run like ass, even on decent HW.
    And since so many devs seem to use lazy upscalers as their "optimization" trick, things only get worse visually.

    • @kanta32100
      @kanta32100 3 หลายเดือนก่อน +3

      No visible LOD pop-in is nice though.

    • @bennyboiii1196
      @bennyboiii1196 3 หลายเดือนก่อน +8

      The blurriness is due to TAA tbf, which is a scourge. This is why I use Godot lmao

    • @ThreatInteractive
      @ThreatInteractive  3 หลายเดือนก่อน +12

      @@kanta32100 Not true: you will still get pop, and LODs have had various ways of reducing pop visibility (just not in UE5) that don't depend on flawed TAA/SS.
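
      For context, a minimal sketch of the kind of traditional per-instance LOD selection being referred to: pick the coarsest LOD whose geometric error, projected into screen pixels, stays under a budget, so switches happen where the pop is near subpixel. Names and values are illustrative, not any engine's actual API.
      ```cpp
      #include <cmath>

      // lodErrorWorld[i] = max world-space geometric error of LOD i
      // (LOD 0 = full detail; the error grows with the index).
      int SelectLod(const float lodErrorWorld[], int lodCount,
                    float distance, float fovY, int screenHeight,
                    float maxErrorPixels = 1.0f)
      {
          // World-space extent covered by one pixel at this distance.
          float pixelWorld = 2.0f * distance * std::tan(fovY * 0.5f) / screenHeight;

          // Walk from coarsest to finest; take the first LOD whose projected
          // error fits the pixel budget.
          for (int i = lodCount - 1; i > 0; --i)
              if (lodErrorWorld[i] <= maxErrorPixels * pixelWorld)
                  return i;
          return 0; // nothing coarse enough fits: use full detail
      }
      ```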

    • @PabloB888
      @PabloB888 3 หลายเดือนก่อน

      ​@@bennyboiii1196 TAA image looks blurry, but a good sharpening mask can help a lot. I use reshade CAS and luma filters. I have been using this method for the last couple of years on my old GTX1080. Now however I bought the RTX4080S and can get much better image quality by using DLSS balance and DLDSR2.25 (80% smoothness) at the same time. I get no performance penalty, and the image quality destroys native TAA. What's more if DLSS implementation is very good I can use DLSS ultra performance and still get sharper image compared to the native TAA and framerate is 2x better at this point.

    • @cdmpants
      @cdmpants 3 หลายเดือนก่อน +12

      @@bennyboiii1196 You use godot because you don't want to use TAA? You realize that TAA is optional?

  • @hoyteternal
    @hoyteternal 3 หลายเดือนก่อน +39

    Nanite is specifically made to render non-optimized scenes with ridiculous polycounts, microgeometry where each pixel might contain a separate triangle, and models that don't have LODs. It has a huge initial overhead but far higher scalability in that scenario. Nanite is actually not a unique technology; it is an implementation of a technique called the visibility buffer.

  • @bits360wastaken
    @bits360wastaken 3 หลายเดือนก่อน +61

    10:49, AI isn't some magic silver bullet; an actually good LOD algorithm is needed.

    • @michaelbuckers
      @michaelbuckers 3 หลายเดือนก่อน +31

      He's talking about a hypothetical AI-based LODder with presumably better performance than algorithmic LODders. Which is a fair guess, considering that autoencoding is the bread and butter of generative AI. I hazard a guess that you could adapt a conventional image-making AI to inpaint an original mesh with a lower-poly, color-coded mesh from various angles, and use those pictures to reconstruct a LOD mesh.

    • @yeahbOOOIIÌIIII
      @yeahbOOOIIÌIIII 3 หลายเดือนก่อน +7

      What he is suggesting is amazing. I have to jump through hoops to get programs like reality capture (which is amazing) to simplify photogrammetry meshes in a smart fashion. They often destroy detail. This is an example where machine learning could shine, making micro-optimizations to the LOD that lead to better quality at higher performance. It's a great idea.

    • @MarcABrown-tt1fp
      @MarcABrown-tt1fp 2 หลายเดือนก่อน

      @@yeahbOOOIIÌIIII Of course the meshes would still need to be hand-made. Generative AI doesn't seem to work well with constraints like quad topology imposed on it.

    • @bionic_batman
      @bionic_batman หลายเดือนก่อน

      AI (machine learning) is just a way to approximate the algorithm you don't know
      In this particular case it would be better than Nanite because it does not need to be baked into the engine and can be replaced with manual/algorithmically-produced LODs when needed

    • @jcm2606
      @jcm2606 หลายเดือนก่อน +1

      It's just that, though, an idea. He doesn't actually provide any information on how _exactly_ a neural network could be trained to do this, he's basically just saying "just throw AI at it!" and pointing to RTX Remix using AI to convert flat albedo to material data, which is a different problem to solve (and we already have traditional solutions for this, though they're pretty bad). Just throwing an arbitrary neural network at a problem isn't good enough to deride Nanite and claim that an alternative exists, especially if you don't provide anything to back that claim up.

  • @salatwurzel-4388
    @salatwurzel-4388 3 หลายเดือนก่อน +31

    Remember the times when {new technology} was very resource-intensive and dropped your fps, but it was OK after a relatively short time because the hardware became 3x faster in that span?
    Good times :D

    • @histhoryk2648
      @histhoryk2648 หลายเดือนก่อน +2

      Can it run Crysis meme

  • @yeah7267
    @yeah7267 4 หลายเดือนก่อน +308

    Finally someone talking about this... I was sick of people claiming Nanite boosts performance when in reality I was losing frames even in the most basic scenes.

    • @cheater00
      @cheater00 4 หลายเดือนก่อน +30

      just another case of tencent timmy being confidently wrong. we've all known unreal engine was way worse than quake 3 back in the day as well, it was ancient by comparison and the performance was abysmal. the fact people kept spinning a yarn that unreal engine was somehow competitive with id tech was always such a funny thing to see.

    • @HankBaxter
      @HankBaxter 4 หลายเดือนก่อน +4

      And it seems nothing's changed.

    • @cheater00
      @cheater00 4 หลายเดือนก่อน +7

      @@randoguy7488 precisely. and in those 25 years, NOTHING has changed. this should make you think.

    • @ClowdyHowdy
      @ClowdyHowdy 4 หลายเดือนก่อน +53

      To be fair, I don't think nanite is designed at all to boost performance in the most simple scenes, so anybody saying that is either oversimplifying or just wrong. Instead, it was designed to give artists and designers an easier pipeline for building complex levels, without an equivalent increase in performance loss or in time spent developing LODs.
      It's designed to be a development performance boost.
      If you don't need nanite for your games then there's no reason to use it, but I think it's weird to pretend the issue is that it doesn't do something it wasn't designed to do.

    • @_Romulodovale
      @_Romulodovale 4 หลายเดือนก่อน +7

      All the latest engine features came to make developers' lives easier while affecting performance negatively. It's not a bad thing; in the future those features will be well polished and will help us developers without hurting performance. All good tech takes time to be polished.

  • @Lil.Yahmeaner
    @Lil.Yahmeaner 4 หลายเดือนก่อน +67

    This is exactly how I’ve felt about graphics for years now. Especially at 1080p, all the ghosting, dithering, and shimmering of UE5 gets unbearable at times and everyone is using this engine. It’s like you have to play at 4k to mitigate inherent flaws of the engine but that’s so demanding you have to scale it back down which makes no sense. Especially bad when you’re trying to play at 165hz and most developers are still aiming at barely 30-60fps, now exacerbated by dlss/framegen.
    Just like all AI, garbage data in, garbage data out, games are too unique and unpredictable to be creating 30+ frames out of thin air.
    Love the videos, very informative and well spoken. Keep up the good fight!

    • @nikch1
      @nikch1 4 หลายเดือนก่อน +7

      > "all the ghosting, dithering, and shimmering"
      Instant uninstall from me. Bad experience.

    • @Khazar321
      @Khazar321 3 หลายเดือนก่อน +7

      For years now? Yeah I don't think that's UE5 mate. Maybe stop hopping on the misinformation train here...

    • @s1ndrome117
      @s1ndrome117 3 หลายเดือนก่อน +5

      @@Khazar321 you'd understand if you ever used Unreal yourself

    • @MrSofazocker
      @MrSofazocker 3 หลายเดือนก่อน +5

      The notion that these defaults cannot be changed, and that this is somehow a systemic issue with Unreal, is insane to me.
      If you use something you don't even understand, leave everything on default, and press a "make game" button, what kind of optimization do you expect?

    • @Khazar321
      @Khazar321 3 หลายเดือนก่อน +3

      @@s1ndrome117 I did, and I have also seen the train wrecks that lazy devs caused with UE4 and engines from around the same time.
      Horrible stutters, bad AA options, blurry and grey output (unless you fix it yourself with HDR/ReShade), shader issues, etc.
      So yeah, tell me how horrible UE5 is and what lazy devs can do wrong with it. I have seen it all in 30 years of gaming.

  • @nazar7368
    @nazar7368 4 หลายเดือนก่อน +123

    Epic Games' marketing killed the fifth version of the engine. Back in 2018 there were no video cards that supported mesh shaders. They took advantage of this and added the illusion of support for old video cards: Lumen and Nanite run on old video cards, but only as software implementations, which cannot be at the level of hardware mesh shaders and ray tracing. This led to real problems with the engine core, namely that DX12 and Vulkan do not work correctly and have low efficiency due to old code that was written for DX11. And I'm not even counting the problems with the engine's layered rendering and compositing algorithms, because everyone already sees the eternal blurring of the picture and the annoying sharpening.
    This will not be fixed until they add hardware support for mesh shaders and the new version of ray tracing (added literally in the DXR 1.2 library). For example, Nvidia holds many conferences to show its algorithms for improved ReSTIR in Unreal Engine 5, and gives access to them, but the developers are firmly set in their own ways and continue to feed the gaming industry another Anthem.

    • @JoseDiaz-he1nr
      @JoseDiaz-he1nr 4 หลายเดือนก่อน +5

      idk man Black Myth Wukong seems like a huge success and it was made in UE5

    • @manmansgotmans
      @manmansgotmans 4 หลายเดือนก่อน +14

      The push to mesh shaders could have been nvidia's work, as their 2018 gpus supported mesh shaders while amd's gpus did not. And nvidia is known for putting money where it hurts their competitor

    • @Ronaldo-se3ff
      @Ronaldo-se3ff 4 หลายเดือนก่อน +35

      @@JoseDiaz-he1nr its performance is horrendous, and they reimplemented a lot of tech in-house so that it could at least run.

    • @topraktunca1829
      @topraktunca1829 4 หลายเดือนก่อน +24

      @@JoseDiaz-he1nr yeah, they made a lot of hard internal optimizations, and in the end it runs at an "ehh, well enough I guess" level. Not to mention the companies that didn't bother with such optimizations, like Immortals of Aveum or Remnant 2. Those games are unplayable on anything other than a 4090 or 4080, and even then they still have problems.

    • @nazar7368
      @nazar7368 4 หลายเดือนก่อน +13

      @@JoseDiaz-he1nr A really great success: 2010-era visuals at 30 fps on a 4090, with a blurry picture.

  • @almighty151986
    @almighty151986 3 หลายเดือนก่อน +17

    Nanite is designed for geometry way above what we currently have in games.
    So until games reach the point where their geometry is high enough to require Nanite then Nanite will be slower than traditional methods.
    Maybe next generation will get there.

    • @youtubehandlesux
      @youtubehandlesux 2 หลายเดือนก่อน +1

      With the current performance increase of hardware it'll take like 20 years to reach that point.

    • @korcommander
      @korcommander 2 หลายเดือนก่อน +1

      We are at the point of diminishing returns of poly count. Like how many more polygons do we need to render 2Bs buttcheeks?

  • @rimuruslimeball
    @rimuruslimeball 4 หลายเดือนก่อน +55

    These videos are amazing, but to be honest a lot of it flies over my head. What should we as developers (especially indie) do to ensure we're following good optimization practices? Alot of what your videos discuss seems to require an enormous amount of deep-level technical understanding of GPUs that I don't think many of us can realistically obtain. I'm very interested, but not very sure where to start nor where to go from there. I'm sure I'm not the only one.

    • @cadmanfox
      @cadmanfox 4 หลายเดือนก่อน +4

      I think it is worth learning how it all works, there are lots of free resources you can use

    • @marcelenderle4904
      @marcelenderle4904 4 หลายเดือนก่อน +5

      As an indie developer I feel it's very important to know the problems and limitations of those techs and the concepts behind good practices. That doesn't necessarily mean you have to apply them. Nanite, Lumen, DLSS, etc. can be very efficient as a cheap solution. If it speeds up your game by a lot and gets to the result you want, then, for me at least, it's what you should aim for. These critiques of Unreal are great for studios and the industry itself.

    • @Vysair
      @Vysair 4 หลายเดือนก่อน +3

      I have a diploma in IT, which is actually just CS, and I don't have a clue what this guy is talking about either.

    • @JorgetePanete
      @JorgetePanete 3 หลายเดือนก่อน

      A lot*

    • @Vadymaus
      @Vadymaus 3 หลายเดือนก่อน +1

      @@anonymousalexander6005. Bullshit. This is just basic graphical rendering terminology.

  • @Gurem
    @Gurem 3 หลายเดือนก่อน +11

    I remember Epic saying this would not replace traditional methods but should be used in tandem with them, as it is a way to increase productivity. Tbh this video taught me more about optimization than any optimization video, and it didn't waste my time. As an indie it did more to reinforce my desire to use Nanite while also teaching me hands-on techniques that, while requiring more work, may result in better performance, which I can use when I have the free time. Thank you for demystifying the BS; I really couldn't understand the tech from those other YT videos, as they were purely surface-level, quickly churned-out content.

  • @stephaneduhamel7706
    @stephaneduhamel7706 3 หลายเดือนก่อน +141

    The point of nanite was never to increase performance compared to a perfectly optimized mesh with LODs. It is made to allow devs to discard LODs for a reasonably low performance hit.

    • @doltBmB
      @doltBmB 3 หลายเดือนก่อน +92

      if losing 60% fps is "reasonably low" to you

    • @stephaneduhamel7706
      @stephaneduhamel7706 3 หลายเดือนก่อน +51

      @@doltBmB it's a lot less than that in most real life use cases.

    • @doltBmB
      @doltBmB 3 หลายเดือนก่อน +39

      @@stephaneduhamel7706 Yeah it might be as low as 40%, real great

    • @Harut.V
      @Harut.V 3 หลายเดือนก่อน +60

      In other words, they are shifting the cost from devs to hardware (consumers, console producers)

    • @ben.pueschel
      @ben.pueschel 3 หลายเดือนก่อน +15

      @@Harut.V that's how progress works, genius.

  • @azrielsatan8693
    @azrielsatan8693 3 หลายเดือนก่อน +3

    As a certified UE5 hater from the first few games that released with it, I'm happy to see more discussion about it.
    The VRAM usage also needs to be talked about. Games requiring a minimum of 8GB of VRAM for 60 fps is ridiculous (though Nvidia refusing to put enough on their cards is also to blame).

  • @2feetsandamushroom
    @2feetsandamushroom หลายเดือนก่อน +5

    I had no clue about the performance side, just how it handles LODs for static meshes and reduces noticeable pop-in. In my experience it reduces pop-in greatly when the game is built around it. Take Hellblade 2: barely any pop-in in sight for the entire game, which was a breath of fresh air to be honest, since LOD shifts are one of the more annoying visual blemishes in games.
    The removal of tessellation is pure madness though. Why remove functions that we know work and have been optimized for years? Give us options!

  • @RiasatSalminSami
    @RiasatSalminSami 4 หลายเดือนก่อน +205

    Can't ever take Epic seriously when they can't even be arsed to prioritize shader compilation stutter problems.

    • @USP45Master
      @USP45Master 4 หลายเดือนก่อน +30

      Write the shader compiler step yourself! It only takes about 30 minutes... put it in a blank level that async-compiles while showing a loading screen :)
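
      For what it's worth, a rough, engine-agnostic sketch of that idea; the function names are placeholders, not a real engine API. Compile every shader on a worker thread while an empty loading level is displayed, and only enter gameplay once the job reports done.
      ```cpp
      #include <chrono>
      #include <future>
      #include <string>
      #include <vector>

      // Placeholder for whatever compile entry point the engine/RHI exposes.
      bool CompileShader(const std::string& path)
      {
          return !path.empty(); // a real engine would warm the driver/PSO cache here
      }

      // Kick off compilation of every known shader on a background thread.
      std::future<void> PrecompileAllShaders(std::vector<std::string> paths)
      {
          return std::async(std::launch::async, [paths = std::move(paths)] {
              for (const std::string& p : paths)
                  CompileShader(p);
          });
      }

      // Loading-screen side: keep drawing the loading level until this is true.
      bool PrecompileFinished(std::future<void>& job)
      {
          return job.wait_for(std::chrono::seconds(0)) == std::future_status::ready;
      }
      ```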

    • @user-ic5nv8lj9d
      @user-ic5nv8lj9d 4 หลายเดือนก่อน +2

      maybe provide a solution?

    • @bricaaron3978
      @bricaaron3978 4 หลายเดือนก่อน +7

      Unreal Engine has been a console engine for a long time.

    • @randomcommenter10_
      @randomcommenter10_ 4 หลายเดือนก่อน +28

      What's interesting is that UE4 & 5 actually have a built-in setting in the ini files to precompile shaders, called "r.CreateShadersOnLoad", but the weird thing is that it's set to False by default when it should be True. What's even weirder is that UE3 has a different ini setting called "bInitializeShadersOnDemand", which is False by default, meaning all shaders are precompiled before the game starts. I have no idea why Epic didn't enable shader precompilation by default in UE4 & 5, but at least the setting is there, and more devs should know to turn it on for their games to help reduce shader compilation stutter.
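
      If that's accurate, the change would just be a small config edit along these lines. This is an illustrative sketch: the exact section name and whether this cvar alone precompiles everything varies by engine version, so verify against your version before relying on it.
      ```ini
      ; DefaultEngine.ini (illustrative; check your engine version's docs)
      [SystemSettings]
      r.CreateShadersOnLoad=1   ; create shaders at load instead of on demand
      ```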

    • @RiasatSalminSami
      @RiasatSalminSami 4 หลายเดือนก่อน +8

      @@user-ic5nv8lj9d why do I have to provide a solution? If devs need to build their own solution for such a game-breaking problem, they might as well make their own engine.

  • @TheYamiks
    @TheYamiks 12 วันที่ผ่านมา +1

    What frame inspection software are you guys using?
    I've been trying RenderDoc with the most success, but it mostly needs manual injection.
    Nsight just fails most of the time, MS PIX does not work, and Intel's... something something kind of works sometimes.

  • @piroman665
    @piroman665 4 หลายเดือนก่อน +53

    It's normal that a new generation of rendering techniques introduces large overheads. It's a good tradeoff, as it streamlines development and enables more dynamic games. The major issue is that developers ignore good practices and then blame the software; part of that is also Epic's fault, as they try to sell it as a magic solution for unlimited triangles, which it is not. Nanite might be slower but enables scenarios that would be impossible or very hard to achieve with static lighting. Sure, you can render dense meshes with traditional methods, but imagine lightmapping them, or making large levels with realistic lighting and dynamic scenarios.

    • @SydGhosh
      @SydGhosh 4 หลายเดือนก่อน +23

      Yeah... with all of this dude's videos I find myself technically agreeing, but I don't think he sees the big picture.

    • @thegreendude2086
      @thegreendude2086 3 หลายเดือนก่อน +9

      @@piroman665 I believe unreal was made to be somewhat artist friendly, systems you can work with even if you do not have a deep technical understanding. Hitting the "enable nanite" checkbox so you have to worry less about polycount seems to fit that idea.

    • @lau6438
      @lau6438 3 หลายเดือนก่อน +3

      ​@@SydGhosh The bigger picture being deprecating traditional LODs that perform better, to implement a half-baked solution? Wow, what a nice picture.

    • @DagobertX2
      @DagobertX2 3 หลายเดือนก่อน +5

      @@lau6438 They will make it better with time, just like back in the gamedev stoneage they made LOD perform better. Imagine there was a time where you had to optimize triangle strips for a game for best performance 💀

    • @Bdhdh-p7h
      @Bdhdh-p7h 2 หลายเดือนก่อน +2

      @@lau6438 LODs are costly and time-consuming for studios to develop.

  • @invertexyz
    @invertexyz 2 หลายเดือนก่อน +1

    Another part of the issue may be that Nanite wasn't designed exclusively around the mesh shader system on newer generations of GPUs. It uses general compute so it can support older hardware, and the system is designed around having to do it that way. There is a fast path using mesh shaders for polygons larger than a pixel, but it sounds like that was thrown in for a little extra performance rather than being an entirely separate path the whole system runs through.
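
    To illustrate what "general compute" rasterization means here, a toy edge-function rasterizer of the kind such a compute path evaluates per tiny triangle. Real implementations use fixed-point math and write packed visibility IDs; everything below is illustrative and assumes one winding order.
    ```cpp
    #include <algorithm>
    #include <cstdint>

    struct Vec2 { float x, y; };

    static float EdgeFn(Vec2 a, Vec2 b, Vec2 p)
    {
        return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
    }

    void RasterizeTriangle(Vec2 v0, Vec2 v1, Vec2 v2,
                           uint32_t id, uint32_t* buf, int width, int height)
    {
        // Clamp the triangle's bounding box to the target buffer.
        int minX = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
        int maxX = std::min(width - 1, (int)std::max({v0.x, v1.x, v2.x}));
        int minY = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
        int maxY = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));
        for (int y = minY; y <= maxY; ++y)
            for (int x = minX; x <= maxX; ++x) {
                Vec2 p{x + 0.5f, y + 0.5f};
                // Inside if all three edge functions agree in sign;
                // note there are no 2x2 pixel quads involved.
                if (EdgeFn(v0, v1, p) >= 0 && EdgeFn(v1, v2, p) >= 0 &&
                    EdgeFn(v2, v0, p) >= 0)
                    buf[y * width + x] = id;
            }
    }
    ```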

  • @hoyteternal
    @hoyteternal 3 หลายเดือนก่อน +11

    Nanite is an implementation of a rendering technique called the visibility buffer. This technique was specifically created to overcome quad utilization issues: once triangle density shrinks toward a single pixel, the better quad utilization of visibility-buffer (Nanite) rendering greatly outweighs the additional cost of interpolating vertex attributes and analytically calculating partial derivatives. Search for the article "Visibility Buffer Rendering with Material Graphs" on the Filmic Worlds website; it's a good read, with lots of testing and illustrations.
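
    For the curious, the core of a visibility buffer can be sketched in a few lines: during rasterization each pixel stores only which instance/triangle covers it, and materials are resolved later in a full-screen pass. The bit split below is illustrative; real engines tune it to their cluster and scene limits.
    ```cpp
    #include <cstdint>

    constexpr uint32_t kTriangleBits = 12;               // up to 4096 tris per cluster
    constexpr uint32_t kTriangleMask = (1u << kTriangleBits) - 1;

    // Written per pixel by the rasterizer.
    uint32_t PackVisibility(uint32_t instanceId, uint32_t triangleId)
    {
        return (instanceId << kTriangleBits) | (triangleId & kTriangleMask);
    }

    // Read back in the deferred material/shading pass.
    void UnpackVisibility(uint32_t packed, uint32_t& instanceId, uint32_t& triangleId)
    {
        instanceId = packed >> kTriangleBits;
        triangleId = packed & kTriangleMask;
    }
    ```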

    • @ThreatInteractive
      @ThreatInteractive  3 หลายเดือนก่อน +8

      In the original paper on visibility buffers, the main focus was on bandwidth-related performance. Visibility buffers might not be completely out of consideration: we've spoken with some graphics programmers who have stated their implementation can speed up opaque objects, but we are still in the process of exploring the options here.
      While Nanite is a solution to real issues, it's a poor solution regardless, because the cons outweigh the pros.
      We've seen the paper you mentioned, and we also showed other papers by Filmic Worlds in our first video (which discussed more issues with Nanite).

  • @henryg6764
    @henryg6764 26 วันที่ผ่านมา +2

    The widespread performance issues reported with STALKER 2 (which uses both Nanite and Lumen) support these claims. Great presentation.

  • @cenkercanbulut3069
    @cenkercanbulut3069 3 หลายเดือนก่อน +4

    Thanks for the video! I appreciate the effort you put into comparing Nanite with traditional optimization methods. However, the full potential of Nanite might not be fully apparent in a test with just a few meshes. Nanite shines when dealing with large-scale environments that have millions of polygons, where it can dynamically optimize the scene in real time. The true strength of Nanite is its ability to manage massive amounts of detail efficiently, which might be less visible in smaller, controlled setups. It would be interesting to see how both approaches perform in a more complex scene with more assets, where Nanite’s real-time optimization could show its advantages. Looking forward to more in-depth comparisons in the future!

  • @PanzerschrekCN
    @PanzerschrekCN 4 หลายเดือนก่อน +90

    The whole point of Nanite existence is not to make games faster, but to reduce content creation costs. With Nanite it's not needed anymore to spend time creating LODs.

    • @alex15095
      @alex15095 4 หลายเดือนก่อน +23

      Exactly this, it makes things a lot easier if you can just mash an unoptimized 3d scan and a bunch of 150k poly models to make a scene and just make it work. As we know with electron apps, sometimes it's not about what's most efficient but rather what's easier for developers. An AI solution is unlikely as we've not found a good architecture/representation that effectively combines mesh shape, topology, normals, UVs, and textures, it's a much more complicated problem than just image generation

    • @chiboreache
      @chiboreache 4 หลายเดือนก่อน

      @@alex15095 you can make a synthetic dataset by using the Blender Sverchok addon and modeling everything procedurally

    • @dbp_pc3500
      @dbp_pc3500 4 หลายเดือนก่อน +10

      Dude, LODs are generated automatically by plenty of tools. It's not time-consuming at all.

    • @antiRuka
      @antiRuka 3 หลายเดือนก่อน +7

      Generate and save LODs for a couple of 100k-poly meshes, please.

    • @mercai
      @mercai 3 หลายเดือนก่อน +34

      @@dbp_pc3500 Tell us you haven't made actual quality LODs without telling us.

  • @mike64_t
    @mike64_t 3 หลายเดือนก่อน +5

    Good video, but I disagree that you could currently train an AI model to reduce overdraw.
    There is currently no architecture that can really take an AAA model as input.

    • @ThreatInteractive
      @ThreatInteractive  3 หลายเดือนก่อน +1

      We are more detached from utilizing AI for implementing max-surface-area topology than most people are giving us credit for. We just need faster systems for LODs. One of the biggest problems with the LOD workflow in UE vs. Nanite is that LOD calculation is extremely slow compared to Nanite (which is near instant even with millions of triangles). We also need a polished system that bakes micro detail into normal/depth maps faster.
      The way we see it, it's always going to be an algorithm; most AIs that get trained enough revert to one anyway.

    • @mike64_t
      @mike64_t 3 หลายเดือนก่อน +4

      ​@@ThreatInteractive "most AI that get trained enough revert to one anyway" mhhh... I wouldn't say so. Yes, in a sense its an algorithm but the sort of compactness, discreteness and optimality that you picture when you hear the word "algorithm" is not present in a whole bunch of matrix multiplications that softly guide the input to its output. Just because it is meaningful computation doesn't make it deserving of the word algorithm. The LTT video isn't really accurate and makes some dangerous oversimplifications.
      I agree that tooling needs to become better. I would also love for there to be a magic architecture that you could just PPO-minimize render time with, and that invents all of topology theory from that, but that is a long way off... Transformers are constrained by sequence length and have a bias towards discrete tokens, not ideal for continuous vertex data.
      For now it seems like you need to bite the bullet and write actual mesh rebuilding algorithms.

  • @gameworkerty
    @gameworkerty 4 หลายเดือนก่อน +5

    I would kill for an overdraw view like unreal has in Unity, especially because there is a ton of robust mesh instancing support via unity plugins that unreal doesn't have.

  • @QuakeProBro
    @QuakeProBro 4 หลายเดือนก่อน +65

    Great video, you've talked about many things that really bother me when working with UE5: extreme ghosting, noise, flickering, a very blurry image, and, compared to Unreal Engine 4, much worse performance with practically empty scenes (sometimes up to an 80 fps difference on my 2080 Ti). All this fancy tech, while great for cinematics and film, introduces so many unnecessary problems for games, and Epic seems to simply not care.
    If they really want us to focus on art instead of optimizing, give us next-gen-worthy, automated optimization tools instead of upscalers and denoisers that destroy the image for a "better" experience. This is only battling the symptoms.
    And don't get me wrong, I find Lumen and Nanite fascinating, but they just don't deliver what was promised (yet).
    Thanks for talking about this!

    • @keatonwastaken
      @keatonwastaken 3 หลายเดือนก่อน +1

      UE4 is still used and is capable; UE5 is more for people who want fancier looks early on.

  • @gurujoe75
    @gurujoe75 3 หลายเดือนก่อน +2

    I'm not a programmer, but I see that for ten years there has been a wait for a big graphics paradigm shift. Goodbye classic render pipeline as they teach it in school, goodbye rasterization, goodbye classic extremely inflexible polygons, goodbye endless problems with shadows, LOD, UV mapping, etc.
    UE is the only big multiplatform engine today, and R&D is extremely expensive. You understand what I'm implying here between the lines.

  • @rockoman100
    @rockoman100 3 หลายเดือนก่อน +20

    Am I the only one who thinks LOD dithering transitions are way more noticeable and distracting than just having the models "pop" instantly between LODs?

    • @Lylcaruis
      @Lylcaruis 2 หลายเดือนก่อน +1

      When I watched that part I thought that too. Pretty sure having them switch instantly runs faster as well.

    • @bertilorickardspelar
      @bertilorickardspelar 2 หลายเดือนก่อน +2

      Dithering works pretty OK if you do the fade while the camera pans or tilts. If you dither while just moving forward, it is quite noticeable. It also depends on the asset: some assets can flip without you noticing, while others are very noticeable when they flip drastically. A car may flip without you noticing, while a tree may benefit from some dither.

    • @Shooha_Babe
      @Shooha_Babe 2 หลายเดือนก่อน +1

      Yeah, I've played a lot of games and seen both methods used.
      The classic "pop" method seems to work best in terms of immersion.
      You don't focus your eyes on them when they transition between LODs, but with dithering your eye catches them at the periphery.
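
      As a point of reference, the screen-door (dithered) crossfade being discussed boils down to something like the sketch below: during the transition both LODs draw, and each pixel keeps exactly one of them based on a Bayer threshold, so no blending or sorting is needed. Purely illustrative, not any engine's shader code.
      ```cpp
      #include <cstdint>

      // 4x4 Bayer thresholds, normalized to [0, 1).
      static const float kBayer4x4[16] = {
           0/16.f,  8/16.f,  2/16.f, 10/16.f,
          12/16.f,  4/16.f, 14/16.f,  6/16.f,
           3/16.f, 11/16.f,  1/16.f,  9/16.f,
          15/16.f,  7/16.f, 13/16.f,  5/16.f,
      };

      // True if this pixel of the *incoming* LOD survives; the outgoing LOD
      // uses the inverted test, so exactly one LOD owns each pixel.
      bool DitherKeepPixel(uint32_t px, uint32_t py, float fadeIn01)
      {
          float threshold = kBayer4x4[(py % 4) * 4 + (px % 4)];
          return fadeIn01 > threshold;
      }
      ```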

  • @ProjectFight
    @ProjectFight 4 หลายเดือนก่อน +3

    Okay... a few things. This video went way faster than I could follow, but it was soo interesting. I love seeing the technical side of how games are properly optimized, what counts and what doesn't. And I will ALWAYS support those who are willing to go the extra mile to properly research these things. Soo, new sub :)

    • @NeverIsALongTime
      @NeverIsALongTime 3 หลายเดือนก่อน +1

      XD I know Kevin in real life; in real life he speaks way faster! He's chill in his videos. He is brilliant, probably a bit on the spectrum (in a good way). He is more passionate and even more intense in real life. I have read some of his screenplay for his upcoming game it is terrifying & edge-of-your-seat exciting!

  • @average_ms-dos_enjoyer
    @average_ms-dos_enjoyer 4 หลายเดือนก่อน +3

    It would be interesting to see similar breakdowns/criticisms of the other big 3D engines approaches to visual optimizations (Unity, Crytek, maybe even Godot at this point)

  • @samnwakefield2032
    @samnwakefield2032 หลายเดือนก่อน +10

    He won't be targeted by anyone. He is speaking facts about the trickery that gaming platforms use to push us into buying more expensive hardware. Keep up the good work, kid, you are doing well; don't worry about anyone.

    • @mrman6035
      @mrman6035 25 วันที่ผ่านมา +2

      It's crazy looking at forums for new GPUs and seeing people argue they need stronger ones for current-generation games, when it's on the developers to better optimize games made with UE5.

  • @Dom-zy1qy
    @Dom-zy1qy 4 หลายเดือนก่อน +3

    I am so glad I clicked on this video. I'm a noob when it comes to graphics programming, but I've learned quite a lot just hearing you talk about things. I didn't know shaders are run on quads; I thought shaders were per-pixel. Maybe the reason relates to the architecture of GPUs? Some kind of SIMD action going on?

    • @jcm2606
      @jcm2606 หลายเดือนก่อน

      It's because of how texture sampling with automatic mipmap selection works. To figure out which mip level to sample from, you need to know how far apart the sampling points are between adjacent pixels on the screen. Ideally you want as close to a 1:1 mapping between pixels on the screen and texels sampled from the texture, so that moving one pixel across the screen moves as close to one texel across the texture as possible.
      Since pixels run in quads, this process is basically free. Pixels in a quad execute in lockstep with each other within the same SM/CU on the GPU (alongside the other threads within their thread group, though that's getting into the weeds), and so each pixel can access the data of all other pixels within the same quad, assuming they've taken the same branch through a uniform control flow region (a fancy way of saying they're in the same section of code).
      This lets the GPU access the texture coordinates of all four pixels in the quad at the same time, without any additional work. The GPU can use those texture coordinates to figure out how pixels on the screen map to texels sampled from the texture, so it can calculate the optimal mip level to sample from.
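
      As a worked example of that last step (illustrative, not any driver's exact code): scale the UV deltas between quad neighbors into texel space, and the log2 of the larger footprint is the mip level.
      ```cpp
      #include <algorithm>
      #include <cmath>

      // du_dx/dv_dx: UV change from this pixel to its horizontal quad neighbor;
      // du_dy/dv_dy: same for the vertical neighbor.
      float MipLevelFromQuad(float du_dx, float dv_dx, float du_dy, float dv_dy,
                             float texWidth, float texHeight)
      {
          // Footprint of one screen-pixel step, measured in texels.
          float lenX = std::hypot(du_dx * texWidth, dv_dx * texHeight);
          float lenY = std::hypot(du_dy * texWidth, dv_dy * texHeight);

          // One texel per pixel -> mip 0; each doubling -> one mip coarser.
          float footprint = std::max(lenX, lenY);
          return std::max(0.0f, std::log2(footprint));
      }
      ```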

  • @kitsune0689
    @kitsune0689 2 หลายเดือนก่อน +2

    The main problem is that foliage with traditional LODs looks really bad unless it's stylized like BotW, Genshin, Pokemon, etc. Nanite is probably not a performance upgrade, but it solves the biggest immersion-killing problem in games with open worlds/big zones: pop-in. If you're traveling at high speed, foliage looks awful, and far distances look awful because of billboards.
    It's probably not the be-all end-all solution, but for anything with leaves I'd say it's worth the cost.

  • @therealvbw
    @therealvbw 3 หลายเดือนก่อน +4

    Glad people are talking about these things. You hear lots of chatter about fancy UE features and optimisations, while games get slower and look no better.

  • @grzes848909
    @grzes848909 หลายเดือนก่อน +7

    Greed and laziness. It's always those 2.

  • @AlexKolakowski
    @AlexKolakowski 3 หลายเดือนก่อน +56

    You didn't really touch on Nanite in combination with Lumen. Part of the benefit of this workflow is that runtime GI doesn't require any light baking. The baseline cost is higher, but the dev hours saved by not having to optimize lighting are worth leaving older cards behind for some devs.

    • @manoftherainshorts9075
      @manoftherainshorts9075 3 หลายเดือนก่อน +18

      "Why would we work if we can make players work more to afford better hardware for our game?" - Unreal developers circa UE5 release

    • @JensonTM
      @JensonTM 3 หลายเดือนก่อน +8

      horrible take

    • @gelisob
      @gelisob 3 หลายเดือนก่อน +7

      Agreed, truly horrible take. Have you not seen game development prices? Do you want more devs left jobless and more projects cancelled because "there's a way to make it slightly better" for 300% of the development time? Yeah, I think: get the card, or accept a few fewer frames and the project actually happening.

    • @gamechannel1271
      @gamechannel1271 3 หลายเดือนก่อน +9

      There's plenty of single-person developer teams who make very optimized and good looking games in a timely manner with traditional techniques. Your excuse basically boils down to "skill issue".

    • @realmrpger4432
      @realmrpger4432 2 หลายเดือนก่อน

      @@manoftherainshorts9075 For PC gamers, yeah. But consoles are more-or-less a fixed cost for gamers.

  • @ozonecandle
    @ozonecandle 3 วันที่ผ่านมา

    Commenting on all your vids. You're explaining so clearly and robustly what all of us have been thinking/feeling for a while now.

  • @riaayo5321
    @riaayo5321 3 หลายเดือนก่อน +4

    "It would have to be free" and thus the unsustainable, artificially low cost of entry for AI products that then do not actually generate profit marches on.
    I do appreciate the in depth look at nanite's problems, I don't mean to sound sour on that. But AI is in a huge bubble. Companies are going in on them at an already artificially low cost of entry and are *still* losing money. It's just not sustainable, and I'm not sure "make these tools exist so our less powerful gpus have more value because games are more optimized" is a good enough return on investment for AMD or Intel.
    I'm not saying I know the answer other than Epic just needs to not be sunsetting older, working methods of optimization.

    • @sylphianna
      @sylphianna 2 หลายเดือนก่อน +1

      Dog heard the word AI and got scared.
      This is the kind of thing it is actually supposed to be used for: making an approximation of something we already have, which is the literal main driving force behind LODs. It fits especially because this isn't meant to be a replacement for or better version of what we have, but a lower-quality one. That is the folly of what companies are trying to do with generative AI: making a shittier version of things when they really shouldn't expect it to be better or more convenient, since they mostly just create more work turning whatever the generative AI spat out into something workable, when it would be easier in the end for a human to just make the original asset and get a better outcome from the start.
      Meanwhile, what the AI suggested here would do is, in essence, the opposite of all that nonsense.

    • @sylphianna
      @sylphianna 2 หลายเดือนก่อน +1

      tbh the fact we haven't already developed AI that could make LODs before all this gen AI stuff made people scared of the word is a tragedy

    • @jcm2606
      @jcm2606 หลายเดือนก่อน

      @@sylphianna I don't see where riaayo said he was scared? He just said releasing a model that can do this for free would be unsustainable, which it is. AI-driven LOD generation is still an open problem trying to be solved, so any attempts to generate LODs with AI currently would be basically lighting money on fire, since the results would be unusable until a breakthrough happens. If you don't even have a revenue stream to feed that fire, then you don't have a model because you can't afford to build, train and deploy one.

    • @sylphianna
      @sylphianna หลายเดือนก่อน

      @@jcm2606 it is not generative AI so the cost is not so high

    • @jcm2606
      @jcm2606 หลายเดือนก่อน

      @@sylphianna ... deep learning in general is a money pit, not just generative AI. Maybe educate yourself on how people actually design and train deep neural networks before spouting shit like this.

  • @raynel8495
    @raynel8495 7 วันที่ผ่านมา +4

    So AMD and Intel are in on it as well, is that what you're implying at the end of the video? 🤔

  • @MrSofazocker
    @MrSofazocker 3 หลายเดือนก่อน +20

    What this fails to capture is that Nanite uses a software rasterizer, which doesn't suffer from quad overdraw at all.
    Small enough clusters of vertices are offloaded to a software rasterizer and geometry assembler.
    The performance degradation most likely comes from not using GPU Resizable BAR, or from other fudging of the measurements.
    Epic could do a better job of providing information about incompatible project settings, etc.
    But what it does is allow you to put way more polygons on your screen; that's still true.
    Bringing an 8th-gen game in and enabling Nanite won't give you more performance; it could even give you worse performance.
    Also, even today, if you want to start a game project, use 4.26 like they tell you.

  • @etherweb6796
    @etherweb6796 3 หลายเดือนก่อน +2

    This was probably a long time coming; the claims about Nanite were definitely overpromising. IMO, if you want good and efficient optimization, it needs to be done by hand, considering all the elements that will be on screen.

  • @imphonic
    @imphonic 4 หลายเดือนก่อน +41

    This kind of thing has been bugging me for such a long time and it's good to know that I'm not completely insane. I'm currently using Unreal Engine 5 to prototype my game, using it as a temporary game engine (like how Capcom used Unity to build RE7 before the RE Engine was finished). My game will be finished on a custom game engine, which I will open-source when I'm finished. I don't want my debut to be ruined by awful performance and questionable graphics quality. I currently target 120 FPS on PS5/XSX, not sure what resolution yet, but all I know is that we're in 2024 and a smeared & jittery 30 FPS is simply unacceptable.
    I'm not trying to compete with Unreal/Unity/Godot, but I am interested in implementing a lot of old-school techniques which were very effective without destroying performance, while also exploring automated optimization tools rather than pushing the load onto the customers that make my career possible. The neural network LOD system is intriguing, and it might not be perfect, but it might still be a net improvement, so I'll keep that one in mind.
    Edit: I might not be able to finish the game on the custom engine, and might just have to bite the bullet and ship on UE5. Game engine development is simply super expensive and I don't have graphics programming experience. That doesn't mean it won't happen - it's still possible if I can, say, find a graphics programmer - but I'm no longer comfortable guaranteeing that statement. However, I will be shipping future games on my custom engine to avoid the UE5 problem from plaguing future ones.

    • @4.0.4
      @4.0.4 4 หลายเดือนก่อน +11

      You're either a genius or insane. Much luck and caffeine to you regardless (we need more rule breakers).

    • @MrSofazocker
      @MrSofazocker 3 หลายเดือนก่อน +4

      Starting a game project in UE5... good luck with that.
      They even tell you to stay on UE4.26.
      Seeing you playing more Minecraft in your recent video uploads, I call cap.
      If you just want an editor, you can still use Unreal Engine without the engine part and write your own render pipeline.
      There's no point in rewriting your own graphical editor, asset system, sound system, tools, and packaging for PS and XSX... especially getting it all to work.
      I imagine you don't even have a dev console for any of those, so good luck getting one as an indie dev.

  • @bartekburmistrz8679
    @bartekburmistrz8679 12 วันที่ผ่านมา

    I was pretty confident that unreal devs mentioned they use their own rendering pipeline that eliminates the quad issue

  • @Possessed_Owl
    @Possessed_Owl 3 หลายเดือนก่อน +3

    First step after UE installation - turning off nanite and lumen.

  • @eslorex
    @eslorex 4 หลายเดือนก่อน +40

    You should definitely make a general optimization guide for UE5 as soon as possible. I was barely able to find valid information about optimization in UE5, or about how well Nanite performs in different scenarios. I'm so glad I finally found someone who explains it with reasons.

    • @BoarsInRome
      @BoarsInRome 3 หลายเดือนก่อน +1

      Agreed!

    • @VolumeProfileHelp
      @VolumeProfileHelp 2 หลายเดือนก่อน

      @@BoarsInRome Double Agreed!

  • @ArtofWEZ
    @ArtofWEZ 3 หลายเดือนก่อน +6

    I see Nanite like Blueprints. Blueprints run slower than pure C++, and Nanite runs slower than traditional meshes, but both are a lot more fun to work with than the traditional ways.

  • @devonjuvinall5409
    @devonjuvinall5409 3 หลายเดือนก่อน

    Great watch!
    I would also recommend Embark's example-based texture synthesis video. They get into photogrammetry and their testing of the software for 3D props. It's just rocks using displacement maps, but I think the whole video could be relevant to this situation. I don't know enough to be confident though, haha, still learning.

  • @fluffy_tail4365
    @fluffy_tail4365 4 หลายเดือนก่อน +4

    Aren't mesh shaders still a bit unoptimized and slower compared to a compute shader + draw indirect? I remember reading something about that. The idea of compiling LODs together still holds anyway.

    • @internetexplorer781
      @internetexplorer781 3 หลายเดือนก่อน

      I guess it depends on hardware and API, but on my hardware with Vulkan, mesh shaders/meshlets are 4-5x faster than the traditional pipeline, and with task shaders you can do some pretty interesting culling techniques, etc. I think I read in Epic's roadmap that UE will eventually move to this pipeline and ditch vertex shaders completely; Nanite is, IIRC, already partly using that new pipeline.

  • @koctf3846
    @koctf3846 หลายเดือนก่อน +2

    Nanite advocates kitbashing, kitbashing brings worse performance, and worse performance means needing to buy new hardware.
    That's why we often see triple-A titles with similar graphics that require 4x more computing power than before.

  • @Madlion
    @Madlion 4 หลายเดือนก่อน +12

    Nanite is different because it saves development cost by cutting the time needed to create LODs; it's a simple plug-and-play system that just works.

  • @Neosin1
    @Neosin1 2 หลายเดือนก่อน +2

    This is exactly what I was telling people, but no one believed me!
    I taught 3D modelling at an Australian university 15 years ago, and back then we modelled everything by hand, which meant our models were very low-poly and optimised!
    Nowadays, devs just scan in models of hundreds of millions of polygons with one click, call it a day, and expect the software to do all the optimisation!
    This is why UE5 games run like garbage!

  • @happydappyman
    @happydappyman หลายเดือนก่อน +12

    We've been spoiled by blazing fast hardware to the point that we're now getting games that look worse AND run worse than their predecessors. "It's fine, everyone will be playing on at least a 3070 anyway".

    • @BlackParade01
      @BlackParade01 17 วันที่ผ่านมา

      What games look worse and run worse?
      I mean yes they run worse but they often look good as well.

    • @baronsengir187
      @baronsengir187 17 วันที่ผ่านมา

      What games are you all playing 🤣

    • @kaimaiiti
      @kaimaiiti 11 วันที่ผ่านมา

      3070 doesn't have enough vram to play Indiana Jones on anything above low settings at 1440p 😂

    • @happydappyman
      @happydappyman 11 วันที่ผ่านมา

      @@kaimaiiti yep, there it is. Minimum specs is now a 3070 lol

    • @BlackParade01
      @BlackParade01 11 วันที่ผ่านมา

      @@kaimaiiti that is nonsense.
      Even the 3060 plays Indiana at 1440p 60 fps with HIGH settings

  • @robosergTV
    @robosergTV 7 วันที่ผ่านมา +1

    works fine on my 5090 RTX

  • @B.M.Skyforest
    @B.M.Skyforest 3 หลายเดือนก่อน +14

    What many seem to forget is that DLSS and other upscaling tech were meant to let games run on slow hardware. Now they're required to be ON if you want a nice framerate on your top-notch PC at ultra settings. It always makes me laugh and feel sad at the same time, seeing 2010s-level graphics at barely 30 fps on modern machines. We had better-looking and faster games back in the day.

  • @Wkaelx
    @Wkaelx 19 วันที่ผ่านมา +1

    As a wise man once said: "When you abstract things you don't need to abstract you end up fucking everything up" - Maurice "Romance empire" 2077.

  • @cheesybrik
    @cheesybrik 3 หลายเดือนก่อน +7

    I really like this video, except for your AI solution. Coming from that space, I can tell you don't have a lot of experience with neural nets. You need concrete data to achieve training at a scale that's actually usable, and to do that you would have to take LOD models from other games. You really think that many game studios are just going to hand over their model data for free? Especially when they know you need their data? The real problem is how you plan to get the patterns for the net to learn from; you'd need millions of cases for it to become somewhat on par with industry modelers. Not to mention the inherent massive jump in complexity when working in a 3D space. It just seems like a naive proposal that I would like to see you flesh out with concrete plans forward.

    • @ThreatInteractive
      @ThreatInteractive  3 หลายเดือนก่อน +2

      Re: You really think that many game studios are just gonna give over their model data for free?
      We never stated studios should give away free reference LOD data. We were extremely specific about who should invest in this, since 6-billion-dollar Epic Games won't: 11:11

  • @L1qu1d_5h4d0w
    @L1qu1d_5h4d0w 26 วันที่ผ่านมา +2

    Thank you so much for pointing out these obvious flaws of UE5. I am not a programmer/dev, but I knew from the get-go that UE5 was a fail for consumers. UE5 is the sole reason my 24GB-VRAM 3090 Ti makes any sense nowadays, while the average is what, 6-8GB? Games look sterile, and all the so-called high-quality assets look awful because upscaling and anti-aliasing are disastrous in UE5, or at least in what the industry chose to utilize. Why even bother with all of this when the average PC can't even run such games at 1080p on high settings? It's catering to the top 5% of PC users and not even delivering at that. My rig is suffering from these games, lol... a 3090 Ti, a 13600K, and 32GB of RAM aren't enough for AAA games at max settings at 1440p/165Hz, which is the "standard" for "high end" to this day.
    I also quite frankly believe that UE5 enables laziness in devs, and it takes away uniqueness unless lots of resources are poured into that subject. Hence, again, games made in UE5 look sterile and similar without deep experience with the engine, and I would also add passion to the equation. Non-consumer-oriented management doesn't help the industry or consumers, to add salt to the wound. Many studios are shutting down, but the industry just doesn't seem to understand the whys... We want well-optimized games before anything else nowadays. Gamers are getting ever more tired of paying full price for games we consider unfinished, many of which won't be finished before the end of their lifespan. Especially "older gamers", who still remember the times when CDs were a thing and updates/patches were practically non-existent, which forced devs to really polish their games before putting them on the shelves, despise this behaviour, and the younger ones are catching up in not tolerating it. It also doesn't help that AMD can't keep up with all of this "change" graphics-wise, and Nvidia is screwing over consumers: if AMD can run frame generation on my 3090 Ti, Nvidia should be able to as well, but they'd rather paywall it by making it exclusive to the next gen (40 series), which again got more expensive... Soon we will need $4k+ PCs to run games at max settings; ridiculous considering how it used to be and how consoles are (although even they are getting more and more expensive, so... yeah).
    I think I speak for many gamers out there, and hopefully this input might help devs grasp how we see this topic.

  • @serijas737
    @serijas737 2 หลายเดือนก่อน +3

    Nanite is what you use when you want to replace 3D artists who understand topology.

  • @legice
    @legice 3 หลายเดือนก่อน +2

    Finally, a video that talks about Nanite!
    Honestly, Nanite saved a project, because we had 100 meshes at 20 million polys each, and Nanite made it work on machines that had no right to be able to run it, but…
    It is in no way a silver bullet, and for day-to-day use as a quick LOD tool it is not suitable.
    As a modeler, there are rules to modeling, and if you do it right, you need a day at most to optimise a big-ass mesh, which you know how to do, because you made it!
    "Quick" and dirty modeling exists, with optimisation down the road, but when you are making the prop, you KNOW, or at least understand, what to do and how to do it so that it is the least destructive.
    Non-destructive modeling exists, but it brings different problems, such as time, approach, and workflow, and unless the job requires it, you don't use it, as it's a different beast altogether.
    You can model a gun any way you want, but a trash can, a house, something non-hero, non-changing, with measurements set in stone, you do the old-fashioned way.
    Texture and prop batching is simple, but being good at it is not.
    I love Lumen, but it is clearly still in the early stages and needs additional work to be optimized for non-Nanite, optimized workflows.
    I'm just so happy I wasn't the only one going insane about this.

    • @XCanG
      @XCanG 3 หลายเดือนก่อน +1

      I have a comment above with my opinion on this, but I'm not working in game development, so my knowledge is limited. Since you are a modeler, I have a few questions for you:
      1. How long does it take to make a model for Nanite vs. for LODs?
      2. How many years of experience do you have as a modeler? I ask because pros make things much quicker, so the difference between one and the other may vary with experience.
      3. How aware are you of the optimizations mentioned in the video? My guess is that he has at least 6 years of experience and is probably a senior gamedev, but it's hard to imagine new gamedevs having that knowledge.
      4. Do you think Nanite is useful right now? Do you think it will be useful in the future (maybe with some polishing and fixes)?

    • @legice
      @legice 3 หลายเดือนก่อน

      @@XCanG Sure, I can answer those.
      - Nanite is obviously just a click, so nothing to do there, really. As I said, when modeling you ALWAYS plan ahead, so when you do retopo it's just a matter of how well you pre-planned your steps. Just as there are rules to modeling, good practices have been in place for a long time; if you follow them, making a prop takes a bit longer, but retopo/LODs then take only a fraction of the time. I can't really time this, because it's how you should be modeling regardless, unless you model for visualisation or film only, where assets don't need to be optimized.
      - Professionally, none really, as everything I have done is personal work and game jams, the game dev industry being a bitch to get into. There are different steps and approaches, but in the end it doesn't matter, as long as you deem yours the best approach time-wise, modeling-wise and within budget. Most people share the same workflow, though, because if you hand your work off to somebody or leave the company, others need to be able to take it and adapt/finish it.
      - Very little; I barely understood any of it, because I learned by doing and adapted my workflow based on the information I found while searching for solutions. Honestly, you could basically skip everything he said, in the same way you can skip the theory of how light works in games. As a modeler you don't really need to know exactly how it works; you get a feel and a rough understanding of the how and why. He goes much deeper into the technical side, which is what tech artists and programmers deal with. As a senior modeler you touch on it, but in the end your job is other things: modeling well, texture packing, instances, draw calls, modular design... In some areas and studios this gets mixed together, and I 100% guarantee you that most people don't know it, even seniors, but they compensate in other areas.
      - Nanite is already useful and will become more useful, but it is limited. The fact is that Nanite is a constantly active process running on screen, whereas LODs are one-and-done (a small sketch of that follows this comment). LODs will never go away, but our dependency on them will shrink, as less and less optimization will be needed to make games run well, because computers keep getting better.
      As I said, Nanite straight up saved a project of ours while it was still in alpha/beta, so for trying out how something looks in game, stress-testing a rig pre-optimization, or the like, it has its place, but it should not be overused and it has limitations. You can't use it on a skeletal/rigged model, for example, as it relies on a consistent poly mesh.
      Take everything I said with a grain of salt. The video explained everything BEAUTIFULLY, and I now understand things I have unknowingly been doing for years; I never really grasped why, but I knew they worked.
      I learned things my way, studios teach their way, opinions clash, and in the end nobody really knows what they are doing, only how they feel it should be done, and the final result dictates that.
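      To make that contrast concrete, here is a minimal C++ sketch of the classic discrete LOD pick referred to above; the screen-coverage metric and all names are hypothetical, not any particular engine's API:

          #include <cmath>
          #include <vector>

          struct LodLevel { float minScreenCoverage; int meshIndex; }; // sorted fine -> coarse

          // Returns the mesh to draw this frame; assumes lods is non-empty.
          int SelectLod(const std::vector<LodLevel>& lods, float boundingRadius,
                        float distance, float verticalFovRadians)
          {
              // Rough fraction of the viewport height the bounding sphere covers.
              float coverage = boundingRadius /
                               (distance * std::tan(verticalFovRadians * 0.5f));
              for (const LodLevel& lod : lods)
                  if (coverage >= lod.minScreenCoverage)
                      return lod.meshIndex; // most detailed LOD that still qualifies
              return lods.back().meshIndex; // too small on screen: coarsest LOD
          }

      The pick runs once per object per frame and costs next to nothing afterwards, which is the one-and-done behaviour described in the answer above.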

  • @SpookySkeleton738
    @SpookySkeleton738 4 months ago +10

    You can also reduce draw calls using bindless techniques, like what they did in idTech 7: they are able to draw the entire scene with just a handful of draw calls.
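    For illustration, here is a hedged C++ sketch of the submission half of that approach (the general Vulkan multi-draw-indirect technique, not idTech 7's actual code): once bindless resources remove per-material rebinding, a single indirect call can submit the whole scene.

        #include <vulkan/vulkan.h>

        // indirectBuffer holds one VkDrawIndexedIndirectCommand per mesh,
        // filled by the CPU or by a GPU culling pass. With a shared vertex/
        // index buffer, one pipeline and one bindless descriptor set, this
        // single call replaces thousands of per-mesh vkCmdDrawIndexed calls.
        void RecordSceneDraws(VkCommandBuffer cmd, VkBuffer indirectBuffer,
                              uint32_t drawCount)
        {
            vkCmdDrawIndexedIndirect(cmd, indirectBuffer, /*offset=*/0,
                                     drawCount,
                                     sizeof(VkDrawIndexedIndirectCommand));
        }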

    • @mitsuhh
      @mitsuhh 3 months ago +3

      What's a bindless technique?

    • @SpookySkeleton738
      @SpookySkeleton738 3 months ago

      @@mitsuhh With Vulkan (and, I believe, also OpenGL 4.6), you can bind textures to a sparse descriptor array in your shaders that can be modified after pipeline creation, effectively letting you swap textures in and out on the fly. You then put all your material data in a shader storage buffer and use a vertex attribute to index into that storage buffer, which in turn indexes into your texture array. It basically means you don't have to bind new textures when rendering meshes that use different textures, as long as they use the same shader, which can eliminate a TON of command buffer recording and submission in your GPU pipeline.
      "Bindless" is technically a misnomer, since there are obviously still samplers bound to a descriptor, but it is accurate insofar as you don't have to rebind them unless you are loading new textures in.

    • @mitsuhh
      @mitsuhh 3 months ago

      @@SpookySkeleton738 Cool

  • @Anewbis1
    @Anewbis1 3 months ago +2

    Great content, thank you for this! One question comes to mind: the industry has decades of experience with traditional rendering methods, while Nanite is only a few years old. Do you think that is a factor to take into consideration when comparing? Do you see Nanite being far more efficient in 5-10 years?

    • @ThreatInteractive
      @ThreatInteractive  3 months ago +1

      Great question.
      The only way for Nanite to improve would be to change it so drastically that it would be a bit of a cheat to still call it "Nanite". Our video is talking about the same algorithm across the 5 iterations it has received.
      Nanite is an implementation of a couple of different concepts, such as visibility buffers, mesh compression, and cluster culling, but the industry could meet (and already has met) those same goals with better-performing systems, for instance deferred texturing in the Decima Engine (the visibility-buffer idea is mocked up below).
      How well Nanite runs on whatever hardware comes later doesn't matter. The industry has been given its target hardware (9th gen), and the results we're getting from UE5 Nanite-enabled games are a joke in terms of visuals and reasonable potential. If we waste potential now, next gen (10th) and onward will lose value as well.
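      For readers unfamiliar with the visibility-buffer idea mentioned above, here is a tiny CPU-side mock of the two-pass split, purely conceptual and not Decima's or Epic's actual implementation: pass 1 rasterizes only IDs, and pass 2 shades each pixel exactly once.

          #include <cstdint>
          #include <vector>

          // Pass 1 output: per-pixel IDs instead of shaded colors.
          struct VisSample { uint32_t instanceId; uint32_t triangleId; };

          struct Color { float r, g, b; };

          // Hypothetical material fetch; a real renderer reads vertex and
          // material buffers on the GPU using these IDs.
          Color LookupAndShade(uint32_t instanceId, uint32_t triangleId)
          {
              float v = float((instanceId * 31u + triangleId) % 256u) / 255.0f;
              return { v, v, v }; // placeholder shading
          }

          // Pass 2: the expensive material work runs once per pixel, so any
          // overdraw during ID rasterization never multiplies shading cost.
          std::vector<Color> ShadePass(const std::vector<VisSample>& visBuffer)
          {
              std::vector<Color> out;
              out.reserve(visBuffer.size());
              for (const VisSample& s : visBuffer)
                  out.push_back(LookupAndShade(s.instanceId, s.triangleId));
              return out;
          }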

  • @pchris
    @pchris 4 months ago +16

    I think easy, automatic optimizations that are less effective than manual ones still offer some value. The fewer resources a studio has to dedicate to technical things like this, the faster it can make games, even if they look slightly worse than if it took absolutely full advantage of the hardware.
    Every other app on your phone being over 100 MB for what is basically a glorified web page shows how dirty-but-easy optimization and faster hardware mostly just enable faster and cheaper development by letting developers be a little sloppy.

    • @theultimateevil3430
      @theultimateevil3430 3 months ago +2

      It's great in theory, but in practice development is still expensive as hell and the quality of the products is absolute trash. It's the reason the volume control in Windows lags for a whole second before opening. The same stuff that worked fine on Windows 95 lags now. Dumbasses with cheap technology still make bad products for the same price.

    • @pchris
      @pchris 3 months ago +3

      @@theultimateevil3430 When you're looking at large products made by massive publicly traded corporations, you should never expect any cost savings to be passed on to the consumer.
      I'm mostly talking about indies. The cheaper and easier it is to make something, the lower the bar of entry, and the more you'll see small groups stepping in and competing with the massive, selfish corps.

  • @TechArtAid
    @TechArtAid 3 months ago +1

    Do you maybe know why non-Nanite meshes are slower in VSM? I know it's stated in the UE5 docs, but... why? What's the origin of the problem here?

  • @zsigmondforianszabo4698
    @zsigmondforianszabo4698 4 months ago +3

    I'd rather think of Nanite as a magic wand for those who don't want to deal with mesh optimization and just want consistent performance across the board without manual work. Right now this hits us heavily, but as soon as the technology evolves and everyone has access to modern hardware that can utilize the system, its ease of use and decent performance will outweigh these growing pains.
    About the development side: a 4050 compared to a 1060 is a ~70% performance uplift in 7 years, roughly 10% a year including hardware and software development (see the arithmetic below), so in 5 years Nanite may work out really well for fast game development and consistent performance.
    PS: we need to mandate that gamedevs give performance statistics about the in-game scene and the upscaling used when releasing a trailer :DD
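    For reference, compounding that figure instead of dividing it: 1.70^(1/7) ≈ 1.079, i.e. closer to 8% per year, while a true 10% per year would compound to 1.10^7 ≈ 1.95, nearly double, over the same 7 years.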

  • @astrea555
    @astrea555 4 months ago +2

    Incredibly in-depth video once again, really inspiring. I'm only dabbling in game dev, but this explains so much about what we've started to see in recent games!

  • @samiraperi467
    @samiraperi467 2 months ago +4

    2:51 That is NOT how you write a quarter. What you wrote is a quarter OF A PERCENT.

  • @FVMods
    @FVMods 3 months ago +2

    Minimalist, quality, straight-to-the-point narration, tidy video editing with relevant information, engaging content. Great channel so far!

  • @robbyknobby007
    @robbyknobby007 4 months ago +30

    I usually find people excusing Unreal's performance in titles by mentioning that developers have the choice not to use techniques like Lumen and Nanite, but Unreal 5.0 deprecated TESSELLATION, so where was the developer choice there? You should really do some tests comparing tessellation, parallax occlusion mapping, and Nanite. If tessellation doesn't perform better than Nanite, then Epic did the community a service by replacing it. But if tessellation is faster, then Epic really is to blame for a lot of the shitty performance.

    • @vitordelima
      @vitordelima 4 months ago +6

      Tessellation can be faster and better looking than Nanite, but nobody cared to code methods to export it from 3D modelling software or to convert existing meshes into it.

    • @vitordelima
      @vitordelima 4 months ago +1

      @@RADkate There is no easy way to create content that uses it.

  • @dwupac
    @dwupac 16 days ago

    This channel is much needed. It's in our best interest as gamers to support it.

  • @AdamTechTips27
    @AdamTechTips27 4 months ago +9

    I've been feeling and saying the same thing for years; I just didn't have the concrete testing. This video proves it all. Thank you!

  • @DimosasQuest
    @DimosasQuest 9 days ago

    I work as a VR dev, and performance is our number one goal, ahead of visual fidelity. It is sometimes shocking how great-looking a game you can make, with a lot of effort, that still runs really well using good traditional methods. These methods do take a lot of time, however. We experimented with Lumen and Nanite and quickly realized what a shit show those two are for VR performance; to boot, the visual artefacts they create can actually contribute to VR nausea. Short of creating our own engine, we've moved back to Unity for now, because it is a much more flexible platform to work with.

  • @CaptainVideoBlaster
    @CaptainVideoBlaster 4 months ago +22

    This is what I imagine the Unabomber reading his own manifesto would sound like if it were about video game development. In other words, I like it.

  • @陈子正
    @陈子正 3 months ago +1

    With the same amount of work and the same graphical quality, could you recreate the relief scene of the Yellow Brow King boss fight in Black Myth without using Nanite, and how much FPS could you achieve?

  • @Miki19910723
    @Miki19910723 3 months ago +9

    Technically everything you say is right from a certain perspective, but your conclusions are very weird, especially the AI one, given the quality problems AI usually has. I think we should not confuse what some YouTubers say with how the feature works and why it was developed; if anyone didn't catch it, it is exactly for the workflow. Nanite doesn't render more triangles, it renders them more where they are needed. And yes, it will lose to a very well hand-optimised scene, but the point is that you don't have to do that work. Also, the examples you showed are actually rather bad cases for Nanite. The point is not that it renders a single triangle faster; it's about the complex scene and the work required.

  • @AGuy-vq9qp
    @AGuy-vq9qp 4 days ago

    DLSS/XESS being able to do actually good-looking TAA, AND to produce a better image than the one they're upscaling from, relatively cheaply, is a good thing, actually.

  • @2strokedesign
    @2strokedesign 4 months ago +4

    Very interesting tests, thank you for sharing! I was wondering about memory consumption, though: I would think a LOD0 Nanite mesh is lighter than, let's say, LOD0 through LOD7.

  • @TheShitpostExperience
    @TheShitpostExperience 3 months ago +1

    I may be missing something, but the point of Nanite was not that it would be faster than regular LODs; rather, that you could keep all your assets as high-poly objects and Nanite would take care of all the LOD work in the engine, right? Saving time for the artists/developers.
    Performance is not as good as in a properly built game (off the top of my head I can't remember whether they claimed it would be faster; they may have), but the importance of UE5 in general has been that you can have everything done inside the UE5 editor, which streamlines the development pipeline for small/medium-sized studios.
    Arguing regular LOD vs. Nanite performance, in the context of using Nanite in UE5, is somewhat pointless, because Nanite is a convenience feature, not a performance one.