Thanks for hanging out folks! This quick node setup is definitely just for fun and not a 1:1 Nanite (relax, I'm not that smart!). Someone has also rightly pointed out that this method is only preferable when using lots of unique meshes. For any repeated meshes, using linked objects (Alt+D) is still way more performance friendly! And texture and UV friendly! So take it with a grain of salt and follow other optimization best practices! I hope Blender eventually gets some kind of native functionality like Nanite to help with viewport performance though!
Do check out a recent vid made by 'stache' on scene optimisation. It's excellent!
the best solution would have to be the decimate node built into geometry nodes itself
yeah calling this "nanite" is a bit over the top, but gets the idea across.
what nanite actually does involves a bunch more pre-calculation. they break models into chunks and swap the chunks, with clean 'seams' between the mesh chunks. it avoids the jitter that comes from most progressive decimation LoD techniques, and takes care of the processing problem that was slowing you down. i have no idea how you would implement that in blender, but it's probably more than geometry nodes can handle.
this is a pretty cool basic progressive decimation technique though. good video.
@@IIIDemon yeah fair call. I might have got a bit over excited by the concept😅
This is not Nanite, it's just dynamic LOD.
True! Just a fun experiment 🤓
Was going to say…
[Kind Of] he added in the title.
@@musikdoktor while being a fun experiment, I wonder if it actually improves performance or requires more processing to run dynamically?
That's literally what nanite is
I don’t know if someone already said but at 4:00 you can use the “Active Camera” node and connect it in the object info socket so you will always be good with any camera you’re using
Also you should probably use 1 / distance to get a more "logarithmic" curve, because distances of 100 and 101 matter way less than 1 and 2
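Something like this, as a quick Python sketch of the two mappings (the constants are made up):

```python
def merge_threshold_linear(d, max_t=0.05, far=100.0):
    # Linear: the threshold changes just as much from 100 m to 101 m
    # as it does from 1 m to 2 m.
    return max_t * min(d / far, 1.0)

def merge_threshold_inverse(d, max_t=0.05, near=1.0):
    # 1/distance: big changes near the camera, almost none far away.
    # d=1 -> 0.0, d=2 -> 0.5*max_t, d=100 -> 0.99*max_t, d=101 -> ~0.9901*max_t
    return max_t * (1.0 - near / max(d, near))
```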
This is better than nanite, you FOOL! You’ve already WON!
😂
For the decimation problem of recalculating all assets in the scene using the modifier, you can probably create your own LODs (Levels of Detail) in a collection, and then use geometry nodes to determine which one to display, based on distance. However, this would be similar to LODs in a game engine in the traditional sense, not a smooth transition; game engines traditionally use roughly 4-5 levels of decimation. But you can decide for yourself how many you actually need and place them in a collection.
This way you don't recalculate all decimation on all assets on the fly. It's already precomputed and the geometry nodes setup just decides which one to display at what distance.
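A rough bpy sketch of the pre-baking half of that idea (the collection name and the decimation ratios are just placeholders):

```python
import bpy

def bake_lods(obj, ratios=(0.5, 0.2, 0.05, 0.01)):
    """Pre-bake decimated copies of obj into an 'LODs' collection so the
    geometry nodes setup only has to pick one, not re-decimate on the fly."""
    col = bpy.data.collections.get("LODs")
    if col is None:
        col = bpy.data.collections.new("LODs")
        bpy.context.scene.collection.children.link(col)
    for i, ratio in enumerate(ratios):
        lod = obj.copy()
        lod.data = obj.data.copy()        # unique mesh per LOD level
        lod.name = f"{obj.name}_LOD{i}"
        mod = lod.modifiers.new("Decimate", 'DECIMATE')
        mod.ratio = ratio                 # fraction of the original face count
        col.objects.link(lod)
    # Apply the modifiers afterwards if you want truly static meshes;
    # geometry nodes then just switches between the pre-made copies.
```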
Huh! That's a cool idea. I'm sure there'd be a way to switch between them!
Cool... I wonder if that switch from one LOD to another would be "very" noticeable. Anyways I would love to see something like that on a scene full of meshes.
Cool idea but the execution has a few issues.
1 - You do not need this weird setup with Geometry Proximity and the Cube. What you want is to calculate the distance between the camera and the object origin. Just do that. Take 2 Object Info nodes, use the Self Object node and the Camera as inputs, and retrieve the Distance between the 2 positions with the Vector Math node (sketched in code after this comment).
2 - There is an Active camera node for the Object info, no need to specify your camera object in the node group.
3 - This is not very useful at the moment because it's going to be so laggy: every time you move the camera, GN is going to recompute every object from its 1 million polycount down to the currently needed polycount. To make it more useful I'd first bake a few LODs (all of this can be done in GN procedurally on Frame 1) and cache that. Then when calculating the merge by distance I'd retrieve the closest LOD that has a higher density than what's required and merge by distance from that LOD instead of from the high poly. Also I'd include steps so it doesn't recalculate for tiny movements but every 1 meter for example (although I would not do it linearly but exponentially as the camera gets closer to the object).
More on the Geometry Proximity and the Cube:
What your setup is doing is looking at the Cube and asking "What's the closest Face on the Cube to the Camera Location?" or "Which face has the shortest distance to that Camera?". Then you retrieve that distance and use that to Merge by Distance. That's what Geometry Proximity does: it retrieves the closest mesh component. I hope it's obvious why this is not ideal.
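To make points 1 and 3 concrete, the distance and the stepping boil down to something like this in plain Python/mathutils (the object name is a placeholder):

```python
import math
import bpy

cam = bpy.context.scene.camera
obj = bpy.data.objects["Rock"]  # placeholder name

# Point 1: just the distance between camera and object origin -
# no Geometry Proximity, no helper cube.
d = (cam.matrix_world.translation - obj.matrix_world.translation).length

# Point 3: quantise that distance so tiny camera moves don't trigger a
# recompute, with exponentially coarser steps the further away you are.
step = 2.0 ** math.floor(math.log2(max(d, 1.0)))  # 1, 2, 4, 8, ... metres
d_stepped = round(d / step) * step
```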
Great notes! Thanks for taking the time to respond! I'd love to see a better version of my crude attempt, that's for sure! Do let me know if you find anything cool!
This is Nanite with a huge asterisk. If you render out an animation, I wonder if this would even be faster than using persistent data rendering, aside from having bad performance when moving the camera. Also, one of the Nanite traits that you could have easily replicated is that it targets a specific geometry density on screen, not some arbitrary decimation based on linear distance to the camera.
Very true! I may have got over excited. If you figure out a method to do what you described I'd love to know! 🙏
Excellent work mate! I had a feeling something like this ought to be possible within Blender. Even though it isn't really the same as Nanite, it's kind of the same in principle. Adjusting polygon count based on the distance from camera automatically is a blessing for those of us who have done it manually. I'm still new to geometry nodes but I have seen enough to understand Blender's full potential hasn't been tapped into just yet! Keep it going 🙏🏽
Yeah definitely not a 1:1 transfer lol. Just for fun! I love making fun little tools like this that can speed up my workflow 😊. Thanks for stopping by!
@ Well in the end it did prove to make a difference, especially in rendering time. That 2-second difference compounds: when one is attempting to render hundreds or over a thousand frames, it adds up to some real time saved! Subbed 🙏🏽
I was wondering why Blender doesn't have Nanite or Lumen techniques. These could be incredibly helpful for reducing render times. I'm definitely going to use this! Thanks a lot for sharing this technique!
Thanks! I hope they do some day! This is definitely a crude workaround in comparison. Eevee is getting closer and closer to being realtime and behaving like Lumen though. So at least that I can see becoming a reality!
It's not a realtime program, so accuracy and quality are more important than speed! Nanite is just a fancy automated LOD system, and Lumen is just a realtime lighting system. There's no benefit for them in a 3D tool like Blender, especially Lumen.
It could land soon, once the Vulkan backend is fully implemented inside EEVEE-Next :) See what Jeroen Bakker is doing with meshlets (geometry streaming) for Vulkan
We have a group working with Unreal in our studio. They use Nanite only for viewport optimization and disable it for the final render because it causes so much unwanted flicker and small jumping on mesh parts. Nanite is literally a great tool for gaming, but it's not good for animation.
Might be because Epic has millions to spend on development, but not sure 🤔
This is gold, thanks for sharing it for free. Thanks a lot to YouTubers like you :D
tnx for the tutorial dude super useful
Congratulations, you have reinvented LODs
Good experiment, either way it's a handy way to optimize, thanks!
Thanks, yeah you're right 😆
Really cool and clever!
Thank you!
Genius! Thank you
Haha thank you
This is amazing
Thanks 😊
I think this is THE BEST tutorial! The only thing that kept getting in my way when building big scenes was the amount of lag, even in the viewport. Thanks to your tutorial that shouldn't be an issue anymore.
This was more of a fun experiment really! I would highly recommend you look up a recent video made by the channel 'stache' on scene optimisation if you're having trouble with performance. It's recent, and very comprehensive!
@@jamescombridgeart thanks
1:30 btw you could just add a Position node and an Object Info node (with the object being the camera). The Object Info node has a Location output, so just add a Vector Math node, choose Distance, and plug in the position and the camera's location.
That would cause the output to be a field, and the merge-by-distance node only accepts single values in the distance input. He could take the length of the Location output though.
@anonymousanonymous2284 then you could add another Object Info set to self (this object) and use its location, instead of the Position node.
@@bean_mhm That's one way to do it. Assuming that both of the Object Info nodes are set to Original, it should be functionally identical to plugging the Location output of an Object Info set to Relative into a Vector Math node using the Length operator. I just have a tendency to use fewer nodes if there are two ways of doing things, even though I don't think it makes any difference as far as performance goes.
@@anonymousanonymous2284 good point
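For reference, here's roughly what that final chain looks like scripted with bpy. A sketch assuming the Blender 4.x interface API, with node identifiers from memory, so double-check them in your build; in practice you'd also put a Map Range between the Length and the merge Distance to tame the values.

```python
import bpy

ng = bpy.data.node_groups.new("DistanceMerge", 'GeometryNodeTree')
ng.interface.new_socket("Geometry", in_out='INPUT', socket_type='NodeSocketGeometry')
ng.interface.new_socket("Geometry", in_out='OUTPUT', socket_type='NodeSocketGeometry')

g_in = ng.nodes.new('NodeGroupInput')
g_out = ng.nodes.new('NodeGroupOutput')

info = ng.nodes.new('GeometryNodeObjectInfo')
info.transform_space = 'RELATIVE'          # Location = offset relative to this object
info.inputs['Object'].default_value = bpy.data.objects['Camera']  # placeholder name

length = ng.nodes.new('ShaderNodeVectorMath')
length.operation = 'LENGTH'                # a single value, not a field

merge = ng.nodes.new('GeometryNodeMergeByDistance')

ng.links.new(g_in.outputs['Geometry'], merge.inputs['Geometry'])
ng.links.new(info.outputs['Location'], length.inputs[0])
ng.links.new(length.outputs['Value'], merge.inputs['Distance'])
ng.links.new(merge.outputs['Geometry'], g_out.inputs['Geometry'])
```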
Definitely earned a deserved sub!
Thank you! 🙏
Can you do a new way of culling?
This is very interesting, but unfortunately it doesn't work with instances. That is, instead of having 6 Alt+D copies of the wall (120k polygons in total), we will have 6 copies with 600+k polygons. This solution may be suitable for one very high poly object, but not for a set of objects, because using this method we will only increase the number of polygons with each new copy
Hey great point, you're absolutely right, this method would only be preferable when using Unique meshes. Thanks for pointing this out 👍
To copy the geo nodes modifier to all objects you want to decimate, you can also select all objects and use the 'Copy to selected' button, hidden on the small arrow next to modifier name. This way the existing modifiers on the objects are kept. By the way you can also expose the camera selection in the modifier section by dragging the camera selection in the geo nodes editor to the group input. You can do the same for the map range max value, exposing it for easier use. By the way you can later change the value for multiple objects by selecting the objects and holding alt before clicking the value to change it, which will apply it to all selected objects.
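The same "copy to selected while keeping existing modifiers" trick from Python, if you prefer scripting it (the modifier name "NaniteLOD" is just a placeholder):

```python
import bpy

src = bpy.context.active_object
ng = src.modifiers["NaniteLOD"].node_group   # the existing geo nodes modifier

for ob in bpy.context.selected_objects:
    if ob is src or ob.type != 'MESH':
        continue
    # Adding a new modifier leaves each object's existing modifiers
    # intact, just like 'Copy to Selected' does.
    ob.modifiers.new("NaniteLOD", 'NODES').node_group = ng
```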
Wow love that ALT tip! Thanks!
Well done man
Thank you!
I have one question: Can I set input object to 'Active Camera'?
(Input - Scene - Active Camera)
Maybe!? If you figure something out let me know! Not sure if geo nodes has something native for that!
@@jamescombridgeart Update: I tested it myself and the Active Camera node works!
it's just wow
That's a clever idea. Will it work with animations as well? If the camera recalculates on every frame there might be an issue; however, I think it won't if we set a custom range. What are your thoughts on that?
Yeah I definitely made it with static cam or still frames in mind, and using instances is probably still way more performance friendly lol
@@jamescombridgeart I see
The value of the Merge by Distance node isn't an arbitrary [0-1], but a distance between vertices in scene units. To improve this a bit, calculate the size of a pixel for each mesh (based on the centroid, or some other metric), and use that as the input. So you can say: merge vertices that are about _n_ pixels apart
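Roughly like this; a sketch that assumes Blender's default 50 mm camera (about a 39.6 degree horizontal FOV) and a 1920-wide render, so swap in your own values:

```python
import math

def pixel_size_at(distance, fov_x=math.radians(39.6), res_x=1920):
    # World-space width of one pixel at this distance from the camera.
    view_width = 2.0 * distance * math.tan(fov_x / 2.0)
    return view_width / res_x

# "Merge vertices that are about n pixels apart", e.g. n = 2 at 10 m:
merge_distance = 2.0 * pixel_size_at(10.0)   # ~0.0075 m
```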
I would love to see this!
But does it cause flickering in animation, or longer times between frames because of the geometry nodes updating every frame?
Not sure, to be perfectly honest! I haven't road tested that much, and I'm sure there are better ways to optimise besides this experiment!
thanks!
this is so helpful, thank you
Cheers! Glad you liked it 😊
Is there a node that displays the result from a node? Like a numerical value, rather than just anonymously piping it through?
@@brandosbucket not that I'm aware of, but that's not saying much 😉. Besides checking the vertex count in the viewport stats, that is.
What to do with textures and UVs? Tnx!!!
Ah sorry friend. Since I mainly make still frames for concept art, I hadn't considered many aspects of this node! It's pretty destructive unfortunately
Amazing thanks for this trick.
Cheers! Definitely not a replacement for instancing, but it was fun to figure this out either way 😊
Really great! And what about textures? Cause textures can occupy much more memory than meshes
Yeah this is a pretty crude experiment. A handy feature in Blender: somewhere in the Render tab you can set the maximum texture resolution for the whole scene. Great for bringing down memory usage if you have a bunch of unnecessary 4k+ textures
One question: why not use the "Camera Data" node (it has a distance value) instead of Object Info?
I think this node refers to the active camera. It would be more "out of the box" functionality.
But I'm sure I'm missing something.
No you aren't! I had no idea it existed to be perfectly honest. A few folks have pointed that out. Always something to learn!
@@jamescombridgeart I checked and "Camera Data" appears only in the Shader editor, not in geo nodes (I don't know much more, as I don't have the knowledge).
Don't know if it's possible or not!
When I plug min and max into the group input to access them from the modifier panel, it doesn't work the same. Why so?
Yeah I ran into this too. It's because when you drag into the modifier, it takes those values on a per object/modifier basis. Whereas if you leave them in the geo nodes area, they stay dynamic / more global. Either would work technically, it just depends on your needs
Can we do that with instanced objects? For example, we instance trees on the landscape and we want to 'nanite' the trees.
Not sure! There have been some great suggestions in the comments already though! Could be worth looking into
For instanced objects it's better not to make various versions, because instanced meshes are sent once to the render engine and the engine reuses the same data over and over.
For a forest it's better to use the billboard technique
@jamescombridgeart To get closer to nanite, you might try some way of adding more geometry as you get closer.
Based on my understanding, nanite has "unlimited" geometric detail.
While you'll likely be limited if you can't figure out a way to implement everything, it would be a bit closer than the dynamic LOD you have now.
A very nice first attempt.
Thank you so much for this 🙏🏻
You're very welcome 😁
This is cool. I wonder if there’s a way to decimate a mesh based on its size in the camera, that way you wouldn’t have to fine tune this for every mesh.
That would be cool. Someone smarter than me, get on it!! 😂
It's not usable in any way. Modern viewports can handle a few dozen million triangles easily. And once you start exceeding that and the framerate starts dropping because of it, using your merge by distance setup is going to cost a LOT more performance to calculate than the large triangle count. And also memory, since there will now be multiple permutations of mesh topology, and those meshes therefore can't be instanced. I know it's just for fun, but even within the context of fun it doesn't make much sense :)
True! Better to apply it on large scenes, if you do opt to use it at all.
Maybe use a simple plane for the modifier with a collection input. Hide the collection in the scene and render, and let the geo nodes rebuild the whole collection with dynamic LODs. That way you could link your camera easily. Then you should consider reducing geometry that's not in the viewport. At least add a bake node and bake the result if there is camera movement.
Interesting approach! If you do end up experimenting with it let me know how it goes!
I think every 3D software should have a Nanite function since UE5
afaik Nanite isn't open for everyone to use
Totally!
@@pansitostyle it really should, because it would boost game character design even on low-spec PCs
this would be the bomb for the Godot game engine
this is fucking brilliant
Haha thanks. I'm sure there's more comprehensive and powerful ways to leverage this setup. Have fun!
this is so cool!
So, it slows down mesh calculation but potentially reduces memory consumption. Would be interesting to know if there is flickering in animation
Possibly, I haven't tried an animation with it. Being mainly a concept artist I'm more of a single image guy! I imagine if you dialed it in to look good at the right distance it'd work ok 🤔
We need an animation test. As for now, for static frames / no camera movement it's cool, but Nanite's power is about animation / camera movement and realtime geo, or am I wrong? Anyway, cool tutorial and good creative thinking.
Cheers. Yes you're right. I mainly use blender for still frames for concept art, so for me it has a niche use case. Not sure about other use cases. Definitely feel free to experiment and let me know how it goes!
what do you mean by decimating?
It works similar to the 'decimate' modifier - so I found it natural to use the same terminology!
@@jamescombridgeart i see what you mean now thanks for the quick response much respect
Isn't creating an instance of that object a better option? 🤷
Yes! this was just for fun really
I wonder how well this would work in UPBGE ..... ?
No idea, probably not very well I imagine lol. This was a very crude experiment made mainly with rendering still frames or a static camera in mind
Bro u just made a game changer
Haha I don't know about that! instancing is probably still a better option depending on your needs ;) But I had fun figuring this out regardless!
Merging actually takes more time than rendering.
there’s no gain in nanite without hardware support for it
Not really the same as nanite, but is a cool setup
Yeah totally, a bit of fun really 😉 cheers
jiggery-pokery
This is very cool, and yeah it's not a 1-1 recreation of how "Nanite" works under the hood, but the idea of Nanite sums up as dynamic decimation based on the distance from camera, and that's what you did, kinda. But I do think, based on the Nanite official documentation, you could remake it much closer to how Nanite actually works. Although it would still be limited to a geonodes modifier, and wouldn't change how Blender renders polygons underneath (unless you change the source code of Blender), it could still be a very useful tool for specific purposes.
This is what I found from the official doc of Nanite.
""During import: meshes are analyzed and broken down into hierarchical clusters of triangle groups.
During rendering: clusters are swapped on the fly at varying levels of detail based on the camera view, and connect perfectly without cracks to neighboring clusters within the same object. Data is streamed in on demand so that only visible detail needs to reside in memory. Nanite runs in its own rendering pass that completely bypasses traditional draw calls. Visualization modes can be used to inspect the Nanite pipeline. "
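Just to illustrate the "swapped on the fly at varying levels of detail" part, a toy selection sketch; nothing like Nanite's real implementation, purely the concept:

```python
def pick_cluster_level(level_errors, pixel_size):
    """level_errors: geometric error (world units) per LOD level of one
    cluster, ordered coarse -> fine. Pick the coarsest level whose error
    would project to less than one pixel on screen."""
    for level, err in enumerate(level_errors):
        if err <= pixel_size:
            return level
    return len(level_errors) - 1   # nothing is sub-pixel: use the finest level

# A cluster with 4 pre-built levels, viewed where one pixel covers ~3 cm:
print(pick_cluster_level([0.5, 0.1, 0.02, 0.004], pixel_size=0.03))  # -> 2
```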
Interesting, that makes sense. No wonder it's way more performant than this crude version haha
decimate modifier
I did this decades ago with the BGE in Blender; it's just too slow. Also, Nanite works like voxels but on the GPU: if you lower the quality in Nanite you see the literal voxels in real time, and a photoscanned rock looks like a Minecraft rock.
Tim Sweeney HATES this man - find out why in this video
😆
👌
So Nanite basically auto-creates LODs instead of you manually making LODs
Basically! There's some contention amongst some devs I've spoken to over whether or not Nanite is properly optimized for this purpose in Unreal - but that's the concept of it, yes. Pretty cool though - especially if you hate making LODs 😅
@ I hear in some cases LODs increase file size and reduce performance, so Nanite can be an alternative method.
Still, it takes some power for the decimation process in exchange for a reduced memory load.
Yeah, potentially some up-front cost in exchange for render efficiency. Same for LODs I suppose: an up-front cost (or a space cost in a game's case) for better performance
You are a blender unreal engine bridge.
The purpose of nanite is to speed up work. The purpose of this sh&T is to choke your PC fast. lol
So basically you remade microdisplacement!
P.S: No, i am actually wrong! This would complement microdisplacement really well! Since microdisplacement only adds more detail to close meshes, but does not decimate far meshes. Also Microdisplacement is reliant on a displacement map!
Yeah I get what you mean! Same same but different 😁
this isn't nanite, obviously.
💯😆
That's absolutely not anything like Nanite... it's just a basic decimator trick that can't meet real-situation needs.
How would it cope within a 500 million poly scene? It wouldn't, because it is not designed to work with the high density data where the optimization would make sense.
If you render an animated version of the rock with the cam moving, how stable would it look frame to frame? It would be jagged so much it'd be unwatchable.
@@rocksquared you're 100% right. It's nowhere close to what proper Nanite would be capable of, that's way above my pay grade😅. In terms of a render, since it's offline rendering it wouldn't be noticeable in the final. Although with super dense scenes I'd always recommend optimising and applying modifiers tailored to the shot's needs.
Sorry if I got your hopes up only to let you down friend! Maybe I shall tweak the title to avoid a similar experience for others
@@jamescombridgeart You replied with such sweetness and humbleness. I loved your video. Thanks for the video and keep posting these types of tips and tricks for Blender and also for UE5 if possible ❤
millennials invent LODs from the 90s
😆💯
It's fun to try, but I think it's misleading. This isn't Nanite, and it's not really useful; we could replicate it in other software, and it's not even new - we could do this decades ago. But what's the point? You lose UVs, and the frame rate takes a hit because constantly merging by distance affects performance more than using high-poly geometry. Plus, the results aren't predictable either.
Yeah I'm starting to realise this! Thank you for the feedback. ♥️
I know this is unrelated but give the Quran a read, also I've always wondered if this is possible
Me too! I hope they make something more official like nanite eventually that speeds up scenes in real time
Why does this make you bring up the Quran??
Don't bring religion into this, moron
Why do people bring religion into every single thing
@satyamanu2211 I don't understand...