You are one of the very few people sharing content related to voxel engine development, thank you!
Your audio sounds much nicer now! I'm so glad to have discovered your channel so early on, I think you're going to see a lot of growth in the near future. Top-tier content!
Woah, this is actually a really cool project. I have seen many voxel engines, but this one seems to be one of the best. I hope you will go far with this; it has huge potential!
this engine is coming along really well! i'm excited to see the rest of the stuff you do with it in the future!
By the way, you can use pyramids instead of min blending to exploit the Z-buffer and store which side the closest voxel is on by coloring the triangles, so you're not bound to cubes, but can use rectangular boxes.
This isn't exactly my area of expertise, so apologies if this is inaccurate, but there might be another optimisation for raymarched rendering that you can use. By calculating a low-res depth pass before firing your rays, you can start the rays at each pixel's low-res depth value instead of firing them from the camera, which should reduce the number of steps each ray has to travel by a significant amount.
Excellent suggestion - I actually already do this! If you go to the second to last clip in the video, you'll see a black and white image that represents the number of ray steps taken per pixel. You'll notice that the image is divided into 16x16 little squares - this is the result of a low-res depth pass.
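For anyone curious what that prepass looks like in practice, here is a minimal CPU-side sketch in Rust. This is my own illustration, not the engine's actual code - the tile size, function names, and flat-slice layout are all assumptions. The key point is that the per-tile depth must be conservative (a minimum over a neighborhood of coarse samples), so a full-res ray can never start behind a surface:

```rust
/// One depth value per 16x16 screen tile comes from the coarse pass.
/// Take the min over the 3x3 neighborhood of coarse tiles so a full-res
/// ray can never start *behind* a surface that a neighboring coarse ray hit.
pub fn conservative_tile_depth(coarse: &[f32], tw: usize, th: usize, tx: usize, ty: usize) -> f32 {
    let mut d = f32::INFINITY;
    for ny in ty.saturating_sub(1)..=(ty + 1).min(th - 1) {
        for nx in tx.saturating_sub(1)..=(tx + 1).min(tw - 1) {
            d = d.min(coarse[ny * tw + nx]);
        }
    }
    d
}

/// A full-res ray then begins `d` units along its (normalized) direction
/// instead of at the camera origin.
pub fn ray_origin(cam: [f32; 3], dir: [f32; 3], d: f32) -> [f32; 3] {
    [cam[0] + dir[0] * d, cam[1] + dir[1] * d, cam[2] + dir[2] * d]
}
```

Starting each ray at `ray_origin(cam, dir, d)` rather than at the camera means it only has to march the last few voxels before the surface, which is why the step-count visualization shows such low counts inside each tile.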
Super incredible stuff you are doing here. GPU optimization and rendering is something else, and I think everyone who has had a first look at that topic knows how hard it is. Keep it up!
I'm wondering how you got started with all this, and if there are any tips you can give a beginner.
Thanks for watching! I first got started programming games years ago, in Unity 3D. As I became more comfortable with concepts like networking, shaders, and the like, I began to pursue projects outside of Unity. Eventually, this led me to go through the OpenGL tutorial, and then I started experimenting with voxels in OpenGL. Out of all this, I would say that my biggest recommendation would be to pick a hobby project and get started on it! There's no better way to learn than to learn by doing. If you're unsure of where to start, then following Unity or OpenGL tutorials might be a good choice.
Omg, the audio turnaround. Amazing
the spooky-looking output of the DFs is really beautiful, I would love to see a voxel game with a softer look based on that
I didn't spend too much time with it, but what I *think* I remember doing to trace was very very silly and definitely wasn't a full optimization attempt.
I believe I stored the Manhattan distance to every voxel, which I computed in O(3*n^3*k) since I'd just do a 3-pass 1D search.
Then, I used that Manhattan distance to do normal sphere tracing, but my spheres then would have to have a radius of the shortest diagonal of the discretized octahedron shape a Manhattan distance represents.
Then, once the Manhattan distance dropped below some small value (I think it was 3), I would switch to normal DDA.
Even with this approach, raytracing was really, really fast, and I'm really excited to see how you approach reducing the computational complexity of updating the DF.
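For readers who want to see the "3-pass 1D search" concretely, here is a hedged Rust sketch of a separable Manhattan (L1) distance transform: one forward and one backward relaxation sweep per axis over a cubic occupancy grid. The grid layout and names are illustrative, not taken from either project:

```rust
// Large sentinel for "no solid voxel found yet"; half of u32::MAX so that
// the +1 in the sweeps cannot overflow.
const BIG: u32 = u32::MAX / 2;

/// `solid[(z*n + y)*n + x]` marks occupied voxels in an n^3 grid.
/// Returns the exact L1 (Manhattan) distance to the nearest solid voxel.
pub fn manhattan_df(solid: &[bool], n: usize) -> Vec<u32> {
    let idx = |x: usize, y: usize, z: usize| (z * n + y) * n + x;
    let mut d: Vec<u32> = solid.iter().map(|&s| if s { 0 } else { BIG }).collect();
    // One forward + backward sweep per axis; each sweep relaxes along one
    // dimension, and the composition is exact for the L1 metric.
    for axis in 0..3 {
        for a in 0..n {
            for b in 0..n {
                let at = |i: usize| match axis {
                    0 => idx(i, a, b),
                    1 => idx(a, i, b),
                    _ => idx(a, b, i),
                };
                for i in 1..n {
                    d[at(i)] = d[at(i)].min(d[at(i - 1)] + 1);
                }
                for i in (0..n - 1).rev() {
                    d[at(i)] = d[at(i)].min(d[at(i + 1)] + 1);
                }
            }
        }
    }
    d
}
```

One nice property for the sphere-tracing step mentioned above: a voxel with L1 distance d is guaranteed an empty Euclidean sphere of radius d/√3 (the inscribed sphere of the L1 octahedron), which matches the "shortest diagonal" step-size scaling the comment describes.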
I do appreciate you sharing your challenges and solutions. It's sad that your distance field structure was not an improvement over your existing structure. It was worth a try, I hope. I once spent 3 weeks on an optimization of a lightmap calculation that should have reduced the number of calculations by up to 50%, by storing and reusing redundant points. Sadly, the overhead to gather all of the shared data points was more than simply redundantly recalculating them. I had no idea that that would be the case. Oh well.
Anyways, I LOVE seeing people make ingenious uses of GPUs. That alone is so kool!
This is insane. Though most of it is over my head, it's super interesting.
Excellent work. I hope you take the multi-threading and optimization step, for a real head-to-head with parallax ray marching. Keep up the good work!
Your mic quality improved a lot this episode, but I would still recommend some EQ or noise reduction. Like others have said, your channel/work is awesome and I'm always excited to see more.
Thanks for watching! Ironically, I'm using the same mic as before. The difference is, this time, I didn't apply a noise reduction pass in Audacity while editing the video. That seems to have been the culprit for the clipping audio :)
"30 seconds is completely unacceptable.." *laughs in 20+ hour production engine build*
This is just a suggestion, but you could try using the rasterized depth to accelerate the tracing process.
Basically, rasterize up to each of the visible faces of each voxel "chunk" and store the resulting depth textures.
When tracing, use a conservative line raster algorithm to iterate through every voxel in the chunk and utilize the textures as a LUT, sampling one pixel from each. Take the minimum of each sample and check if you are "inside" a voxel. If you are, abort the trace and return the voxel coordinate for shading.
Of course, if you do use this technique, there are some limitations. You cannot render any overhangs in any direction (which, luckily, are rather rare anyway). A solution would be to either split the "chunk" to get rid of the overhang or use a different rendering technique in another renderpass.
Another limitation is that you cannot render any transparent voxels, similarly to how deferred pipelines fail in these cases. A possible solution would be to use depth peeling or something along the lines of deferred+.
The biggest performance boost with this technique originates from another source, though: MIP-mapping. You can use MIP levels to perform coarse depth checks before iterating deeper, analogous to a quadtree. This allows you to skip entire chunks in just *3-5 samples* if your ray will not hit any voxels.
If you have any inquiries about this technique or any other voxel rendering techniques, feel free to message me on Discord (RedNicStone#8720). By the way, I'm the same guy that was hanging out on the project's GitHub.
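Here is a rough sketch of the MIP/"quadtree of depths" part of this idea, since it is the piece that generalizes best. This is my own hedged illustration (a hierarchical-Z style min-depth pyramid), not RedNicStone's actual implementation, and it assumes a square, power-of-two depth map stored as flat slices:

```rust
/// Build mip levels of a square depth map; each coarser texel holds the
/// *minimum* (closest) depth of its 2x2 block, so a coarse test is always
/// conservative.
pub fn build_min_depth_mips(base: Vec<f32>, size: usize) -> Vec<Vec<f32>> {
    let mut mips = vec![base];
    let mut s = size;
    while s > 1 {
        let prev = mips.last().unwrap();
        let half = s / 2;
        let mut next = vec![f32::INFINITY; half * half];
        for y in 0..half {
            for x in 0..half {
                for (dy, dx) in [(0, 0), (0, 1), (1, 0), (1, 1)] {
                    next[y * half + x] =
                        next[y * half + x].min(prev[(2 * y + dy) * s + 2 * x + dx]);
                }
            }
        }
        mips.push(next);
        s = half;
    }
    mips
}

/// True if a ray segment whose farthest depth is `ray_max_depth` can skip
/// the entire mip-`level` tile containing pixel (x, y): even the closest
/// surface in that tile lies beyond the segment.
pub fn can_skip_tile(mips: &[Vec<f32>], size: usize, level: usize,
                     x: usize, y: usize, ray_max_depth: f32) -> bool {
    let s = size >> level;
    mips[level][(y >> level) * s + (x >> level)] > ray_max_depth
}
```

A trace walks the pyramid top-down: whenever `can_skip_tile` succeeds at a coarse level, the whole tile is rejected in one sample; otherwise it descends one level. That descent pattern is what makes the quoted "3-5 samples per empty chunk" figure plausible.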
sounds like a good idea
Hey RedNicStone, thanks for this comment! I fear it kind of got buried and I wasn't able to respond to it until now - sorry about that. I would love to hear more about your proposed technique! Would it be possible for you to post a link to any resources describing this approach here? I think it would be helpful for other viewers to see in addition to myself.
Hmmm, using parallax marching could require some tricks when you need to render two pictures at the same time. Awaiting a port for a VR headset😃
personally i think i will go with a full raytracing and df optimization path since i feel like the simplicity in that will pay off later and avoid me getting stuck
it’s wonderful that your parallax ray marching approach is more performant and i think you are uniquely suited to benefit from it since you came up with it :p
for example you were able to get around the depth buffer problem which i know would have gotten me stuck with that approach
thank you for these videos and the detail, it’s really awesome.
i love that you were willing to abandon your beloved rendering method if you found a better option with no hesitation, that’s cool to see
Great insight in your thought process and conclusions! I'm working on a very similar voxel renderer, with 8x8x8 cubes rendered in the rasterizer!
I was just thinking, would it be feasible to have distance information in these 8x8x8 chunks for the empty voxels? In order for the DDA algorithm to skip ahead?
I guess it would use more data, but maybe it's worth a try!
It absolutely is possible (but more computation, of course). I probably won't apply this to my parallax ray marching work, though. This is because, in my renderer, the parallax boxes are "tight" - that is, the program shrinks them down so that there is minimal empty space, although they can be a maximum of 8x8x8 voxels. As such, with most boxes, the rays start close to the actual voxel surface and only perform 2 or 3 steps. This setup wouldn't benefit from a DF, because there's not much empty space to traverse.
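For completeness, here is what the suggested DF-accelerated traversal could look like in a renderer where it does pay off. This is a hypothetical sketch, not code from either project: it assumes unit-sized voxels, a normalized ray direction, and that each cell stores a Chebyshev distance (in voxels) to the nearest solid voxel, and it falls back to a small fixed step near the surface where a real implementation would switch to exact DDA:

```rust
/// March through an n^3 brick whose cells store Chebyshev distance to the
/// nearest solid voxel (0 = solid). Returns the hit voxel, or None on exit.
pub fn trace_with_df(dist: &[u8], n: usize, origin: [f32; 3], dir: [f32; 3])
    -> Option<[i32; 3]>
{
    let idx = |v: [i32; 3]| (v[2] as usize * n + v[1] as usize) * n + v[0] as usize;
    let mut t = 0.0f32;
    loop {
        let p = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
        let v = [p[0].floor() as i32, p[1].floor() as i32, p[2].floor() as i32];
        if v.iter().any(|&c| c < 0 || c >= n as i32) {
            return None; // ray left the brick without hitting anything
        }
        let d = dist[idx(v)];
        if d == 0 {
            return Some(v); // inside a solid voxel
        }
        // Safe leap: distance d means every voxel within d-1 cells (Chebyshev)
        // is empty, and moving d-1 units along a unit direction changes each
        // coordinate by at most d-1. Near the surface (d == 1) a real
        // implementation would take one exact DDA step; we approximate that
        // with a small fixed step to keep the sketch short.
        t += (d as f32 - 1.0).max(0.25);
    }
}
```

The payoff only appears when bricks contain long empty runs; as noted above, with tight parallax boxes the ray is already 2-3 steps from the surface, so the leap rarely triggers.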
@@DouglasDwyer that makes a lot of sense, I'll implement that too! Thanks for the insight!
think a video on the cpu algorithms would help my project out so much
Dude nice, keep up the good work.
Very good explanation, bro. I'm also working on SDF generation on the GPU, for fonts.
I tried this on the same smartphone as before, but unfortunately it now crashes during what appears to be the chunk generation step
hope you can turn this into a nice game :) feels as fresh as minecraft did 12 years ago, and nowadays it's much rarer to see someone try to make the most out of their hardware. rooting for you 💜
New mic hype
Here yall marching on rays or smthn and I'm here trying to pronounce PARALLELISABLE
Cool stuff! I've been messing around with some Rust voxel shenanigans too, although I decided to use Jump Flooding to generate my distance fields on the GPU. I'm hoping that I'll be able to do some nice path traced lighting. Does your parallax raymarching method support doing multiple bounces?
The method I call parallax ray marching actually is a ray marching-rasterization hybrid technique. The technique is still projecting voxels onto a viewport, so it's not possible to cast arbitrary rays (and support photorealistic lighting, as you describe). The main benefit of parallax marching is that it's fast on all hardware :)
I'm a bit confused, in your last video you mentioned opengl, did you ditch vulkan when abandoning the compute shader and when refactoring for rust?
and can this kind of performance be achieved with just opengl?
That is correct, I did leave Vulkan for OpenGL. I was forced to do this because I want to support the web as a platform, which has only WebGL as a graphics API. OpenGL is definitely a more limited API, and you don't get the very latest features (like SPIR-V or compute shaders) when you use the version that I'm supporting. However, the bottleneck in my application is the actual GPU, rather than the API or driver overhead. So the performance ends up being similar - in the end, the graphics card ends up doing more or less the same tasks :)
This is going to be revolutionary, even greater than Minecraft (if you work really hard on it)
WOOO YEAH BABY!
When is the promised "transparency, lighting and physics" video going to be released?
Soon - although those will probably need to be three separate videos! I should have more time to work on the engine over winter break. As I rewrite my engine, I will be reorganizing the parallax ray marching code to be able to render transparent things.
This is pogeranian (portmanteau of poggers and pomeranian).
Oh, a Rust game engine youtuber! I started working on something similar (I mean, I started to learn Rust and figured out my pet project will be a game engine). Wish me luck!
Epic! I'd love to hear about your project when it's completed :)
I can't remember if you've mentioned it before, but is rust faster than vulkan?
Rust and Vulkan are two slightly different things, and both can be utilized together! Rust is of course a programming language, while Vulkan is a language-agnostic graphics API. The switch I made between APIs in episode four was from C# to Rust and Vulkan to OpenGL.
With regards to programming languages, Rust does end up being faster than C#, because Rust is a lower-level language without a runtime or garbage collector. Coupled with its rich type system and strong compile-time guarantees, this makes it an ideal choice for game development.
As for Vulkan, it is a more modern graphics API than OpenGL, and using it gives one access to newer features, like SPIR-V, compute shaders, and conservative depth. As such, if utilized correctly, one might be able to gain performance improvements over OpenGL. For most graphics applications, however, either one should be usable with similar performance - usually the bottleneck is the actual GPU rather than the API and drivers. But Vulkan would be the preferred API, if I could use it! To support the web, I am stuck with OpenGL.
@@DouglasDwyer Interesting take! It's a shame Vulkan isn't compatible with web stuff - we can only hope that in the future, it might be.
the distance fields can be a voxel/octree grid themselves
sort distances to entity sphere boundary volumes
approximate the DFs; they don't need to be accurate, i.e. a fixed-grid distance field
make the DF update in real time
Great video! As someone who exclusively writes Python for a living, I'll admit a lot of the low-level CS stuff goes straight over my head, but the videos are fascinating nonetheless and I love seeing the progress you're making. There's something deeply satisfying about getting an algorithm to run much faster through clever optimizations.
If someone like me who has only ever used high-level scripting languages wanted to get into low-level graphics programming, are there any resources or primers you'd recommend? I never got a formal CS education and find a lot of the talk about compilers and memory allocation and buffers and registers and borrow checkers to be rather intimidating... but I've been itching to dive in nonetheless! I'd hugely appreciate any recommendations.
I would say that the best advice I have is to just dive in and make something cool. The majority of my knowledge in CS has come from independent research and building lots of hobby projects - learning by doing, if you will. That said, if you're looking for a good place to start in game development or computer graphics, I would try Unity 3D or the OpenGL tutorial. Both are great resources that enable you to make some interesting things even when you're first starting out :)
good choice coding with rust, It's a great language
and I think it's going to grow a lot more, maybe fully replace c one day...? who knows
nice ;>
wooow
Ive subbed for the algo vid. PLEASE UPLOAD IT (if you have the time).
Thanks! Subbed!
12:00 That's why I don't choose Rust for large projects. It's so slow.
What do you choose?
Can tell you're reading from a pre-written script you made. Can't really blame ya, most of us programmers lack social skills, and so we code the audio document ahead of time.
If you ever feel like it though, you could do a point form overview approach, read and just talk and discuss.. But then again explaining the details could lead to more time in a video, and people getting sidetracked. The video is a year old, and you probably changed format by now.
Veering off the planned and optimal time format may lead to some mistakes or hiccups.. But y'know what? We might wanna have that journey with ya.
Haha yeah, trying to make more extemporaneous videos is something that I'm working on!
@@DouglasDwyer keep it up! You're doing good.
Great video but please fix your mic and stop talking like you’re a professor lecturing a class
first