It is indeed a Tuesday
It is indeed another Tuesday
No, it's a Friday
This was my first project in rust a year ago, I struggled with it like hell, fun times.
I wrote a CPU raytracer (similar to this one) as my first Rust project, but more real-time-ish, running at 40 fps multi-threaded. It was a good learning experience with Rust.
PPM was common as an X Windows icon format in the 1990s. Using something like TGA or TIF is probably easier as a "trivial to output without a library" format.
PPM has an ASCII text version, which is very easy to implement and debug in a text editor.
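For anyone curious, here is a minimal sketch of writing that plain-text P3 variant in Rust; the dimensions and the gradient are just placeholder values so the output has something visible in it:

```rust
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Example dimensions; the plain-text P3 variant needs nothing beyond text output.
    let (width, height) = (256, 256);
    let mut out = std::io::BufWriter::new(std::fs::File::create("out.ppm")?);

    // Header: magic number, dimensions, maximum color value.
    writeln!(out, "P3\n{width} {height}\n255")?;

    for y in 0..height {
        for x in 0..width {
            // Simple gradient so there is something to look at in a text editor.
            let r = (255 * x / (width - 1)) as u8;
            let g = (255 * y / (height - 1)) as u8;
            let b = 64u8;
            writeln!(out, "{r} {g} {b}")?;
        }
    }
    Ok(())
}
```

Opening the resulting out.ppm in a text editor shows the header followed by one pixel per line, which is what makes the format so handy for debugging.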
TGA is simple, yes. TIFF is a bit harder... to output. To read it's a nightmare: so many variants.
The timing of this video for me couldn't be better, I just finished doing book 1 in Rust a couple days ago, and it's really interesting to see your approach to translating the C++. Yours is definitely "rustier" than mine with more use of options and iterators, and at the same time some things turned out really similar, especially the use of traits. The whole point of doing this for me was to get more familiar with Rust, as I've done book 1 in the past in C++, and I think it was a really good exercise, but having yours to reference actually makes it even more valuable as I can see where I could have made more use of Rust's features. Thanks for sharing!
Really great and honest walkthrough. I appreciated the Rust shortcuts and comparisons with C++.
I did this project for a super computing course. We implemented it in C++ with MPI for the cross-communication and did some empirical studies comparing our own message-passing methods to that of MPI's built-in methods (which are optimized and topologically aware). No surprise, we got beat by quite a lot by MPI's own implementation. At most we ran a render of a 5k image on 4096 cores, which took something like ~1 second - crazy. We did a lot of samples per pixel and bounces too, but I don't remember quite how many.
Oh that sounds like a load of fun. 4096 cores is mind boggling.
This was an excellent walkthrough. Thanks for sharing!
Great video, throwback to the things I learned in my graphics course 🌎
cool to see how much the Rust ecosystem provides
We need a video series of this 🎉
I probably won't do a walkthrough on this exact series in particular. If I do a series it will likely be a workshop on Rust Adventure with a different approach to building a Raytracer. There's a lot in the Raytracing in a Weekend series that can be dropped or approached differently due to not using C++.
I did this back in the day. Should probably revisit it some time. Fun project.
Damn, this is definitely a full-blown course on ray tracing in Rust.
This is impressive stuff. I once did a toy pathtracer in C++, then ported it to Digital Mars D. It was 10 times as slow. The culprit was dynamic allocation everywhere, and not knowing when that actually takes place. I'm sure your Rust tracer is just as fast as the C++ one?
This honestly felt more like bashing C++ rather than focusing on ray tracing. Nevertheless, nice to see a different implementation. It didn't hinder me throughout the video.
A few points:
- The math optimization early on results in fewer assembly instructions and thus a faster raytracer. This is explained in the text around it ;)
- C++ and Rust have a lot in common, but also a lot of differences. People use libraries a lot in C++, so there honestly is no reason to add certain functionality to the language if very decent libraries already exist. Rust didn't have those, so it built the functionality in before there was even a good library. Apples and oranges. Neither one is better because of it.
11:51 Yeah, this is why I want enums to be able to put trait bounds on the types they can store, which would obviate the need for match expressions in methods that every variant is guaranteed to have.
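To make the complaint concrete, here is a rough Rust sketch of that forwarding boilerplate (the names are made up for illustration, not taken from the video): every variant already implements the trait, but the enum can't express that bound, so each trait method still needs a match that just delegates.

```rust
// Illustrative only: the kind of forwarding boilerplate the comment is about.
trait Hit {
    fn hit(&self, t: f64) -> bool;
}

struct Sphere { radius: f64 }
struct Plane { height: f64 }

impl Hit for Sphere {
    fn hit(&self, t: f64) -> bool { t < self.radius }
}
impl Hit for Plane {
    fn hit(&self, t: f64) -> bool { t > self.height }
}

// Every variant implements Hit, but the enum can't state that as a bound,
// so each shared method ends up as a match that only forwards the call.
enum Object {
    Sphere(Sphere),
    Plane(Plane),
}

impl Hit for Object {
    fn hit(&self, t: f64) -> bool {
        match self {
            Object::Sphere(s) => s.hit(t),
            Object::Plane(p) => p.hit(t),
        }
    }
}

fn main() {
    let objects = vec![
        Object::Sphere(Sphere { radius: 1.0 }),
        Object::Plane(Plane { height: 0.0 }),
    ];
    // Callers just use the trait method; the match above does the dispatch.
    for o in &objects {
        println!("{}", o.hit(0.5));
    }
}
```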
Literally released on a Tuesday...
This seems to be running awfully slowly compared to other implementations. There is a video by The Cherno from a year ago, which I have watched previously, where he reviews an implementation of the code from the Raytracing in a Weekend book. I don't know how much of it you can apply to Rust, as it's obviously a C++ code review. Options in Rust are close to zero cost from what I know, so you can probably ignore that part. The most applicable parts of the video are probably removing the branching and making sure that everything is in contiguous memory. The video is called "I made it FASTER // Code Review". Another thing you might want to try is compiling with the "-C target-cpu=native" flag so the Rust compiler can use SIMD optimizations, but you might already be doing that.
To be clear, as I said in the video, I definitely think of this as a slow implementation and by no means a production-level raytracer.
As far as I can tell, the Raytracing in a Weekend series doesn't track performance *at all* or mention it in any of the material. So stack that on top of having not written a raytracing renderer before, and not knowing where the series was headed before going through it, and you get slow output. You can't optimize code when that optimization may remove functionality you have to depend on in a future section.
The Cherno spends that whole video painfully asking "Why is this so slow?", but the answer is that the series doesn't consider performance at all, seemingly intentionally. It's a learning tool that focuses on raytracing concepts, not performance. No hate to them, but for me the video didn't help at all. It seemed to boil down to "what if we didn't follow Raytracing in One Weekend and did something else instead", combined with "this is a bug in the C compiler"... which unfortunately isn't useful for me because I'm not writing C.
It's worth noting in particular that the runtime differs vastly between my primary working computer (an M1 Pro Max) and my PC (7950X), which isn't featured in the video. A 24s workload on my Mac runs in less than 10 seconds on my PC. I am *guessing* that this is due to SIMD, but I haven't gone further than running the program to see what happens and noting that there's an open PR for aarch64/neon in the underlying vec3 implementation I used -- github.com/bitshifter/glam-rs/pull/379
I also tried your suggestion to use target-cpu=native, but it doesn't make a difference on either platform. Worth noting here that glam indicates that "SSE2 is enabled by default on x86_64 targets." -- github.com/bitshifter/glam-rs#enabling-simd
So anyway, to your point: the runtime of my "final scene" (which is what's being optimized in that video), running on a comparable platform, is within 20 seconds of the optimized C++ version. Is my version slower? absolutely. Is it "awfully slowly compared to other implementations"? Only if you don't run it on comparable platforms.
This is very cool. I've done some Coding Train exercises in Rust with SDL2. Perhaps I should try this. Thanks for sharing.
I just bookmarked two books from Jamis Buck for a potential Rust-driven run. One of them is coincidentally called 'The Ray Tracer Challenge' and the other is called 'Mazes for Programmers'.
I just read through The Ray Tracer Challenge this week. I liked that it was presented in a mostly language-independent way with pseudocode.
Hell yeah long form content lets goooo
27:30 rayon (leaving this for my future self)
Are you working furiously on a Bevy tutorial for ex-Unity devs?
haha, I don't think I have enough Unity experience to make that. I have some Bevy game workshops and videos but nothing from a "unity perspective".
I was planning to do the same thing and this video popped...
Rust is also used by developers on NEAR, can you share your opinion on this?
Hi, what do you think about coding in web3?
Did you have any issue with the blue gradient part when implementing the rays? I decided to try this myself and I'm currently baffled as to why the ppm image is incorrect. I even downloaded the raw file of your implementation from your history to check to see if I wasn't crazy and it seems that your code produced the same issue.
Edit: Nevermind! I worked it out. It was because there was no casting to u32 for the colours.
I'm not sure what issue you ran into (hard to tell from this description) but I'm happy you figured it out.
I don't remember needing to cast the colors to a u32. Each component (r,g,b) should end up as a u8 (a number from 0-255) and there is some clamping later on in the series to guarantee this.
The output I produced is here, which doesn't seem like it has any issues compared to the series' example output. github.com/rust-adventure/raytracing-in-one-weekend/blob/a565404b86143615106a618517b3dccfa109cfcc/outputs/blue-gradient.png
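For reference, a minimal sketch of that clamp-and-convert step, assuming color components land in the 0.0 to 1.0 range (the function name and values here are illustrative, not from the repo):

```rust
// Convert a floating-point color component (expected in 0.0..=1.0) to a 0-255 byte.
// No u32 cast needed; clamping keeps out-of-range samples from wrapping.
fn to_byte(component: f64) -> u8 {
    (component.clamp(0.0, 0.999) * 256.0) as u8
}

fn main() {
    let (r, g, b) = (0.5_f64, 0.7, 1.0);
    println!("{} {} {}", to_byte(r), to_byte(g), to_byte(b));
}
```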
@chrisbiscardi yeah, it does seem weird. I pulled your code for the blue gradient to check, but for some reason, I got the same output. I'm not sure if it's a Rust version difference. I think I'm on Nightly.
I tried it 2 years ago and failed about a third of the way through the course because the code was in C++ and I don't know it. C++ code is hard to read if you aren't familiar with it, but Rust is readable even without that familiarity.
The C++ was definitely tough to translate at times, and there were a lot of "C++ isms" like instantiating empty structs or mutating arbitrary arguments that felt really awkward.
I definitely think it would be interesting to have a rust-forward version of the series.
There is a Rust version of the book. I followed it and made a rudimentary ray tracer, but also implemented a display of how it's going. I've uploaded it to GitHub, and used rayon so it runs a lot faster.
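As a rough illustration of the usual rayon change (assuming rayon is added to Cargo.toml; render_pixel here is a stand-in for the real per-pixel work, not code from that repo), the idea is to swap the outer loop for a parallel iterator and let rayon spread the rows across cores:

```rust
use rayon::prelude::*;

// Stand-in for the real per-pixel work (ray generation, scattering, etc.).
fn render_pixel(x: u32, y: u32) -> (u8, u8, u8) {
    ((x % 256) as u8, (y % 256) as u8, 128)
}

fn main() {
    let (width, height) = (400u32, 225u32);

    // Rows are independent, so replacing the sequential outer loop with
    // into_par_iter() is usually the whole change.
    let pixels: Vec<(u8, u8, u8)> = (0..height)
        .into_par_iter()
        .flat_map_iter(|y| (0..width).map(move |x| render_pixel(x, y)))
        .collect();

    assert_eq!(pixels.len(), (width * height) as usize);
}
```

Because each row is pure computation with no shared mutable state, this parallelizes cleanly without any locking.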
I've been fiddling with raytracing in wgpu for some time, and boy do I need a tutorial on raytracing in compute shaders. The fragment shaders are too slow and the wgpu documentation is garbage.
I'm going to make a wgpu version of this, and I'm planning to use compute shaders. I don't have any timeline for that video though.
My advice is: prepare to debug! Get the Vulkan SDK, which includes glslangValidator, and get RenderDoc to see the shader at runtime. Also, you should make a "debug buffer" if you're building something complex. Oh, and don't even touch WGSL, it's a mess right now. Stick to GLSL; I've tried HLSL > SPIR-V and it just won't take it for some reason.
PPM and PGM are the poor man's formats, but so nice if you don't want to include complicated image-format codecs.