Thank y'all for watching! ❤️ Do you want to see more optimization/performance stuff? Also don't forget that the first 1,000 people to use this link will get a 1 month free trial of Skillshare: skl.sh/thecherno12211
Just a couple of comments:
- I am puzzled by the slowdown in accumulate! I remember testing it against a raw for loop in simpler use cases and it even turned out a few % faster. Granted, that was on gcc, so it could be MSVC-related, but somehow that would be equally surprising :\
- I don't like the whole 100% object-oriented theme of the project either, but I decided to follow the original book in that regard because I wanted to do everything in a weekend, and changing the whole architecture would have taken way longer :)
Glad you saw it! Was about to send you an email asking if you had these performance issues as well, I had a feeling it might be compiler/library-related. Thanks for sending in your code!
@Harry Byrne Well, the goal was to follow the original book and I wanted to be done within the weekend! So this was my attempt to get familiar with the topic in a reasonable amount of time :) I'd like to solve the big issues first (design a better scene model, get rid of all the unnecessary hierarchy and so on...) before getting on the GPU, which is a completely unexplored land for me!
@@riidefi1575 An unwanted copy of the shared_ptr would explain a lot (with the control block's refcount being updated for every ptr in the container...) But the lambda takes the arguments as const&, so I think it's up to the library implementer to make sure things are forwarded correctly to the function, right? Maybe this is why I never had problems with the gcc and clang libraries... definitely gonna check this out later on
Fun fact! std::accumulate actually did have a bug in it that was fixed in C++20! (See the #if _HAS_CXX20 clause at 24:59). std::accumulate was originally meant to accumulate small objects, so it passes things by value. As of C++20 it moves the values since they could be large. I'd be curious to see the performance improvement just by compiling with C++20! As a side note, I'd say that this operation, finding the closest object, doesn't really accumulate anything, so although accumulate can do the job, I would argue that a for loop expresses the intent better anyway.
As someone who does FP a lot, I feel like it does accumulate: it accumulates the minimum distance for a ray bounce. But maybe I'm just mind-poisoned by HOFs?
@@scoreunder Hahah I wouldn't say you're _poisoned_ by functional programming, perhaps enlightened? Of course, with higher-order functions you can build very unexpected/general things out of some basic building blocks. But from a more grounded perspective, it seems unlikely that a highly optimized algorithm could remain optimized for an arbitrary function as input. Personally, I find that if the lambda you pass in is larger than the algorithm itself, it is highly likely there's a performance hit waiting for you on the other end. I most often prioritize readability, and I also find that when the intent of something becomes unclear (perhaps a _clever_ lambda passed to an algorithm), the optimizer will not do very well either. Don't get me wrong though, I'd love for everyone to take the opportunity to learn some category theory.
Why std::accumulate at all if you want to do std::for_each?? Typical „I don’t understand the new stuff, so I stick to the old“. He even thinks „range based loop“ is different from „old style for loop“. If so, he should get a working compiler.
@@fromgermany271 I may be misunderstanding what you mean, but several modern C++ books (notably Accelerated C++) stress the semantic value of std::accumulate. In other words, std::for_each doesn't tell us the intent of the code; std::accumulate does (it's a summation).
@@VastyVastyVoid I meant: he has a problem with std::accumulate and says „too slow" and „classic loop is faster". I say accumulate builds a „sum", which does not really fit „parallel". But I cannot see why a „sum" should be needed here at all. In general, saying „I don't understand something in the stdlib, so I just fall back to C-style" is not uncommon, but it's not appropriate for talks.
I for one would love to see more optimisation stuff. I'm currently in the process of learning how to write high performance code, so learning how to self evaluate a program's performance via either code inspection or using tools to develop metrics will be huge for me. That said, I'm just happy to see a Cherno video in my sub box either way, so thanks for taking the time to create and upload this video.
Btw, a really nice trick that helps is to use a very slow chip that restricts you a lot, for example an Arduino, where you get hit hard if you code inefficiently.
A big part of optimization is similar to this video, finding slow parts, and cutting out unnecessary overhead. Giving the CPU less work. That and keeping cache sizes and layout in mind, which is easier to consider if using simpler methods with less abstraction. Modern desktop CPUs tend to max out at 32MB L3 cache, but it is better to try to stay within 8MB.
It's mostly what you see here - use a profiler, because often there's something bad but hidden somewhere non-obvious. (It's often not worth even guessing like he did, start with the profiler to avoid biases.) Avoid unneeded copies. Avoid replicating the same expensive work. And in C/Rust, prefer stack over heap. It's much much further down the line that it makes sense to start worrying in more detail about cache implementation details. But don't forget readability is critical. Your future self will appreciate it.
For the slowness of `std::accumulate` I'm sure it's down to the copy of the `std::shared_ptr` inside the HitRecord, which do atomic operations to keep the refcount up to date. In C++20, it's not copying it anymore. The best would be to not use shared pointer at all.
That's what I thought too, and I made a little proof myself by cheating and temporarily making it a plain pointer; then std::accumulate was still fine (C++17), and with C++20 and std::move() on it, even better.
@@Narblo It's not that, it's really lots of "lock xadd" (atomic inc/dec due to shared_ptr copying). Granted, that was on my machine, with my RAM, my CPUs, etc. - it may be better on others, but maybe even much worse (with, say, NUMA configs). For our in-house game editor we had to disable one of the CPUs to avoid that, as we'd also been guilty of over-using shared_ptr a bit - though not like this.
Hello Yan, I would love to see you optimize this code. The fact alone that replacing one single function with a for loop can yield such a performance boost shows how good this code is. It is definitely a great starting point for a video about optimization. Given the fact that the accumulate function was written in order to be used in cases like this, it being so slow isn't something that a coder should have to worry about. It's a language-related problem. You have already started talking about playing to the strengths of the hardware architecture. This performance boost was possible because the coder didn't know what the accumulate function actually does. You might even go as far as to explain the different features of 32-bit and 64-bit processors and assembly language and then go back to explaining how C++ as a language actually utilizes them. That would be highly informative, albeit really time consuming, but this very code example shows how easy it can be to go from a working piece of software to a really satisfying experience for both coders and users if you are aware of the inner workings of your tools. In short: Please do it. I love your content, by the way, and have been a subscriber for a long time. All the best to you and your loved ones from Germany.
Only the real bottlenecks need to be optimized. I think Yan tackled that well enough via the profiler. And hand-written assembly offers diminishing returns these days compared to intrinsics and modern compilers that automatically find intrinsic equivalents.
This is exactly the content I wanted to see. As an old-school coder myself, I struggle a bit with learning modern C++ stuff in my free time, so it's great to see that raw, quick comparison. The explanation and clarity from Cherno are hard to match in any other material you might find on the web. Loving it. ❤
In my experience, when changing a single part of the code can speed things up so much, the problem is almost always some kind of superfluous copy somewhere. Also, an additional optimization video would be really nice!
Less unnecessary copying, and more use of move semantics or references to objects, definitely helps improve performance and efficiency, leaving the code pre-optimized from the start.
These aren't just code reviews, but absolute masterclasses on why one approach wins over another. I've learned more in these 38 minutes than I have in the last 5 years!
Also learning the methodology of diagnosing inefficiencies. The way he does it here for a C++ program still applies to all languages: "break the code down into verbose, deliberate steps to see which step is causing the problem".
The problem is not std::accumulate. It is the fact that accumulating over a std::optional makes a "copy" of it on every pass of the accumulate. Accumulate should be used with elements that are trivial to copy/move, or at least elements that handle the move properly.
This was a great video! As a person who just rewrote a pure C++ project in a more C-style C++ I can relate. I was taught that the explicitness of C is inherently bad and it was an eyeopener to realize this is not the case.
Very true! C is blazingly fast; C++ is well organized. There is a performance tradeoff in the organizing (forcing memory swaps, etc.), so I would always do a back-of-the-envelope performance calculation to be sure I wasn't wasting CPU. I started in C in '86, when we didn't have CPU to spare! So performance has always been on my mind, and the success of many projects came down to that as they grew in size and complexity.
The speed at which you analyzed the whole project is impressive! A raytracer is one of the projects I want to do in the future. At the same time, I really love optimization videos, so I cannot wait to see the continuation :^)
I would love to see you go through and refactor/optimize the code (get rid of the std usage where it's superfluous etc). Someone also mentioned C# and using that to achieve similar results would be really cool. Span was added not too long ago to help with stack-allocations without having to use unsafe and stackalloc.
I think it would be an amazing series, take this one codebase, one episode going deeper on why virtuals were bad in this case, de-OOPing it, a longer one bringing misc stuff closer to the metal, and the final one doing it on a GPU.
No, the optimization part is very interesting, helpful and useful. I think continuing down this road is a great course of action to help others to improve their own code bases / frameworks. It should help to give them insight towards their own approaches on how to analyze and profile their own code, what to look for, and how to optimize or improve its performance. Great Video!
I'd love to see both of your own fully optimised version of this implementation as well as a GPU based "ray tracing in a weekend". I think they would both be fascinating.
A possible suggestion for a video format, in case you want a more complete exploration without it being a 2-hour video: record your thoughts during the beginning, middle, and end of the process.
Beginning - overall impressions of what the weeds will look like, guesses at what needs to change, etc.
Middle - what the weeds *actually* look like, things that have and haven't surprised you so far. Examples of transforms done. Also kinda fun to see whether you can guess where the 'middle' would be and if you're accurate on that by the end.
End - obviously get to discussing the final results and overall learnings.
10:02 It is so funny to see your cam video freezing while rendering that slow raytracing image. 😂 It's like being a party host kicked out of his own house.
Since you asked for opinions about the unique_ptr array, here's how I handle it. If I need an array where I only push/pop from the back, I use std::vector. If I need a buffer, I make a buffer class and still use std::unique_ptr as the data holder. This lets me focus on adding functionality without having to write all the extra boilerplate: I don't have to delete the copy ctor and copy assignment operator, nor write the move ctor, move assignment operator, or destructor. With std::vector, you get copy and move for free, but then you have to write the two lines to delete the copy ctor and copy assignment operator. As for wrapping a naked pointer, you have to write all of those yourself, including the destructor. Not using std::unique_ptr for a heap-allocated array is simply more work, even if utility wrappers like this aren't the bulk of your program.
21:38, 24:54 Hi Cherno! I would always use a loop in this case rather than overengineering it with std algorithms. But I would say that std::accumulate is fine for C++ compilers; don't be afraid of it. When it's compiled in debug mode it will not be optimized, but it works well under the release setting. With the release setting / -O2 flag, it does get inlined and unrolled into a loop.
I love a unique pointer wrapping an array, but only when necessary. Normally I would use a vector, except when I'm implementing something much lower level. In my codebase you'll see some `std::unique_ptr` and I use it to implement a vector of a runtime-defined structure, mainly to contain all uniforms for a list of objects to render, or to initialize a structure of buffer objects for OpenGL.
DEFINITELY A LIKE! Big! THANK YOU! The point of OPTIMIZING vs OOP reminds me of John Carmack's wild fast inverse square root trick in Quake III. Do the math, not the objects...
Not sure how everyone else feels, but I like watching when you go in-depth in these reviews. Either just doing a proper review that takes an hour or doing these optimizations that take an hour. It's a really good way to learn when we actually understand what's wrong with it. Also, actually getting to see the optimized code being written is good when we aren't super familiar with the language but understand the concepts.
In most cases, I would avoid a std::unique_ptr and instead use a std::vector. The only case where I would prefer unique_ptr is if the element type is `bool`: in most implementations std::vector<bool> is space-optimized, which creates issues when we need a raw pointer into it to pass on.
One downside to vector is that it allows easy resizing on the fly, which may not be something you want. If you need to keep a long-lived pointer to something inside the buffer, using a vector may not be ideal because if someone pushes to it, it may result in the internal data storage being copied to a new location and then freed, leaving you with a subtle use-after-free bug. That's still possible if you use a unique_ptr, but it will be much more obvious because you have to explicitly replace the entire buffer, as opposed to just calling push_back or emplace_back. I don't think there's anything wrong with using unique_ptr for a situation like this; it's basically a safer version of a C-style pointer-to-buffer. A vector serves a different purpose, as a dynamically-sized collection of items. If you don't need the resizing capabilities, why use the more complex type?
I use unique_ptr quite a lot. It's a nice pattern for when you want a non-resizable vector with size specified at runtime. The API is somewhat worse than a vector, but you can certainly wrap it into a nicer type (like an owning version of the buffer helper you describe in this video).
With the std::unique_ptr wrapping a buffer like that, the point of it, and what you're getting here, is exception safety and a guarantee of no use-after-free or forgotten delete call during object destruction: two things that, given the multi-threaded nature of this project, are quite important to get right and easy to get wrong with raw pointers.
He mentioned not liking the usage of unique_ptr for a buffer and preferring vector instead. vector gives you a size, which is nice and a bit less implicit, and resize() is a bit more terse. Though there is an overload of make_unique for arrays that takes an array size (e.g. make_unique<T[]>(1024) or what not). It didn't look like that was used in the given code, but it's fairly terse too. One big difference in some contexts is that unique_ptr is a much more lightweight template than vector. I had some heavily templated code with a fixed but dynamically sized array where switching from vector to unique_ptr cut the compile time in half. This code didn't look to be heavily templated, so it likely doesn't matter, but it's useful to know.
Oh my gosh, I remember watching you and learning some programming from you when I was still in primary school. You made me pursue programming, and suddenly seeing one of your videos in my feed brought back a ton of memories of me trying to figure out Java as a tiny kid. Now I'm studying cybersecurity and software development, working my way through!
Thanks, Cherno! As always, great video. Definitely interested to see you further optimize the code and squeeze out better performance. Also, I would love to see you port that to a compute shader.
Go faster!! See how much improvement you can make! Would love to see that! I would also have loved to know the render time using the improved code with the sampling rate and bounce count set to the original values, 500 and 50 I think...
Very interesting video. I would like to see more videos where code is checked for issues related to speed, crashing, freezing, etc. A lot of us are just hacking stuff together with no real understanding of why problems occur such as a window freezing for 10 seconds while performing an operation.
Taking the step to running on the GPU would be very interesting. I'd definitely watch a video (or series) of you coding up a more optimized implementation.
Thank you for delivering such a masterpiece of a video; I love it so much and learned a lot! I am as motivated as you are, and planning to write my own raytracer using a compute shader next month!
Stupid comment, but I hate it when people sound out the letters “S T D” for Standard Library functions. Admittedly at least part of it has to do with Sexually Transmitted Diseases sharing the same generally used pronunciation “STDs”, and also in part because of semantic accuracy (you tend to pronounce the letters in such a case where the letters are part of an acronym as opposed to std:: which is a shortening of standard); however, the big thing for me is the fact that S T D is actually harder to say than even the full proper pronunciation of Standard. It’s literally easier to say, more semantically correct, and easier for people who are still trying to learn the language. All of that being said, I know how much we tech folks like to shorten things down to easy to say but somewhat nonsensical jargon. Fret not, there’s an option for that too that actually makes some degree of sense considering it *actually* shortens the pronunciation and makes it easier to say: stood. Stood is a perfectly viable and logical way to pronounce std::, is easy to remember, doesn’t imply a nonexistent acronym, takes only a single syllable, and as a bonus doesn’t imply we are giving the code syphilis! Don’t get me wrong, I like chlamydia as much as the next fella, but my computer has to go through enough suffering with all of the inefficient debug builds.
Watching you read through somebody else's code like it's a kindergarten book is so crazy to me. I forget how to read my own projects after a month of inactivity.
This helped a lot! I'm currently writing a game about spaceships and gravity and stuff, and by removing some inheritance and other object-oriented stuff from my low-level simulation and math layer, I instantly doubled the performance.
It's really, really key to use profilers for this kind of thing. More often than not, "optimized code" is a completely different beast from what the rest of your project will/should look like. You can end up writing some really hard-to-read/ugly stuff, so it's really important to only do it in the hot zones. Also, "premature optimization is the root of all evil" :p I've seen code that starts using bit shifting and all sorts "to be faster" and then in the very next code block reallocates an array inside a for loop that could have happily lived outside it :p But it's an interesting art, particularly on GPUs. You would be surprised how often you can replace an if statement with pure maths that achieves the same result and is so much faster without the conditional. (This is true of standard programming too, but it's more applicable in GPU stuff.)
Yes please optimize it further! I learned so much just by watching this video. I'm very new to C++ but I already feel like starting a ray tracing project!
I think wrapping a uint8_t[] inside a unique_ptr is really good because unique_ptr guarantees that the memory is freed. However, in this use case there should be no downside to just using std::vector. Unless you resize the vector, it will behave the same way, and it has the benefit that you don't need to store the size elsewhere. The only cases where you can't use vector for something like that are if your element type is not copy-constructible (IIRC the template instantiation of vector would fail) or if you want a vector of bools, because, y'know, vector<bool> is some cursed template specialization that should never have been brought into the standard.
This code is amazing with its great description of the task through OOP structures and logic, but at the same time it is a great example of overusing OOP
Great video and fun to watch. Would definitely like to see the optimization part of this. Especially considering how much faster you got it going so soon after you started looking into it.
Great video as always! I've learnt it the hard way with my 3D "engine" that things like shared pointers, optionals, and std::function (they are pretty heavy and use heap allocation) - all these std:: things can be very performance-heavy. Lol, there's even a C++ Weekly episode about std::pair being 300 times slower than a simple pair struct with two public members. So yes - don't overengineer your program - keep it simple where you can!
@@lotrbuilders5041 Oh, I'm sorry! My bad - I should have written only about shared ptrs. I had performance issues with unique pointers because I was replacing them too often, causing a lot of heap allocations and deallocations!
That was really sick, seeing which code impacts performance in what way is awesome! Definitely would be a great idea to investigate further optimizations and/or create the fastest ray tracer yourself !!!
I only ever read one book about C++, but never used what I learned in real software. I am so impressed that he is able to grasp what's going on so quickly; reading someone else's code is so hard for me.
Would like to see a follow-up video on optimizing the code, including checking the assembly and if the compiler doesn't make use of SIMD already, then also something about that for the hot path.
Summing HitRecords doesn't really make sense - they're structs with no real way to "sum" them. The std::accumulate in the original implementation was simply choosing between two HitRecords and either keeping the old one or replacing the result, depending on the ray hit distance. Additionally I ran an image diff after recording this video (just to make sure) - aside from noise and other random scattering built into the renderer, the results are identical.
@@TheCherno Thought it was odd it didn't trigger a compile error. Turns out std::accumulate relies on the lambda to do the accumulating by passing in the accumulated value (temp_value). That value gets passed in, you're meant to add to it and return the accumulated value for each iteration. Obviously, "our" lambda isn't doing that. As you were. :)
@@TheCherno Have done some testing. Using the lambda in a standard for loop gets the same slow result - so it's not std::accumulate. However, changing the lambda to return std::optional() on no hits fixes it. Change it back to return temp_value.. slow. Really strange.
@@TroySchrapel Maybe we're losing time by copying HitRecord all the time? The lambda takes the accumulated value by const ref, then returns it by value, so for every iteration, there is an unnecessary copy of a non-empty std::optional.
@@szaszm_ Yeah. It is. Specifically the shared_ptr of mat_ptr. Replacing that with a regular pointer also fixes the issue. Also, replacing all instances of std::optional with std::shared_ptr fixes the issue.
Super informative, the stuff that I learned here I could implement in my own projects :) Would love to see another optimization video ( would not object to a series like this :D )
Nice
I want more optimization stuff so we can implement it in our own work
I definitely would like to see more performance related stuff
this topic is so interesting and you explain it very good :) thx!
Hi :), just stop asking and record additional video RIGHT NOW!!! :). Your skills are incredibly insane. Pleeease mooooooore... 😅
Man, I never thought you would pick my project! Definitely made my day!
@Harry Byrne Yes, I am! I'll gladly take any help I can get :)
@@TimeFadesMemoryLasts Sorry, I'm missing something: do you mean running with the whole IDE or just the compiled code itself?
You can tell this is a physicist's code because they're used to wait weeks for simulation results... 😂
Malappropriation of physics, then, as programming is also physics.
@@colonthree too much philosophy, not enough practicality
Ahh, physicists bashing, I'm in. 😂
Yes, and we could also have used std::reduce with the C++17 parallel algorithms to get the code vectorized easily.
This should be pinned. std::accumulate is in no way the problem; shared_ptr is.
It could also be the lambda call, where the replacement is just a loop. The jump may mess with the cache unless the compiler is good enough to inline the lambda.
@@Narblo It's not that, it's really lots of "lock xadd" (atomic inc/dec due to shared_ptr copying). Granted, that was on my machine, with my RAM, my CPUs, etc. - it may be better on others, but maybe even much worse (with, say, NUMA configs). For an in-house game editor we had to disable one of the CPUs to avoid that, as we'd also been guilty of overusing shared_ptr a bit - though not like this.
With 7.5 minutes of rendering time, the objective "Raytracing in a weekend" had already been achieved by a large margin 😊
Hello Yan,
I would love to see you optimize this code. The fact alone that replacing one single function with a for loop can yield such a performance boost shows how much headroom this code has. It is definitely a great starting point for a video about optimization. Given that the accumulate function was written to be used in cases like this, it being so slow isn't something a coder should have to worry about; it's a language-related problem. You have already started talking about playing to the strengths of the hardware architecture. This performance boost was possible because the coder didn't know what the accumulate function actually does. You might even go as far as explaining the different features of 32-bit and 64-bit processors and assembly language, and then go back to explaining how C++ as a language actually utilizes them. That would be highly informative, albeit really time consuming, but this very code example shows how easy it can be to go from a working piece of software to a really satisfying experience for both coders and users if you are aware of the inner workings of your tools. In short: please do it. I love your content, by the way, and have been a subscriber for a long time. All the best to you and your loved ones from Germany.
It's not the function, but the copy of the shared pointer, that causes the slowdown.
@@phantom_stnd Exactly this.
Only the real bottlenecks need to be optimized. I think Yan tackled that well enough via the profiler. And hand-written assembly is diminishing returns these days compared to intrinsics and modern compilers that automatically find intrinsic equivalents.
Please do an optimization video on this code! It would be very entertaining and educational for everybody! I personally would love to see that video!!!
This is exactly the content I wanted to see. Old-school coder myself, I struggle a bit learning modern C++ stuff in my free time, so it's great to see that raw, quick comparison. The explanation and clarity from Cherno are hard to match in any other material you might find on the web. Loving it. ❤
In my experience, when changing a single part of the code can speed things up so much, the problem is almost always some kind of superfluous copy somewhere. Also, an additional optimization video would be really nice!
Less unnecessary copying and more usages of move semantics or using references to objects definitely helps towards improving performance and efficiency making the code already pre-optimized.
These aren't just code reviews, but absolute masterclasses on why one approach wins over another. I've learnt more in these 38 minutes than I've learnt in the last 5 years!
What have you been doing the last 5 years? Did you have internet?
@@yohannes2kifle Too busy churning out products and applications! 😉
Also learning the methodology of diagnosing inefficiencies. The way he does it here for a C++ program still applies to all languages: "break it down into verbose, deliberate steps to see which step is causing the problem".
"I could have *written* a ray tracer in that amount of time..." GOLDEN! :)
The problem is not std::accumulate.
It is the fact that iterating with std::accumulate over a std::optional makes a copy of it on every pass of the accumulate.
Accumulate should be used with trivially copyable/movable elements, or with elements that handle the move properly.
This was a great video! As a person who just rewrote a pure C++ project in a more C-style C++ I can relate. I was taught that the explicitness of C is inherently bad and it was an eyeopener to realize this is not the case.
Very true! C is blazingly fast, C++ is well organized. There is a performance tradeoff in the organizing, forcing memory swaps etc., so would always do a back of the envelope performance calculation to be sure I wasn't wasting cpu. I started in C in '86 when we didn't have CPU to spare! So performance has always been in mind, and the success of many projects came down to that as they grew in size and complexity.
The speed you analyzed the whole project is impressive! Raytracer is one of the projects that I want to do in the future. At the same time I really love optimization videos so I cannot wait to see the continuation :^)
I would love to see you go through and refactor/optimize the code (get rid of the std usage where it's superfluous, etc). Someone also mentioned C#, and using that to achieve similar results would be really cool. Span<T> was added not too long ago to help with stack allocations without having to use unsafe and stackalloc.
I think it would be an amazing series, take this one codebase, one episode going deeper on why virtuals were bad in this case, de-OOPing it, a longer one bringing misc stuff closer to the metal, and the final one doing it on a GPU.
No, the optimization part is very interesting, helpful and useful. I think continuing down this road is a great course of action to help others to improve their own code bases / frameworks. It should help to give them insight towards their own approaches on how to analyze and profile their own code, what to look for, and how to optimize or improve its performance. Great Video!
I want a Cherno who does videos like this with the Rust language 🦀. But I still love it.
I love watching optimization. It helps my mind negotiate code decisions to do optimizations by default
I'd love to see both of your own fully optimised version of this implementation as well as a GPU based "ray tracing in a weekend". I think they would both be fascinating.
A possible suggestion for video format, in the case you want a more complete exploration while not being a 2 hour video;
Record your thoughts during the beginning, middle, and end of the process.
Beginning - overall impressions of what the weeds will look like, guesses at what needs to change, etc
Middle - what the weeds *actually* look like, things that have and haven't surprised you so far. Examples of transforms done. Also kinda fun to see whether you can guess where the 'middle' would be and if you're accurate on that by the end.
End - obviously get to discussing the final results and overall learnings
10:02 It is so funny to see your cam video freezing while rendering that slow raytracing image. 😂 It's like being a party host kicked out of his own house.
Since you asked for opinions about the unique_ptr array, here's how I handle it.
If I'm needing an array where I need to push/pop from the back only, I use std::vector.
If I'm needing a buffer, I make a buffer class and still use std::unique_ptr as the data holder. This lets me focus on adding functionality without having to write all the extra code: no deleting the copy ctor and copy assignment operator, and no writing the move ctor, move assignment operator, or destructor. With std::vector you get copy and move for free, but then you have to write the two lines to delete the copy ctor and copy assignment operator. With a naked pointer you have to write all of those yourself, including the destructor. So skipping std::unique_ptr for a heap-allocated array is more work, not less, even if utility wrappers like this aren't the bulk of your program.
Please do more of this!!
Bakko: It's multithreaded 😎
Cherno: 👁👄👁
Keep going. Let's see how far you can take it!
glad to hear other people from Italy!
I THINK the problem with using accumulate here is that it returns by value instead of by reference, so that's a LOT of copies being made.
Congratulations on 420k subscribers. Celebrate it with a big fat blunt.
These videos are amazing. Thanks to watching them I'm learning a bit about how to optimise my code, and I don't even use C++.
21:38 24:54
Hi Cherno!
I would always use a loop in this case, rather than overengineering it with std algorithms. But I would say that std::accumulate is fine for C++ compilers; don't be afraid of it. When compiled in debug mode it will not be optimized, but it works well under a release setting. With the release setting / -O2 flag, it does get inlined and unrolled into a loop.
That said, developers do run in debug mode all day during development, so don't get too comfortable with the "release will fix it" mentality.
Started following you recently because of a friend recommendation. I'd love to see more optimization videos. Loving your channel!!
I love a unique pointer wrapping an array, but only when it's necessary. Normally I would use a vector, except when I'm implementing something much lower level.
In my codebase you'll see some `std::unique_ptr` arrays, and I use them to implement a vector of a runtime-defined structure - mainly to contain all the uniforms for a list of objects to render, or to initialize a structure of buffer objects for OpenGL.
Incredibly interesting! If you do end up making a follow up video, then I personally would like to see your take on CPU cache coherency.
DEFINITELY A LIKE! Big! THANK YOU! The point of OPTIMIZING vs OOP reminds me of John Carmack's wild fast inverse square root in Quake III. Do the math, not the objects...
More optimizing of ray tracing. I appreciate how you got the language overhead out of the way and made the logic closer to the metal.
You should do a video on how to navigate VS, like shortcuts and what not. Your ability to navigate VS like it's vim is impressive
That was a very interesting code review to watch! Man your understanding on this topic is impressive, keep on the quality work
This has been an excellent bloody episode, Cherno! Please continue with this guy's project! So, so informative and practical!
Not sure how everyone else feels, but I like watching when you go in-depth in these reviews. Either just doing a proper review that takes an hour or doing these optimizations that take an hour. It's a really good way to learn when we actually understand what's wrong with it. Also, actually getting to see the optimized code being written is good when we aren't super familiar with the language but understand the concepts.
In most cases I would avoid a std::unique_ptr array and instead use a std::vector. The only case where I would prefer unique_ptr is if the element type is `bool`: for most implementations std::vector<bool> is space-optimized, which creates issues when we need a raw pointer into it to pass on.
One downside to vector is that it allows easy resizing on the fly, which may not be something you want. If you need to keep a long-lived pointer to something inside the buffer, using a vector may not be ideal because if someone pushes to it, it may result in the internal data storage being copied to a new location and then freed, leaving you with a subtle use-after-free bug. That's still possible if you use a unique_ptr, but it will be much more obvious because you have to explicitly replace the entire buffer, as opposed to just calling push_back or emplace_back.
I don't think there's anything wrong with using unique_ptr for a situation like this; it's basically a safer version of a C-style pointer-to-buffer. A vector serves a different purpose, as a dynamically-sized collection of items. If you don't need the resizing capabilities, why use the more complex type?
I use unique_ptr quite a lot. It's a nice pattern for when you want a non-resizable vector with size specified at runtime. The API is somewhat worse than a vector, but you can certainly wrap it into a nicer type (like an owning version of the buffer helper you describe in this video).
with the std::unique_ptr wrapping a buffer like that - the point of it, and what you're getting here - is exception safety and a guarantee of no use-after-free or forgotten delete call during object destruction - two things that given the multi-threaded nature of this project, are quite important to get right and impossible to do with raw pointers.
He mentioned not liking the usage of unique_ptr for a buffer, and preferring vector instead.
vector gives you a size, which is nice and a bit less implicit, and resize() is a bit more terse. Though there is an overload of make_unique for arrays that takes an array size (e.g. std::make_unique<T[]>(1024) or whatnot). It didn't look like that was used in the given code, but it's fairly terse too.
One big difference in some contexts is that unique_ptr is a much more lightweight template than vector. I had some heavily templated code with a fixed but dynamically sized array where switching from vector to unique_ptr cut the compile time in half. This code didn’t look to be heavily templated so it likely doesn’t matter, but it’s useful to know.
OH my gosh, I remember watching you and learning some programming from you when I was still in primary school. You made me pursue programming, and suddenly seeing one of your videos in my feed brought back a ton of memories of me trying to figure out Java as a tiny kid. Now I'm studying cybersecurity and software development, working my way through!
I would love for a series revolved around optimization of generic code (not just HAZEL)!
As a dev of 20 years, it's always fun to watch optimizations. It's very satisfying to watch.
"raytracing in a weekend" is actually how long it takes to render the full image on an average computer
😂😂😂😂😂🎉
Thanks, Cherno! As always, great video.
Definitely interested to see you further optimize the code and squeeze out a better performance.
Also I would love to see you porting that to a compute shader.
I would love to see how you are optimizing it to its fullest potential
Hey Cherno, first video I've seen, but I like your approach to code improvement. Hope you kept a similar style since then :D
It would be good to see you optimize that further.
yess bro iron out this project in another video. love hearing you optimize stuff and seeing this sort of thing from you!! awesome vid man
Go faster!! See how much improvement you can make! Would love to see that! And I would also have loved to know the render time using the improved code with the sampling rate and bounce count set to the original values, 500 and 50 I think...
You know things are slow when the camera stutters XD.
Totally +1 on the optimization thing ,the difference is HUGE, thanks a lot for this great quality content
Very interesting video. I would like to see more videos where code is checked for issues related to speed, crashing, freezing, etc. A lot of us are just hacking stuff together with no real understanding of why problems occur such as a window freezing for 10 seconds while performing an operation.
The optimization episodes are fun lol more please
Taking the step to running on the GPU would be very interesting.
I'd definitely watch a video (or series) of you coding up a more optimized implementation.
I would love that additional video, I love optimization/profiling videos!
People should know more about you; this channel deserves millions of subs, they are seriously missing a lot.
33:52 Yesssss you king
Thank you for delivering such a masterpiece video, I love it so much and learned a lot! I am as motivated as you are, and planning to write my own raytracer using compute shaders next month!
LOVE the optimization videos, please make more!
Honestly, this is the best YouTube video you've made so far.
The frozen cherno just makes this video so much funnier than it should be
Yes! You should do an entire series on this! Like the lowest level you can reasonably get. Math and all would be awesome!
Stupid comment, but I hate it when people sound out the letters “S T D” for Standard Library functions. Admittedly at least part of it has to do with Sexually Transmitted Diseases sharing the same generally used pronunciation “STDs”, and also in part because of semantic accuracy (you tend to pronounce the letters in such a case where the letters are part of an acronym as opposed to std:: which is a shortening of standard); however, the big thing for me is the fact that S T D is actually harder to say than even the full proper pronunciation of Standard. It’s literally easier to say, more semantically correct, and easier for people who are still trying to learn the language.
All of that being said, I know how much we tech folks like to shorten things down to easy to say but somewhat nonsensical jargon. Fret not, there’s an option for that too that actually makes some degree of sense considering it *actually* shortens the pronunciation and makes it easier to say: stood. Stood is a perfectly viable and logical way to pronounce std::, is easy to remember, doesn’t imply a nonexistent acronym, takes only a single syllable, and as a bonus doesn’t imply we are giving the code syphilis!
Don’t get me wrong, I like chlamydia as much as the next fella, but my computer has to go through enough suffering with all of the inefficient debug builds.
I would love to see you further improving this code!
Fantastic video. I would love to see you optimize step by step this code.
Great Job Cherno :)
Watching you read through somebody else's code like it's a kindergarten book is so crazy to me. I forget how to read my own projects after a month of inactivity.
😃
This was helpful a lot! I'm currently writing a game about spaceships and gravity and stuff, and by removing some inheritance and other object oriented stuff from my simulation low level and math layer I instantly doubled the performance.
It's really, really key to use profilers for this kind of thing. More often than not, "optimized code" is a completely different beast from what the rest of your project will/should look like.
You can end up writing some really hard-to-read/ugly stuff, so it's really important to only do it in the hot zones. Also, "premature optimization is the root of all evil" :p I've seen code that starts using bit shifting and all sorts "to be faster" and then in the very next code block reallocates an array inside a for loop that could have happily lived outside it :p
But it's an interesting art, particularly on GPUs. You would be surprised how often you can replace an if statement with pure maths that achieves the same result and is so much faster without the conditional. (This is true of standard programming too, but more applicable in GPU stuff.)
Yes please optimize it further! I learned so much just by watching this video. I'm very new to C++ but I already feel like starting a ray tracing project!
YES I would like to see you fix it to the MAX like you partially did today!
I'm always happy to see these go into "Cherno fixes your code", which is actually my preferred series.
I think wrapping a uint8_t[] inside a unique_ptr is really good because unique_ptr will guarantee that the memory is freed. However, in this use case there should be no downside to just using std::vector. Unless you resize the vector it will behave the same way and it has the benefit that you don't need to store the size elsewhere.
The only use case where you can't use vector for something like that is if your element type is not copy-constructible (IIRC the template instantiation of vector would fail), or if you want a vector of bools. Because, y'know, vector<bool> is some cursed template specialization that should never have been brought into the standard.
This code is amazing with its great description of the task through OOP structures and logic,
but at the same time it is a great example of overusing OOP
Inspirational! I'd love to see your optimization process
Great video and fun to watch. Would definitely like to see the optimization part of this. Especially considering how much faster you got it going so soon after you started looking into it.
Great video as always!
I've learnt it the hard way with my 3D "engine" that things like shared pointers, optionals, std::function (it's pretty heavy and uses heap allocation) - all these std:: things can be very performance-heavy.
Lol, there's even a C++ Weekly episode about std::pair being 300 times slower than a simple pair struct with two public members.
So yes - don't overengineer your program - keep it simple where you can!
I've yet to see any example where std::unique_ptr causes performance degradation. Any examples?
@@lotrbuilders5041 Oh, I'm sorry! My bad - I should have written only about shared ptrs. I had performance issues with unique pointers because I was replacing them too often, causing a lot of heap allocations and deallocations!
I understand none of the code, but I am intrigued in how well you are able to read through the code and fix things.
This is exactly how video game developers feel every time anyone reports a bug in the game.
Mixing aggregation with such a deep level of recursion can cause this kind of overhead; for loops are way cheaper, it seems, at least in C++.
best video of the series so far, loved it :D
I'd be interested in the optimization part. It would be interesting to dig in to see what exactly was slowing it down before.
Optimizations would be great to see, there is so much you can learn out of that!
That was really sick, seeing which code impacts performance in what way is awesome! Definitely would be a great idea to investigate further optimizations and/or create the fastest ray tracer yourself !!!
Great content! I loved this. :-) I always enjoy learning new things after profiling a bit of code.
I only ever read one book about C++, but never used what I learned in any software.
I am so impressed that he is able to grasp what's going on so quickly. Reading someone else's code is so hard for me.
Would like to see a follow-up video on optimizing the code, including checking the assembly and if the compiler doesn't make use of SIMD already, then also something about that for the hot path.
I'd love to watch this sort of stuff based on UE4's blueprint system. Really informative, dude! Love it :)
24:40 is me whenever I read someone else's code these days. I'm so glad I'm not the only one!
Yes, please continue digging into this.
Of course we do, Cherno. Everything you do is amazing and so informative ❤❤❤ Thanks a lot for your time.
I don’t know what you’re saying, but you say it so well that I feel like I have to keep watching.
I think the result has changed. The old code was summing (accumulating) the HitRecord. You're just returning the last one.
Summing HitRecords doesn't really make sense - they're structs with no real way to "sum" them. The std::accumulate in the original implementation was simply choosing between two HitRecords and either keeping the old one or replacing the result, depending on the ray hit distance.
Additionally I ran an image diff after recording this video (just to make sure) - aside from noise and other random scattering built into the renderer, the results are identical.
@@TheCherno Thought it was odd it didn't trigger a compile error. Turns out std::accumulate relies on the lambda to do the accumulating by passing in the accumulated value (temp_value). That value gets passed in, you're meant to add to it and return the accumulated value for each iteration. Obviously, "our" lambda isn't doing that. As you were. :)
@@TheCherno Have done some testing. Using the lambda in a standard for loop gets the same slow result - so it's not std::accumulate. However, changing the lambda to return an empty std::optional on no hits fixes it. Change it back to returning temp_value... slow. Really strange.
@@TroySchrapel Maybe we're losing time by copying HitRecord all the time? The lambda takes the accumulated value by const ref, then returns it by value, so for every iteration, there is an unnecessary copy of a non-empty std::optional.
@@szaszm_ Yeah. It is. Specifically the shared_ptr of mat_ptr. Replacing that with a regular pointer also fixes the issue. Also, replacing all instances of std::optional with std::shared_ptr fixes the issue.
Super informative, the stuff that I learned here I could implement in my own projects :)
Would love to see another optimization video ( would not object to a series like this :D )
8:51 a pointer to an array looks like a double pointer to me...
Yeah, it does.