This is probably one of the best-structured and most informative talks I've seen lately. And on such an important topic, too!
Fascinating talk with an engaging presenter who challenges a lot of common assumptions with solid data. Very informative and enjoyable to watch 🙂
It is a great talk. I have been using exceptions for some years now for really exceptional errors (most of them). The code gets much clearer. I will use your video if I encounter 'we don't use exceptions because...' again. Thank you!😊
Fantastic! It was a pleasure to watch, listen and learn
Excellent video, thanks for this. Exceptions are much maligned and greatly misunderstood, especially in the embedded environment.
This was a great talk. Takeaways, big picture and thesis first, then some interesting nitty gritty stuff.
I know C++ gets dogged on a lot for being such a huge language, but I guess in this case having the option to switch between centralized and decentralized error handling was a plus.
Great talk, I look forward to seeing the other two! I've long felt that the C++ exception handling mechanism was close to greatness and just needed some tweaks that it couldn't get due to backward compatibility concerns. In embedded systems where you control all the code, it seems you can really make those tweaks happen and get the ideal outcome. I wonder how long it will take for something like this to end up in Rust or another systems programming language.
Great talk. I would love to hear more about it and see it implemented and added to current tooling, because that sounds awesome. It might even change the embedded landscape.
28:17 Pretty cool categories. A few years back, I started separating functions into my own categories as well: detection functions, transparent functions, and handling functions. They are pretty self-explanatory:
- Detection functions detect an error
- (Error) Transparent functions are functions that don't error or just don't care about handling the error (Most functions are in this category)
- Handling functions have a try/catch block
Yours probably work better in the context of optimizing the error handling. Or maybe they're just two sides of the same coin.
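To make the three categories concrete, here is a minimal C++ sketch (the function names are invented for illustration, not from the talk):

```cpp
#include <stdexcept>
#include <string>

// Detection function: notices the problem and throws.
int parse_digit(char c)
{
    if (c < '0' || c > '9') {
        throw std::invalid_argument("not a digit");
    }
    return c - '0';
}

// Transparent function: may propagate an error, but neither throws
// directly nor catches; most functions look like this.
int parse_two_digits(const std::string& s)
{
    return parse_digit(s.at(0)) * 10 + parse_digit(s.at(1));
}

// Handling function: owns the try/catch and decides what to do.
int parse_or_default(const std::string& s, int fallback)
{
    try {
        return parse_two_digits(s);
    } catch (const std::invalid_argument&) {
        return fallback;
    }
}
```

The transparent middle layer is where table-based unwinding pays off: it carries no error-handling code of its own on the happy path.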
That's a great way to categorize the class of functions in exceptions. I may borrow these terms for the next time I do the talk.
@KhalilEstell The definitions I gave are kind of flimsy, as they ignore the nuanced fact that functions can overlap across the categories... it's pretty much a Venn diagram of categories... it might be ultra rare, but you might have a function that is in all 3 categories; of course those are a mess and somewhat harder to reason about...
A pure handling function is probably rarer than a mixed handling + transparent function, AKA one that catches some set of errors but not all. One thing's for sure, though: the pure transparent functions are the largest group... I guess a slightly better definition would be:
- Detection functions are functions which invoke the act of throwing: either they literally contain a throw expression, or they call a function whose semantics is to throw, like a helper throw function.
- Transparent functions are functions that don't error, or don't care about handling a set of errors which they don't detect themselves.
- Handling functions just have a try/catch block which catches and handles a set of errors. Which is to say, rethrowing doesn't really count as handling; if anything, there should be techniques (and there are in C++, anyway) to remove that try/catch so that your function reads more like a transparent one.
There is actually a fourth category, but at least in C++ it's so rarely used that I kind of forgot about it.
- Conversion functions, turning an exception from a T -> U.
Even though it's not super important to have dedicated syntax for it, it does mean catch blocks have two possible semantics, which makes them a bit harder to reason about: instead of looking at a catch block and going "yup, that's a handling function", you now have to do the horrible thing of skimming it for any throws.
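A minimal sketch of such a conversion function (the type and function names here are invented for illustration):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical low-level error type thrown by a driver layer.
struct DriverError
{
    int code;
};

// Conversion function: translates an exception from T (DriverError) to
// U (std::runtime_error) so callers only ever see the higher-level type.
// The catch block neither handles the error nor stays fully transparent,
// hence the separate category.
void initialize_device()
{
    try {
        // stand-in for a low-level call that throws DriverError
        throw DriverError{5};
    } catch (const DriverError& e) {
        throw std::runtime_error("device init failed, driver code " +
                                 std::to_string(e.code));
    }
}
```

A reader skimming this catch block has to notice the throw inside it to see it is not a handling function — exactly the problem described above.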
I need that second talk!! I always liked exceptions, and I'm implementing a hobby language where exceptions are value-like, with table-based unwinding; currently it is much closer to 8x on the bad path. (edited: measured, actually closer to 8x)
The talk is great
Excellent talk!
Good talk, and I agree that handling expected-based code can be a pain, but with monadic expressions it can be fixed. Why did he choose to use macros instead of and_then functions?
Hey, speaker here! I got this question recently and I should have addressed it. The monadic code approach is akin to JS callback hell or chaining promises in JS. I've never liked that way of coding, and neither did my students, who found it far more complicated than a macro. JS would probably agree, given that they added the await keyword. Especially if you have 20+ function calls in a frame, it's annoying and not that easy to write the monadic code.
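To make the comparison concrete, here is a minimal sketch of both styles using a hand-rolled result type (std::expected itself is C++23; the Result type, TRY macro, and function names are invented for this sketch, not the talk's actual code):

```cpp
#include <optional>
#include <string>

// Minimal stand-in for std::expected<int, std::string>.
struct Result
{
    std::optional<int> value;
    std::string error;

    bool has_value() const { return value.has_value(); }

    template <typename F>
    Result and_then(F f) const
    {
        return has_value() ? f(*value) : *this;
    }
};

Result parse(const std::string& s)
{
    if (s.empty()) { return {std::nullopt, "empty input"}; }
    return {std::stoi(s), ""};  // stoi's own throws ignored for the sketch
}

Result double_it(int v) { return {v * 2, ""}; }

// Monadic style: every step chains; intermediate values are awkward
// to name, and long pipelines read like promise chains.
Result monadic(const std::string& s)
{
    return parse(s).and_then(double_it).and_then(double_it);
}

// Macro style: early-return on error, so the happy path reads linearly.
#define TRY(var, expr)                            \
    auto var##_r = (expr);                        \
    if (!var##_r.has_value()) { return var##_r; } \
    int var = *var##_r.value

Result with_macro(const std::string& s)
{
    TRY(a, parse(s));
    TRY(b, double_it(a));
    return double_it(b);
}
```

With two calls the difference is small; with twenty calls per frame, each needing its predecessor's value by name, the macro version stays flat while the monadic version nests.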
really cool talk :)
Awesome
Got lost at ARM instructions
These look like IPv4 packets
This looks rather over-engineered to me. I have programmed in C++ since Borland C++ 2.1 (pre-C++98), and over the decades on various compilers. And I remember C++ exception handling used to be much simpler (implementation-wise). So I conclude this is just g++ insanity or feature creep. Back in the early days, you could use exceptions without the heap and without RTTI; I am pretty sure about that. You also did not need std::exception. You could write `int error_code = 42; throw error_code;`
C++ turned horrible with the C++11 revision, and I started looking for alternatives (C++17 fixed some of the terrible C++11 stuff). I would not be surprised if all this overly complicated exception handling machinery crept in at that very C++11 revision.
Maybe a good topic for another talk: "The history of C++ exception handling and how it worked."
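For what it's worth, throwing a fundamental type with no std::exception involved still compiles and runs in modern C++; a minimal sketch:

```cpp
// Throwing a plain int remains valid C++; catch clauses can match
// fundamental types directly, no exception class hierarchy needed.
int risky(bool fail)
{
    if (fail) {
        int error_code = 42;
        throw error_code;
    }
    return 0;
}

int run(bool fail)
{
    try {
        return risky(fail);
    } catch (int code) {
        return code;  // catch by value; the int is the whole payload
    }
}
```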
The libunwind code has been there for about 25 years, so it predates C++11 by a long way. Exceptions and libunwind have always been complicated. You may be thinking of SJLJ (setjmp/longjmp) exceptions, which, on function entry, push the unwind information for the function onto a global list and remove it on function exit. This is simpler, kinda, but it also adds runtime cost. To remove that runtime cost, we use the exception tables.
But I don't see this as any more complicated than what compilers and linkers have to do in terms of optimizing code.
Also, I don't see this as any more complicated than any other compression algorithm. So we trade off some complexity for improved performance and code size. I say it's a win.
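For the curious, the registration scheme SJLJ uses can be sketched very loosely with setjmp/longjmp. This is a drastic simplification (no cleanup or destructor handling, and all names are invented), but it shows the happy-path cost that table-based unwinding eliminates:

```cpp
#include <csetjmp>

namespace sjlj_sketch {

// Each frame that might need unwinding registers itself on a global
// list at entry and deregisters at exit; real SJLJ also records
// cleanup actions. Table-based unwinding moves all of this into
// static tables, so the happy path pays nothing.
struct Frame
{
    std::jmp_buf buf;
    Frame* prev;
    volatile int code;  // volatile: read after longjmp
};

inline Frame* top = nullptr;

// A "throw" becomes a longjmp to the most recently registered frame.
void raise(int code)
{
    top->code = code;
    std::longjmp(top->buf, 1);
}

// A "try" block: register, run, deregister, report.
int guarded_call(void (*body)())
{
    Frame frame;
    frame.prev = top;
    frame.code = 0;
    top = &frame;  // registration: pay on every entry
    if (setjmp(frame.buf) == 0) {
        body();  // happy path
    }
    top = frame.prev;  // deregistration: pay on every exit
    return frame.code;  // 0 = no error
}

}  // namespace sjlj_sketch
```

The two pointer writes in `guarded_call` happen on every guarded entry and exit even when nothing throws — the exact runtime cost the exception tables trade away.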
My personal POV is: use the language best suited to the platform. If you are so memory constrained that the STL is an issue, then what are you gaining by using C++? If you have to build a special version of GCC for this, you are creating a nightmare 10 years from now, when you or some other guy has to figure out what is going on here and get GCC to build with the options that were nicely documented (yeah, right). The problem domain has moved from the product functionality to the tools used to create the product. It's lose-lose all the way down.
Chances are all this will be thrown out and replaced, in which case what have you gained? This really falls into the "figure out what I've done here, peasant schmucks" bucket.
Even the code size is bigger... I just don't see an upside, but lots and lots of late nights.
Academically it's very cool, but stuff like this does not belong in production.
> If you are so memory constrained that the STL is an issue then what are you gaining by using C++ ?
I don't believe that the STL is an issue. Dynamic memory allocation is, though. So anything leveraging dynamic memory allocation is not generally used in embedded systems (at least not beyond initialization, and there are always exceptions). `std::inplace_vector` was voted into C++26, and that will do nicely in the embedded space. C++ is a great tool for embedded. I actually find it to be the best tool for embedded programming. 😄
> If you are having to build a special version of GCC to build this you are creating a nightmare 10 years from now when you or some other guy has to try to figure out what is going on here and get GCC to build with the options that were nicely documented (yeah right). The problem domain has moved from the product functionality to the tools used to create the product. It's lose lose all the way down.
Totally agree with this. Luckily, there were no changes to the compiler in this talk. Besides wrapping `__cxa_allocate_exception` and the terminate handler, everything else is left alone. The ABI is the same as it was 25 years ago. In the future we may have a `std::set_exception_allocator` that takes a `std::pmr::memory_resource` so devs can choose their own allocator; then there would be no need for the function wrapping. And as for the terminate handler, that could also be an additional flag added to GCC or LLVM, so it would be in the mainline and not some custom compiler.
> Academically it's very cool, but stuff like this does not belong in production.
What part exactly shouldn't be in production? Exceptions? Wrapping functions? Overwriting the default terminate handler? I'd like to know so such issues can be patched so a more compact, more writable, more performant, and safer error handling scheme can be used in software.
Cheers! Keep an eye out for the next talks. There is a lot of exciting stuff to come.
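For reference, one way to wrap a symbol with GNU toolchains is the linker's `--wrap` option. Here is a sketch of the kind of fixed-buffer allocator that could back such a wrapper — the namespace, names, and sizes are assumptions for illustration; in a real build the logic would sit in `__wrap___cxa_allocate_exception` / `__wrap___cxa_free_exception` and be linked with `-Wl,--wrap=__cxa_allocate_exception`:

```cpp
#include <cstddef>
#include <cstdint>

namespace except_arena {

// Single-slot static arena standing in for the heap. A real
// implementation would size this for the largest exception object
// (plus the unwinder's header) and decide what to do on overflow.
constexpr std::size_t k_buffer_size = 256;
alignas(std::max_align_t) inline std::uint8_t buffer[k_buffer_size];
inline bool in_use = false;

void* allocate(std::size_t size)
{
    if (in_use || size > k_buffer_size) {
        return nullptr;  // real code would fall back or terminate
    }
    in_use = true;
    return buffer;
}

void deallocate(void* p)
{
    if (p == buffer) {
        in_use = false;
    }
}

}  // namespace except_arena
```

The point of the sketch: no heap is involved, so throwing works on a system with no allocator at all, at the cost of bounding how many exceptions can be in flight at once.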
Oh, I did recompile GCC with the exceptions flag enabled, because the ARM company builds GCC with it disabled. I don't see this as a custom compiler, though. It's really a change that deletes the three characters "no-" from "-fno-exceptions" in the GCC build script.
By using C++, you gain type safety, the ability to create low-cost abstractions, type-safe code generation using templates (instead of macro magic), constant expressions, and a whole lot of other compiler checks which you don't get in C. The problem is not that we are memory constrained. The problem is that embedded developers' mindsets and academic teaching have not changed to reflect the state of the embedded world today. More and more embedded software is getting updated regularly with new features. The business/engineering tradeoff between BOM cost and picking a microcontroller with more RAM and flash is not as prominent as it used to be. Businesses are more likely to opt for the microcontroller with increased RAM and flash if it means they can continue to add features to their product for longer without upgrading the HW. This even applies to the cost-sensitive industries.
And even with these higher-performance microcontrollers available to us (at very low prices), some embedded developers are still coding like they have 1 kB of RAM and 8 kB of flash. The code they wrote worked, but it was not portable. That was made very clear in the chip shortage, when some businesses had to rewrite their application code to accommodate HW changes, whilst businesses that had written well-abstracted code in C++ (or C) were able to port their code with relatively little effort and little to no changes in their application code (changes only at the HW BSP level).
@KhalilEstell 1. Very cool talk! Thanks a lot! 2. I am looking forward to your next talk :)