Overcommit is system dependent. Linux can even be configured not to overcommit.
Memory arenas can throw bad_alloc or similar when there is no more pool memory.
GPUs do not overcommit, so bad_alloc exceptions are still useful there.
Many algorithms can recover from bad_alloc if you give them the opportunity.
The title of this video should have been, “don’t rely on bad_alloc on (most) Linux configurations”.
Generic code can still benefit from bad_alloc, and that is all that matters.
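For illustration, a minimal sketch (mine, not from the video) of generic code recovering from bad_alloc by degrading to a smaller working buffer:

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Sketch: try to grab a large working buffer, fall back to smaller ones.
// An external-sort or batch pipeline can run slower with a small buffer
// instead of dying outright. Assumes minimum >= 1.
std::vector<char> acquire_buffer(std::size_t preferred, std::size_t minimum) {
    for (std::size_t n = preferred; n >= minimum; n /= 2) {
        try {
            return std::vector<char>(n);   // may throw std::bad_alloc
        } catch (const std::bad_alloc&) {
            // shrink and retry
        }
    }
    throw std::bad_alloc{};                // truly out of memory
}
```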
Still useful != C++ operator new should throw EH.
@@coshvjicujmlqef6047 Just apply std::nothrow if you don't want exceptions from operator new. The default behavior (without std::nothrow) should still throw exceptions because libraries don't usually handle out-of-memory errors immediately. Besides, modern C++ code should rarely call operator new directly anyway.
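For reference, a small sketch of the nothrow form, which reports failure as a null pointer instead of an exception:

```cpp
#include <new>

int main() {
    // Non-throwing form: failure is reported as nullptr, not bad_alloc.
    int* p = new (std::nothrow) int[1'000'000];
    if (!p) {
        // handle allocation failure without exceptions
        return 1;
    }
    delete[] p;
}
```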
std::nothrow is a terrible design. Just go check a C++ standard library implementation: the nothrow operator new calls the throwing operator new inside a try/catch, and the standard does not allow overriding that. So internally it still throws. It is completely useless.
@4:50 Trivial detail here, but I think what you saw happening was the OOM (Out of Memory) killer in the Linux kernel seeing a process under gnome-terminal taking up too much RAM too quickly and killing it, so that's why your process wasn't able to handle anything (you can verify this in the system logs or with dmesg). The OOM killer has a bit of a reputation for misfires so this behavior may be temporary. I have to say I find it very annoying when it takes out 6 tabs of context instead of killing the one process in the one tab.
it could also have been systemd-oomd
PS: it's actually very likely it was systemd-oomd, especially since it is a "bit" overzealous when it comes to that
So what? Phones kill background apps all the time and nobody complains.
@@coshvjicujmlqef6047 Mobile apps are designed to keep their state even if they're killed.
Not for C++ you dumb
@@coshvjicujmlqef6047 That's one of the many reasons I hate Android. Sometimes I want an app to keep its current state while the phone is locked and no other app is running. It can be paused or go to sleep, but just don't kill it.
I got a std::bad_alloc on a normal x64 PC program yesterday (possibly for the first time ever).
I had a dynamically allocated struct which contains a std::string,
I was making another object, in the constructor of which I was passing a pointer to the first object and calling the constructor of its base class which takes a string_view.
If the struct pointer is not nullptr, it passes the string into the base class's string_view parameter, otherwise it passes a const char[] literal as a backup.
The base class constructs a std::string from the string_view.
(I know it sounds like a messy design, but there's more going on than just the pathway of this string data through the construction of these objects.)
This has worked fine for 2 years, but for some reason yesterday, only in a release build, that final std::string allocation was trying to allocate a vast amount of memory from a valid struct pointer containing a valid string of 16 characters and I don't know why. I was working on a completely different part of the code and haven't touched this part for a long time. I changed the string_view parameters to const std::string& and it got rid of the bad_alloc.
I haven't had time to go looking through ASan yet. My guess is it's either something to do with temporaries along the way (not that I'm explicitly making any), or something about the string -> string_view -> string pathway that made it miss its null terminator and think the string was endless.
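For what it's worth, here is a hypothetical sketch of the null-terminator failure mode being guessed at (a reconstruction of the general bug class, not the actual code):

```cpp
#include <string>
#include <string_view>

int main() {
    char buf[4] = {'a', 'b', 'c', 'd'};      // NOT null-terminated
    std::string_view sv(buf, 4);             // fine: the view carries its length

    std::string ok(sv);                      // fine: uses sv.size()

    // BUG: passing sv.data() treats it as a C string, so strlen() runs past
    // the end of buf until it happens to hit a zero byte. The garbage length
    // can be enormous, and the resulting allocation can throw std::bad_alloc
    // (or the read itself can fault).
    std::string bad(sv.data());
    (void)ok; (void)bad;
}
```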
I have one question: Is it reasonable to throw std::bad_alloc from a user-defined memory pool that has filled up, and could this be another real-life use case for std::bad_alloc?
@@zamf bad_alloc has two interesting properties: 1) it is a vocabulary type, and 2) it doesn’t allocate anything (unlike many other standard exceptions). If two independent components need to communicate about lack of memory, they might as well use something already provided by the library, though other solutions are possible. The main advantage of bad_alloc (or something derived from it) is that it will work out of the box if the component uses standard allocators. Since bad_alloc doesn’t carry any baggage (not even member variables), it is also a good building block to derive from.
(Some say that bad_alloc not allocating has dubious value, but it is philosophically correct IMO.)
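As a sketch of that idea, assuming a simple bump-pointer design, a user-defined pool can report exhaustion the same way standard allocators do:

```cpp
#include <cstddef>
#include <new>

// Sketch: a fixed-buffer bump allocator that reports exhaustion with
// std::bad_alloc, so code written against standard allocator conventions
// works with it unchanged. Assumes buf is suitably aligned and that
// align is a power of two.
class FixedPool {
public:
    FixedPool(std::byte* buf, std::size_t size) : buf_(buf), size_(size) {}

    void* allocate(std::size_t n,
                   std::size_t align = alignof(std::max_align_t)) {
        std::size_t offset = (used_ + align - 1) & ~(align - 1);
        if (offset + n > size_) throw std::bad_alloc{};   // pool exhausted
        used_ = offset + n;
        return buf_ + offset;
    }

private:
    std::byte* buf_;
    std::size_t size_;
    std::size_t used_ = 0;
};
```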
That is simply untrue. bad_alloc ABSOLUTELY allocates memory from the heap.
There is really no special treatment for bad_alloc. Full stop.
@@coshvjicujmlqef6047 bad_alloc might be allocated somewhere (for example in the heap), but bad_alloc doesn't allocate anything extra itself, such as a std::string, like runtime_error does. That is what I meant.
That is simply false. The Itanium ABI requires all exceptions to be allocated via __cxa_allocate_exception. What libsupc++ does is put exceptions smaller than a certain size into an emergency pool, and that pool itself is preallocated from malloc. You are just wrong.
Windows doesn't overcommit, even in 64-bit, so OOM is typically recoverable on Windows, assuming the exception handling doesn't also OOM
EH can OOM. Also, even Microsoft Word would crash when memory dropped below 5 MB, even though they had all the guards against bad_alloc. They ended up removing all the checking code.
@@coshvjicujmlqef6047 It's still worth having OOM handling in place so it can be handled properly sometimes, even if it can't be handled properly always.
fwiw I don't really care whether bad_alloc exists or not. But ultimately I rarely find myself in a situation where if it were thrown (or malloc returned a nullptr) I could actually do something meaningful to recover. I suspect that might be why many people just don't bother.
If the operation just requires an unusually large amount of memory, one might want to log the error, save the current progress, invoke a garbage collector that is not part of C++ memory management, or write some data out to files to free up memory, and then retry.
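A minimal sketch of that retry strategy, with the reclaim step left as a caller-supplied callback (a placeholder for whatever cache-dropping or GC hook the application has):

```cpp
#include <cstddef>
#include <functional>
#include <new>
#include <vector>

// Sketch: on bad_alloc, ask the application to reclaim memory (drop caches,
// flush to disk, run a GC outside C++'s control) and try again before
// giving up. Assumes reclaim() eventually returns false.
std::vector<char> allocate_with_retry(std::size_t n,
                                      const std::function<bool()>& reclaim) {
    for (;;) {
        try {
            return std::vector<char>(n);
        } catch (const std::bad_alloc&) {
            if (!reclaim()) throw;   // nothing left to free: rethrow
        }
    }
}
```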
lol logging. Logging is a mistake
Still, why do you want an unusually large amount of memory?
Does it work the same with malloc instead of std::vector? From this it looks to me like vector and the standard allocator are not using malloc but mmap on Linux (and possibly VirtualAlloc on Windows) under the hood, aren't they? Or does malloc already support overcommit as-is on modern Linux nowadays? I always call mmap by hand in C when I want this.
False. C++ std::vector uses operator new which is implemented with malloc, no matter what OS you are running. What's the point of even using new when it just wraps malloc tbh?
@@coshvjicujmlqef6047 Then probably just malloc-ing giant areas on Linux is the same as mmap-ing without commit, and my tiny library around these only does commit/reserve separately because on Windows they are separate...
Otherwise I could have just malloc-ed a huge area and not touched the unnecessary parts...
Of course mmap-ing also has other benefits: you can wrap around, tell it which address to map to, and so on...
For example, with mmap you can utilize the topmost bits of your pointers by mapping them to the same areas, just like many JITs do... but I was not sure pure malloc on Linux lets me overcommit this easily.
This has nothing to do with how C++ is implemented. NONE. You just live in an imaginary world where operator new is just a fancy wrapper around malloc. Who cares about JITs? You are just throwing out red herrings about things you have no idea what you are fking talking about.
The entire C++ runtime is a terrible mess. EH, RTTI, new, and threads: 4 evils.
Go and check the GCC libstdc++ source code, please. You have no idea what you are talking about tbh.
Explain how fragmentation will not cause issues in a 64-bit address space?
Regardless of address space, if you allocate every page on the system and free half of them, it's not like you can allocate contiguous memory unless the system uses swap space to evict pages to disk...
Do OS memory allocators do swapping if there is no contiguous block?
I haven't seen anyone say that bad_alloc shouldn't exist, only that the default allocator should be noexcept
That's equally silly though.
Yes, I say that. bad_alloc SHOULDN'T EXIST.
I have said that. bad_alloc SHOULD NOT EXIST. FULL STOP. People like you are completely uninformed.
@@coshvjicujmlqef6047 You're suggesting that the allocator should just abort when it fails to allocate memory, or what?
Yes. MSFT tried that with Office, and they saw NO statistically significant difference in crash rate on Android, Windows, and Mac.
Another question is how to handle std::bad_alloc. If I have an application that pushes type-erased function objects onto a queue and it cannot allocate space to store the function object, what should happen?
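One possible policy, sketched below (an illustration, not the only answer): surface the failure to the producer and let it decide whether to retry, shed load, or run the task inline.

```cpp
#include <functional>
#include <new>
#include <queue>

// Sketch: if the queue can't allocate storage for the task, fail the
// submission back to the caller instead of crashing the process.
class TaskQueue {
public:
    bool try_push(std::function<void()> task) {
        try {
            q_.push(std::move(task));   // may throw std::bad_alloc
            return true;
        } catch (const std::bad_alloc&) {
            return false;               // caller decides: retry, drop, or run inline
        }
    }

private:
    std::queue<std::function<void()>> q_;
};
```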
In the old days (early MacOS, no virtual memory) we had a rainy day fund. You'd preallocate a block of memory. Then, when the memory manager ended up using this block, you'd get a warning that you're running low on memory, and you could warn the user to save their work, close some documents, etc. It would be nice if the OSes of today had a similar feature. I recall trying to implement this once by overriding operator new/operator delete, but I could not make it work.
@@DeckerCreek This is definitely the way programs should behave. When an OOM is encountered, do as much cleanup as you can, tell the user and allow them to save their work. Either crashing or just killing the process is just bad manners.
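A minimal sketch of that rainy-day-fund idea via a replaced global operator new; low_memory_warning() is a hypothetical hook standing in for whatever UI notification the application uses:

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// Sketch of the rainy day fund: preallocate a reserve block; when a real
// allocation fails, release the reserve, warn the user once, and retry.
static void* g_reserve = std::malloc(1 << 20);   // 1 MiB emergency fund

// Hypothetical hook: a real app would prompt the user to save their work.
static void low_memory_warning() { std::fputs("low memory!\n", stderr); }

void* operator new(std::size_t n) {
    if (void* p = std::malloc(n)) return p;
    if (g_reserve) {                 // dip into the fund exactly once
        std::free(g_reserve);
        g_reserve = nullptr;
        low_memory_warning();
        if (void* p = std::malloc(n)) return p;
    }
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept { std::free(p); }
```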
Overriding operator new/operator delete is a mistake. The entire C++ new/delete was a mistake in the first place.
It is not. It is to protect users since users are dumb in general.
No virtual memory, lol. Are you still using 1960s garbage without virtual memory? Android even kills your processes in the background randomly. Guess what? Nobody complains. So what are you even talking about? See? More fear-mongering tactics. BTW, do you think the "good old" days without virtual memory had correct C++ EH support? I have seen very few systems that can actually throw exceptions.
4:45 If you had had enough swap reserved, would it just give the same bad_alloc eventually?
no.
If your system allows overcommitting, you'll get the bad_alloc when you run out of address space, regardless of available swap space.
Fucking android even does phantom process killing. Guess what. Nobody cares
You can easily run into failed allocations with custom allocators, e.g. when using memory pools.
Thanks, I will learn more.
Shame that the comment section was ruined by one person lol. Good video
He is just wrong. I would like to have an open debate with him because he is so wrong and uninformed.
Remember kids, autism and cocaine are a bad combination.
@@mytech6779 That explains the long video rants too LMAO 😂
@@RustIsWinning What did I originally write? YT just ghosted whatever you responded to.
@@mytech6779"Remember kids, and are a bad combination." (Sorting comments by new will show it.)
6:57 Interesting that it terminated after i16_max iterations
That is not a coincidence, since 1 GB (per iteration) × 65535 ≈ 64 TB, which is the address space size of the system.
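For the curious, a sketch of the experiment being discussed, assuming Linux-style overcommit: reserve 1 GiB blocks without touching them until the address space runs out.

```cpp
#include <cstdio>
#include <new>
#include <vector>

int main() {
    std::vector<char*> blocks;
    std::size_t count = 0;
    try {
        for (;;) {
            // Reserve 1 GiB of address space; with overcommit, the pages
            // are not backed by physical memory until first written.
            blocks.push_back(new char[1ull << 30]);
            ++count;
        }
    } catch (const std::bad_alloc&) {
        std::printf("bad_alloc after %zu GiB of reservations\n", count);
    }
    for (char* p : blocks) delete[] p;
}
```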
Another situation is with custom allocators, where it is easier to run out of memory. For example, a std::pmr::monotonic_buffer_resource backed by a null_memory_resource upstream can throw when it runs out of its predefined buffer.
I think part of the confusion stems from misunderstanding of Herb Sutter's "zero-overhead exception" work. Herb was not proposing that out-of-memory is impossible. Instead, he was proposing to make allocation failure terminate or report depending on circumstances (and terminate by default for the global allocator).
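A small self-contained example of that setup: once the buffer is exhausted, the null upstream throws std::bad_alloc instead of touching the heap.

```cpp
#include <array>
#include <cstddef>
#include <memory_resource>
#include <new>
#include <vector>

int main() {
    std::array<std::byte, 256> buf;
    // null_memory_resource upstream: when buf is exhausted, further
    // allocation throws std::bad_alloc rather than falling back to the heap.
    std::pmr::monotonic_buffer_resource pool(
        buf.data(), buf.size(), std::pmr::null_memory_resource());

    std::pmr::vector<int> v(&pool);
    try {
        for (int i = 0; i < 1000; ++i) v.push_back(i);  // outgrows buf
    } catch (const std::bad_alloc&) {
        // recoverable: the pool is full, but the process is fine
    }
}
```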
I respect Herb Sutter greatly but I think it's pretty clear he just didn't think this one through.
Why do you even use pmr? What's the point of this junk?
Yes he did. He had all of Microsoft's internal data from Office. Word would just crash when the system had less than 5 MB of memory, even with all the bad_alloc checks. They finally removed all the bad_alloc handling code and made allocation failure fail fast. They saw no statistical difference in crash rate on Windows, Android, and Mac.
@@coshvjicujmlqef6047 If only there existed software with different usage patterns than ms office
I have had a couple of OOM exceptions in software on Windows 10 when I tried to use more memory than the physical memory in my computer. This all depends on the software, as sometimes you can probably limit how much memory a given process can even request.
You can conditionally catch stack overflow too, but the behavior is nowhere near consistent. Do you think C++ should throw bad_alloc for stack allocation or integer overflow?
@@coshvjicujmlqef6047 What? `bad_alloc` is SPECIFIC to heap allocations; the whole point is for it to be distinct from any other exception source.
Second, both of those are code errors: you can easily control and prevent both. OOM, on the other hand, is in many cases an external problem; your code does not control the user or what software they run (aside from the case where you try to use all possible memory).
1. bad_alloc is specific to heap allocations? Why not the stack? There is no consistency in your logic.
2. Also, what are you even talking about? Most C++ environments cannot even throw bad_alloc since exceptions are banned (including Android, Chromium, LLVM, GCC). Nobody complains. The entire "bad_alloc" thing is just fear mongering, nothing more.
3. So what? Abstract machine corruption like bad_alloc, overflow, or programming bugs must just crash to prevent remote code execution.
@@coshvjicujmlqef6047 1) Because the name says so: "bad" (something went wrong) and "alloc" (allocating something, i.e. memory).
Neither integer overflow nor stack overflow fits, as neither allocates anything. In many cases a thread has a fixed stack, and putting anything on the stack only bumps a pointer. C# throws an exception in every case, but each one is a different type; do you expect C# to throw `new System.OutOfMemoryException()` when I cause an integer overflow?
Btw, do you know that exceptions have a hierarchy? You can catch one `std::exception` and it will handle all standard exception types.
You do not need to throw every possible case as one type; you could in theory have one type for stack overflow and another for integer overflow.
It is even better in C#, as it forces everyone to derive from `System.Exception` even for user-defined exceptions, so one `catch (System.Exception ex)` can handle every case.
2) "Fear mongering", sorry, what? The system is not allowed to report that an allocation is impossible? Every environment you list handles OOM gracefully, and `bad_alloc` is one way that can be done. Besides, last time I checked I could easily use exceptions when running code on Android.
3) LOOL, your understanding of "abstract machine corruption" is lacking. If throwing `bad_alloc` counts because the abstract machine asked for an external resource, then asking for a nonexistent file should crash the program too, since that is also an external resource.
"Remote code execution", this is a good one. Could you explain to me how throwing any exception can lead to remote code execution?
Usually it's the opposite: exceptions need explicit handling, and if you forget to handle them the whole program exits, unlike return codes, which can be ignored.
Throwing an exception will call destructors, and destructors might have bugs in them that cause use-after-free and double-free.
@10:19 Your description is incorrect. The problem is you don't have a *contiguous* free memory block of the size you are trying to allocate, so bad_alloc is proper here.
I like this video!
Or just disable overcommit
That can break other software which takes advantage of overcommit.
For example, I have seen a program which just allocates an array of 2^32 elements but only uses a handful of indexes and lets the OS figure out which pages are actually in use.
@@kuhluhOG Dolphin Emulator does this as well, it's a neat trick.
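A minimal Linux sketch of that trick, assuming default overcommit settings: reserve a huge anonymous mapping and touch only the pages you need.

```cpp
#include <cstddef>
#include <cstdio>
#include <sys/mman.h>   // Linux/POSIX

int main() {
    // Reserve 16 GiB of virtual address space. With overcommit, no physical
    // memory is committed until a page is first written.
    std::size_t size = 16ull << 30;
    char* p = static_cast<char*>(mmap(nullptr, size, PROT_READ | PROT_WRITE,
                                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

    p[0] = 1;            // only these two touched pages
    p[size - 1] = 2;     // actually consume physical memory
    std::printf("%d %d\n", p[0], p[size - 1]);
    munmap(p, size);
}
```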
malloc returning null on failure was a historical mistake. In fact, the 99% of C++ applications that disable EH in some way, like Chromium or Android apps, or compilers, never have this issue.
One of the nice things about C++ is that there are very few things that are built-in that you can't write an alternative for.
In enough "standard" use cases, the best solution us to just throw a bad_alloc. Have a non-standard solution? Well, first off as this video shows it's probably going to need to be system-dependent, and second of all is it really that difficult to write your own (or use plenty of memory libraries that already exist)?
I never understood why languages should be designed around specific operating system bugs anyway.
(And Windows doesn't have this particular bug, so it's definitely an important distinction.)
Because we are stuck with our operating systems. We handle this the way we always do - by creating another layer of abstraction. Go was created as a workaround for the fact that Rob Pike had to run Linux and use C++ instead of using sweet Plan 9 and Alef.
@@krumbergify Nobody's forcing anyone to use linux, and even if they were, you can configure the overcommit behavior to something more sane than the total yolo it is by default.
Saying "one of the target operating systems has a giant glaring bug so we'll break our language for all the others" is zany.
@@isodoublet Disabling overcommit might not be a general option for the whole OS, as it would consume more memory than what will statistically be needed.
Custom allocators don't just solve this issue; they allow you to write much faster code. For it to be practical you need a system library that is allocator-friendly. Zig has that: all functions that need to allocate accept an allocator as a parameter.
@@krumbergify That's for the operating system to fix, not the language. I mean hell, just buying more memory would be a preferable solution to using a buggy kernel.
"But allocators" is not an answer.
small embedded systems, memory arenas.
I wouldn't be surprised if this behavior could be modified with sysctl commands.
One mebibyte and one gibibyte :)
No such thing. I'll use megabyte as 1024^^2 bytes and gigabyte as 1024^^3 bytes until the day I die. The only thing I'm open to is whether the power operator should exist and, if so, whether it should be the double asterisk most languages use or the ^ a few others use. I kind of prefer ^ because of the overloading of asterisk as an operator in general already, not to mention that stupid YouTube likes to use it as markdown nonsense, but I've seen very few languages use ^, such as `bc`.
@@anon_y_mousse ^ is already used for xor, so maybe ^^?
Dude, maybe you know C++, but you clearly don't know OS design and you completely miss the point. Memory overcommit is an OS feature, it has nothing to do with the language.
When the C++ language does not match reality, the standard has to change. In fact, C++ EH does not even work.
Very interesting, thank you.
He is wrong.
AVR LOL. AVR has no C++ EH runtime support. What are you even talking about?
The entire fear-mongering mentality of "not crashing" for bad_alloc is exactly why C++ is not and NEVER will be memory safe. Array or vector out of bounds? No bounds checking, because of the fear mongering about "crashing". Things like integer overflow of the size in std::string push_back can NEVER EVER happen on a 64-bit system. Hey, but no crash there. Meanwhile, all the invisible control flow caused by exception paths makes unit testing completely impossible and even makes fuzzing much worse.
Running out of address space is also a BS argument. You can run out of stack too. I would like to have an open debate with you because you are so uninformed. In fact, none of the devices and targets I am using even support C++ EH correctly.
I would like to have an open debate with you because you are so wrong.
Meaningful? C++ EH was based on the wrong idea in the first place. EH should NEVER EVER be used for abstract machine corruption or programming bugs. Overflowing a 32-bit integer on a 32-bit machine is a programming bug.
I find it funny that "modern C++" users like you are so uninformed about reality. 32-bit machines never supported C++ EH correctly; 32-bit x86 used SJLJ. It was only with Itanium and 64-bit that you got table-based EH (and it is still not zero-overhead). In embedded systems or the kernel, you don't even have EH runtime support. Today 45% of all consumer devices run Android (the largest platform), and Android has no C++ EH runtime support, since Android does not ship libunwind and its libc++ is compiled without EH support. Not to mention all the other big C++ projects like Chromium, LLVM, and GCC: they all ban C++ EH.
Other ISAs like AVR or even Wasm never supported EH correctly either. What's the point of talking about AVR when you cannot even throw std::bad_alloc on it?
Handling OOM is a mistake. I just call __builtin_trap/std::abort when malloc fails, and I never use standard C++ new.
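A sketch of that fail-fast style, i.e. the classic xmalloc pattern:

```cpp
#include <cstdlib>

// Fail-fast allocation: treat failure as fatal, so no error path has to
// propagate through the rest of the program.
void* xmalloc(std::size_t n) {
    void* p = std::malloc(n);
    if (!p) std::abort();   // or __builtin_trap() on GCC/Clang
    return p;
}
```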
Embedded platforms don't support C++ EH. Full stop.
Definitely not true, and the platforms that don't support exceptions don't support heap allocation either, making this whole video irrelevant in those cases
False. A lot of embedded systems, WebAssembly, and even Android, including the Linux kernel, support heap allocation but have no EH support.
A C++ EH runtime is EXTREMELY HARD to implement.
@@coshvjicujmlqef6047 exceptions being disabled in the Linux kernel is a policy choice, not a technical one, stemming from the Linux kernel not supporting C++ in general. The linux kernel has its own exceptions, and makes heavy use of setjmp and longjmp, which do the same thing as exceptions. C++ exceptions can be made to work on bare metal.
I don't know anything about wasm
It is the technical problem that causes the policy choice. Any OS dev would tell you that, because C++ EH relies on libc for threading and heap (hosted features) to implement unwinding. EH just does not work in the kernel. I do not think you even understand. Plus, the kernel has hard timing requirements for CPU scheduling; everything must be done in a deterministic manner, and C++ EH cannot be thrown in a deterministic manner. It really does not work.
Smaller, lol. Then why buy a 32-bit CPU in 2024? Even an Android phone from Walmart that costs $39.99 is aarch64.