It's probably the best video on how to structure C programs ever made. I have downloaded it and keep a copy on every storage device I own. Hopefully Eskil realizes how life-changing that video is. Even though I don't program in C at all, it's really the best general programming ethics guide.
Wrote my first C compiler in 1982 for a CDC 6400 machine. 60-bit words, so 60-bit chars, pointers, and ints. Just enough memory to do simple constant folding of expressions.
As someone learning C/C++ this is a true goldmine. I feel like I have managed to at least experience time travel, and issues caused by volatile values not being declared as such, when writing code for Arduino. I just wish GCC was more helpful. This might be an RTFM issue on my part, but it would be nice to get a hint like "maybe you meant to write a function that has defined behaviour?" or something.
Not sure if that's what you mean, but you can use the flag "-fsanitize=undefined" with GCC. And don't forget to add the same flag to the linker flags too, if you're compiling and linking separately.
@@xugro There's a bunch of UB that cannot be found during compilation but still qualifies as UB. For example, you can declare an "extern float x" in one file and "int x" in another (which is prime-time UB) and the compiler is unable to find it, since type information per symbol is not preserved after compilation. There is also a bunch of UB that can happen when passing arguments to functions. Say you have a function that takes two pointers and compares them: there is no way for the compiler to determine whether you passed correct values to the call, since the function can be defined in another file ("correct values" relates to the provenance part of this video, meaning you can only compare addresses within the same object's address space). These kinds of things generally make it impossible to get rid of UB, and are also the reason why C requires programmers to know what they are doing.
I like that you chose a dark color scheme for the slides, but the random white flashes in between really hurt my eyes because of the stark contrast with the rest of the video.
Ty for making a video abt this that doesn't feel like it relies on people's short attention span. This is exactly what I'm looking for when I look for a coding video on YouTube.
How is omitting the malloc() == NULL check not a compiler bug? The standard clearly defines this as a possible error case which has to be checked against. Edit: the real issue seems to be that the compiler optimizes the malloc itself away, because it knows the memory is never used. Therefore it can assume it always succeeds, because it never called it in the first place.
Thank you for making this. As someone who gets asked about it whenever "the compiler does weird easily biodegradable matter", being able to point people to this is gold. Restrict is something I miss in C++; it is so useful for SIMD intrinsics.
Restrict, or an equivalent, is available in all major C++ compilers. That said, restrict itself is a woefully inadequate tool for working with aliasing semantics. It hasn't been standardized in C++ because it's fundamentally a dead end. Also, C++ is leaps and bounds better for SIMD programming relative to C. Libraries like E.V.E. or Eigen are literally impossible to write in C.
49:29 bamboozled me a lot. Binging YouTube in bed on my iPad, apparently with it about half an arm's length away, BOTH my blind spots converge on the closing curly brace when I look at the second _i_ in the for loop. Was kinda freaky seeing an instance of UB in my own retinas after you talked about instances of it in C so much. Sub earned.
I am a bit baffled that when a C compiler encounters user code that does the impossible (such as a range check that always passes/fails at compile time, or guaranteed undefined behaviour detectable at compile time) that its first instinct is "how can I exploit this to make the code run faster" rather than "tell the user their code probably has a bug".
I agree that compilers should be a lot better at explaining what they are doing. For instance, syntax-highlighting code deletion. However, the compiler should also do the optimizations that the standard affords it.
@@eskilsteenberg Yeah, it would be great if the compiler gave some notice that it's just ignoring code because it thinks it's pointless, like 'hey, maybe use volatile' or 'this expression is always true' etc. It's been a while, so maybe those are warnings now, but it doesn't sound like it lol
The story here isn't actually too hard to explain! If you remember back when GCC and clang/LLVM were at each other's throats for being the "better compiler", the number one issue was speed: the faster compiler, the one that won all the benchmarks, was expected to win the compiler holy war. Therefore, compiler developers put massive numbers of hours into making their compiler generate the fastest code possible. Until shockingly recently, they didn't really worry about the effects this would have on developers, so they didn't put nearly as many hours into warnings and heuristics that fire when the code exhibits unexpected behavior. As a result, the warnings that exist are mostly for simple rule breaks, and there's just not enough reporting infrastructure for the optimizer to report that some function is being optimized out of existence in a way that's probably not what the programmer intended. The fix is to put pressure on the devs: either make the patches on your own and contribute them to the projects (the best option!), or repeatedly ask for improved UB detection and ask others to advocate with you.
@@cosmic3689 Right, more warnings about those strange optimizations are wanted. But there is a catch: macros sometimes result in such code, especially when used with literal arguments. So at the same time, there must be some method to avoid overwhelming the developer with such warnings.
33:00 Another way to understand this issue: the multiplication of a and b first multiplies them as shorts, wrapping if needed, and the result is then cast to an unsigned int. This means that the highest 16 bits will always be 0, so the compiler eliminates the if.
This was interesting! I did not know that the compiler did (or could do) such weird and scary optimizations. Now I appreciate that I know assembly even more because at least there you know what you write is gonna stay there no matter what. Or at least I can debug C code by viewing the assembly.
So much genuinely valuable information that contextualizes and explains many C intuitions that I've built over time. Seriously one of the best quality videos I've seen on this platform in recent memory.
@34:50 Not really sure if promotion happens only if the hardware has no 16-bit operations. Promotions HAVE TO happen, but you can use the 16-bit versions of instructions only if the behavior is the same as if the promotions had actually happened (according to ISO 9899:1999 - 5.1.2.3.10). This is just a way to standardize the semantics of the program, so it can be agnostic to whether the machine supports lower-bit instructions or not.
Yeah, he did a great job explaining the rationale behind integer promotions (and why they scare me), but neglected to explain that his hypothetical implementation, one which can't do 16-bit operations, would define (signed and unsigned) `int` to be 32 bits *_because_* that would allow the implementation to promote any integer type with a shorter width.
Such an amazing video! I loved all these fascinating tidbits about C (and compiler design in general) and you held my attention the entire time. I think I'll watch it a few more times to really grok the material. Bravo!
24:42 The subtitles aren't helping me here, because they also hear both signed and unsigned as having possible optimizations. I *think* the second one is "can't, but let's just say it's not clearly defined ;)"
Is the explanation at 7:57 actually correct? I would have assumed the problem is that *a* can change elsewhere, meaning x and y are not necessarily the same, not that x can change elsewhere, causing y to be equal to x but not a.
This is maximum anxiety for everything I've ever written. At first it was like "alright, perhaps I should reorganise some things for better performance", and then it was "oh god, I hope I didn't implicitly assume that the padding in my structs would be persistent."
30:30 I think here it might be better to define an enum with values 0,1,2,3 and to cast a to that type / have it be that type. With -Wswitch, I would hope that a value outside of that enum would also be flagged as UB / unreachable (although I would have to look it up; it also depends on how the compiler warnings work here). I would prefer that since it doesn't depend on compiler intrinsics, and it also doesn't let you skip values in between (at least if it's a sensible enum like "enum value_t {A,B,C,D};" and not something strange like "enum weird_t {A=55, B=17, C=1, D= -1854};").
In C, it is not undefined behavior for an enum to have a value that is not enumerated. Basically, enums are just ints, or whatever integer type you picked.
@@ronald3836 Absolutely, that's why I pointed to -Wswitch, which makes it a warning (hopefully). It's not in the standard, but it is a pretty typical optional restriction in most compilers. Also, I should say that I usually use -Werror with lots of warnings turned on. I know many people are not as diligent though.
29:35, I actually have a better way to write that code: make member 0 a default function that does nothing (or at least assumes invalid input), then MULTIPLY a by its bounds check, so in this example it would be a *= (a >= 0 && a < 4); func[a](); Notice how there's no if statement that would result in a jump instruction, which in turn slows down the code. If the functions are all in the same memory chunk, then even if the CPU assumes a is not 0, it only has to read backwards a bit to get the correct function, and from my understanding reading backwards in memory is faster than reading forwards.
39:04 There's a new proposal document, N3128, on the WG14 site that actually wants to stop this exact thing because of how observable behavior (printf) is affected.
34:00 Would be fun to see this run on an architecture that uses something other than 2's complement for hardware acceleration of signed integer operations
The optimization flags in GCC bit me a long time ago. My code had no bugs without optimization flags on, but then would develop a bug at -O2. I don't recall what the exact issue was, but from then on I would run my unit tests with and without optimization flags, to minimize the potential for aggressive optimizations or a missing keyword that would force the compiler to be more careful with a function.
Your code was buggy before you turned on the optimization flags; the optimization flags just revealed the bugs. Your strategy of testing in multiple different optimization modes is the right one!
Even though VLA objects with automatic storage (stack-allocated) are not very useful in practice, the VLA **types** are really useful for handling multidimensional arrays.
Two things: C23 now requires VLAs again, rather ridiculously. And, GDB has a TUI mode that is a little buggy, but quite good, and gives you a visual debugger featureset.
Why are the VLA requirements ridiculous? What's feasible for implementations can change with time. The first cc(1) I used had =+ and =- and didn't even support K&R C or the C widely published in books. BTW, the VLA inclusion retroactively fixed the single error I made in an exam at uni, which cost me a 100% result long before the standard included it, so you need a really good rationale.
@@RobBCactive It's ridiculous because only GCC properly supports it, and the feature was added and then deprecated and then re-added to the standard. This is an absurd thing to do, especially for a committee that is so overwhelmingly committed to keeping the language as much the same as possible over the decades.
@@RobBCactive That's a disingenuous argument. The situation is not comparable. C didn't add function prototypes to the standard and then remove them in the next version and then add them back in the next version. They didn't do that with any feature except VLAs. And they haven't done that with any other compiler-specific feature, either. They didn't add an MSVC-specific extension or a Clang-specific extension or a Sun extension that no one else implemented. They only did that with GCC's VLAs.
I spend my time working with people who ponder sources of truth and believe that there is one true dogma that will save our souls (keep it simple). I learned C some 30 years ago, and when I feel nostalgia watching this video it's not because I miss C. What I miss are people who actually know what they're talking about and why, people like Eskil.
38:40 I think this is wrong - the compiler isn't allowed to propagate undefined behaviour backwards past an I/O operation like printf, which might cause an external effect such as the OS stopping the program anyway. (depending on what the output is piped into)
There is nothing in the standard that forbids this, but you are not alone in thinking it does not make sense (many people in the ISO C standard group agree with you). People do file compiler bugs for this behaviour, and some compilers try to minimize it, even though the standard does not forbid it. I think ISO will come out with some guidance on this soon-ish.
The compiler "knows" that *x can be accessed, so x cannot be NULL. If what the compiler "knows" turns out to be false, then that is undefined behavior and anything is allowed to happen, both before and after. The C standard allows the compiler to annihilate the universe if a program exhibits UB.
Rust is my language of choice these last three years or so. However, I still love C and would be happy to use it where needed. I love it for its hard-core simplicity. I love it because it has hardly changed in decades, and I hope that remains the case. However, I have also used C++ a lot, and I absolutely refuse to ever go back to that deranged monster.
@@Heater-v1.0.0 Say what you will about C++, you'll have to square it with the fact that even the major C implementations (Clang/GCC/MSVC/etc) choose the "deranged monster" of C++ over the "hard core simplicity" of C. Simply put, the fact is that C++ is more popular than ever because it's actually *more* insane to use C lmfao
@@69696969696969666 This is true. Most of the world's C compilers were written in C. C++ evolved from C and the compiler implementations followed. All seems quite reasonable. I agree that C++ offers a lot of conveniences that can make life much easier than C, although I'm still happy to use C, or the C subset of C++, where appropriate. It is possible to write nice C++ code if one stays away from much of the ugliness of the language. Unfortunately, that's hard to do on a large project with many people working on it, as they tend to start introducing all kinds of C++ weirdness. Anyway, all that long and tortuous history does not mean we have ended up in a good place with C++. Many agree with me, like Herb Sutter with his CppFront work. And Herb is on the C++ committee!
I'm looking forward to compilers optimising away array index checks, assuming programmers are too clever to make mistakes is obviously the way forward.
The compiler can't optimize away my bounds checks because I don't check in the first place. Hopefully in the long term the undefined behavior in my out-of-bounds array accesses will result in even greater performance. Ideally compilers will become sophisticated enough to replace my entire code base with "return 0".
33:55, um, int is NOT always 32-bit though; sometimes it's 16-bit like short, and the compiler could easily optimise out the call altogether in that situation. Better to have used a long; at least that is guaranteed to be bigger than a short. Also (and I'm assuming you're leading up to this), you should have put a in x first and then multiplied x by b; a * b by itself might, and probably will, remain an unsigned short operation and will just lose the upper bits before it even gets to x.
Interesting talk! It’s always fun seeing C code and realizing that it’s undefined :) One thing I don’t understand is: in what scenario would you ever free an array and then check that you didn’t reallocate the same block? I kind of get if thread A allocates, thread B does some calculation, thread A frees and reallocates, then thread B checks if it’s already done the calculation for the current block. Seems like a flawed architecture though, if this is the case then A should trigger B on a reallocation and B will wait otherwise. Maybe I just don’t get it though
There is a common pattern using a mechanism called "compare and exchange". Let's say you have a counter that is shared, and many threads want to increment the value. Each thread wants to access this value and add one to it. To do this, you read the value, add one to it, and write it back. The problem is that between reading the value and writing it back, some other thread may have incremented it. So if thread one reads the value 5 and adds one to it, then thread two reads the value and adds one to it, and then both write it back, the value is set to 6, not 7, even though two threads have each added 1 to 5. To deal with this, processors have a set of instructions called "compare and exchange"; they let a thread say "if this value is X, change it to Y". So our threads use that to say: if the shared value is still 5, change it to 6. If two threads try to change 5 to 6, the first one will succeed, and the second one will fail and will have to re-read the value and try again. This technique is often used with pointer swaps. You have a pointer to some data that describes a state; you read that state, create some new state, and then use compare and exchange to swap in the pointer to the new state. In this case you are using the pointer to see if the state has changed since you read it, and this is where an ABA bug can happen, if two states happen to end up at the same pointer.
@@zabotheother423 Lockless algorithms are generally faster because they don't require any operating system intervention. Mutexes are convenient because, if you use a function that locks them, any thread that gets stuck on a lock will sleep until the lock is available, and the operating system can wake the thread up when the lock gets unlocked. This OS intervention is good, because threads don't take up CPU while waiting for each other. On the other hand, sleeping and waking threads takes many cycles, so if you really want good performance it's better not to use a sleeping lock but a spin lock, if you only expect to wait a few cycles for the resource to become available. This means that you can only hold things for very short amounts of time, so it's harder to design lockless systems, but also more fun!
34:24 If we have unsigned short a = USHRT_MAX, b = USHRT_MAX; then multiplying a and b together produces undefined behavior. Do I understand this example correctly? We might expect unsigned integers to wrap around modulo USHRT_MAX+1, but in fact they do not, due to implicit promotion to signed int. And this only applies to types with rank lower than int (i.e. char, short).
Hi Mister Steenberg! If you happen to read this message, would you consider doing a video about C23? I'd like to hear what you think about the new features coming in C23.
Lots of learnings here! Thanks a ton! Regarding the uninitialized-values example: the compiler optimization kicks in because there's no *buf = … statement before the reads in c1 and c2, right?
At 47:29, why is this not optimizable because of aliasing concerns? `count` will be dereferenced once, used for a calculation, and the result will be passed to `memset` by value. Even if `*count` gets overwritten while `memset` runs, that should not affect the behavior, provided `*count` was valid to begin with.
One thing I see/hear often regarding C++ is that the compiler defines the behavior of your program in terms of the "Abstract Machine". UB and the "as-if" rule are consequences of this machine's behavior, even if it would be ok on real hardware. Does C have a similar concept? For example, what you say at 55:46: In the C++ Abstract Machine, every allocation is effectively its own address space. This has important consequences: no allocation can be reached by a pointer to another allocation, comparison of pointers to different allocations is not well defined, etc.
1:03:22 To be honest, I don't think such optimizations should be done by the compiler at all. Instead, the compiler should warn the user to use the stack here rather than malloc. At the very least there should be an option to get warnings wherever the compiler does fancy optimizations, and I would always turn them on.
The thing I hate the most is strict aliasing. What do you mean, pointers of different types cannot overlap? The whole point of unions was to allow these operations. What do these compiler vendors think they can achieve by optimizing a union? Why doesn't MSVC have an option to disable strict aliasing? There is a "restrict" keyword, goddammit. If I am optimizing critical code, I am smart enough to use the restrict keyword to allow these optimizations.
Finally a good explanation of what the volatile keyword actually means in C/C++. Just finished watching. VERY GOOD stuff here. It's a shame there's no mention of how these things relate to C++. Is it the same or different in C++? I wish I had the same quality of video about C++.
Yeah, I was surprised that he started explaining volatile accurately. So often, even from very brilliant people, you hear rants about volatile and how it does not mean what we think it means, and then it turns out they themselves are giving false explanations.
Thank you, this is terrifying. Compilers are amazing. So many times I think I've found a faster way to do something, then the compiler just shakes its head at me and produces the same binary. @23:47 Sometimes I depend on overlap. Splitting the operation into multiple statements, i.e. x *= 2; x /= 2; has always produced the behaviour I want. It is interesting that x *= 2; x /= 2; is not always the same as x = (x*2)/2. @34:32 I'm sceptical that this can happen. I can't reproduce it on GCC 8.3.0, even if I add the casts! @51:08 there's something wrong with your newline here ;-)
If you write nonsense code that gets into language or compiler details unnecessarily, you are not doing anyone any favors. Clearing the high bit can be done by masking e.g. x &= (1
@@gregorymorse8423 I don't. It was a bad example; I would never intentionally overflow a multiply. The only times I depend on overflow are for addition and subtraction. In 8 bits, 2 - 251 = 7. This is necessary if you want to calculate the elapsed time of a free-running 8-bit timer. People tend to think of number ranges as lines, which is why overflow causes some confusion. For addition and subtraction it can help to think of number ranges as circular, like dials. Then the boundaries become irrelevant.
@Michael Clift Overflows are well-defined behavior in two's-complement number systems, and applications like cryptography rely on this; deliberately overflowing multiplication when doing modular arithmetic is practically vital to achieve performance. Why C has tried to be low-level but introduced bizarre undefined-behavior concepts all over, to capture generality that is useless, is beyond me. The formal concept is beyond the dial analogy: a + b is, e.g. for 32-bit unsigned, (a + b) % 2^32, and likewise for multiplication. C does in fact respect this for unsigned numbers; it's signed ones that are trickier to describe, so they chickened out.
With regards to 34:32, copying the code as written in the video and compiling with just "gcc -O3 t.c -o t" reproduced the result for me on GCC 9.3.0 (Ubuntu, WSL).
@@nim64 Thanks, nim. I tried it with -O3 and now I see the symptom too (still on GCC 8.3.0). It appears to happen at any optimisation level apart from -O0.
I loved this video, thanks for sharing! As someone who started programming on the x86 processor, which I think has a more forgiving memory model, it's great to review the acquire/release semantics and other little things that may trip me up. Regarding undefined behavior: do you have an estimate of how often the compiler will raise a warning before relying on the UB to delete a bunch of code? To me it seems most or all of these should be a big red flag that there's an error in the program, even though the C language assumes the programmer knows what they're doing.
My favourite is @1:04:04. The compiler assumes malloc can't return NULL when it literally can!? Am I understanding that correctly!? I wonder, were there ever compiler wars, like the browser wars that gave us so much crap? There's that saying in coding: "You should throw the first one away." I'm beginning to think it applies to the whole industry. We just need to learn from our mistakes and design a new one.
It just isn't true: the compiler cannot and DOES NOT assume that malloc always returns a non-null value. But malloc performs syscalls to ask the OS for dynamic memory, and things like the Linux memory allocation scheme are opportunistic: it will always give you a valid address, and only when trying to access that memory will you find out if you can really use it. But that is not a problem of C, it's a problem of Linux.
I don't understand!? Who highlighted your reply to my post? If it was Eskil Steenberg, then he seems to be disagreeing with his own statement at 1:04:04. What's going on? BTW, I don't claim to know the answer; I was commenting on the statement in the video, and assuming it was something Eskil Steenberg had experienced. @@ABaumstumpf
Understanding the underlying hardware, and coding while taking that into account, is a dying art. People are writing large programs in languages that far remove them from the fact that their code is running on HARDWARE that has limitations and idiosyncrasies, that is not instantaneous when you tell it to do something, and that has multiple processes running on it... And that code is very, VERY inefficient. We're running into a wall with the constantly rising performance per dollar, and that's starting to cause real issues, so people who understand how to write code for a specific architecture, taking it into account, are valuable again. Hopefully enough people watch this and realize that it MATTERS what you write, and that you understand the hardware as well.
25:12 In Rust, wrapping is an error (a panic in debug builds). But the compiler might optimize the arithmetic, so there won't be an error. I wonder if it also takes advantage of such optimizations.
23:05 For me, this doesn't illustrate the power of C, but how vague the semantics of this language are. It will probably also behave differently in debug and release/optimized builds.
42:35 Wouldn't it still be wrong even if we write to the memory immediately after allocation, because between the execution of these lines another process could have allocated the same memory, introducing the same problem? Imagine two processes A and B with this code:

Process A:
buf = malloc(1);
*buf = 'a';
printf("This should but may not be 'a': '%c' ", *buf);

Process B:
buf = malloc(1);
*buf = 'b';
// do some stuff with buf so that the compiler does not remove the write

And consider this order of execution:
A: 0x123 == (buf = malloc(1))
B: 0x123 == (buf = malloc(1))
A: *buf = 'a';
B: *buf = 'b';
A: printf("This should but may not be 'a': %c ", *buf);

At the end the printf will print 'b', even though by looking at A's code it should be 'a'. Is this specific to volatile pointers on that platform? (Even then, I think malloc should return unique addresses anyway?)
Nice vid, just one question: in your union aliasing example around the 52m mark, the union has a compatible type as a member, as per C11 6.5p7; is this not valid and defined behavior?
43:04 I don’t think it is the way you explained this. Another process cannot obtain the same memory due to virtual memory pages protection. It could execute apis like readProcessMemory and WriteProcessMemory to change it but it is purposeful manipulating of memory.
It's OK, I'm not high, I'm just in a daze; I'm not used to specifying the sizes of everything I work with, like working with scanf input. I get it: since I allocate the memory to begin with, I need to know the length of everything if I want to do anything at all with the data. On the plus side, I've almost stopped using classes in OO unless absolutely necessary.
45:35 Is there a reason compilers will avoid overwriting padding in the initialization example, but can overwrite padding in the case where writing a larger value is faster? Or are both examples the same, in that compilers *can* overwrite padding but sometimes choose not to?
That's a really good question! I've been trying to figure this out myself. I think they are scared of overwriting padding because it may break some rare program, but then they don't even do it with freshly allocated memory. They do it with memory that has been memset, but not with memory that has been calloc'd. I think it might just be an oversight.
24:42 Is floating-point precision also UB? Because I think that would be much more likely to break (with a general multiplier/divisor, not with the special case of 2).
No, it is not. C follows the IEEE floating-point standards and most things are defined; the things that are not defined are platform-defined, not UB. Platform-defined means that the platform should define what the behavior is on that platform. That means it is defined and consistent on that platform, but may work differently on other platforms. UB means that there is no defined behavior at all and anything can happen.
@@eskilsteenberg Ah, good. Yes, most languages follow that one, and that bit of platform-dependent behavior is also the reason Rust doesn't do floating-point math in const fn (aka constexpr in C++). Just to expand my knowledge, if you know: this is asymmetric with whole numbers, and in many instances you see floating-point numbers get special treatment to follow that standard. Do whole numbers indeed not have a similar standard? It's certainly much less important for the behavior of code; the general type already gives all the info (minus, for C and C++, platform-specific stuff like the width of int etc.), so I can see why that would be the case.
I was very surprised by this advice, given the talk seemed to be targeting fairly experienced programmers. Possibly the main reason I stick with C is its powerful preprocessor. If you know exactly how it works, you can create incredibly powerful abstractions and generic code, all completely safe. I assume he meant people who don't know how to do that; otherwise this would be very poor advice.
@@tylovset I wouldn't use C because of the powerful preprocessor. If I want a language with a low level core and powerful abstractions, I'd rather use Scopes.
41:30 This makes me angry. buf is volatile. How the heck does the compiler know that, at the time of assigning buf[0] to c1 and c2, it hasn't been assigned by an external source? It makes assumptions about the value despite volatile!
Sorry, I am sort of a beginner, but regarding the example at 9:40: there are many examples of code in the Linux kernel that do this kind of thing without volatile keywords. What's up with that?
I am in third-year computer science and somehow my program never taught me C. I learned Java, Go, assembly, Scheme, Prolog, and more, but not C. I can read it and I understood this video, but I lack the fundamentals. I'll look into the resources you mentioned, and I'll try to hack on some of the software you wrote. There's a game called "Stephen's Sausage Roll" that has a minimal tutorial, and its first levels are not trivial; even at the start they require thought. I need that, but for C.
You should write a small game with code reloading, like Handmade Hero. That'll teach you everything you need to know. You don't need to make the whole game; by the time you've drawn some textured quads and maybe some text, you will have learned.
Skip C and learn C++. Not only does C++ allow for all of the "low-level" bit-fiddling of C, but it also makes it possible to automate most of the uninteresting busy work required in C. Moreover, C++ is the language of choice for GPU/SIMD programming, as well as far better parallelism and concurrency.
I'm not that well versed in C so I don't get what's happening with 51:33. How does printing the first struct member through a pointer influence the second?
It's not a struct, it's a union. A union is a struct where all members occupy the same memory, so writing one will overwrite all others. This lets you write a value as one type, and read it as another.
43:53: no sane OS gives you uninitialized memory... right? Also, I didn't really get this example. If the buffer is volatile then the two reads can very well be different, for instance because it's mapped to hardware. Or does the compiler know that malloc()'d buffer is not memory-mapped?
"no sane OS gives you uninitialized memory... right? " Nah, no sane OS forces memory-initialisation at all times. It would create insanely bad performance if you actually have a high memory throughput.
@@ABaumstumpf I don't believe that's true. Uniniatialized memory is a security concern when it comes from the OS. Of course this doesn't mean malloc() will initialize memory, but the underlying mmap will.
@@guidomartinez5099 "Uninitialized memory is a security concern when it comes from the OS." If there was any relevant information still left in memory, then that is a security bug of the application that put that data into memory. If malloc initialised memory, that would be a huge performance degradation all across the board, just because some other programs were buggy.
@@guidomartinez5099 "sorry that's just not true" If you say so. Then it seems Linux, every other major OS, even Java and Rust disagree with you there, buddy. So no, you are on your own.
It all makes a lot of sense if you learn it properly. The problem is that there are far too few people who teach all the details correctly. C is not, as many think, a regular language with just some modern stuff removed.
At 28:40 you state that compilers don't actually optimize away shifts by register-length (32) as being UB. This is no longer true, I have seen them do so, particularly in the context of doing rotation by n bits where n happens to be zero.
It's always difficult to say with authority what compilers do, because there are many of them and they all work differently. Perhaps I should have said they more rarely do that kind of optimization.
@@eskilsteenberg I've used C for nearly 40 years for low-level/systems code so I've seen most of the cases you talk about but it was still a very nice reminder! Personally I think there's a few areas where it would have been better to go with implementation defined instead of UB.
@@TerjeMathisen This is a big argument in the ISO C working group. I think almost everyone agrees with you, however it's much harder to get agreement on what UB should be turned into implementation-defined behavior.
Nice to see a new C lang focussed video.
Your "How I program C" video was great.
It's probably the best video on how to structure C programs, in history. I have downloaded it and keep a copy on every storage drive I've got. Hopefully Eskil realizes how life-changing that video is. Even though I don't program in C at all, it's really the best general programming ethics guide.
Wrote my first C compiler in 1982 for a CDC 6400 machine. 60-bit words, so 60-bit chars, pointers, ints. Just enough memory to do simple constant folding of expressions.
Do you want a cookie?
As someone learning C/CPP this is a true goldmine. I feel like I have managed to at least experience time travel and issues caused by volatile values not being declared as such, when writing code for arduino.
I just wish GCC was more helpful. This might be a RTFM issue on my part, but it would be nice to get a hint like ”maybe you meant to write a function that has defined behaviour?” or something.
I guess checking for undefined behaviour is slow but I wish there was an option to warn if they exist. (At least the known ones)
Not sure if that's what you mean, but you can use the flag "-fsanitize=undefined" with gcc. And also don't forget to add the same thing to the linker flags, if you're doing separate compilation-linking.
Bash can tell me I probably meant a different command when I typo but gcc can’t tell me that I forgot to close a bracket, fantastic.
@@xugro There's a bunch of UB that cannot be found during compilation but still qualifies as UB. For example, you can declare an "extern float x" in one file and "int x" in another (which is prime-time UB) and the compiler is unable to find it (since type information per symbol is not preserved after compilation). Also, there is a bunch of UB that can happen when passing arguments to functions. Let's say you have a function that takes two pointers and compares them: there is no way for the compiler to determine whether you passed correct values to the call, since the function can be defined in another file (the "correct values" part relates to the provenance part of this video, meaning you can only compare addresses within the same object's address space). These kinds of things generally make it impossible to get rid of UB, and they are also the reason why C requires programmers to know what they are doing.
I like that you chose a dark color scheme for the slides but the random white flashes in between really hurt my eyes because of the stark contrast to the rest of the video
It is like a very effective Flashbang
It's my 2nd favorite part of the video. The whole time I was like, "lmao, someone is probably really pissed about this...".
Dark color schemes hurt my eyes. I can't even look at them for a minute without getting a horrible headache.
@@macicoinc9363 your brain terrifies me
I once blinked while he was trying to flashbang me that was funny
Fantastic - the only video I know of that reaches the level of "How I program C".
And no music and other disturbing video stuff - just pure and clean.
Ty for making a video abt this that doesn't feel like it relies on people's short attention span. This is exactly what I'm looking for when I look for a coding video on youtube
How is omitting the malloc() == NULL check not a compiler bug?
The standard clearly defines this to be a possible error case which has to be checked against?
Edit: the real issue seems to be that the compiler optimizes the malloc itself away, because it knows the memory is never used.
Therefore it can assume it always succeeds because it never called it in the first place.
I didn't realize that was the issue! Thanks for pointing it out, was also confused why it was misbehaving.
Thank you for making this. As someone who gets asked when 'the compiler does weird easily biodegradable matter', being able to point people to this is gold. Restrict is something I miss in C++, it is so useful for SIMD intrinsics.
Most C++ compilers allow for a restrict extension, like __restrict for g++ and clang.
Restrict, or an equivalent, is available in all major C++ compilers. That said, restrict itself is a woefully inadequate tool for working with aliasing semantics. It hasn't been standardized in C++ because it's fundamentally a dead end.
Also, C++ is leaps and bounds better for SIMD programming relative to C. Libraries like E.V.E. or Eigen are literally impossible to write in C.
Why am I getting flashbanged on a video about c
The C compiler is really that guy that says "oh yeah buddy you didnt mean to do that did you, lemme get that for ya *deletes code block*"
49:29 bamboozled me a lot. Binging YouTube in bed on my iPad, apparently with it about half an arm's length away, BOTH my blind spots converge on the closing curly brace when I look at the second _i_ in the for loop.
Was kinda freaky seeing an instance of UB in my own retinas after you talked about instances of it in C so much.
Sub earned.
I am a bit baffled that when a C compiler encounters user code that does the impossible (such as a range check that always passes/fails at compile time, or guaranteed undefined behaviour detectable at compile time) that its first instinct is "how can I exploit this to make the code run faster" rather than "tell the user their code probably has a bug".
I agree that compilers should be a lot better at explaining what they are doing, for instance by syntax-highlighting code deletion. However, they should also do the optimizations that the standard affords them.
@@eskilsteenberg Yeah, it would be great if the compiler gave some notice that it's just ignoring code because it thinks it's pointless, like 'hey, maybe use volatile' or 'this expression is always true' etc.
It's been a while, so maybe those are warnings now, but it doesn't sound like it lol
The story here isn't actually too hard to explain! If you remember back when GCC and clang/LLVM were at each other's throats for being the "better compiler", the number one issue was speed- the faster compiler, the one that won all the benchmarks, was expected to win the compiler holy war. Therefore, compiler developers put massive numbers of hours into making their compiler generate the fastest code possible. Until shockingly recently, they didn't really worry about the effects this would have on developers, so they didn't put nearly as many hours into warnings and heuristics that warn when the code exerts unexpected behavior. As a result, the warnings that exist are mostly for simple rule breaks, and there's just not enough reporting infrastructure for the optimizer to report that some function is being optimized out of existence in a way that's probably not what the programmer intended. The fix is to put pressure on the devs- either make the patches on your own and contribute them to the projects (the best option!), or repeatedly ask for improved UB detection and ask others to advocate with you.
@@cosmic3689 Right, more warnings about those strange optimizations are wanted. But there is a catch: macros sometimes result in such code, especially when used with literal arguments. So at the same time, there must be some method to avoid overwhelming the developer with such warnings.
@@Hauketal The compiler invokes the pre-processor; it can report a few instances of an error, then summarise repetitions.
33:00 another way to understand this issue is:
The multiplication of a and b first multiplies them as shorts, wrapping if needed, and then the result is cast to an unsigned int. This means that the highest 16 bits will always be 0, and it will eliminate the if.
This can explain the branch decision. But the result won't be 4 billion this way (?)
This was interesting! I did not know that the compiler did (or could do) such weird and scary optimizations. Now I appreciate that I know assembly even more because at least there you know what you write is gonna stay there no matter what. Or at least I can debug C code by viewing the assembly.
40:53 Assembly jumpscare. I'm damn terrified.
Native Android dev here. Such good explanation, you captured my attention. Thanks for this!
So much genuinely valuable information that contextualizes and explains many C intuitions that I've built over time.
Seriously one of the best quality videos I've seen on this platform in recent memory.
@34:50 Not really sure if promotion happens only if hardware has no 16-bit operations. Promotions HAVE TO happen, but you can use the 16-bit versions of instructions only if the behavior is the same as if the promotions actually happened (according to ISO 9899:1999 - 5.1.2.3.10). This is just a way to standardize the semantics of the program, so it can be agnostic to whether the machine supports lower-bit instructions or not.
Yeah, he did a great job explaining the rationale behind integer promotions (and why they scare me), but neglected to explain that his hypothetical implementation--one which can't do 16-bit operations--would define (signed and unsigned) `int` to be 32 bits *_because_* it would allow the implementation to promote any integer type with a shorter width.
I wish they had a C con the way they have Cpp cons. C is like a fine wine and I wish there were conferences.
C: I'm not going to crash therefore I don't need a seatbelt
"You never drive so I removed your car"
Such an amazing video! I loved all these fascinating tidbits about C (and compiler design in general) and you held my attention the entire time. I think I'll watch it a few more times to really grok the material. Bravo!
24:42 Subtitles aren't helping me here, because they also hear both signed and unsigned as having possible optimizations. I *think* the second one is "can't, but let's just say it's not clearly defined" ;)
Clean and constructive talk about a great language.
Is the explanation at 7:57 actually correct? I would have assumed the problem is that *a* can change elsewhere, meaning x and y are not necessarily the same, not that x can change elsewhere, causing y to be equal to x but not a.
This is maximum anxiety for everything I've ever written. At first it was like "alright, perhaps I should reorganise some things for better performance" and then it was "oh god, I hope I didn't implicitly assume that the padding in my structs would be persistent."
Thank you for this new C lesson. Great as always.
30:30 I think here it might be better to define an enum with values 0,1,2,3 and to cast a to that type / to have it that type. With -Wswitch, I would hope that means that the value being outside of that enum should also be UB / unreachable (although I would have to look it up, it also depends on how the compiler warnings work here). I would prefer that since it doesn't depend on compiler intrinsics, and it also doesn't let you skip values in between (at least if it's a sensible enum like "enum value_t {A,B,C,D};" and not something strange like "enum weird_t {A=55, B=17, C=1, D= -1854};").
In C, it is not undefined behavior for an enum to have a value that is not enumerated. Basically enums are just ints, or whatever integer type you picked.
@@ronald3836 absolutely, that's why I pointed to -Wswitch, which makes it a warning (hopefully). It's not in the standard, but it is a pretty typical optional limitation of what you can do in most compilers.
Also I should say that I usually use -Werror with lots of warnings turned on. I know many people are not as diligent tho
29:35, I actually have a better way to write that code: make member 0 a default function that does nothing, or at least handles invalid input, then MULTIPLY a by its boolean check. So in this example it would be
a *= (a >= 0 && a < 4);
func[a]();
Notice how there's no if statement that would result in a jump instruction, which in turn slows down the code. If the functions are all in the same memory chunk, then even if the CPU assumes a is not 0, it only has to read backwards a bit to get the correct function, and from my understanding reading backwards in memory is faster than reading forwards.
39:04
There's a new proposal document, N3128, on the WG14 site that actually wants to stop this exact thing because of how observable behavior (printf) is affected.
34:00 Would be fun to see this run on an architecture that uses something other than 2's complement for hardware acceleration of signed integer operations
The optimizations flags in gcc bit me a long time ago. My code had no bugs without optimization flags on but then would develop a bug after O2. I don’t recall what the exact issue was, but from then on I would run my unit tests with and without optimization flags to minimize the potential for aggressive optimizations or missing a keyword to force the compiler to be more careful with a function.
Your code was buggy before you turned on the optimization flags; the optimization flags just revealed the bugs. Your strategy of testing in multiple different optimization modes is the right one!
Even though VLA objects with automatic storage (stack-allocated) are not very useful in practice, the VLA **types** are really useful for handling multidimensional arrays.
Two things: C23 now requires VLAs again, rather ridiculously. And, GDB has a TUI mode that is a little buggy, but quite good, and gives you a visual debugger featureset.
Why are VLA requirements ridiculous? What's feasible for implementations can change with time.
The first cc(1) I used had =+ & =- and didn't even support K&R C or the C widely published in books.
BTW the VLA inclusion unbroke the single error I made in an exam at Uni. which cost me a 100% result long before the C standard inclusion so you need a really good rationale.
@@RobBCactive It's ridiculous because only GCC properly supports it, and the feature was added and then deprecated and then re-added to the standard. This is an absurd thing to do, especially for a committee that is so overwhelmingly committed to keeping the language as much the same as possible over the decades.
@@greyfade compilers didn't support ANSI C until they did, obviously function prototypes are absurd by your reasoning.
@@RobBCactive That's a disingenuous argument. The situation is not comparable. C didn't add function prototypes to the standard and then remove them in the next version and then add them back in the next version. They didn't do that with any feature except VLAs. And they haven't done that with any other compiler-specific feature, either. They didn't add an MSVC-specific extension or a Clang-specific extension or a Sun extension that no one else implemented. They only did that with GCC's VLAs.
@@greyfade nope, VLA is implementable and understandable by competent people.
You've made no case why VLA is not useful or impractical.
c is awesome! please make more about EVERYTHING you would like to share! 🥺
I spend my time working with people who ponder sources of truth and believe that there is one true dogma that will save our souls (keep it simple).
I learned C some 30 years ago and when I feel nostalgia watching this video it's not because I miss C. What I miss are people who actually know what they're talking about and why, people like Eskil.
Jesus is that truth.
The whole talk I had a feeling that it was John Carmack talking (fun fact, he also mentioned he prefers the Visual Studio debugger)
I thought I knew C on an above-average-level at least but after watching this video, the only thing I know is that I’m scared of C now…
Thank you so much, this video is awesome! I appreciate this a lot
38:40 I think this is wrong - the compiler isn't allowed to propagate undefined behaviour backwards past an I/O operation like printf, which might cause an external effect such as the OS stopping the program anyway. (depending on what the output is piped into)
There is nothing in the standard that forbids this, but you are not alone in thinking it does not make sense (many people in the ISO C standard group agree with you). People do file compiler bugs for this behaviour, and some compilers try to minimize it, even though the standard does not forbid it. I think ISO will come out with some guidance on this soon-ish.
The compiler "knows" that *x can be accessed, so x cannot be NULL. If what the compiler "knows" turns out to be false, then that is undefined behavior and anything is allowed to happen, both before and after. The C standard allows the compiler to annihilate the universe if a program exhibits UB.
I've found I prefer Rust these days, but I have fond memories of the C and C++ standards from 20 years ago, thanks for the fun video
Rust is my language of choice these last three years or so. However I still love C and would be happy to use it where needed. I love it for its hard-core simplicity. I love it because it has hardly changed in decades and I hope that remains the case. However I have also used C++ a lot and absolutely refuse to ever go back to that deranged monster.
@@Heater-v1.0.0 Say what you will about C++, you'll have to square it with the fact that even the major C implementations (Clang/GCC/MSVC/etc) choose the "deranged monster" of C++ over the "hard core simplicity" of C. Simply put, the fact is that C++ is more popular than ever because it's actually *more* insane to use C lmfao
@@69696969696969666 That is true. Most of the world's C compilers were written in C. C++ evolved from C and the compiler implementations followed. All seems quite reasonable. I agree that C++ offers a lot of conveniences that can make life much easier than C, although I'm still happy to use C or the C subset of C++ where appropriate. It is possible to write nice C++ code if one stays away from much of the ugliness of the language. Unfortunately it's hard to do that on a large project with many people working on it, as they tend to start introducing all kinds of C++ weirdness.
Anyway, all that long and tortuous history does not mean we have ended up in a good place with C++. Many agree with me, like Herb Sutter with his cppfront work. And Herb is on the C++ committee!
I'm looking forward to compilers optimising away array index checks, assuming programmers are too clever to make mistakes is obviously the way forward.
The compiler can't optimize away my bounds checks because I don't check in the first place. Hopefully in the long term the undefined behavior in my out-of-bounds array accesses will result in even greater performance. Ideally compilers will become sophisticated enough to replace my entire code base with "return 0".
If you check array bounds and then access the array ANYWAY, then the compiler is indeed free to remove the bounds check.
33:55, um, int is NOT always 32-bit though; sometimes it's 16-bit, like short. The compiler could easily optimise out the call altogether in that situation. Better to have used a long, at least that is guaranteed to be bigger than a short. Also (and I'm assuming you're leading up to this) you should've put a in x first, then multiplied x by b; a * b by itself might, and probably will, remain an unsigned short operation and will just lose the upper bits before it even gets to x.
35:38 C without automatic casting would be nice, I guess. Especially when having such weird casting rules.
Interesting talk! It’s always fun seeing C code and realizing that it’s undefined :)
One thing I don’t understand is: in what scenario would you ever free an array and then check that you didn’t reallocate the same block? I kind of get if thread A allocates, thread B does some calculation, thread A frees and reallocates, then thread B checks if it’s already done the calculation for the current block. Seems like a flawed architecture though, if this is the case then A should trigger B on a reallocation and B will wait otherwise. Maybe I just don’t get it though
There is a common pattern using a mechanic called "compare and exchange". Let's say you have a counter that is shared, and many threads want to increment the value. Each thread wants to access this value and add one to it. To do this you read the value, add one to it, and write it back. The problem with this is that between reading and writing it back, some other thread may have incremented the value. So if thread one reads the value 5 and adds one to it, then thread two reads the value and adds one to it, and then both write it back, the value is set to 6, not 7, even though 2 threads have added 1 to 5.
To deal with this, processors have a set of instructions called "compare and exchange"; they let a thread say "if this value is X, change it to Y". So our threads use that to say: if the shared value is still 5, change it to 6. If two threads try to change 5 to 6, the first one will succeed, and the second one will fail and will have to re-read the value and try again.
This technique is often used with pointer swaps. So you have a pointer to some data that describes a state; you read that state, create some new state, and then use compare and exchange to swap in the pointer to the new state. In this case you are using the pointer to see if it has changed since you read it, and this is where an ABA bug can happen, if two states have the same pointer.
Yes, some kind of smart pointers can be easily implemented with C.
@@eskilsteenberg why is this advantageous to using a lock? Seems like a rather roundabout way to solve the shared resource problem
@@zabotheother423 Lockless algorithms are generally faster because they don't require any operating system intervention. Mutexes are convenient because if you use a function that locks them, any thread that gets stuck on a lock will sleep until the lock is available, and the operating system can wake up the thread when the lock gets unlocked. This OS intervention is good, because threads don't take up CPU while waiting for each other. On the other hand, sleeping and waking threads takes many cycles, so if you really want good performance it's better not to have a sleeping lock but just do a spin lock, if you expect to wait only a few cycles for the resource to become available. This means that you can only hold things for very short amounts of time, so it's harder to design lockless systems, but also more fun!
@@eskilsteenberg interesting. I’ve heard of lockless designs before but never really explored them. Thanks
34:24 If we have unsigned shorts a,b = USHRT_MAX; then multiplying a and b together produces undefined (implementation specific) behavior. Do I understand this example correctly? We might expect unsigned integers to wrap around modulo USHRT_MAX+1, but in fact they do not due to implicit type promotion to signed integers. And this only applies to types with rank lower than integer (i.e. char, short).
Congratulations! You cracked it!
48:30 That's why the Rust rules for mutable references are so nice.
Hi Mister Steenberg! If you happen to read this message, would you consider doing a video about C23? I'd like to hear what you think about the new features coming in C23.
Lots of learnings here! Thanks a ton!
Regarding uninit values example - the compiler optimization kicks in because there’s no *buf = … statement before the reads in c1 and c2 right?
43:00 -- could you provide an example of a platform where this happens? It's certainly not the case on Linux or any Unix system.
It doesn't happen.
At 47:29 why is this not optimizable because of aliasing concerns? `count` will be derefed once, used for a calculation and the result will be passed to `memset` by value. Even if while `memset` runs `*count` gets overwritten, that should not affect the behavior, provided `*count` was valid to begin with
The point is that the code shown before the memset() code cannot be optimized into the memset() code.
One thing I see/hear often regarding C++ is that the compiler defines the behavior of your program in terms of the "Abstract Machine". UB and the "as-if" rule are consequences of this machine's behavior, even if it would be ok on real hardware. Does C have a similar concept? For example, what you say at 55:46: In the C++ Abstract Machine, every allocation is effectively its own address space. This has important consequences: no allocation can be reached by a pointer to another allocation, comparison of pointers to different allocations is not well defined, etc.
Yes, the C language standard also uses the abstract machine concept.
1:03:22 To be honest I don't think such optimizations should be done by the compiler at all.
Instead the compiler should warn the user not to use malloc but the stack here.
At least there should be an option to get warnings, where the compiler does fancy optimizations and I would always turn them on.
The thing I hate the most is strict aliasing. What do you mean, pointers of different types cannot overlap? The whole point of union was to allow these operations. What are these compiler vendors thinking they could achieve by optimizing a union? Why doesn't MSVC have an option to disable strict aliasing? There is a "restrict" keyword, goddammit. If I am optimizing critical code, I am smart enough to use the restrict keyword to allow these optimizations.
Finally a good explanation of what the volatile keyword actually means in C/C++.
Just finished watching. VERY GOOD stuff here. It's a shame that there's no mention of how these things relate to C++. Is it the same or different in C++? I wish I had the same quality video about C++.
Yeah, I was surprised that he started explaining volatile accurately. So often, even from very brilliant people, you hear rants about volatile and how it does not mean what we think it means, and then it turns out they themselves are giving false explanations.
1:07:34 It is my understanding that it is UB to define macros with names identical to standard library functions. Am I mistaken about this?
Those white flashes (when changing slides) are hurting my eyes 😕
Thank you, this is terrifying. Compilers are amazing. So many times I think I've found a faster way to do something, then the compiler just shakes its head at me and produces the same binary.
@23:47 Sometimes I depend on overflow. Splitting the operation into multiple statements, i.e. x *= 2; x /= 2; has always produced the behaviour I want. It is interesting that x *= 2; x /= 2; is not always the same as x = (x*2)/2.
@34:32 I'm sceptical that this can happen. I can't reproduce it on GCC 8.3.0, even if I add the casts!
@51:08 there's something wrong with your newline here ;-)
If you write nonsense code that gets into language or compiler details unnecessarily, you are not doing anyone any favors. Clearing the high bit can be done by masking e.g. x &= (1
@@gregorymorse8423 I don't. It was a bad example. I would never intentionally overflow a multiply. The only times I depend on overflow are for addition and subtraction. In 8 bits 2-251=7. This is necessary if you want to calculate the elapsed time of a free running 8 bit timer. People tend to think of number ranges as lines, which is why overflow causes some confusion. For addition and subtraction It can help to think of number ranges as circular, or dials. Then the boundaries become irrelevant.
@Michael Clift Overflows are well-defined behavior in two's complement number systems. And applications like cryptography rely on this; deliberately overflowing multiplication when doing modular arithmetic is practically vital to achieve performance. That C has tried to be low level but introduced bizarre undefined behavior concepts all over to capture generality that is useless is beyond me. The formal concept is beyond the dial analogy: a+b is, e.g. for 32-bit unsigned, (a+b) % 2^32, and likewise for multiplication. C does in fact respect this for unsigned numbers; it's signed ones that are trickier to describe, so they chickened out.
with regards to 34:32, copying the code as written in the video and compiling with just "gcc -O3 t.c -o t" reproduced the result for me on gcc 9.3.0 (ubuntu, wsl)
@@nim64 Thanks nim. I tried it with -O3 and now I see the symptom too (still on GCC 8.3.0). It appears to happen with any optimisation level apart from -O0
It seems to start boring and thick and slow but it gets interesting fast. Excellent.
I loved this video, thanks for sharing! As someone who started programming on the x86 processor, which I think has a more forgiving memory model, it's great to review the acquire/release semantics and other little things that may trip me up.
Regarding undefined behavior: do you have an estimate of how often the compiler will raise a warning before relying on the UB to delete a bunch of code? To me it seems most or all of these should be a big red flag that there's an error in the program, even though the C language assumes the programmer knows what they're doing.
My favourite is @1:04:04. The compiler assumes malloc can't return null when it literally can!? Am I understanding that correctly!?
I wonder, were there ever compiler wars? Like the browser wars that gave us so much crap.
There's that saying in coding: "You should throw the first one away"; I'm beginning to think it applies to the whole industry. We just need to learn from our mistakes and design a new one.
It just isn't true: the compiler cannot and DOES NOT assume that malloc always returns a non-null value. But malloc performs syscalls to ask the OS for dynamic memory, and the Linux memory allocation scheme is opportunistic: it will always give you a valid address, and only when trying to access that memory will you know if you can really use it.
But that is not a problem of C, but of Linux.
I don't understand!? Who highlighted your reply to my post? If it was Eskil Steenberg then he seems to be disagreeing with his own statement at 1:04:04.
What's going on? BTW, I don't claim to know the answer, I was commenting on the statement in the video, and assuming it was something Eskil Steenberg had experienced.
@@ABaumstumpf
Understanding the underlying hardware and coding while taking it into account is a dying art. People are writing large programs in languages that far remove them from the fact that their code runs on HARDWARE that has limitations and idiosyncrasies, that isn't immediate when you tell it to do something, and that has multiple processes running on it...
And that code is very, VERY inefficient. We're running into a wall with constantly rising performance per dollar, and that's starting to cause real issues, so people who understand how to write code for a specific architecture, taking it into account, are valuable again.
Hopefully enough people watch this and realize that it MATTERS what you write, and that you understand the hardware as well.
25:12 In Rust, wrapping is an error (a panic) in debug builds. But in release builds the compiler won't insert the check, so there won't be an error. I wonder if it also takes advantage of such optimizations.
I think I need to use the unchecked (unsafe) functions.
39:45 Does this happen for this, too?
assert(x && "Error!");
or does it notice that assert will guard the program from dereferencing a null pointer?
Don't do the flashing white screen. It hurts my eyes and is annoying in general.
I had to stop watching about 10 mins in because of it. Seizure inducing.
23:05 For me, this doesn't illustrate the power of C, but how vague the semantics of this language are. It will probably also work differently in debug and release/optimized builds.
What isn't defined, isn't defined in any mode.
Definitely request more of these detailed C videos from Eskil. It's a space where there just isn't a lot of content.
I'll try, but I'm not a youtuber, so I have limited time.
@@eskilsteenberg Understood. Just, if you ever feel a little inspired, like with this one and "How I Program C", a lot of fans will appreciate it, I think 👍
Agreed, so cool that I accidentally stumbled on this video!
hmm, weird that slides are not text files in Visual Studio (-:
26:37 OK, that's a pretty cool idea, but surely compilers aren't this smart, right? That seems like it would be hard af to deduce
Compilers have no problem deducing that x < 5 and using that to eliminate the conditional statement.
42:35 wouldn't it still be wrong even if we write to memory immediately after allocation because between execution of these lines another process could've allocated the same memory, introducing the same problem? Imagine 2 processes A and B with this code:
Process A:
buf = malloc(1)
*buf = 'a';
printf("This should but may not be 'a': '%c'\n", *buf);
--------
Process B:
buf = malloc(1);
*buf = 'b';
// do some stuff with buf so that compiler does not remove the write
And consider this order of execution:
A: 0x123 == (buf = malloc(1))
B: 0x123 == (buf = malloc(1))
A: *buf = 'a';
B: *buf = 'b';
A: printf("This should but may not be 'a': %c\n", *buf);
At the end, the printf will print 'b', even though by looking at A's code it should be 'a'. Is this specific to volatile pointers on that platform (even then, I think malloc should return unique addresses anyway)?
Nice vid, just one question: In your union aliasing example around the 52m mark, the union has a compatible type as a member, as per C 2011 6.5 7, is this not valid and defined behavior?
43:04 I don't think it works the way you explained. Another process cannot obtain the same memory, due to virtual memory page protection. It could call APIs like ReadProcessMemory and WriteProcessMemory to change it, but that is purposeful manipulation of memory.
This was fascinating! I had no idea about some of the things happening under the hood. Thanks!
The white flashes whenever the slide changes make this impossible to watch.
It's ok, I'm not high, I'm just in a daze; I'm not used to specifying the sizes of everything I work with, like working with scanf input. I get it: since I allocate the memory to begin with, I need to know the length of everything if I want to do anything at all with the data. On the plus side, I almost stopped using classes in OO unless absolutely necessary.
45:35 Is there a reason compilers will avoid overwriting padding in the initialization example, but can overwrite padding in the case where writing a larger value is faster? Or are both examples the same, in that compilers *can* overwrite padding but sometimes choose not to?
That's a really good question! I've been trying to figure this out myself. I think they are scared of overwriting padding because it may break some rare program, but they don't even do it with freshly allocated memory. They do it with memory that has been memset, but not memory that has been calloced. I think it might just be an oversight.
24:42
Is floating point precision also UB? Because I think that would be much more likely to break (with a general multiplier/divider, not with the special case of 2).
No, it is not. C follows the IEEE floating point standards and most things are defined; the things that are not defined are platform defined, not UB. Platform defined means that the platform should define what the behavior is on that platform. That means it is defined and consistent on that platform, but that it may work differently on other platforms. UB means that there is no defined behavior at all and anything can happen.
@@eskilsteenberg Ah good. Yes, most languages follow that one, and that bit of platform dependent behavior is also the reason Rust doesn't do floating point math in const fn (aka constexpr in C++).
Just to expand my knowledge, if you know: this is asymmetric with whole numbers, and in many instances you see floating point numbers get special treatment to follow that standard. Do whole numbers indeed not have a similar standard?
It's certainly much less important for the behavior of code, the general type already gives all the info (minus, for C and C++, platform specific stuff like the mapping of int etc to size), so I can see why that would be the case.
Ahah, so basically the compiler is just a YouTube comment troll that looks at your code and responds with "Ahaha, too long, didn't read".
1:05:37 What about using the preprocessor for common code snippets?
I was very surprised by this advice, given the talk seemed to be targeting fairly experienced programmers. Possibly the main reason I stick with C is its powerful preprocessor. If you know exactly how it works, you can create incredibly powerful abstractions and generic code - all completely safe. I assume he meant people who don't know how to do that; otherwise this would be very poor advice.
@@tylovset I wouldn't use C because of the powerful preprocessor. If I want a language with a low level core and powerful abstractions, I'd rather use Scopes.
1:11:07 You seem to vastly overestimate my reading speed, especially when I try to understand what I'm reading.
Is there a compiler that warns you when it decides to delete your code? :)
Possible, but realistically it does so all the time in large code bases so it won’t really help most of the time.
41:30 This makes me angry. buf is volatile. How the heck does the compiler know, at the time of assigning buf[0] to c1 and c2, that it hasn't been assigned by an external resource? It makes assumptions about the value despite volatile!
Sorry, I am sort of a beginner, but regarding the example at 9:40: there are many examples of code in the Linux kernel that do this kind of thing without volatile keywords. What's up with that?
I am in third year computer science and somehow my program never taught me C. I learned Java, Go, assembly, Scheme, Prolog and more, but not C. I can read it and I understood this video, but I lack fundamentals. I'll look into the resources you mentioned and I'll try to hack on some of the software you wrote.
There's a game called "Stephen's Sausage Roll" that has a minimal tutorial and whose first levels are not trivial. Even at the start they require thought. I need that, but for C.
You should write a small game with code reloading. Like Handmade Hero. That'll teach you everything you need to know. You don't need to make the whole game, by the time you draw some textured quads and maybe some text, you will have learned.
@@MagpieMcGraw This is good advice.
Skip C and learn C++. Not only does C++ allow for all of the "low-level" bit-fiddling of C, but it also makes it possible to automate most of the uninteresting busy work required in C. Moreover, C++ is the language of choice for GPU/SIMD programming, as well as far better parallelism and concurrency.
@69696969696969666 Calm down, big guy, it's just some text; no need to get worked up and crusade in the comments :)
At least I definitely know when the slide changes
Great video, but the flashes between slides are quite irritating.
34:40 Wow, I knew about promotion, but not that small unsigned types promote to a signed int. That's -stupid- really surprising and inconvenient.
I'm not that well versed in C, so I don't get what's happening at 51:33.
How does printing the first struct member through a pointer influence the second?
It's not a struct, it's a union. A union is a struct where all members occupy the same memory, so writing one will overwrite all others. This lets you write a value as one type, and read it as another.
Hi, where should one start with C to have fun? Can you make a video for beginners?
43:53: no sane OS gives you uninitialized memory... right? Also, I didn't really get this example. If the buffer is volatile, then the two reads can very well be different, for instance because it's mapped to hardware. Or does the compiler know that a malloc()'d buffer is not memory-mapped?
"no sane OS gives you uninitialized memory... right? "
Nah, no sane OS forces memory initialisation at all times. It would create insanely bad performance if you actually have high memory throughput.
@@ABaumstumpf I don't believe that's true. Uninitialized memory is a security concern when it comes from the OS. Of course this doesn't mean malloc() will initialize memory, but the underlying mmap will.
@@guidomartinez5099 "Uninitialized memory is a security concern when it comes from the OS."
If there was any relevant information still left in memory, then that is a security bug of the application that put that data into memory.
If malloc initialised memory, that would be a huge performance degradation all across the board, just because some other programs were buggy.
@@ABaumstumpf Sorry, that's just not true.
@@guidomartinez5099 "sorry that's just not true"
If you say so - then it seems Linux, every other major OS, even Java and Rust disagree with you there, buddy.
So no, you are on your own.
I don't even know C but I find this extremely entertaining
Great video Eskil!
That is very nice content!! Nice effort
What puzzles me is that a+b operation takes one clock, but 'if (a
27:56 Why is this UB and not just plain wrong? memcpy takes void* pointers, not char*; also, the aliasing rules are the same for char* and signed char*.
Around 8:30, a is volatile, but you talk about x being so. The end result is the same (disallowed reordering), but probably not what you meant. 🙂
Thank you very much for this! Absolutely love your talks.
Would you give a hint how C99 is broken?
C is the programming language equivalent of the "this is fine" meme
It all makes a lot of sense if you learn it properly. The problem is that there are far too few people who teach all the details correctly. C is not, as many think, a regular language where just some modern stuff has been removed.
I now have a clearer understanding as to why C is :
a) Fast
b) Dangerous
:D
That's what the video is meant to do! Thank you!
At 28:40 you state that compilers don't actually optimize away shifts by register-length (32) as being UB. This is no longer true, I have seen them do so, particularly in the context of doing rotation by n bits where n happens to be zero.
It's always difficult to say with authority what compilers do, because there are many of them and they all work differently. Perhaps I should have said they more rarely do that kind of optimization.
@@eskilsteenberg I've used C for nearly 40 years for low-level/systems code so I've seen most of the cases you talk about but it was still a very nice reminder!
Personally I think there's a few areas where it would have been better to go with implementation defined instead of UB.
@@TerjeMathisen This is a big argument in the ISO C working group. I think almost everyone agrees with you; however, it's much harder to get agreement on which UB should be turned into implementation defined.
@@eskilsteenberg Yeah, we spent one extra year on the 2018 revision to IEEE 754, so it became 754-2019 instead. Standards discussions are hard!