C is not assembler. People think it is without realizing how much you can't actually do in C. I once wrote a compiler that generated assembler code that couldn't be replicated in C: it was just a different calling convention, but that is enough. Also, the amount of optimization that occurs is crazy, and the generated code might be fairly different. Finally, other languages, such as Rust, generate assembly that is just as close to the source as C's is.
The points in the video are fair and well elaborated. As a programmer and an artist I say it feels good to go back to simpler things (but not always for me). People who hate on other languages for no reasonable argument are the worst. I am currently using JavaScript mainly but I enjoy using other languages and have used them on different projects. So far I've used JS (and TS), C, C++, C#, Java, PHP, Python and even Assembly (had too much fun with this one). I really liked (and disliked) some of them but if anyone asks me why would I use a language over another for a certain project, I believe I have the minimum required knowledge to have a fair judgement and give a good reason. People just need to remember: they are all tools and all tools come with good and bad things.
light mode is superior to dark mode, that's the truth people aren't ready to hear. Linus Torvalds, Dennis Ritchie, Brian Kernighan, Bram Moolenaar, all light mode users. On the other hand, Rasmus Lerdorf, creator of one of the worst languages in existence, dark mode user.
The cool thing about coding on old hardware like a Commodore 64 is that you store RAM data at actual memory addresses that map to spots on the physical chip.
I am a first-year electrical engineering student and I had a one-semester course on C. It was some of the most fun I've had learning, and it sparked a desire to continue studying programming. There's something so special about starting from a blank file and bit by bit implementing a complex program that languages like Java have yet to show me, but then again I'm comparing apples to oranges.
What I like in C is that premature optimisation actually works. In other languages it simply fails due to their unfriendly nature toward the compiler or interpreter. C is much clearer about what is fast. I think the slowness of other languages is mostly down to not knowing what is fast in that language. When we write code, we tend to use the simplest way, so we use language-provided things, but those are often slower than writing the code from more basic pieces. So sugar is the devil, and we can't stop using it due to its charming simplicity. C simply doesn't have that option; you are forced to write the function from basic blocks. That way C code is often faster.
So long as you know what you're doing. If you don't, then things can get funny: one time I thought I would try converting some Python code I had to C. It was a bunch of tight loops doing calculations, the sort of thing you don't expect Python to be good at, and I was curious what C's performance with basically the same code would be. Instead the stumbling block was the 50 MB file of numbers I had to read into an array first; that's trivial in Python, but my uneducated attempt to hand-roll a C file parser was so bad I never even got to the calculation. Writing in C requires a commitment to learning about things you would otherwise take for granted.
@@thesenamesaretaken that does indeed sound like an issue I would expect a Python-first programmer to encounter. Coming from the other way, a C developer would also probably be perplexed by performance for different reasons.
@thesenamesaretaken In the really heavy number crunching, Python is only a little bit slower. Your modules are usually written in the language that suits the problem, and the Python language just distributes data as needed.
@@MrHaggyy Yeah, if you do number crunching in numpy or something, you are basically using C code anyway, and often well-optimized C at that. If you want to write your own raytracer in pure Python, it could be three orders of magnitude slower than C. By the way, a very niche language for number crunching is Julia; it has the simplicity of Python but a much smaller community and far fewer libraries. If you use strong typing, it's roughly 3-5 times slower than C, which is not that bad. I tried it, but I have no use for it. Mostly I write single-purpose or non-time-critical stuff in Python and larger apps in C++.
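Since the stumbling block a few comments up was reading a big file of numbers, here is a minimal C sketch of one way to do it, assuming a whitespace-separated file of doubles, with error handling kept terse:

```c
#include <stdio.h>
#include <stdlib.h>

/* Read whitespace-separated doubles from a file into a growable array.
   Returns NULL on failure; stores the element count in *out_n. */
double *read_doubles(const char *path, size_t *out_n) {
    FILE *f = fopen(path, "r");
    if (!f) return NULL;

    size_t cap = 1024, n = 0;
    double *data = malloc(cap * sizeof *data);

    double v;
    while (data && fscanf(f, "%lf", &v) == 1) {
        if (n == cap) {                       /* grow geometrically */
            cap *= 2;
            double *tmp = realloc(data, cap * sizeof *data);
            if (!tmp) { free(data); data = NULL; break; }
            data = tmp;
        }
        data[n++] = v;
    }
    fclose(f);
    *out_n = n;
    return data;
}
```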
In C the semantics are so loosely defined that higher-level optimizations, the kind that require the language to specify more concretely how memory and pointers may be used, will never be possible.
Primeagen just got pumped because the guy is pumped. 😂 Jokes aside, it's nice that we have different programming preferences, because we need this: we need different communities exploring different problems. Imagine if everyone played JavaScript from now on; what happens to all the projects that are more suitable for other languages? Embedded systems, OSs? So let's celebrate the "diversity".
I do believe the feeling of "stacking black boxes on to black boxes on to black boxes" is partially a skill issue. I often feel that way when I begin to use a new tool or language or framework, until I have properly understood (even at a high level) the mechanisms by which the tools I am using work. Once I take the time to understand that, the black boxes feel less like black boxes and more like just yet another tool that I understand and can use and reason about. Except for javascript, that is, js is the ultimate black box.
I think part of it is a black box versus an eldritch horror reaching out from the shadows. What I mean is: if it's a black box with well-defined boundaries that you call into, then you can get good at using it, and over time it becomes clear what it does and how it works and interacts with your code. But when it's an eldritch horror that implicitly appears at a distance and teleports itself inside your code, that's a whole other can of worms. *cough* Spring Boot injectors randomly failing with zero indication, because a random injector somewhere in the million lines of code changed with zero indication that it would affect anything else *cough*
It's also a management tool. If you have a lot of people and need your software fast, it's easier to divide the task into smaller ones and let them build a selection of black boxes, so your architects can assemble the project out of those.
I went from machine code to assembly, then C way back and never got into the newer toy programming languages. I want to focus on solving the actual engineering problems instead. I always thought that wasting so much time on the latest shiny programming languages was like focusing on the telescope instead of the stars and galaxies you were supposed to observe.
Does anyone with a brain actually think that? C was created as a way to enable rewriting Unix from assembler so as to make Unix portable to many other architectures. So immediately we see that C is a kind of minimalist abstraction that can work efficiently over many instruction sets. Further to what you say, even assembly language is not a perfect representation of the machine the code runs on nowadays; it knows nothing of the caches, pipelines, multiple dispatch, branch prediction, instruction reordering, and other magic that may or may not be happening on the machine it's running on.
most of the compiler magic for C is optimization, although there are still some abstractions that are not 1 to 1 mappings, but if the optimization magic was removed and the code were to be compiled literally, it would still work the same (although it certainly helps to have programmed in assembly for that understanding)
@@psteven5 That is true. However, I have heard many old-hand C programmers complain that "the compiler broke my code", which basically means they were assuming C is much closer to the generated code and machine than it actually is. Which is to say, they had undefined behaviour in their code that worked for years or decades until some new optimisation broke it. Changing optimisation levels can certainly change the behaviour of one's code in the face of UB. I'm guessing it was a long time since they learned C, and they relied on compiler behaviour to guide them rather than studying the C standard specification as it has been refined over the years.
Microcode is just an implementation detail. The CPU still has to fetch and decode the instructions we're familiar with before it breaks them down into micro-instructions.
Really sold me on the Zig ? operator. What's great about it is that, as someone who is neither good at programming nor knows Zig, it gives me confidence that if I start on the Zig journey, one day I will be ready to brave 'the next level', and all it takes is backspacing that question mark.
I'm of 2 extremes: Ruby for complex object-basedness (not merely OO), and clean C for compact UNIX-like CLI tools, as well as the occasional Ruby-hot-spot speedup. GDB, the GNU debugger (and the Godbolt website), brings it all together for me: it lets me single-step through the produced assembly code. Beautiful.
I started watching to learn about C. It's interesting to learn the foundation of these other languages. It kinda provides you context for what has been developed to give you better methods and less complexity.
One of the things I dislike about the move away from exposing pointers to the programmer is that data structures where pointers are the most natural representation, like linked lists and trees, actually become more confusing. This is one of the areas where I think Go really excels. I honestly don't think I fully comprehended the linked list until I implemented one with pointers in Go.
@@Brian-ro7st that’s how I learned pointers well. It really helps with the concept of needing a pointer to actually change data itself as opposed to just reading data or changing a copy
Knowing some C makes understanding the difference between pass by reference and pass by value very obvious, whereas in say JS you just have to memorize when it happens so you don't shoot yourself in the foot.
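A minimal C sketch of why the last few comments come back to the same thing: linked structures are just pointers, and because C passes every argument by value, mutating the caller's data takes a pointer too (the names here are made up for illustration):

```c
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* C passes arguments by value, so to change the caller's head
   pointer we must receive a pointer *to* that pointer. */
int push(Node **head, int value) {
    Node *n = malloc(sizeof *n);
    if (!n) return -1;
    n->value = value;
    n->next = *head;   /* new node points at the old head */
    *head = n;         /* caller's head now points at the new node */
    return 0;
}
```

Calling push(&list, 42) works because the function gets the address of the caller's list; a push(list, 42) variant would only ever rewire a local copy, which is exactly the pass-by-value trap mentioned above.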
The Primeagen's life and professional takes are always from the point of view of a well-versed, smart, and lucky man. Those don't apply to all the people following.
The most fun I have programming is writing C for microcontrollers. There's something about poking bits around memory and they actually live right where I put them. Generally I don't even use heap alloc, just allocating static blocks and using a simple memory manager. It's miles away from my daily work.
I think that C's influence on hardware has been underestimated as well. Look at the 8-bit world and how those assembly languages are architected. Then, when you get to 16-bit and later (remember, C came about in the 1970s), you can see, even in extended ISAs like x86, how instructions and architectural features were added arguably at least in support of C, if not more directly influenced by it. Then get to the load/store architectures and see how well their assembly languages and architecture map to C.
Interesting, I prefer going the other way. I.e. starting with Math and REDUCING the abstractions, i.e. functional programming. That said, it depends what project you're building and who you're building with. This video really only applies to hobby projects where you have control over those things.
What should I be inferring from the fact that the only representations of themself (?) that they put in this video are an anime avatar and someone wearing nitrile gloves with taped shut boiler suit sleeves to accomplish basic tasks? I feel like interacting with this person will lead to waking up naked in a basement chained to a metal pipe.
The more control you have, the more things you have to control, and the fewer things are controlled for you. It's like breathing manually. Good for control freaks, I guess.
It's the same reason why some companies prefer paying expensive cloud bills rather than paying people to take care of the infrastructure : it takes time, and time is expensive
When you were talking about how tiny and how difficult it is to make mechanical watches, optical lithography for modern CPU production came to mind, and it seemed pretty hard too... soooo tiny
I love C, it’s the best language to me ! Rust is interesting though, mostly because giving more information to the compiler enables more powerful optimisations… but I think C could add similar things, and become way more powerful than Rust in the end.
I don't think it can, and I don't think it will. C doesn't break, and the reason Rust is able to give more information to the compiler is that the language's semantics are richer and more detailed. Changing C so that it could provide that kind of information would break a lot of things.
I absolutely love C and ASM. Something like Zig and Rust is cool and all, but no one talks about formal verification, which nullifies both of their existences for me. CompCert and the Verified Software Toolchain (VST) are absolutely goated, and nothing you bluehair devs say can change my mind. Coq is probably one of the best languages I've ever learned, if not the best.
yeah - I noticed they all have blue hair too. They are the ones that go around disrespecting the foundational pillars of civilization - and for sure C lang is one of those
C didn't ship a formal verification toolchain either, therefore it's unfair to dismiss new languages because they don't have one yet. When they get enough traction, there might be some tooling that does that. Also, the whole reason C got this in the first place is how fragile the language's semantics are. So I love C too, but the sooner I can get rid of it the better in my book, because C has that tendency to suck the fun out of programming.
@@pierreollivier1 No language group to my knowledge has shipped a formal verification toolchain themselves, so your point about unfairly dismissing other languages based on that is largely moot; since none of them have it, it becomes a user's preference. We need to advocate for formal verification to safe-language groups. The fact that VST and CompCert exist while safe languages like Go, Rust, or Zig haven't built, attempted, or even talked about formal verification shows a real lack of scope on their part about truly safe programming and safe programs. None of the safe-language groups are talking about or even referencing formal verification; you have to dig for small outgrowth groups, which are extraordinarily esoteric. That's the problem. Formal verification is about as esoteric as it gets. Without safe languages publicizing formal verification, it's likely always going to be esoteric. Safe-language groups should be the ones pushing for formal verification, because it only strengthens their argument for the need for safe languages. They are the ones that have the user base to plead with to develop such tools, because they have already sold them safety. They are also the ones that need to show the other side of the safety coin, since safety is their whole argument. These groups are in a prime position to lead the charge in making formal verification more mainstream. Relying only on formal verification or only on the language is never going to be fruitful. Useful, sure; progressive, absolutely; but the real magic bullet here is a safe language plus a formal verification toolchain. Until we have both, nothing is safe. It's all about the mitigation and the acceptance of dangers, for now.
Full formal verification is out of scope for a general-purpose language because it puts too many constraints on the user. Things like borrow checking aim to "prove" that some properties hold for software by adding chained constraints, without having to bring in the whole machinery needed to "prove" more things about the program. Writing provable code requires a lot of effort, both to write code that can be mathematically proven at all and because even then you need to hand-hold the software to let it prove things anyway. Meanwhile, most software doesn't need formal proof that it works; unit tests serve as empirical evidence that key values are handled properly, and we trust people to write code in a way that unit tests can effectively cover real use cases. Most people using programming languages are doing engineering, not maths. There's a tradeoff to be made between cost, efficiency, and "correctness", and "correctness" is rarely the top priority. And languages are written for the people who use them (usually by people who use them), so of course most languages don't focus on provability. It still has its place, and research on the matter is cool; I just wouldn't say it's a big deal that there isn't a focus on it. Let's talk about this in 50 years, when formal proofs in most software are actually something reasonable to do.
C is pretty simple, but there is so much I hate about it, because it makes me think about stuff that could be solved by tooling without giving up anything I consider remotely valuable. Here's a non-exhaustive list:
- declaration order in files matters
- no generic containers without macros (hashmap, vector, etc.; a macro sketch follows below)
- hardware-dependent integer sizes by default
- standard library varies greatly between platforms
- separate header files
- fallthrough in switch statements
- includes, macros, and the preprocessor in general
- polymorphism only via void*
- no references (i.e., never-null pointers)
- no namespacing of any kind
- enums polluting the global namespace
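For the generic-containers bullet, a sketch of the macro workaround, with illustrative names and trimmed to the push operation:

```c
#include <stdlib.h>

/* Declare a growable vector of TYPE named NAME (illustrative macro). */
#define DEFINE_VEC(TYPE, NAME)                                        \
    typedef struct { TYPE *data; size_t len, cap; } NAME;             \
    static int NAME##_push(NAME *v, TYPE x) {                         \
        if (v->len == v->cap) {                                       \
            size_t cap = v->cap ? v->cap * 2 : 8;                     \
            TYPE *p = realloc(v->data, cap * sizeof *p);              \
            if (!p) return -1;                                        \
            v->data = p; v->cap = cap;                                \
        }                                                             \
        v->data[v->len++] = x;                                        \
        return 0;                                                     \
    }

DEFINE_VEC(int, IntVec)   /* expands to IntVec and IntVec_push */
```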
There is absolutely nothing wrong with starting a new project in 2024 and choosing to write it in C. I both use and like the "modern" systems languages like Zig and Rust, but don't understand the hate C gets, nor do I refuse to use it simply because those others exist. I admit that I often do choose one of those over C lately, but this is mainly because I like to expand my knowledge with new languages, not because I feel there is something inherently bad about C.
I got good at C, and I found out it made me pretty good at every other language. Because I played "The Farmer Was Replaced" and accidentally learned Python in like an hour.
The binary files that you send for execution are already a high-level language: they get translated into micro-ops, pipelined, optimized, run through predictive branching/speculative execution, etc. There is so much going on under the hood that you have no control over... you cannot program how your code is pipelined, for example, or how the speculative execution is carried out... the binary code is just the lowest level of abstraction that you as a programmer have access to, but there is a whole other stack of complexity going down from binary to the hardware, just as there is one going down from the high-level language to the binary. It's absolutely crazy that with all these layers of abstraction everything still kind of works OK.
Rust has an insanely nice package manager that bundles everything up nicely AND includes all dependencies, but adds heaps of abstraction. C has no package manager, and you need to do slightly more complex builds on top of installing libraries just to get a binary, but it gives you lots of control. Both are double-edged swords; there is no one programming language that can do everything flawlessly, because there is no perfect language, and either way you are going to make sacrifices.
The compiler magic is currently the real problem. You write down something like `a[i] = 5;` and the compiler figures out this shouldn't have an effect and removes the store instruction. Now someone throws in `volatile`... yeah; but nowadays convincing the compiler to emit what you wrote down can be absolutely horrible. I think the reason behind this obsession with optimization is that compiler developers just assume (unfortunately correctly) that the people writing C nowadays on average have no clue what it might translate to in assembler, or whether it's efficient code. Result: the compiler will NOT translate your code straight into assembler; instead it first transforms the code into something which hopefully does the same thing (with lots of assumptions about the environment), and then translates that resulting code into assembler. That's quite a nightmare, because the assumptions under which these transformations are done are not obvious at all.
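A minimal sketch of the dead-store point; whether the first store survives depends on the compiler and flags:

```c
int buf[16];

void plain_stores(void) {
    buf[0] = 5;    /* dead store: nothing reads it before the next
                      write, so the compiler may delete it entirely */
    buf[0] = 7;
}

volatile int vbuf[16];

void volatile_stores(void) {
    vbuf[0] = 5;   /* volatile: both stores must be emitted, in order */
    vbuf[0] = 7;
}
```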
I cracked and went to C (definitely not C++) because it allows me to learn, but I also realize there are a lot of gotchas with it. So it makes a great learning and exploratory language, but it seems like a terrible option for actually building stuff. A great discovery, prototyping, or proof-of-concept tool; but for building, I think abstracting those gotchas away, once you are aware of them, is what you want. The concern and critique of the video is that, yes, if you're building something yourself, it is probably more enjoyable and satisfying. When other people are potentially going to change your stuff, when you work on a team, that all introduces a lot more variables and levels of complexity that I think make C not the best tool for that type of scenario. Not that a team can't build in C, but it would probably require a whole different model of collaboration than tools made with that in mind, like the more modern Zig and Rust sound to be (neither of which I am familiar with, especially not Zig yet, but I like pointers, so who knows).
I'd drag the kids analogy even further; you need like 30 or even 100 tries to start liking something, and it's always the same story. One day they say it's their favourite meal (while just the day before they were hating it), and then it sticks; it's a "safe" food for the rest of their life.
I agree that C strikes a good balance between knowing what the computer really does and being able to program it efficiently. But yeah, obviously there's a ton of compiler magic, there's the abstraction provided by the OS itself, there's crazy hardware optimizations, there's crazy CPU firmware optimizations, there's even non-deterministic neural branch prediction. But we can't un-abstract those, this level of complexity is still required for current level of performance. While in case of a lot of languages, abstraction causes loss of performance, so C strikes the right balance there as well.
And yeah, languages like Zig or even Go are in many terms direct improvements over C. Zig doesn't really abstract anything away, it just provides developer ergonomics, means to understand the code better by just reading it, for the most part it doesn't physically prevent you from shooting yourself in the foot but it more or less makes it very clear when you're about to do so based on those features, those features aren't abstractions though, it's just syntax coercion. When it comes to Go, obviously it does abstract quite a bit but not in a way that's incomprehensible and in a way that might not fit all problems but it fits a large class of problems exceedingly well without much performance sacrifice. Same with Rust, though it does take its syntax and preprocessor coercion quite far, to the point where it's easy to forget about the underlying hardware/OS interactions.
Just wish C had some new features: namespaces, better built-in memory tools, auto pointers, type aware macros, automated #include guarding, pre and post handlers bound to procedures, etc.
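On the auto-pointer wish specifically: GCC and Clang already ship a non-standard approximation, the cleanup attribute. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>

/* GCC/Clang extension, not standard C: run the given function
   when the annotated variable goes out of scope. */
static void free_charp(char **p) { free(*p); }
#define AUTO_FREE __attribute__((cleanup(free_charp)))

void demo(void) {
    AUTO_FREE char *buf = malloc(64);
    if (!buf) return;            /* cleanup still runs; free(NULL) is fine */
    snprintf(buf, 64, "freed automatically at scope exit");
    puts(buf);
}                                /* free_charp(&buf) runs here */
```

Similarly, `#pragma once` covers the include-guard wish on most compilers, and C11's `_Generic` goes partway toward type-aware macros, though none of these are portable guarantees.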
If you badger a kid to eat something though they may try it and just to spite you decide to not like it for years, decades, maybe their whole life. So I think the best approach is to just have foods available, ask neutrally "do you want X". Never ever push the food, just have it available. Eventually they'll get curious and try it just because other people eat it.
Ada gives you complete control and clarity over where memory is stored and used, but it isn't very painful to use. Its pointer equivalents (kinda; it actually also has addresses, which can be used more like C pointers if you really want) are access types, which can also be declared not null. Anyone who hasn't tried Ada really ought to. The amount of time and money spent designing Ada 83 dwarfs anything else, and it hasn't had or needed any major overhauls in 40 years, just steady, significant improvements. The Ada 2022 ISO standard is still being implemented. It also wasn't built by committee but by competition.
For me, it's the time thing. I found that it just gets easier with time. I don't really get memory leaks. If, hypothetically, I made a piece of code that does some complex stuff on the heap, and I'm not in the mood, and it's a situation where a shortcut is warranted, I'll just fork the process, let it do its thing, pipe stuff if I have to, and let the OS clean it up when that process exits. A little bit of overhead, but you save some time, and you get some fault tolerance. You could also just go the NASA route and never touch the heap.
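A rough sketch of that fork-and-forget pattern (POSIX calls, error handling trimmed):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

void run_isolated(void (*work)(void)) {
    pid_t pid = fork();
    if (pid == 0) {             /* child: do the messy heap work */
        work();
        _exit(0);               /* OS reclaims every allocation here */
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);  /* parent just waits; a pipe could
                                   carry results back if needed */
}
```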
I would argue that Assembly Language is much “bigger” than C, on modern CISC systems. There are zillions of obscure instructions and myriad different CPU states and configurations, and C can really simplify that to a more standard arrangement of data and functionality.
Prime might be a web dev, but we all know he's moving towards C/C++
It's "C and/or C++". Stop mixing these languages, it confuses newcomers.
@@zyriab5797really ??? 😂
didn't expect to see you here
@@zyriab5797 it doesnt confuse anybody what are you talking about those two are literally the same language but one having some additional stuff
C++ never been promoted here. C++ is over complicated.
Turned 3 mins video into 13 mins, is this guy the best react streamer or what
Yes he is 🎉
Six seconds in, pauses video
so true lol
4x developer
The Asmongold of programming
4:13 this. C _feels_ like it's a direct translation of the assembly into a slightly friendlier and more universal language.
But in reality we've got caching, pipelining, branch-probability-dependent pipeline prefetching, hyper-threading, etc., and on top of that we have compiler optimizations.
In reality much, much more is happening under the hood, abstracted away into a black box so the CPU acts like we think it should.
"caching, pipelineing, branching probability dependent pipeline prefetching,.. compiler optimizations" - I'm not really sure why people bring these things up. If you write C, all of these are deterministic systems that are directly affected by how you structure your code. Caching is dependent on how you structure your memory, branch prediction is dependent on how you structure your code (and can be hinted), pipelining is just being aware of what assembly your code in translating into to ensure you're doing things in the most effective order, prefetching depends on how you structure your code and can be hinted. The ability of the compiler to optimize your code is again, entirely dependent on how well you structure it to play well with the optimizer. None of these things are magic black boxes, they are predictable systems that you learn with experience how to write code that plays well with them.
By the way, if we had more explicit control over those things, believe me we WOULD use that control, but sadly we don't have it. If instructions are available as intrinsics, we absolutely use them, but with the way things currently are, it's more control by tricking the CPU into doing what we want at this point.
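To make the "structure your code for these systems" point concrete, a small sketch; the branch hint is a GCC/Clang builtin, not standard C:

```c
#include <stddef.h>

#define N 1024
static double grid[N][N];

/* Row-major traversal walks memory sequentially, which the cache
   and hardware prefetcher reward; swapping the two loops strides
   N * sizeof(double) bytes per access and is far harsher on both. */
double sum_grid(void) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += grid[i][j];
    return s;
}

/* Branch "hinting" as mentioned above: tell the compiler the error
   path is cold so the hot path stays straight. */
#define unlikely(x) __builtin_expect(!!(x), 0)

int process(int err) {
    if (unlikely(err))
        return -1;
    return 0;
}
```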
@@StevenOBrien None of these things are in fact black magic boxes. Of course things are documented, and the mechanisms and rules by which they function are deterministic, documented, and known.
Yet I know no one who has all those rules and all that functionality in their head when programming. If anything, they assume the assembly that would come out of the compiler if it weren't for all those optimizations and such. The rules for optimizations, pipelining, etc. work in such a way that you don't have to think about them, and the resulting code acts like the assumed unoptimized assembly does. (In most cases, and as long as you stay away from UB.)
That is the black magic box. In reality it isn't black, but it is treated like a black box.
And you can in fact have more control over compiler optimizations: switch them off and optimize yourself. People don't do that anymore because the compiler is usually better at optimizing than they are. Instead they let the compiler optimize as much as possible and just trust that it doesn't change their code's runtime behavior.
Wait, did Prime just subtly endorse the practice of quiche eating? Revoke this man's x86 Assembly license immediately!
"Real programmers don't eat quiche"
His kids clearly rejected his quiche eating ways. So he got that right at least
wait, did you just shif in your pants? can you like unshif?
All we are saying is give quiche a chance.
I f'ing love quiche. Nom nom
After 40 years in IT as a reseller and now retired, I wanted to warm up my old hobby - programming. After two years of C++, I realized that I'm not compatible with C++ and ended up with C, and it's great. When memory management and logic go hand in hand, I just love it.
C is still the closest thing to cross-platform assembly. I also love the simplicity of the language, and it shows with how small and fast its compilers are. I only wish it had better tooling, as something like cargo really speeds up development.
any tech stack that achieves esoteric priesthood status and I'm all in
my copy of K&R poses prominently in my shrine to the AT&T Bell Labs pantheon
just add zig build tools and you're modern. It's the best
Sadly, even C cannot be called a cross-platform assembly. It's too far detached from the actual ABI details, for example.
@@vitalyl1327 except inline assembly
I'd say LLVM and WASM are closer to cross-platform assembly than C. C is a high-level language with an abstract machine model as part of the spec.
The reason I use C (and Odin, because its philosophy is the same) is because it doesn't get in my way when I code. That's it. It is so much easier to reason about my code when I don't have 50 different features to think about, all polluting my mental model.
Sure 😂 So we're not going to mention the brain damage everyone gets when compiling their C code? Everyone who has worked on bigger projects in C can relate to that.
Then you must have an even bigger mental model to manage your makefiles lmao
In some higher-level languages you aren't forced to use the highest abstractions possible if you have full control over your project.
That's why I like using Rust in personal projects, usually with enums for polymorphism which avoids 99% of weird annotations. But I'd probably hate it in a team because I know how addicted most are to using the fanciest abstraction features and adding complexity via third-party dependencies.
Because it's the official lingua franca of programming
I think you've meant "ligma".
@@eloniusz what's a ligma
there was a time that if an applicant didn't have some years of commercial C programming on their resume that they shouldn't even bother applying. It was as expected in the industry as much as we expect everyone to know the alphabet and be able to read.
Those were the days of real programmers, of course.
@@Vinoyl Ligma balls
@@Vinoyl💀💀
That's the reason I love programming in Common Lisp. You can start fairly abstract, but go down to machine level when you need to.
My reason for C is much simpler. I need to justify my alcoholism.
😂 I can relate.
After trying out many languages (including Rust and Zig) I’ve also found that I strangely really enjoy C. There’s much less to keep in your head and with some good technique and knowledge of what you’re doing it’s very gratifying.
Exactly
I agree, but it's nice to have a few drops of vectors, lists, and other stuff, and g++ normally finds more potential bugs when casting. I'm a bit too old-school for all the C++ classes; most of my code is C. Templates are a good way to keep my code more DRY; I can do the same with macros, but it seems easier to write templates.
@@kjetilhvalstrand1009 I heard one person say one man's boilerplate is another's simplicity.
@@Knirin I'm very pragmatic about that. I don't want C/C++ to turn into Basic; if code contains boundary checks for every array access, it's not C/C++ code. Best practice is to check things before use, and only once, and for that to work you can't compromise your cleanup code. (So I will say some boilerplate is good code.)
@@kjetilhvalstrand1009 I was more thinking about how a lot of these arguments don’t count macros to handle the lack of generics or templates in C as boilerplate but consider type parameters for generics to be boilerplate.
The number of reimplemented data structures also tends to be ignored.
Always boundary check your code and sanitize inputs. C is very good at turning logic errors and bad inputs into memory problems or worse.
10:28
Do love those filters and ranges.
Haven't written a boomer loop in a long time, except in simple examples kept readable for people unfamiliar with algorithms.
„Giving yourself more control, but it comes at a cost of time“
Uses NeoVim and writes own Plugins
8:40 - This is a really important thing for you to have called out, imo.
I was feeling super burned out for _years_ of corporate software work; spending most "dev time" on cloud configurations, using a random mix of languages for every project, spending tons of time upgrading frameworks or switching to new ones because web development paradigms and "best practices" change every year or two... it's all so draining and soulless.
Then I finally made myself do a project I'd been wanting to do for years - get back into making a game engine, combined with wanting to make it work for Web-Assembly for various reasons. Was originally going to use Rust, but learning two new things while already dealing with burnout was... not working. Eventually went with C and with the arbitrary deadline of a game jam, I managed to start the project and actually get into it again. Because of it, this is the first year in almost a decade where I've actually been _enjoying_ coding again.
"all you need is a bunch of nand gates"
Better yet, all you need is a bunch of transistors.
Just need a lot of Indians who know math
Even better - Abacus - Chinese.
all you need is electrons and holes
Better yet, all you need is some sand
The best skill to have as a programmer is understanding code someone else wrote
Not to mention the code you wrote yourself a few weeks back. :)
@@lmoelleb weeks? Bro, I come back two days later and im lost
@@MindBlowerWTF well, I did not want to push it too far.... But to be honest, sometimes the lunch break is enough. :)
@@lmoelleb lunch break is more accurate lol
People always pushing stuff further and further. In almost everything. Examples are the economic growth, your piano skills or your rizz skills - idk.
Sometimes there is no need.
Have a nice day.
The dichotomy you describe between drive to mastery and drive to get paid seems like it has a middle ground that I strive for that could be described as a drive to accomplish a task well and efficiently for the satisfaction of it.
Wait until you start implementing high level concepts within the C preprocessor
Because no language feels closer at home than your own personal (it's readable to you) macro spaghetti
I once forked a Lua library where the previous owner generated all the tests with a custom C program. It was such a confusing macro mess that I ended up just rewriting them. Probably a skill issue, but idc, it was still insane.
"The more Control you have, the more Pain you have" dayumm real.
Try embedded development; the only abstraction you have is the startup code that calls your main function.
Also, working with memory directly, you understand how the computer works. Interrupts are another thing that lets you demystify what happens at the processor level.
You also need to study the specific controller's architecture.
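In that spirit, a sketch of what register-level C looks like; the address and pin number here are made up, and on real hardware they come from the controller's datasheet:

```c
#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register. */
#define GPIO_OUT (*(volatile uint32_t *)0x48000014u)

void led_on(void)  { GPIO_OUT |=  (1u << 5); }  /* set pin 5   */
void led_off(void) { GPIO_OUT &= ~(1u << 5); }  /* clear pin 5 */
```

The volatile qualifier is what keeps the compiler from caching or eliding these accesses; every read and write hits the bus, exactly as written.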
1:45 On YouTube you can press '
Thank you, wise Gentooman
ty
We got vim motions on TH-cam before we got GTA 6
@@sutirk tragic
Marry me.
I'm a former frontend engineer and currently learning C, so this really resonates with me. As he mentioned, I can't resist my inner drive for mastery, however inefficient it feels.
His argument betrays a lack of understanding of the underlying hardware that he thinks C maps to so elegantly. I guess people who don't know what they're talking about are the most likely to think they're knowledgeable and be comfortable sharing their views.
I don't often have an opportunity to write C, but when I do, I enjoy it. And often it is still the best tool for the job.
Because B isn’t commercially available?
How about D?
@@the_mastermage i guess D already exists e-e
and cuz B is interpreted
Every now and then i rmb D is a language 😂 dont get why its basically unknown despite being easy & decent
@@JohnDoe-np7do I feel D doesn't have a natural niche or compelling features. D's sweet spot is between c++ and java. "C++ with GC" or "native code java". But C++ has accrued features like reference counting, and java has jit compilers now.
I don't know if I'm too dumb to realise i'm doing things wrong or if I never did something that actually required me knowing how to use them, but I never had issues with pointers in C
it mostly just stems from beginners not really understanding what they are, and then proceeding to throw * and & at things until it (seems to) work
I have a theory that people's misunderstanding of pointers stems from a misunderstanding of memory. if you understand physical memory, pointers become - if not obvious - easier to reason about.
@@KingJellyfishII It's interesting to me. I always heard rumors that so many people having a hard time with pointers, but I've never really seen examples of things that real people had trouble with after not being an absolute beginner (unless you're buying into RAII instead of memory arenas). Have you encountered any actual misunderstandings from people who weren't outright beginners?
Yeah, I think it's entirely a beginner problem. Maybe if people are talking about null pointer issues that is a legit concern, but that is really just a symptom of lazy error handling or poor design.
A lot of code is easier to read and reason about when using pointers.
Pointers are easy; just the syntax is hard to get at first, because of the misguided notion of "declaration follows use" (one of many, many sucky choices in C's design).
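For anyone who hasn't met it, "declaration follows use" means a declaration mirrors the expression syntax you'd use with the name; a few classic lines:

```c
int *p;          /* *p is an int, so p is a pointer to int         */
int a[3];        /* a[i] is an int, so a is an array of 3 ints     */
int *ap[3];      /* *ap[i] is an int: array of 3 pointers to int   */
int (*pa)[3];    /* (*pa)[i] is an int: pointer to array of 3 ints */
int (*fp)(void); /* (*fp)() is an int: pointer to a function       */
```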
What some people miss about undefined behavior in C is that much of it is a feature, not a bug in the spec. C deliberately has some UB to allow compilers to optimize on architectures that behave differently at a fundamental level. For example, AVR and PIC microcontrollers and certain RTOSes will not panic when dereferencing a null pointer. I've worked with some microcontrollers, like the TI MSP430, that actually initialize uninitialized variables. Not everything is ARM or x86.
C's stance of defining this as UB means that compilers can make the correct decision depending on the OS and hardware. Reading forums of competing programming languages, you'd get the impression that undefined behavior in C is a pernicious and capricious bug in the specification itself. It's not. It's quite deliberate.
All this being said, languages that eliminate this UB and prescribe standards across hardware are limiting their optimizability across different architectures and, imo, are unlikely to replace C as the common systems programming substrate.
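A classic illustration of the optimization angle, as a sketch (exact codegen varies by compiler and flags):

```c
/* Because signed overflow is UB, the compiler may assume i + 1
   never wraps, prove the loop runs exactly n + 1 times, and
   unroll or vectorize it freely. Declare i as unsigned instead,
   where wraparound is defined, and that assumption disappears. */
long sum_to(int n) {
    long s = 0;
    for (int i = 0; i <= n; i++)
        s += i;
    return s;
}
```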
It would be possible to make most such instances of UB "implementation-defined" instead. UB means the program can legally do anything. It can delete your hard drive. It can turn off the life support machines. It can push the button and launch the missiles. UB is never a feature. It's *always* a defect in the spec.
@isodoubIet So what is the difference between undefined and implementation-defined?
If you take something like the aerospace, automotive, or industrial standard compilers, they all need to define a certain behavior for UB. So no missile launches by accident, no car flips by accident, and no microwave burns down your house.
@@MrHaggyyIt's not "my" difference, it's the standard's (C and C++ both). For example the C++23 standard defined "Undefined behavior" as "behavior for which this document imposes no requirements", whereas "unspecified behavior" is "behavior, for a well-formed program ([defns.well.formed]) construct and correct data, that depends on the implementation", and implementation-defined is "behavior, for a well-formed program ([defns.well.formed]) construct and correct data, that depends on the implementation and that each implementation documents".
Undefined behavior is, e.g., accessing an array past the end. Unspecified behavior is which subexpression gets evaluated first in an expression like (a + b) + (b + c), and implementation-defined is something like the width of an int.
"If you take something like the aero-space, automotive or industrial standard compilers they all need to define a certain behavior for UB"
The "compiler" always "defines" a certain behavior by UB, and its source code is a definition of sorts. The problem is you as the programmer have no idea what that might be, and even if it happens to work, it's a bad idea to rely on it.
Well, UB means anything can happen. The program might order uranium or set your bed on fire.
What you said about the level of abstraction hidden from you even when using C is so right. And that part about having more control being a pain sometimes is true depending on the scope of a project; the amount of heap/dynamic allocs my current project makes always makes me think "yo this sh*t is dangerous", although arenas and defer pretty much solve it for me, but still, it's risky.
I'm by no means a beginner, but damn, I still don't know a lot. I'm 700 lines into a compiler/language I've chosen to write (in Zig btw) when I have the time, and I realize just how much I've taken for granted.
I'm not even at the codegen part; I was considering native asm, but I think LLVM IR or QBE would be a more sane option. The tokenizer I wrote is great: it's 500 lines, works flawlessly, and is fast.
The parser, however, is another story: it works but is a mess and needs to be revised. Plus, I'm using a map to resolve vars at runtime; pretty sure there are better ways to do this.
At this point, the language already has the ability to resolve identifiers into values, booleans, floats, and ints of numerous bases (namely 2, 8, 10, and 16, because we're programmers and these are the only ones that truly matter; in any case they all evaluate to the same values at run time). The parser also has the ability to evaluate binary arithmetic operations/expressions with numerous operands, even when an operand is actually a nested expression, although there is no precedence and it's left-associative for now.
However, if I'm being honest with myself, the implementation of the parser is so dumb, and the number of branches the program takes is probably enough to grow an actual physical tree 😂
There are many functions/methods which use both recursion and loops at the same time, plus I've added so many constraints to the syntax of the language, such as prefixing ids with $ or @, to ease the process. @ is supposed to reference global symbols either "imported" from the Zig std or C std or simply provided out of the box. Got the idea from LLVM IR and Zig too.
There's still so much left to do, such as adding that feature, and by the time I'm done it'll probably be well over 1000+ loc. Hopefully I'll come out more knowledgeable once it's completed, but more importantly, be able to accomplish my goals for this project.
I don't think there is necessarily a "best" language; in fact, I'm at a point where I enjoy as many languages as possible. I used to be like this guy, but for me it was C++, though I appreciate the more modern languages available nowadays.
After all, it's nice to have a change of language once in a while, because for the most part the basics stay the same, and it should always be a joy to program. Skill issues suck, but there really is no easy way to get good if you don't have an inherent talent for this.
C is not assembler. People think it is without realizing how much you can't actually do in C. I once wrote a compiler that generated assembler code that couldn't be replicated in C. It was just a different calling convention, but that is enough. Also, the amount of optimization that occurs is crazy, and the generated code might be fairly different. Finally, other languages, such as Rust, generate assembly that is just as close to the code as C's.
Yeah. Another example is intrinsic instructions such as AVX.
Prime was talking real to the new gen of programmers, real talk at the end there, gr8 vid
The points in the video are fair and well elaborated. As a programmer and an artist I say it feels good to go back to simpler things (but not always for me). People who hate on other languages for no reasonable argument are the worst. I am currently using JavaScript mainly but I enjoy using other languages and have used them on different projects. So far I've used JS (and TS), C, C++, C#, Java, PHP, Python and even Assembly (had too much fun with this one). I really liked (and disliked) some of them but if anyone asks me why would I use a language over another for a certain project, I believe I have the minimum required knowledge to have a fair judgement and give a good reason.
People just need to remember: they are all tools and all tools come with good and bad things.
why did prime flashbang us at the end for no reason
Twice!
FBI OPEN UP
Light mode is superior to dark mode, that's the truth people aren't ready to hear. Linus Torvalds, Dennis Ritchie, Brian Kernighan, Bram Moolenaar: all light mode users. On the other hand, Rasmus Lerdorf, creator of one of the worst languages in existence, dark mode user.
G. K. Chesterton reference dropped, subscribe button slammed.
there are useful, well crafted, abstractions. And there is Vercel and Next.js
The cool thing about coding on old hardware like a Commodore 64 is that you store RAM data in actual memory addresses that reflect spots on the physical chip.
I am a first-year electrical engineering student and I had a one-semester course on C. It was some of the most fun I've had learning, and it sparked a desire to continue studying programming. There's something so special about starting from a blank file and bit by bit implementing a complex program that languages like Java have yet to show me, but then again I'm comparing apples to oranges.
What I like in C is that premature optimisation actually works. In other languages it simply fails due to its unfriendliness to the compiler or interpreter. C is much clearer about what is fast.
I think the slowness of other languages is mostly due to not knowing what is fast in that language. Often when we write code, we tend to use the simplest way, so we use language-provided things, but those are often slower than writing the code with more basic things. So sugar is the devil, and we can't stop using it due to its charming simplicity. C simply doesn't have that option; you are forced to write the function from basic blocks. That way C code is often faster.
So long as you know what you're doing. If you don't, then things can get funny: one time I thought I would try converting some Python code I had to C. It was a bunch of tight loops doing calculations, the sort of thing you don't expect Python to be good at, and I was curious what C's performance with basically the same code would be. Instead, the stumbling block was the 50 MB file of numbers I had to read into an array first. That's trivial in Python, but my uneducated attempt to hand-roll a C file parser was so bad I never even got to the calculation. Writing in C requires a commitment to learning about things you would otherwise take for granted.
@@thesenamesaretaken that does indeed sound like an issue I would expect a Python-first programmer to encounter. Coming from the other way, a C developer would also probably be perplexed by performance for different reasons.
@thesenamesaretaken In really heavy number crunching, Python is only a little bit slower. Your modules are usually written in the language that suits the problem, and the Python language just distributes data as needed.
@@MrHaggyy Yeah, if you do number crunching in numpy or something, you are basically using C code anyway, and often well-optimized C at that.
If you want to write your own raytracer in Python, it could be three orders of magnitude slower than C. By the way, a very niche language for number crunching is Julia; it has the simplicity of Python but a much smaller community and far fewer libraries. If you use strong typing, it's roughly 3-5 times slower than C, which is not that bad. I tried it, but I have no use for it. Mostly I can write single-purpose or non-time-critical stuff in Python and larger apps in C++.
In C, the semantics are so ill-defined that higher-level optimizations, which require the language to specify more concretely how memory and pointers can be used, will never be possible.
Primeagen just got pumped because the guy is pumped. 😂 Jokes apart, it's nice that we have different programming preferences, because we need this: we need different communities exploring different problems. Imagine if everyone played JavaScript from now on; what happens to all the other projects that are more suitable for other languages? Embedded systems, OSs? So, let's celebrate the "diversity".
I do believe the feeling of "stacking black boxes on to black boxes on to black boxes" is partially a skill issue. I often feel that way when I begin to use a new tool or language or framework, until I have properly understood (even at a high level) the mechanisms by which the tools I am using work. Once I take the time to understand that, the black boxes feel less like black boxes and more like just yet another tool that I understand and can use and reason about. Except for JavaScript, that is; JS is the ultimate black box.
JavaScript is easy to reason about. Node and mountains of dependencies, not so much.
Understanding how something works (i.e. its interfaces) doesn't make it any less of a black box.
Depends on the person and what they feel a black box is.
I think part of it is a black box versus an eldritch horror reaching out from the shadows. What I mean is: if it's a black box with well-defined boundaries that you call into, then you can get good at using it, and it becomes clear over time what it does and how it works/interacts with your code. But when it's an eldritch horror that implicitly appears at a distance and teleports itself inside your code, that's a whole other can of worms *cough* Spring Boot injectors randomly failing with zero indication, because a random injector somewhere in the million lines of code changed with zero indication that it would affect anything else *cough*.
It's also a management tool. If you have a lot of people and need your software fast, it's easier to divide the task into smaller ones and let them build a selection of black boxes, so your architects can assemble the project out of those.
There must be a psychology thing to the level of abstraction you prefer too.
your video editing skills are incredible, what a treat to watch!
I went from machine code to assembly, then C way back and never got into the newer toy programming languages. I want to focus on solving the actual engineering problems instead. I always thought that wasting so much time on the latest shiny programming languages was like focusing on the telescope instead of the stars and galaxies you were supposed to observe.
I'm at 0:00, looking forward to seeing Prime's reaction to the video, which I believe is called "Why I don't Use Zig"
People thinking C is the perfect representation of the machine should look up microcode, and as primeagen says, compiler magic.
>look up microcode
so...C?
Does anyone with a brain actually think that? C was created as a way to enable the rewriting of Unix from assembler so as to make Unix portable to many other architectures. So immediately we see that C is a kind of minimalist abstraction that can work efficiently over many instruction sets. Further to what you say, even assembly language is not a perfect representation of the machine the code runs on nowadays; it knows nothing of the caches, pipelines, multiple dispatch, branch prediction, instruction reordering, and other magic that may or may not be happening on the machine it's running on.
Most of the compiler magic for C is optimization, although there are still some abstractions that are not 1-to-1 mappings. But if the optimization magic were removed and the code were compiled literally, it would still work the same (although it certainly helps to have programmed in assembly for that understanding).
@@psteven5 That is true. However, I have heard many old-hand C programmers complain that the "compiler broke my code", which basically means they were assuming C is much closer to the generated code and machine than it actually is. Which is to say, they had undefined behaviour in their code that worked for years or decades until some new optimisation broke it. Changing optimisation levels can certainly change the behaviour of one's code in the face of UB. I'm guessing it was a long time since they learned C, and they relied on compiler behaviour to guide them rather than studying the C standard specification as it has been refined over the years.
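A tiny reproduction of that failure mode, assuming nothing beyond standard C (the function name is made up): the classic post-hoc overflow check, which tends to work at low optimization levels and gets deleted by modern optimizers because signed overflow is UB:

```c
#include <limits.h>
#include <stdio.h>

/* UB: if x == INT_MAX, x + 1 overflows a signed int. The optimizer may
   assume that never happens and fold the whole test to "return 0". */
static int will_overflow(int x) {
    return x + 1 < x;   /* the check itself is the bug */
}

int main(void) {
    /* Often prints 1 at -O0 and 0 at -O2: "the compiler broke my code". */
    printf("%d\n", will_overflow(INT_MAX));
    return 0;
}
```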
Microcode is just an implementation detail. The CPU still has to fetch and decode the instructions we're familiar with before it breaks them down into micro-instructions.
Really sold me on the Zig ? operator. What's great about it is that, as someone who is neither good at programming nor knows Zig, it gives me confidence that if I start on the Zig journey, one day I will be ready to brave 'the next level', and all it takes is backspacing that question mark.
I'm of 2 extremes: Ruby for complex object-basedness (not merely OO), and clean C for compact UNIX-like CLI tools, as well as the occasional Ruby hot-spot speedup.
GDB, the GNU debugger (and the Godbolt website) brings it all together for me: it lets me single-step through the produced assembly code. Beautiful.
Understandable. These days I just write everything in Lisp, using C when needed, like interfacing with the system or libraries.
I started watching to learn about C. It's interesting to learn the foundation of these other languages. Kinda provides you context for what has been developed to give you better methods and less complexity.
One of the things I dislike about the move away from exposing pointers to the programmer is that data structures where pointers are the most natural representation, like linked lists and trees, actually become more confusing. This is one of the areas where I think Go really excels. I honestly don't think I fully comprehended the linked list until I implemented one with pointers in Go.
Or you can just implement a linked list in C.
@@rdubb77 Yeah, obviously an option.
@@Brian-ro7st That's how I learned pointers well. It really helps with the concept of needing a pointer to actually change the data itself, as opposed to just reading data or changing a copy.
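For anyone following along, a minimal C sketch of that lesson (Node and push are invented names): the pointer-to-the-head-pointer parameter is exactly what lets the callee change the caller's data rather than a copy:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;      /* the pointer IS the link */
} Node;

/* Take a pointer to the head pointer so the caller's head actually
   changes. (No malloc error handling in this sketch.) */
static void push(Node **head, int value) {
    Node *n = malloc(sizeof *n);
    n->value = value;
    n->next = *head;
    *head = n;
}

int main(void) {
    Node *head = NULL;
    for (int i = 0; i < 3; i++) push(&head, i);
    for (Node *p = head; p; p = p->next) printf("%d ", p->value); /* 2 1 0 */
    printf("\n");
    while (head) { Node *next = head->next; free(head); head = next; }
    return 0;
}
```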
Implementing linked lists in Lisp is also natural.
Knowing some C makes the difference between pass-by-reference and pass-by-value very obvious, whereas in, say, JS you just have to memorize when each happens so you don't shoot yourself in the foot.
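A minimal sketch of why C makes that distinction so visible (function names invented): everything is passed by value, and "by reference" is just passing a pointer's value:

```c
#include <stdio.h>

/* C always passes by value: x is a copy, the caller's variable is safe. */
static void set_copy(int x) { x = 42; (void)x; }

/* To mutate the caller's variable, pass its address. The pointer itself
   is still copied, but both copies point at the same int. */
static void set_through_pointer(int *x) { *x = 42; }

int main(void) {
    int n = 0;
    set_copy(n);
    printf("%d\n", n);   /* 0  -- only the copy changed */
    set_through_pointer(&n);
    printf("%d\n", n);   /* 42 -- changed through the pointer */
    return 0;
}
```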
The Primeagen's life and professional takes are always from the point of view of a well-versed, smart, and lucky man. Those don't apply to all the people following.
G.K. Chesterton mentioned!
The most fun I have programming is writing C for microcontrollers. There's something about poking bits around memory and they actually live right where I put them. Generally I don't even use heap alloc, just allocating static blocks and using a simple memory manager. It's miles away from my daily work.
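A hedged sketch of that style, not any particular firmware's API (pool, pool_alloc, and pool_reset are invented names): one static block plus a bump allocator, so everything lives right where you put it and there's no heap at all:

```c
#include <stddef.h>
#include <stdint.h>

static uint8_t pool[1024];      /* the one static block */
static size_t  pool_used = 0;

/* Hand out the next chunk, rounded up to 4-byte alignment. */
static void *pool_alloc(size_t n) {
    n = (n + 3u) & ~(size_t)3u;
    if (pool_used + n > sizeof pool) return NULL;   /* pool exhausted */
    void *p = &pool[pool_used];
    pool_used += n;
    return p;
}

/* "Free" everything at once, e.g. between main-loop iterations. */
static void pool_reset(void) { pool_used = 0; }

int main(void) {
    int *xs = pool_alloc(8 * sizeof *xs);
    pool_reset();
    return xs ? 0 : 1;
}
```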
Lmao, I imagined programming my current problem with gates and I chuckled 😋
It sounds like the abstraction level he likes the most is “memory and logical operations”
You should run through some old classic tales like the Story of Mel.
I love mechanical watches. It's awesome to see an old German man putting tiny, fragile pieces together to make something we take for granite (pun).
I think that C's influence on hardware has been underestimated as well. Look at the 8-bit world and how those assembly languages are architected. Then when you get to 16-bit and later (remember, C came about in the 1970s), you can see, even in extended ISAs like the x86, how instructions and architecture were added that are arguably at least in support of C, if not more directly influenced by it. Then get to the load/store architectures and see how well their assembly languages and architecture map to C.
Interesting, I prefer going the other way, i.e. starting with math and REDUCING the abstractions: functional programming.
That said, it depends on what project you're building and who you're building with. This video really only applies to hobby projects where you have control over those things.
I love C. I don't use it in practical applications, but it's very fun to use in easy to medium Leetcode problems.
Dont worry buddy. After Zig you'll migrate to nim before calling it home ;)
I'm still amazed at it being able to compile down to C and run on an ESP32.
What should I be inferring from the fact that the only representations of themself (?) that they put in this video are an anime avatar and someone wearing nitrile gloves with taped shut boiler suit sleeves to accomplish basic tasks? I feel like interacting with this person will lead to waking up naked in a basement chained to a metal pipe.
100% agree. Been doing C for years. I love it.
The more control you have, the more things you have to control and the more things are not controlled for you. It's like breathing manually. Good for control freaks, I guess.
It's the same reason why some companies prefer paying expensive cloud bills rather than paying people to take care of the infrastructure: it takes time, and time is expensive.
The Grand Daddy of pretty much ALL modern languages! All hail C!!!
Whoever was asking about OG lang, have they ever heard of "manually arranging vacuum tubes and patch cables"?
I don't like or wear watches, but I appreciate the level of art and craft.
"Assembly is smaller" spoken truly like someone who hasn't written a line of x86 assembly in their life.
Hey Prime, what are your thoughts on SVG and HTMX? Use an SVG editor to design the page and all its elements, and have HTMX be the event handler.
When you were talking about how tiny and how difficult it is to make mechanical watches, optical lithography for modern CPU production came to mind, and that seemed pretty hard too... soooo tiny.
I love C, it's the best language to me! Rust is interesting though, mostly because giving more information to the compiler enables more powerful optimisations... but I think C could add similar things and become way more powerful than Rust in the end.
I don't think it can, and I don't think it will. C doesn't break, and the reason Rust is able to give more information to the compiler is that the language's semantics are richer and more detailed. Changing C so that it could provide that kind of information would break a lot of things.
He's so afraid of exposure that he's treating this like he's living in a nuclear fallout zone.
Prime's body is a machine which turns 3 min video into 13 min video
I absolutely love C and ASM. Something like Zig or Rust is cool and all, but no one talks about formal verification, which nullifies both of their existences for me. CompCert and the Verified Software Toolchain (VST) are absolutely goated, and nothing you blue-hair devs say can change my mind. Coq is probably one of, if not the, best languages I've ever learned.
Yeah - I noticed they all have blue hair too. They are the ones that go around disrespecting the foundational pillars of civilization, and for sure C lang is one of those.
C didn't ship a formal verification toolchain, therefore it's unfair to dismiss new languages because they still don't have one. When they get enough traction, there might be some tooling that does that. Also, the whole reason C had this in the first place is how fragile the language's semantics are. So I love C too, but the sooner I can get rid of it the better in my book, because C has that tendency to suck the fun out of programming.
You should try Ada. SPARK is part of the language spec.
@@pierreollivier1 No language group to my knowledge has shipped a formal verification toolchain themselves, so your point about unfairly dismissing other languages based on that is largely moot; since none of them have it, it becomes a user's preference. We need to advocate for formal verification to the safe-language groups. The fact that VST and CompCert exist while safe languages like Go, Rust, or Zig haven't built, attempted, or even talked about formal verification shows a real lack of scope on their part about truly safe programming and safe programs. None of the safe-language groups are talking about or even referencing formal verification; you have to dig for small outgrowth groups, which are extraordinarily esoteric. That's the problem. Formal verification is about as esoteric as it gets, and without safe languages publicizing it, it's likely always going to be esoteric. Safe-language groups should be the ones pushing for formal verification, because it only strengthens their argument for the need for safe languages. They are the ones with a user base to plead with to develop such tools, because they have already sold them safety. They are also the ones that need to show the other side of the safety coin, since safety is their whole argument. These groups are in a prime position to lead the charge in making formal verification more mainstream. Relying only on formal verification or only on the language is never going to be fruitful. Useful, sure; progressive, absolutely; but the real magic bullet here is a safe language plus a formal verification toolchain. Until we have both, nothing is safe. It's all about mitigation and the acceptance of dangers, for now.
Full formal verification is out of scope for a general-purpose language because it puts too many constraints on the user. Things like borrow checking aim to "prove" that some properties hold for software by adding chained constraints, without having to bring in the whole machinery needed to "prove" more things about the program.
Writing provable code requires a lot of effort, both to write code that can mathematically be proven at all, and because even then you need to hand-hold the software to allow it to prove things anyway. Meanwhile, most software doesn't need formal proof that it works; unit tests serve as empirical proof that key values are handled properly, and we trust people to write code in a way that unit tests can effectively cover real use cases.
Most people using programming languages are doing engineering, not maths. There's a tradeoff to be made between cost, efficiency, and correctness, and correctness is rarely the top priority. And languages are written for the people who use them (usually by people who use them), so of course most languages don't focus on provability.
It still has its place, and research on the matter is cool. I just wouldn't say it's a big deal that there isn't a focus on it.
Let's talk about this in 50 years when formal proofs in most software is actually something reasonable to do.
2:20 Said another way, you'd have to be a TimepieceMaster. ;)
C is pretty simple, but there is so much I hate about it, because it makes me think about stuff that can be solved by tooling without giving up anything I consider remotely valuable. Here's a non-exhaustive list:
- Declaration order in files matters
- no generic containers without macros (hashmap, vector, etc.), as sketched below
- hardware dependent integer sizes by default
- standard library varies greatly between platforms
- separate header files
- fallthrough in switch statements
- includes, macros and the preprocessor in general.
- polymorphism only via void*
- no references (i.e. pointers that can never be null)
- no namespacing of any kind
- enums polluting global namespace
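On the generic-containers point, here is a hedged sketch of the usual macro workaround (DECL_VEC, Vec_int, and vec_int_push are all invented names; no realloc error handling): the macro stamps out a type and a push function per element type, which is exactly the boilerplate the list above is objecting to:

```c
#include <stdio.h>
#include <stdlib.h>

#define DECL_VEC(T)                                              \
    typedef struct { T *data; size_t len, cap; } Vec_##T;        \
    static void vec_##T##_push(Vec_##T *v, T x) {                \
        if (v->len == v->cap) {                                  \
            v->cap = v->cap ? v->cap * 2 : 8;                    \
            v->data = realloc(v->data, v->cap * sizeof(T));      \
        }                                                        \
        v->data[v->len++] = x;                                   \
    }

DECL_VEC(int)   /* expands to Vec_int and vec_int_push() */

int main(void) {
    Vec_int v = {0};
    for (int i = 0; i < 5; i++) vec_int_push(&v, i * i);
    for (size_t i = 0; i < v.len; i++) printf("%d ", v.data[i]);
    printf("\n");   /* 0 1 4 9 16 */
    free(v.data);
    return 0;
}
```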
I am excited about Zig and the inevitable influx of people who are fed up with the silliness of JavaScript.
What's silly about JavaScript?
@@leeroyjenkins0 search up "wtfjs"
the bit about white paper was a solid joke
Haven’t gotten there yet; were we talking about the 8 people using Haskell again?
Just got there. BOOM, called it!
There is absolutely nothing wrong with starting a new project in 2024 and choosing to write it in C.
I both use and like the "modern" systems languages like Zig and Rust, but don't understand the hate C gets, nor do I refuse to use it simply because those others exist. I admit that I often do choose one of those over C lately, but this is mainly because I like to expand my knowledge with new languages, not because I feel there is something inherently bad about C.
I got good at C and found out it made me pretty good at every other language. Because I played "The Farmer Was Replaced" and accidentally learned Python in like an hour.
The binary files that you send for execution are already a high-level language: they get translated into micro-ops, pipelined, optimized, run with predictive branching/speculative execution, etc. There is so much going on under the hood that you have no control over... You cannot program how your code is pipelined, for example, or program how the speculative execution is carried out. The binary code is just the lowest level of abstraction that you as a programmer have access to, but there is a whole other stack of complexity going down from binary to the hardware, just as there is one going down from the high-level language to the binary. It's absolutely crazy that with all these layers of abstraction everything still kind of works OK.
Rust has an insanely nice package manager that bundles everything up nicely AND includes all dependencies, but it adds heaps of abstraction.
C has no package manager, and you need to do slightly more complex builds on top of installing libraries just to get a binary, but it lets you have lots of control.
Both are a double-edged sword; there is no one programming language that can do everything flawlessly,
because there is no perfect language, and either way you are going to make sacrifices.
I know this is an old vid, but I'm watching this at fucking 5am and I was not expecting the 10:54 flashbang; my eyes started burning for a sec.
The compiler magic is currently the real problem. You write down something like `a[i] = 5;` and the compiler figures out this shouldn't have an effect and removes the "store" assembly instruction. Now someone throws in `volatile`... yeah; but nowadays convincing the compiler to do what you wrote down can be absolutely horrible.
I think the reason behind this obsession with optimization is that compiler developers just assume (unfortunately correctly) that people writing code in C nowadays, on average, have no clue what it might translate to in assembler or whether it's efficient code. The result: the compiler will NOT translate your code into straight assembler; instead it will first transform the code into something which hopefully should do the same (with lots of assumptions about the environment) and then translate that resulting code into assembler. That's quite a nightmare, because the assumptions under which these transformations are done are not obvious at all.
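A tiny reproduction of that `a[i] = 5;` dead-store removal, as a sketch you can paste into godbolt: compiled at -O2, the store into the plain array disappears, while the volatile one must stay:

```c
void demo(void) {
    int a[4] = {0};
    a[1] = 5;               /* never read again: at -O2 this store
                               (and the whole array) is removed    */
    (void)a;

    volatile int b[4] = {0};
    b[1] = 5;               /* volatile: the store must be emitted,
                               even though nothing ever reads it   */
}

int main(void) {
    demo();
    return 0;
}
```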
The video smacks of GPT-style verbiage.
I cracked and went to C (definitely not C++), because it allows me to learn, but I also realize there are a lot of gotchas with it. So it makes a great learning and exploratory language, but it seems like a terrible option for actually building stuff. A great discovery, prototyping, or proof-of-concept tool, but for building, I think abstracting those gotchas away, once you are aware of them, is what you want.
The concern and critique of the video is that, yes, if you're building something yourself, it is probably more enjoyable and satisfying. When other people are potentially going to change your stuff, or you work on a team, that all introduces a lot more variables and levels of complexity that I think make C not the best tool for that type of scenario. Not that a team can't build in C, but it would probably require a whole different model of collaboration than the tools made with that in mind, like the more modern Zig and Rust sound to be (neither of which I am familiar with, especially not Zig yet, but I like pointers, so who knows).
I'd drag the kids analogy even further; you need like 30 to even 100 tries to start liking something, and it's always the same story. One day they say it's their favourite meal (while just the previous day they were hating it), and then it sticks; it's a "safe" food for the rest of their lifetime.
Yes, this is an awesome take, a profound vid. Amid all the clutter, here is your clarion tune; listen to it.
I agree that C strikes a good balance between knowing what the computer really does and being able to program it efficiently. But yeah, obviously there's a ton of compiler magic; there's the abstraction provided by the OS itself, there are crazy hardware optimizations, crazy CPU firmware optimizations, there's even non-deterministic neural branch prediction. But we can't un-abstract those; this level of complexity is still required for the current level of performance. Whereas in a lot of languages abstraction causes loss of performance, so C strikes the right balance there as well.
And yeah, languages like Zig or even Go are in many ways direct improvements over C. Zig doesn't really abstract anything away; it just provides developer ergonomics, means to understand the code better by just reading it. For the most part it doesn't physically prevent you from shooting yourself in the foot, but it more or less makes it very clear when you're about to do so. Those features aren't abstractions, though; it's just syntax coercion. When it comes to Go, it obviously does abstract quite a bit, but not in a way that's incomprehensible, and in a way that might not fit all problems but fits a large class of problems exceedingly well without much performance sacrifice. Same with Rust, though it does take its syntax and preprocessor coercion quite far, to the point where it's easy to forget about the underlying hardware/OS interactions.
In order to be a good debugger you need to be a master of the language.
Just wish C had some new features: namespaces, better built-in memory tools, auto pointers, type-aware macros, automated #include guarding, pre- and post-handlers bound to procedures, etc.
And functions inside structs (void pointers don't count).
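The usual C stand-in for methods is a function-pointer field plus an explicit self argument; a minimal sketch with invented names (Shape, rect_area):

```c
#include <stdio.h>

typedef struct Shape {
    double w, h;
    double (*area)(const struct Shape *self);   /* "method" slot */
} Shape;

static double rect_area(const Shape *s) { return s->w * s->h; }

int main(void) {
    Shape r = { .w = 3.0, .h = 4.0, .area = rect_area };
    printf("%.1f\n", r.area(&r));   /* 12.0 -- 'self' passed by hand */
    return 0;
}
```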
If you badger a kid to eat something, though, they may try it and, just to spite you, decide not to like it for years, decades, maybe their whole life. So I think the best approach is to just have foods available and ask neutrally, "do you want X?" Never ever push the food, just have it available. Eventually they'll get curious and try it just because other people eat it.
That's one thing I dislike about some platforms. Because of the framework switching, it's hard to really become the master of one.
With great power, comes great trouble.
With great power comes a great core dump.
I hope my Aunt May won't die when I'm endowed with such power.
Mechanical watches are the graphql of timekeeping
Ada gives you complete control over and clarity about where memory is stored and used, but it isn't very painful to use. Its pointer equivalents (kinda; it actually has addresses that can be used more like C pointers if you really want) are access types, which can also be declared as not null. Anyone who hasn't tried Ada really ought to. The amount of time and money spent designing Ada 83 dwarfs anything else, and it hasn't had or needed any major overhauls in 40 years, just steady, significant improvements. The Ada 2022 ISO standard is still being implemented. It also wasn't built by committee but by competition.
For me, it's the time thing. I found that it just gets easier with time. I don't really get memory leaks. If, hypothetically, I make a piece of code that does some complex stuff on the heap, and I'm not in the mood and it's a situation where a shortcut is warranted, I'll just fork the process, let it do its thing, pipe stuff if I have to, and let the OS clean it up when that process exits. A little bit of overhead, but you save some time, and you get some fault tolerance. You could also just go the NASA route and never touch the heap.
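A POSIX sketch of that shortcut (everything here is illustrative, not the commenter's actual code): do the messy heap work in a forked child, pipe the result back, and let process exit be the garbage collector:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;

    pid_t pid = fork();
    if (pid == 0) {                    /* child: allocate freely */
        close(fd[0]);
        char *big = malloc(1 << 20);   /* never freed, on purpose */
        snprintf(big, 1 << 20, "result: %d", 42);
        write(fd[1], big, strlen(big));
        _exit(0);                      /* OS reclaims everything  */
    }

    close(fd[1]);                      /* parent: read the result */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf - 1);
    buf[n > 0 ? n : 0] = '\0';
    waitpid(pid, NULL, 0);
    printf("%s\n", buf);
    return 0;
}
```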
Haskell is just the Rick Roll of the programmer community
I would argue that assembly language is much "bigger" than C on modern CISC systems. There are zillions of obscure instructions and myriad different CPU states and configurations, and C can really simplify that into a more standard arrangement of data and functionality.
Aim for the moon.
End up halfway there.
Has been and will always be my motto.
It's far enough... it's also nowhere...