Something to note is that the size of your data can affect performance as well as memory usage. CPUs are specifically designed to handle 32-bit and 64-bit values very fast, and sometimes, counterintuitively, an 8-bit value may take longer to process. So, as with everything, premature optimisation is the root of all evil. Keep the age as a 32-bit integer for now; if you have 10 million of them and have identified the memory usage as a real problem, _then_ go down to a u8 or use bit-packing methods. It's actually even more nuanced than that, because of cache locality: smaller data can be faster or slower depending on the circumstance. But that's very complex and should be left to experimentation if the need arises.
This is why I think using fixed size integers is a mistake in almost any context that isn't data serialization and/or protocols. For performance conscious parts of the codebases of my recent projects I'm considering having a type selection system that will define word types that will be the optimal size for the CPU of the current platform, like word, dword, qword, etc. Choosing types for your variables is a whole can of worms.
@@Acceleration3 but you have to consider that register access time (what you describe) isn't everything. You also can't treat an object like a loose collection of primitives. For example, let's stick to the students example. If you make age, id, birthday, ... all 64-bit, one student is a huge object, with tons of wasted space. If you then have an array of students, that wastes enormous amounts of space, at some point leading to a cache miss. Then the CPU has to wait for RAM to load the rest of your mostly empty data, and at that point your register access times are outweighed by multiple orders of magnitude. Side note: the optimal spacings are often chosen by the compiler anyway, meaning that a student with one 32-bit value and two 8-bit values will in the end be 64 bits long. The reason is that the compiler knows you need at least 8 bits for a particular value, but adding empty space beside it doesn't change the code. Tldr: by trying to be smart you waste space and keep information from the compiler, making the result worse. A datatype size is more or less a suggestion to the compiler, and it will make smarter decisions than you will. By giving it false information you will not achieve improvements.
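The padding effect described here can be sketched in Rust (the struct and field names are illustrative, not from the video; `repr(C)` is used to get the predictable C-style layout, since Rust's default repr may reorder fields):

```rust
// One 32-bit value plus two 8-bit values: 6 bytes of data,
// but alignment rules pad the struct out to 8 bytes.
#[repr(C)]
struct Student {
    age: u8,   // offset 0
    grade: u8, // offset 1
    // 2 bytes of padding here so `id` lands on a 4-byte boundary
    id: u32,   // offset 4
}

fn main() {
    // Prints 8, not 6: the compiler inserts padding for alignment.
    println!("size = {}", std::mem::size_of::<Student>());
}
```

So shrinking individual fields doesn't necessarily shrink the object; the compiler rounds the layout up to satisfy the alignment of the widest field.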
@@redcrafterlppa303 I know I'm making matters more complex, but if you're going to have lots of students, a better approach to improve cache locality and reduce memory usage without taking a performance hit COULD BE* using a Struct of Arrays. Have one array for ages, another one for IDs, another one for phone numbers... The first student created will have ages[0], ids[0], phoneNumbers[0]... COULD BE: IF your code is going to operate on only a few of the fields at a time. Operating on all the ages, then later on all the phone numbers... This is Data-Oriented Design: structuring your data so you don't keep jumping around in both data memory and code memory.
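The struct-of-arrays idea can be sketched in Rust like this (field names and types are illustrative; this is a minimal sketch of the layout, not a full implementation):

```rust
// Struct of Arrays: one contiguous array per field, instead of
// one array of whole Student records. Iterating over `ages` alone
// touches a dense block of memory, which is cache-friendly.
struct Students {
    ages: Vec<u8>,
    ids: Vec<u32>,
    phone_numbers: Vec<u64>,
}

impl Students {
    fn add(&mut self, age: u8, id: u32, phone: u64) {
        // Student i is the i-th entry of every array.
        self.ages.push(age);
        self.ids.push(id);
        self.phone_numbers.push(phone);
    }

    // Operates on a single field across all students,
    // the access pattern SoA is designed for.
    fn average_age(&self) -> f64 {
        let sum: u64 = self.ages.iter().map(|&a| a as u64).sum();
        sum as f64 / self.ages.len() as f64
    }
}

fn main() {
    let mut s = Students { ages: vec![], ids: vec![], phone_numbers: vec![] };
    s.add(20, 1, 5550001);
    s.add(22, 2, 5550002);
    println!("{}", s.average_age()); // 21
}
```

The trade-off is exactly as stated above: this wins when you sweep one field across many records, and loses convenience (and locality) when you need all fields of one record at once.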
@@besknighter that's certainly possible, and it's what databases are for. Operating on larger amounts of data is something databases should be used for. Structuring in-memory data like this isn't all that helpful and leads to messy, hard-to-read code.
I appreciate how almost everything that’s spoken is demonstrated on screen, even going as far as to show real error logs from the different programming languages. Thanks for making these videos, great refresher and learning material.
Yeah, I know. I was supposed to include a little animation of the Zig logo saying "are you challenging me?" I just forgot. However, as someone else said, they don't look like that in memory, although with bitwise operations you can do anything with individual bits.
There is the same thing for Rust, just as a module. These u31s seem iffy. It would be nice if we could pack them nicely together with NULLs etc., so that, say, sizeof(Option<u31>) == sizeof(u32).
@@Lord2225 I would think they are amazing for things like Rust enums. If you have types in a struct that are oddly sized and guarantee padding, you can fit the variant bits in there and create zero-cost enums, just because a type sacrificed some bits it didn't need anyway. This is exactly what I do in my language, which is heavily inspired by Rust but tries to fix the sharp corners of the language, like lifetimes and dyn dispatch.
Being a rookie to programming and languages as such, I absolutely love how you touched upon stuff that I wouldn't have bothered learning about otherwise. What an absolutely great way of explaining usually boring stuff in an easier-to-understand and fun way. Way to go bro!
@@keppycs the channel owner said they were not a native speaker in some other comment replies which is why they use the AI voice. It's definitely the script.
@@Tech.Library it's a speech synthesizer, probably something like UTAU (used for music production) or eSpeakNG (a utility more than anything) should give you similar results
I write Java code for my school's engineering team and holy hell I hate seeing every single value be a double for NO reason. 8 bit signed integer? Sure. But there is no need to have the precision of a *double* of all things. At the very least, if you need precision down to the decimal point, then use a float. What's worse is that the people that develop the libraries should be a bit more considerate of their resources than that given that they are (hopefully) a lot more mature than me.
Best decision I made this year. It's the only reason I'm getting these channels suggested. Learnt about Zig, Rust, the Linux kernel, assembly, an absolute rabbit hole of knowledge I didn't know I was missing.
The best bit for me was his remark about automatic type coercion when doing things like "adding" a string and a number: "and this kind of bullsh*t is marketed as a feature". It reminds me of why (among so many other reasons) I hate php so very very much.
@@danielscott4514 honest question, is automatic number to string conversion really that bad? I see people shitting on it all the time but IMHO it's mostly harmless (at least in a sane language). Many languages have misfeatures which are at least 1000x worse (dynamic typing, everything being a reference type, nullability by default, etc.). Of all these things why do people fixate on int + string so much?
@@k2aj710 Certainly being able to do something like "number of results: " + resultCount (where resultCount is a numeric variable) is something that is pretty harmless most of the time. However, if your language lets you concatenate a number onto a string - and if it uses the + operator for string concatenation as well as numeric addition - then what happens when you try "3" + 2? Do you get "32", or 5? For what it's worth, I took the "and this is marketed as a feature" comment in the video's narration to be aimed directly at dynamic typing generally rather than at the specific example they gave. Dynamic typing is what causes the above kind of conundrum to be a thing. No strictly typed language will allow something as vague as "3" + 2. In c# I would have to do it as "3" + 2.ToString() if I wanted "32", and I would have to do it as Int32.Parse("3") + 2 if I wanted 5 as the result. The very nature of the language eliminates that whole class of bug, which easily comes about when (normal non-superhuman) programmers are not intimately familiar with every last detail of their dynamic language's type coercion behaviour. For what it's worth, on the subject of doing something like my first example: as a regular user of c#, I'm very conditioned to combining strings and numbers using various methods that accept a format specifier, which outputs the number with things like currency symbols, commas to separate thousands, various numbers of decimal places etc). I find that in many cases you want more control over how your number "looks" as part of a string than simply concatenating the number in whatever default representation the programming language uses. So, in my view, the value of being able to write code like "number of results: " + resultCount is questionable anyway. Although I spend quite a bit of time in C# currently, I've coded plenty of Javascript, and suffered far too much php (which can truly make Javascript seem sane). 
Dynamic typing combined with some bad language design can really ruin your day (especially since the bugs appear at runtime only). I'm far happier and more productive in c# where a huge number of errors are definitely caught at compile time, and the lack of any ability to write "string" + 3 avoids various footguns that aren't worth risking for the sake of avoiding "string" + 3.ToString() (or various more modern c# alternatives, like string interpolation, but you get my point). Given this is a comment on a video about low-level performance, as a side-side note: I rarely ever actually use the + operator to concatenate strings in c#. The reasons why are generally well known (if you're not sure why + is bad for concatenating strings, google "c# string concatenation best practice" and you're bound to get a pretty good rundown of how concatenating strings works and which approaches are best in which cases). They're considerations in any language that gets deep enough into the weeds to give you at least some choice over how much memory gets allocated, used, and eventually destroyed in the process of combining multiple strings.
@@k2aj710 Gawsh my earlier reply ended up being a tome (ignore things like the side-side note on string concatenation - I put that there for the benefit of someone else that might read it later ... kind of StackOverflow learned-behaviour I think). Anyway, I just realised I've got so many horrible memories of things that various dynamically typed languages have done to me over the years that I probably didn't really answer your question (since, I've just bothered to check and Javascript seems to give the string in a string + int operation some kind of precedence, and it actually does what I would consider the preferable thing with "3" + 2 - and gives "32"). That said, I've just looked and at 7:26 in the video there are variations on that "3" + 2 theme which do end up doing arithmetic addition instead. I think probably the string + int thing - if it gets mentioned a lot - is more of an easy to explain and demonstrate example of the greater problem of dynamic types in languages. It kind of builds from there into the kinds of problems you can have with code like: if (myVar) { ... } where there are all kinds of rules for what string and numeric and every other kind of value might mean "true".
Regarding the alternative at 8:00: some architectures prefer aligned addresses, and this leaves the least significant bits unused, usually 3 bits on x86-64. If you're willing to sacrifice them, you can encode 8 types that fit in a register. This is the tagged architecture used in dynamic programming language implementations. For example, a fixnum is data whose lowest 3 bits are zero; it encodes integers between -2^60 and 2^60-1. This fixnum representation fits perfectly in one register. Addition and subtraction of fixnums can use the same machine instructions as ordinary integers; a right arithmetic shift needs to be applied after multiplication. This example shows that even with clever optimization and representation, there will be some overhead in a dynamic programming language. Other compound data like records (or structs) and arrays are pointers, which are aligned, so their least significant bits can store the tag.
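The fixnum scheme described here can be sketched in Rust (the tag values and names are illustrative; real implementations differ in details like overflow checking, which is omitted here):

```rust
// Low 3 bits are the type tag; tag 0 means "integer" (a fixnum).
// The integer payload lives in the upper 61 bits.
const TAG_BITS: u32 = 3;
const TAG_MASK: u64 = 0b111;
const TAG_FIXNUM: u64 = 0;

fn encode_fixnum(n: i64) -> u64 {
    // Shifting left by 3 leaves the three zero tag bits in place.
    // (Real implementations also check n fits in 61 bits.)
    (n << TAG_BITS) as u64 | TAG_FIXNUM
}

fn decode_fixnum(v: u64) -> i64 {
    // Arithmetic right shift restores the sign.
    (v as i64) >> TAG_BITS
}

fn main() {
    let a = encode_fixnum(40);
    let b = encode_fixnum(2);
    // Addition works directly on the tagged values: adding two
    // numbers whose low 3 bits are zero yields low 3 bits of zero.
    let sum = a.wrapping_add(b);
    assert_eq!(sum & TAG_MASK, TAG_FIXNUM);
    println!("{}", decode_fixnum(sum)); // 42
}
```

This is why addition and subtraction need no extra instructions, while multiplication does: `(a<<3) * (b<<3)` carries an extra factor of 8 that the arithmetic right shift removes.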
Something fun about sizes of things in Rust: for Option<T> where there is a possible invalid state for T, Option<T> is represented as just T, with None being the invalid state. For example, an optional pointer will just be a null pointer in memory if it is None, rather than actually using an extra byte for the discriminant. This concept applies to other enum types as well.
This will only work for types with an invalid state, such as Option<&T>. For something like Option<u32>, all zeros would be an ambiguous state, since it would represent Some(0) and None at the same time. That optimization for pointers is only possible because each type in combination with Option is treated individually. An Option<u8> would likely be 16 bits, and an Option<u16> would maybe be 32 bits wide. For why it's not 24 bits, look into "struct padding".
@@redcrafterlppa303 I said if there is an invalid state. Zero is a valid number. I thought I remembered testing this with Option<NonZeroU8> and it working. Update: just tried it, and Option<NonZeroU8> is in fact one byte, or at least my editor says so.
Also important to add is that, according to the Rustonomicon, such an Option<&T> can be represented as just &T, but doesn't need to be, which can have consequences when using something like transmute.
@@redcrafterlppa303 I still think my "invalid state" idea might be correct. I just tested it. Specifically, I made an enum called Test with 255 variants and got the size of Option<Test>, which was in fact 1 byte. With 255 variants, every _bit_ is used, but the combination 11111111 remains unused, which, if I am correct, is what None is represented as in this case.
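The niche optimization this thread is discussing can be checked directly with `std::mem::size_of` (a small sketch; `NonZeroU8` is the standard-library type whose layout guarantee with `Option` is documented):

```rust
use std::mem::size_of;
use std::num::NonZeroU8;

fn main() {
    // None is encoded as the null pointer: no extra discriminant byte.
    assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());

    // NonZeroU8 bans the value 0, so None can be encoded as 0:
    // the Option is still exactly one byte.
    assert_eq!(size_of::<Option<NonZeroU8>>(), 1);

    // Plain u8 uses all 256 bit patterns, so there is no niche:
    // a separate discriminant is needed, giving 2 bytes.
    assert_eq!(size_of::<Option<u8>>(), 2);

    println!("all size checks passed");
}
```

This matches the observations above: the optimization applies exactly when the inner type has at least one bit pattern it never uses.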
See, right at the start: memory usage is important, but making it behave the way you intend comes first. When you're prototyping functionality (and I've worked in circumstances where we didn't know what the final data sizes would be until later in the process, just that it would be an integer or a decimal), we used larger containers, and later I refactored the code once we knew what the final implementation limits should be. You should always optimize when you see a place to do so, but you can look at the memory footprint at multiple stages of development. I like a workflow of "get it to work, get it committed, benchmark, look for concerns, pull request".

Also, I'm glad to see I'm not the only one who doesn't appreciate it when a language does an implicit cast and operates on the result. That has absolutely wrecked me before, where I had to read an entire class to find the error, whereas a strongly typed language with explicit casts would have said "this isn't something we can do implicitly; if you really want that behavior, go explicitly cast it". I'd rather have an error tell me "hey, can't do that implicitly" and let me go review it, because chances are that if I didn't cast it myself at the time of writing the code, I made a mistake and fed the function something I didn't mean to.

This is also why, by default, my IDE is set to treat warnings as errors, so it won't compile if there are warnings, and I can go review those warnings and determine whether I just missed a nullable declaration or made a more serious error. (During rapid prototyping I'll toggle that off, and once it works I'll go through the warnings then, but the default workflow is to handle any warnings before building.)
Dude, you are literally one of the best when it comes to explaining a complicated topic in a basic way. I am a C# developer with 6 years of experience and I've been researching computer science and electronics for a few months now. Simply because I wanted to understand how things actually work. And I can say without a doubt that your videos helped me a lot to understand the way things work. Thank you!
To summarise this for my reference later, knowing variable types at compile time: 1) saves space, as you know the exact amount needed; 2) makes code more readable, with no hidden logic behind the scenes; 3) saves time and more space, since you don't need to store the data type and read, write, and compare it later.
The speed of using 8-bit variables versus 32-bit variables can depend on several factors, including the specific CPU architecture and the access pattern of your program. On modern 32-bit and 64-bit CPUs, operations on 32-bit and 64-bit integers are usually the fastest, because these CPUs are optimized for these sizes. Operations on 8-bit integers can be slower because the CPU may need to perform additional operations to handle the smaller size. For example, the CPU might need to zero out the upper 24 bits of a 32-bit register to perform an operation on an 8-bit integer. However, using 8-bit integers can save memory, which can potentially improve cache efficiency and overall performance if your program is memory-bound. If your program accesses a large array of 8-bit integers, it can fit four times as many integers into the same amount of cache compared to an array of 32-bit integers. This can reduce cache misses and improve performance. So, whether it's faster to use 8-bit variables or 32-bit variables can depend on the specific circumstances. It's not a myth that operations on 8-bit integers can be slower on modern CPUs, but the impact on overall performance can vary. As always, if performance is a concern, it's best to measure and optimize based on the specific requirements and behavior of your program.
I'm at 4:30 and while this is generally true, for most uses an int32 is what you need, even if you're wasting some space. This is because due to modern architecture of CPUs, 32 bit int operations will be much faster than, say, byte operations. Conversely the best type for graphics calculations is float. Of course, sometimes it's beneficial to have more choice. But in many programming languages, the default is default for a reason and you still have that choice.
It amazes me how stupid these comments are. 8 bit ops are equally as fast as 32 or 64 ones. It's honestly mindless people who repeat nonsense without understanding. Sure the throughput is lower. Clueless people shouldn't make comments.
@@gregorymorse8423 It amazes me how stupid you are. 8-bit operations will definitely not be as fast as 32- or 64-bit ones. There are instruction sets that don't support 8-bit operations, so those 8 bits have to be converted to 32, math done on that, and the result converted back to 8 bits. Those conversions take up precious CPU cycles. On the other hand, smaller variables (like 16 or 32) vs larger ones (like 64) can win performance-wise if the constraint is memory, i.e. more tightly packed data won't have as many cache misses. It's honestly mindless people who repeat nonsense without understanding. Clueless people shouldn't make comments.
@@cheesepie4ever because modern architectures don't support bit operations on 8 bit data. This means these 8 bits have to be extended to 32, arithmetic done on that, and then converted back to 8 bits. This means extra CPU cycles for every operation. On the other hand, 8 bit operations can be potentially faster if you're operating on huge sets of data at a time - in which case the extra operations wouldn't hurt as much as cache misses, since 8 bits will obviously be more tightly packed and you can fit more of them in cache. As with all things performance related, don't theorize; benchmark. See for yourself.
I started learning Rust a few days ago after having 20+ years of Java and higher level language experience. It feels great to get closer to the metal and I can already see this series of videos will be invaluable for filling in the blank spots in my knowledge. Thanks.
5:50 this is basic beginner level JavaScript knowledge. All numbers are floats. The only time an actual integer gets used is when doing bitwise operations, which is the case where only integers get used. If a person does not know that all numbers are treated as floats, they don't know JavaScript. When teaching this, if the person already knows about int/float differences, I bring this up on day 1 as one of the first points to explain the type system, and I don't think hidden documentation or experience are good descriptors here.
7:33 Actually, in some JavaScript interpreters, the "default" type is a 64-bit double, but other types can be expressed by setting the exponent to the specific value used for NaN and Infinity. As long as those values are reserved, the remaining 52 bits can be used to represent other types of values, including reference types, etc.
As far as I'm aware it's likely to be more than just "some interpreters". NaN boxing is an optimization used very widely in interpreters where the only number type is floating point. It provides a fast way to encode proper integers, allowing the use of the faster integer operations.
@@gregorymorse8423 NaN boxing uses doubles as tagged unions. All 11 exponent bits are 1s, and the most significant bit of the mantissa is a 1 (the NaN is marked "quiet"). The exact contents of the remaining bits are effectively meaningless to floating-point math: the number is still a NaN regardless of the data stored in them. That means we can use the lower 52 bits of the mantissa for anything we want. Currently x86_64 pointers are only 48 bits wide, so pointers fit in these bits fine. We can also store integers or anything else we want in those bits, and we can operate on the values using integer math (with the exception that if a value would overflow the bits we use, that's a problem we need to deal with separately). So you can store a 52-bit (or smaller) integer in a NaN-boxed double and treat it as though it's a regular integer for more efficient math. If you are only storing integers, you can use all 52 bits; if you are NaN boxing multiple types, you use several upper bits as a tag and still have most of the bits dedicated to the data you want.
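A minimal sketch of this in Rust (the bit layout and the 48-bit payload width are illustrative choices, not any particular engine's scheme; real engines also have to canonicalize NaNs produced by arithmetic so they don't collide with boxed values):

```rust
// Quiet-NaN prefix: all 11 exponent bits set plus the top mantissa bit.
const QNAN: u64 = 0x7FF8_0000_0000_0000;
// We use 48 payload bits here, enough for current x86_64 pointers.
const PAYLOAD_MASK: u64 = (1u64 << 48) - 1;

fn box_int(n: u64) -> f64 {
    debug_assert!(n <= PAYLOAD_MASK);
    // The payload rides in the low mantissa bits of a quiet NaN.
    f64::from_bits(QNAN | n)
}

fn unbox_int(v: f64) -> u64 {
    v.to_bits() & PAYLOAD_MASK
}

fn is_boxed(v: f64) -> bool {
    // Anything carrying the full quiet-NaN prefix is a boxed value.
    (v.to_bits() & QNAN) == QNAN
}

fn main() {
    let v = box_int(1234);
    assert!(v.is_nan());      // float math still sees a NaN
    assert!(is_boxed(v));
    assert_eq!(unbox_int(v), 1234);
    assert!(!is_boxed(3.75)); // ordinary doubles are untouched
    println!("boxed {} into a double", unbox_int(v));
}
```

The point the thread makes falls out directly: every ordinary double is representable as-is, while integers (or pointers, plus a tag) hide inside bit patterns that float math can never distinguish from NaN.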
Technically, Javascript's implicit conversions aren't undefined behavior. They are defined in the ECMAScript spec, unlike undefined behavior in C or unsafe Rust which is literally whatever the compiler implementation decides to do. But implicit casts are not intuitive, so code can behave in ways that are unpredictable.
One of the ways to define a programming language is by its type system. I remember when I started with C# and the .NET Framework, I struggled to read and implement these verbose types without getting errors. But they were worth it, because they helped me learn how to write Python code professionally. Although I hated the semicolons.
"The reasons behind this limitation are beyond the scope of this video" Noooo...! That's exactly what I was hoping to learn, haha. I'll subscribe if it means you'll cover that in the future!
I remember programming on the DEC Alpha back in the early 90's, one of the first 64-bit processors. The manual said to use the processor's natural size for numbers, as that is what it fetches. If you make the size smaller (out of some size worry), the CPU fetches the full size anyway and then has to cut it down to the size the programmer suggested, so more instructions get executed. For most apps it slows things down, as more processing is required to downsize to what the programmer asked for. So making it smaller is not always optimal.
My first computer was a Nascom 1 with 1K of video RAM and 1K of user RAM (960 bytes available to me). I did a lot of Z80 coding, so things like the stack, pointers and different length integers are second nature for me. Hearing "Memory is limited" in 2024, when I have 64 GB RAM, is quite amusing!
Your videos are teaching me about considerations that I was previously unaware of. I particularly like making use of arrays when programming. I am realising that it is very important to specify the necessary number of cells per array, and also the specific type of information I need.
Jesus what an informative video. I had a general knowledge of this but the way you explained brings everything together so well. Also I like the way you sprinkle a bit of humor throughout the video lol
I like Rust's explicit static types. Some programming languages are counterintuitively implicitly statically typed. Even C falls into this category: despite it being a statically typed language, the data types are implicit, as they can differ between hardware. There is an stdint header file to help with this, but it doesn't eliminate all problems, because not all libraries use it. Even if you use it yourself all the time, there is no way to guarantee everyone will, and so occasionally you will import someone else's code whose behavior is not only unclear, but may even be filled with bugs caused solely by you running it on a different piece of hardware. I ran into this once, spending ages trying to debug someone else's code, only to figure out that the microcontroller I ported it from defines the implicit signedness of a char differently from my own. I honestly can't decide whether that was the programmer's mistake or just bad code. Personally, I think it is a flaw in the language. I cannot see any justification for making the implicit signedness, or even the width, of primitive data types platform-dependent. Hot take, but code that is written the same should generally run the same on every platform. The only exception should be when the programmer _explicitly_ puts one in. Anything implicit and platform-dependent is bad language design imo.
This is a good intro to software engineering. Though I already knew this, sort of, it is still good to review. It is why I prefer statically typed languages, or at least the option to use types.
Another thing with dynamically typed languages is that they're great for making quick and dirty scripts or smaller programs, but the second a project gains any size, it becomes hell, because by default you don't know what types of values are being passed into functions, so you either have to guess or read up on how every function you try to use works. Not to mention it makes LSPs less than stellar, because they have no idea what types of values a function expects, or what methods are attached to a class instance, etc. Yes, I know that JS has TypeScript to alleviate that issue, but imagine that we had to make a whole wrapper on top of a language to deal with that problem.
Thank you for this type of programming content. Programming channels usually never touch on these types of concepts because they assume you already know them.
Size matters, but what you put in the variables is more important: meaningful, helpful information matters more than the technical aspects of the variables.
I would add an asterisk to your description of interpreted languages. A lot of them are going the JIT-compiler route, so you do get some benefits of normal compiled languages, like real primitives and no interpreter step running for every line read. Of course it's not perfect, but it's not as bad as you'd think.
I like the style / cadence / pacing that you explain things. Just the perfect amount of detail + speed, clean speaking. You have earned yourself a sub and I look forward to seeing what you come up with!
Hey, just stumbled on this channel, I really like channels like these. Seeing as you're a relatively new channel, and aren't a native English speaker, I'd be glad to proofread your scripts for you to ensure the grammar is correct and natural. By the way, I'm a senior video game engineer, so I've already got a firm grasp on these topics, so you wouldn't need to worry about me improperly altering the meanings either. Looking forward to more videos!
"a very bad language won't tell you anything but rather implicitly convert one of the values and then perform any possible operation." *shows JS code* My god that's based, I love this channel already.
For example, to estimate the location of something in a data structure like a linked list and jump straight to that memory address, bypassing the linked-list traversal at times for performance.
Yeah, I know that. What I was trying to convey is that low-level people really don't like that kind of thing at all. Like, in what world does it make sense that 3 * "3" is 9 but 3 + "3" is "33"?
If you're referring specifically to the Javascript (ECMAScript) spec, then its "defined" behaviour across so very many edge cases is the subject of a lot of "oh my God, what the actual f*ck!" humour among programmers who don't have to deal with that kind of nonsense in their daily lives. I suspect Javascript developers just cry themselves to sleep instead. Php developers have their own catalog of "this value + that value = nonsense", which they share among themselves when they want to speculate on what the designers of that language might have been smoking.
This video needs to include information on padding and alignment. This changes the picture entirely, as your 8-bit values still take up 32 or 64 bits of space in a lot of instances, because padding is needed. An example is the structure at the end, where you have an id, len, and reference. The reference will not be right after the 16-bit id and 8-bit len, as there will be padding added (unless you are on a 24-bit system?). This is because compilers try to obey the alignment rules of the architecture, for maximum speed. Let the Rust compiler do the optimizations for you. Like the comment from @KingJellyfishII said, CPUs like 32- and 64-bit values on most modern archs, and if you end up prematurely optimizing and packing values together, you might end up with slow reads and writes.
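The layout being described can be sketched like this (field names are taken from the comment; `repr(C)` pins down the C-style layout, since Rust's default repr is free to reorder fields to shrink padding):

```rust
// On a typical 64-bit target, the pointer must be 8-byte aligned,
// so the compiler inserts padding after the first two fields.
#[repr(C)]
struct Record {
    id: u16,         // offset 0..2
    len: u8,         // offset 2..3
    // 5 bytes of padding here so `data` starts at offset 8
    data: *const u8, // offset 8..16
}

fn main() {
    // 2 + 1 + 8 = 11 bytes of actual fields, padded to 16.
    println!("size = {}", std::mem::size_of::<Record>());
}
```

Without `repr(C)`, rustc is allowed to reorder fields itself, which is exactly the "let the compiler do the optimizations" point above (here the 8-byte pointer alignment still forces a 16-byte struct either way).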
The Atari Jaguar has a 64-bit bus, but pixels are only 16bpp. Due to scaling, roto-zoom, or just moving a sprite, writes into the framebuffer are not aligned, and padding would lead to black columns. So instead of making misaligned writes work at the hardware level (a write queue), they made alignment by the software driver their holy cow and slowed down graphics by a factor of two. The N64, on the other hand, has no problem reading unaligned triplets for linear texture filtering from TMEM.
One thing that I must nitpick is the list of things python can’t do. It can do far more than js at a lower level, and you have to worry about these things if you use the applicable packages. For example, sized data with ctypes, and parallelism with multiprocessing. I agree that most people won’t need it, however it is built into the language when you do.
In JavaScript all numbers are 64-bit floats, until you explicitly use BigInt or typed arrays. For me it is the same kind of knowledge as knowing that the u in u8 means unsigned integer. A better example would be PHP, because it has 2 numeric types that can convert automatically: int and float.
7:05 The problem isn't that JS or C# converts int to string automatically (C# has strong typing but also calls ToString() automatically), but rather that JS uses the same operator + for both concatenation and addition. For example, PHP, which also uses automatic type conversion, uses + for addition and . for concatenation, so the problem of "2"+"2" equalling "22" doesn't occur. And it isn't undefined behavior when it works as the specification says; it's only a skill issue.
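For contrast, in a strictly typed language like Rust (the video's language), the ambiguity can't arise at all, because the programmer has to pick the operation explicitly. A small sketch:

```rust
fn main() {
    // "3" + 2 simply doesn't compile in Rust: you must say
    // which of the two meanings you want.
    let as_number = "3".parse::<i32>().unwrap() + 2; // arithmetic
    let as_string = format!("{}{}", "3", 2);         // concatenation

    assert_eq!(as_number, 5);
    assert_eq!(as_string, "32");
    println!("{} vs {:?}", as_number, as_string);
}
```

Whether the overloaded `+` or the implicit conversion is the real culprit, requiring an explicit `parse` or `format!` removes the "32 or 5?" question entirely.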
The default is still often a signed int, whatever an int may be on your system. Using it for age could help describe a person who isn't to be born for decades, depending on your unit.
I wouldn't use an unsigned age. It's confusing. You could use a Unix timestamp as the date of birth and have a function that calculates the age (0 for anyone born in the future).
@@jwrm22 there is a 64-bit version that is gaining adoption slowly but surely. It allows for prehistoric and astronomical dates, solving 2 shortcomings at the same time.
I really like the visual style of this video - what software did you use to make the visualisations? I do presentations for other software engineers and would really like to level these up
I enjoyed the video and the shot at JavaScript for being able to add strings and integers. It would be awesome if you could make a video about the stack and heap. I get confused in c++ trying to figure out what gets put on the stack and heap all the time, so a video would be great!
Jorge** has added two excellent videos that cover each of those topics. His channel (this channel) is some of the very best content I've ever seen - anywhere - on this stuff. Definitely give the man a sub! (and go watch those two vids). In general anything that doesn't have a fixed size goes on the heap, whereas your various "primitive" types (integers, floating point numbers, single characters) and fixed-sized arrays of those primitive types go on the stack. In many languages you can also combine fixed combinations of those primitive types into "structs" which, because they are also a fixed size, can go on the stack. Things that don't have a fixed size (and must be stored on the heap) include; any sort of variable-length collection, strings (as opposed to a fixed-length array of characters), and objects. ** (it may be pronounced "hor-hey" unlike how the AI voice read it out, or maybe he's just had too many co-workers who can't pronounce his name and prefers "george" anyway?)
God I'm really glad you started spell checking and grammar checking your scripts after this video. The spelling/grammar mistakes in this one are killing me.
5:50: Minor issue... Yes, I'm experienced with JS but it mainly just was my ability to use basic logic. I think that if JS would not have it result in a float, we'd have bigger issues than `[] + []` returning an empty string.
I'm not planning on learning Rust, rather, I'm learning these concepts for Zig. Your video was still super helpful and well done. Looking forward to your next videos in the series! Subscribed.
There is a *galaxy* of difference between Python and C. Personally I would recommend you move to a statically typed, compiled language that doesn't put you so very very close to the workings of the physical hardware and include so very little "out of the box" (one look at handling strings in C and you'll run from it screaming). If you really want to jump straight to such a nuts-and-bolts level, then play with coding C for embedded micros like Arduino. Definitely don't try to write an "application" in C. If your interests are more in the software application space, then languages like Java and C# I can personally vouch for as being great things to learn (I've used both professionally and would hands-down recommend C#. Unlike Java it does make a proper distinction between data types that go on the stack and those that live on the heap, and it can be very performant if used wisely). I hear lots of good things about GoLang as well.
super cool video, fantastic channel! A question: in C/C++/Rust, how does the program know that, if I define a variable "float a = 3.0;", the memory location storing "a" is storing a float32? Because in Python you said that there are additional bytes used to store this information, while in C the type is defined at compile time. But then how does the program remember it?
It doesn't. The compiler makes sure that whatever instruction flow accesses that memory location only uses float32 instructions to process it UNLESS you specify otherwise. There is nothing to stop you from pointing at your float's location with an int pointer and then the real fun begins. You can even give the same memory location multiple names and types with a union.
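A minimal Rust sketch of that idea (using the safe `to_bits` rather than an actual aliasing pointer): the bytes carry no type tag at runtime; only the instructions the compiler emits give them meaning, and you can reinterpret the same bits yourself.

```rust
fn main() {
    let a: f32 = 3.0;
    // Same 4 bytes, reinterpreted as an integer: no type info is stored with the value.
    let bits: u32 = a.to_bits();
    assert_eq!(bits, 0x4040_0000); // IEEE-754: sign 0, exponent 128, mantissa 1.5

    // Round-trips losslessly, because only the interpretation changed, not the bytes.
    assert_eq!(f32::from_bits(bits), 3.0);
}
```

The unsafe equivalents (casting through an int pointer, or a union) do the same thing with fewer guardrails.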
5:40 javascript doesn't have integers, and I'd argue that's intuitive to every beginner programmer. Also what language even allows you to implicitly truncate floats? Pretty sure even C wants an explicit cast for that
If it did not have integers under the hood, it would be a nightmare dealing with floating-point errors in code logic that assumes integers. Imagine a for loop that increments a variable by 1 every pass. If it were a float, you would get values like 5.99999998, and when you try to use that as an array index you would get an error.
@@ParkourGrip no you wouldn't, that's not how IEEE754 works. Precision errors pop up when you try to represent fractions like 1/3, but you only start losing precision on integers once you get to ludicrously large numbers that a u64 wouldn't even be able to represent.
Something to note is that the size of your data can affect performance as well as memory usage. CPUs are specifically designed to handle 32-bit and 64-bit values very fast, and sometimes, counterintuitively, an 8-bit value may take longer to process. So, as with everything, premature optimisation is the root of all evil. Keep the age as a 32-bit integer for now; if you have 10 million of them and have identified the memory usage as a problem, _then_ go down to a u8 or use bit-packing methods.
It's actually even more nuanced than that because of cache locality, so smaller data can be faster or slower depending on the circumstance. But that's very complex and should be left to experimentation if the need arises.
This is why I think using fixed-size integers is a mistake in almost any context that isn't data serialization and/or protocols. For performance-conscious parts of the codebases of my recent projects I'm considering a type-selection system that defines word types at the optimal size for the current platform's CPU, like word, dword, qword, etc. Choosing types for your variables is a whole can of worms.
The best comment so far.
@@Acceleration3 but you have to consider that register access time (what you describe) isn't everything. You also can't treat an object like a loose collection of primitives. For example, let's stick to the students example. If you make age, id, birthday, ...
all 64-bit, one student is a huge object with tons of wasted space. If you then have an array of students, that wastes enormous amounts of space, at some point leading to a cache miss. Then the CPU has to wait for the RAM to load the rest of your mostly empty data. At this point your register access times are meaningless by multiple orders of magnitude.
Side note:
The optimal CPU spacings are often chosen by the compiler anyway, meaning that a student with one 32-bit value and two 8-bit values will end up 64 bits long. The reason is that the compiler knows you need at least 8 bits for a particular value, but adding empty space beside it doesn't change the code.
Tldr:
By trying to be smart you waste space and keep information from the compiler, making the result worse. A datatype size is more or less a suggestion for the compiler, and it will make smarter decisions than you will. By giving it false information you will not achieve improvements.
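The padding claim is easy to check in Rust (the Student fields here are hypothetical; `#[repr(C)]` forces the predictable C-style layout being described):

```rust
use std::mem::size_of;

// One 32-bit value and two 8-bit values: 6 bytes of actual payload.
#[repr(C)]
struct Student {
    id: u32,
    age: u8,
    grade: u8,
}

fn main() {
    // The struct's alignment is 4 (from the u32), so its size is rounded
    // up from 6 to 8 bytes: 64 bits, just as the comment above predicts.
    assert_eq!(size_of::<Student>(), 8);
}
```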
@@redcrafterlppa303 I know I'm making matters more complex, but if you're going to have lots of students, a better approach to improve cache locality and reduce memory usage without taking a performance hit COULD BE* using a Struct of Arrays. Have one array for ages, another one for IDs, another one for phoneNumbers... The first student created will have ages[0], ids[0], phoneNumbers[0]...
COULD BE: IF your code is going to operate on only a few of the fields at a time. Operating on all ages of everyone, then later on all phoneNumbers... This is Data Oriented Design: structuring your data so it doesn't keep jumping around in both data memory and code memory.
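A sketch of that Struct-of-Arrays layout in Rust (field names are made up for illustration); a pass over one field now streams through a dense, padding-free array:

```rust
// Array-of-Structs: every student's fields travel together, padding included.
#[allow(dead_code)]
struct Student { age: u8, id: u32, phone_number: u64 }

// Struct-of-Arrays: one contiguous array per field.
struct Students {
    ages: Vec<u8>,
    ids: Vec<u32>,
    phone_numbers: Vec<u64>,
}

impl Students {
    // Touches only the ages array: maximum cache density for this pass.
    fn average_age(&self) -> f64 {
        let sum: u64 = self.ages.iter().map(|&a| a as u64).sum();
        sum as f64 / self.ages.len() as f64
    }
}

fn main() {
    let s = Students {
        ages: vec![20, 22, 24],
        ids: vec![1, 2, 3],
        phone_numbers: vec![555_0100, 555_0101, 555_0102],
    };
    assert_eq!(s.average_age(), 22.0);
}
```

The trade-off, as the thread notes, is that code operating on one whole "student" at a time now has to gather fields from three different arrays.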
@@besknighter that's for sure possible, and it's what databases are for; operating on larger amounts of data is something databases should be used for. Structuring in-memory data like this isn't all that helpful and leads to messy, hard-to-read code.
I appreciate how almost everything that’s spoken is demonstrated on screen, even going as far to show real error logs from the different programming languages. Thanks for making these videos, great refresher and learning material.
Agreed. I’m mostly listening, but the visualization is well done, and useful.
In Zig, you can make numbers with weird sizes.
const nummy: u23 = 205;
That's a cool feature for using types to enforce value bounds. But it won't look like that in memory. It will 100% be padded to 32bit.
Yeah, I know. I was supposed to include a little animation of the Zig logo saying "are you challenging me?" I just forgot. However, as someone else said, they don't look like that in memory, although with bitwise operations you can do anything with individual bits.
@@CoreDumpped f128 and the fact that Zig can do larger numbers than Rust is cool.
There is the same thing for Rust, just as a module. These u31 types seem iffy. It would be nice if we could pack them nicely together with NULLs etc., like, say, sizeof(Option<u31>) == sizeof(u32)
@@Lord2225 I would think they are amazing for things like Rust enums. If you have types in a struct that are oddly sized and guarantee padding, you can fit the variant bits in there and create zero-cost enums, just because a type sacrificed some bits it didn't need anyway.
This is exactly what I do in my language that is heavily inspired by rust but tries to fix the sharp corners of the language like lifetimes and dyn dispatch.
Being a rookie to programing and languages as such, absolutely love how you touched upon stuff that I wouldnt have bothered learning about. What an absolutely great way of explaining usually boring stuff in an easier to understand and fun explanation. Way to go bro!
3:36 "the size of your " *long awkward pause about the size of my*
That's what she said
Size of your…. bits haha cracking me up
The size of your ******* matters.
@@SUNNofODIN are we talking bout memory or 🫣🫣🫣
I really enjoyed that peen joke haha. Didn't know George got down like that.
5:27 "Just by looking this code."
The AI voice makes the grammar mistakes stand out a lot lol
Pretty sure it's a mistake in the script, not the AI's fault
@@keppycs the channel owner said they were not a native speaker in some other comment replies which is why they use the AI voice. It's definitely the script.
@@MrC0MPUT3R you were basically saying that from the very beginning. i misread, sorry
How can i get a similar AI voice
@@Tech.Library it's a speech synthesizer, probably something like UTAU (used for music production) or eSpeakNG (a utility more than anything) should give you similar results
I love this introduction to "slapping everything into a double is a waste of memory", I can't wait to see what else you come up with
I write Java code for my school's engineering team and holy hell I hate seeing every single value be a double for NO reason. 8 bit signed integer? Sure. But there is no need to have the precision of a *double* of all things. At the very least, if you need precision down to the decimal point, then use a float. What's worse is that the people that develop the libraries should be a bit more considerate of their resources than that given that they are (hopefully) a lot more mature than me.
@@bndlett8752 double calculations are faster these days than float.
@@bndlett8752 it's weird that java uses big endianness instead of little endian.
@@SirusStarTV how/why does the endianness make a difference?
@@bndlett8752 Inconsistent endianness makes data serialization/deserialization for sending packets between programs a light headache for one.
if you want to be a programmer, the absolute best way is to start by learning C
Maybe not start with it, but I would suggest picking C up at some point, maybe as a second or third language, to learn the low-level basics.
I disagree. C is very confusing if you don't know at least one assembly language
@@williamdrum9899 naw jit
@@williamdrum9899 C is not confusing. It is magic compared to Assembly.
Best decision I made this year. It's the only reason I'm getting these channels suggested. Learned about Zig, Rust, the Linux kernel, Assembly... an absolute rabbit hole of knowledge I didn't know I was missing.
7:10 "A really bad language though..." Javascript appears on screen 🤣
The best bit for me was his remark about automatic type coercion when doing things like "adding" a string and a number: "and this kind of bullsh*t is marketed as a feature". It reminds me of why (among so many other reasons) I hate php so very very much.
@@danielscott4514 honest question, is automatic number to string conversion really that bad? I see people shitting on it all the time but IMHO it's mostly harmless (at least in a sane language).
Many languages have misfeatures which are at least 1000x worse (dynamic typing, everything being a reference type, nullability by default, etc.). Of all these things why do people fixate on int + string so much?
@@k2aj710 Certainly being able to do something like "number of results: " + resultCount (where resultCount is a numeric variable) is something that is pretty harmless most of the time.
However, if your language lets you concatenate a number onto a string - and if it uses the + operator for string concatenation as well as numeric addition - then what happens when you try "3" + 2? Do you get "32", or 5?
For what it's worth, I took the "and this is marketed as a feature" comment in the video's narration to be aimed directly at dynamic typing generally rather than at the specific example they gave. Dynamic typing is what causes the above kind of conundrum to be a thing.
No strictly typed language will allow something as vague as "3" + 2. In c# I would have to do it as "3" + 2.ToString() if I wanted "32", and I would have to do it as Int32.Parse("3") + 2 if I wanted 5 as the result. The very nature of the language eliminates that whole class of bug, which easily comes about when (normal non-superhuman) programmers are not intimately familiar with every last detail of their dynamic language's type coercion behaviour.
For what it's worth, on the subject of doing something like my first example: as a regular user of c#, I'm very conditioned to combining strings and numbers using various methods that accept a format specifier, which outputs the number with things like currency symbols, commas to separate thousands, various numbers of decimal places etc). I find that in many cases you want more control over how your number "looks" as part of a string than simply concatenating the number in whatever default representation the programming language uses. So, in my view, the value of being able to write code like "number of results: " + resultCount is questionable anyway.
Although I spend quite a bit of time in C# currently, I've coded plenty of Javascript, and suffered far too much php (which can truly make Javascript seem sane). Dynamic typing combined with some bad language design can really ruin your day (especially since the bugs appear at runtime only). I'm far happier and more productive in c# where a huge number of errors are definitely caught at compile time and the lack of any ability to write "string" + 3 avoids various footguns that aren't worth risking for the sake of writing "string" + 3.ToString() (or various more modern c# alternatives, like string interpolation, but you get my point).
Given this is a comment on a video about low-level performance, as a side-side note, I rarely ever actually use the + operator to concatenate strings in c# - the reasons why are generally well known (if you're not sure why + is bad for concatenating strings, google c# string concatenation best practice and you're bound to get a pretty good rundown of how concatenating strings works and which approaches are best in which cases). They're considerations in any language that gets into the weeds deep enough to give you at least some choice over how much memory gets allocated, used, and eventually destroyed in the process of combining multiple strings.
@@k2aj710 Gawsh my earlier reply ended up being a tome (ignore things like the side-side note on string concatenation - I put that there for the benefit of someone else that might read it later ... kind of StackOverflow learned-behaviour I think).
Anyway, I just realised I've got so many horrible memories of things that various dynamically typed languages have done to me over the years that I probably didn't really answer your question (since, I've just bothered to check and Javascript seems to give the string in a string + int operation some kind of precedence, and it actually does what I would consider the preferable thing with "3" + 2 - and gives "32"). That said, I've just looked and at 7:26 in the video there are variations on that "3" + 2 theme which do end up doing arithmetic addition instead.
I think probably the string + int thing - if it gets mentioned a lot - is more of an easy to explain and demonstrate example of the greater problem of dynamic types in languages. It kind of builds from there into the kinds of problems you can have with code like: if (myVar) { ... } where there are all kinds of rules for what string and numeric and every other kind of value might mean "true".
@@danielscott4514 People like to shit on Javascript bc of stuff like this but C# does the same and I see no one talking about it.
Regarding the alternative at 8:00:
Some architectures prefer aligned addresses, and this leaves the least significant bits unused (usually 3 bits for x86-64). If you're willing to sacrifice them, then you can encode 8 types that fit in a register. This is the tagged architecture used in dynamic programming language implementations.
For example, a fixnum is data whose lowest 3 bits are zero, encoding integers between -2^60 and 2^60-1. This fixnum representation fits perfectly in one register. Addition and subtraction of fixnums can use the same machine instructions; a right arithmetic shift needs to be applied after multiplication. This example shows that even with clever optimization and representation, there will be some overhead in a dynamic programming language.
Other compound data like records (or structs) and arrays are pointers, and since they are aligned, their least significant bits can store the tag.
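A Rust sketch of that fixnum scheme (3 tag bits assumed, with tag 0 meaning "integer"; real implementations add checks and more tags):

```rust
const TAG_BITS: u32 = 3; // low bits reserved for the type tag; tag 0 = fixnum

// A fixnum stores n << 3, so the value occupies the upper 61 bits.
fn to_fixnum(n: i64) -> i64 { n << TAG_BITS }
fn from_fixnum(f: i64) -> i64 { f >> TAG_BITS } // arithmetic shift preserves sign

fn main() {
    let a = to_fixnum(5);
    let b = to_fixnum(-2);

    // Addition and subtraction work directly on the tagged representation.
    assert_eq!(from_fixnum(a + b), 3);

    // Multiplication picks up an extra factor of 2^3, so a right
    // arithmetic shift is applied after multiplying, as described above.
    let product = (a * b) >> TAG_BITS;
    assert_eq!(from_fixnum(product), -10);
}
```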
The text to speech improved heavily from the last video to this video. I didn't even know it was an AI until the very end.
There was one place (3:35) where it paused unusually long between two words and that's the only reason I noticed.
1:46 It became obvious to me when the TTS spoke a typo “what does that means”
@@davidt01 I made it till over 9 minutes in and got spoiled by the comments!
@@davidt01 Actually, that pause was intentional. I'm just realizing people are not getting it lol
@@CoreDumppedOhhh now I get it 😂😂
Something fun about sizes of things in Rust: for Option<T> where there is a possible invalid state for T, Option<T> is represented as just T, but None is the invalid state.
For example, an optional pointer will just be a null pointer in memory if it is None, rather than actually using an extra byte for the discriminator.
This concept applies to other enum types as well.
This will only work for Option<T> where T has an invalid state.
For example, with Option<u32>, all zeros would be an ambiguous state, since it would represent Some(0) and None at the same time.
That optimization for pointers is only possible because each type in combination with Option is treated individually.
An Option<u8> would likely be 16 bits and an Option<u16> would maybe be 32 bits wide. For why it's not 24 bits, look into "struct padding".
@@redcrafterlppa303 I said if there is an invalid state. Zero is a number. I thought I remembered testing this with Option<bool> and it working.
Update: Just tried it and Option<bool> is in fact one byte, or at least my editor says so.
also important to add is that, according to the Rustonomicon, such an Option<&T> can be represented as &T, but doesn't need to be, which can have consequences when using something like transmute
@@Blaineworld yes, think about it. The bool is a 1-bit datatype and the Option is a 1-bit variant type. So the compiler crams the 2 bits into 1 byte.
@@redcrafterlppa303 I still think my "invalid state" idea might be correct. I just tested it. Specifically, I made an enum called Test with 255 variants and got the size of Option<Test>, which was in fact 1 byte.
With 255 variants, every _bit_ is used, but the combination 11111111 remains unused, which, if I am correct, is what None is represented as in this case.
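The sizes debated in this thread can be checked directly. These are the values current rustc produces, though only the pointer case is an actual language guarantee:

```rust
use std::mem::size_of;

fn main() {
    // Niche optimization: None is encoded as an invalid bit pattern of T.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>()); // None = null pointer
    assert_eq!(size_of::<Option<bool>>(), 1); // bool only uses 0/1, leaving a niche

    // No invalid state available, so a separate discriminant byte is needed.
    assert_eq!(size_of::<Option<u8>>(), 2);  // 16 bits
    assert_eq!(size_of::<Option<u16>>(), 4); // 32 bits, not 24: struct padding
}
```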
Right at the start: memory usage is important, but when you're prototyping functionality you may not know the final data sizes until later in the process. I've worked in circumstances where we only knew a value would be an integer or a decimal, so we used larger containers, and later I refactored the code when we knew what the final implementation limits should be.
You should always optimize when you see a place to do so, but making it behave the way you intend comes first. You can always look at the memory footprint at multiple stages in development. I like a workflow of "get it to work, get it committed, benchmark, look for concerns, pull request".
Also, I'm glad to see I'm not the only one that doesn't appreciate when a language does an implicit cast and operates on it. That has absolutely wrecked me before: I had to read an entire class to find the error, where a strongly typed language with explicit casts would have said "this isn't something we can do implicitly; if you really want that behavior, go explicitly cast it". I'd rather have an error tell me "hey, can't do that implicitly" and let me go review it, because chances are if I didn't cast it myself at the time of writing the code, I made a mistake and fed the function something I didn't mean to. This is also why my IDE is set by default to treat any warnings as errors, so it won't compile if there are warnings, and I can go review those warnings and determine if I just missed a nullable declaration or if I made a more serious error. (During rapid prototyping I will toggle that off, and once it works I'll go through the warnings then, but the default workflow is to handle any warnings before building.)
Dude, you are literally one of the best when it comes to explaining a complicated topic in a basic way. I am a C# developer with 6 years of experience and I've been researching computer science and electronics for a few months now. Simply because I wanted to understand how things actually work. And I can say without a doubt that your videos helped me a lot to understand the way things work. Thank you!
To summarise this for my reference later:
Knowing variable types at compile time
1) Saves space as you know exact amount needed
2) Makes code more readable, no hidden logic behind the scenes.
3) Saves time and more space since you dont need to store data type and read, write and compare it later.
The speed of using 8-bit variables versus 32-bit variables can depend on several factors, including the specific CPU architecture and the access pattern of your program.
On modern 32-bit and 64-bit CPUs, operations on 32-bit and 64-bit integers are usually the fastest, because these CPUs are optimized for these sizes. Operations on 8-bit integers can be slower because the CPU may need to perform additional operations to handle the smaller size. For example, the CPU might need to zero out the upper 24 bits of a 32-bit register to perform an operation on an 8-bit integer.
However, using 8-bit integers can save memory, which can potentially improve cache efficiency and overall performance if your program is memory-bound. If your program accesses a large array of 8-bit integers, it can fit four times as many integers into the same amount of cache compared to an array of 32-bit integers. This can reduce cache misses and improve performance.
So, whether it's faster to use 8-bit variables or 32-bit variables can depend on the specific circumstances. It's not a myth that operations on 8-bit integers can be slower on modern CPUs, but the impact on overall performance can vary. As always, if performance is a concern, it's best to measure and optimize based on the specific requirements and behavior of your program.
ok chatgpt
@@98danielray ☠
I'm at 4:30 and while this is generally true, for most uses an int32 is what you need, even if you're wasting some space. This is because due to modern architecture of CPUs, 32 bit int operations will be much faster than, say, byte operations. Conversely the best type for graphics calculations is float.
Of course, sometimes it's beneficial to have more choice. But in many programming languages, the default is default for a reason and you still have that choice.
Yeah but I think rust compiler is smart enough to pack all the operations into a single SIMD instruction
It amazes me how stupid these comments are. 8 bit ops are equally as fast as 32 or 64 ones. It's honestly mindless people who repeat nonsense without understanding. Sure the throughput is lower. Clueless people shouldn't make comments.
@@gregorymorse8423 It amazes me how stupid you are. 8 bit operations will definitely not be as fast as 32 or 64 ones. There are no instruction sets that support 8 bit operations, so these 8 bits have to be converted to 32, math done on that, and converted back to 8 bit. Those conversions takes up precious CPU cycles. On the other hand, smaller variables (like 16 or 32) vs larger ones (like 64) can win performance-wise if the constraint is memory, i.e. more tightly packed data won't have as many cache misses.
It's honestly mindless people who repeat nonsense without understanding. Clueless people shouldn't make comments.
How would a 32 bit operation take less time than an 8 bit operation? Can you explain that to me?
@@cheesepie4ever because modern architectures don't support bit operations on 8 bit data. This means these 8 bits have to be extended to 32, arithmetic done on that, and then converted back to 8 bits. This means extra CPU cycles for every operation.
On the other hand, 8 bit operations can be potentially faster if you're operating on huge sets of data at a time - in which case the extra operations wouldn't hurt as much as cache misses, since 8 bits will obviously be more tightly packed and you can fit more of them in cache.
As with all things performance related, don't theorize; benchmark. See for yourself.
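In that spirit, a crude Rust benchmark sketch; the numbers will vary wildly by CPU, compiler flags, and data size, so this only shows how to measure, not which way the result goes:

```rust
use std::time::Instant;

fn sum_u8(data: &[u8]) -> u64 { data.iter().map(|&x| x as u64).sum() }
fn sum_u32(data: &[u32]) -> u64 { data.iter().map(|&x| x as u64).sum() }

fn main() {
    const N: usize = 10_000_000;
    let narrow = vec![1u8; N];  // 10 MB of data
    let wide = vec![1u32; N];   // 40 MB: 4x the cache and bandwidth pressure

    let t = Instant::now();
    let a = sum_u8(&narrow);
    let narrow_time = t.elapsed();

    let t = Instant::now();
    let b = sum_u32(&wide);
    let wide_time = t.elapsed();

    assert_eq!(a, b);
    println!("u8 sum: {narrow_time:?}, u32 sum: {wide_time:?}");
}
```

Build with `--release`, run it several times, and only then draw conclusions; debug builds and single runs will mislead you.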
I started learning Rust a few days ago after having 20+ years of Java and higher level language experience. It feels great to get closer to the metal and I can already see this series of videos will be invaluable for filling in the blank spots in my knowledge. Thanks.
5:50 this is basic beginner level JavaScript knowledge. All numbers are floats. The only time an actual integer gets used is when doing bitwise operations, which is the case where only integers get used. If a person does not know that all numbers are treated as floats, they don't know JavaScript. When teaching this, if the person already knows about int/float differences, I bring this up on day 1 as one of the first points to explain the type system, and I don't think hidden documentation or experience are good descriptors here.
7:33 Actually, in some javascript interpreters, the "default" type is a 64-bit double, but other types can be expressed by setting the exponent to a specific value used for NaN and Infinity. As long as those values are reserved, the rest 52 bits can be used to represent other types of values, including reference types, etc.
As far as I'm aware it's likely to be more than just "some interpreters". NaN boxing is an optimization used very widely in interpreters where the only number type is floating point. It provides a fast way to encode proper integers, allowing the use of the faster integer operations.
@@Bobbias floating point integer operations aren't even comparable to the speed of native integer arithmetic
@@gregorymorse8423 NaN boxing uses doubles as tagged unions. All 11 exponent bits are 1s, and the most significant bit of the mantissa is a 0 (the NaN is marked "quiet").
The exact contents of the remaining 52 bits are effectively meaningless in floating-point math. The number is still a NaN regardless of the data stored in these bits. That means we can use the lower 52 bits of the mantissa for anything we want. Currently x86_64 pointers are only 48 bits wide, so we can store pointers in these bits fine. We can also store integers or anything else we want in those bits. We can also operate on the values using integer math (with the exception that if the value would overflow the 52 bits we use, that's a problem we need to deal with separately).
So you can store a 52 (or fewer) bit integer in a NaN-boxed double and treat it as though it's a regular integer for more efficient math.
If you are only storing integers, you can use all 52 bits; if you have multiple types you are NaN boxing, you use several upper bits as a tag and can still dedicate most of the bits to the data you want.
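A small Rust model of that NaN-boxing layout (a 48-bit payload is assumed here, matching the x86_64 pointer case above; real engines layer tag bits on top of this):

```rust
const QNAN: u64 = 0x7FF8_0000_0000_0000;          // exponent all 1s + quiet bit: a NaN
const PAYLOAD_MASK: u64 = 0x0000_FFFF_FFFF_FFFF; // the low 48 bits we get to reuse

fn box_int(n: u64) -> f64 {
    debug_assert!(n <= PAYLOAD_MASK);
    f64::from_bits(QNAN | n) // smuggle the integer into the NaN's payload
}

fn unbox_int(v: f64) -> u64 {
    v.to_bits() & PAYLOAD_MASK
}

fn main() {
    let boxed = box_int(12345);
    assert!(boxed.is_nan());             // floating-point code just sees a NaN
    assert_eq!(unbox_int(boxed), 12345); // our code recovers the integer intact
}
```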
Sir your channel is the very best channel on TH-cam (as far as I know) on low level coding and basic structures. Thank you.
Technically, Javascript's implicit conversions aren't undefined behavior. They are defined in the ECMAScript spec, unlike undefined behavior in C or unsafe Rust which is literally whatever the compiler implementation decides to do. But implicit casts are not intuitive, so code can behave in ways that are unpredictable.
only undefined to the 0.5x rust heads who cum when their code fails to compile for the 100th time
One of the ways to define a programming language is by its type system.
I remember the time when I started with C# and the .NET Framework, I struggled to read and implement these verbose types without getting errors. But they were worth it because it helped me know how to write Python code professionally. Although I hated the semicolons.
As a beginning programmer, I don't care if it takes 15ms or
"The reasons behind this limitation are beyond the scope of this video"
Noooo...! That's exactly what I was hoping to learn, haha. I'll subscribe if it means you'll cover that in the future!
Should be answered in the stack and heap video
Yeah, I'm already working on those videos :)
@@CoreDumpped Awesome, subscribed :)
@@CoreDumpped we are now waiting subscribed nice video!
I don't believe it was answered. I posted a comment on the next video explaining why.
I am very new to systems programming as I have always been working with javascript. Thank you for this explanation!
I remember programming on the DEC Alpha back in the early 90's, one of the first 64-bit processors. The manual said to use the natural size the processor uses for numbers, as that is what it fetches. If one makes the size smaller (out of some worry about size), the CPU fetches the full size anyway and then has to cut it down to the size the programmer suggested, so more instructions get executed. For most apps it slows things down, as more processing is required to downsize to what the programmer suggested. So making it smaller is not always optimal.
My first computer was a Nascom 1 with 1K of video RAM and 1K of user RAM (960 bytes available to me).
I did a lot of Z80 coding, so things like the stack, pointers and different length integers are second nature for me.
Hearing "Memory is limited" in 2024, when I have 64 GB RAM, is quite amusing!
Your videos are teaching me about considerations that I was previously unaware of. I particularly like making use of arrays when programming. I am realising that it is very important to specify the necessary number of cells per array, and also the specific type of information I need.
Discovered this video on reddit. Very well done sir, you just earned my subscribe.
I hope you continue to make videos such as this; your teaching style is very good.
Jesus what an informative video. I had a general knowledge of this but the way you explained brings everything together so well. Also I like the way you sprinkle a bit of humor throughout the video lol
I like Rust's explicit static types. Some programming languages are counterintuitively implicitly statically typed. Even C falls into this category: despite being a statically typed language, the data types are implicit in that they can differ by hardware. There is a stdint header file to help with this, but it doesn't eliminate all problems because not all libraries use it. Even if you use it yourself all the time, there is no way to guarantee everyone will, and so occasionally you will import someone else's code that not only may be unclear in its behavior, but may even be filled with bugs caused solely by you running it on a different piece of hardware. I once ran into this, spending ages trying to debug someone else's code only to figure out that the microcontroller I ported it from defines the implicit signedness of a char differently from my own. I honestly can't decide whether that was the programmer's mistake or just bad code; personally, I think it is a flaw in the language. I cannot see any justification for making the implicit signedness or even the width of primitive data types something that isn't defined platform-independently. Hot take, but code that is written the same should generally run the same on every platform. The only exception should be if the programmer _explicitly_ puts in an exception. Anything implicit and platform-dependent is bad language design imo.
This is a good intro to software engineering. Though I already knew this, sort of, it is still good to review. It is why I prefer statically typed languages, or at least the option to use types.
Another thing with dynamically typed languages, is that they're great for making quick and dirty scripts or smaller programs, but the second a project gains any size, it becomes hell because by default you don't know what the types of values are being passed around into functions, so you either have to guess, or read up how every function you try to use works. Not to mention it makes LSPs less than stellar because it has no idea what types of values a function expects. Or what methods are attached to a class instance etc.
Yes, I know that JS has TypeScript to alleviate that issue, but imagine that we had to make a whole wrapper on top of a language to deal with that problem.
Great video, really good explanation of the topic. Downvote for computer voice.
I love these kinds of videos that are so amazing but hidden in internet corners for some reason. tyy
Thank you for this type of programming content. Programming channels usually never touch on these types of concepts because they assume you already know them.
Size matters, but what you put in the variables is more important. Meaningful, helpful information matters more than the technical aspects of the variables.
I would add an asterisk to your description of interpreted languages. A lot of them are going the JIT compiler route, so you do get some of the benefits of normal compiled languages, like real primitives, no interpreter step running for every line read, etc. Of course it's not perfect, but it's not as bad as you'd think it is.
Amazing explanation with an even more amazing ending
7:12 I instantly knew where this was going 😂 You put it together so well
Ok, this is an AI Voice, right? lol
Yes, it's pretty commonly used one too.
They are getting so good though!
One or two more years and it will be impossible to tell a real one from a synthetic one anymore.
Already good enough for me to not really care. All I care about is the information given.
Tbh as a non-native speaker I didn't even notice. Maybe for a native it's annoying tho.
I don't give a flying f... about AI voice. I came for the information. And the video was perfect. And tbh the voice is really convincing to me...
this video was so cool, learning about these things that seemingly nobody cares about today, like memory. Thank you
I like the style / cadence / pacing that you explain things. Just the perfect amount of detail + speed, clean speaking. You have earned yourself a sub and I look forward to seeing what you come up with!
Sounds like an AI voice
@@syryously it's 100% an AI voice. One of the best ones I've ever heard, but the speaker makes grammar mistakes a native speaker would never make
WTF........... damn.... @@syryously
Can I just say thank you for taking the time to do this even if it’s AI. I neeeeeed this
I'm sure the animation can't be done by ai, even if the script and audio is
Hey, just stumbled on this channel, I really like channels like these. Seeing as you're a relatively new channel, and aren't a native English speaker, I'd be glad to proofread your scripts for you to ensure the grammar is correct and natural. By the way, I'm a senior video game engineer, so I've already got a firm grasp on these topics, so you wouldn't need to worry about me improperly altering the meanings either. Looking forward to more videos!
so detailed, thank you very very much!
"a very bad language won't tell you anything but rather implicitly convert one of the values and then perform any possible operation."
*shows JS code*
My god that's based, I love this channel already.
Even the memory location matters, which is why we use C++ instead of Go or Rust.
For example, estimating the location of something in a data structure like a linked list and jumping straight to that memory address, bypassing the list traversal at times for performance.
Thank you for properly telling JS off for the BS that it does with types.
7:20 implicit conversion rules are in the spec, not undefined behavior at all!
Yeah, I know that. What I was trying to portray is that low-level people really don't like that kind of thing at all. Like, in what world does it make sense that 3 * "3" is 9 but 3 + "3" is "33"?
If you're referring specifically to the JavaScript (ECMAScript) spec, then its "defined" behaviour across so very many edge cases is the subject of a lot of "oh my God, what the actual f*ck!" humour among programmers who don't have to deal with that kind of nonsense in their daily lives. I suspect JavaScript developers just cry themselves to sleep instead. PHP developers have their own catalog of "this value + that value = nonsense", which they share among themselves when they want to speculate on what the designers of that language might have been smoking.
This video needs to include information on padding and alignment. This changes the picture entirely, as your 8-bit values still take up 32 or 64 bits of space in a lot of instances, because padding is needed. An example is the structure at the end, where you have an id, len, and reference. The reference will not be right after the 16-bit id and 8-bit len, as there will be padding added (unless you are on a 24-bit system?). This is because compilers will try to obey the alignment rules of the architecture, for maximum speed.
Let the Rust compiler do the optimizations for you. Like the comment from @KingJellyfishII said, CPUs like 32 and 64 bit values, for most modern archs, and if you end up pre-optimizing, and packing values together, you might end up with slow reads and writes.
The Atari Jaguar has a 64-bit bus, but pixels are only 16bpp. Due to scaling, roto-zoom, or just moving a sprite, writes into the framebuffer are not aligned, and padding would lead to black columns. So instead of making misaligned writes work on a hardware level (write queue), they decided to make alignment by the software driver their holy cow and slowed down graphics by a factor of two.
The N64, on the other hand, has no problem reading unaligned triplets for linear texture filtering from TMEM.
You can and do use unusual bit counts in systems programming. The :6 notation is just for that, and it's very useful.
This is my new favorite Rust tutorial channel
Damn, bro. You've barely even started and you're already doing a great job. Keep it up! 📈🚀
Your videos slap! Hope you won't quit making videos
Love the small pause at 3:35
Amazing video, I wish one day to explain concepts with this clarity, thanks for that!
I love this ❤❤❤, best concept videos I've found on YouTube so far
I think your videos are fantastic. Thank you.
One thing that I must nitpick is the list of things python can’t do. It can do far more than js at a lower level, and you have to worry about these things if you use the applicable packages. For example, sized data with ctypes, and parallelism with multiprocessing. I agree that most people won’t need it, however it is built into the language when you do.
In JavaScript all numbers are 64-bit floats, until you explicitly use BigInt or typed arrays. For me it is the same kind of knowledge as knowing that the u in u8 means unsigned integer.
A better example would be PHP, because it has 2 types that convert automatically: int and float.
7:05 The problem isn't that JS or C# converts an int to a string automatically (C# has strong typing but also calls ToString() automatically), but rather that JS uses the same operator + for both concatenation and addition. For example in PHP, which also uses automatic type conversion, + is addition and . is concatenation, so the problem of "2"+"2" equaling "22" doesn't occur.
And it isn't undefined behavior when it works as specified. It's only a skill issue.
Just Wow Type explanation!! Great effort
The default is still often a signed int, whatever an int may be on your system. Using it for age could help describe a person who isn't to be born for decades, depending on your unit.
I wouldn't use an unsigned age. It's confusing. You could use a unix timestamp as date of birth and have a function that calculates the age (0 for born in the future) .
@@redcrafterlppa303 Of course, it wasn't a serious remark. Linux's timestamp similarly wouldn't run out in 2038 if it weren't a signed int32.
@@jwrm22 there is a 64-bit version that is gaining adoption slowly but surely. It allows for prehistoric and astronomical dates, solving 2 shortcomings at the same time.
Really good and informative! I can't wait for the next ones !
awesome video!
How do you make all these animations?
I really like the visual style of this video - what software did you use to make the visualisations? I do presentations for other software engineers and would really like to level these up
I wonder that too... I've seen some PowerPoint screenshots on his Twitter... So not sure! 😅
@5:32, yes. Cause there are no integers in JavaScript. Everything is a float.
Great video! Simple great! Looking forward to entire series
Good explanation! Thanks for sharing.
my man got emotional with javascript
I enjoyed the video and the shot at JavaScript for being able to add strings and integers. It would be awesome if you could make a video about the stack and heap. I get confused in c++ trying to figure out what gets put on the stack and heap all the time, so a video would be great!
Jorge** has added two excellent videos that cover each of those topics. His channel (this channel) is some of the very best content I've ever seen - anywhere - on this stuff. Definitely give the man a sub! (and go watch those two vids).
In general anything that doesn't have a fixed size goes on the heap, whereas your various "primitive" types (integers, floating point numbers, single characters) and fixed-sized arrays of those primitive types go on the stack.
In many languages you can also combine fixed combinations of those primitive types into "structs" which, because they are also a fixed size, can go on the stack. Things that don't have a fixed size (and must be stored on the heap) include; any sort of variable-length collection, strings (as opposed to a fixed-length array of characters), and objects.
** (it may be pronounced "hor-hey" unlike how the AI voice read it out, or maybe he's just had too many co-workers who can't pronounce his name and prefers "george" anyway?)
Keep it up doing these series it helps a lot
God I'm really glad you started spell checking and grammar checking your scripts after this video. The spelling/grammar mistakes in this one are killing me.
Yeah, I made that video in less than a weekend. Didn't expect it to be so popular.
3:34 "Let's see a little example of why the size of your...[pause]...'variables' might be important to you."
That one mid roll ad came like a flash bang, cause it had a white background. Jesus.
Your videos are really amazing.
Would you mind sharing how you edit your videos?
Thank you.
It could, but it's not a topic that would fit on this channel. Maybe on a second channel.
5:50: Minor issue... Yes, I'm experienced with JS but it mainly just was my ability to use basic logic.
I think that if JS would not have it result in a float, we'd have bigger issues than `[] + []` returning an empty string.
Thanks, and a request: a well-ordered YouTube playlist of your videos.
Hi bro, very nice video. thank you!
What are you using for the animation? thx
I'm not planning on learning Rust, rather, I'm learning these concepts for Zig. Your video was still super helpful and well done. Looking forward to your next videos in the series! Subscribed.
I just found your channel and I already love it, the quality is very good. Keep going!!
now i understand why my professor made me start on C
there you have no option but to use the correct type of variable each time
It's kind of weird calling JavaScript's implicit string coercion undefined behaviour. Anyways, very nice video!
It's kinda weird: Javascript
there, I fixed it
sorry
Very well done!!!
Great vid, subscribed and looking forward to the next one
Alternative title: Shitting on JavaScript for 11 minutes 2 seconds straight 😂
I’m a Python coder, and I have been considering C or something like it for a while, thanks!
There is a *galaxy* of difference between Python and C. Personally I would recommend you move to a statically typed, compiled language that doesn't put you so very very close to the workings of the physical hardware and include so very little "out of the box" (one look at handling strings in C and you'll run from it screaming). If you really want to jump straight to such a nuts-and-bolts level, then play with coding C for embedded micros like Arduino. Definitely don't try to write an "application" in C.
If your interests are more in the software application space, then languages like Java and C# are great things to learn (I've used both professionally and would hands-down recommend C#). Unlike Java, it makes a proper distinction between data types that go on the stack and those that live on the heap, and it can be very performant if used wisely. I hear lots of good things about GoLang as well.
The videos are awesome... How are these slides created...
What happens when we write a condition like if, else, while, etc.? Why were these even invented, and why do we indent after a condition on the next line?
Thank you! Great visuals and content man!
But she said the size does not matter 😢
Amazing bro love your explanation ❤ we need more please....
What a great video dude. Hoping to see more of this.
Size doesn't matter, it's about how you use it.
Lmao. Copium
That's what they explained, you didn't get it;)
super cool video, fantastic channel!
A question: in C/C++/Rust, how does the program know that, if I define a variable “float a = 3.0;”, the memory location storing “a” is storing a float32? Because in Python you said that there are additional bytes used to store this information, while in C the type is defined at compile time. But then how does the program remember it?
It doesn't. The compiler makes sure that whatever instruction flow accesses that memory location only uses float32 instructions to process it UNLESS you specify otherwise. There is nothing to stop you from pointing at your float's location with an int pointer and then the real fun begins. You can even give the same memory location multiple names and types with a union.
Thanks for the knowledge!
5:40 javascript doesn't have integers, and I'd argue that's intuitive to every beginner programmer. Also what language even allows you to implicitly truncate floats? Pretty sure even C wants an explicit cast for that
If it did not have integers under the hood, it would be a nightmare dealing with floating-point errors in code logic that assumes integers. Imagine a for loop that increments a variable by 1 every pass. If it were a float, you would get values like 5.99999998, and when you try to use that as an array index you would get an error.
@@ParkourGrip no you wouldn't, that's not how IEEE754 works. Precision errors pop up when you try to represent fractions like 1/3, but you only start losing precision on integers once you get to ludicrously large numbers that a u64 wouldn't even be able to represent.
I guess when using f64, integer logic should work well for integers in the range -2^53 to 2^53. This is a larger range than i32 but smaller than i64.