The numbering of the bits is usually based on their mathematical weight, e.g. bit N has the value 2^N. So a higher-numbered bit has a more significant value. This inherently gives a little-endian numbering of the bits within each byte. To be consistent with this, you'd also want little-endian numbering of the bytes within a word. From the perspective of a computer, with little-endian the bytes are NOT reversed. It's big-endian that has reversed bytes. When you add or subtract two numbers, you must always start the computation at the least-significant digit, and progress until you finish at the most-significant digit. The thing that is actually reversed is the way that humans prefer to read/write numbers, since we always want to understand the big-picture part of the story first, and only later worry about the details.
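For what it's worth, here is a tiny C sketch of what "bit N has the value 2^N" means in practice (just an illustration, not anything from the video):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t x = 0xA5;                         /* binary 1010 0101 */
    for (int n = 0; n < 8; n++) {
        /* bit n is isolated by shifting right n places; its weight is 2^n */
        printf("bit %d = %u (weight %u)\n", n, (x >> n) & 1u, 1u << n);
    }
    return 0;
}
```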
Because the bytes are at addresses. Sort of like how houses are at addresses. Maybe in one city, addresses go west to east. So on a street with four houses, the furthest west house on the street has address 1, the house next to it has address 2, and the furthest east house has address 4. If the city decided to flip things around, and change all the house addresses to be east to west instead - the floor plans of those houses wouldn't change. The floor plans of those houses wouldn't mirror just because their addresses have been. They just have a different address, but the individual houses are still all one unit. An individual bit does not have an address. They are just one room of a house, that is, a byte.
@@armpitpuncher and @TOMYSSHADOW In some cases there IS a concept of bit ordering within a byte. Most computers don't have machine-level bit addressability, but I'm sure a few obscure designs do (or did) have it. Either way, software will sometimes have a need for the concept of an array of bits, often of arbitrary length. Data like that will just about always have multiple items packed into each byte. An example of this is an I/O library for image files, where one of the possible image bit-depths is 1 bit per pixel. Such software (and the file formats that it supports) will establish convention(s) for the ordering of bits within a byte.
Kevin Scott Actually, a typical PC does have bit addressing in both directions. Modern Intel CPUs have instructions like BTS (Bit Test and Set) which use little endian bit addresses within bytes. All PC graphics cards support modes with 1, 2 or 4 bits per pixel, with the pixels stored big-endian in each byte. Most also support 16 bits per pixel with the bytes of the 16 bit value stored little endian in that same video RAM.
What about the hexadecimal 50? How will it be represented in little or big endian? This endian stuff still confuses me. All I see are very trivial examples.
@@jamessadventures1380 ok, so one character is always one byte, which means that endianness doesn't affect anything? So if I send "Hi": H = 0x48, i = 0x69, which means 0x4869 will be sent, padded with zeros to 32 bits either before or after, depending on endianness. Wouldn't that affect the transmission? Could the other end, due to endianness errors, receive the string "iH"? It feels like I'm missing something.
@@Yaxqb Here we only consider storing integers. No one in their right mind would store text in an order where the last character has index 0, so endianness isn't an issue with strings. Strings are just a sequence of bytes in ASCII. However, with multi-byte encodings like UTF-16, endianness does matter. This is why we have the BOM (byte order mark).
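To make the BOM point concrete, here is a small hand-written illustration (the byte values are worked out by hand for this example, not produced by any particular library):

```c
#include <stdio.h>

int main(void) {
    /* "Hi" in UTF-16LE and UTF-16BE, each prefixed with the BOM U+FEFF */
    unsigned char le[] = { 0xFF, 0xFE, 0x48, 0x00, 0x69, 0x00 }; /* BOM, 'H', 'i' */
    unsigned char be[] = { 0xFE, 0xFF, 0x00, 0x48, 0x00, 0x69 }; /* BOM, 'H', 'i' */

    /* A reader looks at the first two bytes: FF FE means little endian,
       FE FF means big endian, so "iH"-style mixups are avoided. */
    printf("LE starts with: %02X %02X\n", le[0], le[1]);
    printf("BE starts with: %02X %02X\n", be[0], be[1]);
    return 0;
}
```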
My argument for "little endian" is that a if you have a pointer to a 32 bit value, but you know the value is small enough to fit in a 16 bit value, and you want to treat it likea 16 bit value you can use the same pointer, where in a big endian system that would need to be pointer + 2
I think little endian makes more sense from a programming point of view. Sure, sometimes it looks weird in hex editors, but I wouldn't sacrifice the ease of programming just to make the number look nicer in some cases.
Oh boy, this brings back some bad personal memories regarding endianness (with a nasty extra twist). Bear with me...
Almost two decades ago, I worked with industrial Sony IEEE1394 (FireWire) cameras. I was asked to write drivers for automotive vision software, with only a basic dev kit for the camera to start with. After studying the IEEE1394 specs and the (huge) datasheet of the camera, all I got out of the darned thing was garbage. Sony's own (closed) software did work, and so did the (very basic) C examples that came with the dev kit. But any attempt to read or write control registers failed. Only after wasting weeks (if not months) of time did I figure out that the data sheets had a huge blunder in them: they got the endianness wrong. I would have realized that a lot sooner, if it was not for an additional (even weirder) mistake: each byte (within a 2 byte word) had its bit order reversed (so 10100000 would come out as 00000101). It was a total scramble. Luckily, easy to fix once I figured it out. That was the last time I ever trusted a technical data sheet from Sony.
This sort of thing is quite common in embedded systems - I remember writing code for a device where the rule was "You can only do 16 bit access. If you do 8 or 32 bit access various terrible things will happen from data corruption to a bus error". I figured out the rule by peeking and poking around and only much later did I find out why it was like this - basically the device was 16 bit and the external memory interface was too. Anything but a 16 bit access required either two cycles (32 bit) or various combinations of word select lines (8 bit). The word select lines weren't connected and the device didn't support burst access correctly. So it was basically hardwired to 16 bit access only.
But once you know the limitation it wasn't too bad. As a wise man once said 'Don't memcpy to or from device memory. Registers aren't the same thing as memory!'
It seems that the only thing harder than creating working FW for cameras is creating complete datasheets for cameras. We once had an image sensor where the timing diagram had time running from right to left. Took us a while to figure that bug out from our FPGA code. It turned out that text translation is not always enough on datasheets :)
Altti Akujärvi Oh, timing diagrams! How I hate those. Once I was writing a driver for a bubble memory circuit (anyone remember those?) and I struggled for weeks with a reliability issue. It would work fine one moment and then I'd get gibberish out of the circuit the next time I tried. Turned out after me and a colleague hand-traced the code and compared every state change with the timing diagram that I had indeed flipped one bit wrong in one instance. As a result, it "almost" worked, like 99 times out of 100. So frustrating, and such a relief once we'd found the bug. 😊
@@alttiakujarvi In my case, a translation error was indeed part of the problem. Only the English version of the datasheets had the endianness wrong (the Japanese original didn't). The reversed bit order within bytes was probably an implementation error in the FPGA/ASIC, which Sony had worked around in their own software (of course they never mentioned/acknowledged that).
Oh my gosh, that's the EXACT problem I'm having right now. Freaking EEs!
Thank you, Jonathan Swift (Gulliver's Travels), for your 1726 contributions to computer science
Man I've heard these terms for probably 15 years and I've never had it explained in a way that I understood
You can apply this elsewhere too - we write normal decimal numbers big-endian, and so are telephone numbers (country code, area code, number), but postal addresses are little-endian (house number, road, city, country). Times are big endian (hh:mm:ss), but dates are little endian in the UK (dd/mm/yyyy) and middle-endian in the USA (mm/dd/yyyy) - though I guess the UK one is mixed in a way too as the day, month and year numbers are themselves big-endian.
... with the Japanese doing the sensible thing and going big-endian for addresses and dates as well.
The standard date format is ISO 8601 and is big endian using hyphens as separator, YYYY-MM-DD. Since nobody is foolish enough to use YYYY-DD-MM, it is the only unambiguous format.
@@remuladgryta I was smugly aware that the UK (New Zealand in my case) date system dd/mm/yyyy was more logical than the USA mm/dd/yyyy system, then my MSc thesis was for a project with Japanese collaborators and I met the yyyy-mm-dd system and was an instant convert. I didn't know that Japan was big endian in postal addresses too.
@@remuladgryta I'm foolish enough to use that. It's wonderful to put dates of that format into filenames of a group of related files, so that alphabetic sorting automatically puts them in time order. (Edit: I use dates similar to ISO 8601)
@@kc9scott but.... alphabetic sorting doesn't equal chronological sorting for yyyy-dd-mm? Sorting alphabetically is only the same as chronologically if you use yyyy-mm-dd
Little endian always just made more sense to me in design. Every variable type always keeps the least-significant value in the first memory location, so you can always just start doing your arithmetic on that, and increment memory locations however many times the variable size is to do your carrying. Otherwise you have to do extra math to calculate the memory location to start decrementing from. That would be slightly less complicated on a CPU with an instruction that can do a memory index, but it's still more complicated than it needs to be.
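A rough sketch of that idea in C: adding two multi-byte numbers stored little-endian, starting at the first memory location and carrying upward (the function name and byte width here are made up for the example):

```c
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Add two n-byte little-endian numbers (byte 0 is least significant):
   start at the first memory location and carry upward, exactly as the
   comment above describes. A sketch, not a production bignum routine. */
static void add_le(const uint8_t *a, const uint8_t *b, uint8_t *sum, size_t n) {
    unsigned carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned t = a[i] + b[i] + carry;
        sum[i] = (uint8_t)t;   /* the low 8 bits stay in this byte    */
        carry  = t >> 8;       /* any overflow moves to the next byte */
    }
}

int main(void) {
    uint8_t a[4] = { 0xFF, 0x00, 0x00, 0x00 };   /* 255 */
    uint8_t b[4] = { 0x01, 0x00, 0x00, 0x00 };   /* 1   */
    uint8_t s[4];
    add_le(a, b, s, 4);
    printf("%02X %02X %02X %02X\n", s[0], s[1], s[2], s[3]);  /* 00 01 00 00 = 256 */
    return 0;
}
```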
Thank you for actually explaining this instead of just saying "Death to big endians" like everyone else here
I like the idea that if you have a 32 bit number in range, say, 0 to 1000, you can read it as 16 bit from the same location. Also the fact that low bytes are in low addresses is logical.
Dr Steve, a TCP/IP video would be a nice xmas treat :)
This "big-endian" and "little-endian" is almost a text book case of a culturaly dependent naming, which migh actually make it harder for foreigners to understand the concept. Case in point: if you have never heard or read Gullivers travels in ENGLISH, you do not have the cultural background knowledge to suggest, that "we START with the big end" is a better way to understand "big-endian", than "we END with the big part", like I have understood it for the last decade.
Well now, I know.
Front end, back end. There are two ends, end doesn't mean last.
@@mytech6779 : except in the concept pair 'beginning and end'.
The use of a literary analogy is indeed weird for such a technical concept.
You don't need to understand the origin of the term in order to use it. (Also, the book is quite interesting, but then I'm not the kind of guy who always has to see the latest Marvel movie.)
I really don't think reading Gulliver's Travels is necessary to understand the terms, but they're certainly a bit obtuse.
You were SO CLOSE to giving the full proper explanation.
You went through showing how bits were numbered -- 0 to 31 from RIGHT to LEFT. Now, just take the next step: number the BYTES from right to left. The hexdump listing is just a convenience for humans --- it has no bearing on how the bytes are actually situated in memory (how a byte could be considered to be to the "left" or "right" of another byte is another debate) -- and is normally done left to right to read ASCII text within it. But we're talking about binary numbers here, so our addressing should be right to left.
So, if we put the byte at address 0000 to the far right, and the byte at address 000F to the left, now 00 C0 FF EE is spelled out correctly in little endian, and big-endian has it backward. (I think this will also give a sensible result for PDP11 ordering as well)
Little-endian puts the least significant bytes at the lowest address. From the hardware perspective, this is the most natural way. Big-endian twists that to make it easier for humans to read a hexdump.
Thank you! Our CS 101 English professor was demonstrating hexadecimal using the hex dump command in Linux. He wanted to show us how some Java programs started with "cafe babe"; to our surprise, the hex dump command displayed "feca beba", and our professor said it's because of the CPU being little endian and that it's unrelated to what we're talking about. This video explained it perfectly!
Great Video! I really like the Professor. He is clear on the subject he is talking about & has great hands-on examples which obviously would have resulted from his personal experience. That is great.
Most thorough and easy-to-remember explanation I've heard so far. The egg illustration helped. Thanks!
This brings up a common theme in computer architecture; do you make life difficult for the hardware architect or the software engineer. Putting the smaller bits first makes the microarchitecture easy to implement but makes debugging more difficult... and vice versa for big endian
If I had to choose between fewer bugs in software and fewer bugs in hardware, I would choose the latter.
@@bytefu If you don't want the programmers to throw their food at you, that's fine ha. Just make sure the hardware designers are out of reach. It just depends on the situation - another concept in computing systems architecture.
Alpha Delta Big endian makes type casts harder to do in software. But I have used little endian systems for decades and may be biased.
Somebody missed a huge opportunity to replace that boiled egg with a raw one. That would have been one heck of a prank.. :D
San Guchito Or just a soft-boiled one suitable for eating with a spoon. But he did fiddle about with the egg, so maybe he was quietly checking if it was internally wobbly, just in case of that prank.
Just downright spooky. Yesterday afternoon we got a client wanting us to integrate with a bespoke TCP interface. During the design meeting I brought up endianness as something we need to be clear on - literally the first time I've had to use the word in 15 years.
If you look at the three different kinds of address numbering needed for memory, they are
* byte addresses within memory (call this number B)
* bit numbers within a byte for masking (call this number b)
* bit numbers within a byte for integer digit values (call this number a)
In all little-endian architectures, we have
a = b
and
B = int(b / 8)
which is very simple and straightforward. But in big-endian architectures, the situation is more complicated. Also, another peculiarity of big-endian architectures is that CPU registers are still effectively little-endian! For example, consider an instruction sequence like
move_word a → b
move_byte b → c
Does c end up with the high or low byte of a? In little-endian architectures, it is always the low byte. In big-endian architectures, it is the high byte if b is in memory, but the low byte if b is a register!
Thus, the only truly consistent byte/bit layout is little-endian.
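A quick way to see the B = int(b / 8) relation on a little-endian machine, taking b to be the bit number within the whole word (a throwaway C check, not anything from the video):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint32_t x = 0x00C0FFEE;
    uint8_t bytes[4];
    memcpy(bytes, &x, sizeof x);

    /* On a little-endian machine, byte B holds bits 8*B .. 8*B+7 of the word,
       i.e. bytes[B] == (x >> (8*B)) & 0xFF for every B. */
    for (int B = 0; B < 4; B++)
        printf("byte %d: stored %02X, computed %02X\n",
               B, bytes[B], (unsigned)((x >> (8 * B)) & 0xFF));
    return 0;
}
```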
Wow. I've worked in software since 1989, obviously knew about this, but did NOT know the link to Gulliver's Travels. Thank you for enlightening me!
I like my eggs scrambled so I guess I like the PDP11.
Little Endian is basically the Western naming order.
Big Endian is basically the Eastern naming order.
Middle Endian is basically the American date format.
Have been working on network protocols for years and years. Now I am retired and catching up on all those books I always wanted to read. Gulliver's Travels is now in my e-book, and wow! Lilliput's wars about endianness. And now it seems like everyone knew except me!
BTW: Back in the 70s they couldn’t even agree about which way to number the bits! Some computers numbered them with bit 0 as the most significant bit and some with bit 0 as the least significant bit. It used to make life confusing sometimes. Luckily it was just internal notation in the circuit diagrams and only caused problems between the ears.
Actually most problems have their origin between the earphones.
I've been asking where the term "endian" came from for a LONG time! Thank You!
In the PLC world we have to be aware of our endian-ness for many applications.
In PLC's those individual bits matter. In some devices individual bits are digital inputs/outputs, and it's all in Octal which makes it even MORE difficult.
Interesting reference to Gulliver's Travels that I'd never heard. It's actually fitting in more than one way.
As a computer scientist, I appreciate the choice of hex number. Huge fan of c0ffee!
I hadn't caught that, nice.
On ARM you usually use DMA to move the data in and out of peripherals, and you can usually change the endianness in its settings, or by clever ordering.
Isn't 1101 D in hexadecimal? The animations show C as 1101
I was going to write the same comment.
In the 16 bit example (at around 7:00) they are all off: A is shown as 1011, B as 1110, C as 1101 and D as 1100. Maybe they aren't supposed to match there...
lol, that part brings my binary-hex conversion self-confidence down xD
probably should be pinned and upvoted. this is crucial for understanding
Just found this video and immediately thought WTF!, when shown hex C as 1101 @4:39. Those of us who know our hexadecimal, know that 1101 is D, and C is 1100. Pretty basic mistake for this channel!
When building up a number from a byte stream, based on the endian order you would either: store the byte (bitwise OR) and shift the accumulated value left by a byte before storing the next byte (big endian); or keep a counter and shift the read byte left by eight times the counter before ORing it into the variable (little endian).
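A C sketch of both approaches (function names are mine, just for illustration):

```c
#include <stdint.h>

/* Big endian: shift what has been accumulated so far left by a byte,
   then OR in the next byte from the stream. */
static uint32_t read_be32(const uint8_t *p) {
    uint32_t v = 0;
    for (int i = 0; i < 4; i++)
        v = (v << 8) | p[i];
    return v;
}

/* Little endian: keep a counter and shift each incoming byte into place
   (byte i is shifted left by 8*i bits) before ORing it in. */
static uint32_t read_le32(const uint8_t *p) {
    uint32_t v = 0;
    for (int i = 0; i < 4; i++)
        v |= (uint32_t)p[i] << (8 * i);
    return v;
}
```

Given the same four bytes EE FF C0 00, read_le32 returns 0x00C0FFEE while read_be32 returns 0xEEFFC000.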
A video about the PDP-endian encoding sounds like fun.
When I first got into reverse engineering using Cheat Engine on some video games, this blew my mind. Could have saved myself a lot of time if I'd have learned the basics _before_ trying to put them to use.
that's quite jolly and all that but he has two diet cokes on his desk, can we _really_ trust this man?
I've seen so many people with diet Cokes in so many different videos that I'm beginning to suspect a conspiracy here.
This stuff is undrinkable to normal humans. I think everyone who actually drinks diet coke is a literal alien. *tips tin foil hat*
@@raxxer1234 I am so used to people using multi-monitor setups that I didn't realize that Dr Bagley had these two separate, independent iMacs with separate, independent sets of peripherals.
One bottle is big endian. The other is little endian. That's why they're positioned at opposite ends of the keyboards.
Likes diet coke and rubber dome key switches. Clearly we can not! He's clearly shown himself to be in favor of the software ninjas! We must plunder his land!
It's so weird seeing a video that was filmed at night!
I find it really funny you've released this yesterday. I've been struggling with this for a week or two trying to create functions that work cross platform.
Great video and explanation. Thank you!
@8:00 Right on. Networks is where endianness mainly comes into play: network byte ordering. The internet is big endian, but a lot of CPUs are little endian.
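On POSIX systems this is usually handled with the htonl/ntohl family from <arpa/inet.h>; roughly like this (a minimal sketch, the value is just an example):

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl/ntohl on POSIX systems */

int main(void) {
    uint32_t host_value = 0x00C0FFEE;

    /* Convert to network byte order (big endian) before putting it on the wire... */
    uint32_t wire_value = htonl(host_value);

    /* ...and back to host order after receiving. On a big-endian host both calls
       are no-ops; on a little-endian host they swap the bytes. */
    printf("round trip: 0x%08X\n", (unsigned)ntohl(wire_value));
    return 0;
}
```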
OMGGG!! I HAVE AN EXAM ABOUT THAT IN LITERALLY 10HRS!!!
THIS VID SAVED ME!!
THANK YOU!
Y DO I SCREAM?!?
Is your test in 1 hour or 2?
HOW DID IT GO?
The video saved you? What, cracking open a book or asking your teacher was too much effort?
I'm a simple man. I see a Bagley Computerphile, I click. Bagley's the best!
He is so likable for some reason ha
+MichaelKingsfordGray I'm an informative man. I see someone clicking "down-vote" on a comment, I tell them that down-voting on comments most likely does nothing at all. Although it probably does get logged in Google servers somewhere - considering that after reloading the page it's still there - and we may never know the truth if it actually does anything or not. But, so far, I never saw an effect.
@MichaelKingsfordGray oof
Absolutely phenomenal description!
Pedantry I know, but the binary representation of 0xc at 1:36 should be 1100
At 4:52, I think you've written 0x00d0ffee. C (hex) in binary is 1100, not 1101. Maybe you should've had more doffee before writing out the binary!
Nice pun.
Reading the comments makes me think that endianess is the tabs/spaces debate for computer hardware engineers.
This channel is remarkably useful to help studying computer science. It explains a lot of concepts much better than my lectures do!
I open my egg by breaking the middle, then removing a ring and removing either the top or the bottom part of the shell intact, whichever is easier to remove. This is where I will put the rest of the shell as I peel it off. What kind of endian am I?
These guys make everything so interesting 😀
You know you've played too much minecraft when you think that his shirt is made of end stone
While most computers have a parallel implementation where all bits are operated on at once, there have been serial implementations which save circuits at the cost of speed by operating on one bit at a time. That was the case for the Datapoint 2200 and Little Endian makes the most sense for such machines. Even though the Intel 8008 was a parallel reimplementation of that machine, it was compatible with it and so kept the Little Endian design, as did the 8080, the 8086, 286, 386, 486, Pentium and so on. Motorola's 6800 was a from scratch parallel design, so it adopted Big Endian as did the 68000 and so on. When part of the 6800 design team moved on to the 6502 they wanted to be cheaper so reduced address registers (except the PC) to 8 bits and moved pointers to page zero in memory, where they would be easier to deal with if they were Little Endian.
Big endianness is advantageous in a lot of situations.
Since it would take a 32 bit variable like FEDCBA98 and turn it into FE DC BA 98 and that is still readable. Secondly, we can always fill in variables from the highest addressed byte if we move say a 16 bit variable into a 64 bit register, thereby solving all problems with accidental shifts. (Since we know the length of the variable, then we know where it ends in memory so we can start there and read it in reverse regardless. Thereby having 1 mechanism for moving memory and have all the advantages of being "little endian" while actually still being big endian.)
@@epsi Thanks for the correction, that spelling error should now be fixed.
The real question would be: why do we have the problem at all? Or in other words, why are bits counted from the right, but memory addresses counted from the left?
Actually, that's a thing too -- if you look at various hardware spec sheets, you'll see some count bits one way and some the other (and some need an editor because they switch from one figure to the next). The IETF usually standardizes on big-endian for its network protocols, but argues that this applies to the bit ordering as well: counting the most significant bit (MSB) as bit 0 and counting up as you approach the least significant bit (LSB). That said, counting bits from the LSB to the MSB makes more sense mathematically: bit 0 is 2 to the power of 0, bit 12 is 2 to the power of 12, etc.
In general, it's the software people that care about endianness, not the hardware people. It's fairly trivial to rearrange bits any which way in hardware -- just wire the signals that way. In software, it's harder unless you have dedicated hardware available to do it for you (e.g., via an instruction or DMA transformation).
The bit number corresponds to the power of 2 that that position represents, with the smaller 2^0 (i.e. 1) being on the right.
As with most things, it just sort of ended up that way. We could also write a hundred and three and a half as: 5.301
But it just happens that we generally write decimals Most Significant Digit first: 103.5
@aullik The fun doesn't end here; in many cases memory is visualised vertically, but here we also have a dichotomy:
In some of those cases, it makes most sense to have the lowest addresses at the top, e.g. when you look at some program code (in particular assembly language) you'll have the start of the program at the top, i.e. at the lowest address.
In other cases, for example when describing the hardware architecture on a given system, you'll often see that the lowest address is at the bottom.
While this may seem confusing at first, it makes perfect sense within each context.
MichaelKingsfordGray Yep. Arabs write numbers little endian for easier adding, but write everything right to left. When their number notation was imported to Europe we kept the visual format but kept our left-to-right writing, resulting in big endian decimal numbers with Arabic shapes of Indian digits.
@9:48 Does endianness slow down things? Yes (agreed). @10:11 "...these days at these clock speeds we are dealing with, the slow down won't be noticeable..." Ah, not so! Okay, for a PC (whatever OS) class processor, sure. But not so with embedded systems! Garden variety M0, 8-bit processors, etc... are USUALLY little endian, and having to stop and swap around transferred bytes is a headache. Start little endian and end little endian. That said, Dr. Steve Bagley was correct about reading little endian data in a debugger, BUT you get used to it. The real hassle is reading big endian data in a debugger when your brain has been trained to read little endian data in a debugger.
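When no library helper is available, the swap itself is just shifts and masks; a portable C sketch (many compilers also provide a builtin for this, and ARM cores have a REV instruction, but the plain C version makes the extra work explicit):

```c
#include <stdint.h>

/* Portable 32-bit byte swap: moves each byte of x to the opposite end of the word. */
static uint32_t bswap32(uint32_t x) {
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}
```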
steveandamyalso An even bigger hassle is when hex dumping tools insist on arbitrarily bundling bytes into words before printing, when those bytes don't represent word data. Looking at you dd.
@@johnfrancisdoe1563 Wow, I have never experienced such a debugger. That REALLY sounds like an annoyance.
This is an eggcellent eggsplanation..
..apart from the binary number representations being wrong.
So we write left to right with the leftmost digit being the most significant. But our model of the memory layout is apparently with the leftmost byte having the smallest index.
Indeed, as he briefly mentioned it makes the wiring a bit simpler and usually isn't a big issue.
I was always wondering why ffmpeg gives me two options for uncompressed signed 16-bit PCM encoders: pcm_s16le & pcm_s16be, but I was always too lazy to google. Very interesting to learn about the history and that it, as always, comes down to one standard being more readable while the other one is more machine-friendly.
"machine-friendly" is a loaded term, as Dr. Bailey said, some machine were built to read the bytes like you would an ordinary base 10 number, and others put the "least significant" bytes up front (like most Intel machines) in multi-byte numbers.
@@dustysparks It's still more machine friendly in the sense that it's easier to wire a little endian machine (which probably allows for more efficient processors).
Little endian is consistent given it keeps the most significant bits in the higher addresses. It only looks confusing when it's viewed left-to-right. The bits are MSB to LSB, so the addresses should be the same.
I think I'll wait for How to Basic to cover this.
I was brought here from a C++ pointer video!! XD I learned a lot more than I thought I would thank you!
Awesome Explanation buddy. Thanks a lot a lot for sharing this ...
The binary spells DOFFEE!
Not to be confused: ABCD in hex -> 1010_1011_1100_1101 in binary (not 1011_1110_1101_1100 as shown in the video).
The problem is that humans have decided to say numbers by their most significant digit first.
(Is there any culture that doesn't? I know Germans switch the tens-digit with the ones-digit: "zwei-und-vier-zig" instead of "four-ty-two", but otherwise it's like in English.)
Then, if you put long numbers in computer memory, you have to decide whether to put them in addresses according to their significance (little-endian) or according to the order humans say them (big-endian). If someone in the past had decided that a herd of sheep that could be divided into three single sheep, five groups of ten and one group of a hundred should be called "351 sheep", no one today would think of using big-endian.
Another thing: Instead of writing 0x00C0FFEE in little endian as 0[EE] 1[FF] 2[C0] 3[00] you could also write 3[00] 2[C0] 1[FF] 0[EE].
(Hm... You have to distinguish between written and spoken numbers. With spoken numbers, you actually tell the significance with the number, you don't just say one digit after the other.)
You guys make awesome informative videos...
Thing is, if you think about it, little-endian is objectively better.
The problem is that we learned to write numbers from the Arabs, and the Arabs write right-to-left. We took over that order and still write numbers right-to-left in the middle of our otherwise latin left-to-right text. When we say numbers, we also say the digits in the less sensible way.
The problem is that if you look at numbers written in little-endian in hex, the hex digits themselves, per-byte, are still big endian. If you'd write numbers in little-endian in real life, so 0xEEFF0C00 for example, it would make perfect sense to store the 0xEE in the first byte, and if you'd read it back, it'd still be 0xEEFF0C00. It's just a matter of being used to the wrong thing.
Nice explanation, Thank you.
This could be one of the culprits that 3d programmers are facing when they're parsing 3d models and porting from one system to another.
A character in one (or two) of William Gibson's books is called Bigend.
Interestingly enough, among mainframe programmers, most also count the bits starting from the most significant bit too.
When it comes to visualizing endianness, I see little endian being the one that drops down in the memory addresses, but the memory addresses are going in the opposite direction.
Little endian makes sense if you think of it as a polynomial (x=2^8, m_i represents what is stored in memory location i)
m_0*x^0 + m_1*x^1 + m_2*x^2 + ...
The memory address offset matches the exponent. This simplifies programming bigints a lot, for example.
Besides, you could always write your memory addresses on paper right-to-left, giving you the natural way it works in our number system, while the indexes still make sense mathematically. Just like you labeled your individual bits!
It was until recently I realized big/little endian had nothing to do with Indians
1 and 10 and 11 little endians
100 and 101 and 110 little endians
111 and 1000 and 1001 little endians
1010 little endian bytes
Mirza Brunjadze Actually, the -ians suffix is the same in Indian, Endians and Redmondians.
My first gripe with the entire discussion of endianness is with the way the numbers are written down to begin with.
Just take a look at the way the bytes were addressed here (the numbers in brackets are the indices in my example):
[0][1][2][3][4][5][6][7]
THEN the question is asked "do we put the big numbers first or the little numbers first?". But the way the bytes are addressed is already not consistent with the way we "address" bits.
An integer has the LSB on the right but it also has the index 0 on the right, which means an integer has the LSB in index 0. So why the hell even start with 0 on the left and then ask "left or right?" instead of applying the same indexing to bytes as one does to bits to stay consistent?
Like nobody has a problem interpreting 110 as 6, or 0x45 as 69 while automatically thinking 110 at index 0 should return 0 and 0x45 at index 0 should return 5.
Applying the same logic to [ab][cd] index 0 should return cd. Even though this would correspond to little endian, I think the entire debate should've never existed in the first place.
What's funny is that on the Apple II's video memory the bits are also little-endian (the rightmost bit is the leftmost pixel)
I'd really like an explanation of why on little endian machines like the 8086 etc, the bit ordering within the byte is still big endian. That part never really made sense to me. Like, if it were really little endian, shouldn't it go: 2^0, 2^1, 2^2, 2^3, etc, where the nth bit represents 2^n? That just makes so much more sense than 2^7..2^0, 2^15..2^8, ...
One byte is treated as one undividable unit by the computer. The way it's written out just depends on what humans feel like writing (which is most often big-endian).
Wow, my latest lecture in university 2 days ago was about this topic. What a coincidence :D
RISC-V is little-endian to resemble other familiar, successful computers, for example, x86. This also reduces a CPU's complexity and costs slightly because it reads all sizes of words in the same order.
what was the reason to store it the other way around in the first place?
How many systems do not use memory addressed in bytes? On the Intellivision console, a memory address could have a different number of bits: some address ranges used bytes, some 10-bit words and some 16-bit words.
Bobaflott The principle isn't limited to modern 8-bit bytes. It also works for the older 6-bit bytes used in some systems, for single-bit notation, or for any other length.
Modern systems use byte addressing thanks to IBM. Early systems were often word-addressed. While that works well for calculations, it works less well for processing text-based information, which IBM wanted to do.
little endian hurts my head, what kind of monster invented the bitmap image format??!!
1:05 You made me get coffee right away :D
Are there high-heeled and low-heeled computers?
Hmm, from what I know from working with a few file formats, endianness doesn't go down to bits. It just affects bytes. Let's consider a 16-bit number 0xEEAA: in big endian it would be 1110 1110 | 1010 1010 and in little endian it would be 1010 1010 | 1110 1110. Notice that the bits stay the same. Only the bytes move. Isn't that how it works?
That's right. Another example: You have 0x12345678 in a register. You store that in memory on a little-endian cpu. The bytes are stored 78 56 34 12
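You can see it for yourself with a few lines of C (a minimal sketch; the commented output assumes a little-endian machine such as x86):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t x = 0x12345678;
        unsigned char *p = (unsigned char *)&x;   /* look at the raw bytes */
        for (int i = 0; i < 4; i++)
            printf("%02X ", p[i]);                /* little-endian: 78 56 34 12 */
        printf("\n");                             /* big-endian:    12 34 56 78 */
        return 0;
    }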
At this moment in time, nobody has finished the video, but some are very close
You're underestimating my speed setting.
I ended the endian video.
@@tabaks Did you watch it in little endian or big endian?
Aw. Dereferencing is so fun when dealing with endianness. Big endian ftw
How can endianness simplify hardware design? Can someone explain? It was explained around 7:20, though I didn't get it.
Least significant bits are always in the first byte, no matter if it's an 8-bit, 16-bit, 32-bit or 64-bit number - easier for hardware to manage. >Sean
I suppose I'm in the independent side of things, Reverse-Endian. An endianness no one talks about and by far the most logical. Who's with me?
In addition to the mentioned advantage of little endian, I think we're writing numbers in English the wrong way around. We got them from Arabic, which is a right-to-left language, but didn't change the order. If the order were the other way around, one wouldn't need to space-align a list of numbers like:
1
1234
56
789
And what debugger or hex editor can't convert numbers from whatever format to whatever format? Clearly little endian is more sensible. :P
Thank you!!! It took me ages to find someone spotting the actual problem with the argument over which endianness is "more sensible".
I wonder why the one with the big end (end as in last bit) is called the little-endian and the one with the small end the big-endian.
As explained in the video, it's a reference to Gulliver's Travels, where the Big-Endians ate their eggs starting at the big end, and the Little-Endians ate their eggs starting at the little end. Don't think of 'end' here as the side opposite the beginning. There are two ends, like a rope, or a flat Earth.
Super helpful, thank you!! :)
Why, with little endian, are the bytes reversed but the bits within a byte aren't?
There is no concept of bit order in a byte from a programmer's perspective, because bits are not addressable. The programmer just sees a number in the range 0-255. Of course, in languages that support it, you can write numbers in binary, and generally this is done with the most significant bit on the left, but this representation has nothing to do with how the number will be stored in hardware.
The numbering of the bits is usually based on their mathematical weight, e.g. bit N has the value 2^N. So a higher-numbered bit has a more significant value. This inherently gives a little-endian numbering of the bits within each byte. To be consistent with this, you'd also want little-endian numbering of the bytes within a word. From the perspective of a computer, with little-endian the bytes are NOT reversed. It's big-endian that has reversed bytes. When you add or subtract two numbers, you must always start the computation at the least-significant digit, and progress until you finish at the most-significant digit.
The thing that is actually reversed is the way that humans prefer to read/write numbers, since we always want to understand the big-picture part of the story first, and only later worry about the details.
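In C terms, the numbering convention described above looks roughly like this (just a sketch, nothing machine-specific except the last comment):

    #include <stdint.h>

    /* Bit n has weight 2^n, so the bit index already counts from the little end. */
    static unsigned bit_at(uint64_t x, unsigned n)
    {
        return (unsigned)((x >> n) & 1u);
    }

    /* With little-endian byte order this stays consistent: bits 8*i .. 8*i+7
       of the value live in the byte at address offset i. Big-endian breaks
       that simple relationship, which is the "reversal" being described. */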
Because the bytes are at addresses. Sort of like how houses are at addresses.
Maybe in one city, addresses go west to east. So on a street with four houses, the furthest west house on the street has address 1, the house next to it has address 2, and the furthest east house has address 4.
If the city decided to flip things around and change all the house addresses to run east to west instead, the floor plans of those houses wouldn't change. The floor plans wouldn't mirror just because the addresses have been flipped. The houses just have different addresses, but each individual house is still one unit.
An individual bit does not have an address. They are just one room of a house, that is, a byte.
@@armpitpuncher and @TOMYSSHADOW In some cases there IS a concept of bit ordering within a byte. Most computers don't have machine-level bit addressability, but I'm sure a few obscure designs do (or did) have it. Either way, software will sometimes have a need for the concept of an array of bits, often of arbitrary length. Data like that will just about always have multiple items packed into each byte. An example of this is an I/O library for image files, where one of the possible image bit-depths is 1 bit per pixel. Such software (and the file formats that it supports) will establish convention(s) for the ordering of bits within a byte.
Kevin Scott Actually, a typical PC does have bit addressing in both directions. Modern Intel CPUs have instructions like BTS (Bit Test and Set) which use little endian bit addresses within bytes. All PC graphics cards support modes with 1, 2 or 4 bits per pixel, with the pixels stored big-endian in each byte. Most also support 16 bits per pixel with the bytes of the 16 bit value stored little endian in that same video RAM.
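To illustrate what such a bit-order convention looks like in code, here's a sketch for rows of 1-bit-per-pixel image data (MSB-first is what formats like PBM and 1-bpp BMP use, as far as I remember, while XBM packs LSB-first):

    #include <stdint.h>
    #include <stddef.h>

    /* Leftmost pixel stored in the most significant bit of each byte. */
    static unsigned get_pixel_msb_first(const uint8_t *row, size_t i)
    {
        return (row[i / 8] >> (7 - (i % 8))) & 1u;
    }

    /* The opposite convention: leftmost pixel stored in bit 0. */
    static unsigned get_pixel_lsb_first(const uint8_t *row, size_t i)
    {
        return (row[i / 8] >> (i % 8)) & 1u;
    }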
At 4:38, shouldn't hexadecimal C be binary 1100, and not 1101 as shown in the video?
Is it weird that i've never seen a brown chicken egg in real life?
Isn't c=1100 ?
Correct me if I'm wrong... I'm still learning
1 hour before the exam , and this got me curious to know about endianess 😂
the sound of writing on the paper is killing me
Can I offer you a nice egg in this tryin' time?
What about hexadecimal 50? How will it be represented in little or big endian? This endian stuff still confuses me. All I see are very trivial examples.
0x50 fits in one byte, so it looks the same either way; endianness only matters once a value spans more than one byte.
Wait, doesn't this go down to the bit level? Like, aren't the bits in these bytes arranged with endianness as well?
As a fan of the 68000 I'm going to have to agree that big-endian is the only logical choice.
10:21 "the rest of the transmission probably is in ASCII anyway" wait what, is there not endianess in ASCII as well??
No because ASCII is only 8 bits!
@@jamessadventures1380 OK, so one character is always one byte, which means that endianness doesn't affect anything?
So if I send "Hi": H = 0x48, i = 0x69
Which means 0x4869 will be sent, padded with zeros to 32 bits either before or after, depending on endianness. Wouldn't that affect the transmission? Could the other end, due to endianness errors, receive the string "iH"? It feels like I'm missing something.
@@jamessadventures1380 Technically 7 bit
@@flyingskyward2153 Indeed!
@@Yaxqb Here we only consider storing integers. No one in their right mind would store text in an order where the last character has index 0, so endianness isn't an issue with strings. Strings are just a sequence of bytes in ASCII.
However, with multi-byte encodings like UTF-16, endianness does matter. This is why we have the BOM (byte order mark).
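For the "Hi" example above, this is roughly what ends up on the wire in UTF-16 (the byte values follow from the code points; the array names are just for illustration):

    #include <stdint.h>

    /* "Hi" is U+0048 U+0069. In UTF-16 each character takes two bytes, so
       byte order matters, and the BOM (U+FEFF) up front tells the receiver
       which order was used. */
    static const uint8_t hi_utf16le[] = { 0xFF, 0xFE,    /* BOM, little-endian */
                                          0x48, 0x00,    /* 'H' */
                                          0x69, 0x00 };  /* 'i' */
    static const uint8_t hi_utf16be[] = { 0xFE, 0xFF,    /* BOM, big-endian */
                                          0x00, 0x48,    /* 'H' */
                                          0x00, 0x69 };  /* 'i' */
    /* Plain ASCII sidesteps the issue: "Hi" is just 0x48 0x69, one byte per
       character, sent in reading order. */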
The only thing I thought little endian made easier was truncation (converting a short to a char if it's in range, etc.)
In Python 3.10.11, converting 0xc to binary results in 1100, or '0b1100', slightly different from the video
Thank you sir
My argument for "little endian" is that if you have a pointer to a 32-bit value, but you know the value is small enough to fit in a 16-bit value and you want to treat it like a 16-bit value, you can use the same pointer, whereas on a big-endian system that would need to be pointer + 2.
That seems like an open invitation to bugs, especially if anyone other than you will ever touch the code, to save 2 bytes of memory.
@@LikelyToBeEatenByAGrue you've never come across a case where an int is cast to a short?
@@aarondavis5386 sure. The thing I've never seen is someone using an int as a short by piling another int on top of its address
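For what it's worth, here's what the width trick from the parent comment looks like in practice (a sketch only; going through memcpy avoids the strict-aliasing trouble that a raw pointer cast would invite):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t wide = 1234;        /* known to fit in 16 bits */
        uint16_t narrow;

        /* Little-endian: the low 16 bits start at the same address. */
        memcpy(&narrow, &wide, sizeof narrow);
        /* Big-endian would need the +2 offset instead:
           memcpy(&narrow, (const unsigned char *)&wide + 2, sizeof narrow); */

        printf("%u\n", (unsigned)narrow);   /* prints 1234 on a little-endian machine */
        return 0;
    }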
I guess I'm a sidean egg breaker.
I think little endian makes more sense from a programming point of view. Sure, sometimes it looks weird in hex editors, but I wouldn't sacrifice the ease of programming just to make the number look nicer in some cases.
I'm guessing you've never had to implement cryptography on an Intel machine at the C++ level. I envy you... it's a NIGHTMARE.
Yeah, and then we also have serial communication, with its own rules about which bit goes in and out first 😄