I'm only halfway through your video, and I can already say that your explanation is way better than the Computerphile explanation. Well done ;)
This is a REAL headache when working with images at a 'low level'. I was writing a program for fast scan processing; it had to rotate skewed scans by 2 or 3 'shears', that is, shifting entire rows left or right (horizontal shear; there is also vertical shear, a funny thing too, but it doesn't depend on endianness so much). To get good performance, these operations should be done one register at a time, 32 or 64 bits, no matter what the pixel format is. Turns out: it matters A LOT. Pixels go in the file one after another, left to right. But when you load a whole 32 bits into a register, it gets 'reversed', so now a simple shift operation kills the image entirely.
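A minimal C sketch of that failure mode, assuming a hypothetical 1-bit-per-pixel row with the leftmost pixel in the most significant bit of the first byte (the comment doesn't name the actual pixel format). A plain 32-bit load on a little-endian CPU puts the first file byte at the low end of the register, so the shear needs an explicit big-endian load before shifting:

#include <stdint.h>
#include <string.h>

/* Naive load: the byte order in the register depends on the host CPU.
   On little-endian, p[0] (the leftmost pixels) lands in the LOW byte. */
uint32_t load_native(const uint8_t *p) {
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}

/* Endian-independent load: p[0] always becomes the most significant byte,
   so a right shift moves pixels visually right on any host. */
uint32_t load_be32(const uint8_t *p) {
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Shear one 32-pixel word of a row one pixel to the right
   (carry between adjacent words omitted for brevity). */
void shift_word_right(uint8_t *p) {
    uint32_t v = load_be32(p) >> 1;
    p[0] = (uint8_t)(v >> 24);
    p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);
    p[3] = (uint8_t)v;
}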
*GARY!!!*
*Good Afternoon Professor!*
*Good afternoon Fellow Classmates!*
Moral of the lesson: Computers are dyslexic but work through it! lol
MARK!!!
MARK!
The Omnipresent MARK!!!
Good evening!
GARY!!!
MARK!!!
ZAMAN!!!
FREEDY!!!
BRIAN!!!
The CPUs have it the right way around; it's everyone and everything else that has it backwards.
Little-endian really is the right way around, as human readability is less important than hardware efficiency. Just because the Internet uses big-endian doesn't mean it is right. Frequency analysis of numbers will show you that the least significant parts of numbers are the most frequently used, so placing them first makes more sense, especially if you are going to compress the data. Add to that the frequency analysis of CPU opcodes (the most frequent being lower in number), apply that information to a compression algorithm and hardware decoder that lets _more_ CPU instructions fit in a single block of cache, and you have a CPU that will be *much* faster than today's CPU technology.
What on earth do you mean by "The CPUs"? As for which CPUs require data to be little-endian oriented, it's basically just Intel now that DEC Alpha is dead.
Given that, maybe you would expect Intel CPUs to run more efficiently than other CPUs, but quite the opposite is true. Checking bit n of a 64-bit number takes the same amount of time on a big-endian or little-endian processor anyway. The only reason Intel is little-endian is that they could make their early CPUs with fewer transistors by reading memory a little piece at a time. Modern processors don't read memory one byte at a time unless they want to run like dog shit.
I mean, by your logic the 8088 should have been the fastest CPU on earth with its 8-bit data bus. Yeesh.
In x86, shifts, even when operating on an address, are simply mul/div by 2, X times. It's arithmetic on the value; system endianness isn't taken into account.
Some architectures may have instructions that implement a bit shift across an address range, but they should all have a mathematical shift.
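A small C illustration of that distinction (just the standard observation, nothing specific to the video): the shift itself is pure value arithmetic, and endianness only shows up when the value's bytes are viewed through memory:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t x = 0x00000300;            /* 768 */
    printf("%u\n", x << 1);             /* 1536 on every architecture: shift == *2 */

    uint8_t bytes[4];
    memcpy(bytes, &x, sizeof bytes);    /* now endianness matters:       */
    printf("%02X %02X %02X %02X\n",     /* 00 03 00 00 on little-endian, */
           bytes[0], bytes[1],          /* 00 00 03 00 on big-endian     */
           bytes[2], bytes[3]);
    return 0;
}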
@@cheako91155 Ugh... ok... that is just making it even MORE confusing for me. :p
I used to be a little-endian guy, but after reading the feeble comments below defending little-endianness, and since I can't stomach big-endian, I must now advocate the compromise: middle-endian. Not just for words, but for all of RAM. The first 32-bit word should have its bytes arranged as 3, 4, 1, 2. Then the first four words of memory should be c, d, a, b. Then the first four blocks of 16 bytes each should be C, D, A, B, and so on.
PDP-11 FORTRAN, anybody? That had a convention like you describe for storing 32-bit integers.
@@lawrencedoliveiro9104 Yes, that was known as PDP-endian :)
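For the curious, a sketch of that PDP-endian layout: 32-bit values stored as two little-endian 16-bit words with the most significant word first, so 0x0A0B0C0D becomes the byte sequence 0B 0A 0D 0C:

#include <stdint.h>

void store32_pdp(uint8_t *p, uint32_t v) {
    p[0] = (uint8_t)(v >> 16);  /* high word, low byte  -> 0x0B */
    p[1] = (uint8_t)(v >> 24);  /* high word, high byte -> 0x0A */
    p[2] = (uint8_t)(v);        /* low word, low byte   -> 0x0D */
    p[3] = (uint8_t)(v >> 8);   /* low word, high byte  -> 0x0C */
}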
I have always struggled to understand this concept.
But your explanation really helped me!!
Thanks from Madrid, Spain
Now if Gary would explain why dates are written differently: is 12/1/2021 December 1st or January 12th?
Wonderful explanation! Thank you Mr. Explains!
I am a big proponent of little endian. First, as you said, little endian is easier for computers to work with, and big endian is easier for humans, because that's the way we write numbers. Now I say the way we write numbers is wrong. We use Arabic numerals, and Arabic is written right to left. However, we copied the order of their numbers, so our numbers are actually right-to-left in the middle of left-to-right text.
Now you might say that we don't only write numbers as big endian, we say them like that too. Well, I have two things to say to that. First, some Germanic languages still haven't reversed the order of the tens and ones when speaking (German and Dutch), so 27 = seven-and-twenty. All other languages have just been influenced by the writing system too much. And before we had Arabic numerals, we didn't count much at all, and words for large quantities were rather approximate.
Now this is how I think about little-endianness: let's say big endian is
FEDCBA98 76543210
then you'd say little endian is
76543210 FEDCBA98
right? Well, try thinking of it like
01234567 89ABCDEF
That's also how little-endian chips work at the silicon level; they don't worry about endianness. Suddenly bitshifting works too. Routines for converting numbers to decimal or hex are also easier if you swap the order around. The only thing that's harder is humans, and I say humans are wrong here.
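One concrete payoff of that view, as a hedged C sketch: on a little-endian machine the first bytes of an integer are its low-order part, so reading a narrower width at the same address is plain truncation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t big = 0x12345678;
    uint16_t half;
    uint8_t byte;
    memcpy(&half, &big, sizeof half);  /* little-endian: 0x5678 == big % 65536 */
    memcpy(&byte, &big, sizeof byte);  /* little-endian: 0x78   == big % 256   */
    printf("%04X %02X\n", half, byte); /* a big-endian host gives 1234 and 12  */
    return 0;
}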
We use Latin/Italian/German decimal positional numerals based on the Arabic numerals, which were the Sanskrit decimals used across much of Central Asia, which probably stemmed from the Indian positional number system, which was predated by a Chinese decimal positional number system that lacked zero.
But I think you're right, I have one more argument:
For most purposes, in tables, we align numbers to the right, so that the scale of a number stands out. If we were to read the number from the right, we would align them to the left, like all text.
Wars have been fought over little end vs. big end!
Sadly, no mention of Gulliver's Travels.
one little, two little, three Little ENDIANS
great explanation
Great video, just make sure to flip the video image next time. (Your camera is mirroring)
Good explanation as always Gary, Thanks very much as this helps me as a budding Assembly programmer!
Written text is always "big endian", because when it comes to sorting, the first character is the most significant. One advantage of big endian is that you can compare two strings of bytes the same way without knowing whether they are meant as numbers or as text (sketched below).
Besides these little differences, the advantage of one over the other is just as minor as whether you drive on the left or on the right. With modern CPUs you can even select the byte order you like, so who cares after all?
When it comes to "human readability": if you are trained as a "little endian guy" like me, you can read hex dumps as fast as any "big endian guy" can read his dumps. It's just a matter of how your brain is wired; only switching is hard. It's more important to know what the NOP instruction's code is, and the hex 90 in Intel CPUs always annoyed me.
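A small C sketch of the comparison property from the first paragraph, assuming unsigned 32-bit keys serialized big-endian: memcmp on the raw bytes then agrees with numeric order, so one comparator serves both numbers and text:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

void put_be32(uint8_t *p, uint32_t v) {
    p[0] = (uint8_t)(v >> 24);
    p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);
    p[3] = (uint8_t)v;
}

int main(void) {
    uint8_t a[4], b[4];
    put_be32(a, 300);       /* 00 00 01 2C */
    put_be32(b, 70000);     /* 00 01 11 70 */
    /* memcmp scans left to right, most significant byte first, so the
       byte comparison matches the numeric comparison. With little-endian
       storage it would compare the LOW bytes first and often get it wrong. */
    printf("%d\n", memcmp(a, b, 4) < 0);  /* prints 1: 300 < 70000 */
    return 0;
}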
*GARY!*
*Good afternoon, Professor!*
*ZAMAN!*
What a great example, picking a symmetric, zeros-and-ones-only number to explain digit significance for a decimal number :P
Gary, thank you! I get it now!
Glowies are the ones behind little-endian.
Awesome, many thanks!
Nice video! I can't wait for next one 👍
So what are high-heels and low-heels in computing?
What do you recommend as a big-endian box for a developer who wants to make sure that software works properly on big-endian boxen?
Linux has QEMU, which lets you run various architectures in software emulation, including big-endian ones like PowerPC. It may be slow, but it should be good enough for checking the endian independence of your code, at least.
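A tiny C probe you could compile for the emulated target to confirm which byte order you are actually running under (the union trick reads implementation-defined byte layout, which is exactly what is being probed here):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    union { uint32_t u; uint8_t b[4]; } probe = { .u = 0x01020304 };
    /* The byte at the lowest address reveals the storage order. */
    puts(probe.b[0] == 0x04 ? "little-endian" : "big-endian");
    return 0;
}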
When reading numbers in text I have to mentally right-align them and count the total number of digits, aided by the grouping of thousands. You can't just read the 1 in 101 and say it is a hundred without also reading and counting the less significant digits. I find Intel byte order in a hex dump intuitive: it is automatically aligned by the more common small values, I don't need to care how long the word in the given file type is defined to be, and I can address a number by where it visually begins, often after a string of nulls.
What is backward about little-endian is the naming, because the little end is actually at the beginning.
thanks bro
0:50 Well, when it comes to Pi - the most significant digit is actually the last you can find
0:19 That’s the convention in human languages written left-to-right. Those written right-to-left have some different ideas.
Are you familiar with right-to-left languages? They write numbers left-to-right.
@@telocho Reality is a bit more complex than that.
The idea isn't about left and right. It's about the fact that you start with the least significant bit and read up. For example (not 100% the same, but using the decimal system for familiarity), we read two hundred and forty-two, but the computer reads two, and forty, and two hundred. Left or right doesn't matter; it's the order of significant digits, and all modern cultures that I'm aware of read numbers from most significant to least significant.
So which bit has the lowest address: the least-significant or the most-significant?
@@lawrencedoliveiro9104 depends on the architecture
Is that the reason why Americans write month/day/year, or why we Germans emphasize the ones over the tens, as in zwei-und-vierzig (two and forty)? ;-)
Old English used to do the same thing: 42 would be pronounced as "two and forty". I'm not sure when it changed, though.
G H Still do it from 13 to 19
No, little-endian is not “backwards”. There is no front or back or up or down or left or right in RAM, there are only addresses -- byte addresses and bit offsets. And there are two ways to look at bit offsets, depending on whether you are working with binary digits in an integer or bit fields within some random structure.
Big-endian can never get the three numberings entirely consistent -- only little-endian can.
I read that as if s/three/their/. It's a question of, at address A, wanting to test whether bit position 8 (where position 0 is the first position, not 1) of a dword is set: is that (1 & *(A + 2)) or (1 & *(A + 1))? That's big and little endian respectively.
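To pin down the arithmetic being debated, a hedged C sketch (A points at a 32-bit value; bit 0 is the least significant bit): on little-endian the byte index for bit n is simply n/8 regardless of width, while big-endian also needs the total width, which is why the corrected expressions are *(A + 1) and *(A + 2):

#include <stdint.h>

/* Test bit n of a 32-bit value stored at A, little-endian layout. */
int test_bit_le(const uint8_t *A, unsigned n) {
    return (A[n / 8] >> (n % 8)) & 1;      /* n == 8: low bit of A[1] */
}

/* Same test for a 32-bit big-endian layout: A[3] holds bits 0..7. */
int test_bit_be(const uint8_t *A, unsigned n) {
    return (A[3 - n / 8] >> (n % 8)) & 1;  /* n == 8: low bit of A[2] */
}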
@@cheako91155 Consider the 680x0 family, where the single-bit instructions numbered the bits one way, but the bitfield instructions introduced in the 68020 and later numbered them the opposite way.
And in PowerPC, the bit numbering is completely the opposite of the binary-digit numbering.
So you see, big-endian can never get it consistent.
@@lawrencedoliveiro9104 Endianness doesn't apply to registers, even if in the assembler they are given numbers instead of names, as all registers inherently have numbers associated with them. I understand that one can twiddle bits in assembled code to change which register an instruction operates on, as though altering a pointer, but such operations are separate from the endianness of an architecture.
As for the 68000 series, I assume the only place where endianness comes into play is during loads and stores, and I've typically seen both orders supported.
@@cheako91155 Funny thing, registers are another example of the inconsistency of big-endianness. Consider a sequence of instructions like
move_long a to t
move_byte t to b
Does b end up with the bottommost or topmost byte of a? In little-endian, it’s always the bottommost byte. But in big-endian, it depends on whether t is a register or not! Effectively, even in big-endian, registers are still little-endian!
@@lawrencedoliveiro9104 You cannot use a word register in a byte move op. In some way, either inherently or explicitly, you are specifying what part of the register to use, or rather you specify a register that can also be operated on as part of a long op. Though, as I already said, such topics are outside the topic of endianness, because registers and register slices can't have meaningful pointers. With addresses I think we agree that the address is always that of the lowest byte affected (i.e. *(A - 1) is never read or written, and *(A + 3) holds the most significant byte on little-endian).
Excellent explanation! A hint of data/parameter marshaling would be nice, though. Thanks, Gary!
*takes a deep bow*
Teacher.
big indian, LITTLE indian
Nice 🙂
I think you exaggerated how hard it is to read little endian in memory dumps. I feel your use of Internet byte ordering as an argument for big endian is pretty weak. You must know your data types and use them appropriately to be an effective programmer. Switching byte orders around in my head and in code is something I'm pretty comfortable with.
Little endian was always perverse, and a right pain when reading hex dumps. I cut my teeth on the IBM 360/370 architecture, which was big endian. Intel's processors were a shock when I first fiddled with them (without even considering that dreadful segmented memory addressing model).
Of course, ARM started out as little-endian, but it's now bi-endian, which must create some fun.
NB: data transfer using binary is something of a nightmare between machines of different architectures, which can't even agree on matters such as byte ordering, lengths of integers, data alignment, and even support for variations on BCD (for those architectures that support it natively). Hence so much stuff over the network is turned into XML (OK, there are other reasons to use XML). However, what XML does over binary data transfer and storage is massively increase the volumes.
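For integers, at least, C has long had a standard escape hatch: convert to network byte order (big-endian) at the sender and back at the receiver. A minimal sketch using the POSIX htonl/ntohl pair (this solves only byte order, not integer width, alignment, or BCD):

#include <arpa/inet.h>
#include <stdint.h>

/* On a big-endian host these are no-ops; on a little-endian host they
   byte-swap. Either way, both ends agree on what crosses the wire. */
uint32_t to_wire(uint32_t host_value)   { return htonl(host_value); }
uint32_t from_wire(uint32_t wire_value) { return ntohl(wire_value); }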
Both Android and iOS run ARM in little endian mode. I suspect ARM CPUs running in big endian are quite rare.
Let the compiler deal with it - in this day and age?
Compilers are historically bad at dealing with endianness. There are still programs being written that just fail, or worse, if you ever try to load a save written on a machine of the other endianness.
@@cheako91155 Ever since the invention of the UNIX toolchain, compilers have been dumb as shit.
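A sketch of how save-file code usually sidesteps the problem, rather than trusting the compiler: pick one byte order for the file format (little-endian here, arbitrarily) and marshal through shifts instead of dumping structs from memory:

#include <stdint.h>
#include <stdio.h>

void write_u32_le(FILE *f, uint32_t v) {
    uint8_t b[4] = { (uint8_t)v, (uint8_t)(v >> 8),
                     (uint8_t)(v >> 16), (uint8_t)(v >> 24) };
    fwrite(b, 1, 4, f);
}

uint32_t read_u32_le(FILE *f) {
    uint8_t b[4] = {0};
    if (fread(b, 1, 4, f) != 4) return 0;  /* error handling elided */
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8)
         | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

/* By contrast, fwrite(&v, 4, 1, f) bakes the writer's endianness into the
   file: the "fails when loaded on a different machine" case above. */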
I always preferred big endian; little endian was one of the things I hated about assembly programming on x86. It hurts your brain 🧠 after a while; 68K is nicer to your brain 🧠.
I agree with the conclusion, but the bigger issue for me was the few and weird registers of x86.
Diez Roggisch Yeah, when I first moved from my lovely Atari ST with its 68K CPU and started on x86, my first shock was the registers; my first thought was WTF. Then came the dreaded little-endian stuff; my next thought was that this x86 is back to front and inside out.
If that wasn't brain-ache enough, there was that appalling segmented memory addressing model of the 8086 and, even if you went outside assembly code, the gazillions of different addressing-mode options for compilers. How Intel managed to rule the world of PC processors is something that can only be explained by IBM adopting the architecture.
Everything is _42_ . Do the math. It is true 1+1=42.
Hmm the meaning of life?
Lol
By humans you don't mean Arabs, hahahaha!?!? They read from right to left.
I am no expert on Arabic, but I have heard that even in Arabic the numbers are written left to right.
Gary Explains I am not sure either, but it is a bit counterintuitive, don't you think? It is like zigzag writing.
Actually, I researched a bit.
Turns out that we write numbers back to front:
www.theguardian.com/notesandqueries/query/0,5753,-23605,00.html
So maybe the Intel way is the most correct way, and our language is the problematic one.
By the way great video, and thanks for the response
@@GaryExplains I’m Arabic and that’s right
I didn't understand a word... this video is depressing asf
Moral of the lesson: Computers are dyslexic ... but work through it! lol
Mark Keller Reminded me of my math teacher, she spoke Arabic in an English class.... still virgin tho
@@1MarkKeller Me too :)
Big endian is NOT the proper format... and citing the Internet as a reason is not accurate either, considering ALL packets (no matter whether you use big or little endian in processing) are sent via the hardware layer in bit-for-bit and byte-for-byte order. This means that your packets are packaged however you want and received on the other end in exactly the same order. Systems that use big endian have shown themselves to be extremely inefficient when it comes to hardware and speed... and that legacy needs to die already. As for "difficult to read": no it's not, not in the slightest... not only is this handled by almost every single compiler and source-reading program, but if you are reading opcodes on a hex grid, then you should already be proficient at reading the proper endian style for that machine (which is majority little).
So stop trying to make the 68000 become a thing again... nostalgia is fun, but it's not practical.
🤣
"All systems that use big endian have shown to be extremely inefficient systems when it comes to hardware and speeds"
Source?
1st
You are over-complicating things... The root of the confusion about little/big endian is this: humans read numbers from right to left, but read byte arrays from left to right.
So, for human readability, you have to mix up the byte order (or the bit order). That's big endian.
About the bit shifting, you're just wrong ^^'... if you order the bits the right way, there is no trade-off to using little endian.
Regarding communication, endianness doesn't matter, because we use shift registers to send and receive. So it makes sense to use the most human-readable order.
Finally, most of the processors built for a while now are little endian. That's not because Intel does funny things; it's because little endian is better from an architectural point of view.
But yes: it's harder to read ^^'
That's why some computer architects call big endian the "wrong endian" x).
Where you are wrong is that it isn't stored in BIT order, it is stored in (reverse) BYTE order. If it were BIT order then it would make more sense. So even in little endian, where the least significant byte is stored first, the most significant bit of each byte is still stored first, not the least significant.
@@GaryExplains
let's take a 16-bit number:
(bit 15) MSB > 0110 0000 0000 1001
@@Arthur-qv8np i.e. 0000 1001 0110 0000. So in big endian it is most significant bit to least, left to right, across the whole number. In little endian, bit 0 ends up next to bit 15, so the least significant bit and the most significant bit are next to each other. That isn't bit order.
What I'm stating above is just to explain that there is no overhead for bit-shifting with little endian.
But hey! There is no overhead with big endian either, as long as you order the bits to make a contiguous sequence. Still, little endian is the most natural way to order bytes.
*Example*
With Little endian:
Let's write two 32-bit numbers whose hex digits follow bit significance:
word 0 : 0x76543210
word 1 : 0xFEDCBA98
memory:
byte 0: 0x10
byte 1: 0x32
byte 2: 0x54
byte 3: 0x76
byte 4: 0x98
byte 5: 0xBA
byte 6: 0xDC
byte 7: 0xFE
Now, let's read a word starting from byte 2: 0xBA987654
OK... that sounds good.
Same experiment with big endian:
word 0 : 0x76543210
word 1 : 0xFEDCBA98
memory:
byte 0: 0x76
byte 1: 0x54
byte 2: 0x32
byte 3: 0x10
byte 4: 0xFE
byte 5: 0xDC
byte 6: 0xBA
byte 7: 0x98
Now, let's read a word starting from byte 2: 0x3210FEDC
Oh?
It looks like big endian does not follow bit-significance order. Therefore, it's legitimate to call it the "reverse byte order".
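That experiment runs as-is in C; a sketch (memcpy handles the unaligned read, and the printed value assumes a little-endian host):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t words[2] = { 0x76543210, 0xFEDCBA98 };
    uint8_t mem[8];
    uint32_t middle;
    memcpy(mem, words, sizeof mem);   /* lay the words out in host byte order */
    memcpy(&middle, mem + 2, 4);      /* overlapping read at byte offset 2 */
    printf("0x%08X\n", middle);       /* 0xBA987654 on a little-endian host */
    return 0;
}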
It's funny that you want to explain the CPU but you're not clear on the fundamentals (endianness is the basis of the basics).
Endianness doesn't affect registers. They are typically given letters instead of numbers, unlike addresses.
Just looked up x86: they use H and L suffixes to break a word into bytes, for high and low.
Mike Mestnik, IBM mainframe computers, long before any minicomputers (let alone microcomputers) existed, used numbered registers, 0 through 15. All 16 registers were general purpose. When large numerals had to be handled, any available even-odd pair could be used, e.g. 0&1, or 6&7, or 4&5, etc. Intel's little-endian system became widely used because Windows became popular and only ran on Intel CPUs _AND_ eventually hundreds of millions of Intel microcomputers outnumbered tens of millions of IBM mainframe computers and Motorola 6800 microcomputers. Gulliver's Travels all over again ;-).
@@jfitzpatrick6108 All registers have a binary representation, so in essence they are all numbered registers. When an assembler uses numbers instead of letters it has the same meaning ([A-F] == [0-15]). My point is that registers can't meaningfully be pointed to; there is no instruction to get or put the "next" register... Though you could get this effect by bit-twiddling the compiled code, this is not what people are talking about when they say a processor is big or little endian.
@@cheako91155 It's more like the registers can be truncated. So you have the 32-bit eax register, and then the ax register is just the low 16 bits of eax.
Registers are always effectively little-endian.
Don't understand anything; better if you talk less and show more examples.
Hmmm, sorry you didn't understand anything. But I think my explanation is quite simple and comprehensive.
@@GaryExplains Maybe I'll watch again when I'm not hungry.