thanks Heisenberg!
Aw man! You are awesome! I study computer science and I had to write a program that decodes characters to their Unicode representation... The script from my lecturer was so complicated and unclear... I spent a lot of hours trying to write this program, and then I suddenly found your channel and understood most of the material in a little more than 20 minutes (including pauses). You are an awesome teacher!
I read many articles, but the concepts behind Unicode and UTF-8 only became clear after watching this video. Thank you very much, I appreciate it.
I am leaving this comment since this helped me during my first year of college majoring in CS. Thank you.
This video finally helped me grasp this topic. You explained this far better than my computer science professor did :) thank you!
I'm so lucky I found this channel, thank you
7-bit ASCII let them use the eighth bit for parity checking. They could set that eighth bit to 1 or 0 to make the bit count always come out even (or odd, depending on the local standard). If a byte came through that did NOT have an even bit count, it was clearly in error.
It was a very useful feature back in the day.
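A minimal sketch (in Python, purely for illustration) of how even parity on the eighth bit works:

```python
def add_even_parity(byte7: int) -> int:
    """Set the eighth bit so the total count of 1-bits comes out even."""
    assert 0 <= byte7 <= 0x7F, "expects a 7-bit ASCII value"
    parity = bin(byte7).count("1") % 2   # 1 if the seven data bits have an odd count
    return byte7 | (parity << 7)

def passes_even_parity(byte8: int) -> bool:
    """A received byte is accepted only if its 1-bit count is even."""
    return bin(byte8).count("1") % 2 == 0

b = add_even_parity(ord("A"))            # 'A' = 0x41 already has an even bit count
assert passes_even_parity(b)
assert not passes_even_parity(b ^ 0x04)  # one flipped bit trips the check
```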
Thank You! This was extremely helpful. I struggled to find resources that explained it as proficiently and eloquently as you.
Thank you very much for this video! It was very interesting and educational for me! :)
thank you so much
Excellent presentation!
dope vid bro
Wait, is he writing stuff flipped horizontally?!!?
Btw, great video. Thanks!
Awesome
Breaking Bad vibes 😉😉😉
For the 4-byte UTF-8 encoding, the total available payload is 21 bits, which goes up to 0x1FFFFF. Why do the Unicode code points end at 0x10FFFF? What happens to the rest of that range?
No one has assigned Unicode characters past 0x10FFFF; the cap exists because UTF-16's surrogate-pair scheme can only reach code points up to U+10FFFF. This link shows the list of the Unicode code point ranges (in hexadecimal) for the different scripts:
www.unicode.org/Public/UNIDATA/Blocks.txt
At the bottom you'll see that the last range ends at 0x10FFFF.
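You can check the gap directly; a quick Python sketch (chr() enforces the Unicode cap):

```python
# A 4-byte sequence (11110xxx 10xxxxxx 10xxxxxx 10xxxxxx) has 3+6+6+6 = 21 payload bits.
print(hex(2**21 - 1))   # 0x1fffff -- the most the bit layout could hold

# Unicode itself stops at U+10FFFF, and the tooling enforces that cap:
chr(0x10FFFF)           # fine: the last valid code point
try:
    chr(0x110000)
except ValueError as e:
    print(e)            # chr() arg not in range(0x110000)
```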
I am confused. 'A' represents 41??? 9:18
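For anyone else stuck here: the 41 is hexadecimal, not decimal. A one-line check in Python:

```python
print(ord("A"), hex(ord("A")))   # 65 0x41 -- 'A' is code point U+0041
```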
I am confused regarding 11:55. Shouldn't it be 128 to 2175 (2047 + 128)?
No. 2-byte UTF-8 only has 11 bits to store the code point. 2^11 = 2048.
That's the maximum number of code points we can represent.
Of the 2048 code points we can represent, 128 of those are used for the ASCII characters.
So 0-127 is for ASCII characters and 128-2047 is for other characters.
On top of that, 2047 + 128 doesn't make sense, because we would need more than 11 bits for that.
@@mikeyamaro9035 I don't think you understood what he was asking. If we have a byte that starts with 0, then we are using the 1-byte encoding; that means we simply look at the next 7 bits, which gives us values from 0-127. That's already taken care of, right? So if we meet a byte whose first bits are 110, that means we are using the 2-byte encoding, and so we have 11 bits and can represent 2048 values. Since we already know the code point can't be from 0-127, the first value should be 128, and we would have 2048 further values to represent. @Habu Ayush was asking whether the representable values should be from 128 to (127 + 2048). We start at 128 and we have 2048 values we can represent. I'm also wondering the same thing.
I feel like he should have written 0 -> 2047, but normally, if the number can be represented between 0 and 127, the 1-byte encoding is used to save sending two bytes.
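To make the bit accounting in this thread concrete, here is a minimal Python sketch of the 2-byte layout (110xxxxx 10xxxxxx, 11 payload bits); the helper name is just for illustration:

```python
def encode_2byte(cp: int) -> bytes:
    """Hand-encode a code point in U+0080..U+07FF as two UTF-8 bytes."""
    assert 0x80 <= cp <= 0x7FF, "the 2-byte form covers 128..2047 only"
    lead = 0b11000000 | (cp >> 6)            # 110xxxxx: top 5 of the 11 bits
    cont = 0b10000000 | (cp & 0b00111111)    # 10xxxxxx: low 6 bits
    return bytes([lead, cont])

# U+00E9 ('é') -> c3 a9, matching Python's built-in encoder:
print(encode_2byte(0xE9).hex())    # 'c3a9'
print("é".encode("utf-8").hex())   # 'c3a9'
```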
I thought about this too when I watched the video. In theory, 2-byte UTF-8 should be able to store another 2^11 values. But in practice it doesn't, so 128 combinations are wasted there. I guess this is because when a UTF-8 decoder decodes bytes, it just converts bytes to code points; it doesn't remember how many bytes the code point was read from. This is clearer when converting code points to UTF-8. Take U+0041 ('A') for example: it has 7 bits up to its most significant 1, so 1-byte UTF-8 (which has 7 payload bits) is enough. We could also use 2 bytes, 3 bytes, etc. for the same code point U+0041. But if we did that, a code point would have many UTF-8 representations, and the standard decided this is not good (mainly for security: overlong encodings were used to sneak characters past validity checks, so the shortest form is mandatory), so we only have a 1-to-1 mapping from code points to UTF-8. For example, 2-byte 'A' in hex is C1 81, which is invalid UTF-8. onlineutf8tools.com/convert-hexadecimal-to-utf8 will give an error.
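The rejection is easy to reproduce; a short Python sketch using the C1 81 bytes from the comment above:

```python
# 'A' (U+0041) must use the shortest form: a single byte, 0x41.
print("A".encode("utf-8").hex())          # '41'

# The overlong two-byte form C1 81 is refused by a conforming decoder
# (0xC0 and 0xC1 can never start a valid sequence):
try:
    bytes([0xC1, 0x81]).decode("utf-8")
except UnicodeDecodeError as e:
    print(e)   # ... can't decode byte 0xc1 ... invalid start byte
```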
This is a great presentation! Thank you. Let me know if you need help selling that blue meth.