I've recently discovered this channel and you're doing a really great job. There is a lack of low-level stuff like this on YT because everyone wants to create "Yet another React tutorial". Keep it up :)
The idea that 754 is a compression algo is a really profound way of looking at the world of computation. It brings a new level of thinking to implementations everywhere and helped me think more critically about these systems. Thanks!
always knew IEEE 754 could store floating point numbers, but today i learned that it also allows storing -0, NaN, and infinity in a specific format. ty!
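Those special values really do get reserved bit patterns, and you can check in any JS console. A small sketch of my own (not from the video), viewing a double's bits through typed arrays:

const specialBuf = new ArrayBuffer(8);
const specialF64 = new Float64Array(specialBuf);
const specialU64 = new BigUint64Array(specialBuf);
const bitsOf = (x) => { specialF64[0] = x; return specialU64[0].toString(2).padStart(64, "0"); };

console.log(bitsOf(-0));       // 1 followed by 63 zeros - only the sign bit is set
console.log(bitsOf(Infinity)); // sign 0, exponent all ones, mantissa all zeros
console.log(bitsOf(NaN));      // exponent all ones, non-zero mantissa (exact payload is engine-dependent)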
Fantastic, I love it! Keep it up man. This channel has great potential.
Holy shit, this channel is pure gold - I didn't even know about your channel before. Keep up the awesome job.
Thank you so much for this! I learned floating point before but completely forgot how it worked, and it just came up in another class where my professor's explanation didn't make any sense. Now I'm actually understanding it from your video. :)
Glad it helped!
Just love you, ur a new tech monster.
What content are you planning next?
this channel is just on fire 🔥🔥🔥🔥
Wow, amazing job and presentation, this is an incredible amount of info. Hats off to you sir
Great explanation and information density. It's the first time I've thought to slow a video down instead of speeding it up 😂
Unorthodox display of hubris but very well
I like this video. I think I understand the implementation of floating point numbers now. Thank you very much.
Thanks for making the very detailed video. I wish I could follow all of it :(
oh man this was very intuitive
Appreciate the very detailed and hands-on tutorial! One question: doesn't the mantissa also need a (10-bit) bitmask in your encode function? I.e.,
function encode(n) {
  ...
  let mantissa = 1025 * percentage;
  mantissa = mantissa & 0b1111111111;
  ...
}
That way, in case of an overprecise mantissa, we don't clobber the sign and exponent bits in the return value.
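For reference, here is a minimal sketch of what such a masked binary16 encode could look like - my own illustration, not the video's actual code, and it ignores zero, subnormals, NaN, and infinity:

function encodeHalf(n) {
  const sign = n < 0 ? 1 : 0;
  const abs = Math.abs(n);
  const exponent = Math.floor(Math.log2(abs));       // which power-of-two bucket abs falls into
  const percentage = abs / 2 ** exponent - 1;        // position within the bucket, in [0, 1)
  const mantissa = Math.round(1024 * percentage) & 0b1111111111; // 10 bits, masked as suggested
  return (sign << 15) | ((exponent + 15) << 10) | mantissa;      // binary16 exponent bias is 15
}

console.log(encodeHalf(3.14).toString(2).padStart(16, "0")); // 0100001001001000, which decodes to 3.140625

The mask does guarantee the sign and exponent bits can't be clobbered; note though that a fully correct encoder would carry an overflowing mantissa (Math.round can produce 1024 here) into the exponent rather than letting it wrap to 0.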
Thank you for your explanation😭😭
0:44 pause it and shake your screen it makes for a pretty cool optical illusion
Awesome tutorial
Thank you so much for the detailed walkthrough of the specification. One thing that's still bugging me, though, is the fact that neither 0.3 nor 0.30000000000000004 can be represented in binary without recurring digits. 0.3 would actually be stored as 0.299999999999999988897769753748..., and 0.30...04 would be stored as 0.300000000000000044408920985006.... I understand that the discrepancy is caused by the fact that 0.1 and 0.2 also can't be represented accurately.
So, my point is: why does JavaScript show the first digit where the imprecision takes effect, instead of leaving it out entirely, or showing more, maybe even all decimal digits? Is it to prevent possible errors where the check for 0.1 + 0.2 === 0.3 would fail? Was the number of digits chosen so as to uniquely identify any number with the least amount of digits? Thanks in advance :)
I think it is a case of a standard amount of precision letting the little error sneak in - if that standard number of significant digits had been one less, then 0.1 + 0.2 would for all intents and purposes equal 0.30000000000000000, or 0.3. It just so happens that in this case we grab 18 significant digits instead of 17.
Isn’t the key point that JS numbers are encoded with base 2 and that base 2 can’t represent 1/10 and 2/10 precisely (similar to how decimal numbers can’t precisely represent 1/3, even with lots of finite storage)? If JS numbers were encoded with base 10, then 0.1 and 0.2 could be represented precisely.
You're completely right. Though I like to think there are many key points that come with the IEEE 754 system. There is also a ton of nuance in how the operations work, and how rounding is handled. And many interesting things happen when you start dealing with denormalised numbers - which is almost like a secondary system embedded in the specification. The main understanding I wanted people to come away with from this video was how the representation, encoding, and precision parts work, and the non-representability of 0.1 was left as more of an implication. I hope to come back to this topic in the future and dive deeper. In particular I'm interested in exploring the famous "Carmack" fast inverse square root hack.
Also I'm a big fan of your blog. Thanks for watching!
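For what it's worth, both halves of the question can be checked directly in a console. JS is specified to print the shortest decimal string that round-trips to the same double - so yes, the digit count is exactly "the least amount of digits that uniquely identifies the number" - and toPrecision (which accepts up to 100 digits in modern engines) exposes the actual stored values quoted above:

console.log((0.1 + 0.2).toString());      // "0.30000000000000004" - 17 digits are needed to round-trip
console.log((0.3).toString());            // "0.3" - one digit already identifies this double
console.log((0.3).toPrecision(30));       // "0.299999999999999988897769753748" - what's actually stored
console.log((0.1 + 0.2).toPrecision(30)); // "0.300000000000000044408920985006"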
10:46 I died. :)
6:42 does this mean that the bigger the number gets, the more precision is lost?
Yes exactly - and it's a deliberate trade off taken by the designers.
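You can watch that trade off happen with JS's own doubles: the gap between adjacent representable numbers doubles at every power of two, and beyond 2^53 it grows past 1:

console.log(1 + Number.EPSILON); // 1.0000000000000002 - the smallest possible step above 1
console.log(2 ** 53);            // 9007199254740992
console.log(2 ** 53 + 1);        // 9007199254740992 - the +1 is lost; the gap here is 2
console.log(2 ** 53 + 2);        // 9007199254740994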
This is a great explanation and I understand the why of the title, but you never demonstrate it specifically.
awesome job!
Great content and very clear voice. May I ask, what is your mic setup? How close are you to it while talking?
Thanks! At the moment I'm just using a little lav mic attached to my shirt.
Yazeed, I assure you this is definitely not how the "big" microphone sounds. A desk mic would be even clearer and crisper - imagine radio recordings on YouTube without music or anything, just voice. That's how it's going to be with a decently priced mic.
Sometime in the future, if you have time: make a video explaining how Arm processors do division... It's strange, to say the very least.
Self-teaching here, so the following is not a boast.
This is why a CS degree is a good thing.
I didn't get a CS degree
@@LowByteProductions Really? But you are dedicated lol
0:11 I think you missed a zero
EDIT: Jokes aside (there really is a missing zero though) great stuff as always! Was kind of hoping you would actually namedrop posits at some point after you mentioned you got intrigued by them last week, but otoh that's definitely a bit too deep into the hypothetical weeds for now
Haha I was hoping someone would count the zeros!
But I have been looking into posits and unums quite a bit. Once I've wrapped my head around them enough to get a software implementation up and running, I think I'll make a video.
Yeah, there's a weird kind of nerdy fun in figuring out how they work and about what kind of special optimised use-cases you can have for them, no? Also just to become aware that all these systems for representing numbers have design trade-offs and aren't as "finished" as we might think.
PICO-8 for example uses its own 16:16 fixed-point number system because Joseph White (its creator) thought it would make for a more interesting fantasy console, and it creates some interesting limitations.
For sure. There is so much to enjoy there - figuring out the system and contrasting it with IEEE 754, the fact that it's one guy just coming up with stuff like a mad scientist/evil genius, the multiple iterations, the controversy with William Kahan, and actually just reading the Wikipedia discussion page and seeing these bitter debates. It's amazing - so actually thank you for introducing me! I didn't know that about PICO-8 either. That's a project I've been watching from the outside a bit - I really enjoy the crazy procedural animations people are able to crack out of it.
@@LowByteProductions Why is it that 0.1 cannot be represented exactly?
4:39 Range of numbers that can be represented.. How precisely they can be represented
Thanks very much. This was one of the best videos I found on this subject.
Does this mean the following:
In a 16 bit floating point representation, I can represent a maximum of 1024 unique values in each range of numbers: (0-2), (2-4), (4-8), (8-16)?
And if yes, does it imply that we have a better representation in the smaller exponent ranges, as we get more unique values for a significantly smaller range?
Please clarify.
Thank you very much for the informative video.
Yes that's exactly right - in floating point, the closer you are to zero, the better the approximation can be, and the further you travel from zero, the worse the approximation gets.
@@LowByteProductions
Thanks for the quick response too.
Can you please shed some light on this too:
I read that the max positive number that can be represented through IEEE 754 32 bit floating point is 3.403E38.
But as I understand it, there are only 2^32 values that can be uniquely represented using 32 bit binary.
In this case, how do we even reach a number as huge as 3.403E38?
I have difficulty inferring this, can you please help me decode this?
@@reddyharishkannapu1850 if you're still interested: the further you travel along the number line, the more floating point numbers get skipped.
For example: the next possible value after 32.768 might be 32.770 (example), but for 2837.768 the next possible value will already be 2837.794. And the bigger the number gets, the bigger the gap.
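To make the thread concrete: the 8-bit float32 exponent is what reaches ~3.4E38 - there are still only about 2^32 distinct values, they just get spread further and further apart. Math.fround (which rounds a double to the nearest float32) shows both the maximum and the gaps; the specific inputs are just my examples:

console.log((2 - 2 ** -23) * 2 ** 127); // 3.4028234663852886e+38 - the float32 maximum: largest mantissa times 2^127
console.log(Math.fround(32.768));       // 32.768001556396484 - nearest float32, off by about 1.6e-6
console.log(Math.fround(2837.768));     // 2837.76806640625 - the gap out here is already ~0.000244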
Your videos are great, but I cannot watch them on my mobile device as the font size is too small
9:03, the second line. There are 6 E and 9 M. Why not 5 E and 10 M?
You're right - this is a mistake, it should be 5E & 10M. Good catch!
Great video!
this is exactly whats driving me nuts on my current app hahaha
magic thank u
Dumb question. Why not represent numbers as something that takes up more bits if more accuracy is needed, or takes up less if less is required? 0.5 vs 0.39201329 just inherently have different amounts of information in them, right?
There are ways to do this, just not efficiently.
You can only get so close with powers of two.
Can you make videos in which you use Uint16Array?
The VM series is making use of typed arrays - I think so far only UInt8Array has been used (to place raw bytes into an array buffer) but the essence is the same. I'm sure they'll be used there later as well.
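As a taste of what typed arrays enable for this video's topic, two views over one ArrayBuffer let you read a float's raw IEEE 754 bits - a quick sketch of my own, not code from the series:

const viewBuf = new ArrayBuffer(4);
const asFloat = new Float32Array(viewBuf); // writes store the bytes of a float32
const asBits = new Uint32Array(viewBuf);   // reads reinterpret those same bytes as an integer
asFloat[0] = 0.1;
console.log(asBits[0].toString(2).padStart(32, "0"));
// 00111101110011001100110011001101 - sign, exponent, and the endlessly repeating mantissa of 0.1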
I don't think that most people say the imprecision of floating point numbers is a fault of JS. I believe they say that it's a fault of JS to force all numbers into being floats, without giving programmers appropriate tools to tackle the imprecision as the given domain requires.
Which would be wrong anyway, since JS has ArrayBuffers, Uint{8, 16, 32}Arrays, Int{8, 16, 32}Arrays, and BigInts - for when specific or even arbitrary integer precision is required.
@@LowByteProductions I wonder how well-known they are in practice? I don't remember seeing them in the wild but I didn't look at too much JS anyway.
On this channel they are very well known 😁 If you're an everyday web developer making landing sites in React then you might not come across them, but if you do any work with audio, WebGL, pixel pushing on the canvas, or transferring and/or parsing binary data, then you'll be familiar. Most people that have worked with Node will also be familiar with the idea of a Buffer object - which these days is just an abstraction built on the ArrayBuffer/TypedArray standards.
@@LowByteProductions good 🙂, I did mostly simple stuff although I came across ArrayBuffer. So it seems I misunderstood those JS critics.
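On the BigInt point from the reply above: it provides exact arbitrary-precision integers, which is the usual escape hatch when float imprecision is unacceptable - for instance keeping money in cents (my example, not from the video):

const priceCents = 1999n;                  // $19.99, held exactly as an integer
const taxCents = (priceCents * 8n) / 100n; // 8% tax; BigInt division truncates
console.log(priceCents + taxCents);        // 2158n - no floating point drift anywhere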
Why use mul or div with 1024 and not use bit shifts instead? I'm unfamiliar with JS so wonder if >> or
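A likely answer (my guess, not the video author's): JS bitwise operators coerce their operands to 32-bit integers before operating, so a shift would destroy the fractional value that the multiply needs to scale:

console.log(0.57 * 1024); // 583.68 - multiplication keeps the fractional part
console.log(0.57 << 10);  // 0 - 0.57 is truncated to the integer 0 before the shift happens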
Another good reason to use Excel!!
As if we needed one!
Can you explain to me why 4? why not 5, 3, 2, or any other number?
hard...... I will come back to this after I learn JavaScript
Really good video! But how would this work when you don't have floating point calculation available? Because Math.log returns a float (and so do Math.pow / **). I kinda doubt that this would be possible, or at least easily doable, in JS.
IEEE 754 is fully implementable with basic hardware operations (and, or, not, xor, shift), so these operations are definitely possible in JS without falling back on the standard library.
Interestingly, if you simply cast a floating point number to an integer, it acts as a crude, out of scale logarithm. This is the basis for the famous "fast inverse square root".
@@LowByteProductions Thank you! I think I already read about the fast inverse square root somewhere, but I looked it up and it's pretty cool.
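For anyone curious, the bit-reinterpretation trick from the reply ports to JS via typed arrays. A sketch of the classic fast inverse square root for positive inputs - my own port, not code from the video:

const trickBuf = new ArrayBuffer(4);
const trickF32 = new Float32Array(trickBuf);
const trickU32 = new Uint32Array(trickBuf);

function fastInverseSqrt(x) {
  trickF32[0] = x;
  trickU32[0] = 0x5f3759df - (trickU32[0] >> 1); // the bits act as a scaled log2: halve and negate it
  let y = trickF32[0];                           // reinterpret the adjusted bits as a float again
  y = y * (1.5 - 0.5 * x * y * y);               // one Newton-Raphson step sharpens the guess
  return y;
}

console.log(fastInverseSqrt(4)); // ≈ 0.49916, vs the exact 1 / Math.sqrt(4) = 0.5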
i have a doubt: how does the computer calculate the percentage? Shouldn't it need to be able to represent the floating point number itself in the first place?
This video is about building a model of floating point, not necessarily in the same way it happens in hardware. The implementation uses floats internally, but we're not trying to bootstrap a system from the ground up; we're trying to learn how the algorithm works.
Ahhh okay, will try to dig into how it works at the hardware level. Thanks for the clarification. Great content. Love from India ❤
Can we actually extract what went wrong from NaN in JS?
In JS, unfortunately not. It might be possible in node by writing a C++ extension that could actually examine the bit pattern of the NaN and pass the result back to JS.
but why is our exponent 3 from [3,4]?
It’s the first number
First time I got this was in Python
From reddit
beast
What color theme is used?
Dracula
2:24 16 bits = 2**16 numbers
3:18 64 bits.. Double precision?
So, it's an abstraction over 1's and 0's. And the abstraction is leaky. Got it.
Leaky in what way? It's not like you need to know how the 3 parts and their bits fit together in order to use floats.
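One way it does peek through, for what it's worth: every operation rounds its result to the nearest representable double, so ordinary algebraic identities like associativity can quietly fail:

console.log((0.1 + 0.2) + 0.3 === 0.1 + (0.2 + 0.3)); // false: 0.6000000000000001 vs 0.6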
pppfff. It IS a js fault.
js was supposed to be easy for scripts, why didn't they use a human notation? 0.1 + 0.2 = 0.3
=(
that and arrays starting at zero, no need in high level langs. Day 1, Month 1... February 1?
=(
Interesting video. And channel. Intense.
=)
it would be quite a pain to implement decimal numbers manually in js, so they used ieee 754
Meh. There is a video here on YT made by a schoolgirl from India which explains it perfectly on a piece of paper.
maybe because Big Brother said that 1 + 2 is not 3..
TL;DR
javascript is easy to hate
this is bs
That's actually really convincing, I've never thought about it that way before
@Low Byte Productions
:D
why so long... It could be 5 - 7 minutes at most.
because the watch time is what matters to YouTube, mate
do it so
yea u only need 5 minutes to learn about floats
Sounds great 👍 let me know when you've made the video that explains AND implements floating point numbers, which somehow fits in 5 minutes and makes sense to people. I'm sure it will be fantastic.
@@LowByteProductions please brother, i have watched your video 3 times and i have been learning about floating points for the last 2 or 3 months.
I was just trying to make a joke about his comment.
great content by the way, i would love to watch a 5 hour video from u about floating points 🙂
2.3 * 100 === 229.99999999999997
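That one follows directly from the video: 2.3 itself is stored slightly low and the multiply magnifies the error. Rounding after scaling is the usual workaround - a quick check:

console.log((2.3).toPrecision(20)); // 2.2999999999999998224 - the value actually stored
console.log(2.3 * 100);             // 229.99999999999997
console.log(Math.round(2.3 * 100)); // 230 - scale, round, then work in integer cents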