There are definitely places in computer graphics where floating point errors become noticeable (texture bleeding, or other weird lines/banding). I also ended up having to rip out float as the underlying datatype for a budgeting application I worked on, since the errors were biting me in all kinds of unexpected places. Python has a Decimal type, which worked well as a replacement for my use case.
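For currency code like the budgeting app mentioned above, one common alternative when a decimal type isn't available is to store amounts as integer cents so addition stays exact. A minimal C sketch of that idea (the cents representation is chosen here purely for illustration):

#include <stdio.h>
#include <stdint.h>

/* Store money as integer cents so sums stay exact. */
int main(void)
{
    int64_t a = 10;         /* $0.10 */
    int64_t b = 20;         /* $0.20 */
    int64_t sum = a + b;    /* exactly 30 cents, no rounding error */

    printf("%lld.%02lld\n", (long long)(sum / 100), (long long)(sum % 100));
    return 0;
}

This prints 0.30 exactly, because integer addition never rounds; the trade-off is that you have to decide on a fixed smallest unit up front.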
It's a wonder we didn't shoot right past the moon.
never went
@zane812 - When they say we lost the 60s tech that we used to get to the moon, and that's why we haven't gone back, I'd tend to agree with you.
@zane812 So during the Cold War, Russia joined in on the US lie (by not exposing the moon landing as faked)? That's an even bigger conspiracy theory.
To the moon! 🚀 🌝 Playboi carti 2024 album upcoming
@darrenphillips1845 We haven't lost the tech; we've lost the motivation and funding. Back then, the space program was 5% of the federal budget; now it's 0.5%. People were complaining that the government 'spent too much money on the moon'.
This is why floating point comparison is usually done with some threshold. I.e. to check whether variable A is equal to 0.3, you would compare |A - 0.3| against a small epsilon instead of testing for exact equality.
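A minimal C sketch of that threshold check (the helper name and the 1e-9 tolerance are just placeholders; a sensible tolerance depends on the magnitudes involved):

#include <math.h>
#include <stdio.h>

/* Treat two doubles as equal if they differ by less than a small tolerance. */
static int nearly_equal(double x, double y, double eps)
{
    return fabs(x - y) < eps;
}

int main(void)
{
    double a = 0.1 + 0.2;
    printf("a == 0.3          : %d\n", a == 0.3);                    /* 0 */
    printf("nearly_equal(0.3) : %d\n", nearly_equal(a, 0.3, 1e-9));  /* 1 */
    return 0;
}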
Nice work. Coming from a networking background watching you explain how to represent 12 made me want to bang my head on my desk though. 😂 You're not wrong, but the way you explained it is like rocket science from how I've been doing it all my life lol. This may be the first time I can explain something to YOU in an easier way vs the other way around lol. ❤
One of the consequences of this is that you should avoid doing equality comparisons on floating point numbers. Code like "if (a + b + c == 1.0)" is subject to unexpected failure.
Exactly! It is OK to use < or > but == should not occur. Perhaps check if a number is within a range as an alternative solution.
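As a concrete illustration of the kind of unexpected failure meant above, here is a small C demo where adding 0.1 ten times does not compare equal to 1.0, even though the math "should" give exactly 1:

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;                      /* each addition accumulates a tiny rounding error */

    printf("sum        : %.17f\n", sum); /* slightly less than 1.0 */
    printf("sum == 1.0 : %d\n", sum == 1.0);  /* 0 */
    return 0;
}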
Great seeing a visual breakdown of the memory and the walkthrough for the problem!
I truly enjoy your videos. When will you do a video about yourself, an intro, how you got started developing? What's your favorite language? A Q&A stream would be cool...🎉
5:00 I did not get that one. You just explained that the numbers are the same but have different exponents. That means even in binary 0.2 is exactly 2*0.1. So either they are both slightly larger than the decimal number or both slightly smaller.
0:51 If you use groovysh, 0.1 + 0.2 does give you 0.3. This is because Groovy uses Java's BigDecimal class for decimal literals by default.
Which doesn't rely on floating point precision, and uses more memory than a single integer and more CPU time than a simple addition of two registers. You can calculate beyond the precision of a 64-bit float (how else would computers come up with the 1000th decimal place of Pi?); it just costs more memory and FLOPs.
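As a rough illustration of that memory/CPU-for-precision trade-off, here is a sketch using the GNU MP (GMP) library's arbitrary-precision floats; the library choice and the 256-bit precision are assumptions for illustration only (link with -lgmp). The sum is still not exactly 0.3, since the values are still binary, but the error shrinks as the precision grows:

#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpf_set_default_prec(256);          /* 256 mantissa bits instead of a double's 53 */

    mpf_t a, b, sum;
    mpf_init_set_str(a, "0.1", 10);
    mpf_init_set_str(b, "0.2", 10);
    mpf_init(sum);

    mpf_add(sum, a, b);
    gmp_printf("%.40Ff\n", sum);        /* many more correct digits than a double gives */

    mpf_clear(a);
    mpf_clear(b);
    mpf_clear(sum);
    return 0;
}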
Amazing quality video! It’s clear that you have skills
A different technique for converting decimal to binary is starting from the left, seeing if each place value fits in your number, and subtracting it from your number if it does (a C sketch of this is further below):
9
1000
^^^^
8421
9 - 8 = 1
1001
^^^^
8421
1 - 1 = 0
If this makes sense at all. It only requires you to know what the leftmost bit represents, which is 8 in this case.
Yes, this is the way I taught it in my binary video linked in the description.
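Here is a possible C sketch of that left-to-right, subtract-the-biggest-place-value approach, hard-coded to 4 bits to match the 8-4-2-1 example above:

#include <stdio.h>

/* Print n in binary by testing place values from the left (8, 4, 2, 1)
 * and subtracting each one that fits. */
static void print_binary4(unsigned n)
{
    for (int place = 8; place >= 1; place /= 2) {
        if (n >= (unsigned)place) {
            putchar('1');
            n -= place;
        } else {
            putchar('0');
        }
    }
    putchar('\n');
}

int main(void)
{
    print_binary4(9);   /* prints 1001, as in the worked example */
    return 0;
}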
I've seen math programs such as Mathcad keep numbers as fractions of whole numbers instead of floating point, I believe to fix this very problem.
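Keeping values as fractions of whole numbers can be sketched in C with a tiny rational type; this is only an illustration of the idea, not a claim about how Mathcad actually works internally. Adding 1/10 and 2/10 this way gives exactly 3/10:

#include <stdio.h>

typedef struct { long num, den; } rational;

static long gcd(long a, long b)
{
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a;
}

/* Add two fractions exactly, then reduce the result to lowest terms. */
static rational rat_add(rational x, rational y)
{
    rational r = { x.num * y.den + y.num * x.den, x.den * y.den };
    long g = gcd(r.num, r.den);
    r.num /= g;
    r.den /= g;
    return r;
}

int main(void)
{
    rational a = { 1, 10 };   /* 0.1 */
    rational b = { 2, 10 };   /* 0.2 */
    rational s = rat_add(a, b);
    printf("%ld/%ld\n", s.num, s.den);   /* prints 3/10 */
    return 0;
}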
we love some engineer man 8)
Something completely off topic: you had a video a year ago about the end of the Atom text editor. It seems it was revived as Pulsar IDE. Would you be interested in looking at it and reporting on it?
Love your videos. Keep it up!
I just tried it in C to see the result (which is obviously the same), but the error is definitely greater using a plain float rather than a double:
float precision :
float : 0.30000001192092896000
double : 0.30000000000000004000
using that code:

#include <stdio.h>

int main(int argc, char **argv)
{
    float f_res = 0.1 + 0.2;    /* double sum rounded again to float precision */
    double d_res = 0.1 + 0.2;

    printf("float precision :\n");
    printf("float  : %1.20f\n", f_res);
    printf("double : %1.20f\n", d_res);
    return (0);
}
Maybe add in a bit about ULP (unit in the last place)? That could help with understanding why you get .30000004 vs .30000012 or whatnot.
Very nice video BTW.
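The ULP mentioned above is the gap between a double and the next representable double. A quick way to see it in C is nextafter from math.h (link with -lm); the values in the comments are what a typical IEEE 754 double gives:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 0.3;
    double ulp = nextafter(x, 1.0) - x;   /* distance to the next double above 0.3 */

    printf("ulp near 0.3 : %.17g\n", ulp);        /* about 5.55e-17 */
    printf("0.1 + 0.2    : %.17g\n", 0.1 + 0.2);  /* one ulp above 0.3 */
    return 0;
}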
Well explained. I've never been the first commenter before.
First time for everything!
Really good video. If you want a good treatment of floating point, I recommend one of my dad's favorites, "Numbers in Theory and Practice" by Blaise W. Liffick. And if you want a really in-depth treatment of the subject, the authoritative source is Knuth's "The Art of Computer Programming, Volume 2: Seminumerical Algorithms".
Thx man
👍
PowerShell 7 returns 0.3
Bro, please tell me why your right eyebrow is always higher.
None of your business man.
wtf are u talking about
It's because he's on sus looking at your comment 😂 - don't be a d!