How Floating Point Numbers Work (in 7 minutes!)

  • Published 26 Sep 2024

Comments • 36

  • @Peregringlk
    @Peregringlk 8 months ago +10

    Your explanations are amazing. You definitely know how to talk to the audience.

    • @CodeSlate
      @CodeSlate  8 months ago +2

      thanks, I try!

  • @IcyyDicy
    @IcyyDicy 8 months ago +6

    This explanation was infinitely better than whatever my comp sci professor taught in class. Thanks!

    • @CodeSlate
      @CodeSlate  8 months ago +4

      Sorry to hear that! I'm sure your professor was trying their best, but it can be hard to communicate about these things in a clear way.

  • @ZipplyZane
    @ZipplyZane 8 months ago +5

    Did you mention the purpose of the bias and I missed it? It allows the exponent to also include negative numbers, which allows for numbers smaller than 1.
    I am curious why they didn't just use a signed integer for the exponent, though.

    • @CodeSlate
      @CodeSlate  8 months ago +2

      I'm not exactly clear on the specifics, but I believe the biased representation allows for easier/faster comparison operations in the hardware.
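
A minimal Python sketch of that idea (Python is just a convenient choice here, and `float_bits` is a made-up helper, not anything from the video): for positive floats, the biased exponent sits in the high bits as an ordinary unsigned field, so the raw 32-bit patterns sort in the same order as the values themselves and hardware can reuse integer-style comparison.

```python
import struct

def float_bits(x: float) -> int:
    # Reinterpret a 32-bit float's bytes as an unsigned integer (illustrative helper).
    return struct.unpack('>I', struct.pack('>f', x))[0]

# For positive floats, a bigger value gives a bigger bit pattern, because the
# biased exponent occupies the high bits as an ordinary unsigned field.
a, b = 1.5, 2.75
assert (a < b) == (float_bits(a) < float_bits(b))
print(f"{a}: {float_bits(a):032b}")
print(f"{b}: {float_bits(b):032b}")
```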

  • @CodeSlate
    @CodeSlate  9 months ago +3

    Thanks for watching! If you have any questions, just post them here and I'll do my best to help you.

  • @Владислав-е6щ9ъ
    @Владислав-е6щ9ъ 8 months ago +1

    What an unexpected pleasant find! Subscribed! 😃

    • @CodeSlate
      @CodeSlate  8 months ago +1

      Thanks, glad you liked it!

  • @PhysicsBoye
    @PhysicsBoye 9 months ago +2

    This was a really good video!!

    • @CodeSlate
      @CodeSlate  9 months ago +2

      Thank you, glad you liked it Physics Boye! You might also like my other video about floats, th-cam.com/video/iE1grioxWS4/w-d-xo.html
      It is only 3.5 minutes long but covers a problem most coders run into at some point, so check it out if you haven't yet!

  • @Aurowa
    @Aurowa 8 months ago

    goated channel

    • @CodeSlate
      @CodeSlate  8 months ago

      Thanks, I hope to get more content out quickly!

  • @JohnBerry-q1h
    @JohnBerry-q1h 8 months ago +1

    This is only the beginning of the headaches. The fun (??) really begins when attempting number comparison test statements and your code seemingly malfunctions because a simple '<', '>', or '==' test won't evaluate properly (when comparing floating point numbers).

    • @CodeSlate
      @CodeSlate  8 months ago +1

      Totally agree! That's why I put the fire on this guy's head in the thumbnail for my other video about floats
      th-cam.com/video/iE1grioxWS4/w-d-xo.html

  • @samueldeandrade8535
    @samueldeandrade8535 8 months ago +1

    Hehe. I fell in love with his voice and accent.

  • @SamsonicX13
    @SamsonicX13 8 months ago

    Great video! The animation is wonderful! But how would 0.0 look in binary?

    • @CodeSlate
      @CodeSlate  8 months ago +1

      Great question! 0 is a special case. It's all 0s, all the way down. Actually, if you set the sign bit to 1, that's "negative zero", a weird special value that I would make a video about, but I think others have already done this.
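
A quick Python sketch of the two zeros described above (`bits32` is a made-up helper, not from the video):

```python
import struct

def bits32(x: float) -> str:
    # Show a float's 32-bit IEEE 754 pattern as a binary string (illustrative helper).
    return format(struct.unpack('>I', struct.pack('>f', x))[0], '032b')

print(bits32(0.0))   # 00000000000000000000000000000000
print(bits32(-0.0))  # 10000000000000000000000000000000  (only the sign bit set)
print(0.0 == -0.0)   # True: the two zeros compare equal even though the bits differ
```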

  • @dinoeebastian
    @dinoeebastian 8 months ago

    When I tried to explain negating binary integers to my dad and mentioned signed binary, he thought I was talking about SIN, and all his math was failing because of that.

  • @callyral
    @callyral 8 months ago +1

    where is the bias stored

    • @CodeSlate
      @CodeSlate  8 months ago +2

      From what I can tell, the bias is always 127 for 32-bit floats, but where to store the bias is not specified in the IEEE 754 standard, so it would depend on the implementation. Without getting too far into the weeds (I am not a hardware guy), I would guess that for standard 32-bit floats on a modern architecture it's wired into the floating point units, i.e., it's an unchangeable part of the hardware and is not really 'stored' anywhere that we can access or modify (there's a small sketch after this thread).
      Thanks for the question and have a wonderful 2024!

    • @callyral
      @callyral 8 months ago +1

      @@CodeSlate thanks!

    • @microwave856
      @microwave856 8 months ago +2

      in the balls
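
To illustrate the point above that the bias is a fixed part of the 32-bit format rather than something stored alongside each number, here is a rough Python sketch (`decode_exponent` is a made-up helper, not from the video):

```python
import struct

BIAS = 127  # fixed by the binary32 format itself, not stored in each number

def decode_exponent(x: float) -> int:
    # Pull out the 8 exponent bits and subtract the constant bias (illustrative helper).
    raw = struct.unpack('>I', struct.pack('>f', x))[0]
    stored = (raw >> 23) & 0xFF
    return stored - BIAS

print(decode_exponent(1.0))   # 0   (stored field is 127)
print(decode_exponent(8.0))   # 3   (8.0 = 1.0 * 2**3, stored field is 130)
print(decode_exponent(0.25))  # -2  (0.25 = 1.0 * 2**-2, stored field is 125)
```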

  • @ianweckhorst3200
    @ianweckhorst3200 8 months ago

    I’m thinking about this in binary, 127 is 7 ones in a row, but why did we do that?

    • @CodeSlate
      @CodeSlate  8 months ago

      The bias of 127 gets subtracted off the stored exponent field, so a stored value of 127 gives an exponent of 0, i.e. a multiplier of 2 raised to the zero power, which is one.
      Or do you mean why 1111111 = 127 in binary?

    • @ZipplyZane
      @ZipplyZane 8 months ago

      To allow for negative exponents. You need negative exponents of 2 to represent numbers smaller than 1.
      It really helps to understand if you are familiar with scientific notation from school science classes. Then you remember how really small numbers always have a negative exponent.
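
A small worked example of that point (not from the video): 0.15625 is binary 1.01 times 2 to the -3, so the stored exponent field is -3 + 127 = 124.

```python
import struct

# 0.15625 = 1.25 * 2**-3, so the stored exponent field should be -3 + 127 = 124.
raw = struct.unpack('>I', struct.pack('>f', 0.15625))[0]
sign     = raw >> 31
exponent = (raw >> 23) & 0xFF
mantissa = raw & 0x7FFFFF

print(sign)                # 0
print(exponent)            # 124, i.e. an actual exponent of 124 - 127 = -3
print(f"{mantissa:023b}")  # 01000000000000000000000, the fraction bits of binary 1.01
```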

  • @Baltr
    @Baltr 9 months ago +11

    ingenious but i hate it

    • @CodeSlate
      @CodeSlate  9 months ago +2

      Sorry you feel that way! Actually, most of the stuff in this video you won't need on a day-to-day basis; it is more of a technical exploration for people who are curious. Thanks for commenting. I am a new channel and really appreciate it.

    • @tam_69420
      @tam_69420 8 months ago +3

      @CodeSlate They probably hate it more because of how unintuitive it is, and not how you explained it. Your explanation is great!

    • @CodeSlate
      @CodeSlate  8 months ago +2

      @@tam_69420
      Thanks!

  • @57thorns
    @57thorns 8 months ago

    We call them floats for short? Make up your mind, are we talking about shorts or floats?
    This comment was brought to you by the attention squad, we write nonsensical comments on good videos that deserve more attention.

    • @CodeSlate
      @CodeSlate  8 months ago +1

      Thanks! We're talking about 32-bit/single precision floating point numbers

  • @Lukasek_Grubasek
    @Lukasek_Grubasek 8 months ago +1

    The optimisations are very interesting, but I don't really understand why we don't simply represent the green part as a normal binary number (so that 10000001 actually = 129) and the blue part the same way. I get that it gives us more storage and we can represent more numbers, but I think I would gain a lot from a comparison between my simple monkey-brain solution and this. Exactly how many more numbers can we represent with this system? And also I just didn't get why the bias is exactly 127. Why not 63? Why not some other number? How did these clever engineers go from my intuitive solution (the one I mentioned above) to this? It just seems so incredibly random, but I know you don't just wake up and magically come up with such complex optimisations, so it would also be interesting to see how an average person like me could go from the simple, unoptimal solution to this brilliance.
    It kinda feels like your channel tries to appeal to an audience unfamiliar with these concepts you're presenting, as you don't go too in depth (and that's cool), but you have to understand, man, I'm a noob at this. If you really want me to understand it instead of going "wow, that must be so clever that I'm never going to get how it works", you need to take a step back and think carefully about each step in your explanations and ask yourself the question "does the viewer I'm trying to appeal to understand every step, or did I make too big of a jump there?", because when you mentioned the bias, it really felt like a tremendous jump I wasn't quite able to grasp, and I like to think I'm a pretty smart guy 😅
    You said you appreciated feedback, so here is mine. Don't let your imperfections discourage you though, as I think you have the right type of energy for this kind of stuff.

    • @CodeSlate
      @CodeSlate  8 months ago +1

      Thanks for the thorough feedback. You're right that I am trying to communicate about CS concepts to a broader audience, as I feel like there's enough high-end technical material out there but a lack of stuff that speaks to, say, someone who has only been programming for a month or two, or who learned this in school but it's been a while and they don't necessarily remember.
      To answer your question about why they chose 23 bits for the mantissa and 8 bits for the exponent: essentially there's a tradeoff between precision (more mantissa bits = more precise) and dynamic range (more exponent bits = you can express a wider range of numbers). The engineers favored more precision and less range in their choice here, probably since they wanted relatively accurate results in the type of math most people were using computers for at that time (1985??). Some modern formats designed for use in AI (bfloat16 for example, or FP8 E5M2) basically give up a lot of precision to represent a wider range in fewer bits, as that works better for representing the internals of big AI models.
      The 127 bias lets you get exponents (with an 8-bit field) between roughly -126 and +127, but not exactly that range, as some encodings are reserved for special values like infinity and zero. But the range gives you a good expressive balance between huge numbers (like 10 raised to the 38) and tiny ones. And representing the exponent with a bias essentially allows the hardware to run comparisons (and hence, sorting algos) on simpler circuits, i.e., cheaper and faster.
      Thanks man, I really appreciate you giving some of your time to write out a detailed reply.
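
As a rough illustration of the reply above, here is a Python sketch (`decode_binary32` is a made-up helper, not from the video) that splits a 32-bit float into its sign, biased exponent, and fraction fields and rebuilds the value for normal numbers:

```python
import struct

def decode_binary32(x: float):
    # Split a 32-bit float into sign, exponent, and fraction fields, then rebuild
    # the value to check the round trip (normal numbers only; no subnormals/inf/NaN).
    raw = struct.unpack('>I', struct.pack('>f', x))[0]
    sign     = raw >> 31
    exponent = (raw >> 23) & 0xFF        # stored with the +127 bias
    fraction = raw & 0x7FFFFF            # 23 bits; the leading 1 is implicit
    value = (-1) ** sign * (1 + fraction / 2**23) * 2.0 ** (exponent - 127)
    return sign, exponent - 127, value

print(decode_binary32(6.5))      # (0, 2, 6.5)      6.5 = 1.625 * 2**2
print(decode_binary32(-0.375))   # (1, -2, -0.375)  0.375 = 1.5 * 2**-2
print(decode_binary32(3.4e38))   # exponent 127, close to the top of the ~10**38 range
```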