Floating Point Numbers (Part1: Fp vs Fixed) - Computerphile

  • Published on 21 Nov 2024

Comments • 261

  • @K1RTB
    @K1RTB 5 years ago +161

    Computerphile: my main source of „nought“

    • @hexerei02021
      @hexerei02021 4 years ago

      nought button -> 10:25

  • @Roxor128
    @Roxor128 5 years ago +40

    There are other approaches, too.
    I remember reading an article about an investigation one guy did into "floating-bar numbers", which were a restricted form of rational numbers (fitting into 32 or 64 bits) where the number of bits for the numerator and denominator could vary, though would be limited to a total of 26 bits in the 32-bit implementation (the other 6 bits being used for the sign and bar position).
    Another approach being a logarithmic system, where numbers are stored as their logarithms. It has the advantage of multiplication, division, powers and roots being fast, but with the penalty of addition and subtraction being slow. The Yamaha OPL2 FM synthesis chip uses one internally, operating on log-transformed sine-wave samples, then uses a lookup table to convert to a linear form for output.
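
    A minimal Python sketch of that trade-off, assuming base-2 logs (a real LNS stores fixed-point log values and tables the addition correction):

    import math

    a, b = 3.0, 7.0
    la, lb = math.log2(a), math.log2(b)      # numbers stored as their logs

    print(2 ** (la + lb))                    # multiply = one addition of logs: 21.0
    print(2 ** (la - lb))                    # divide = one subtraction: 0.42857...

    # Addition is the slow part: log2(a+b) = la + log2(1 + 2**(lb - la)).
    # That correction term is what chips like the OPL2 handle with lookup tables.
    print(2 ** (la + math.log2(1 + 2 ** (lb - la))))   # 10.0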

    • @JosGeerink
      @JosGeerink several months ago

      How do the pros and cons of LNS compare to traditional FP? Specifically, as it relates to addition and subtraction.

  • @Rchals
    @Rchals 5 years ago +286

    0.1 + 0.2 == 0.3
    >>> False
    really was a great moment in my life

    • @bjornseine2342
      @bjornseine2342 5 years ago +16

      Had a similar moment with an assignment last year.... Had a calculation that was supposed to output an upper triangular matrix (no idea whether it's called that in English; basically everything below the diagonal was supposed to be zero). Well, it wasn't.... Took me 1/2h+ to figure out that I was using floats and the entries were very close to zero, just not precisely. I felt quite stupid afterwards :D

    • @jhonnatanwalyston6645
      @jhonnatanwalyston6645 5 years ago +19

      python3 >>> 0.30000000000000004

    • @jhonnatanwalyston6645
      @jhonnatanwalyston6645 5 years ago +20

      round(0.1+0.2, 1) == 0.3 # quick-fix LOL

    • @platin2148
      @platin2148 5 years ago +3

      Ricard Miras Sadly, it could already have been messed up when converting from ASCII to float, inside the lexer.

    • @EwanMarshall
      @EwanMarshall 5 years ago +11

      >>> from decimal import *
      >>> getcontext().prec = 28
      >>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
      True
      >>> Decimal('0.1') + Decimal('0.2') == Decimal('0.2')
      False

  • @FyberOptic
    @FyberOptic 5 years ago +129

    There's that horrible moment in any programmer's life when they realize that floating point calculations don't work on computers the way they work in real life, and all of your code suddenly has to be based around this fact.

    • @rwantare1
      @rwantare1 5 years ago +29

      My method:
      1. Try long instead of float.
      2. Accept a range for the correct answer and round it.
      3. Give up and look up the stackoverflow question explaining how to do it

    • @nakitumizajashi4047
      @nakitumizajashi4047 5 years ago +11

      That's exactly why I use integers to do financial calculations (all amounts are expressed in cents).
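
      For instance (a sketch with made-up amounts):

      price_cents = 1999                          # $19.99 held exactly as an integer
      total_cents = price_cents * 3               # exact: 5997, no 0.1 + 0.2 surprises
      print(f"${total_cents // 100}.{total_cents % 100:02d}")   # $59.97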

    • @rwantare1
      @rwantare1 5 years ago +16

      @@nakitumizajashi4047 clearly you never have to divide.

    • @rcookie5128
      @rcookie5128 5 years ago

      Hahaha yeah

    • @theshermantanker7043
      @theshermantanker7043 4 years ago

      Analog Computers would help a lot

  • @echodeal7725
    @echodeal7725 5 years ago +133

    Floating Point numbers are two small Integers in a trenchcoat pretending to be a Real Number.

    • @g3i0r
      @g3i0r 5 years ago +11

      More like they're pretending to be a Rational Number.

    • @9999rav
      @9999rav 5 years ago +3

      @@g3i0r they are rational numbers... but they are pretending to be real instead

    • @Kezbardo
      @Kezbardo 5 years ago

      What's your name?
      Vincent! Uhh... Vincent... Realnumber!

    • @g3i0r
      @g3i0r 4 years ago

      @Username they can't represent all rational numbers either, hence my comment.

  • @PaulPaulPaulson
    @PaulPaulPaulson 5 years ago +11

    When a third-party DLL was running another third-party DLL with code that executed in parallel and changed the FPU precision settings at seemingly random points in time, that was the hardest problem I ever had to debug. Looked like magic until I figured out the cause. Before that, I didn't even expect those settings to be shared among DLLs.

    • @johnfrancisdoe1563
      @johnfrancisdoe1563 5 years ago +2

      Paul Paulson Of course CPU registers are shared with DLLs. But I'm surprised no one told the DLL authors that the floating point settings need to be preserved across the library call boundary, just like (for example) the stack pointer.

  • @davesextraneousinformation9807
    @davesextraneousinformation9807 5 years ago +13

    Back in the days of TTL logic, I got to implement a design that used logarithmic numbers for a multi-tap Finite Impulse Response (FIR) filter. The number system we came up with to represent the logs was very much like a floating point number with an exponent and a mantissa. We had a radar signal to simulate, so there was a large dynamic range to handle. I think the input was a 12 bit unsigned number and we had something like 64 samples to multiply and accumulate. These were the days just before “large” multipliers were commonly available. That made using the logs an attractive solution. We used interpolation between 8 of the FIR weights to eliminate 56 multipliers, but still, how to accumulate the multiplication products?
    Enter the log adder. With some simple algebra, one can effectively add log numbers. Part of that process was linearizing the mantissa, shifting it according to its exponent, and adding that to the other number’s linearized mantissa. Then the result was normalized, the mantissa converted to a log and you have a sum.
    That experience piqued my interest in how video signals are handled, beginning in the CCD or CMOS sensor chip and on. In the years since, I have never come across anything other than a chip with a wider and wider integer output. I think some start-ups have promised wider dynamic ranges, but I don’t know what has come of it.
    Does anyone know of chips with anything other than integer digital outputs?

    • @1st_ProCactus
      @1st_ProCactus 5 years ago +1

      I may not understand, but would an MCU with over a hundred outputs fit that? Surely software can make those outputs mean anything you like?

    • @davesextraneousinformation9807
      @davesextraneousinformation9807 5 years ago +1

      @@1st_ProCactus Well, there are many considerations to get an increased dynamic range. The first one is that the sensor analog performance has to have a higher accuracy and a lower noise floor. Next is the analog to digital converter must also have commensurate performance. No amount of software can generate information that is not in the signal to begin with.
      What I was musing about was whether the data bus width is increased linearly as better sensors are developed, or are there different number systems, like a floating point number system that is output from the A/D converter. My guess is that the outputs increase in width one bit at a time, since each new bit represents twice the previous dynamic range.

    • @1st_ProCactus
      @1st_ProCactus 5 years ago +1

      So you want to input a very high dynamic range with minimal pins, logarithmic-like? I'll keep that in mind. I'm interested in any kind of sensor, though I've not noticed any sensor that does that. But I'm sure something like that must be out there for specific sensors.
      I've seen some unusually high numbers on some of the new on-chip sensors (MEMS?).
      Just curious, as far as the ADC goes: how many bits do you dream of?

  • @proudsnowtiger
    @proudsnowtiger 5 years ago +59

    I've just been writing about the new RISC-V architecture, which is a modular ISA where integer, single precision and double precision maths instructions are design options. There are a lot of interesting tech, economic and ecosystem aspects to this project, which is an open-source competitor to ARM - would love to see your take on it.

    • @hrnekbezucha
      @hrnekbezucha 5 years ago +1

      Many embedded devices get by just fine with fixed-point arithmetic to save on the cost of the MCU. RISC-V and ARM give people the option to include the floating point module. Another factor is speed: even if you need floating point, you can do it in software, where the calculation will take some 20 or however many clock cycles, while the FP module would do it in one cycle.
      CPU architecture is a really great topic, but probably not friendly for a bite-size video.

    • @foobar879
      @foobar879 5 years ago +1

      Yeah, RISC-V is really nice, can't wait for the vector extension to be implemented!
      Meanwhile I'll keep fiddling with the K210 on Sipeed's boards.

    • @hrnekbezucha
      @hrnekbezucha 5 years ago +2

      @@robertrogers2388 Also, if you want to license a chip from ARM, they'll charge you a relatively hefty fee for each chip made. One more reason RISC-V gets so much traction lately. It's becoming more than a proof of concept.

    • @floriandonhauser2383
      @floriandonhauser2383 5 years ago +3

      I actually developed a RISC-V processor at uni (VHDL, run on an FPGA). The modularity was pretty helpful.

    • @johnfrancisdoe1563
      @johnfrancisdoe1563 5 years ago

      Robert Rogers Talking of RISC-V and doing other number types in software, has anyone built a properly optimized multi-precision integer library for it, without timing side channels? Because the lack of arithmetic flags and conditional execution has me worried this is an anti-security processor, compared to MIPS, OpenSparc and ARM.

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 5 years ago +4

    6:17 What’s missing is called “dynamic range”.
    Also note it's not about large versus small numbers, but large versus small *magnitudes* of numbers. Remember that negative numbers are smaller than positive ones (and zero).

  • @willynebula6193
    @willynebula6193 5 years ago +91

    I'm a bit lost!

    • @Soken50
      @Soken50 5 years ago +24

      Did you get carried away ?

    • @anisaitmessaoud6717
      @anisaitmessaoud6717 4 years ago +3

      Me too. I think it's not explained well for the general public.

    • @VascoCC95
      @VascoCC95 4 years ago +7

      I see what you did there

    • @yearlyoatmeal
      @yearlyoatmeal 3 years ago

      @@anisaitmessaoud6717 r/whoosh

  • @merseyviking
    @merseyviking 5 years ago +2

    Love the Illuminatus! Trilogy / Robert Anton Wilson reference in the number 23. It's my go-to number after 42.

  • @jecelassumpcaojr890
    @jecelassumpcaojr890 5 years ago +3

    As more and more transistors became available, the improvement in floating point hardware was greater than the improvement in the main processor (as impressive as that was). So the difference on a more modern machine would be a lot more than the 4 times of the late 1980s computer.

  • @gordonrichardson2972
    @gordonrichardson2972 5 years ago +4

    At 01:40 he talks about recompiling the program to use the floating point co-processor. When I was programming in Fortran in the 1990s, the compiler had an option to detect this at run-time. If the co-processor was present it would be used, otherwise an emulator software library would be used instead. The performance difference was notable, but it was easier to release a single program that was compatible with both.

    • @JeffreyLWhitledge
      @JeffreyLWhitledge 5 years ago +2

      When attempting to execute a floating-point processor instruction without the coprocessor installed, an exception (interrupt) would be raised. The handler for that interrupt would then perform the calculation via software emulation and then return. It was seamless, but the performance difference was huge.

    • @gordonrichardson2972
      @gordonrichardson2972 5 years ago

      Agreed (my memory is rusty). For testing, there was a flag during compilation, so that the emulator would execute the instructions in software as if the co-processor was never installed.

    • @mrlithium69
      @mrlithium69 5 years ago

      Some compilers can do this.

    • @DaveWhoa
      @DaveWhoa 5 years ago

      cpuid

  • @ABaumstumpf
    @ABaumstumpf 5 years ago +27

    Minecraft comes to mind - there you can quite easily notice the problem, as the game has a rather large world.
    Especially in older versions - once you got a few thousand blocks away from the origin, everything started to get a bit funky because distances were relative to the absolute world origin (instead of player- or chunk-centered). Movement became stuttery, and particles and not-full-block entities became distorted.

    • @glowingone1774
      @glowingone1774 5 years ago +1

      Yeah, on mobile devices it's possible to fall through blocks due to the error in position.

    • @noxabellus
      @noxabellus 5 years ago +18

      I believe "a few thousand blocks" is an understatement...
      After checking, yes, it was after 16 *million* blocks from the origin, which still gave it a total unaffected area of 1,024,000,000 sq km, i.e. about double the surface area of Earth

    • @kellerkind6169
      @kellerkind6169 5 years ago +11

      Far Lands Or Bust

  • @todayonthebench
    @todayonthebench 5 years ago +3

    Floating point in short is a trade between resolution and dynamic range.
    If dynamic range is important, then floating point is a good option. (though, one can do this without floating point, but it gets fiddly...)
    If resolution is important, then integers are usually a better option.
    (and if any degree of rounding errors or miscounting leads to legal issues, then integers are usually the safe option. (ie banking software.))

    • @conkerconk3
      @conkerconk3 2 years ago

      In Java, there exists the "BigDecimal" class, which is a much slower but more accurate way to represent decimal numbers, which is what one might use for banking, I guess.

  • @SebastianPerezG
    @SebastianPerezG 5 years ago +6

    I remember when I tried to run 3D Studio Release 4 on my 386, the PC asked me for "you need a numeric coprocessor"; then my uncle had one and brought it over and installed it.
    Old times ...

  • @vinsonwei1306
    @vinsonwei1306 2 years ago

    Holy Smoke! Didn't realize there're so many holes in the range of float32. Great video!

    • @angeldude101
      @angeldude101 1 year ago

      There is a number system called the dyadic rationals, which form precisely 0% of all ℝeal numbers. Every single representable float value that isn't infinite or infinitesimal (floats don't actually have 0; they have positive and negative infinitesimals that they wrongly call "0"), even with arbitrary precision, is a dyadic rational, and with only finite memory, you're still missing most dyadic rationals anyways. (You do however get 16 million ways to write "error," which form 0.3% of all float values.)
      Specifically, the dyadic rationals are the integers "adjoined" with 1/2, so every sum and product formed from the integers with every power of 1/2.
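
      A quick way to see this in Python (not part of the original comment) is float.as_integer_ratio(), which recovers the exact dyadic rational a float stores:

      print((0.1).as_integer_ratio())    # (3602879701896397, 36028797018963968), i.e. n/2**55 - not 1/10
      print((0.75).as_integer_ratio())   # (3, 4): 3/2**2 is exactly representable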

  • @ZintomV1
    @ZintomV1 5 years ago

    This is a really great video by Dr Bagley!

  • @Smittel
    @Smittel 5 years ago +27

    "It's lossy but it doesn't really matter"
    *Minecraft Beta laughing 30,000,000 blocks away*

    • @teovinokur9362
      @teovinokur9362 5 years ago +1

      Minecraft Bedrock laughing 5,000,000 blocks away

  • @Debraj1978
    @Debraj1978 2 years ago

    For someone used to fixed point, a simple "if" statement:
    if (a == b)
    will not work in floating point. Also, in general, an "if" comparison takes longer to evaluate in floating point.
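
    The usual workaround is a tolerance-based comparison; e.g. in Python (a sketch):

    import math

    a = 0.1 + 0.2
    print(a == 0.3)                               # False: exact equality fails
    print(math.isclose(a, 0.3, rel_tol=1e-9))     # True: compare within a tolerance
    print(abs(a - 0.3) < 1e-9)                    # the hand-rolled fixed-epsilon version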

  • @DarshanSenTheComposer
    @DarshanSenTheComposer 5 years ago +44

    It's called *QUICK MAFFS* !!!

    • @billoddy5637
      @billoddy5637 5 years ago +2

      int var;
      var = 2 + 2;
      printf("2 + 2 is %d\n", var);
      var = var - 1;
      printf("- 1 that's %d\n", var);
      printf("QUICK MAFFS!\n");

    • @ExplicableCashew
      @ExplicableCashew 5 years ago +2

      @@billoddy5637 man = Man(kind="everyday", loc="block")
      man.smoke("trees")

  • @TheToric
    @TheToric 5 years ago

    I'm impressed that GCC still supports that architecture.

  • @brahmcdude685
    @brahmcdude685 3 years ago

    Really, really terrific video. This should be taught in school - "over the ocean" [Greta]

  • @Lightn0x
    @Lightn0x 5 years ago +2

    An observation that I don't think was mentioned: the leading digit is not necessarily 1. When the exponent is minimal, the digit is treated as 0 (i.e. it's no longer 1.[x]*2^y, but rather 0.[x]*2^y).

    • @lostwizard
      @lostwizard 5 years ago +2

      That only applies to special cases that are not normalized (which, in my not so humble opinion, are a misfeature of common floating point representations). In a properly normalized floating point number, the only possible value that doesn't have a leading 1 is zero.

    • @Lightn0x
      @Lightn0x 5 years ago +3

      @@lostwizard
      Maybe so, but the IEEE 754 standard (which this video describes and which all modern CPUs use) operates this way. Also, you call it a misfeature, but it does have its advantages (for example, it allows for more precise representations of very small numbers). Trust me, many minds brighter than yours or mine have thought out this standard, and if they thought this special case was worth implementing, they probably had their reasons :)

    • @lostwizard
      @lostwizard 5 years ago +2

      @@Lightn0x Sure. I've even read the reasoning for it. I just don't agree that everyone should be saddled with it because five people have a use for it. (Note: hyperbole) I'm sure it doesn't cause much trouble for hardware implementations other than increasing the die real estate but it does make software implementations more "interesting" if they need to handle everything. Any road, I wouldn't throw out IEEE 754 just because I think they done goofed. :)

    • @tamasdemjen4242
      @tamasdemjen4242 5 years ago +3

      It's to prevent division by 0 due to underflow. Assume `a` and `b` are different numbers, but so close to each other that `a - b` would give a result of 0. That's called an underflow. Then 1 / (a - b) would cause a division by zero, even though `a` is not equal to `b`. Denormal (or subnormal) numbers guarantee that additions and subtractions cannot underflow. So if a != b, then a - b != 0. Yes, it requires extra logic in the hardware.
      Also, there are two zeros, positive zero, and negative zero. He couldn't possibly mention everything in a short video. There's a document called "What every computer scientist should know about floating-point arithmetic". It's 44 pages and VERY intense.
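
      A quick check with Python's 64-bit doubles (the constants below are the IEEE 754 double limits):

      import sys

      print(sys.float_info.min)     # 2.2250738585072014e-308, smallest positive normal
      a = 1.25 * 2.0 ** -1022       # two nearby tiny normals
      b = 1.00 * 2.0 ** -1022
      print(a - b)                  # ~5.56e-309, a subnormal: not flushed to zero
      print(2.0 ** -1074 / 2)       # 0.0: halving the smallest subnormal underflows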

    • @johnfrancisdoe1563
      @johnfrancisdoe1563 5 years ago

      Tamas Demjen Seems like a short summary document of the hype variety. IEEE 754 representation became extremely popular due to Intel's hardware implementation, but like any design it has its quirks and implementation variations. Other floating point formats do exist and have different error characteristics. Some don't have negative 0, many don't have the NaN concept (slows down emulation), most use different exponent encoding and binary layout. For example, Turbo Pascal for x86 without a coprocessor had a 6-byte real type. The Texas Instruments 99/4 used a base-100 floating point format to get a 1:1 mapping to decimal notation. Each mainframe and traditional supercomputer brand had its own format too.

  • @jeffreyblack666
    @jeffreyblack666 5 years ago +37

    I'm disappointed that this doesn't actually go through how they work and instead just says how they store the bits.

    • @jeffreyblack666
      @jeffreyblack666 5 years ago +20

      @ebulating If that was the case then they wouldn't exist. The fact that they do exist means it can be explained.

    • @louiscloete3307
      @louiscloete3307 5 years ago

      @Jefferey Black I second!

    • @visualdragon
      @visualdragon 5 years ago

      @@jeffreyblack666 Assume that @ebulating said "...too complicated to explain" in a 15-minute video on YouTube.

    • @jeffreyblack666
      @jeffreyblack666 5 years ago

      @@visualdragon Except now they have released a part 2 which goes over addition in 8 minutes (although I haven't yet watched it). They have now changed the title to something far more appropriate rather than the clickbait they had before.

  • @kooky216
    @kooky216 5 years ago +3

    2:51 back when the camera would shoot a right-handed writer from the left side, the good old days ;)

  • @willriley9316
    @willriley9316 3 years ago +1

    It is difficult to follow your explanation because the camera keeps switching around. It would be helpful to maintain a visual perspective throughout your verbal explanation.

  • @damicapra94
    @damicapra94 5 years ago +30

    He did not really talk about the FPU though

    • @Milithryus
      @Milithryus 5 years ago +2

      Yes this video is exclusively about floating point representation. Being able to represent floating points, and computing operations on them are very different problems. Disappointing.

    • @wallythewall600
      @wallythewall600 5 years ago +5

      Simple enough. Take the numbers, and compare the exponents. You fix the smaller exponent to become the larger one and rewrite the mantissa to keep it the same value, then just add mantissas.
      Suppose you have a larger (in magnitude) floating point number and a smaller (again, in magnitude) floating point number. Say the exponent portion for the larger number is 5 ("101" in binary; keep in mind I won't be writing all the leading zeroes that would be in the actual floating point representation, for simplicity's sake) and the smaller number's exponent is 4 ("100"). The difference between them is 1. Now, to do the exponent-changing magic, all you need to do is shift the mantissa to the right by the difference in the exponents. Consider the mantissa of the smaller number to be "1.011", where I included the binary point since I'm considering not just the fractional part of the mantissa but also the unit part. If you wanted to turn the 4 exponent into a 5, you shift the mantissa right once, and your mantissa becomes "0.1011". Check for yourself: "1.011" with exponent 4 is the same as "0.1011" with exponent 5. You can also check that if the difference were larger, you just keep shifting right by the difference in exponents, tacking on leading zeroes to the mantissa as needed.
      The problem is we only have a fixed amount of bits representing the mantissa. If you have to shift right and the last binary digit in the mantissa is a 1 and not just a trailing zero, when you shift right it just drops off (losing information/"precision" in the process). Now, if we have to shift right 24 times and we only have 23 binary digits for the mantissa... we end up storing just zeroes in the mantissa. I've gotten more unending loops in some of my programs by not keeping this in mind.
      Hardware-wise, you need binary integer addition to find the difference in exponent bits, a right bit shifter for rewriting the mantissa to fit the new exponent, and again binary integer addition to add the mantissas. You also need a few small things like registers to store information (you need to remember the largest exponent and the difference, for example) and some hardware for the case where the sum carries past the units place of the mantissa (you again need to shift the mantissa and adjust the exponent to get back your final normalized representation), but it's not complicated to imagine how you could implement this.
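
      A toy Python version of that shift-and-add (signs, rounding and subnormals omitted; mantissas are integers that include the implicit leading 1):

      def fp_add(m1, e1, m2, e2, mant_bits=23):
          if e1 < e2:                        # make the first operand the larger one
              m1, e1, m2, e2 = m2, e2, m1, e1
          m2 >>= (e1 - e2)                   # align: low bits simply drop off (lost precision)
          m, e = m1 + m2, e1
          while m >= 1 << (mant_bits + 1):   # renormalize if the sum carried past the units place
              m >>= 1
              e += 1
          return m, e

      # 1.011 * 2^5 + 1.011 * 2^4, with mantissas scaled to 23 fraction bits
      print(fp_add(0b1011 << 20, 5, 0b1011 << 20, 4))   # (8650752, 6) = 1.03125 * 2^6 = 66.0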

    • @johnfrancisdoe1563
      @johnfrancisdoe1563 5 years ago

      Wally the Wall Still too basic. Things get complicated when you want to do maximum-speed double floats on a 16- or 32-bit integer-only CPU. His demo example must have done lots of other stuff for the speedup to be only about 4x.

    • @wallythewall600
      @wallythewall600 5 years ago +1

      @@johnfrancisdoe1563 Well, I explained it about as deeply as Computerphile would have. It's not like they were going to go into actual architecture design details, which I myself absolutely have no idea about.

    • @Cygnus0lor
      @Cygnus0lor 5 years ago

      Check out the next part

  • @GH-oi2jf
    @GH-oi2jf 5 years ago +2

    He gets off to a bad start. The alternative to “floating point” is not “integer” but “fixed point.” It would be better if he got that right at the beginning.

  • @TheJaguar1983
    @TheJaguar1983 5 years ago +2

    When I started using Pascal, my programs would crash when I enabled floating point. Took me ages to realise that my 386 didn't have an FPU. I was probably about 10 at the time.

  • @MattExzy
    @MattExzy 5 years ago

    It's one of my personal favourite units.

  • @rcookie5128
    @rcookie5128 5 years ago

    Thanks for the episode, really appreciate it!

  • @SouravTechLabs
    @SouravTechLabs 5 years ago +1

    Excellent video.
    But I have a request!
    Can you add links to the videos previewed at 15:36 to the description? That will make life easier!

  • @slpk
    @slpk 5 years ago +1

    Wouldn't a GoPro-like camera positioned at the top of the paper and filming upside-down be better for the kinds of zoom you do?
    I would think the shots wouldn't get skewed like these ones do.

  • @davesextraneousinformation9807
    @davesextraneousinformation9807 5 years ago +1

    Oh, I almost forgot! I wanted to ask how computers calculate numbers like Pi to ridiculously many decimal places. How do they do that? And the inverse of that: how do they calculate all those huge primes and such? That sounds like a great Computerphile or Numberphile topic.

    • @quintrankid8045
      @quintrankid8045 5 years ago

      Search for Arbitrary Precision Arithmetic and/or bignum.

    • @johnfrancisdoe1563
      @johnfrancisdoe1563 5 years ago

      Richard Vaughn For special jobs like digits of Pi, there are algorithms that don't need a billion-digit type.

    • @davesextraneousinformation9807
      @davesextraneousinformation9807 5 years ago

      Thanks for the info, guys!

  • @marksykes8722
    @marksykes8722 5 years ago +2

    Still have a Weitek 3167 sitting around somewhere.

  • @Jaksary
    @Jaksary 5 years ago +5

    PLEASE, can you look at a program like the spinning cube in more detail? On another channel maybe? I'm interested in the details, thanks! :)

  • @Gengh13
    @Gengh13 5 years ago

    You should start using the YouTube feature to link previous videos (the i in the corner), it's handy.

    • @johnfrancisdoe1563
      @johnfrancisdoe1563 5 years ago

      Genghisnico13 Links in description are much better. In-video links have been abused to death by VEVO.

    • @Gengh13
      @Gengh13 5 years ago

      @@johnfrancisdoe1563 either works for me, unfortunately none are present.

  • @digitalanthony7992
    @digitalanthony7992 5 years ago

    Literally just had a quiz on this stuff yesterday.

  • @brahmcdude685
    @brahmcdude685 3 years ago +1

    Just a thought: why not place a second top camera looking straight down onto the written paper?

  • @RayanMADAO
    @RayanMADAO 1 year ago +1

    I don't understand why the float couldn't add 1

    • @angeldude101
      @angeldude101 1 year ago +1

      Try doing 1.000 * 10^6 + 1. Expanded, it becomes 1 000 000 + 1 = 1 000 001. Convert back to scientific notation: 1.000001 * 10^6. However, we only have finite space to store values, so we have to round to 4 significant digits like we started with, giving 1.000 * 10^6... Wait, didn't we just add 1? Where'd the 1 go‽ The exact same place as the 1 that got lost when adding to a float that's too big: it got rounded off.
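
      The same effect in Python, whose floats are doubles with 53 significant bits (a quick check):

      x = 2.0 ** 53            # 9007199254740992.0
      print(x + 1 == x)        # True: the +1 is rounded away
      print(x + 2 == x)        # False: 2 is the gap between doubles up here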

  • @treahblade
    @treahblade 5 years ago

    I actually ran this program on a 486DX, which has a floating point unit on it. Adding 2 does not actually solve the problem, at least on that processor. The last few numbers you get are 15, 16, 18, 20, 22. The weird one here is the jump from 15 -> 16; it should go to 17. In hex it's 4b7fffff and 4b800000.

  • @AliAbbas-of2vq
    @AliAbbas-of2vq 5 years ago +8

    The guy resembles a young Phillip Seymour Hoffman.

  • @EllipticGeometry
    @EllipticGeometry 5 years ago +6

    I wouldn’t say floating point is any more lossy than fixed point or an integer. They all have their own way to lose precision and overflow. If you use arbitrary-precision math, you can get really unwieldy numbers or even be forced to use symbolic representations if you want something like sin(1) to be exact. It’s really about choosing a representation that suits your needs.
    By the way, floating point is excellent in 3D graphics. Positions need to be more precise the closer they are to the camera, because the projection magnifies them. Floating point is ideal for storing that difference from the camera. I suspect the rasterization hardware in modern GPUs lives on the boundary between fixed and floating point, with things like shared exponents to get the most useful combination of properties.

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 5 years ago

    15:34 Ah, but it could make a difference if you are trying to do very long baseline interferometry. For example, setting up a future radio telescope array. Maybe call it the “Square Light-Year Array”.

  • @ExplicableCashew
    @ExplicableCashew 5 years ago +18

    Today I realized that 42 is "Lololo" in binary

    • @KnakuanaRka
      @KnakuanaRka 3 years ago

      65 = 5*13 = 101 x D = lol xD

  • @dipi71
    @dipi71 5 years ago +10

    13:01 if you declare your main function to return an int, you should actually return an integer value. Just saying.

    • @_tonypacheco
      @_tonypacheco 5 years ago

      Doesn't matter, it'll return 0 by default, which indicates a successful run anyway.

    • @dipi71
      @dipi71 5 years ago

      @@_tonypacheco It's a bad default from C's early days, it works only for main(), and compiler flags like the highly recommended »-Wall« will warn you. Just return your state properly.

    • @9999rav
      @9999rav 5 years ago

      @@dipi71 in C++ return is not needed in main(). And you will get no warnings, as it is defined in the standard

  • @gravity4606
    @gravity4606 5 years ago

    I like 2^4 power notation as well. Easier to read.

  • @nietschecrossout550
    @nietschecrossout550 5 years ago +10

    IEEE 754, float128:
    Is there a way to chain together two doubles (float64) in order to emulate a float with a 104-bit mantissa?

    • @nietschecrossout550
      @nietschecrossout550 5 years ago

      I guess that a double-double would be faster than an [Intel] long double or the GCC __float128 implementation, as there is actual hardware support for 64-bit floats.
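
      The core of that double-double idea is an error-free two-sum; a Python sketch of the Knuth/Dekker step (a full double-double library also needs matching products and renormalization):

      def two_sum(a, b):
          # returns (s, err) where s = fl(a + b) and a + b == s + err exactly
          s = a + b
          t = s - a
          err = (a - (s - t)) + (b - t)
          return s, err

      hi, lo = two_sum(1.0, 2.0 ** -60)   # 1 + 2^-60 doesn't fit in one double...
      print(hi, lo)                       # 1.0 8.673617379884035e-19 ...but fits in two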

    • @peterjohnson9438
      @peterjohnson9438 5 years ago +7

      There's some hardware with support for 128 bit float, but it isn't a standard feature, and you can't really force a vector unit to treat two 64 bit floats as a single value due to the bit allocation patterns being physically wired into the hardware.
      You're better off rethinking your algorithm.
      [edit: standardized -> a standard feature to avoid confusion.]

    • @nietschecrossout550
      @nietschecrossout550 5 years ago

      @@peterjohnson9438 Even if emulating something close to a 128-bit float required 4 or more double operations, it would still be faster than all software implementations, plus it mostly puts load on the FPUs instead of being a generic load. Therefore it seems to me (with my limited knowledge) that a hardware-based implementation is more desirable than a pure software float128 according to IEEE 754.
      As far as I know, 128-bit FPUs are very, very scarce and expensive, and therefore mostly undesirable because - unless you're running a purpose-built data center - you're not going to use f128 very often. That makes a solution using multiple f64s even more desirable. I don't know what such an implementation would look like, though it would have multiple buffer f64s for sure.
      edit: replaced "most" with "all"

    • @ABaumstumpf
      @ABaumstumpf 5 years ago +3

      quad precision in hardware is really rare as it is hardly ever used and using the compiler specific implementations is sufficient for most scenarios. And you will not manage to get better general performance with self-made constructs - they are already based on what the hardware can deliver.

    • @sundhaug92
      @sundhaug92 5 years ago +6

      @@peterjohnson9438 128 bit float is standardized, it's part of IEEE 754, it's just not common in consumer hardware

  • @GogiRegion
    @GogiRegion 5 years ago +1

    I don't think I've ever had the problem where an equality test didn't work with floating point numbers, because I've never actually used one in an actual program. I'm curious what kind of program would actually use that.

    • @visualdragon
      @visualdragon 5 years ago

      Assume you have a vessel of some sort that you know holds x units of something and you have a program for monitoring the level in that vessel. You now start to fill that vessel and every time you add 1 unit you check to see if the current level equals the max level. It is possible when using floats or doubles that the test of maxCapacity == currentLevel will fail even when they are "equal" and then there's gonna be a big mess and somebody is going to get fired. :)

  • @OlafDoschke
    @OlafDoschke 5 years ago +2

    2^(-53)+2^(-53)+1 vs 2^(-53)+1+2^(-53)
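
    Spelled out in Python (the order decides which additions get rounded away):

    a = 2.0 ** -53
    print(a + a + 1)    # 1.0000000000000002: the halves combine to 2^-52 before the 1 arrives
    print(a + 1 + a)    # 1.0: each lone 2^-53 is rounded away (ties-to-even)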

  • @hymnsfordisco
    @hymnsfordisco 5 years ago

    So does this mean the smallest possible positive number, 1*2^-127, would then have the same representation as 0? That seems like a very nice way to move the problem number to the least important possible value (at least in terms of making it distinct from 0)

    •  1 year ago

      By the definition, a nonzero number can only go to 2^-126, so 2^-127 is not representable as a normal. On FPUs that allow denormals (x86 does), it would be converted to that (Wikipedia has an article on subnormals); otherwise an underflow exception would be raised and the number would be rounded to zero.

  • @okboing
    @okboing 4 years ago

    One way you can find out what number size your computer uses (32-bit, 64-bit) is to type 16777216 + 0.125 into your calculator. If it returns 16777216 without a decimal, your machine uses 32-bit; otherwise the equation will return 16777216.125 and your machine is 64-bit.

    • @peNdantry
      @peNdantry 3 years ago

      Non sequitur. Your facts are uncoordinated. Sterilise! Sterilise!

    • @okboing
      @okboing 3 years ago

      @@peNdantry huh

    • @peNdantry
      @peNdantry 3 years ago

      @@okboing No fair! You edited it! Cheater cheater pumpkin eater!

    • @okboing
      @okboing 3 years ago

      @@peNdantry if I wasn't posta edit it there wouldnt be an edit button

    • @peNdantry
      @peNdantry 3 years ago

      @@okboing I have no clue what you're saying

  • @deckluck372
    @deckluck372 5 years ago

    At the end "maybe I should have done sixteen bit numbers." 😂

  • @DanielDupriest
    @DanielDupriest 5 years ago

    6:37 I've never seen a video that was corrected for skew before!

    • @johnfrancisdoe1563
      @johnfrancisdoe1563 5 years ago

      Daniel Dupriest The wobbling is the actual paper being wobbly because someone stored it folded sideways.

  • @avi12
    @avi12 5 years ago +1

    I love geeky videos like this one!

  • @benjaminbrady2385
    @benjaminbrady2385 5 years ago +4

    8:43 when you prove someone wrong

  • @egonkirchof
    @egonkirchof 7 months ago

    What if they represented it with fractions and only generated a decimal number when needed for printing?

    • @g33xzi11a
      @g33xzi11a 7 months ago

      Yes. This is a tactic that we use sometimes where we store the numerator and denominator separately and only combine them later. The problem is that division is very very slow in computers relative to multiplication and adding. It’s worth saying that what you’re thinking of is not a decimal. It’s a decimal fraction. It’s already a fraction. The denominator of that fraction is always known given the length of the numerator so we don’t need to write the denominator. In base 10 these fraction denominators are powers of ten. In base 2 these fraction denominators are powers of 2. Binary numbers containing a fractional component are not decimals even if you separate the whole number component from the fractional component with your preferred indicator of that separation (like a period/point/fullstop). It’s still binary. It’s just a binary fraction.
      The reason you sometimes see the decimal fraction names in shorthand as “a decimal” is because in English we had a long history of computation on fractions using geometry rather than a positional number system like decimal and the number symbols we use are older than our use of decimal positional math. So for many English speakers the idea of numbers or fractions was distinct from positional numbering systems like decimal and one of the most interesting concepts to them would have been the ability to represent a fractional component in-line with the whole number component hence conflating decimal with decimal fraction like you are. But no. Decimal fractions are just fractions with a couple of conveniences baked in.
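
      Python ships that store-the-two-parts tactic as fractions.Fraction (a quick illustration):

      from fractions import Fraction

      x = Fraction(1, 10) + Fraction(2, 10)   # numerator and denominator kept exact
      print(x == Fraction(3, 10))             # True: no rounding has happened yet
      print(float(x))                         # 0.3 - the lossy step waits until printing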

  • @jakeshomer1990
    @jakeshomer1990 5 years ago +2

    Can we get your code for this program?? Great video btw!!!

    • @Architector_4
      @Architector_4 5 years ago +1

      The whole program except the last closing curly bracket is visible at 12:36, you can just write it out from your screen

    • @jakeshomer1990
      @jakeshomer1990 5 years ago +5

      @Icar-us Sorry I meant the code for the spinning cube

  • @MarkStead
    @MarkStead 5 years ago

    Yeah, I used fixed point when coding 3D graphics on a Z80.

  • @dendritedigital2430
    @dendritedigital2430 2 years ago

    I don't know why computers can't sort this out on a basic level. Either leave it in fractional form, or use a number system that has the factors you are using. Behind the scenes you could have a base like 16 * 9 * 49 * 11 * 13 = 1,009,008 and get an integer value that is exactly right. It would be close to the metric system for computers (1024). Example: 200 / 3 = 200 * 1009008 / 3 = 67267200 / 1009008 = 66.666666666666...

    • @g33xzi11a
      @g33xzi11a 7 months ago

      Computers can't sort this out at a basic level because they are binary as a physical constraint. Transistors are designed to be on or off and nothing else, for a variety of practical reasons related to electrical engineering and fabrication, and a deeply entrenched programming ecosystem assumes binary through every single level of abstraction. To do what you're suggesting, we would need a large number of transistors, each holding exactly one of some prime number of states at any given time (and able to jump between states reliably before they are next observed). These would then need to be coordinated with other logic modules that have no idea what's going on and still work in base two; outside of the specialized hardware used for this (and maybe some cryptographic processes), those super-specialized, ultra-expensive transistors would be useless for the general purposes of the computer. Meanwhile, a binary floating point number can be added and multiplied using the same general-purpose adders and multipliers that work on binary integers, so there is no conversion layer needed to put it back in a form every other part of the system agrees on.
      All of this also ignores that division and finding prime factors are notoriously slow compared to addition and multiplication, which are very fast. There's just no reason to do what you're suggesting at a fundamental level. There are specialized code libraries for handling numbers that need to be exact; these usually store the explicit numerator and the normally implicit denominator separately and perform the math on whole numbers until you need to finalize, at which point they do the slow division we were otherwise trying to avoid. Even these libraries eventually make cutoffs for rounding, though, because they don't have infinite space and would break immediately if given an irrational number like pi to calculate in earnest.

  • @pcuser80
    @pcuser80 5 years ago +3

    My slow 8088 PC with an 8087 was much faster with Lotus 1-2-3 than an 80286 AT PC.

    • @ataksnajpera
      @ataksnajpera 5 years ago

      "than", not "then", German.

    • @sundhaug92
      @sundhaug92 5 years ago +2

      x87 is kinda interesting, because while it supports IEEE 754 doubles, it actually uses a custom 80-bit format internally

    • @gordonrichardson2972
      @gordonrichardson2972 5 years ago +1

      Yeah, mainly for rounding and transcendental functions, to limit inaccuracy. Studied that way back in the 1980s.

    • @pcuser80
      @pcuser80 5 years ago

      @@ataksnajpera corrected

  • @VADemon
    @VADemon 5 years ago

    Original video title: "Floating Point Processors - Computerphile"

  • @charlescox290
    @charlescox290 5 years ago

    I cannot believe a college professor just assigned an int to a float without a cast. That's a big no-no, and I'm surprised GCC didn't pop a warning.
    Or do you ignore your warnings?

  • @grainfrizz
    @grainfrizz 5 years ago +21

    0.1st

    • @justinjustin7224
      @justinjustin7224 5 years ago

      Dustin Boyd
      No, they’re obviously saying they’re 1/16 of the way to the first comment.

  • @amalirfan
    @amalirfan 3 years ago

    I love binary math, never made me do 14 x 7, decimal math is hard.

  • @aparnabalamurugan4444
    @aparnabalamurugan4444 3 years ago

    Why is 127 added? I don't get it.

  • @ryanbmd7988
    @ryanbmd7988 5 years ago

    Could a 1040ST get an FPU upgrade? What magic does he use to get the Atari video onto an LCD?!?

  • @Concentrum
    @Concentrum 5 years ago

    what is this editing sorcery at 7:19?

  • @halistinejenkins5289
    @halistinejenkins5289 5 years ago +1

    when i see the English version of Ric Flair in the thumbnails, i click

  • @Cashman9111
    @Cashman9111 5 years ago

    6:25 wohohohooo!!!... that was... quick

  • @BurnabyAlex
    @BurnabyAlex 5 years ago +4

    Google says Alpha Centauri is 4.132 × 10^19 mm away

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 5 years ago

    5:22 It’s just easier to say “two to the sixteen”.

  • @Jtretta
    @Jtretta 5 years ago

    0:15 And then you have AMD CPU cores that share a single FPU between two execution units and still call the arrangement two full cores. What a silly idea in my opinion; in addition to their generally subpar IPC, the FPU had to be shared by the two "cores", which reduced performance.

    • @johnfrancisdoe1563
      @johnfrancisdoe1563 5 years ago +1

      Jtretta Maybe some of their IPC loss was from not being as reckless with speculative execution as Intel.

  •  5 years ago

    "Popular auction site beginning with the letter e"
    Which could it be? :D

  • @silkwesir1444
    @silkwesir1444 5 years ago +1

    42 + 23

  • @brahmcdude685
    @brahmcdude685 3 years ago

    Also: please make sure the sharpie has ink :(

  • @chaoslab
    @chaoslab 5 years ago

    Thanks! :-)

  • @EliA-mm7ul
    @EliA-mm7ul 5 years ago

    This guy is stuck in the 80s

  • @danieljensen2626
    @danieljensen2626 5 years ago +1

    Probably worth mentioning that even in modern systems fixed point is still faster if you can use it. Real time digital signal processing often still uses fixed point if it's operating at really high sample rates.

    • @shifter65
      @shifter65 5 years ago

      Is the fixed point done in software or is it supported by hardware?

    • @danieljensen2626
      @danieljensen2626 5 years ago

      @@shifter65 Hardware. Doing it just with software wouldn't be any faster, but with hardware support you save several steps with each operation because you don't need to worry about exponents, just straight binary addition. Many digital signal processing boards don't even support floating point.
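
      For example, Q15 (a common DSP fixed-point format: 1 sign bit, 15 fraction bits); a Python sketch where integer ops stand in for the hardware:

      Q = 15

      def to_q15(x):
          return int(round(x * (1 << Q)))      # scale a real into a plain integer

      def q15_mul(a, b):
          return (a * b) >> Q                  # one integer multiply + shift: no exponents to align

      a, b = to_q15(0.5), to_q15(-0.25)
      print(q15_mul(a, b) / (1 << Q))          # -0.125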

    • @shifter65
      @shifter65 5 years ago

      @@danieljensen2626 I was wondering with regards to CPUs (sorry for not being clear). The previous comment mentions that modern CPUs use fixed point for some processes. Is there a hardware equivalent to the FPU in modern CPUs to do these tasks? For DSP I would imagine since the hardware is custom that the fixed point would be incorporated, but curious about general purpose computers.

    • @danieljensen2626
      @danieljensen2626 5 years ago

      @@shifter65 Ah, yeah, I don't actually know, but my guess would be yes.

    • @MrGencyExit64
      @MrGencyExit64 5 years ago

      @@tripplefives1402 Floating-point also has additional exceptional cases. Division by zero is undefined in integer math, but usually well-defined by anything calling itself floating-point.

  • @yesgood1357
    @yesgood1357 4 years ago

    You really should do a proper animation of what is going on. I don't really get what he is saying.

  • @Goodvvine
    @Goodvvine 5 years ago

    ha, the final clip 🤣

  • @cmdlp4178
    @cmdlp4178 5 years ago +1

    On the topic of floating point numbers, there should be a video on the inverse square root hack in the Quake source code. And I would like to see videos about other bit hacks.

    • @rallokkcaz
      @rallokkcaz 5 years ago +1

      cmdLP The problem with the Quake rsqrt is that it's almost unknown who actually wrote it, and most resources describing how it was designed are also just guessing for the most part. Thank you, miscellaneous SGI developer, for that wonderful solution to one of the most complex problems in computational mathematics.

  • @wmd5645
    @wmd5645 5 years ago

    I had to dig back into these topics recently and Java was really being bad.
    DIP class using Java... no unsigned int directly supported unless casted, and the values still weren't 0-255, they kept coming out as 0-127. Tried to use char like in C but it proved to be difficult. The .raw image file had a max pixel value less than 255, I think around 230ish. Anyone know why using Java's toUnsigned int/short/long still wanted to truncate my pixel values?

    • @thepi4587
      @thepi4587 5 years ago

      It wouldn't be Java's byte being signed by default tripping you up, would it?

    • @wmd5645
      @wmd5645 5 years ago

      @@thepi4587 Could be, because that's how I read in the file. But I cast the values afterwards. Still the same result.

    • @thepi4587
      @thepi4587 5 years ago

      I don't really use Java myself, I just remember reading about this same problem before at one point. I think the answer ended up being to just not use bytes and stick with a larger primitive that could handle 0-255 instead.

  • @davidho1258
    @davidho1258 5 years ago

    simple program = 3d spinning cube :/

  •  5 years ago

    BCD or nothing.

  • @cursed_multicel
    @cursed_multicel 5 years ago

    The delivery was very poor and confusing. Scripts matter

  • @nakitumizajashi4047
    @nakitumizajashi4047 5 years ago

    1.0 != 1

  • @kimanih617
    @kimanih617 5 years ago +4

    Too early, snoozing

  • @trueriver1950
    @trueriver1950 5 years ago

    COBOL (a business language) had fixed-point numbers that were not integers and used them for money. So a pound or dollar could be stored as a value with a fixed 2 decimal places of fractional part.
    Unlike spreadsheets nowadays, you could do calculations on money values and get exact answers without rounding errors.
    More exactly: by specifying the number type, you knew exactly how the rounding would be applied at each step in the calculation.
    We lost that somewhere along the way....
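
    Python's decimal module can reproduce that behaviour (a sketch; the two-decimal COBOL picture is emulated with quantize):

    from decimal import Decimal, ROUND_HALF_UP

    price = Decimal("19.99")                # like a COBOL PIC 9(5)V99 field
    total = price * 3                       # exact decimal arithmetic: 59.97
    print(total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 59.97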

  • @GaryGrumble
    @GaryGrumble 5 years ago

    You obviously have never worked with floating point data.

  • @VincentRiviere
    @VincentRiviere 5 years ago

    Which GCC version did you use to compile your cube program?

  • @SimGunther
    @SimGunther 5 years ago +1

    Compilers + Computerphile == comprehensible video
    >>> False
    Can we please just have ONE compiler video series that matches the calibre of the rest of the catalogue? PLEASE???

  • @NotMarkKnopfler
    @NotMarkKnopfler 5 years ago

    That didn't really make much sense, guys.

  • @tsunghan_yu
    @tsunghan_yu 5 years ago

    Can someone explain *((int*)&y)?

    • @Simon8162
      @Simon8162 5 years ago +4

      It casts the address of `y` to an int pointer. Remember y is a float, so by creating an int pointer to it you end up treating it like an int. Presumably int and float are the same size on that machine.
      Then the int pointer is dereferenced. The value of the int will be the IEEE representation of the float, which is passed to printf.

    • @tiarkrezar
      @tiarkrezar 5 years ago +3

      He wanted to see the actual bit pattern of the float as it's stored in memory, this is a roundabout way to cast the float to an int without doing any type conversion because C doesn't offer an easy way to do that otherwise. Just doing (int) y would give a different result.

    • @jecelassumpcaojr890
      @jecelassumpcaojr890 5 years ago +1

      @@tiarkrezar the proper C way to see the bits without conversion is a union, which is like a struct but the different components take up the same space instead of being stored one after the other.

    • @tiarkrezar
      @tiarkrezar 5 years ago

      @@jecelassumpcaojr890 Yes, but it's still kind of awkward to define a throwaway union for a one time use like this, that's why the pointer mangling way comes in handy. For this exact reason, I kind of wish printf had a "raw data" format specifier that would just print out the actual contents of the memory without worrying about types.

    • @Vinxian1
      @Vinxian1 5 years ago +1

      &y gets the memory address (pointer) of the float.
      (int*) tells the compiler that it should treat &y as a pointer to an integer.
      The final * fetches the value stored at this memory location.
      So *(int*)&y gives you an integer whose value corresponds to the binary representation of the float.
      This is helpful if you need to store a float in EEPROM, or, like in this video, to print the hexadecimal representation with "%X".
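
      In Python, the struct module gives the same view of the bits (a sketch of the same type-pun, not from the video):

      import struct

      y = 16.25
      bits = struct.unpack("<I", struct.pack("<f", y))[0]   # reinterpret the float32 bits
      print(hex(bits))   # 0x41820000: sign 0, exponent 131 (= 4 + 127), mantissa 1.015625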

  • @1st_ProCactus
    @1st_ProCactus 5 years ago

    I don't care, just fix it :P

  • @frankharr9466
    @frankharr9466 5 years ago

    Oh, this reminds me of when I made my app.

  • @lawrencedoliveiro9104
    @lawrencedoliveiro9104 5 years ago

    12:48 C-language trivia question: are the parentheses in “(1

  • @HPD1171
    @HPD1171 5 years ago

    why did you use *((int*)&y) instead of just using a union with a float and int type?

  • @CodeMaker4
    @CodeMaker4 5 years ago

    There are exactly 1k views.