Floating Point Numbers - Computerphile

  • Published: 28 Sep 2024
  • Why can't floating point do money? It's a brilliant solution for speed of calculations in the computer, but how and why does moving the decimal point (well, in this case binary or radix point) help and how does it get currency so wrong?
    3D Graphics Playlist: • Triangles and Pixels
    The Trouble with Timezones: • The Problem with Time ...
    More from Tom Scott: / enyay and / tomscott
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computerphile is a sister project to Brady Haran's Numberphile. See the full list of Brady's video projects at: bit.ly/bradycha...

Comments • 825

  • @MorreskiBear 8 years ago +3763

    Explains weird results I got in BASIC programs 29.99999998 years ago!

  • @Daealis 10 years ago +3323

    I got 0.9999... Problems and floating point is one.

  • @InsaneMetalSoldier 8 years ago +10274

Everything is easier to understand if it's explained in a British accent

  • @DjVortex-w 10 years ago +737

    Small correction: You don't need a 64-bit computer to use 64-bit floating point numbers in hardware. For example on Intel processors 64-bit floating point has been supported since at least the 8087, which was a math co-processor for the 16-bit 8086 CPU. (The math co-processor has been integrated into the main CPU since the 80486, which was a 32-bit processor.)

  • @MMMowman23 10 years ago +1740

    This is why Minecraft 1.7.3 alpha has block errors after 2 million blocks away from spawn. Pretty cool.

  • @eisikater1584 8 years ago +75

Do you remember the (in)famous Apple IIe 2-squared error? 2+2 yielded four, as it should have, but 2^2 yielded something like 3.9999998, and floating point arithmetic was difficult on 8-bit computers anyway. I once even used a character array to emulate fifteen digits after the decimal point; not something I'd do nowadays, but it worked then.

  • @Creaform003 10 years ago +774

Minecraft stores entities as floating points; when the world was infinite you could teleport something like 30,000 km in any direction and see objects start to move and stutter about, including the player.
Once you hit the 32-bit int overflow the world would become Swiss cheese, and at the 64-bit int overflow, the world would collapse and crash.

  • @themodernshoe2466 10 years ago +491

    Can you PLEASE do more with Tom Scott? He's awesome!

  • @wolverine9632 7 years ago +169

    I remember the first time I experienced this. I was writing a Pac-Man clone, and I set Pac-Man's speed to be 0.2, where 1.0 would be the distance from one dot to another. Everything worked fine until I started coding the wall collisions, where Pac-Man keeps going straight ahead until hitting a wall, causing him to stop. The code checked to see if Pac-Man's location was a whole integer value, like 3.0, and if it was it would figure out if a wall had been hit. When I tested it, though, Pac-Man went straight through the walls. If I changed the speed to 0.25, though, it worked exactly as expected. I was baffled for a few moments, and then it hit me. Computers don't store decimal values the way you might first expect.
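The Pac-Man bug described above can be reproduced in a few lines of Python (the step sizes 0.2 and 0.25 are taken from the comment; the loop counts are illustrative):

```python
# Stepping by 0.2 (not exactly representable in binary) never lands exactly
# on the integer 3.0, while stepping by 0.25 (an exact power of two) does.
pos = 0.0
for _ in range(15):        # fifteen steps of 0.2 "should" reach exactly 3.0
    pos += 0.2
print(pos == 3.0)          # False: pos is actually 3.0000000000000004

pos2 = 0.0
for _ in range(12):        # twelve steps of 0.25 reach exactly 3.0
    pos2 += 0.25
print(pos2 == 3.0)         # True: 0.25 is 2^-2, so every step is exact
```

This is why the 0.25 speed worked: every intermediate sum is an exact binary fraction.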

  • @antivanti 10 years ago +452

    nought (nɔːt)
    (Mathematics) the digit 0; zero: used esp in counting or numbering
    For those who were wondering. I guess it has the advantage of being half the number of syllables as "zero".

    • @IceMetalPunk 10 years ago +147

      My calculus teacher in high school loved that word, mainly because when he'd have a subscript of 0, he'd use the variable y just to say, "Well, y-nought?"

  • @miss1epicness 10 years ago +22

    Tom Scott's videos strike me (a young CS student) as my favorite. They're really authentic with the "and this is the part where you, a young programmer, start to tear your hair out"--please keep them up, I'm really enjoying it!

  • @mountainhobo 10 years ago +4

Ran into this 30 years ago. I was working in finance and trying my hand at simple applications running fairly complex transactions over a many-year horizon. I was dumbfounded then; it took me a while to learn what was going on. Thanks for the trip down memory lane.

  • @JSRFFD2 8 years ago +320

    OK, so why does the cheap pocket calculator show the correct answer? I thought they used floating point numbers too. Do they have electronics that do computation in decimal?
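Many pocket calculators do in fact work in decimal (binary-coded decimal) rather than binary floating point. Python's `decimal` module gives a sketch of why that sidesteps the problem:

```python
from decimal import Decimal

# Decimal (BCD-style) arithmetic stores 0.1 exactly, so it shows the answer
# a pocket calculator would:
print(Decimal('0.1') + Decimal('0.2'))                     # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True

# Binary floating point cannot store 0.1 exactly:
print(0.1 + 0.2)                                           # 0.30000000000000004
```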

  • @EebstertheGreat 5 years ago +147

    I prefer 3 bit floats. They have 1 sign bit, 1 exponent bit, and 1 significand bit. They can encode ±0, ±1, ±∞, and NaN, which is all you really need.

  • @plutoniumseller 10 years ago +393

    ...that's why the point is floating. *sudden clarity moment*

  • @rodcunille4800 10 years ago +2

    Before anything, great explanation!!!
That is why supermarket checkout systems express currency in whole cents rather than decimal fractions: at the end, your bill is divided by 100 and you get your exact amount to pay.
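A minimal sketch of the integer-cents approach described above (the item prices are made up for illustration):

```python
# Keeping money as integer cents makes sums exact; format as currency only
# when displaying:
prices_in_cents = [199, 249, 1099]       # $1.99, $2.49, $10.99
total = sum(prices_in_cents)             # 1547, exactly
dollars, cents = divmod(total, 100)
print(f"${dollars}.{cents:02d}")         # $15.47

# The same kind of sum done in binary floating point can drift:
print(0.10 + 0.20 == 0.30)               # False
```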

  • @Patman128 10 years ago +2

    New languages (like Clojure) are starting to include fractions as a data type. If the numerator and denominator are arbitrary precision (integers that scale as necessary) then you can represent and do operations on any rational number with no error.

    • @DrMcCoy 10 years ago

      Sure; just keep in mind that this introduces both memory and computational overhead. :)
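In Python the same idea is available in the standard library as `fractions.Fraction`, which keeps an exact arbitrary-precision numerator and denominator (with the memory and speed overhead noted in the reply):

```python
from fractions import Fraction

# Exact rational arithmetic: 1/10 + 2/10 really is 3/10.
a = Fraction(1, 10) + Fraction(2, 10)
print(a)                     # 3/10
print(a == Fraction(3, 10))  # True

# The float version of the same sum misses:
print(0.1 + 0.2 == 0.3)      # False
```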

  • @frollard 10 years ago +1

    This primer now makes it all make sense.
I've written a few programs where floats were required and they always pooped the bed somehow. To solve it I was doing type conversions to do integer math, then converting back... such a pain!

  • @LeBonkJordan 1 year ago +3

    Jan Misali basically said (heavy paraphrasing here) "0.3" in floating point doesn't really represent exactly 0.3; it represents 0.3 _and_ every infinitely-many other real numbers that are close enough to 0.3 to be effectively indistinguishable to the format. Basically, every floating point number is actually a range of infinitely many real numbers.
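Python's `decimal` module can print the exact value of the double nearest to 0.3, which makes that "range of reals" view concrete:

```python
from decimal import Decimal

# The exact value of the double nearest to 0.3:
print(Decimal(0.3))
# 0.299999999999999988897769753748434595763683319091796875

# Nearby decimal literals collapse onto that same double, so the format
# cannot tell them apart:
print(0.3 == 0.29999999999999999)   # True
```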

  • @imanabu5862 4 years ago +1

I was only one minute into the video and you already answered my questions! I am not a specialist and literally had no idea what floating points are. After hours of searching, this is the first video that makes sense to me!! Thanks!

  • @StephenDahlke 10 years ago +28

    Eagerly awaiting the follow-up video that talks about the actual storage and operation of these numbers. My Numerical Analysis course spent several lectures on it, and wrapping your head around storing a number like 123.456 in IEEE 64-bit floating point is about as much fun as dealing with time zones. :)

  • @ivokroon1099 10 years ago

He says that the last digit is not a problem, but once I tried to make a timer which kept adding 0.1 to itself, and when it reached a number (for example 3), it would activate another script. It never activated, because of those floating point numbers. Thanks for the explanation, Computerphile!!!
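A sketch of the timer bug described above, with one robust alternative (the tick counts are illustrative):

```python
# A timer that adds 0.1 each tick and tests for equality with 3.0 never
# fires, because the accumulated sum is never exactly 3.0:
t = 0.0
fired_float = False
for _ in range(60):            # simulate sixty 0.1-second ticks
    t += 0.1
    if t == 3.0:               # exact comparison: never true
        fired_float = True
print(fired_float)             # False

# Robust version: count whole ticks and compare integers instead.
ticks = 0
fired_int = False
for _ in range(60):
    ticks += 1
    if ticks == 30:            # integer arithmetic is exact
        fired_int = True
print(fired_int)               # True
```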

  • @airmanfair 10 years ago +52

    Aren't floating point errors what they used in Office Space to ripoff Initech?

  • @CoxTH 10 years ago +57

    To come back to the 3d game example:
In older versions of Minecraft there is a bug caused by floating point rounding errors. Because the game world is procedurally generated, those errors cause errors in the world generation. They only become significant when one of the coordinates is at about +/- 12.5 million, but there the world generation changes significantly.

  • @michelle-ve3jb 3 years ago +3

    Are you kidding me 😂 computerphile and tom?? where was this before my exam this morning? Not in my recommendations 😭

  • @majinpe 10 years ago

    Tom Scott's talk is so interesting and easy to understand
    I love this guy

  • @puzzician 8 years ago +3

    The problem is actually much worse than this. If you think through the implications of scientific notation, large integral values are totally screwed too. This example happens to be in Swift but it doesn't matter:
    1> var x: Float = 1_000_000_000
    x: Float = 1.0E+9
    2> x = x + 1
    3> print(x)
    1e+09
    That's right, 1 billion plus 1 equals 1 billion, using 32-bit precision floating point numbers.

  • @MysterX79 10 years ago +4

    And then you have a simple "if (d

  • @jgcooper 10 years ago +1

    it took me a long while, but i just noticed that the "/" was added at the end. awesome!

  • @Kd8OUR 10 years ago +1

    You guys provide a far better lesson than anything I got in school.

  • @matsv201 10 years ago +85

I'm not a programmer, but a computer hardware scientist. One thing that annoys me a lot is programmers who see these rounding errors and just go for 64-bit floating points because they don't want to see the rounding errors... normally not because it's needed... it very seldom is.
Then, to add to my annoyance, they use a 32-bit compiler and run it on a 64-bit operating system with a 64-bit processor. Then they ask me: why is my program running so slow, what is wrong with this computer?
Here is the thing: the 32-bit ALU in a 32-bit CPU obviously can't do any 64-bit calculation. What the compiler does is break the one 64-bit instruction down into eight (8) 32-bit instructions, which basically makes the program run 8 times more slowly. When the program is run on a 64-bit CPU, the CPU doesn't know that the instruction was originally 64-bit, so it runs it in backwards-compatibility mode. It works just fine, but it is a lot slower.
On the other hand, if the program is made to use 32-bit floating points and a (somewhat smart) 64-bit compiler, the program sadly isn't 8 times faster, but still about 2 times faster.
That's the thing:
64-bit numbers on a 32-bit compiler (regardless of CPU): 8 times slower.
32-bit numbers on a 64-bit compiler (needs a 64-bit CPU): only, in the best cases, twice as fast.
Basically, don't do it wrong.

    • @0pyrophosphate0 10 years ago +33

      What annoys me, as a computer science student, is being told by default to use a 32-bit int to count just a handful of things. Then everybody jokes about how memory-hungry modern programs are.
Also, being told that the only optimization that matters is moving down a big-O class. Doubling the speed of the algorithm doesn't matter, but moving from O(n) to O(log n) is the real gold. Because all programs have infinite data sets.

    • @matsv201 10 years ago +2

Actually, 16-bit floating points can be used for a number of things too, e.g. game physics simulation where there is no need for really good precision... but I don't know if modern processors support 16-bit SIMD; I guess not, and then there would be a performance loss instead of a gain. It would of course still save cache and RAM as well as bandwidth.
But if you talk about stats, part numbers, information and so on, the number of bits can just as well be 32, because those usually don't exist at a mass level. I.e., with a million parts all using a 32-bit description instead of 16-bit, you only save 2 MB.
Where RAM is consumed at a mass level is in textures and terrain data. Sadly, lossless realtime texture compression is still not used at scale... Well, next time AMD/Nvidia run into the bandwidth wall, they probably will do it.

    • @totoritko 10 years ago +25

      0pyrophosphate0 What annoys me, as a computer science graduate, is when computer science students think they've mastered a topic that they've barely dipped their toes in.
      Follow the advice of your tutors on int sizes - with years of experience and countless integer overflow bugs and performance issues with unaligned memory access you'll learn to appreciate their wisdom.

    • @0pyrophosphate0 10 years ago +9

      totoritko You don't know me or what I've studied or how much. No need to be condescending. I'm quite aware of how much I don't know, and I do know there are pitfalls to avoid when dealing with memory. I just would rather be taught about those pitfalls and how to avoid, debug, and fix them, rather than to just shut up and use int.
      Even worse, I think, is using double for everything with a decimal point.
      I'm also fully prepared to accept in 5-10 years that my current attitude is ridiculous, if in fact it turns out that way.

    • @Zguy1337 10 years ago +7

      0pyrophosphate0 Your program is not going to turn into a memory-hungry beast just because you're using 32-bit integers instead of 16- or 8-bit integers. It might become a bit faster though.

  • @marksusskind1260 10 years ago

    I was contemplating how to store recurring digits, but that involved not only storing where the radix point is, but the divisor, too, and a "phrase": an array of remainders (mod divisor) against which to check a freshly-computed remainder. If the new remainder already exists in the phrase, the fraction is recurring.

  • @samanthabayley2194 7 years ago

This finally explains strange results in a program I wrote for first-year uni. The program was meant to calculate change in the fewest possible coins, but whenever I had 10c left it would always give two 5c coins, and for the life of me I couldn't figure out why. Now I know. It also used to happen with the train ticket machines, so at least I'm not alone in making that error.

  • @ghuegel 10 years ago +1

    I've run into this a few times at work using Excel. It seemed very strange and looking into why it happened was enlightening and interesting... plus, we found a few ways to correct it.

  • @TechLaboratories 10 years ago

    Thank you for what's probably the best explanation of floating point arithmetic that now exists! It's easy from this starting point to extrapolate into understanding floating point operations (FLOPs), and half (16bit), single (32 bit), and double (64bit) precision floating point numbers! Thanks Tom!

  • @SirCrest 10 years ago

    Tom Scott is probably my favourite guest.

  • @AlexVineyard 8 years ago +32

    Thank you Sir Dude, I finally understand!

  • @sklanman 10 years ago +4

    This reminds me of Office Space (the movie) in which the characters discuss plans to modify the bank software's source code which would take those tiny fractions of a cent in everyday banking transactions and instead of rounding it down, would transfer the difference to their account.

  • @harley291096 10 years ago

    Who is this man? He needs a fucking medal. Give him 3!

  • @Mittzys 7 months ago +1

    Watching this, I already understood floating point numbers, but I did not understand scientific notation. Ironically, this video has taught me the exact opposite of what it intended to

  • @jmfriedman7 10 years ago

    One bugaboo, where the last decimal place IS important, is with floating expressions such as: xx^2 + yy^2 + zz^2
    These can give small NEGATIVE numbers if xx, yy, and zz are all small and nonzero. Try to take a square root or a logarithm and you end up with NaN (not-a-number) and this will carry all the way through the code giving errors.
    Similar errors occur with other functions that have limited input ranges. They are easily avoidable, but they always manage to sneak in somewhere.

  • @ThePhiSci 10 years ago

I watch Computerphile's videos for this guy.

  • @falnica 8 years ago +5

Just happened to me: my algorithm didn't work because I was asking for the place where the vector was == 0.8, but it was in fact 0.80000001.

  • @insanitycubed8832 6 years ago +1

    4:09 I think every mathematician died a little inside when you said "all the way to infinity".

  • @enotirab 10 years ago +1

Tom, I have to say that your videos are great. I'm a professional programmer myself and I find that you explain these topics with 1000 times more clarity than any of my college professors managed to! Keep up the good work!

  • @montyoso 10 years ago +11

This is a clever guy who explains complicated things in a simple way.

  • @FollowTheLion01 10 years ago

It's probably well worth pointing out that when you try to add a small number onto a big number you get into trouble. This is really bad if you're trying to integrate a floating point the way you would increment an integer. Even if the number you're adding is nice and tidy, like 1.0 (tidy in base 2 as well as base 10), when your variable gets really big, like 3E8 (your example), the computer can't add a small number because it gets lost due to the significant-digits problem. This may be self-evident to many of us, but this single little aspect of dealing with floating points can cause a lot of trouble. I keep it in mind whenever I'm dealing with floating points, especially on platforms that handle fewer significant digits, like many PLCs, DCSs, process control systems, etc. If the compiler allows it, it's sometimes better to use really big integers, or even multiple registers (like a times-100000 tag and a times-1 tag for less than 100000) to allow you to keep adding.
Some software supports integers of length n, which solves a lot of these problems, but has its own issues, of course (speed, memory).
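The 3E8 example above can be demonstrated by rounding through 32-bit precision with the standard `struct` module (the `f32` helper is just for illustration):

```python
import struct

def f32(x: float) -> float:
    """Round a 64-bit Python float to 32-bit float precision and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

# At 3e8 the spacing between adjacent 32-bit floats is 32, so adding 1.0
# changes nothing:
big = f32(3e8)
print(f32(big + 1.0) == big)    # True: the 1.0 vanished
print(3e8 + 1.0 == 3e8)         # False: a 64-bit double still resolves it
```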

  • @viccie211 7 years ago +1

    If you ever want to see the failure of precision of a 32 bit float in a video game, go watch kurtjmac's Far Lands or Bust. He has walked so far in minecraft his location on one axis has lost precision to quarters of blocks! Especially episode 471 where he finds the boundary where the precision loss increases.

  • @MasterHD 10 years ago

    A clarification I'd like to make to this video. The hardware architecture "32-bit computer" (x86) or "64-bit computer" has nothing to do with the precision of floating point numbers. That is strictly used for memory addressing. So any precision value can still be used on any hardware, though some may be more efficient at it than others.

  • @madeline. 1 year ago +1

    7:13 When hearing all this I immediately thought of a shounic video from 8 months ago, In Team Fortress 2, the place where a Sniper is aiming is shown with a dot on the wall he's aiming at. To the sniper, this is exactly where he aims, it's client side, but to other players, the dot jumps around on the wall if the sniper is far enough away, because there's not enough precision in the angle measurement sent from the server to your computer about what angle the sniper is looking at.

  • @reubenandrews9043 7 years ago

My teacher failed to teach me this for years; Tom Scott succeeded in 10 minutes.

  • @coolboyrocx 10 years ago

    I already understood this concept but I thought it was a really good explanation regardless. Some tricky things in here to get your head around initially so hopefully people find it useful!

  • @mailsprower1 10 years ago

    Thanks for closing that tag at the end, Brady!
    My OCD is finally fixed!

    • @MrTridac 10 years ago

      The problem is that the tags from the first few videos are still open. I guess the nesting depth is still a few dozen levels deep. And you have to include the opening tags at the end of all these early videos too. INSANE. O.o

    • @mailsprower1 10 years ago

      I tried not to think about that... I tried.

    • @MrTridac 10 years ago +1

      mailsprower1 Brady should make a "closing tags only" video or put them in as "cameo" here and there in future videos. :)

  • @brittanyalkire3287 10 years ago +1

Floating point numbers are hilariously fun and irritating when dealing with them in something like JOGL; throw a bunch of those in some matrices, multiply them on a massive scale, and it's pretty easy to break things :P
I haven't been so playfully frustrated since learning how my calculator did division. It's funny because it uses something that behaves like a Taylor or Maclaurin series. All hail the binary coded decimal!
Also, props for that dot matrix printer paper :P

  • @DynamicFortitude 10 years ago

    In decimal (data type), worth noting, exponent and significand are binary. It differs from double by base of exponent. And by size, it's between double and quad when it comes to precision.

  • @deantheobald1393 9 years ago

    an important piece of information i needed to know, will definitely look here first for any other needs i have on this subject... thanks!

  • @blizzy78 10 years ago +3

It's not quite correct that rounding errors are insignificant for any game. For example, Kerbal Space Program ran into massive problems when using astronomical distances combined with small-scale distances like centimeters, because the Unity3D engine uses float as the default instead of double.

  • @DeRepear 6 years ago

    JavaScript is a big culprit of this sort of thing, many other languages like Python will automatically know that 0.1 + 0.2 = 0.3 but JavaScript won't automatically (try typing it in your browser's developer console). It does have built in functions to help with this sort of thing though.

  • @0pyrophosphate0 10 years ago

    This actually cleared up FP logic for me a lot. Thanks for that.

  • @kristianbrasel 9 years ago +1

    excellent explanation. thank you.

  • @paullawrie 3 years ago

    Convert to cents, calculate using cents, store as cents. Format as currency when displaying only. Learnt the hard way when I was a kid :)

  • @henke37 10 years ago

    A good follow up to this one would be the fast inverse square root trick. But then again, concepts like NaN and Infinity would be good topics too.

  • @ltt0903 7 years ago +3

    Your voice is really nice. Btw this is a good video!

  • @kpunkt.klaviermusik 9 years ago

Great explanation! But the problem is a quite theoretical one. For output purposes, binary numbers have to be converted into decimals anyway, and it's just a little step to round the number to a given number of digits.

  • @OH5EDP 8 years ago +7

Hmm, what if that banking system were to use integers for storing money and someone had more than £2147483647?

  • @smellthel 4 years ago +1

    Wow man. Thanks for that exponent trick!

  • @Reavenk 10 years ago +1

    I'm surprised you didn't cover the concept of epsilon bounds for checking if 2 floats are equal(ish).
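For reference, the epsilon-bounds idea looks like this in Python, where the standard library provides `math.isclose`:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)              # False: exact equality is too strict
print(math.isclose(a, 0.3))  # True: equal within a relative tolerance
print(abs(a - 0.3) < 1e-9)   # True: the classic hand-rolled epsilon test
```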

  • @nO_d3N1AL 10 years ago

    Wow, didn't know there was so much to floating point numbers. Nice video

  • @Frasenius9 10 years ago

    Yay for more of Tom Scott!

  • @PullerzCoD 10 years ago +14

That first maths process is gonna save me a lot of anguish

  • @YingwuUsagiri 10 years ago

    It's interesting to see how different everyone writes recurring. We write 0.33 with a diagonal line through the second 3.

  • @L0LWTF1337 10 years ago +36

I once programmed a string-to-number (and vice versa) function thingy... I always got complete garbage and wondered what I did wrong.
Then I discovered that the standard library power function gave wrong results for 10^10, since it calculated everything in float.
I made a one-liner for a*a and then everything worked.
    Hate that so much.
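One plausible way a single-precision power function can go wrong, as described above, is computing pow(x, y) as exp(y·log(x)). The sketch below emulates that pipeline in 32-bit precision; the `f32` helper and the exp/log formulation are assumptions for illustration, not the commenter's actual library:

```python
import math
import struct

def f32(x: float) -> float:
    # Round to 32-bit float precision, mimicking a single-precision library.
    return struct.unpack('f', struct.pack('f', x))[0]

# pow(10, 10) as exp(10 * log(10)), rounded to single precision at each step:
# the tiny error in log(10) is multiplied by 10 and then exponentiated.
approx = f32(math.exp(f32(10.0 * f32(math.log(10.0)))))
print(approx == 10_000_000_000.0)   # False: off by thousands
print(10 ** 10)                     # 10000000000: integer arithmetic is exact
```

Computing a*a with exact integer (or even double) multiplication avoids the exp/log round trip entirely, which is why the one-liner fixed it.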

  • @HarbortTheone 10 years ago

It is worth noting that decimal notation doesn't reduce the issues related to floating points. It just so happens that all our laws are expressed in decimal, and the requirements for calculations (e.g. precision and so on) are also expressed in decimal. Therefore, it is because of our legal system that currency calculations have to be made in decimal, rather than the decimal system being flawless.

  • @MusicByNumbersUK 10 years ago +1

Thanks! Happy to see this, as it was one of the things I was hoping to see! :) As an addition, it would be great to get an explanation of floating point from the perspective of the mantissa and exponent parts and how these are implemented in binary, but that might be taking it too far, right? ;)

  • @rlamacraft 10 years ago +1

    I had an exam on this 2 weeks ago! Why wasn't this up then?? Would have been SO useful…

    • @Truthiness231 10 years ago +2

      Would have been more helpful to me about a decade ago... but hey, we got to be in the minority of people who already learned why float isn't to be trusted for precision before watching this. ^.^

  • @ONeillDonTube 10 years ago

Precision and accuracy are situational. They matter sometimes, e.g. in Global Positioning calculations. By the way, don't use 22/7 for pi; instead look it up in Rinehart Mathematical Tables for best results.

  • @Dref360ML 10 years ago

Good video! I'm actually studying SPARC assembly in college; quite interesting to know how a computer works.

  • @Wafflical 10 years ago +13

    That is something I might not have known.

  • @tonyfremont 3 years ago +1

    Thank God for packed decimal.

  • @Walter5850 6 years ago

    These rounding errors are quite obvious in systems that are highly sensitive on initial conditions (chaotic systems).
    Basically to the point where if you run absolutely the same program on a 32 bit cpu vs a 64 bit cpu, you will soon get completely different results.
    For example calculating a double joint pendulum swing.

  • @DudokX 10 years ago

    Actually in Kerbal Space Program you can see floating point errors in "prediction of your orbit". Ap and Pe height last digits sometimes quickly alternate even if you pause the game.

  • @jyotirmayghosh353 7 years ago

What a voice, man. And thanks a lot, all my doubts cleared...

  • @klutterkicker 10 years ago

    I don't think I've seen someone so excited about rounding errors before.

  • @frankharr9466 8 years ago

    I ran into that. It was very frustrating. I needed to figure all sorts of tricks out to get around it. It finally sort-of worked.
In particular, I couldn't get 32 F to be 0 C and (weirdly) I couldn't go from 0 C to 0 Re. GAHHHHH!

  • @EddyProca 10 years ago +5

    How come this only happens with 0.1+0.2? I've tried 0.1+0.1 or 0.1+0.3 and all kinds of other possibilities but all the other ones work fine (in JavaScript jsfiddle.net/LBzq5/).
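It does happen with other pairs too; whether a particular sum "looks right" depends on whether the rounded binary sum happens to coincide with the nearest double to the decimal answer:

```python
# Each decimal literal is stored as the nearest double; the sum is then
# rounded once more.  Sometimes that lands on the nearest double to the
# decimal answer (so it *looks* exact), sometimes it doesn't:
print(0.1 + 0.2 == 0.3)   # False: the sum rounds to 0.30000000000000004
print(0.1 + 0.1 == 0.2)   # True: doubling is exact (just an exponent shift)
print(0.1 + 0.3 == 0.4)   # True: the rounding happens to land on 0.4
```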

  • @elimik31 10 years ago +1

    I remember how we had to write a C++ program that can solve systems of linear equations with the Gauss algorithm and in some cases, this actually mattered when we didn't get a zero where a zero was supposed to be. Pivotization (and using double) helped out a bit.

  • @TheSam1902 8 years ago

    What a brilliant video, thank you, I wasn't aware of this problem and I'll think about it next time I deal with things like currency etc ^^

  • @beirirangu 10 years ago +1

Hm... just a quick question (that should be rather simple but could, for all I know, be very complicated): how does a computer calculate rounding, or cutting numbers off at the integer (like the function some programs have, "int(x)")?
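In Python (and most languages) the integer-conversion primitives are truncation, floor, and rounding; a quick sketch of how they differ:

```python
import math

x = 2.9999999999999996      # a double just below 3.0
print(int(x))               # 2: int() simply truncates toward zero
print(math.floor(-2.5))     # -3: floor rounds toward negative infinity
print(math.trunc(-2.5))     # -2: trunc drops the fractional part
print(round(2.5))           # 2: Python 3 rounds ties to the nearest even
```

Note the first line: a value that displays as "almost 3" truncates to 2, which is exactly how the off-by-one surprises in this thread happen.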

  • @gizmoguyar 10 years ago

He said in the video that 32-bit floating point numbers store 23 significant digits. It's my understanding that the mantissa is normalized, right (1 sign bit, 1 exponent sign bit, 7 exponent bits, and 23 mantissa bits)? Then the leading digit of the mantissa is always 1 and thus ignored, giving an extra significant digit? Could someone clarify this for me?...
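For what it's worth, IEEE 754 single precision uses one sign bit, an 8-bit biased exponent (rather than a separate exponent sign bit), and 23 stored fraction bits, with the leading 1 implied exactly as the comment guesses, for 24 effective significand bits. A sketch that pulls the fields apart:

```python
import struct

# Pull apart the IEEE 754 single-precision encoding of 1.5:
bits = struct.unpack('>I', struct.pack('>f', 1.5))[0]
sign     = bits >> 31           # 1 sign bit
exponent = (bits >> 23) & 0xFF  # 8 exponent bits, biased by 127
fraction = bits & 0x7FFFFF      # 23 stored fraction bits

# 1.5 = 1.1 (binary) x 2^0: the leading 1 is implied; only ".1" is stored.
print(sign, exponent - 127, f"{fraction:023b}")
# 0 0 10000000000000000000000
value = (1 + fraction / 2**23) * 2.0 ** (exponent - 127)
print(value)                    # 1.5
```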

  • @TheAkbir 10 years ago

Hey, okay, so I'm a mathematician and have only recently started looking into computing. The underlying problem seems to be that numbers for a computer do not follow the Completeness Axiom, which intrinsically makes the difference between having the rational numbers and the real numbers. Isn't it possible to implement such a rule? Mathematically, all it requires is for us to allow any set of numbers to have a supremum / least upper bound. As to how you code this... I dunno.

  • @linkVIII 10 years ago +17

    I almost missed this video because of youtube messing with the subscriber feed.

  • @QuatreFun 10 years ago

    These videos are very useful

  • @DynamicFortitude 10 years ago

decimal is basically scientific notation; float is scientific notation in binary. So decimal can be conveniently used anywhere we have input data in decimal (like money), without rounding errors at the very start. But decimal isn't precise; be aware it has the same rounding problems as float. Try in C#: 1m/3m + 1m/3m + 1m/3m == 1m

    • @DynamicFortitude 10 years ago

      In decimal, worth noting, exponent and significand are binary. It differs from double by base of exponent. And by size, it's between double and quad when it comes to precision.

    • @DynamicFortitude 10 years ago

      For you scientists, on some architectures, precision in ascending order: float (32), double (64), long double (80), decimal (128), quad (128). As for range, decimal is better than quad.

    • @MarysiaHa 10 years ago

Hmmm... this video must be interesting, given how many comments you've written on it... But what's even more interesting is that it was precisely these comments that put me off watching it.

  • @JLConawayII 10 years ago +1

    I always knew about this, but the first time it really mattered in one of my programs was when I tried to program a basic n-body simulation. Simple 2-body orbital system, should be pretty easy right? Yeah not so much. My planet sailed off into the void thanks to rounding error in the integrator.

  • @DennisChaves 10 years ago +3

    Computerphile always fascinates me. And reminds me exactly how dumb I am.

  • @WizardNumberNext 10 years ago +10

Correction!
The x87 (the x86 FPU) is 80 bits wide:
64 bits are used for the significand, 15 for the exponent, and 1 for the sign.

  • @markarthur1083 3 years ago +2

    I use double for currency

  • @orange5423 2 years ago +1

I think I understand everything in the video except one thing: what is the computer's logic behind 0.1 + 0.2 = 0.3000000001?
Doesn't it see 0.1 as 0.1000000000, and the same with 0.2?
Where is that 1 at the end of 0.30...01 coming from?
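The 1 at the end comes from the fact that neither stored value is exactly 0.1 or 0.2: both are slightly high, and the sum is rounded once more, landing just above the double nearest to 0.3. Python's `decimal` module can show the stored values exactly:

```python
from decimal import Decimal

# What the computer actually stores for 0.1 and 0.2 (both slightly high):
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125

# Their sum, rounded again, ends up just above 0.3's nearest double:
print(0.1 + 0.2)     # 0.30000000000000004
```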

  • @shando_tube 7 years ago +2

    shouts out to my undergrad for teaching me all of this and making me feel like I know something

  • @TheyZebra 8 years ago

    Awesome as usual! Thanks for the knowledge. Subscribed.