Floating Point Numbers - Computerphile

  • Published: 27 Nov 2024
  • Why can't floating point do money? It's a brilliant solution for speed of calculations in the computer, but how and why does moving the decimal point (well, in this case binary or radix point) help and how does it get currency so wrong?
    3D Graphics Playlist: • Triangles and Pixels
    The Trouble with Timezones: • The Problem with Time ...
    More from Tom Scott: / enyay and / tomscott
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computerphile is a sister project to Brady Haran's Numberphile. See the full list of Brady's video projects at: bit.ly/bradycha...

Comments • 826

  • @timothemalahieude5076 8 years ago +2779

    I think this actually used to be a flaw in some banking systems, because programmers initially used floating-point numbers to store account balances. Then someone took advantage of it by making a lot of transactions between 2 of his accounts, where each transaction made him gain a tiny bit of money due to rounding. So in the long term he was able to "create" as much money as he wanted.

  • @MorreskiBear 8 years ago +3813

    Explains weird results I got in BASIC programs 29.99999998 years ago!

  • @Daealis 11 years ago +3352

    I got 0.9999... Problems and floating point is one.

  • @oatstralia 10 years ago +563

    Another issue I've run across with floating point rounding errors is the fact that - for the same reasons outlined in the video - a comparative statement like "0.1 + 0.2 == 0.3" comes out false, and it can be super annoying to pick out where the error is coming from, especially if it's buried in some if statement. For things like currencies, I usually just deal with ints that represent the number of cents (e.g. $2.50 is represented as 250), and divide by 100 in the end... saves me the hair tearing.
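
    A minimal Python sketch of the integer-cents approach described above (the variable names are made up for this example; real systems often use a decimal type instead):

      price_cents = 250                # $2.50, stored as an exact integer
      tax_cents = 20                   # $0.20
      total_cents = price_cents + tax_cents
      print(f"${total_cents // 100}.{total_cents % 100:02d}")   # $2.70, exact
      print(0.1 + 0.2 == 0.3)          # False: the float comparison the comment warns about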

  • @DjVortex-w 10 years ago +742

    Small correction: You don't need a 64-bit computer to use 64-bit floating point numbers in hardware. For example on Intel processors 64-bit floating point has been supported since at least the 8087, which was a math co-processor for the 16-bit 8086 CPU. (The math co-processor has been integrated into the main CPU since the 80486, which was a 32-bit processor.)

  • @kagi95 10 years ago +56

    This is great, I've finally understood what it means (and why) when people say "loses precision" when referring to floating points.

  • @InsaneMetalSoldier 9 years ago +10353

    Everything is easier to understand if it's explained in british accent

  • @Creaform003 11 years ago +781

    Minecraft stores entities as floating points; when the world was infinite you could teleport something like 30,000 km in any direction and see objects start to move and stutter about, including the player.
    Once you hit the 32-bit int overflow the world would become Swiss cheese, and at the 64-bit int overflow, the world would collapse and crash.

  • @paterfamiliasgeminusiv4623 7 years ago +5

    I appreciate the fact that you get into the topic right at the first second. You don't see this in the world very often.

  • @antivanti 11 years ago +456

    nought (nɔːt)
    (Mathematics) the digit 0; zero: used esp in counting or numbering
    For those who were wondering. I guess it has the advantage of being half the number of syllables as "zero".

    • @IceMetalPunk 11 years ago +148

      My calculus teacher in high school loved that word, mainly because when he'd have a subscript of 0, he'd use the variable y just to say, "Well, y-nought?"

  • @wolverine9632 7 years ago +169

    I remember the first time I experienced this. I was writing a Pac-Man clone, and I set Pac-Man's speed to be 0.2, where 1.0 would be the distance from one dot to another. Everything worked fine until I started coding the wall collisions, where Pac-Man keeps going straight ahead until hitting a wall, causing him to stop. The code checked to see if Pac-Man's location was a whole integer value, like 3.0, and if it was it would figure out if a wall had been hit. When I tested it, though, Pac-Man went straight through the walls. If I changed the speed to 0.25, though, it worked exactly as expected. I was baffled for a few moments, and then it hit me. Computers don't store decimal values the way you might first expect.
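
    The Pac-Man bug above is easy to reproduce in a few lines of Python (a sketch, not the original code): stepping by 0.2 never lands exactly on 3.0, while 0.25, an exact power of two, does.

      pos = 0.0
      for _ in range(15):
          pos += 0.2                   # 0.2 has no exact binary representation
      print(pos)                       # 3.0000000000000004
      print(pos == 3.0)                # False: the wall check never fires

      pos = 0.0
      for _ in range(12):
          pos += 0.25                  # 0.25 = 2**-2 is exact, so every sum is exact
      print(pos == 3.0)                # True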

  • @mountainhobo 10 years ago +4

    Ran into this 30 years ago; I was working in finance and trying my hand at simple applications running fairly complex transactions over a many-year horizon. I was dumbfounded then. Took me a while to learn what was going on. Thanks for the trip down memory lane.

  • @miss1epicness 11 years ago +22

    Tom Scott's videos strike me (a young CS student) as my favorite. They're really authentic with the "and this is the part where you, a young programmer, start to tear your hair out"--please keep them up, I'm really enjoying it!

  • @mtveltri 5 years ago +150

    I have a computer science degree from a top school, and yet nothing was ever explained nearly as well as this.
    I love this YouTube channel. Absolutely brilliant explanation. Thank you!!

  • @MMMowman23 11 years ago +1757

    This is why Minecraft 1.7.3 alpha has block errors after 2 million blocks away from spawn. Pretty cool.

  • @etmax1 9 years ago +35

    Well, I've been working with computers for some 35 years, and while very early compilers used to do 32-bit FP (floating point), around 20 years ago some people got together and settled on standards for floating point on computers, and soon after 80-bit FP became a standard even though the computers were only 16- or 32-bit at the time. Basically, the machine's register size (32-bit etc.) has nothing to do with the usable number size, as a sub-program deals with it. Sure, it won't be as time-efficient, but that's not what was suggested here.

  • @JSRFFD2 9 years ago +320

    OK, so why does the cheap pocket calculator show the correct answer? I thought they used floating point numbers too. Do they have electronics that do computation in decimal?

  • @themodernshoe2466 10 years ago +488

    Can you PLEASE do more with Tom Scott? He's awesome!

  • @eisikater1584 8 years ago +75

    Do you remember the (in)famous Apple IIe 2-squared error? 2+2 yielded four, as it should have, but 2^2 yielded something like 3.9999998, and floating point arithmetic was difficult on 8-bit computers anyway. I once even used a character array to emulate fifteen digits after the decimal point, not anything I'd do nowadays, but it worked then.

  • @joemaffei 9 years ago +130

    An even better example of floating point error is trying to add 1/5 and 4/5 in binary, which is similar to the example in the video about adding thirds:
    1/5 = 0.0011~
    4/5 = 0.1100~
    1/5 + 4/5 = 0.1111~

  • @junofall 8 years ago +600

    starting out programming and some bullshit turned out as 4.000000005 instead of 4 so i'm here now xD

  • @chaquator 9 years ago +736

    So is this why I got like 5.0000003 when I input 25 into my self-written square-root function?

  • @MatthewChaplain 9 years ago +6

    I love the way that all of the scrap paper for this series is clearly paper for a tractor-fed printer that's probably been in a box since the 80s.

  • @novarren 8 years ago +35

    I've put up English closed captions for this, since I haven't seen anyone else do CCs for this video, which is weird.
    I think it still needs to be authorised before it actually shows up on the channel, though.

  • @rodcunille4800 10 years ago +2

    Before anything, great explanation!!!
    That is why supermarket checkout systems express currency in cents rather than decimals; at the end, your bill is divided by 100 and you get the exact amount to pay.

  • @MenacingBanjo 10 years ago +6

    Wow! I had wondered for years why Microsoft Excel would always end up with weird figures that would just show up way out in the 15th-or-so decimal places of the returned values from SUM formulas. Now I know. Thank you!

  • @AtheniCuber 9 years ago +3186

    american here, just hearing 'Not Not Not Not Not Not Not Not Not Not Not seven'

  • @RMoribayashi 9 years ago +1

    My old HP-25c and 15c had engineering notation. It was scientific notation, but the exponent was always a multiple of 3, with up to three digits left of the decimal point. Getting your answers formatted as 823.45x10⁻⁶ instead of 8.2345x10⁻⁴ made working with the metric system effortless, absolutely brilliant.

  • @superdau 11 years ago +155

    You should have mentioned: never check floats for equality. The chances are very high you will never match it.
    On the other hand, I guess every programmer should make that mistake once. E.g. decrease a whole number by 0.1 until you reach zero. Be prepared for a never-ending loop.

    • @MrShysterme 11 years ago +23

      Many compilers throw a warning when you try to compare floats.
      I am not a computer scientist, but instead learned to code for research purposes and fun.
      There are cases where you have to compare floating point numbers. When I do so, my trick is to come up with some acceptable tolerance around them, or to convert to large integers by multiplying or dividing by scalars, rounding, taking the ceiling or floor, etc. Basically, I massage the value into an integer I can control somewhat, or use less-than or greater-than but watch what I'm up to. I've had many programs not function properly because I did not take the behavior of floating point into account.
      Are there other techniques that are better, that I'm missing? I'm mostly self-taught and come up with workarounds on my own. (One standard form of the tolerance idea is sketched after this thread.)

    • @tobortine 11 years ago +4

      Entirely agree plus there's no reason (that I know of) to use a float for a loop counter.

    • @ThisNameIsBanned 11 years ago +5

      Yeah, most compilers I've used actually see the problem and pop a warning; in Eclipse you can even go as far as using CheckStyle to auto-correct your operations with floats (it will highlight the possible problems right away).
      Float is great for speed, but you have to do some hacks to get precision back.
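
    A short Python version of the tolerance idea discussed in this thread, using the standard library's math.isclose (the 1e-9 bound is an arbitrary choice for the example):

      import math

      x = 0.1 + 0.2
      print(x == 0.3)                  # False: exact equality fails
      print(math.isclose(x, 0.3))      # True: equal within a relative tolerance
      print(abs(x - 0.3) < 1e-9)       # the hand-rolled epsilon check, same idea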

  • @imanabu5862 5 years ago +1

    I was only one minute into the video and you already answered my questions! I am not a specialist and literally had no idea what floating points are. After hours of searching, this is the first video that makes sense to me!! Thanks!

  • @SerpentStare 9 years ago +6

    Tom is very passionate about his numbers being correct. Some of his explanations of these errors almost sound as though the numbers have personally offended him.
    I find it rather charming.

  • @cbbuntz 10 years ago +9

    Floating point is quite nice for audio. It means that very quiet passages have roughly as much accuracy as loud passages. It can also be annoying for audio, though. If you add DC offset for some types of processing, the accuracy plummets (as I pointed out in another comment here; a sketch of the effect follows this thread).
    Example: if you have a quiet passage and you do something like
    x = in + 1;
    out = x - 1;
    out will have much lower accuracy than in.

    • @midinerd 10 years ago

      Because we do not perceive loudness in a linear form, the number of positions offered to 'louder' portions is significantly larger than those allocated to the 'quiet' portions, and thus it is easiest to tell the bit depth of a recording by listening to the quietest passages, not the loudest.

    • @Moonteeth62 10 years ago +1

      And yet they tend to use absolute values that scale to the DAC values, like 8-, 12-, 16-, 24-bit PCM. In the end ALL codecs have to feed a digital-to-analog converter that is N bits. Is there an audio format that uses floating point values? I don't know of one, but that doesn't mean anything.

    • @cbbuntz 10 years ago +1

      Moonteeth62 Most DAWs process audio with floating point. That's why Nuendo sounded better than ProTools for years. It's also much easier to be "sloppy" with floating point. You don't have to worry about clipping at all; with 32-bit float, you essentially have a scalable 24-bit PCM. However, I don't know of any modern floating point ADCs or DACs. There were some in the early digital years, to squeeze the most out of 12 bits and that sort of thing. Converters on early AMS and EMT gear from the late 70s/early 80s were that way.
      Our senses are generally logarithmic: loudness, brightness, skin pressure. Interestingly, there have been studies indicating that tribes without contact with civilized culture often have a logarithmic perception of numbers, as in they would "count" (so to speak) in exponents: 1, 2, 4, 8, 16, etc. It makes sense to me though. If you have a million dollars, you don't care about 10 bucks. If you've got 40 bucks, you care about 10 bucks.
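
    The DC-offset effect from the top of this thread, sketched in Python with ordinary 64-bit floats (variable names invented for the example):

      sample = 1e-16                   # a very quiet sample
      print((sample + 1.0) - 1.0)      # 0.0: the sample was absorbed by the offset

      sample = 1e-8
      print((sample + 1.0) - 1.0)      # 9.99999993922529e-09: about half the
                                       # significant digits are gone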

  • @at-excel 7 years ago

    Thanks a lot for your explanation! Floating point numbers are a very big problem in spreadsheet calculations that most people are not aware of. With a lookup or IF function, the normal user expects his numbers to be exact, not slightly smaller or bigger. I always round the value to cut the error off.

  • @EebstertheGreat 5 years ago +148

    I prefer 3 bit floats. They have 1 sign bit, 1 exponent bit, and 1 significand bit. They can encode ±0, ±1, ±∞, and NaN, which is all you really need.

  • @FhtagnCthulhu 9 years ago +399

    Please, anyone reading this, don't use float for finances. It's a mistake I see people make all the time; please just don't.

  • @plutoniumseller 11 years ago +391

    ...that's why the point is floating. *sudden clarity moment*

  • @LeBonkJordan 1 year ago +3

    Jan Misali basically said (heavy paraphrasing here) "0.3" in floating point doesn't really represent exactly 0.3; it represents 0.3 _and_ every infinitely-many other real numbers that are close enough to 0.3 to be effectively indistinguishable to the format. Basically, every floating point number is actually a range of infinitely many real numbers.

  • @Zeuskabob1 5 years ago +40

    Hah, this reminds me of a programming exercise that I had to undertake in Algorithms 2. The teacher wanted us to calculate a continuous moving average for a set of values. Since the data requirement was so minimal, I decided to store the last n values in an array and cycle through them when new numbers appeared. When needed, the moving average was calculated by adding the numbers together and dividing by n.
    My program would fail the automated test, because it failed to include the almost 3% error that the professor had gotten by updating a floating point average value at every step of the calculation. I had to teach about 5 other students that their program was too accurate and needed to be downgraded.

  • @Tulanir1 10 years ago +265

    If you type
    0.1 + 0.2
    into the python 3.4.1 shell you get
    0.30000000000000004
    But if you type
    1/3 + 1/3 + 1/3
    you get 1.0
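
    Both results follow from binary rounding: 0.1 + 0.2 rounds up to 0.30000000000000004, while the rounding errors in 1/3 + 1/3 + 1/3 happen to cancel out and land exactly on 1.0. Python's standard decimal module, as a sketch of the "decimal type" alternative, stores decimal digits exactly, though it still cannot store thirds:

      from decimal import Decimal, getcontext

      print(Decimal('0.1') + Decimal('0.2'))   # 0.3, exact this time
      getcontext().prec = 30
      print(Decimal(1) / Decimal(3) * 3)       # 0.999999999999999999999999999999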

  • @KoyasuNoBara 9 years ago

    Thank you! I came across this problem in some homework recently. I stored a decimal number plus an incrementing number in the float variable that the teacher provided in his premade node, and started getting this problem.
    The variable was supposed to hold user input anyway*, and since the problem didn't show up when I did it that way, I didn't bother worrying about it. It did confuse the hell out of me, though, so I'm glad to find an explanation.
    *I got tired of doing the user input part while trying to test my Linked List, because it involved about six different questions to answer per node.

  • @RitchieFlick 11 years ago

    Have been programming for a while now (self-taught, 5 years in school, 1 semester in university) and I wasn't even aware of this. I LIKE :)

  • @marko.rankovic 10 years ago +2

    In Python, doing 0.1 + 0.2 gave me 0.3000000... and then the last digit was 4. This lesson helped me understand the situation :)

  • @frollard 11 years ago +1

    This primer now makes it all make sense.
    I've written a few programs where floats were required and they always pooped the bed somehow. To solve it, I was doing type conversions to integer math, then converting back... such a pain!

  • @the-goat 11 years ago

    Avoiding float errors is one of the first things every budding young programmer should learn. Another example of where these problems arise is in high-precision timing applications where rates of change are calculated based on floating point inputs.

  • @majinpe 11 years ago

    Tom Scott's talk is so interesting and easy to understand
    I love this guy

  • @ivokroon1099 11 years ago

    He says that the last digit is not a problem, but once I tried to make a timer which kept adding 0.1 to itself, and when it reached a number (for example 3), it would activate another script. It never activated, because of those floating point numbers. Thanks for the explanation, Computerphile!!!

  • @mc85eu 9 years ago +8

    Exceptionally clear and engaging.
    You're obviously clever, but are also able to break things down and be interesting.
    I learned a lot - and not just about floating point. Thank you.

  • @Jenny_Digital 11 years ago

    Well done on explaining it so well! For a long time I thought it a sod and left it. I finally had to do it in PIC assembly, and then, after I'd struggled to victory, I get it explained straightforwardly in a single video many years later.
    Keep up the excellent stuff!

  • @puzzician 8 years ago +3

    The problem is actually much worse than this. If you think through the implications of scientific notation, large integral values are totally screwed too. This example happens to be in Swift but it doesn't matter:
    1> var x: Float = 1_000_000_000
    x: Float = 1.0E+9
    2> x = x + 1
    3> print(x)
    1e+09
    That's right, 1 billion plus 1 equals 1 billion, using 32-bit precision floating point numbers.

  • @Czeckie 10 years ago +13

    Congratulations on getting Tom Scott! He seems like a great teacher; I enjoyed this video very much - it doesn't matter that I knew the stuff already.

  • @nlgatewood 11 years ago +3

    I run into this problem all the time at work... I deal a lot with currency. Great explanation!

  • @EZCarnivore 9 years ago +639

    5 years as a programmer, and I finally understand floating point numbers =P

    • @EZCarnivore 9 years ago +183

      I have a college education, they just never explained it in a way that I understood it!

    • @WillLorenzoCooper 9 years ago +142

      +エックザック True, it's like you have one shot to learn it in a lesson and then it's gone, while on here you can rewatch the videos if you don't quite understand.

  • @ericwu6428 6 years ago +3

    Wow, I remember this from my early times in programming. Now I am learning programming in school, and those programming languages are so smart that they fix these errors for us. It makes me kind of sad to think that in the future these things will be done for us, and an understanding of this sort of thing is going to be obsolete.

  • @matsv201 11 years ago +85

    I'm not a programmer but a computer hardware scientist. One thing that annoys me a lot is programmers who see these rounding errors and just go for 64-bit floating points because they don't want to see the rounding errors... normally not because it's needed... it very seldom is.
    Then, to add to my annoyance, they use a 32-bit compiler and run it on a 64-bit operating system with a 64-bit processor. Then they ask me: why is my program running so slow, what is wrong with this computer?
    Here is the thing: the 32-bit ALU in a 32-bit CPU obviously can't do a 64-bit calculation in one go. What the compiler does is break the one 64-bit instruction down into eight (8) 32-bit instructions, which basically makes the program run 8 times more slowly. When the program is run on a 64-bit CPU, the CPU doesn't know that the instruction was 64-bit before compiling and runs it in backwards-compatibility mode; it works just fine, but a lot slower.
    On the other hand, if the program is made to use 32-bit floating points and a (somewhat smart) 64-bit compiler, the program sadly doesn't run 8 times faster, but still runs about twice as fast.
    That's the thing:
    a 64-bit number on a 32-bit compiler (regardless of the CPU): 8 times slower;
    a 32-bit number on a 64-bit compiler (needs a 64-bit CPU): only, in the best case, twice as fast.
    Basically, don't do it wrong.

    • @0pyrophosphate0 11 years ago +33

      What annoys me, as a computer science student, is being told by default to use a 32-bit int to count just a handful of things. Then everybody jokes about how memory-hungry modern programs are.
      Also, being told that the only optimization that matters is moving down a big-O class. Doubling the speed of the algorithm doesn't matter, but moving from O(n) to O(log n) is the real gold. Because all programs have infinite data sets.

    • @matsv201 11 years ago +2

      Actually, 16-bit floating points can be used for a number of things too, e.g. game physics simulation where there is no need for really good precision... but I don't know if modern processors support 16-bit SIMD; I guess not, and then there would be a performance loss instead of a gain. It would, of course, save cache and RAM as well as bandwidth.
      But if you're talking about stats, part numbers, information and so on, the numbers can just as well be 32-bit, because they usually don't exist on a mass level. I.e., if you have a million parts all using a 32-bit description instead of 16-bit, you only save 2MB.
      Where RAM is consumed on a mass level is in textures and terrain data. Sadly, lossless realtime texture compression is still not used at scale... Well, next time AMD/Nvidia run into the bandwidth wall, they'll probably do it.

    • @totoritko 11 years ago +25

      0pyrophosphate0 What annoys me, as a computer science graduate, is when computer science students think they've mastered a topic they've barely dipped their toes into.
      Follow the advice of your tutors on int sizes - with years of experience and countless integer-overflow bugs and performance issues from unaligned memory access, you'll learn to appreciate their wisdom.

    • @0pyrophosphate0 11 years ago +9

      totoritko You don't know me or what I've studied or how much. No need to be condescending. I'm quite aware of how much I don't know, and I do know there are pitfalls to avoid when dealing with memory. I just would rather be taught about those pitfalls and how to avoid, debug, and fix them, rather than to just shut up and use int.
      Even worse, I think, is using double for everything with a decimal point.
      I'm also fully prepared to accept in 5-10 years that my current attitude is ridiculous, if in fact it turns out that way.

    • @Zguy1337 11 years ago +7

      0pyrophosphate0 Your program is not going to turn into a memory-hungry beast just because you're using 32-bit integers instead of 16- or 8-bit integers. It might become a bit faster, though.

  • @TechLaboratories 11 years ago

    Thank you for what's probably the best explanation of floating point arithmetic that now exists! It's easy from this starting point to extrapolate into understanding floating point operations (FLOPs), and half (16-bit), single (32-bit), and double (64-bit) precision floating point numbers! Thanks Tom!

  • @algarch 10 years ago +2

    The place I have seen this (rounding errors with floats) become significant is in summing large lists of values with a large variation in value. When adding a small value to an existing large value, the entire change can be lost in the inaccuracy of the floating point (it just gets dropped as not being significant). If you do this many times (add a small value to a large value), the end result can still be no change, even though you have cumulatively added a large amount. For this type of data, sorting the values from smallest to largest and summing them in that order gives a more accurate end result, assuming that no one value is significantly larger than the current total to which it is being added. I realise I'm not defining significant, small and large sufficiently, but just enough to make my point. Maybe.
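
    A quick Python illustration of the ordering point above, plus the standard library's math.fsum, which tracks the lost low-order bits for you (the numbers are chosen just for this example):

      import math

      values = [1e16] + [1.0] * 1000   # one huge value first, then many small ones

      total = 0.0
      for v in values:
          total += v                   # each +1.0 rounds straight back to 1e16...
      print(total)                     # 1e+16: ...so all thousand of them vanish

      print(sum(sorted(values)))       # 1.0000000000001e+16: smallest-first works here
      print(math.fsum(values))         # 1.0000000000001e+16: the exactly rounded sum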

  • @airmanfair 10 years ago +52

    Aren't floating point errors what they used in Office Space to rip off Initech?

  • @Werevampiwolf 5 years ago +10

    I've taken five semesters of calculus, chemistry, and several engineering and physics classes. I've used scientific notation for years...and this video is the first time I've heard an explanation for WHY scientific notation is used.

  • @RachelMant 7 years ago +3

    Would be fantastic to see a video done on fixed point, which is the other way to solve the floating point accuracy issues for numbers of limited range, especially as you can store the decimal component as an integer and do some clever maths computing the overflow quantity back into the integer component. This is actually how programs like Excel solve the problem when you click the Currency button.
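
    A toy Python sketch of the fixed-point idea described above, with four implied decimal places held in a plain integer (the names and the 4-digit scale are arbitrary, and this ignores negatives and overflow):

      SCALE = 10_000                   # 4 implied decimal places

      def fx(text: str) -> int:
          # parse "2.5" into an integer count of 1/10000 units
          whole, _, frac = text.partition('.')
          return int(whole) * SCALE + int(frac.ljust(4, '0')[:4])

      def fx_str(units: int) -> str:
          return f"{units // SCALE}.{units % SCALE:04d}"

      print(fx_str(fx('0.1') + fx('0.2')))    # 0.3000: exact, integer arithmetic only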

  • @BritchesBumble57 10 years ago +66

    and this is why you should never fuck with floating point numbers

  • @veloxsouth 11 years ago +29

    I would have loved to see more about the reserved cases of the floating point standard such as NaN, as well as some more on the topic of normalization. Hope there's more to this in some "extra bits"

  • @Kd8OUR 11 years ago +1

    You guys provide a far better lesson than anything I got in school.

  • @ThePhiSci 11 years ago

    I watch Computerphile's videos for this guy.

  • @Patman128 11 years ago +2

    New languages (like Clojure) are starting to include fractions as a data type. If the numerator and denominator are arbitrary precision (integers that scale as necessary), then you can represent and do operations on any rational number with no error (see the sketch after this thread).

    • @DrMcCoy 11 years ago

      Sure; just keep in mind that this introduces both memory and computational overhead. :)
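
    The same idea is available in Python's standard library as fractions.Fraction, shown here as a small sketch of arbitrary-precision rationals (and of the overhead DrMcCoy mentions: numerators and denominators keep growing instead of rounding):

      from fractions import Fraction

      x = Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3)
      print(x)                                                       # 1, exactly
      print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True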

  • @BooBaddyBig 11 years ago +17

    Actually, in graphics it's also a problem, even being off by a fraction of a pixel can give a visible dust of gaps between surfaces.
    Careful rounding usually helps with floating point issues.

    • @murphy54000 11 years ago +4

      Uhm. Pixels can't move, and there are fewer than 3000 of them tall/long on most screens. I fail to see how it could make a difference.

    • @BooBaddyBig 11 years ago +7

      For example in 3D graphics you also have rotation, translation and perspective transformations and they can individually or in combination put the edges of surfaces right either side of the centre of the pixel, so the point used for the pixel isn't touching either surface.

    • @murphy54000 11 years ago +2

      BooBaddyBig Well I don't make graphics, so I'm going to say that it still doesn't really make sense, but I trust what you say regardless.

  • @jgcooper 11 years ago +1

    it took me a long while, but i just noticed that the "/" was added at the end. awesome!

  • @SirCrest 10 years ago

    Tom Scott is probably my favourite guest.

  • @samanthabayley2194 7 years ago

    This finally explains the strange results in a program I wrote in first year at uni. The program was meant to calculate change in the fewest possible coins, but whenever I had 10c left it would always give two 5c coins; for the life of me I couldn't figure out why. Now I know. It also used to happen with the train ticket machines, so at least I'm not alone in making that error.

  • @nagyandras8857 9 years ago +4

    What the video says holds true for digital computers, but analogue ones can handle recurring numbers: 1/3 + 1/3 absolutely makes sense and will yield 2/3.

  • @krishnasivakumar2479 6 years ago

    I'm loving this channel. It's heaven.

  • @cosmemiralles1295 10 years ago +1

    Finally, I'm starting to understand what floating point is!!!

  • @TheKazragore 7 years ago +1

    It's so nice to hear someone say "maths" on YouTube.

  • @FollowTheLion01 11 years ago

    It's probably well worth pointing out that when you try to add a small number onto a big number, you get into trouble. This is really bad if you're trying to integrate a floating point value the way you would increment an integer. Even if the number you're adding is nice and tidy, like 1.0 (tidy in base 2 as well as base 10), when your variable gets really big, like 3E8 (your example), the computer can't add the small number because it gets lost due to the significant-digits problem. This may be self-evident to many of us, but this single little aspect of dealing with floating points can cause a lot of trouble. I keep it in mind whenever I'm dealing with floating points, especially on platforms that handle fewer significant digits, like many PLCs, DCSs, process control systems, etc. If the compiler allows it, it's sometimes better to use really big integers, or even multiple registers (like a times-100000 tag and a times-1 tag for everything below 100000) to let you keep adding.
    Some software supports integers of length n, which solves a lot of these problems but has its own issues, of course (speed, memory).
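
    Roughly the effect described above, in Python (3E8 is the video's example magnitude; in a 32-bit float the spacing between neighbours there is 32, so adding 1.0 changes nothing, and 64-bit doubles hit the same wall further out):

      t = 1.0e16                       # spacing between doubles here is 2.0
      print(t + 1.0 == t)              # True: the increment is silently lost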

  • @harley291096 11 years ago

    Who is this man? He needs a fucking medal. Give him 3!

  • @martinkuliza 10 years ago +2

    Mate... Respect for still having the blue/white lined FORM FEED paper LOL i still have a few boxes myself, it's good for scrap huh.

  • @cfhay 11 years ago +17

    What I missed in the video is an explanation of how the calculator can still show 0.1 + 0.2 as 0.3 when it uses floating point numbers. The calculator stores more precision than it shows on the screen, and the leftover digits are used for rounding, so most stable calculations display without error. This is important, because programmers need to be aware of these errors, how to handle them, what error is tolerable, and what precision they need to meet that requirement.
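
    The display trick is easy to imitate in Python: keep the full precision internally and round only for the screen, which is roughly what a 10-digit calculator does (a sketch, not how any particular calculator is built):

      x = 0.1 + 0.2
      print(x)             # 0.30000000000000004: the full stored value
      print(f"{x:.10f}")   # 0.3000000000: rounded to 10 digits, the error hides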

  • @tobortine 11 years ago +15

    An excellent explanation and a pleasure to watch.

  • @oO_ox_O 11 years ago +4

    Could do a continuation where NaN and all the other fun stuff from IEEE 754 are explained and while at it maybe also talk about the different rounding modes and encodings of negative numbers.

  • @albertgao7256 8 years ago +4

    Once you know that 0.1 + 0.2 ≠ 0.3 is just a phenomenon you need to get used to, because it is the way the computer treats floating point numbers, you will not be scared by the "≠ 0.3" stuff.

  • @MasterHD 10 years ago

    A clarification I'd like to make to this video. The hardware architecture "32-bit computer" (x86) or "64-bit computer" has nothing to do with the precision of floating point numbers. That is strictly used for memory addressing. So any precision value can still be used on any hardware, though some may be more efficient at it than others.

  • @ghollisjr 10 years ago

    There are typically two types of trouble with floating point arithmetic:
    1. Adding lots of them together, accumulating errors
    2. Trying to test for equality
    The first one actually is nasty, but the second points to flawed algorithms. When working with the real numbers, even in the formulation of e.g. analysis of reals, we work with inequality, not equality. It is almost always better to implement an algorithm working with real numbers by using bounds.

  • @afluka 11 years ago

    It maybe would've been cool to mention that about half of all floating point numbers fall within the [-1,1] range, while the other half stretches out all the way to the highest exponents. In fact, there's a fixed count of numbers between consecutive powers of two: the amount of numbers between 1 and 2 is equal to the amount between 2 and 4, 4 and 8, and so on.
    This makes normalising your numbers a good idea when you need to do precise calculations. An operation like 1000.1 + 1000.2 is going to give you an even bigger error than 0.1 + 0.2. Of course, if you can avoid errors entirely by using some other (software) number format without impacting your performance too much, then that's the better solution.
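
    The equal-count claim is easy to check in Python: consecutive 32-bit floats have consecutive bit patterns, so counting the floats in a range is just a subtraction (a sketch using the standard struct module):

      import struct

      def bits(x: float) -> int:
          # reinterpret a 32-bit float's IEEE bit pattern as an unsigned integer
          return struct.unpack('<I', struct.pack('<f', x))[0]

      print(bits(2.0) - bits(1.0))     # 8388608 floats in [1, 2): that's 2**23
      print(bits(4.0) - bits(2.0))     # 8388608 floats in [2, 4): same count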

  • @thecassman 11 years ago +10

    Fantastic explanation! I've not heard it explained anywhere this clearly before, and it's a great way to explain it. Keep these videos going, Brady / Sean!
    As somebody who programmed for a payroll firm for 5 years and now programs for a credit risk firm, I totally appreciate the benefits of using decimal types!

  • @jmfriedman7 10 years ago

    One bugaboo, where the last decimal place IS important, is with floating expressions such as: xx^2 + yy^2 + zz^2
    These can give small NEGATIVE numbers if xx, yy, and zz are all small and nonzero. Try to take a square root or a logarithm and you end up with NaN (not-a-number) and this will carry all the way through the code giving errors.
    Similar errors occur with other functions that have limited input ranges. They are easily avoidable, but they always manage to sneak in somewhere.

  • @coolboyrocx 11 years ago

    I already understood this concept but I thought it was a really good explanation regardless. Some tricky things in here to get your head around initially so hopefully people find it useful!

  • @ollllj 11 years ago

    "Second Life" uses 32-bit floats for the positions of anything within chunks that are 256x256x1024 meters (not using the higher number range, trading accuracy for performance). It also uses quaternions of 4 of these floats for any rotation. Floating point rounding errors quickly add up to something that is easily visible, especially if multiple objects are grouped/linked and transformed relative to each other.

  • @123456789robbie 11 years ago

    Hey, here's an idea: interviews with computer-y people about their background. I think Tom's would be interesting because I'm pretty sure his professional qualification is as a linguist, so it would be cool to hear about why he learned all the compsci stuff that he did.

  • @torikenyon 7 years ago

    I like how they add a / to mark the end of the video

  • @marcuswilliams3455 2 years ago

    Excellent explanation of floating point numbers. Now it would be great to explain decimal floating point (DFP), aka IEEE 754-2008. For financial transactions, people often balk at the one language which utilizes BCD numbers for representing currency.

  • @ghuegel 11 years ago +1

    I've run into this a few times at work using Excel. It seemed very strange and looking into why it happened was enlightening and interesting... plus, we found a few ways to correct it.

  • @MysterX79 11 years ago +4

    And then you have a simple "if (d

  • @CoxTH 10 years ago +57

    To come back to the 3D game example:
    In older versions of Minecraft there is a bug caused by floating point rounding errors. Because the game world is procedurally generated, those errors show up in the world generation. They only become significant when one of the coordinates is at about +/- 12.5 million, but there the world generation changes dramatically.
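
    For a feel of the scale involved, Python can show how coarse floats get out there (math.ulp needs Python 3.9+; the struct round-trip simulates a 32-bit float):

      import math, struct

      def f32_next_gap(x: float) -> float:
          # distance from x to the next representable 32-bit float above it
          i = struct.unpack('<I', struct.pack('<f', x))[0]
          return struct.unpack('<f', struct.pack('<I', i + 1))[0] - x

      print(math.ulp(12_500_000.0))       # ~1.86e-09: 64-bit double spacing
      print(f32_next_gap(12_500_000.0))   # 1.0: a 32-bit float can only step in
                                          # whole units at 12.5 million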

  • @mikelipsey8837 10 years ago +16

    This guy is a very good teacher.

  • @sklanman 11 years ago +4

    This reminds me of Office Space (the movie), in which the characters plan to modify the bank software's source code to take the tiny fractions of a cent from everyday banking transactions and, instead of rounding them down, transfer the difference to their account.

  • @Mittzys 9 months ago +1

    Watching this, I already understood floating point numbers, but I did not understand scientific notation. Ironically, this video has taught me the exact opposite of what it intended to

  • @claudemartin5907 7 years ago +2

    "32 bit computer" -> that's about addressing. They can handle 64 bit floating-point arithmetic.
    You could build a computer that can do 128 bit floating-point arithmetic and uses only 16 bit for addressing. There certainly is some relation when it comes to CPU design and the floating-point unit size, register size and address size. But it's not really the same. It might even be handled by a coprocessor. And FPU stack registers are usually 80 bits (10 bytes) wide, not 64 bits. See fld, fild, and fbld in assembly.

  • @mailsprower1 11 years ago

    Thanks for closing that tag at the end, Brady!
    My OCD is finally fixed!

    • @MrTridac 11 years ago

      The problem is that the tags from the first few videos are still open. I guess the nesting depth is still a few dozen levels deep. And you have to include the opening tags at the end of all these early videos too. INSANE. O.o

    • @mailsprower1 10 years ago

      I tried not to think about that... I tried.

    • @MrTridac 10 years ago +1

      mailsprower1 Brady should make a "closing tags only" video or put them in as "cameo" here and there in future videos. :)

  • @mxkep 8 years ago

    I never thought floating point could be described in such an interesting way!

  • @montyoso 10 years ago +11

    This is a clever guy who explains complicated things in a simple way.

  • @0pyrophosphate0 11 years ago

    This actually cleared up FP logic for me a lot. Thanks for that.