how floating point works

  • Published: 28 Sep 2024

Comments • 1.1K

  • @codahighland
    @codahighland 2 years ago +2761

    For anyone curious: The generalization of a decimal point is known as a radix point.

    • @ferociousfeind8538
      @ferociousfeind8538 2 years ago +60

      Is that pronounced [ɹadɪks] or [ɹeidɪks]?

    • @codahighland
      @codahighland 2 years ago +119

      @@ferociousfeind8538 It's [ɹeidɪks] in American English, at least. I couldn't say what the appropriate Latin pronunciation would be.

    • @Cheerwine091
      @Cheerwine091 2 years ago +41

      Rad!

    • @doublex85
      @doublex85 2 years ago +36

      Hmm, I'm not sure I ever heard anyone say it aloud so I usually read it to myself as /rædɪks/. Both Merriam-Webster and Cambridge only offer /reɪdɪks/, which is a bit distressing.

    • @stevelknievel4183
      @stevelknievel4183 2 years ago +130

      @@ferociousfeind8538 I don't think I've ever seen someone ask about pronunciation in a YouTube comments section and then someone else give an answer where both commenters know and use IPA. It stands to reason that it would be on this channel though.

  • @bammam5988
    @bammam5988 2 years ago +830

    Floating point is a very carefully thought-out format. Here are my favorite float tricks:
    * You can generate a uniform random float between 1.0 and 2.0 by putting random bits into the mantissa and using a certain constant bit pattern for the exponent/sign bits. Then you can subtract 1 from it to get a uniform random value between 0 and 1. This is extremely fast *and* has a much more uniform distribution than naive ways of generating random floats, like "rand(100000) / 100000.0f" (see the sketch after this list)
    * GPUs can store texture RGBA pixels in a variety of formats, and some of them are floating-point. But engineers try as hard as possible to cram lots of data into a very small space, which leads to some esoteric formats. For example, the "R11G11B10" format stores each of the 3 components as an *unsigned* float, with 11 bits each for the Red and Green floats and 10 bits for the Blue float, to fit into a 32-bit pixel. Even weirder is "RGB9_E5". In this format, each of the three color channels uses a 14-bit unsigned float: 5 bits of exponent and 9 of mantissa. How does this fit into 32 bits? Because they share the same exponent! The pixel has 5 bits for the exponent, and then 9 bits for each mantissa.
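    A minimal C sketch of the [1, 2) trick (the function name is mine; 0x3F800000 is the standard IEEE bit pattern for 1.0f):

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* 23 random mantissa bits under a fixed sign/exponent give a uniform
   float in [1.0, 2.0); subtracting 1 shifts that to [0.0, 1.0). */
float random_float_01(uint32_t random_bits)
{
    uint32_t bits = 0x3F800000u                  /* sign 0, exponent 127: 1.0f */
                  | (random_bits & 0x007FFFFFu); /* random mantissa */
    float f;
    memcpy(&f, &bits, sizeof f);                 /* bit-cast without aliasing UB */
    return f - 1.0f;
}

int main(void)
{
    printf("%f\n", random_float_01(0x00400000u)); /* 1.5 - 1.0 = 0.500000 */
    return 0;
}
```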

    • @pekkanen_sr
      @pekkanen_sr 2 years ago +18

      So if the colors in RGB9_E5 share the same exponent, then wouldn't each color have roughly the same brightness, each less than twice as much as any other? So the colors would always look pretty unsaturated.

    • @kkard2
      @kkard2 2 years ago +54

      @@pekkanen_sr no, because you would scale the other components' mantissas.
      e.g. if you want a very bright red with a very small green value, you set a high exponent, a high red mantissa, and a low green mantissa
      ofc this way you lose precision with very bright colors, but that's the point

    • @pekkanen_sr
      @pekkanen_sr 2 years ago +18

      @@kkard2 So are you saying the first digit of the mantissa can be a 0 whatever the exponent is, unlike with usual floating points?

    • @kkard2
      @kkard2 2 years ago +6

      @@pekkanen_sr hmm, tbh i just think it's that way; quick google searches failed me and left me with unconfirmed information...

    • @pekkanen_sr
      @pekkanen_sr 2 years ago +2

      @@kkard2 Because if it's not like that, then my point still stands: the highest the mantissa can be is 1.111111111 in binary, or about 1.998 in decimal, and the lowest it can be is 1, which is exactly what I was saying
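      For what it's worth, the published spec (OpenGL's EXT_texture_shared_exponent) settles this: RGB9_E5 mantissas have no implicit leading 1, so one channel can stay tiny under a large shared exponent. A small C sketch of the per-channel decode (bias 15, 9 mantissa bits):

```c
#include <math.h>
#include <stdio.h>

/* RGB9_E5 channel value = mantissa * 2^(exponent - 15 - 9) */
float decode_channel(unsigned mantissa9, unsigned exponent5)
{
    return ldexpf((float)mantissa9, (int)exponent5 - 24);
}

int main(void)
{
    /* bright red and dim green under one shared exponent */
    printf("r = %g\n", decode_channel(511, 20)); /* 31.9375 */
    printf("g = %g\n", decode_channel(1, 20));   /* 0.0625  */
    return 0;
}
```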

  • @jezerozeroseven
    @jezerozeroseven 2 years ago +125

    IEEE 754 is amazing in how often it just works. "MATLAB's creator Dr. Cleve Moler used to advise foreign visitors not to miss the country's two most awesome spectacles: the Grand Canyon, and meetings of IEEE p754" -William Kahan

  • @leow2996
    @leow2996 2 years ago +3

    That barely audible mouse click to stop recording right after a nonsensical parting one-liner is just *chef's kiss*

  • @seneca983
    @seneca983 2 years ago +312

    11:23 "The caveat only being that they get increasingly less precise the closer you get to zero."
    That isn't the only caveat. Subnormals often have a separate trap handler in the processor, which can slow down processing quite a bit if a lot of subnormals appear.
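    For what it's worth, the common workaround on x86 is to flush subnormals to zero via the SSE control register; a minimal sketch using the Intel intrinsics (whether the accuracy loss is acceptable depends on the workload):

```c
#include <xmmintrin.h>  /* _MM_SET_FLUSH_ZERO_MODE (SSE) */
#include <pmmintrin.h>  /* _MM_SET_DENORMALS_ZERO_MODE (SSE3) */

int main(void)
{
    /* Trade strict IEEE behavior for speed: subnormal results and
       inputs are treated as zero, so the slow path is never taken. */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    /* ... hot numeric loop goes here ... */
    return 0;
}
```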

    • @Kaepsele337
      @Kaepsele337 2 years ago +76

      Thank you, you just answered my question of why my Monte Carlo integrator has wildly different run times for different parameter values even though it should perform exactly the same operations.

    • @RyanLynch1
      @RyanLynch1 2 years ago +4

      good tidbit/nit! thank you!

    • @seneca983
      @seneca983 2 years ago +13

      @@Kaepsele337 Wow, I didn't expect my comment to be this useful but I'm glad that I could help you.

    • @lifthras11r
      @lifthras11r 2 years ago +33

      Note that this is in many cases an implementation artifact. There _are_ implementation techniques for performant subnormal handling, but since they are supposed to happen only sparingly actual implementations have less incentive to optimize them. That's why subnormal numbers hit much harder in newer machines than in older machines. (IEEE 754 does have an additional FP exception type for subnormals, but traps are frequently disabled for the same reason that signaling NaN is unpopular.)

  • @Hyreia
    @Hyreia 2 years ago +147

    Shout out to my favorite underused format, fixed point. An old game-creation library I used had it. Great for 2D games when you want subpixel movement but not a lot of range ("I want it to move 16 pixels over 30 frames"). I found the limited precision perfect: you don't get funny rounding errors that make things fail to line up after moving them small distances.
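    A minimal sketch of such a format in C (the 20.12 split is illustrative; the 16-pixels-over-30-frames numbers are from the use case above):

```c
#include <stdint.h>
#include <stdio.h>

typedef int32_t fixed_t;            /* 20.12 fixed point */
#define FP_SHIFT 12
#define FP_ONE   (1 << FP_SHIFT)    /* 1.0 == 4096 */

static fixed_t fp_from_int(int x)   { return (fixed_t)x << FP_SHIFT; }
static int     fp_to_int(fixed_t x) { return (int)(x >> FP_SHIFT); }

int main(void)
{
    fixed_t pos  = 0;
    fixed_t step = fp_from_int(16) / 30;  /* truncates to 2184/4096 px per frame */
    for (int frame = 0; frame < 30; frame++)
        pos += step;
    /* lands 16/4096 px short of exactly 16: a tiny *deterministic* error,
       unlike a float, whose error depends on where the sprite sits */
    printf("x = %d px + %d/4096\n", fp_to_int(pos), (int)(pos & (FP_ONE - 1)));
    return 0;
}
```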

    • @sourestcake
      @sourestcake 2 years ago +26

      They're also good for avoiding bugs caused by the varying precision of floating-point formats. Sadly, they have worse optimization potential on modern CPUs.

    • @ferociousfeind8538
      @ferociousfeind8538 2 years ago +15

      I also feel like Minecraft should be using a fixed-point position format. Half of the whole number space is completely wasted at the world's origin, and then the precision is stretched to its limit at quintillions of blocks out. Does Minecraft need more than 4096 possible positions within a block (in the X, Y, and Z directions each, of course)? I don't really think so. That leaves 19 bits for macro-block positions (and 1 for the sign, of course), which is 524,288 blocks with 32 bits. Or, with 64 bits, as I'm pretty sure it already uses, you can go out to 2,251,799,813,685,248 (or 2.251×10^15, or 2.2 quadrillion) blocks in any direction before the game either craps out or loops your world. Which I think is a fine amount of space, and even 500,000 blocks was fine: you'd run out of computer memory doing ordinary Minecraft things, or run out of interest in the world, before you actually naturally explored that far. But with 2 quadrillion blocks in any direction, there is no way you'll ever get out there without teleporting there to see what it's like

    • @ferociousfeind8538
      @ferociousfeind8538 11 months ago +3

      Yeah, games with limited scale don't need the sliding scale of the floating point number system. Minecraft does not need to specify 1/65536th of a block; it probably doesn't functionally need anything more than, like, 1/4096th of a block (or 1/256th of a single pixel), which would nominally squash the range the floating point coordinate system afforded you, but expand the usable space significantly and give you a hard cutoff where the fixed-point numbers top out (or bottom out negatively)
      In fact, using 64 bits for positions (floats are SO last year) and a fixed-point implementation down to 1/4096th of a block, you get a range up to 2.2518 × 10^15 blocks in any direction from spawn, all of which behave as well as current vanilla Minecraft does between 2048 and 4096 blocks away from spawn (where the minimum change is already 1/4096)
      And, of course, I couldn't just NOT mention the other half of the floating-point numbers that Minecraft straight up cannot use: half of the available numbers are positions between 0 and 1! Unparalleled precision used for nothing!

    • @ferociousfeind8538
      @ferociousfeind8538 11 months ago +2

      Lmfao I am constantly harping on this I guess

  • @Dawn_Cavalier
    @Dawn_Cavalier 2 years ago +85

    If I had to guess, 0 being positive and 1 being negative is a holdover from 2's complement binary representation.
    For those uninitiated, 2's complement binary representation (2C) is a way to represent positive and negative whole numbers in binary that uses the leading bit as the sign bit. To showcase why this format exists, here's an example of writing -3 in 2C using 8 bits.
    Step 1: Write |-3| in binary
    |-3| = 3 = (0000 0011)
    Step 2: Invert all of the bits
    Inv(0000 0011) = 1111 1100
    Step 3: Add 1
    1111 1100 + 0000 0001 = 1111 1101
    -3 = (1111 1101)2C
    Converting it back is the reverse process
    Step 1: Subtract 1
    1111 1101 - 0000 0001 = 1111 1100
    Step 2: Invert all of the bits
    Inv(1111 1100) = 0000 0011
    Step 3: Convert to the base of your choice, 10 in this example, and multiply by -1
    0000 0011 = 3
    3 * -1 = -3
    (1111 1101)2C = -3
    The advantage of this form is that addition works the same regardless of the signs of the numbers or their order, using the discarded overflow to drop the sign when the result should be positive.
    Example: -3 + 5
    1111 1101 + 0000 0101 = (1) 0000 0010, carry discarded
    2 = (0000 0010)2C
    Example: -3 + 2
    1111 1101 + 0000 0010 = 1111 1111
    -1 = (1111 1111)2C
    It's ingenious how effortlessly this integrates with existing computer operations and doesn't have glaring issues, such as One's Complement's duplicate zero, or the operational baggage of more naïve negative-number systems.
    To go back to the original statement, this system only works if the leading digit of a negative number is a one, because if it were inverted, 0 would be (1000 0000)2C. This is not only unsatisfying to look at, but dangerous when you consider that most bits in a computer are initialized to zero, which this hypothetical system would read as a negative number rather than zero.
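    For reference, the invert-and-add-one dance is a one-liner in C; a small sketch (example values are mine):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t x   = 3;
    uint8_t neg = (uint8_t)(~x + 1);   /* invert all bits, add 1 */
    printf("%u\n", neg);               /* 253 == 1111 1101 == -3 in 2C */
    printf("%d\n", (int8_t)(neg + 5)); /* addition just works: (-3) + 5 == 2 */
    return 0;
}
```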

    • @hughcaldwell1034
      @hughcaldwell1034 2 years ago +3

      This was in the back of my mind, too.

    • @angeldude101
      @angeldude101 1 year ago

      Using a single bit to represent the sign is convenient for thinking about and implementing the numbers, but ultimately binary integers are _modular_ integers, which don't actually have any concept of sign. Ascribing a sign based on the first bit gives odd behavior specifically for 0 = -0 being "positive" and 128 = -128 being "negative" when really both are neither.

  • @agcummings11
    @agcummings11 2 years ago +17

    I already know how floating point works but I’m watching this anyway because I have PRINCIPLES and they include watching every Jan misali video

    • @Duiker36
      @Duiker36 2 years ago +1

      I mean, it's at least as important as the origins of Caramelldansen.

  • @denverbeek
    @denverbeek 2 years ago +8

    Microprocessor/IC enthusiast here.
    5:33
    The reason that a 1 in the sign bit of a signed integer is negative and a 0 is positive is that it saves a step when doing 2's complement, which is how computers do subtraction (basically, you can turn any subtraction problem into an addition problem by flipping the bits of one of the numbers and adding 1, since computers can't natively subtract).

  • @quietsamurai1998
    @quietsamurai1998 2 years ago +20

    This is without question the best explanation of floating point numbers I've ever seen. I wish this video had been around when I was taking my freshman CS classes and we had to memorize the structure of floats; actually walking through the whole process of compromises and design decisions really gives you a deep understanding of the reasoning behind the format.

  • @MCLooyverse
    @MCLooyverse 2 years ago +55

    I always read the sign bit as `isNegative`. Also, I recently did a thing in Desmos where I needed to know the quadrant a point was in, so I had Q(p) = N(p.x) + 2N(p.y); N(x) = {x < 0: 1, 0}, where N is the same mapping as the sign bit (this generates a non-standard quadrant index, but it's maybe better).
    Also, with this sign encoding, the actual sign factor is (-1)^s ((-1)^0 = 1, (-1)^1 = -1)
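    A small C sketch of pulling that s bit out of a float's bit pattern (the helper name is mine):

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* (-1)^s: the sign factor is decided entirely by the top bit */
int sign_factor(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (bits >> 31) ? -1 : 1;
}

int main(void)
{
    printf("%d %d\n", sign_factor(2.5f), sign_factor(-0.0f)); /* 1 -1 */
    return 0;
}
```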

    • @matthewhubka6350
      @matthewhubka6350 2 years ago +1

      The sign function also exists in Desmos, btw. It might not work as well though, since sign(0)=0

  • @EDoyl
    @EDoyl 2 years ago +63

    doubly recommend the tom7 video. It's interesting how the different NaN values tell you what sort of NaN it is, but NaN never equals NaN if you compare them. Even if they're the same sort of NaN, they don't equal each other. Even if they're literally the same bits at the same location in memory, it doesn't equal itself.

  • @notnullnotvoid
    @notnullnotvoid 2 years ago +16

    I always found it silly that IEEE-754 gives us 1/0=INF but not INF*0=0; this is the first time I've had a plausible explanation for why it might have been designed that way. Thanks!

    • @cmyk8964
      @cmyk8964 2 years ago +2

      0 × ∞ is either a discontinuity or an exception in normal math too though. Anything × ∞ is ±∞, but 0 × anything is 0.

    • @danielbishop1863
      @danielbishop1863 2 years ago +3

      INF*0 is one of the standard "indeterminate forms" in calculus, which is why it evaluates to NaN.
      en.wikipedia.org/wiki/Indeterminate_form

    • @cmyk8964
      @cmyk8964 2 years ago +1

      @@danielbishop1863 Exactly. 0 can be an infinitesimal in normal math too.

  • @FernandoGarcia-hz1gp
    @FernandoGarcia-hz1gp 2 years ago +9

    Here I am, once again
    When I got out of all my maths courses I swore to never come back, but am I gonna sit through however much time jan Misali needs to talk about some cool random math thing?
    Yes, yes I am

  • @pekkanen_sr
    @pekkanen_sr 2 years ago +83

    I think the reason that a leading 0 represents positive numbers and a 1 negative is consistency with signed integer formats. When you add two signed ints (or bytes, longs, etc.), you just go from right to left, adding bitwise and carrying the 1 to the next digit if you need to. The leading bit of a signed int is 0 if it's positive so that, to add two ints, you can just treat the leading bit like another digit in the number.
    For example, in signed bytes, 1+2=3 is done as 00000001+00000010=00000011, but if a leading 1 represented a positive number, then applying the same bitwise addition step to each digit would give 10000001+10000010=00000011: adding the two leading 1's together yields 0, meaning the number becomes negative.
    With a 0 as the leading digit of a positive number, you don't have this problem when adding negative numbers, since a 1 gets carried into the leading digit (for example, (-1)+(-2)=(-3) is done as 11111111+11111110=11111101), unless the numbers are so negative that the integer wraps around, which is an unavoidable problem. Because this convention makes logical sense for signed ints, it makes sense to use it for floats for consistency.

    • @steffahn
      @steffahn 2 years ago +12

      One useful result of the sign bit being 0 meaning positive is that this way the "ordinary zero", i.e. "positive zero" value consists entirely of zero-bits. So if some piece of memory is zero-initialized, and then interpreted as floating point numbers, those become zero-initialized, too.

  • @fernandobanda5734
    @fernandobanda5734 2 years ago +24

    The weirdest thing you'll face when dealing with floating point is when you try to make games, and one of the tips is "as you get farther from the (0, 0, 0) coordinate, physics gets less precise".
    It makes a lot of sense: if you need to spend more bits on the integer part of the number, the fractional part gets less precise. It's just so impractical for this purpose.
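    A small C demonstration of that drift (the magnitudes are illustrative): at 2^22 a 32-bit float's spacing is already 0.5, so a 0.1 step vanishes:

```c
#include <stdio.h>

int main(void)
{
    float near_origin = 1.0f;
    float far_away    = 4194304.0f;     /* 2^22: float spacing here is 0.5 */
    printf("%f\n", near_origin + 0.1f); /* 1.100000 */
    printf("%f\n", far_away + 0.1f);    /* 4194304.000000 -- the step is lost */
    return 0;
}
```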

    • @huckthatdish
      @huckthatdish 2 years ago

      That makes total sense, but I've never worked in games, so it never occurred to me. How interesting. What kind of data type are the coordinates? I'd assume it's at least a double these days, but is a double precise enough for large open-world games, or do you need more bits?

    • @kkard2
      @kkard2 2 years ago +7

      @@huckthatdish in games, float is most often used, due to higher performance on the GPU (transforming vertices by a matrix, etc.)
      open-world games usually use a floating world origin, which shifts the entire world closer to (0, 0, 0) (implementations vary)

    • @qwertystop
      @qwertystop 2 years ago +7

      @@huckthatdish Generally, games with large enough areas to make this a problem handle it by setting the origin coordinate at the player's location and moving the rest of the world around them, and by not simulating the parts of the world that the player isn't currently in. That, or loading zones; any given loaded area can have its own fixed origin.

    • @qwertystop
      @qwertystop 2 years ago +3

      On the other hand, Outer Wilds has a whole miniature solar system and needs to run physics everywhere even when you're not around. I'm not sure how that handles it, but I am very impressed that they did.

    • @huckthatdish
      @huckthatdish 2 years ago +1

      @@kkard2 interesting. Still surprised that, with how far draw distances are these days, everything loaded at once can be simulated with just floats, but I know the tricks and hacks to make it all work are myriad. Very interesting

  • @GameTornado01
    @GameTornado01 2 years ago +1

    After I had to convert numbers into floating point format and back by hand in an exam this year, I feel really superior watching this.

  • @aaronspeedy7780
    @aaronspeedy7780 2 years ago +12

    0:10 Wow, I didn't realize that non-programmers really only hear about floats in the context of precision errors

    • @accuratejaney8140
      @accuratejaney8140 2 years ago +2

      Yeah, I first heard about floating point numbers in reference to the fact that if you go far enough in Minecraft Bedrock, you can fall through the world (Java doesn't have the problem in the allowed +30 million to -30 million playspace because it uses doubles), and I next heard them in reference to Super Mario 64's "parallel universes" bug and how, if you go far enough, you can't enter certain parallel universes because they're not near enough to a floating point number.

  • @DeWillpower
    @DeWillpower 6 months ago

    thank you for making this video! my code teacher told me basically everything i needed to know but then moved on to a more important subject, while your video on youtube is a cozier place where i could learn more

  • @Yotanido
    @Yotanido 2 years ago +3

    NaN is an absolute scourge. You do some operation that results in a NaN. It doesn't error; you just get a NaN as a result.
    Any operation on a NaN also yields a NaN. By the time this causes an issue in your program, it might be somewhere completely different, and now you need to figure out where the NaN originated before it infected all the other floats.
    Honestly, I'd prefer it if things just errored out when encountering a NaN. It would make debugging so much easier, and in the vast majority of cases things are going wrong once a NaN shows up anyway.

    • @Chloe-ju7jp
      @Chloe-ju7jp 2 years ago +1

      just add checks for whether things are NaN after doing something, if it's a problem

    • @Yotanido
      @Yotanido 2 years ago

      @@Chloe-ju7jp Sure, you can do that. But then you need to do a NaN check everywhere, and you might forget.
      Or you make a function for each operation, which will make longer expressions look absolutely daft and hard to read.
      NaN is an exceptional state. It should throw an exception. (Or whatever error mechanism the language you are using happens to have)

    • @rsa5991
      @rsa5991 2 years ago

      @@Yotanido On x86, the hardware supports signaling on FP errors. It is controlled by the MXCSR register. Your compiler might have a function or setting to control it.
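      On glibc, for example, feenableexcept() unmasks the invalid-operation exception so the program dies with SIGFPE right where the first NaN is produced; a minimal sketch (a glibc extension, not ISO C):

```c
#define _GNU_SOURCE
#include <fenv.h>   /* feenableexcept: glibc extension */

int main(void)
{
    feenableexcept(FE_INVALID);      /* trap the operations that produce NaN */
    volatile double zero = 0.0;
    volatile double x = zero / zero; /* raises SIGFPE here instead of yielding NaN */
    (void)x;
    return 0;
}
```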

    • @dojelnotmyrealname4018
      @dojelnotmyrealname4018 3 months ago

      @@Yotanido It sounds like what you need is a float monad.

  • @nanometer6079
    @nanometer6079 2 years ago +1

    I love that my interest in computer science and other esoteric internet things like homestuck and undertale somehow converge on this channel lol

  • @doublex85
    @doublex85 2 years ago +8

    Hey jan Misali, take a look at "Posit numbers" if you haven't already. I feel like they'll be aligned with your interests. They feel like floating point but without all the ad-hoc design-by-committee kludges.
    No negative zero, only one exception value (NotAReal) instead of trillions, more precision near zero but larger dynamic range by trading bits between the fraction part and the exponent part using the superexponential technique you actually joked about in this video. They're super cool.

  • @rohinkartik-narayan7535
    @rohinkartik-narayan7535 2 years ago

    I love the awkward pause after saying that asking why zero is positive and one is negative is a good question

  • @whatelseison8970
    @whatelseison8970 2 years ago +3

    Daaamn Jan! You really blew up since last I saw you. Nice job! This video was a lot of fun. I like to think you also did the entire thing in one take. I know you said at the end you're not a number but as far as I'm concerned, you're number one!

  • @BLiZIHGUH
    @BLiZIHGUH 2 years ago +8

    Great video as always! And I'm always happy to see Tom7 getting a plug as well :) You both exist in the same realm of "obscure but incredibly entertaining content about niche subjects"

  • @gauravbyte3236
    @gauravbyte3236 2 years ago +1

    keeping this as a reminder to come back, as I didn't understand it in one go

  • @areadenial2343
    @areadenial2343 2 years ago +8

    I hope you do a video about the balanced ternary number system sometime; it's very cool and has lots of unique properties! Having the digits 1, 0, and -1, it can naturally represent negative numbers. Truncation is equivalent to rounding, so repeated rounding will not result in loss of precision. Negating a number simply involves swapping the 1 digits with -1 and vice versa. Similar to binary, having only digits with a magnitude of 1 simplifies multiplication, allowing you to use a modified version of the shift-and-add method (flip to negative if -1, shift, and add). Some early computers used balanced ternary, such as the Setun computer at Moscow State University, and a calculating machine built by Thomas Fowler. Fowler said in a letter to another mathematician that "I often reflect that had the Ternary instead of the denary Notation been adopted in the Infancy of Society, machines something like the present would long ere this have been common, as the transition from mental to mechanical calculation would have been so very obvious and simple."
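    A small C sketch of the conversion (the +, 0, - characters for digits 1, 0, -1 are my own convention):

```c
#include <stdio.h>

void print_balanced_ternary(int n)
{
    char buf[48];
    int i = 0;
    if (n == 0) { puts("0"); return; }
    while (n != 0) {
        int r = ((n % 3) + 3) % 3;                            /* floored remainder */
        if (r == 0)      { buf[i++] = '0'; n /= 3;            }
        else if (r == 1) { buf[i++] = '+'; n = (n - 1) / 3;   }
        else             { buf[i++] = '-'; n = (n + 1) / 3;   } /* digit -1 */
    }
    while (i--) putchar(buf[i]);  /* most significant digit first */
    putchar('\n');
}

int main(void)
{
    print_balanced_ternary(7);   /* +-+ :  9 - 3 + 1 */
    print_balanced_ternary(-7);  /* -+- : negation swaps + and - */
    return 0;
}
```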

  • @DominoPivot
    @DominoPivot 2 years ago +12

    Thanks, this is a very straightforward explanation, and I'm probably going to link it to any neophyte programmer who asks me a question about floating point :)
    Or anyone who shouts that JavaScript is a bad language when the flaws they're complaining about are really flaws in the IEEE floating point standard that JS doesn't encapsulate.

    • @keiyakins
      @keiyakins 2 years ago +7

      I mean, "all numbers are double precision floats, deal with it" is a pretty awkward design decision. On the other hand the thing was thrown together in like a week and a half originally and when you take that into consideration it's pretty good.

    • @notnullnotvoid
      @notnullnotvoid 2 years ago +4

      Having no integer data type is FAR from the only reason why javascript is a terrible programming language.

    • @drdca8263
      @drdca8263 2 years ago +1

      @@notnullnotvoid that doesn’t preclude many of the complaints made from being mistaken though

    • @DominoPivot
      @DominoPivot 2 years ago

      @@notnullnotvoid Plenty of languages with a high level of abstraction have only one number type. 64-bit floating point numbers represent 32-bit integers exactly, so when dealing with human-scale numbers it's rarely a problem. But JS could have done a better job handling NaN, Infinity, and -0, for sure.

    • @jangamecuber
      @jangamecuber 2 years ago

      @@notnullnotvoid e.g. the Date system

  • @tanyaomrit1616
    @tanyaomrit1616 2 years ago

    jan Misali: These five bits for where the point should go allow us to do something very clever.
    Me, listening to this in the background for the third time, processing everything kinda on autopilot, and also having seen the Lidepla video: Uh oh that can't be good

  • @jasmijnwellner6226
    @jasmijnwellner6226 2 years ago +59

    I now understand subnormal numbers; I thought I was never going to get them. Thanks!
    By the way, have you heard of posits? It's kinda like a "floating floating point system" like you mentioned at 5:10, and it's really fascinating (albeit harder to understand than the standard IEEE 754 format).

    • @kered13
      @kered13 2 years ago +1

      Thank you for mentioning posits! I remembered reading about them, but I could not remember what they were called. They're a really clever format.

  • @michaeltan7625
    @michaeltan7625 2 years ago +1

    That was a very good explanation. Personally, I always considered finding the best compromise to be one of the cornerstones of engineering, and this system is a really good example of it.
    Also, I find the whole "every number is a range" thing much easier to digest by thinking of it as scientific notation with limited significant figures. Then it does make sense that 1.00000 * 10^15 + 1 still rounds to 1.00000 * 10^15 if you are hypothetically limited to six significant figures.

  • @santoast24
    @santoast24 2 years ago +3

    Once again, jan Misali has taken something I don't give two hoots in Hell about and convinced me to sit through an almost-20-minute-long video, enjoy every second of it, and learn some stuff that I will happily carry with me forever, even though I don't care about it.
    Impressive

  • @tizurl
    @tizurl 1 year ago

    love how Floating Point is basically just working with limits when we get to 0s and infinities lmao, the undefined values are the NaNs

  • @hikingpete
    @hikingpete 2 years ago +1

    Floating point numbers are lots of fun. I find the intricacy of string conversion particularly fascinating. There's so much to talk about, starting with Edward Lorenz's discovery of chaos theory and ending up with algorithms like Dragon4 and Grisu3. Of course, you could also pursue that thread you started pulling on, 'floating point floating points', which will lead you into the world of Posits and Unums.

  • @Zoltorion
    @Zoltorion 2 years ago

    I just sat up last night thinking about and looking at the wiki page for floating point numbers in the middle of the night and so now it's blowing my mind that you released this video at basically the same time that was happening. Crazy coincidence and great video as always.

  • @skyshoesmith6098
    @skyshoesmith6098 2 years ago

    This is better than the explanation given in my first year of my computer science degree, with one important omission:
    If two approximations are very good, subtracting one from the other might STILL yield a massively disproportionate error. For example, a quadrillion-and-one minus a quadrillion is one, but floating point would return 0. So that's wrong by a factor of NaN.
    If you don't want a factor of NaN in your multiplications/divisions, this is a (real-world!) problem.
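    A minimal C demonstration of that cancellation, using 32-bit floats as in the video:

```c
#include <stdio.h>

int main(void)
{
    float big  = 1e15f;          /* "a quadrillion" */
    float big1 = 1e15f + 1.0f;   /* rounds to the very same float */
    printf("%f\n", big1 - big);  /* 0.000000, not 1 */
    return 0;
}
```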

  • @gingganggoolie
    @gingganggoolie 2 years ago

    I love this format, and that you use it so much. Like explaining the history of a letter. Knowing WHY something is the way it is is one of the most effective ways I know to remember something long term

  • @TheThirdPrice
    @TheThirdPrice 2 years ago +4

    9:29 legit made me laugh out loud

  • @jamesburrelljr.8561
    @jamesburrelljr.8561 9 months ago +1

    Have a merry mantissa and a happy sign bit to you.

  • @downwardtumble4451
    @downwardtumble4451 2 years ago +4

    0 is positive and 1 is negative because of how signed integers work. If you were to subtract 1 from 00000000, it would underflow and get you 11111111, which is -1.

  • @flyinhigh7681
    @flyinhigh7681 2 years ago +1

    Thank you for shouting out Tom VII over at Suckerpinch, the man is a comp sci legend and makes great videos

  • @abrahammekonnen
    @abrahammekonnen 2 years ago +1

    Thank you for your explanatory videos. I always appreciate how clear, effective, and entertaining your videos are.

  • @ManOfDuck
    @ManOfDuck 2 years ago

    Great video. I love how you broke it down in a way that didn't just explain what it does, but also why it does it, coming from the perspective of the person inventing it. Presenting it like that makes it seem way more intuitive and allows a deeper understanding rather than just memorization. Solid stuff!

  • @King_Imani
    @King_Imani 7 months ago

    dude, I have never watched your content before and the format of your content is top tier
    With love from one dev to another

  • @gaeel330
    @gaeel330 2 years ago +2

    It's equally truthful and painful to call Tom7's video the "logical conclusion" of NaN and Infinity shenanigans.
    Thanks for this video, it's a fun and easy to follow explanation of the format, I particularly appreciate how you insist that floating point numbers represent a range of numbers.
    ni li pona, a!

    • @gaeel330
      @gaeel330 2 years ago

      nampa pi sike telo li pona ala. lipu tawa li pona.

  • @potato_nuggetz6675
    @potato_nuggetz6675 2 years ago +3

    at 8:03, in the bottom right, it says 5.88x10^039 instead of 5.88x10^-39, which is quite humorous I think

  • @Huntracony
    @Huntracony 2 years ago +1

    Thinking of floats as representing a small range of numbers rather than a specific number is really cool and useful.

    • @Huntracony
      @Huntracony 2 years ago

      @@gregoryford2532 That's fair.

  • @gamemeister27
    @gamemeister27 2 years ago +1

    Floating point ate my ass when I tried to write a program in javascript to draw the Mandelbrot set. I can't wait to find out why!

  • @lescitrons
    @lescitrons 2 years ago +1

    the extra bits in a NaN value can be used in a technique called NaN boxing, implemented in some interpreters for languages like JavaScript that use floating point heavily: instead of having separate memory to store the type of a value, a value is assumed to be floating point unless it is NaN; otherwise, it is (often a pointer to) a value of another type
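    A toy sketch of the idea in C (this tag layout is hypothetical, not taken from any real engine):

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* A quiet NaN sets all 11 exponent bits plus the top mantissa bit,
   leaving 51 payload bits (plus the sign bit) free for tags. */
#define QNAN    0x7FF8000000000000ull
#define TAG_INT 0x0001000000000000ull    /* hypothetical "this is an int" tag */

typedef uint64_t value_t;

value_t  box_double(double d) { value_t v; memcpy(&v, &d, 8); return v; }
value_t  box_int(uint32_t i)  { return QNAN | TAG_INT | i; }
int      is_double(value_t v) { return (v & QNAN) != QNAN; } /* real NaNs need canonicalizing */
uint32_t unbox_int(value_t v) { return (uint32_t)v; }

int main(void)
{
    value_t a = box_double(3.14), b = box_int(42);
    printf("%d %d %u\n", is_double(a), is_double(b), unbox_int(b)); /* 1 0 42 */
    return 0;
}
```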

  • @f1nch3z
    @f1nch3z 2 years ago +1

    i love these math videos because no matter which one i watch, i will be completely lost without fail

  • @krel4
    @krel4 2 years ago +1

    Since Not a Number represents a set of results of invalid or undefined operations, the IEEE standard guarantees that NaN ≠ NaN always holds.
    This is the only value for which you are guaranteed this (without some language trickery). You can find whether a number x is NaN by testing whether x ≠ x.
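    A minimal C demonstration (the division is just one way to manufacture a NaN):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    volatile double zero = 0.0;
    double x = zero / zero;    /* an invalid operation: the result is NaN */
    printf("%d\n", x != x);    /* 1 -- the only value unequal to itself */
    printf("%d\n", isnan(x));  /* the standard spelling of the same test */
    return 0;
}
```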

  • @johnbyrnes6621
    @johnbyrnes6621 2 years ago +1

    I know this is out of scope for your video, but there is a hardware acceleration possible for multiplication by a "negative zero" that doesn't work for multiplication by "positive zero", which is fun to look at

  • @mattguy1773
    @mattguy1773 2 years ago +4

    This would have helped me succeed in computer science

  • @valet_noir
    @valet_noir 2 years ago +1

    I think the fact that 1 = negative and 0 = positive is linked to the way a computer adds and subtracts. The ALU (the chip that does addition and subtraction) has an input bit that is 0 (a LOW electric signal) for addition and 1 (a HIGH electric signal) for subtraction (why LOW means addition and HIGH subtraction comes down to the pattern of the logic gates inside the bit adders).
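    The same idea sketched in C: a - b equals a + ~b + 1, where the + 1 plays the role of the ALU's carry-in driven HIGH for subtraction (the example values are mine):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 200, b = 55;
    uint8_t diff = (uint8_t)(a + (uint8_t)~b + 1); /* ~b + 1 == -b in 2C */
    printf("%u %u\n", (uint8_t)(a - b), diff);     /* 145 145 */
    return 0;
}
```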

  • @wolfelkan8183
    @wolfelkan8183 2 years ago

    Explaining a mathematical concept in such a way that you could have invented it yourself? You weren't the first, but I'm sure he approves!

  • @Truttle1
    @Truttle1 2 years ago

    I learned this in a computer architecture class exactly one year ago but have since forgotten, so thx! :D

  • @ingwerschorle_
    @ingwerschorle_ 2 years ago +6

    numbers aaahh

    • @kijete
      @kijete 2 years ago +2

      you were first

    • @ingwerschorle_
      @ingwerschorle_ 2 years ago +2

      @@kijete and i will not make a big deal of it

  • @TheRedAzuki
    @TheRedAzuki 2 years ago +1

    "left to the implementer's discretion" just means they haven't figured out an optimal way of doing it yet.

  • @vgtcross
    @vgtcross 2 years ago +7

    2:30 got me laughing real hard😂

  • @hjag-is-also-ourplebop
    @hjag-is-also-ourplebop 2 years ago

    I can't believe it took me so long to realize "signed" integers doesn't mean signed as in signature, but signed as in number sign

  • @AceWolf456
    @AceWolf456 1 year ago

    I think I took a 0 when I should have taken a 1 somewhere around the 5 minute mark of this video. I have no idea what is being said 90% of the time, and I’m barely registering these words as English. But I continue watching, simply because the words sound fun and you sound excited. 10/10 video.

  • @fdagpigj
    @fdagpigj 2 years ago

    THANK YOU, I had never come across such an exhaustive yet concise and understandable explanation of this before.

  • @mercylessplayer
    @mercylessplayer 2 years ago

    Coming back with an absolute banger of a video

  • @SnoFitzroy
    @SnoFitzroy 2 years ago +1

    Exactly the type of thing I've been wanting to hear a good explanation for (one that I can understand), by just the type of person I'd expect to talk about something so odd

  • @guiAstorDunc
    @guiAstorDunc 2 years ago +4

    5:43 yeah this is kinda binary in a nutshell from my understanding
    Everyone just agreed that’s how it should be and it’s too late to change it now

  • @juniperbelmont
    @juniperbelmont 2 years ago +1

    Hey, FYI, you make really awesome videos and I love watching all of them and you're really clever and enlightening and I'm so glad you exist.

  • @NXTangl
    @NXTangl 2 years ago

    For anyone who wants to see what supernormal numbers might look like, look up posits, defined in _Beating Floating Point at Its Own Game_ by Gustafson and Yonemoto. Their format has the additional advantage of comparing almost exactly like two's-complement integers, rather than sign-magnitude integers as IEEE floats do. It also has an unsigned infinity, which logically means "a really big number whose sign we don't know."
    Also, I would argue that interval arithmetic over floating points is better, especially if NaN is remapped onto the interval [-Inf, Inf].

  • @Slaydrik
    @Slaydrik 2 years ago

    "a quadrillion plus one is still a quadrillion" reminds me of Fies' Law

  • @SamOliver4
    @SamOliver4 2 years ago

    The coolest thing about the compromises floating point makes, imo, is that even though the constructs it generates are not technically individual numbers (but ranges of them, as you said), they reflect a lot of the interesting realities of working with infinity and infinitesimals in fields like calculus. We wouldn't say "1/0 = ∞" or "1/-0 = -∞" in calculus per se, but _intuitively_ this is more or less correct.
    This, I imagine, is probably in part why this standard ended up sticking even though it's kinda awkward. Arbitrary precision is lost, but for engineers and scientists, you gain something better: a rudimentary reflection of higher-math concepts like infinity.

  • @Icebadger
    @Icebadger 2 years ago

    I don't really know how you make such a wide variety of, what I think of as, boring topics entertaining. But you do. So that's cool. Good video

  • @givecamichips
    @givecamichips 2 years ago

    Between this and Panenkoek2012's video on floats, I've learned way more about them than I thought I would after barely passing my Grade 11 computer class.

  • @Hecatonicosachoron54
    @Hecatonicosachoron54 2 years ago +1

    I appreciate the subtitles 👍

  • @squilliams7124
    @squilliams7124 2 years ago

    yet another banger by jan Misali

  • @Stella-onehalf
    @Stella-onehalf 2 years ago

    Great video! Bit representation standards really are a strange and fascinating rabbit hole, its depth truly is 01111111111111111111111111111111

  • @SleepyVesties
    @SleepyVesties 2 years ago

    I know I'm too dumb for this but I learned something.
    "Zero can actually have all sorts of value."
    As a zero out of ten this is very nice to hear.
    Also even though I don't understand most of it, it's a really entertaining video. Thank you jan Misali.

  • @mohammedbelgoumri
    @mohammedbelgoumri 2 years ago +1

    The answer my computer architecture professor gave when asked why + is 0 and - is 1 is that, mnemonically, a + on the left of a number behaves a lot like an insignificant leading 0 in that it can be omitted. The 1 and the -, on the other hand, cannot.

  • @mygills3050
    @mygills3050 2 years ago +2

    I think NaN should be repurposed as i (sqrt(-1)) with the mantissa used to represent the coefficient.

  • @Dalenthas
    @Dalenthas 2 years ago

    I have a bachelor's in Comp Sci, and this is a better explanation of floats than I ever got in college.

  • @melting0
    @melting0 2 years ago

    I always love your graphics

  • @ri-gor
    @ri-gor 2 years ago +1

    I love this kind of video! Have you heard of posits? Apparently they are a type of variable precision number similar to floating point that are easier to implement in hardware. I'd love to see a video on that, because I can't understand the documentation for it on a deep enough level to get that first global view of the problem. That view really helps me start breaking things down and understanding them better.

  • @gFamWeb
    @gFamWeb 2 months ago

    I know this is a two-year-old video and someone may have already said this, but I think the reason negative is 1 in signed numbers is to be generally backwards compatible with unsigned numbers.

  • @endymallorn
    @endymallorn 2 years ago +1

    You are not a number - you are a free man!

  • @vulduv
    @vulduv 2 years ago +2

    Is this just another one of those videos where someone goes "Look how clever floating point is!" without ever mentioning its downsides?
    *After watching the video...* Hmmmmm... Well, this video did only what the title said and nothing else. Though I feel like a lot of people would get lost in the complexity of floating point and end up assuming it is "the best" for all situations just because it is complicated. (Some people even believe it can represent more numbers than an integer can.)
    So I feel a mention within the video that "this is just one way to store numbers; which numbering system you should use will vary depending on what you do" would have been preferred.
    I have also seen quite a lot of silly cases where floating point is used when it really shouldn't be. Like a progress bar that goes from 0 to 1, literally throwing away all the exponent bits.
    Or counting how many ticks have passed in a game, which will always be a whole number, never negative, and never infinite. Why not just use an integer?
    Honestly... floating point is overused. And I think that mostly comes from people thinking its added complexity somehow makes it "perfect" or "the best."

  • @ohEmily
    @ohEmily 1 year ago

    I tried watching this video but I didn't process a single word.
    I liked and subscribed

  • @olivius8891
    @olivius8891 2 years ago

    Now we can use wan and ala for 1 and 0.
    Finally a functioning "convenient" number system for toki pona.

  • @lebenebou
    @lebenebou 1 month ago

    absolute banger of a video

  • @Amanda-C.
    @Amanda-C. 2 years ago +2

    I haven't dug into it, but I'm fairly sure the treatment of NaN, +/-Inf, and +/-0 was chosen not just to represent very small and very large numbers accurately with finite precision, but also to keep the arithmetic clean enough to implement purely with logic gates, without requiring special treatment of subnormals and supernormals compared to normal numbers. (Not sure if that's true at all, but it would be a design goal if I were designing it, especially given the hardware constraints of the time when the standard was being written.)

    • @Double-Negative
      @Double-Negative 2 years ago

      the subnormals, the biased exponents, and all the edge-case weirdness make hardware optimization much more challenging, but now that the standard is fixed in place, everyone is expected to follow it. The upside is that the standard has been around long enough that we have practice building processors that can do flops fast

    • @Amanda-C.
      @Amanda-C. 2 years ago

      @@Double-Negative Figures it couldn't be that easy... Still, it is an amazingly thorough standard. We can just go into our text editors and our IDEs and type in or measure arbitrary real numbers for our programs to work with, and, somehow, they mostly just work.

  • @frankharr9466
    @frankharr9466 2 years ago

    You're a free man, eh?
    I had to find ways of dealing with this when I designed my unit conversion app. Normally you would just ignore it, but I decided that I needed to allow fractions, and that made these too-small-to-represent numbers that round to zero pop up in many places. It took years and a number of solutions all used together, but I think it kind of works now.

  • @user-qw9yf6zs9t
    @user-qw9yf6zs9t 2 years ago +1

    HOLY FUCKING SHIT I JUST REALIZED WHEN I MOVED VALUES IN COOKIE CLICKER WHEN I PUT IN A BIG NUMBER IT GAVE ME INFINITY THANK YOU

  • @onion_soup_
    @onion_soup_ 1 year ago

    I really love listening to this while I draw, I don't know why

  • @lescitrons
    @lescitrons 2 years ago

    5:49 a good question with a few good answers.

  • @geinelws
    @geinelws 2 years ago

    this is my favorite rhythm heaven remix so far

  • @BonJoviBeatlesLedZep
    @BonJoviBeatlesLedZep 2 years ago

    Okay it is really jarring to see a video where I actually already knew the stuff you're talking about. Am I in the upside down?

  • @TheV360
    @TheV360 2 years ago

    IMO one of the most interesting things to do with NaNs is to "box" values inside the NaN's mantissa bits, and reinterpret those mantissa bits (along with the sign bit in some cases) as a tagged union of other types of values (booleans, integers, pointers to larger structures). It's typically used with 64-bit floating point numbers, as a way to store a variety of types in a single 8-byte value. It's used in quite a few interpreters (including some major browsers' JavaScript engines), and it's surprisingly fast for how complicated it is.

    • @mikefochtman7164
      @mikefochtman7164 2 years ago

      Indeed. I ran across a system that used the two LSBs for such things. The values represented data from a data acquisition system, and they encoded four possible 'flags' in these bits: '00' meant the data was good, '01' meant the data was out of range and had been clipped to the ceiling/floor limit of the DAQ, '10' meant it was stale and no new values could be obtained but this was the last good value, and '11' meant the DAQ itself had failed.
      So the first thing incoming data went through was stripping out those bits and posting the datum's status in another structure. The DAQ just sacrificed those two bits rather than sending another item. (We were using only 15-bit ADCs, so the apparent loss of precision from stealing those bits wasn't really an issue.)

  • @acykablyatley
    @acykablyatley 2 years ago

    One justification for zero representing a positive sign is that the sign bit s can then appear as a factor of (-1)^s in the number's algebraic representation, rather than (-1)^(s+1). And your "floating floating point" number system is similar to some arbitrary-precision arithmetic systems, which are useful in some mathematical and scientific applications but impractical in lots of engineering settings, which are more common in everyday computing. I loved the explanation of subnormals, which many tutorials (and even the algebraic expression I mentioned) usually leave out :) If you want to know more about binary number systems, you can check out the similar logarithmic number systems from the 80s, or the Unum format.

  • @brysonsmith1523
    @brysonsmith1523 2 years ago

    1 being negative in floats aligns with 1 representing negative in ints, which is very useful for bit shifting and general maths for the computer

  • @everlastingwonder6126
    @everlastingwonder6126 2 years ago +1

    I would hypothesize that at least part of the rationale for having 0 as positive and 1 as negative is that it's consistent with how the first bit in a two's complement binary integer behaves, so you can just look at the first bit of any signed number and know what its sign is without having to know whether it's an int or a float.

  • @mollymack98
    @mollymack98 2 years ago

    "Unless you're doing something *really* silly like storing the current date and time as the exact number of seconds that have passed since midnight on January 1st 1970" #calledout

  • @krel4
    @krel4 2 years ago +1

    Looking at this video, I'm a bit bummed that there's no "actually 0" value in the format.
    It would be nice to have negative 0 for the range (-a, 0), positive 0 for the range (0, a), and a plain zero for the range {0}. That way there would be exactly one way to write 0, plus a way to distinguish a limit approaching 0 from exact 0.

  • @Scar32
    @Scar32 7 months ago

    the whole thing about everything being an estimate with floating point math is kinda the same for a lot of things in programming
    like yeah, computers are supposed to be this thing that gives you consistent results, but often, especially when making a program to control your mouse (i use mac), it's really like irl, where if you go too fast stuff breaks, so you have to slow down to get to a reasonable state. and even then, depending on how much load or whatever the computer is dealing with, it still could break.