Floating Point Numbers - Computerphile
- Published: 27 Nov 2024
- Why can't floating point do money? It's a brilliant solution for speed of calculations in the computer, but how and why does moving the decimal point (well, in this case binary or radix point) help and how does it get currency so wrong?
3D Graphics Playlist: • Triangles and Pixels
The Trouble with Timezones: • The Problem with Time ...
More from Tom Scott: / enyay and / tomscott
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computerphile is a sister project to Brady Haran's Numberphile. See the full list of Brady's video projects at: bit.ly/bradycha...
I think this actually used to be a flaw in some banking systems, because programmers initially used floating-point numbers to store account balances. And then someone took advantage of it by making a lot of transactions between 2 of his accounts, where each transaction earned him a tiny bit of money due to rounding. So in the long term he was able to "create" as much money as he wanted.
Explains weird results I got in BASIC programs 29.99999998 years ago!
I got 0.9999... Problems and floating point is one.
Another issue I've run across with floating point rounding errors is the fact that - for the same reasons outlined in the video - a comparative statement like "0.1 + 0.2 == 0.3" comes out false, and it can be super annoying to pick out where the error is coming from, especially if it's buried in some if statement. For things like currencies, I usually just deal with ints that represent the number of cents (e.g. $2.50 is represented as 250), and divide by 100 in the end... saves me the hair tearing.
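A minimal Python sketch of both halves of that comment (the prices are made up for illustration):

print(0.1 + 0.2 == 0.3)   # False: neither 0.1 nor 0.2 is exact in binary
print(0.1 + 0.2)          # 0.30000000000000004

# The integer-cents workaround: $2.50 is stored as 250
price_cents = 250
tax_cents = 30
total_cents = price_cents + tax_cents
print(f"${total_cents / 100:.2f}")   # $2.80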
Small correction: You don't need a 64-bit computer to use 64-bit floating point numbers in hardware. For example on Intel processors 64-bit floating point has been supported since at least the 8087, which was a math co-processor for the 16-bit 8086 CPU. (The math co-processor has been integrated into the main CPU since the 80486, which was a 32-bit processor.)
This is great, I've finally understood what it means (and why) when people say "loses precision" when referring to floating points.
Everything is easier to understand if it's explained in a British accent
Minecraft stores entities as floating-point numbers; when the world was infinite you could teleport something like 30,000 km in any direction and see objects start to move and stutter about, including the player.
Once you hit 32-bit int overflow the world would become Swiss cheese, and at the 64-bit int overflow the world would collapse and crash.
I appreciate the fact that you get into the topic right at the first second. You don't see this in the world very often.
nought (nɔːt)
(Mathematics) the digit 0; zero: used esp in counting or numbering
For those who were wondering. I guess it has the advantage of having half as many syllables as "zero".
My calculus teacher in high school loved that word, mainly because when he'd have a subscript of 0, he'd use the variable y just to say, "Well, y-nought?"
I remember the first time I experienced this. I was writing a Pac-Man clone, and I set Pac-Man's speed to be 0.2, where 1.0 would be the distance from one dot to another. Everything worked fine until I started coding the wall collisions, where Pac-Man keeps going straight ahead until hitting a wall, causing him to stop. The code checked to see if Pac-Man's location was a whole integer value, like 3.0, and if it was it would figure out if a wall had been hit. When I tested it, though, Pac-Man went straight through the walls. If I changed the speed to 0.25, though, it worked exactly as expected. I was baffled for a few moments, and then it hit me. Computers don't store decimal values the way you might first expect.
Ran into this 30 years ago; I was working in finance and trying my hand at simple applications running fairly complex transactions over a many-years horizon. Was dumbfounded then. Took me a while to learn what was going on. Thanks for the trip down memory lane.
Tom Scott's videos strike me (a young CS student) as my favorite. They're really authentic with the "and this is the part where you, a young programmer, start to tear your hair out"--please keep them up, I'm really enjoying it!
I have a computer science degree from a top school, and yet nothing was ever explained nearly as well as this.
I love this YouTube channel. Absolutely brilliant explanation. Thank you!!
This is why Minecraft 1.7.3 alpha has block errors after 2 million blocks away from spawn. Pretty cool.
Well, I've been working with computers for some 35 years, and while very early compilers used to do 32-bit FP (floating point), around 20 years ago some people got together and settled on standards for floating point on computers, and soon after 80-bit FP became a standard even though the computers were only 16- or 32-bit at the time. Basically the machine's register size (32-bit etc.) has nothing to do with the usable number size, as a sub-program deals with it. Sure, it won't be as time efficient, but that's not what was suggested here.
OK, so why does the cheap pocket calculator show the correct answer? I thought they used floating point numbers too. Do they have electronics that do computation in decimal?
Can you PLEASE do more with Tom Scott? He's awesome!
Do you remember the (in)famous Apple IIe 2 squared error? 2+2 yielded four, as it should have, but 2^2 yielded something like 3.9999998, and floating point arithmetic was difficult on 8-bit computers anyway. I once even used a character array to emulate fifteen digits after the decimal point, not anything I'd do nowadays, but it worked then.
An even better example of floating point error is trying to add 1/5 and 4/5 in binary, which is similar to the example in the video about adding thirds:
1/5 = 0.0011~
4/5 = 0.1100~
1/5 + 4/5 = 0.1111~
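You can see that repeating expansion, and the rounding it forces, directly in Python; a small sketch:

print((0.2).hex())             # 0x1.999999999999ap-3: the ...99a tail is where
                               # the repeating binary pattern got rounded off
print(0.2 + 0.2 + 0.2 == 0.6)  # False: the truncation errors accumulate
print(0.2 + 0.8 == 1.0)        # True: here the rounding happens to cancel out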
starting out programming and some bullshit turned out as 4.000000005 instead of 4 so i'm here now xD
So is this why I got like 5.0000003 when I input 25 into my self-written square-root function?
I love the way that all of the scrap paper for these series is clearly paper for a tractor-fed printer that's probably been in a box since the 80s.
I've put up English closed captions for this, since I haven't seen anyone else do CCs for this video, which is weird.
I think it still needs to be authorised before it shows up on the channel, though.
Before anything, great explanation!!!
That is why supermarket checkout systems express currency in whole cents rather than decimals; at the end, your bill is divided by 100 and you get the exact amount to pay
Wow! I had wondered for years why Microsoft Excel would always end up with weird figures that would just show up way out in the 15th-or-so decimal places of the returned values from SUM formulas. Now I know. Thank you!
American here, just hearing 'Not Not Not Not Not Not Not Not Not Not Not seven'
My old HP-25c and 15c had Engineering notation. It was Scientific notation, but the exponent was always a multiple of 3, with up to three digits left of the decimal point. Getting your answers formatted as 823.45x10⁻⁶ instead of 8.2345x10⁻⁴ made working with the Metric system effortless, absolutely brilliant.
You should have mentioned: never use a check for equality with floats. The chances are very high you will never match it.
On the other hand, I guess every programmer should make that mistake once. E.g. decrease a whole number by 0.1 until you reach zero. Be prepared for a never-ending loop.
Many compilers throw a warning when you try to compare floats.
I am not a computer scientist, but instead learned to code for research purposes and fun.
There are cases where you have to compare floating point numbers. When I do so, my trick is to come up with some acceptable tolerance around them or to convert to large integers by multiplying or dividing by scalars, rounding, taking ceiling or floor, etc. Basically, I massage into an integer that I can control somewhat or use less than or greater than but watch what I'm up to. I've had many programs not function properly because I did not take into account the behavior of floating point.
Are there other techniques that are better that I'm missing? I'm mostly self-taught and come up with workarounds on my own.
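The tolerance approach described above is the standard one; since Python 3.5 it's even built in as math.isclose. A minimal sketch:

import math

a = 0.1 + 0.2
print(a == 0.3)                              # False
print(math.isclose(a, 0.3))                  # True (default rel_tol=1e-9)
print(math.isclose(a, 0.3, abs_tol=1e-12))   # absolute tolerance, better near zero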
Entirely agree plus there's no reason (that I know of) to use a float for a loop counter.
Yeah, most compilers I've used actually see the problem and pop a warning; in Eclipse you can even go as far as using CheckStyle to auto-correct your operations with floats (so it will highlight the possible problems right away).
Float is great for speed, but you have to do some hacks to get precision again.
I was only one minute into the video and you already answered my questions! I am not a specialist and literally have no idea what floating points are; after hours of searching this is the first video that makes sense to me!! Thanks
Tom is very passionate about his numbers being correct. Some of his explanations of these errors almost sound as though the numbers have personally offended him.
I find it rather charming.
Floating point is quite nice for audio. It means that very quiet passages have roughly as much accuracy as loud passages. It can also be annoying for audio too. If you want to add DC offset for some types of processing, the accuracy plummets (as I pointed out in another comment here).
example:
If you have a quiet passage and you do something like
x = in + 1;
out = x - 1;
out will have much lower accuracy than in.
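A quick Python illustration of that add-then-subtract accuracy loss (1e-8 standing in for a quiet sample):

x = 1e-8               # a "quiet" sample value
y = (x + 1.0) - 1.0    # add a DC offset of 1, then remove it
print(y)               # 9.99999993922529e-09, no longer exactly 1e-08
print(y == x)          # False: precision was lost against the large offset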
Because we do not perceive loudness in a linear form, the number of positions offered to 'louder' portions is significantly larger than those allocated to the 'quiet' portions, and thus it is easiest to tell the bit depth of a recording by listening to the quietest passages, not the loudest.
And yet they tend to use absolute values that scale to the DAC values, like 8-, 12-, 16-, 24-bit PCM. In the end ALL codecs have to feed a digital-to-analog converter that is N bits. Is there an audio format that uses floating point values? I don't know of one, but that doesn't mean anything.
Moonteeth62 Most DAWs process audio with floating point. That's why Nuendo sounded better than ProTools for years. It's also much easier to be "sloppy" with floating point. You don't have to worry about clipping at all; with 32-bit float, you essentially have a scalable 24-bit PCM. However, I don't know of any modern floating point ADCs or DACs. There were some in the early digital years to squeeze the most out of 12 bits and that sort of thing. Converters on early AMS and EMT gear from the late 70s and early 80s were that way.
Our senses are generally logarithmic: loudness, brightness, skin pressure. Interestingly, there have been studies indicating that tribes without contact with civilized culture often have a logarithmic perception of numbers, as in they would "count" (so to speak) in exponents: 1, 2, 4, 8, 16, etc. It makes sense to me though. If you have a million dollars, you don't care about 10 bucks. If you've got 40 bucks, you care about 10 bucks.
Thanks a lot for your explanation! Floating point numbers are a very big problem in spreadsheet calculations that most people are not aware of. If you do a lookup or IF function, the normal user expects that his numbers are correct and not smaller or bigger. I always round a value to cut the error off.
I prefer 3 bit floats. They have 1 sign bit, 1 exponent bit, and 1 significand bit. They can encode ±0, ±1, ±∞, and NaN, which is all you really need.
Please, anyone reading this, don't use float for finances. It's a mistake I see people make all the time, please just don't.
...that's why the point is floating. *sudden clarity moment*
Jan Misali basically said (heavy paraphrasing here) "0.3" in floating point doesn't really represent exactly 0.3; it represents 0.3 _and_ every infinitely-many other real numbers that are close enough to 0.3 to be effectively indistinguishable to the format. Basically, every floating point number is actually a range of infinitely many real numbers.
Hah, this reminds me of a programming exercise that I had to undertake in Algorithms 2. The teacher wanted us to calculate a continuous moving average for a set of values. Since the data requirement was so minimal, I decided to store the last n digits in an array, and cycle through them when new numbers appeared. When needed, the moving average was calculated by adding the numbers together and dividing by n.
My program would fail the automated test, because it failed to include the almost 3% error that the professor had gotten by updating a floating point average value at every step of the calculation. I had to explain to about 5 other students that their program was too accurate and needed to be downgraded.
If you type
0.1 + 0.2
into the python 3.4.1 shell you get
0.30000000000000004
But if you type
1/3 + 1/3 + 1/3
you get 1.0
Thank you! I came across this problem in some homework recently. I stored a decimal number plus an incrementing number in the float variable that the teacher provided in his premade node, and started getting this problem.
The variable was supposed to hold user input anyway*, and since the problem didn't show up when I did it that way, I didn't bother worrying about it. It did confuse the hell out of me, though, so I'm glad to find an explanation.
*I got tired of doing the user input part while trying to test my Linked List, because it involved about six different questions to answer per node.
Have been programming for a while now (self-taught, 5 years in school, 1 semester in university) and I wasn't even aware of this. I LIKE :)
In Python, doing the 0.1 + 0.2 gave me 0.3000000... Then the last digit was 4.. This lesson helped me understand the situation :)
This primer now makes it all make sense.
I've written a few programs where floats were required and they always pooped the bed somehow; to solve it I was doing type conversions to do integer math, then converting back... such a pain!
Avoiding float errors is one of the first things every budding young programmer should learn. Another example of where these problems arise is in high-precision timing applications where rates of change are calculated based on floating point inputs.
Tom Scott's talk is so interesting and easy to understand
I love this guy
He says that the last digit is not a problem, but once I tried to make a timer which kept adding 0.1 to itself, and when it reached a number (for example 3), it would activate another script. It never activated because of those floating point numbers. Thanks for the explanation, Computerphile!!
Exceptionally clear and engaging.
You're obviously clever, but are also able to break things down and be interesting.
I learned a lot - and not just about floating point. Thank you.
Well done on explaining it so well! For a long time I thought it a sod and left it. I finally had to do it in PIC assembly and then after I'd struggled to victory I get told straightforwardly over a single video many years later.
Keep up the excellent stuff!
The problem is actually much worse than this. If you think through the implications of scientific notation, large integral values are totally screwed too. This example happens to be in Swift but it doesn't matter:
1> var x: Float = 1_000_000_000
x: Float = 1.0E+9
2> x = x + 1
3> print(x)
1e+09
That's right, 1 billion plus 1 equals 1 billion, using 32-bit precision floating point numbers.
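You can reproduce that in Python too; assuming NumPy is available, its float32 type and np.spacing show why the 1 vanishes:

import numpy as np

x = np.float32(1_000_000_000)
print(np.spacing(x))             # 64.0: the gap between adjacent float32s at 1e9
print(x + np.float32(1) == x)    # True: +1 can't reach the next representable value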
Congratulations on getting Tom Scott! He seems like a great teacher; I enjoyed this video very much, and it doesn't matter that I knew the stuff already
I run into this problem all the time at work...I deal a lot with currency. great explanation
5 years as a programmer, and I finally understand floating point numbers =P
I have a college education, they just never explained it in a way that I understood it!
+エックザック True, it's like you have one shot to learn in a lesson and then it's gone, while on here you can relearn from many videos if you don't quite understand.
Wow. I remember this from my early times in programming. Now I am learning programming in school and those programming languages are so smart that they fix these errors for us. It makes me kind of sad to think that in the future these things will be done for us, and an understanding of these sort of things is going to be obsolete.
I'm not a programmer, but a computer hardware scientist. One thing that annoys me a lot is programmers who see these rounding errors and just go for 64-bit floating points because they don't want to see the rounding errors... normally not because it's needed... it very seldomly is.
To add to my annoyance, they then use a 32-bit compiler and run it on a 64-bit operating system with a 64-bit processor. Then they ask me: why is my program running so slow, what is wrong with this computer?
Here is the thing: the 32-bit ALU in a 32-bit CPU obviously can't do any 64-bit calculation. What the compiler does is break down the one 64-bit instruction into eight (8) 32-bit instructions, which basically makes the program run 8 times more slowly. When the program is run on a 64-bit CPU, the CPU doesn't understand that the instruction was actually 64-bit before compiling, and it runs it in backwards-compatibility mode; it works just fine, but is a lot slower.
On the other hand, if the program is made to use 32-bit floating points and uses a (somewhat smart) 64-bit compiler, the program sadly doesn't run 8 times faster, but still 2 times faster.
That's the thing:
64-bit number on a 32-bit compiler (don't care about the CPU): 8 times slower.
32-bit number on a 64-bit compiler (needs a 64-bit CPU): only, in the best cases, twice as fast.
Basically, don't do it wrong.
What annoys me, as a computer science student, is being told by default to use a 32-bit int to count just a handful of things. Then everybody jokes about how memory-hungry modern programs are.
Also, being told that the only optimization that matters is moving down a big-O class. Doubling the speed of the algorithm doesn't matter, but moving from O(n) to O(logn) is the real gold. Because all programs have infinite data sets.
Actually, 16-bit floating points can be used for a number of things too, e.g. game physics simulation where there is no need for really good precision... but I don't know if modern processors support 16-bit SIMD; I guess not, and then there will be a performance loss instead of a gain. Well, it does of course save cache and RAM as well as bandwidth.
But if you talk about stats, part numbers, information and so on, the number of bits can just as well be 32, because they usually don't exist on a mass level. I.e. if you have a million parts all using a 32-bit description instead of 16-bit, you only save 2MB.
Where RAM is consumed on a mass level is in textures and terrain data. Sadly, lossless realtime texture compression is still not used at scale... Well well, next time AMD/Nvidia run into the bandwidth wall, they probably will do it.
0pyrophosphate0 What annoys me, as a computer science graduate, is when computer science students think they've mastered a topic that they've barely dipped their toes in.
Follow the advice of your tutors on int sizes - with years of experience and countless integer overflow bugs and performance issues with unaligned memory access you'll learn to appreciate their wisdom.
totoritko You don't know me or what I've studied or how much. No need to be condescending. I'm quite aware of how much I don't know, and I do know there are pitfalls to avoid when dealing with memory. I just would rather be taught about those pitfalls and how to avoid, debug, and fix them, rather than to just shut up and use int.
Even worse, I think, is using double for everything with a decimal point.
I'm also fully prepared to accept in 5-10 years that my current attitude is ridiculous, if in fact it turns out that way.
0pyrophosphate0 Your program is not going to turn into a memory-hungry beast just because you're using 32-bit integers instead of 16- or 8-bit integers. It might become a bit faster though.
Thank you for what's probably the best explanation of floating point arithmetic that now exists! It's easy from this starting point to extrapolate into understanding floating point operations (FLOPs), and half (16bit), single (32 bit), and double (64bit) precision floating point numbers! Thanks Tom!
The place that I have seen this (rounding errors with floats) become significant is in summing large lists of values which have a large variation in value. When adding small values to an existing large value the entire change can be lost in the inaccuracy of the floating point (it just gets dropped as not being significant). If you do this many times (add a small value to a large value) the end result can still be no change even though you have cumulatively added a large value. For this type of data, sorting the values from smallest to largest and summing them in that order gives a more accurate end result, assuming that no one value is significantly larger than the current total to which it is being added. I realise I'm not defining significant, small and large sufficiently, but just enough to make my point. Maybe.
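A small Python sketch of that effect, with made-up values (one huge number plus many small ones):

import math

values = [1e16] + [1.0] * 1000

naive = 0.0
for v in values:        # big value first: every +1.0 rounds straight back to 1e16
    naive += v

print(naive)                  # 1e+16: the thousand 1.0s vanished entirely
print(sum(sorted(values)))    # 1.0000000000001e+16: smallest-first preserves them
print(math.fsum(values))      # 1.0000000000001e+16: exactly-rounded summation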
Aren't floating point errors what they used in Office Space to ripoff Initech?
I've taken five semesters of calculus, chemistry, and several engineering and physics classes. I've used scientific notation for years...and this video is the first time I've heard an explanation for WHY scientific notation is used.
Would be fantastic to see a video done on Fixed Point, which is the other way to solve the Floating Point accuracy issues for some small length of numbers, especially as you can store the decimal component as an integer and do some clever maths computing the overflow quantity back into the integer component. This is actually how programs like Excel solve the problem when you click the Currency button.
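For illustration, a minimal fixed-point sketch in Python; the four-decimal scale and the fx helper names are invented for this example, not necessarily what Excel does internally:

SCALE = 10_000   # four fixed decimal places, stored in plain integers

def fx(x):
    return round(x * SCALE)       # hypothetical helper: float -> fixed point

def fx_mul(a, b):
    return (a * b) // SCALE       # rescale the double-width product back down

price, qty = fx(19.99), fx(3)
print(fx_mul(price, qty) / SCALE) # 59.97, with no binary-fraction drift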
and this is why you should never fuck with floating point numbers
I would have loved to see more about the reserved cases of the floating point standard such as NaN, as well as some more on the topic of normalization. Hope there's more to this in some "extra bits"
You guys provide a far better lesson than anything I got in school.
I watch Computerphile's videos for this guy.
New languages (like Clojure) are starting to include fractions as a data type. If the numerator and denominator are arbitrary precision (integers that scale as necessary) then you can represent and do operations on any rational number with no error.
Sure; just keep in mind that this introduces both memory and computational overhead. :)
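Python ships the same idea in its standard library as fractions.Fraction (with arbitrary-precision numerator and denominator); a quick sketch:

from fractions import Fraction

third = Fraction(1, 3)
print(third + third + third)                                  # 1, exactly
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True, no rounding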
Actually, in graphics it's also a problem, even being off by a fraction of a pixel can give a visible dust of gaps between surfaces.
Careful rounding usually helps with floating point issues.
Uhm. Pixels can't move, and there are fewer than 3000 tall/long on most screens. I fail to see how it could make a difference.
For example in 3D graphics you also have rotation, translation and perspective transformations and they can individually or in combination put the edges of surfaces right either side of the centre of the pixel, so the point used for the pixel isn't touching either surface.
BooBaddyBig Well I don't make graphics, so I'm going to say that it still doesn't really make sense, but I trust what you say regardless.
it took me a long while, but i just noticed that the "/" was added at the end. awesome!
Tom Scott is probably my favourite guest.
This finally explains strange results in a program I wrote for first-year uni. The program was meant to calculate change in the fewest possible coins, but whenever I had 10c left it would always give two 5c coins; for the life of me I couldn't figure out why. Now I know. It also used to happen with the train ticket machines, so at least I'm not alone in making that error.
What the video says holds true for digital computers,
but analogue ones can understand recurring numbers, and 1/3 + 1/3 absolutely makes sense and will yield 2/3.
I'm loving this channel. It's heaven.
Finally, I start to understand what the floating point is!!!
It's so nice to hear someone say "maths" on YouTube.
It's probably well worth pointing out that when you try to add a small number onto a big number you get into trouble. This is really bad if you're trying to integrate a floating point the way you would increment an integer. Even if the number you're adding is nice and tidy, like 1.0 (tidy in base 2 as well as base 10), for example, when your variable gets really big, like 3E8 (your example), the computer can't add a small number because it gets lost due to the significant digits problem. This may be self evident to many of us, but this single little aspect of dealing with floating points can cause a lot of trouble. I keep it in mind whenever I'm dealing with floating points, especially on platforms that handle fewer significant digits, like many PLC's, DCS's, process control systems, etc. If the compiler allows it, it's sometimes better to use really big integers, or even multiple registers (like a times 100000 tag and a times 1 tag for less than 100000) to allow you to keep adding.
Some software supports integers of length n, which solves a lot of these problems, but has its own issues, of course (speed, memory).
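Another standard fix for the lost-small-addend problem is Kahan (compensated) summation; a minimal Python sketch:

def kahan_sum(values):
    total = 0.0
    comp = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - comp            # correct the new term by the previous error
        t = total + y           # big + small: low-order bits of y may be lost
        comp = (t - total) - y  # recover exactly what was just lost
        total = t
    return total

values = [1e16] + [1.0] * 1000
print(sum(values))        # 1e+16: the ones are absorbed and lost
print(kahan_sum(values))  # 1.0000000000001e+16: the compensation recovers them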
Who is this man? He needs a fucking medal. Give him 3!
Mate... Respect for still having the blue/white lined FORM FEED paper LOL i still have a few boxes myself, it's good for scrap huh.
What I missed in the video is an explanation of how the calculator can still show 0.1 + 0.2 as 0.3 when it uses floating point numbers. The calculator stores more precision than it shows on the screen, and the leftover digits are used for rounding, so most stable calculations display without error. This is important, because programmers need to be aware of these errors, how to handle them, what error is tolerable, and what precision they need to keep up that requirement.
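Python shows the same guard-digit trick: repr prints the full stored value, while formatted output rounds for display the way a calculator does. A sketch:

x = 0.1 + 0.2
print(repr(x))       # 0.30000000000000004: the value actually stored
print(f"{x:.10g}")   # 0.3: rounded to 10 significant digits for display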
An excellent explanation and a pleasure to watch.
Could do a continuation where NaN and all the other fun stuff from IEEE 754 are explained and while at it maybe also talk about the different rounding modes and encodings of negative numbers.
Once you know that 0.1 + 0.2 ≠ 0.3 is just a phenomenon that you need to get used to, because it is the way the computer treats floating numbers, you will not be scared by the "≠ 0.3" stuff.
A clarification I'd like to make to this video. The hardware architecture "32-bit computer" (x86) or "64-bit computer" has nothing to do with the precision of floating point numbers. That is strictly used for memory addressing. So any precision value can still be used on any hardware, though some may be more efficient at it than others.
There are typically two types of trouble with floating point arithmetic:
1. Adding lots of them together, accumulating errors
2. Trying to test for equality
The first one actually is nasty, but the second points to flawed algorithms. When working with the real numbers, even in the formulation of e.g. analysis of reals, we work with inequality, not equality. It is almost always better to implement an algorithm working with real numbers by using bounds.
It maybe would've been cool to mention that about half of all floating point numbers fall within the [-1,1] range, while the other half stretches out all the way to the highest exponents. In fact, there's a set amount of numbers that fall between ranges of numbers that are powers of two. So, for instance, the amount of numbers between 1 and 2 is equal to the amount between 2 and 4, 4 and 8 and so on.
This makes normalising your numbers a good idea when you need to do precise calculations. An operation of 1000.1+1000.2 is gonna give you an even bigger error than 0.1 + 0.2. Of course, if you can avoid errors entirely by using some other (software) number format without impacting your performance too much, then that's the better solution.
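Python 3.9+ exposes this spacing directly via math.ulp; a quick demonstration of both points:

import math

# The gap between adjacent doubles doubles at each power of two:
print(math.ulp(1.0))      # 2.220446049250313e-16
print(math.ulp(2.0))      # 4.440892098500626e-16
print(math.ulp(1024.0))   # 2.2737367544323206e-13

# So the same arithmetic carries a larger absolute error at larger magnitudes:
print(0.1 + 0.2)          # 0.30000000000000004
print(1000.1 + 1000.2)    # 2000.3000000000002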
Fantastic explanation! Not heard it explained anywhere this clearly before and it's a great way to explain it. Keep these videos going Brady / Sean!
As somebody who programmed for a payroll firm for 5 years, and now program for a credit risk firm - i totally appreciate the benefits of using decimal types!
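In Python the equivalent is the decimal module; note that Decimal values should be built from strings, since building them from floats just inherits the binary error. A sketch:

from decimal import Decimal

print(sum([0.1] * 10))               # 0.9999999999999999
print(sum([Decimal("0.1")] * 10))    # 1.0: base-10 arithmetic, like pen and paper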
One bugaboo, where the last decimal place IS important, is with floating point expressions involving differences of squares, such as: 1 - (xx^2 + yy^2 + zz^2) for a unit vector.
These can give small NEGATIVE numbers where the true result is zero, when the terms nearly cancel. Try to take a square root or a logarithm and you end up with NaN (not-a-number), and this will carry all the way through the code giving errors.
Similar errors occur with other functions that have limited input ranges. They are easily avoidable, but they always manage to sneak in somewhere.
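The classic avoidance pattern is to clamp the value into the function's legal domain before the call; a Python sketch using acos (the same max(0.0, ...) guard works for sqrt and log):

import math

x = (0.1 + 0.2) / 0.3    # mathematically 1.0, actually 1.0000000000000002
try:
    math.acos(x)         # acos needs its input in [-1, 1]
except ValueError as e:
    print(e)             # math domain error

print(math.acos(min(1.0, max(-1.0, x))))   # 0.0: clamp first, then call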
I already understood this concept but I thought it was a really good explanation regardless. Some tricky things in here to get your head around initially so hopefully people find it useful!
"second life" uses 32-bit floats for the positions of anything within chunks that are 256x256x1024 meters wide (not using the higher number range, disregarding accuracy for performance) . It also uses quaternions of 4 of these floats for any rotation. Floating point rounding errors quickly add up to something that is easily visible, especially if multiple objects are grouped/linked and transformed relative to each other.
Hey, here's an idea: interviews with computer-y people about their background. I think Tom's would be interesting because I'm pretty sure his professional qualification is as a linguist, so it would be cool to hear about why he learned all the compsci stuff that he did
I like how they add a / to mark the end of the video
Very excellent explanation of Floating Point Numbers. Now it would be great to explain Decimal Floating-Point (DFP), aka IEEE 754-2008. As for financial transactions, people often balk at the one language which utilizes BCD numbers for representing currency.
I've run into this a few times at work using Excel. It seemed very strange and looking into why it happened was enlightening and interesting... plus, we found a few ways to correct it.
And then you have a simple "if (d
To come back to the 3d game example:
In older versions of Minecraft there is a bug caused by floating point rounding errors. Because the game world is procedurally generated, those errors cause errors in the world generation. They only become significant when one of the coordinates is at about +/- 12.5 million, but there the world generation changes significantly.
This guy is a very good teacher.
This reminds me of Office Space (the movie) in which the characters discuss plans to modify the bank software's source code which would take those tiny fractions of a cent in everyday banking transactions and instead of rounding it down, would transfer the difference to their account.
Watching this, I already understood floating point numbers, but I did not understand scientific notation. Ironically, this video has taught me the exact opposite of what it intended to
"32 bit computer" -> that's about addressing. They can handle 64 bit floating-point arithmetic.
You could build a computer that can do 128 bit floating-point arithmetic and uses only 16 bit for addressing. There certainly is some relation when it comes to CPU design and the floating-point unit size, register size and address size. But it's not really the same. It might even be handled by a coprocessor. And FPU stack registers are usually 80 bits (10 bytes) wide, not 64 bits. See fld, fild, and fbld in assembly.
Thanks for closing that tag at the end, Brady!
My OCD is finally fixed!
The problem is that the tags from the first few videos are still open. I guess the nesting depth is still a few dozen levels deep. And you'd have to include the closing tags at the end of all these early videos too. INSANE. O.o
I tried not to think about that... I tried.
mailsprower1 Brady should make a "closing tags only" video or put them in as "cameo" here and there in future videos. :)
I never thought floating point could be described in such an interesting way!
This is a clever guy who explains complicated things in a simple way.
This actually cleared up FP logic for me a lot. Thanks for that.