I just discovered your page today and I just want to thank you! I moved to a new technical sales role but I had no background in development. Your videos have really helped me figure out a few things. Really grateful. Keep up the good work!
It kind of sucks that this side of his content doesn't get a lot of attention. I really liked these videos and they helped me a lot to understand basic programming concepts. Same with Michael Reeves's programming tutorials. (Yes, he did those, like two or three videos I think.) Jabrils will forever be my major programming inspiration.
Short answer: float = decimal number
1.45 = float
1.84628473 = float
2 = integer (not float)
2.0 = float
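For anyone who wants to check the classification above themselves, Python's built-in type() confirms it directly:

```python
# type() reveals whether a literal is an int or a float.
print(type(1.45))        # float
print(type(1.84628473))  # float
print(type(2))           # int -- no decimal point
print(type(2.0))         # float -- the decimal point makes it a float
```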
This is the best tutorial on youtube
hands down
What is the difference between a float value and a double value? In my high school I learned double, and in my own time and projects I use float, so what is the difference between the two?
Size difference: in most languages *doubles* are twice (double :) ) the size of a *float*, so they're more precise and can hold more digits than floats. I believe a float has about 7 significant digits and a double about 15.
According to Google, a double has 2x the precision of a float (however, it also requires more memory).
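A rough illustration of that precision gap: Python's own float is a 64-bit double, and the standard struct module can round-trip a value through a 32-bit float to show how much precision survives (a sketch, not the exact per-language limits):

```python
import struct

pi64 = 3.141592653589793  # Python's float is a 64-bit double (~15-16 digits)

# Round-trip through a 32-bit float to see how much precision survives:
pi32 = struct.unpack('f', struct.pack('f', pi64))[0]

print(pi64)  # all ~16 digits intact
print(pi32)  # only ~7 significant digits still match pi
```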
@@thelandoftwitchclips English is my second language, but thanks for being so kind as to point that out, may I ask where my grammar mistakes are and what I should've wrote instead
@@ostro7941 "I learnt" would be "I learned", as far as I can see (though "learnt" is also accepted in British English). English is my 2nd language too.
Not sure how I'm just recently getting fed some of your older videos (I mean, I've had this tab open for a bit, but not _that_ long), but here I am, and this video is great! That said, I do have a few quibbles (some of them more joking than real), to share (guessing none of this will be news to you, but hoping it might be useful to future readers, maybe):
First, 0:12 - you seem to be implying that this is where floats get their name, but it's not. Indeed, floats can hold a value that is an integer, so, they're not floating between anything. What _is_ floating, though, is the decimal point. For example, 1.234, 12.34, and 123.4 are all representations that have the same digits, but a different position for the decimal point. The decimal point can "float" to different positions, while the digits of the number stay the same. (Of course, this is complicated a bit by the fact that it's the binary digits that stay the same, rather than the decimal ones, but that's going beyond where this video seems to go, so I'll just mention it for completeness and move on.)
An alternative is _fixed point_ numbers -- something that's sometimes used to express currency, for example, where in everyday use, there's nothing smaller than (in various extant currency systems) one "cent", or 1/100th of whatever the currency is (U.S. Dollars, Euros, what have you.) So, you could have 1.23, 12.34, and 123.40 as the closest approximations of what we had before. Note that there are now always exactly two digits following the decimal point. That's what makes it a "fixed point" number.
The internal representations of these can be quite different, as can various other aspects. For example, one of the big advantages of floating point numbers is that they can express very large and very small numbers with the same amount of digital storage. As a human-friendly example (not the actual limits), both 1 billion (1,000,000,000) and 1 _billionth_ (0.000000001), in floating point terms, are basically the same number - 1.0 - times an exponent -- in human terms, 10 raised to some power, in this case either 9 or negative 9. So, 1.0*10^(±9). (Again, inside the computer, this is all binary, so it's a little different, but hopefully this gets the idea across.)
An interesting thing to note about this, though, is that doing both small and large at the same time is, depending on scale, difficult to impossible. Abusing my above example a little, suppose we only had the two digits with which to express the number that gets multiplied by 10-to-whatever-power... If you wanted to represent 1.1 billion (or 1 billion, 100 million), no problem. But if you want to represent 1.01 billion (1 billion, 10 million), you can't do it. (In practice, it doesn't run out of space that quickly, for sure, but you probably genuinely couldn't express 1,000,000,000.000000001 -- the two ones are too far apart! [And actually, I just tested it in C, and got that this can't be represented with a "float" or a "double" (double-precision float), but with a "long double" (even more precision), it can be.])
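The "two ones too far apart" point is easy to demonstrate in Python, whose float is a 64-bit double:

```python
# A 64-bit float carries ~15-16 significant decimal digits, so a value
# needing a '1' nine places above AND nine places below the decimal
# point (19 significant digits) can't be stored exactly:
x = 1_000_000_000.000000001
print(x == 1_000_000_000.0)  # True -- the billionth is silently lost
```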
Next, 10:46 - hah. Well, I would agree they can sure be "devilish", though I don't think the devil made them. Humans did. :)
And right after that, at 10:50 - I mean, you're not wrong here, but it does have some potential to be misleading. On the computer, using a "float" (rather than some other representation of a real number), there are only a finite number of numbers that can be represented, let alone numbers between two integers (which, depending on the size of the integer, may be quite small or even none at all! -- for example, in C (at least on my system), with a simple "float", there's no way to represent any numbers between 1,000,000,000 and 1,000,000,064, or between that and 1,000,000,128, etc.) So, the problem with the 4.4399999999999995 is... a little different than what you seem to be implying. For example, if I print out the math you're doing using C's printf() function, and the simple formatter of "%f", I get:
7.770000 + 3.330000 = 11.100000
7.770000 - 3.330000 = 4.440000
But if I explicitly ask for more precision (by using "%11.9f" for the base numbers, and %12.9f for the result), I get:
7.769999981 + 3.329999924 == 11.100000381
7.769999981 - 3.329999924 == 4.440000057
It's interesting to me that this is different from the Python result... I guess it's using a different internal representation, or something. Anyway...
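For what it's worth, Python's float is a C double, so part of the difference is that the C test above used 32-bit floats, and part is just how many digits each language prints by default: Python's repr shows the shortest decimal string that round-trips, while C's %f defaults to 6 places. A quick Python sketch:

```python
a, b = 7.77, 3.33
print(a - b)           # 4.4399999999999995 -- repr shows round-trip digits
print(f"{a - b:.6f}")  # 4.440000 -- same 6-place default as C's %f
```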
Re 11:38 - I'm not 100% certain, but I suspect we've been rounding 𝜋 off to 3.14 since long before floats came around -- that's just a human thing. Note, though, that calculating more than even 10 or 20 digits of pi requires us to go away from floats as our method of doing things. Higher precision at multiple scales is required, which is where floats fall down (they can be high precision at one scale, but not at two scales simultaneously.)
Anyway, I hope that's all somehow useful, either to you, or to someone who sees this someday.
Still, nice video -- hopefully it's not too cringeworthy for you to think about this thing from 5+ years ago. :D
Thank you for your time Jabrils! Great Video man!
I used a = a + 5 before this! Thank you!
Yes...
So is there a way to fix the 7.77 - 3.33 in Python? What if I wanna make a calculator?
There might be some weird workarounds. Generally, if you work with floats then the floating-point error is going to be there, but it is usually precise enough not to matter. If I remember correctly, you could define your own floating-point data structure which stores the decimal digits in an array, but there are almost certainly better ways.
@@EndyStar rounding sounds like the worst solution for a calculator
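One standard workaround, at least in Python, is the decimal module from the standard library, which does exact decimal arithmetic instead of binary floating point -- a minimal sketch:

```python
from decimal import Decimal

# Build Decimals from strings, so the inputs aren't already
# rounded binary floats before the math even starts:
print(Decimal("7.77") - Decimal("3.33"))  # 4.44
print(Decimal("7.77") + Decimal("3.33"))  # 11.10
```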
I don't understand floats. They go over my head.
do you mean they float over your head?
Yes...
I'm having trouble trying to read a float value from the user in C#, can someone help me?
Wouldn't it be like "float a = input()"?
I don't do C#, but I guess it's something like that.
Wait, how do you use floats in games? Is it for things like the math behind game currency?
How do you make a float array?
Nice 👍
It's a pity (shame?) you couldn't talk about precise floating-point calculations in Python... Often getting the most accurate result possible out of float calculations is crucial.
Also, you could show how to print a certain number of decimal places after a calculation.
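Picking up that request, here is a minimal Python sketch of two common ways to control the number of decimal places when printing:

```python
value = 7.77 - 3.33
print(f"{value:.2f}")   # '4.44' -- format to exactly 2 decimal places
print(round(value, 2))  # 4.44 -- round() returns a float, not a string
```

Note the difference: the format spec only affects display, while round() actually produces a new (still binary) float.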
I want to learn because I want to have a coding job.
Me too!
0:42 BIG MISTAKE. We know exactly how many decimal figures pi has: an infinite number of ‘em. Infinite doesn’t mean we don’t know. Infinite means infinite.
please excuse me, i’m a mathematician xD
A mathematician that still says "xD" mmk
SuperSuheb which is not true either...
please jabrils study a little bit of algebra!
I mean... we all knew what he meant: we don't know all of pi's digits. I don't think it's a big deal really, not like it's going to affect anyone's code, but I understand. Also, why do you like math so much? Just the beauty of math, or its power in solving problems?
WHO GAVE THIS A DISLIKE