Interesting video but I don't like the fact that this algorithm uses a trig function to define other trig functions. I think it's sexier to derive trig functions from lower level math abstractions.
the math was pretty awesome, but I'm pretty sure using Unicode characters such as theta is not good programming practice. You should stick to ASCII for variable and function names as long as you can.
The whole point of the original CORDIC (published by Jack Volder in 1959) was to replace computationally heavy/expensive multiplication and division in old memory-poor computers with additions/subtractions and some table lookups. Logs were also possible. Though based on some obscure 17th-century mathematics, it was still a damn impressive algorithm. The code here would not have worked efficiently on early computers and calculators; in fact, it would have defeated the whole point of the original CORDIC. Interesting, though.
or simply use the series expansion of the trig functions: define the function, replace the x with the variable name in the function parameter, and keep writing as many terms as you can; you will get almost identical results to the real values
My guess is that it has a few of the exact values saved, such as for 0°, 30°, 45°, 60° and 90°, and the rest are just approximated by a Taylor series.
calculators actually have tables with all the sin values at the maximum precision they need. They don't directly calculate sin() because of performance
This was really a tutorial that I watched with curiosity until the end. I liked both the math and the computer part very much. My only question is about cos(arctan(1))·cos(arctan(1/2))·cos(arctan(1/4))·... I think it is not appropriate to calculate it on the computer, because we used trig again? Also I _think_ you can use the Taylor series of sin x, cos x, or tan x, for example: sin x ≈ x - x^3/3! + x^5/5! - x^7/7!
I've looked into it now and Taylor/Maclaurin series definitely converge faster (as I suspected), but the CORDIC algorithm he is using is faster for the CPU.
but this isn't how a calculator does it? Where's the half adders? The full adders? The shift register? Where's the decimal-to-binary conversion? The complements of 9? All I saw was someone write code that is then compiled by some program to hex, or machine code, and THAT is utterly different to what a calculator is actually doing to perform this, or any other, calculation... try busting down the hex into opcodes and then stepping through how the actual processor deals with the code loaded to it...
I do get some weird values. π/4 stops after 2 iterations, but ends up at the really wrong value (0.6072529350088812 instead of 0.7071067811865475). And cos(0) is really wrong, after 1 iteration. For π/2 the sin and cos are fine, but understandably the tan value is a bit wonky.
How do you get rid of the atan() function in the code? We're not supposed to use trig functions here, unless there is a video explaining how to approximate atan() without other trig functions!
My favorite part is how it still uses a trig function
indeed, how do we get rid of the atan for f**k sake? ;-)
@@janpaul74 Thought the same at first, then concluded that these are arctans of always the same values, so they can be hardcoded, I guess.
@@janpaul74 You can use Taylor series to approximate trig functions as a polynomial. For example, cos x = 1 - (x^2)/2! + (x^4)/4! - (x^6)/6! + ... Look up Taylor series for more info. I believe they are used to approximate trig functions and also other tricky functions like e^x as well. It is also how accurate values of these functions were found before calculators.
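The series in the comment above is easy to try out; here's a minimal Python sketch (illustrative only, not the video's code):

```python
from math import factorial, pi

def cos_taylor(x, terms=10):
    # Maclaurin series: cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ...
    return sum((-1)**n * x**(2*n) / factorial(2*n) for n in range(terms))
```

For small |x| a handful of terms already matches cos to many decimal places; the farther x is from 0, the more terms you need.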
@@janpaul74 I think they use Taylor series for a close approximation
@@dariosucevac7623 Nah, the Taylor series is too inefficient.
Check out the CORDIC algorithm
en.wikipedia.org/wiki/CORDIC
the most egregious programming tutorial ever
horrendous, perchance
@@The_Prince770 you can't just say perchance
@@The_Prince770 you cant just say perchance
It has befouled us
@@The_Prince770 you can’t just say perchance
Finally, a channel that does a better explanation of the CORDIC algorithm than just "rotating the vector" to approximate a trig function, when rotations themselves require trig functions. Brilliant video.
Funnily enough, I wasn’t too hurt when you used curly braces instead of a colon. It’s a common mistake; it gets hard when you switch between languages often.
I was hurt when you called that symbol a “hashtag”
I didn't even notice he used curly braces but I was hurt when he used special characters in variable names instead of spelling them (phi, theta) or even using representative names.
Also, he could’ve easily saved the import and just used 1/(2**n). For someone teaching us about math, he sure doesn’t know basic math facts
He probably did need atan from math, but the question here is: how would one calculate that by hand? It’s clearly needed for the formula. There is an integral formula, but it’s still an integral, and integrals are no easier to calculate. Even once you’ve gotten past that difficulty, there are square roots that, even with a definite formula, are quite difficult and time-consuming once you’ve converted your numbers to binary, and otherwise pretty close to impossible. And since the atan is part of an approximation, you have to stack two approximations, which get harder the more you stack them. That’s a recipe for a horrible, or likely impossible, time getting it. If there was some solution he gave to that, let me know
He literally said it in the video: you can just use a lookup table for the values of arctan(2^-n). Since n is just a counter, and therefore a natural number, you can cover all the cases of n in one lookup table; no need to implement an atan function yourself.
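That lookup-table idea fits in one line; `math.atan` stands in here for constants that a real calculator would ship hard-coded:

```python
import math

# Built once, outside the rotation loop; the loop then only reads the table.
ATAN_TABLE = [math.atan(2.0**-n) for n in range(50)]
```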
Curly braces are just so much nicer to look at tho. (Yes I hate python)
you are the howtobasic of mathematics. lol
Hey there, awesome video, but I did just want to give a pointer. Using a variable called d next to x, y, or phi is generally considered an abuse of notation, since it looks more like an infinitesimal than an actual variable.
The d in an infinitesimal is a \mathrm d. So in typesetting this is not much of a problem, and here it’s not that bad either, because if the line before says "Let d be…" (and people should always add those lines) then any confusion seems to be sufficiently addressed.
Yeah.
@@hdbrot Of course it would negate confusion but that doesn't mean it's not an abuse of notation. Math should be able to be read with as little information as possible.
This is all well and good. But to any future programmers watching this: please do not use weird Unicode math characters in your code; just use the names of things, like phi, theta, delta. Using these symbols will drive anyone who isn’t a heavy math user mad when trying to read your code.
tbh I'd kinda rather learn the meaning of 3 characters and not have big variable names like that. if you need readability, just shove a comment in explaining each symbol
@@mad_6519 Even so, you should avoid it. There are tons of text encodings, and this can lead to pretty ugly and unreadable code for someone who doesn't have the right encoding. Use ASCII as long as you can, because it's compatible with most encodings
it just sucks to actually program if you use weird unicode characters. It's a chore to have to copy and paste the variable each time you want to use it.
I'll admit it kind of looks cool though
@@ilost_the_game If he's a mathematician, he probably has these characters' alt codes in muscle memory already
@@mad_6519 he is copying and pasting the symbols in the video when he wants to reuse them
1) I think even if this is the algorithm that's really used, there are some fine details regarding precision: if it shows 6 decimal places, then all of them should be correct, but the error accumulates over the iterations
2) Your code won't work for angles > 4pi
3) The question in the beginning was how you calculate those without a calculator. But then you pull out from somewhere: some constant which is the limit of a product (which you also need to calculate without a calculator), and a table of 50 arctans, which you also need to calculate.
I think a more plausible way to calculate sin/cos without a calculator is to use angle-halving formulas, and rotate by pi/2, pi/4, pi/8, pi/16 and so on.
The algorithm is called CORDIC; each iteration gives roughly one more bit (binary digit) of accuracy
You can read about it on Wikipedia: under modes of operation, it shows that the part inside the product can be written in the form 1/sqrt(1+2^(-2n)), which is much more manageable to compute.
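For reference, that product converges quickly; here is a small sketch (its limit is the 0.607... constant that also shows up in other comments here):

```python
import math

def inv_gain(terms):
    # Partial product of 1/sqrt(1 + 2^(-2n)); tends to ~0.6072529350088813
    k = 1.0
    for n in range(terms):
        k /= math.sqrt(1.0 + 2.0**(-2 * n))
    return k
```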
Of course because of the symmetry of sine, you only need to calculate a domain of (0, π/2)
@@BryanLu0 it won't give you 6 correct decimal places if each term of the summation is only calculated to 6 decimal places.
well you can always take the angle mod 4pi
Didn't understand this, will have to watch again later. But when it comes to approximating sin and cos, I find that this is a good plan:
1) Add or subtract multiples of 2*pi until you're in the range -pi to pi.
2) Map the angle to the first quadrant and remember what that will do to the sign of the final result.
3) If you're doing the sin or cos of an angle greater than pi/4, do the cos or sin of the complementary angle.
With those three steps, we've guaranteed that our angle is no more than 0.785 radians. We can Taylor series it and get a good approximation within just a few terms. But we can take it even further:
4) Pre-calculate some sines and cosines of angles like pi/4, pi/8, etc. Save them as constants to whatever arbitrary degree of precision you like.
5) Remember your trig identities, like sin(a+b) = sin a*cos b + cos a*sin b, and cos(a+b) = cos a*cos b - sin a*sin b. With those in mind, suppose you want to calculate sin(3*pi/16). Well, that's sin(pi/8 + pi/16), and if you've precalculated sin(pi/8) and cos(pi/8), then you just have to calculate sin(pi/16) and cos(pi/16) and apply the identities. And since pi/16 is a little under 0.2, the series for sin(pi/16) and cos(pi/16) will converge very quickly.
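Steps 1-3 of the plan above can be sketched roughly like this (function names are made up for illustration; the series helpers are plain Taylor sums):

```python
import math

def sin_series(x, terms=8):
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1) for n in range(terms))

def cos_series(x, terms=8):
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(terms))

def sin_reduced(theta):
    # Step 1: wrap into [-pi, pi]
    theta = math.fmod(theta, 2 * math.pi)
    if theta > math.pi:
        theta -= 2 * math.pi
    elif theta < -math.pi:
        theta += 2 * math.pi
    # Step 2: sin is odd, so pull the sign out; sin(pi - x) = sin(x)
    sign = -1.0 if theta < 0 else 1.0
    theta = abs(theta)
    if theta > math.pi / 2:
        theta = math.pi - theta
    # Step 3: above pi/4, switch to the cofunction of the complement
    if theta > math.pi / 4:
        return sign * cos_series(math.pi / 2 - theta)
    return sign * sin_series(theta)
```

By the time the series is evaluated, its argument is at most ~0.785, so a few terms suffice.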
I would have done something similar myself. I don't know if I would have invoked the trig identities, but I would have considered it.
If someone doesn't want to use pow function, the powers of 2 can be achieved by taking 1 and shifting it a few bits (remembering that 2^(-n) is the same as 1/2^(n))
That works when multiplying by two, because the result is an integer, but dividing one by two results in a floating-point number, which doesn't lend itself to the same bitwise shift operation. You can, however, keep 2^n in an integer variable (starting at 1) and for every iteration shift it to the left once (which multiplies it by 2), then divide 1 by the result.
Also, Python does have an exponentiation operator (double asterisks) and a built-in pow() function not part of the math library. Both would eliminate the need to use the math library (we still need it for arctan however).
@@kakuserankua was gonna say, why not use 2**n?
@@kakuserankua if you really want, you can subtract n from the exponent to divide by 2^n for floats :)
Try doing that on a float.
@@luigidabro that's why I said "remembering that 2^(-n) is the same as 1/2^(n)". You can get a float by dividing by an integer
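A tiny sketch of the shift-then-divide idea from this thread:

```python
def two_pow_neg(n):
    # 1 << n builds the integer 2^n with a single shift; one float
    # division then yields 2^(-n) exactly (powers of two are exact in binary).
    return 1.0 / (1 << n)
```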
From the thumbnail, I thought it would be how modern calculators give symbolic answers for special cases when it recognizes them.
In any case, what you described is called the CORDIC algorithm. It needs one iteration per bit of the answer, so 55 iterations seems about right, roughly matching the 53-bit mantissa of a double-precision floating-point value.
CORDIC _can_ be implemented using only addition, subtraction, bit shifts, and table lookups -- no multiplication or division. Your code doesn't exploit this, and in fact uses division gratuitously (division being horribly slow even on modern CPUs). This makes CORDIC the preferred algorithm for low-end calculators that use 8-bit microcontrollers.
For a more capable CPU, the Taylor series takes fewer iterations, and fewer still as the angle gets smaller.
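For what it's worth, the adds/subtracts/shifts/lookups version described above can be sketched in fixed point like this (the 30 fraction bits are an arbitrary choice for the sketch, and the table and gain constant are computed with the math module here purely for convenience; a calculator would hard-code them):

```python
import math

FRAC = 30                       # fixed-point fraction bits (chosen for this sketch)
ONE = 1 << FRAC
ATAN = [round(math.atan(2.0**-i) * ONE) for i in range(FRAC)]
INV_GAIN = round(0.6072529350088813 * ONE)   # reciprocal of the CORDIC gain

def cordic_sincos(theta):
    """theta in [-pi/2, pi/2]; the loop uses only adds, subtracts and shifts."""
    x, y = INV_GAIN, 0
    z = round(theta * ONE)
    for i in range(FRAC):
        if z >= 0:
            x, y, z = x - (y >> i), y + (x >> i), z - ATAN[i]
        else:
            x, y, z = x + (y >> i), y - (x >> i), z + ATAN[i]
    return y / ONE, x / ONE
```

The divisions at the very end only convert the fixed-point result back to a float for display; a calculator would keep everything in fixed point.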
I'm glad I still have a lot of calculators, and a slide rule, for calculating a sine.
Excellent video mate! However, wouldn't a Taylor series be easier for a calculator to deal with?
yes..
A Taylor series is easier for a human because the equation is shorter. However, computers/calculators work in a binary number system (base 2), so multiplication by powers of 2 is very easy for a computer: it just requires all the digits to be shifted (like how multiplication by powers of 10 is done by shifting the digits in our natural base-10 system). This is why we used the 2^-n in the equations, as it is easy for computers to calculate. Maybe I should have included this in the video. Thanks
@@TheUnqualifiedTutor Slight correction/addition,
Since we're dealing with floats, we don't shift the digits (as in bit shifting)
floats are represented in the form of sign*mantissa*2^exp (a bit simplified, look up IEEE 754 for the whole thing)
so when we calculate 2^-n, we just subtract n from the exp part
shifting the bits only works for integers
@@sepdronseptadron As far as I'm concerned, adding and subtracting from the exponent field is basically the same operation as shifting. The only real difference is that for floats it doesn't have the modular behavior that integers have. If you're writing a typical decimal number, you can multiply by 10 by writing a zero, or if you're using scientific notation you can do the same by adding 1 to the exponent.
There's a C function called "ldexp" which is basically a shift for floating-point numbers, taking an integer and adding it to the float's exponent field. If there were any float operation to overload the shift operators with, it would be ldexp.
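Python exposes the same pair of operations in its math module, which makes the point easy to check:

```python
import math

half = math.ldexp(1.0, -1)    # 1 * 2^-1 = 0.5: a "right shift" of a float
big = math.ldexp(3.0, 4)      # 3 * 2^4 = 48.0: a "left shift" by 4
m, e = math.frexp(big)        # frexp is the inverse: mantissa and exponent
```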
@@angeldude101 shifting bits is multiplying/dividing by powers of 2; to add/subtract, you can't shift bits in the general case
Being an actual Python programmer, seeing the beginner tactics (like concatenation instead of f-strings, using Unicode characters as variables, or printing instead of returning) made me remind myself that beginners don't need to follow Python conventions when their methods work. This was before I noticed you used curly brackets.
(no hard feelings, great video)
This algorithm is usually set to execute only a fixed number of iterations (determined by the precision desired in the result). The K number needs to be modified accordingly (instead of calculating the infinite product, you stop the calculation at the number of terms you intend to use). This requires the algorithm to execute ALL the planned iterations, even if the running angle happens to match the input angle before all are performed. You need a table of arctan(2^(-n)) whose length matches the number of iterations you make.
Doing this algorithm in Python makes no sense (other than to test it). The algorithm specifically requires only bit shifting and integer addition/subtraction; it should be done in assembly using register shift and add instructions.
You can also use this to calculate inverse trig functions. You do it in reverse, setting the vector from the sin (or other trig function) and rotating it back to the x-axis; the output is the sum of all the iteration angles.
The HP-35 (the first scientific pocket calculator, 1972) used this algorithm.
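The reverse (vectoring) mode described above can be sketched like this; the table name and iteration count are illustrative choices, and the table would be hard-coded in practice:

```python
import math

ATAN_TBL = [math.atan(2.0**-i) for i in range(45)]  # same table as forward mode

def cordic_atan(t):
    """Vectoring mode: rotate (1, t) back onto the x-axis; the accumulated
    rotation is atan(t). Valid while |atan(t)| is within the table's reach."""
    x, y, z = 1.0, t, 0.0
    for i, a in enumerate(ATAN_TBL):
        if y > 0:
            x, y, z = x + y * 2.0**-i, y - x * 2.0**-i, z + a
        else:
            x, y, z = x - y * 2.0**-i, y + x * 2.0**-i, z - a
    return z
```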
My first exposure to trig functions on a computer was with Atari BASIC - implemented on the 6502, which has no hardware multiply/divide (aside from bit shifting), let alone any transcendentals. I remember looking at the source code for how they implemented this back in my younger days, but that was so long ago I no longer remember _how_ it was done. It’s slow, but it works and fits within a small part of the 8K interpreter. (FYI, Atari BASIC uses 6-byte floating point: one byte for the sign bit and exponent, and five for a 10-digit BCD mantissa.)
Something worth noting here: you still need to calculate arctan(2^-n) somehow, which is also a trig function. However, given that this is very close to 2^-n, you can simply drop the arctan for higher-order terms, and perhaps hard-code the first few terms to further decrease the error.
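That closeness is easy to quantify: atan(x) = x - x^3/3 + ..., so for x = 2^-n the gap to x is on the order of 2^(-3n), meaning each extra n buys roughly three more correct bits:

```python
import math

def atan_gap(n):
    # |atan(2^-n) - 2^-n| is about (2^-n)^3 / 3, shrinking like 2^(-3n)
    return abs(math.atan(2.0**-n) - 2.0**-n)
```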
Instant subscription. Perfect intro to math and coding combined, for people unfamiliar with it. Keep it up!
this guy's gonna be huge in the future
7:54 Python uses the same double-asterisk operator as FORTRAN for exponentiation, so 2ⁿ would be written as `2 ** n`. math.pow() always returns a floating-point number as its result, whereas the double-star operator will return integer values when appropriate.
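A quick demonstration of that difference:

```python
import math

a = 2 ** 10           # 1024, stays an int
b = math.pow(2, 10)   # 1024.0, always a float
c = 2 ** -3           # a negative exponent gives a float: 0.125
```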
My theory: there is a hidden compass inside the calculator. It uses the compass and another device to calculate the sine and cosine.
For me, I would do either of the following:
1) Draw a right angle triangle with an angle of 1.
2) Use the Taylor Series.
Cool video! If you want to make those print statements a little easier to write and more readable, you can put an 'f' before the quotes and use curly brackets to avoid needing the str() functions. As in print(f"sin({θ}) = {y}")
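Side by side, the concatenation version and the f-string version from the comment above produce the same output (θ as a variable name is just mirroring the video's style):

```python
import math

θ = math.pi / 6
y = math.sin(θ)
print("sin(" + str(θ) + ") = " + str(y))   # concatenation with str() calls
print(f"sin({θ}) = {y}")                   # f-string: same output, no str()
print(f"sin({θ:.4f}) = {y:.4f}")           # format specs also work in the braces
```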
Better to pronounce the sign() function as "signum", not "sine", because that would confuse it with sin().
Legend says the CORDIC isn't used anymore.
8:08 the ** operator also works, i.e. 2**(-n)
What will you do? A B C or D?
A: You can always go to the park
B: You can always get to work on time
C: You can always make a PERFECT triangle
D: You go to Paris every year
E: you ALWAYS get what you want
E
Damn, I didn't know howtobasic was a mathematician 💀
You can just use Taylor series; it works well with small numbers
Iteration isn't the fastest method and there's a chance that change never reaches zero because of finite precision. It's better to use a polynomial or rational function. See Computer Approximations by J.F. Hart et al.
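A fixed polynomial avoids the convergence question entirely. Here's a sketch using Horner's rule with plain Taylor coefficients; a production routine would use minimax coefficients (e.g. from Hart's tables) for a smaller worst-case error over the interval:

```python
import math

def sin_poly(x):
    """Degree-9 polynomial via Horner's rule: fixed cost, no iteration loop.
    Coefficients are the Taylor ones, adequate for |x| <= pi/4 after
    range reduction."""
    x2 = x * x
    p = 1.0 / 362880.0            # 1/9!
    p = -1.0 / 5040.0 + x2 * p    # -1/7!
    p = 1.0 / 120.0 + x2 * p      # 1/5!
    p = -1.0 / 6.0 + x2 * p       # -1/3!
    return x * (1.0 + x2 * p)
```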
I love the part where you used Wolfram Alpha to make your own trigonometric equation
Imagine if a business major sees this. I think they’ll explode. Math majors may be sad, depressed, lonely, and overworked, but at least we can understand shit like this!
If you're gonna use a trig function anyway (atan), why not just
def trig(theta):
    return math.sin(theta)
thanks a lot. this has been a question at the back of my mind for a lot of time
One note on the code shown: the sign function written as x/abs(x) will fail with a division-by-zero error for x = 0. Please use a proper sign function, with an if statement inside, for example.
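For example, a branching sign() that is safe at zero (a sketch, not the video's actual code):

```python
def sign(x):
    """Signum: -1, 0, or +1. Unlike x / abs(x), this is safe at x = 0."""
    if x > 0:
        return 1
    elif x < 0:
        return -1
    return 0
```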
Pretty cool! Though if we don’t have functions for sine and cosine, shouldn’t we also not have functions for arctangent? Or is this actually the way computers calculate it?
i have no experience, but they are all the same so you can probably just precalculate and store them
There are many different ways to approximate functions, some less computationally costly than others. For example, arctan(x) is the integral from 0 to x of 1/(1+u²) du, and there are many ways to approximate integrals such as this. How the function is computed depends on the type of computer/calculator you are using
You’re only taking the arctan of a small set of numbers (negative powers of two), so yes recalculating and storing will work. Whereas for the final trig functions themselves, any number could be the input
You could have a table of atan(2^-n) that is fixed for every calculation of the sin, cos and tan
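A sketch of what that fixed atan(2^-n) table could look like in Python. In a real calculator these constants would be precomputed once and stored in ROM; math.atan here just stands in for those hand-computed values:

```python
import math

# Precompute atan(2^-n) for every n the algorithm will ever need.
ATAN_TABLE = [math.atan(2 ** -n) for n in range(32)]

def atan_pow2(n):
    """Return atan(2^-n) by table lookup instead of calling atan at runtime."""
    return ATAN_TABLE[n]
```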
I had to stop watching a few minutes in because I couldn't focus, afraid a scene of wasted eggs and phone books would suddenly appear.
Confirmed, Wolfram Alpha existed before calculators did.
Acktually, the sine is calculated using multiple techniques.
Firstly, you only need to calculate the first quadrant of sine, since the other quadrants can be derived using trig identities.
Secondly, a lookup table is used for common values like π/12, π/6, π/4, π/3, π/2 and more.
Thirdly, values close to 0 are returned without calculation, since sin(x) ≈ x there.
Depending on how accurate the approximation needs to be, CORDIC or Chebyshev polynomials can be used.
I remember in the 70's my dad brought home a TI calculator that had trig functions. Being about 8, I had no idea what they meant, but I thought it was interesting that the calculator would take a couple of seconds to handle these functions. I made it my goal in life to be able to use all the functions on a calculator (it also had log as well). But I always wondered why it took so long to calculate sin; now I know.
Amazing video, I wish you the best in your future efforts, and I can only hope you keep this quality up!
Helo Lemon man
@@jansatamme6521 o/
Interesting video but I don't like the fact that this algorithm uses a trig function to define other trig functions. I think it's sexier to derive trig functions from lower level math abstractions.
Imagine if Aliens come down and see us doing this, and just pull out a protractor and say “guys, why aren’t you just using these with scale models?”
the math was pretty awesome, but I'm pretty sure using Unicode characters such as theta is not good programming practice. You should stick to ASCII to name variables and functions as long as you can
noted
Wow, this video is very underrated. Excellent, subscribed
The whole point of the original CORDIC (published by Jack Volder in 1957ish) was to replace computationally heavy/expensive multiplication and division in old memory-poor computers with additions/subtractions and some table lookups. Logs were also possible.
Though based on some obscure 17th Century mathematics it was still a damn impressive algorithm.
The code here would not have worked efficiently on early computers and calculators. In fact, it would have defeated the whole point of the original CORDIC.
Interesting, though.
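For the curious, a rough Python sketch of that shift-and-add flavor of CORDIC (rotation mode) in fixed point: the loop body uses only additions, subtractions, bit shifts, and a small angle table. The word size and iteration count are arbitrary illustrative choices, not Volder's exact design:

```python
import math

FRAC = 30                                          # fixed-point fractional bits
N = 30                                             # iterations
# Angle "ROM": atan(2^-i) scaled to fixed point.
ANGLES = [round(math.atan(2 ** -i) * (1 << FRAC)) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1 + 2 ** (-2 * i))              # CORDIC gain, ~0.60725

def cordic_sin_cos(theta):
    """Approximate (sin, cos) for theta in [-pi/2, pi/2]."""
    x, y = round(K * (1 << FRAC)), 0               # start pre-scaled by K
    z = round(theta * (1 << FRAC))
    for i in range(N):
        if z >= 0:                                 # rotate toward z = 0
            x, y, z = x - (y >> i), y + (x >> i), z - ANGLES[i]
        else:
            x, y, z = x + (y >> i), y - (x >> i), z + ANGLES[i]
    return y / (1 << FRAC), x / (1 << FRAC)
```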
this channel is a gem, how am I only just seeing this
"But wait, that requires cos and sin."
"Aaaarerggghg!!!!!!!!!" Got me dying. 💀 Eggs in a blender.
When I saw the brackets I died
Imagine being in 1956 without a calculator and having a Python interpreter...xD
11:00 this jump scared me slightly
or simply use the series expansion of the trig functions: define the function, replace x with the parameter name, keep writing as many terms as you can, and you will get almost identical results to the real values
The most underrated channel on the platform
8:03 that's not "to the power of", that's "xor". XOR is a weird binary thingy, if you want "to the power of", use ** instead of ^
And I thought for half a century that mathematicians and programmers have zero emotions ...
I always thought it used the Taylor series expansion
everything good before the programming part
Fortunate that people were able to use wolfram alpha back in the day, despite not having a calculator
peak cinema of math
literally just use Taylor's strategy, which is:
sin(x) = x - (x³/3!) + (x⁵/5!) - (x⁷/7!) + (x⁹/9!) - etc.
ok noob
@@TheUnqualifiedTutor i am so offended
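That Taylor strategy can be sketched in a few lines of Python, building each term from the previous one so no huge factorials or powers are ever formed (the tolerance and range reduction are illustrative choices):

```python
import math

def taylor_sin(x, tol=1e-15):
    """Maclaurin series for sine, summed until terms drop below tol."""
    x = math.fmod(x, 2 * math.pi)        # the series converges fastest near 0
    term, total, n = x, 0.0, 1
    while abs(term) > tol:
        total += term
        # Next term: multiply by -x^2 / ((n+1)(n+2)), i.e. (2k+2)(2k+3).
        term *= -x * x / ((n + 1) * (n + 2))
        n += 2
    return total
```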
“How does it do it?”
Σ (-1)ⁿ x²ⁿ⁺¹ / (2n+1)! : 🗿
imagine not waiting until deltamath was invented
My guess is that it has a few of the proper values saved, such as for 0°, 30°, 45°, 60° and 90°. And then the rest are just approximated by a Taylor Series
cool approximation of sin cos and tan, impressively interesting approach to programming it tho
calculators actually have tables with all the sine values at the maximum precision they need. They don't directly calculate sin() because of performance
ERROR: division by zero on lines 7 and 13
Hello, what app do you use for that blackboard? I thought it looked very cool.
Krita
@@TheUnqualifiedTutor And if you don't mind me asking, what brush?
I’ve wanted to know this for a while
sin(x) = (4x(180 - x)) / (40500 - x (180 - x))
error margin: 0.0016
maximum relative error is less than 1.8%
Bhaskara I's sine approximation
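Bhaskara I's 7th-century formula is a one-liner in Python (degrees in, valid on 0 to 180):

```python
import math

def bhaskara_sin(x_deg):
    """Bhaskara I's rational approximation of sine for x in [0, 180] degrees.
    Maximum absolute error is roughly 0.0016."""
    p = x_deg * (180 - x_deg)
    return 4 * p / (40500 - p)
```

It is even exact at 0, 30, 90, 150, and 180 degrees.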
I coded up CORDIC many years ago and was going to implement it in a FPGA but got bogged down with the floating point conversions.
This was really a tutorial that I watched with curiosity until the end. I liked both the math and the computer part very much. My only question is about the product cos(arctan(1))·cos(arctan(1/2))·cos(arctan(1/4))... I think it is not appropriate to calculate it on the computer, because we used trig again?
Also I _think_ you can use the Taylor series of sin x, cos x, or tan x, for example:
sin x ≈ x - x³/3! + x⁵/5! - x⁷/7!
How To Basic be like: 0:07
Watching this as a programmer hurts.
the frustrated AUURRGHHH 🥚
8:25 Can’t you use ** to make the 2 use -n as exponent? i’m new to this sorry
yes
Why not use Taylor series approximations?
instead of math.pow, you can do 2**-n, no idea if it has the same time complexity tho
In degrees, use angle bisection as approximation. In radians, use the power series.
This means that the calculator needs to have an arctan(x) table in memory, or defined somehow, for it to calculate sin(x)?
Also, before I thought that the calculators use Taylor series
I really liked the video, but the Python part made me want to bang my head
now make an infinite precision pi calculator
Video: How to make a trig function
8:45 : Ok first you have to use a trig function
Why use this approach over a Taylor / Maclaurin series?
Faster convergence and probably more numerically stable.
I've looked into it now and Taylor / Maclaurin series definitely converge faster(as I suspected), but the CORDIC algorithm he is using is faster for the CPU.
I thought calculators use Taylor's series. What's wrong with that?
but this isnt how a calculator does it?
wheres the half adders? the full adders? the shift register?
wheres the decimal to binary conversion?
the complements of 9?
all I saw was someone write code that is then compiled by some program to hex or machine code, and THAT is utterly different from what a calculator is actually doing to perform this, or any other, calculation...
try busting down the hex into opcodes and then stepping through how the actual processor deals with the code loaded to it...
There's no way this video would've hit the algorithm if he actually wrote machine code lol
I do get some weird values. π/4 stops after 2 iterations, but ends up at the really wrong value (0.6072529350088812 instead of 0.7071067811865475). And cos(0) is really wrong, after 1 iteration. For π/2 the sin and cos are fine, but understandably the tan value is a bit wonky.
I noticed the brackets immediately and was very confused by it, I was like:
Why didn't we just do this in c or somethin and why did he do that?
You are eagle-eyed. I used Python because it's easier for beginners imo.
Then we need cosine.................
Mom, our neighbour is destroying his house again.
A
i love this
How do you get rid of the atan() function in the code? We're not supposed to use trig functions here, unless there is a video explaining how to approximate atan() without other trig functions!
They would use a lookup table
@@TheUnqualifiedTutor But then you're not calculating anything!
Great video bruv! Just remember me when you have millions of subscribers😃
Just invent the calculator duh!
What whiteboard do you use?
Krita
Thanks
@@TheUnqualifiedTutor How'd you get that setup on Krita?
why can't we use the infinite series expansion of sin, taking the first n terms such that it gives an answer within the accepted error limit?
How did you not get division by zero when doing phi / abs(phi)?
I don't know, I noticed that after.
I guess he used Unicode characters for demonstration purposes, but please don't. Use emojis instead.
Good video!
Nice video. I didn’t understand everything but it was pretty interesting 👍
4:51 I understand how the arctan values can be precomputed, but how do you calculate the cosine?
Ok, based on the Wikipedia article, the part inside the product can be written as 1/sqrt(1 + 2^(-2n)), which is much more manageable to calculate
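A quick check of that product in Python (the number of factors is an arbitrary choice; it converges very fast, so it can be precomputed once):

```python
import math

# CORDIC scale factor: the product of cos(atan(2^-n)), which by the
# identity cos(atan(t)) = 1/sqrt(1 + t^2) equals prod 1/sqrt(1 + 2^(-2n)).
K = 1.0
for n in range(40):
    K /= math.sqrt(1 + 2 ** (-2 * n))
# K is approximately 0.607252935...
```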
Why does the algorithm for finding trig functions need you calculate arctan? How does it do that?
dont you use a knife to open another knife's box, or use the seed that an already-grown tree gives, to make another tree, dont question
either a lookup table (precalculated arctan values by hand probably) or "i used the arctan to find the arctan"