How can Computers Calculate Sine, Cosine, and More? | Introduction to the CORDIC Algorithm

  • Published: May 23, 2023
  • In this video, I'll explain the motivation for an algorithm to calculate sine, cosine, inverse tangent, and more in a fast and efficient way. I'll cover topics like geometry, calculus, and computer science to explain where and how these ideas are developed.
    Music by Vincent Rubinetti
    Download the music on Bandcamp:
    vincerubinetti.bandcamp.com/a...
    Stream the music on Spotify:
    open.spotify.com/playlist/3zN...

Comments • 313

  • @bobater
    @bobater  1 year ago +151

    I figured I’d make this comment to address some common questions I’ve been seeing:
    EDIT: I made a mistake in saying that we don't need to calculate the angles a_n = tan^-1(2^-n). We do need the angles in order to keep track of the total angle we've rotated by, but we don't need to calculate tan(a_n) since we know it's 2^-n. Apologies for any confusion this may have caused.
    1. Is this algorithm the main way to calculate these functions today?
    Today, this algorithm is rarely used. The main advantage of it is how few transistors it needs, at least compared to implementing a floating point unit to use a polynomial method. For this reason, the main use of CORDIC today is in circuits where hardware is limited, and not in more powerful modern computers. I think it's still a useful algorithm to learn because of how interesting the ideas behind it are, and how it gives you an insight into how computers function.
    2. If CORDIC is used to calculate sin, cos, and arctan, how do we get the cos and arctan values we need for the rotation matrices in the first place?
    For the angles we use, which are tan^-1(2^-n), we already know their tangents are 2^-n, so we never have to compute a tangent: the rotation matrix only contains tan(a_n) terms, which are always some 2^-n. We do, however, need the angle values a_n = tan^-1(2^-n) themselves. We can calculate them in advance using something like a Taylor series, and we use them to keep track of the total rotation we've done by adding or subtracting each angle from a running total depending on the direction of rotation. (A short code sketch of this bookkeeping follows this list.)
    For the cos(a_n) constant, we can compute it using a method like a Taylor polynomial or some other approximation besides CORDIC. As @Demki noted, we could also use CORDIC itself to find the combined cosine scaling factor: start with a vector of length 1 on the x-axis, rotate it onto the y-axis, and then find the final length L of the vector. Multiplying by 1/L re-scales the vector to length 1 again, meaning the cos scalar = 1/L.
    3. Why is the cos product K always the same? Wouldn’t it change each time we run the algorithm?
    The reason the cos scalar K remains constant is that the sequence of angles we rotate by, +/- a_n, is fixed in magnitude; only the signs change. cos(x) is an even function, meaning cos(-x) = cos(x), so the sign of a_n doesn't affect our cos(a_n) value. Since every cos(a_n) value is the same every time we run the algorithm, the constant K used to scale the vector will be the same each time.
    4. Why not use something like a Taylor Series to approximate the values instead?
    Polynomial approximations like Taylor series are pretty simple in their computations, but they require a lot more multiplication operations. The main advantage of CORDIC is how easy it is to implement with very little in the way of transistors, at least compared to a floating point unit.
    5. Why use the repeated addition algorithm for multiplication? Aren’t there faster ways to do it?
    The main reason for using that algorithm was that it was simple to showcase without having to first introduce binary. I know that there are much faster algorithms for computer multiplication, but the goal was to show something simple that provided motivation for the bit-shifting optimization while still giving a general overview of how everything works. I do regret not putting a note in the video that explained this, but hopefully people will read this and not be misled.
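    To make the bookkeeping above concrete, here is a minimal sketch of CORDIC in rotation mode, written in C. It is an illustration added for this write-up, not the video's exact implementation: the iteration count N, the precomputed angle table, and the stored constant K are assumptions consistent with the description above.

    #include <math.h>
    #include <stdio.h>

    #define N 16  /* micro-rotations; each one contributes roughly 1 bit */

    static double angles[N];  /* a_n = atan(2^-n), used only to track the total rotation */
    static double K;          /* product of cos(a_n), applied once at the end */

    static void cordic_init(void) {
        /* In a real fixed-point implementation these would be a precomputed table. */
        K = 1.0;
        for (int n = 0; n < N; n++) {
            angles[n] = atan(ldexp(1.0, -n));  /* atan(2^-n) */
            K *= cos(angles[n]);
        }
    }

    /* Rotate (1, 0) toward theta (radians, |theta| <= ~1.74) using only
       adds, subtracts, and shifts; ldexp(v, -n) stands in for v >> n. */
    static void cordic_sincos(double theta, double *s, double *c) {
        double x = 1.0, y = 0.0, rotated = 0.0;
        for (int n = 0; n < N; n++) {
            double d = (rotated < theta) ? 1.0 : -1.0;  /* undershot or overshot? */
            double xn = x - d * ldexp(y, -n);           /* x - d*y*2^-n */
            double yn = y + d * ldexp(x, -n);           /* y + d*x*2^-n */
            x = xn;
            y = yn;
            rotated += d * angles[n];                   /* running total of rotation */
        }
        *c = K * x;  /* undo the stretch from skipping the cos(a_n) factors */
        *s = K * y;
    }

    int main(void) {
        cordic_init();
        double s, c;
        cordic_sincos(1.0, &s, &c);
        printf("CORDIC: sin(1) ~ %f, cos(1) ~ %f\n", s, c);
        printf("libm:   sin(1) = %f, cos(1) = %f\n", sin(1.0), cos(1.0));
        return 0;
    }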

    • @r-prime
      @r-prime 1 year ago +2

      Hey I'm feeling really stupid rn but how do you keep track of whether or not you've overshot the angle without knowing what the individual angles are? Surely you would either need arctan(2^-n) or tan(theta) to be able to draw a comparison? But then you would need to calculate tan(theta) which obviously defeats the purpose...

    • @bobater
      @bobater  1 year ago +5

      @@r-prime Sorry for the confusion, I updated the comment. You are correct, we do need the angles stored on the computer, arctan(2^-n), to keep track of the total rotation that we've done. We still don't need to calculate the tan(a_n) though, since we know it will always be 2^-n.

    • @FrankHarwald
      @FrankHarwald 1 year ago +1

      CORDIC is indispensable when you have a very simple or low-power computer, no transcendental functions available, and look-up tables are also too big, which is why no usual computer uses it; nowadays it is only used in a tiny special-case niche. Mostly computers don't calculate these at all: programs mostly multiply by normal vectors if you need rotation, or, heaven forbid, call a built-in floating-point sine/cosine (which internally uses a combination of the function's symmetry and periodicity, a polynomial approximation, and maybe a table lookup). The same is true for exponential functions, logarithms & the inverses of the trig functions.

    • @dimastus
      @dimastus 1 year ago

      @@bobater So, for the N-bit angle we need to precalculate N arctans?

    • @domenickeller2564
      @domenickeller2564 1 year ago

      @@dimastus Yes, basically each CORDIC micro-rotation gives you ca. 1 bit of precision.
      But there are newer versions of CORDIC that work a little differently.

  • @rizalardiansyah4486
    @rizalardiansyah4486 1 year ago +473

    I was expecting lookup tables and interpolation. Boy, how wrong am I...

    • @genehenson8851
      @genehenson8851 1 year ago +156

      I was expecting wizard. I was wrong in the first thirty seconds.

    • @Jono98806
      @Jono98806 1 year ago +32

      That wouldn't be very accurate and it's not necessary. Trigonometric functions have well-defined Taylor series, which would be the easiest way to compute them.

    • @davidhand9721
      @davidhand9721 1 year ago +15

      I've seen lookup tables in older machines.

    • @domenickeller2564
      @domenickeller2564 1 year ago +14

      @@davidhand9721 In very old machines, and depending on the precision needed. CORDIC scales really well for high precision. But if you don't care too much, for example in machine learning stuff, you use look-up tables.

    • @lisamariefan
      @lisamariefan 1 year ago +3

      Isn't that what some games on the SNES do with mode 7 though?

  • @nanamacapagal8342
    @nanamacapagal8342 1 year ago +225

    I was expecting an infinite series expansion but this is so much more clever and optimized for computers

    • @NumbToons
      @NumbToons 1 year ago +5

      me too, I have been thinking this for so many years.

    • @NumbToons
      @NumbToons 1 year ago +9

      But do any calculators use the taylor series expansion? I mean, before CORDIC, were taylor series used?

    • @FabienAurejac
      @FabienAurejac 1 year ago +11

      @@NumbToons If I understand correctly, CORDIC was made in 1959 for calculators with inadequate registers, using techniques similar to ones described by Briggs much earlier... So I think, as WarpRulez's comment on this page says, that techniques like CORDIC were used before equivalents of Taylor series, in the case of computing. You can also find on the internet the PDFs of publications by William E. Egbert that describe the algorithms behind most of the elementary operations in old calculators.

    • @flameofthephoenix8395
      @flameofthephoenix8395 1 year ago +1

      I was thinking it might be a binary-style algorithm, where the angle you want to find the sine of would be converted to a binary decimal number, where 1 is 180 degrees, .1 is 90 degrees, .01 is 45 degrees, and so on. Then you can find the sine/cosine pairings through this method. Start with A = {Angle:180,X:0,Y:-1}, B = {Angle:90,X:1,Y:0}, C = {Angle:0,X:0,Y:1}. Then to find D = {Angle:45} you would do D={Angle:(B.Angle+C.Angle)/2,X:Normalize((B.X+C.X)/2,(B.Y+C.Y)/2).X,Y:Normalize((B.X+C.X)/2,(B.Y+C.Y)/2).Y}. Then with your previous binary decimal, you would use rotation matrices to find the exact angle: if the bit signifying 180 degrees is active, then you use a rotation matrix with A, if the bit signifying 90 degrees is active you do the same with B, and so on.

    • @diobrando7642
      @diobrando7642 1 year ago +2

      @@NumbToons A Taylor series expansion is valid; 10 iterations are enough to calculate sine and cosine accurately.

  • @baconeko
    @baconeko 1 year ago +31

    Amazing that you managed to put in an intuitive explanation of L’Hopital’s rule in the middle of it!

  • @LinesThatConnect
    @LinesThatConnect 1 year ago +92

    Awesome! I've been curious about CORDIC for a while, but I never had the motivation to trudge through technical papers about it.

    • @bobater
      @bobater  1 year ago +22

      Thanks for watching! Your videos are part of what inspired me to do SoME this year. Love your content

    • @NithinJune
      @NithinJune 1 year ago +1

      lines literally one of my favorite math yt

  • @thechiliman500
    @thechiliman500 1 year ago +40

    I've been a casual math enjoyer for a long time (with only really doing a small bit of maths up to DE and Linear Alg.) and I have to say your intuitive description of L'Hopital's rule was one of the best I've ever seen. Short and to the point, and without a lot of the messiness that you sometimes get when describing some facts in math. Thanks :)

  • @Nik-dz1yc
    @Nik-dz1yc 1 year ago +161

    I've always wanted to figure out how CORDIC worked. I ended up using a basic Taylor series with 3-4 terms while using a repeated Chebyshev polynomial to make it way more accurate.

    • @AquesticYT
      @AquesticYT 1 year ago +29

      I was thinking Taylor series with an if statement to just repeat every 2pi radians

    • @Nik-dz1yc
      @Nik-dz1yc 1 year ago +18

      @@AquesticYT Yeah, you're correct, modulo is used because the Taylor series only covers -pi to pi. The problem with just using a Taylor series is that even with quite a few terms, it's quite inaccurate near the edges. One thing we can exploit, though, is that near 0 it's very accurate even with just 2-3 terms. This is why I mentioned a Chebyshev polynomial: if you take the cosine of a number x, z = cos x, then you can calculate cos(2x) as 2z^2 - 1. So this simple polynomial lets you calculate cos(2x) using cos x, and in fact there are infinitely many of these, but only the first few are practical, and you can repeat them too. I repeat the 2z^2 - 1 one 2 or 4 times, which allows for very high accuracy.
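      A minimal sketch of that idea in C (my own illustration, not @Nik-dz1yc's actual code): reduce the angle, evaluate a short Taylor polynomial on a quarter of it, then apply cos(2x) = 2cos^2(x) - 1 twice to climb back to the full angle. The term count and the two doublings are illustrative choices.

      #include <math.h>
      #include <stdio.h>

      static double cos_taylor_doubling(double x) {
          const double PI = 3.14159265358979323846;
          x = fmod(x, 2.0 * PI);            /* wrap into one period...        */
          if (x >  PI) x -= 2.0 * PI;       /* ...and into [-pi, pi]          */
          if (x < -PI) x += 2.0 * PI;
          x *= 0.25;                        /* quarter angle, so |x| <= pi/4  */
          double x2 = x * x;                /* short Taylor polynomial near 0 */
          double z = 1.0 - x2/2.0 + x2*x2/24.0 - x2*x2*x2/720.0 + x2*x2*x2*x2/40320.0;
          z = 2.0*z*z - 1.0;                /* cos of half the original angle */
          z = 2.0*z*z - 1.0;                /* cos of the original angle      */
          return z;
      }

      int main(void) {
          printf("%.7f vs %.7f\n", cos_taylor_doubling(2.5), cos(2.5));
          return 0;
      }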

    • @NXTangl
      @NXTangl 1 year ago +8

      ​@@Nik-dz1yc You don't even need to be accurate everywhere. Combining wrapping, mirroring, and the Pythagorean identity will get you anything as long as you can cover the first pi/4 radians, although for stability and to avoid sqrt(x) a full pi/2 radians is recommended...

    • @WarpRulez
      @WarpRulez 1 year ago +15

      If I understand correctly, CORDIC is useful when there's no support for hardware multiplication (or if multiplication is extremely slow), which is the case with more primitive or simple CPUs. However, if the processor supports fast multiplication (which modern processors do, as 1-clock-cycle multiplication is very common), then a power series (optimized with some lookup tables) is faster then CORDIC.

    • @Nik-dz1yc
      @Nik-dz1yc 1 year ago

      @@NXTangl you're right, I just proposed a simpler process here but there are a lot of optimizations that can be made to work on smaller spaces than 2pi, in fact I've heard people talk about very tiny spaces like pi/6 but I never looked into the math

  • @prwf
    @prwf 1 year ago +33

    Very nice presentation. The one thing that I found a bit cloudy was the K constant, but after thinking it through I realized that K will always be equal to the product over the same angles, because cosine(-a) = cosine(a). This product looks like it should converge quite quickly, so I'm assuming that we just use the constant (roughly 0.607) every time, as within just 4 angle adjustments/rotations K is within about 0.002 of this constant. We can safely assume that almost every angle we calculate the sin and cos of will take 4 or more rotations, so this constant is justified for every calculation and doesn't need to be recalculated for each use of the algorithm. Thought I would explain my thought process just in case others were a bit confused.
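    A quick numerical check of that convergence (a throwaway snippet added here, not from the video): the running product of cos(tan^-1(2^-n)) settles near 0.60725 within a handful of terms.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double K = 1.0;
        for (int n = 0; n < 10; n++) {
            K *= cos(atan(ldexp(1.0, -n)));   /* multiply in cos(atan(2^-n)) */
            printf("after %2d rotations: K = %.6f\n", n + 1, K);
        }
        return 0;
    }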

    • @bobater
      @bobater  1 year ago +12

      Spot on explanation! That’s one thing I regret not making as clear in the video now seeing people’s reactions to it. Glad you liked the video though!

  • @cdunku
    @cdunku 1 year ago +9

    I was looking for resources on implementing my own trigonometry functions in C just for practice and this is perfect!

  • @thingthingthingthingthingthing
    @thingthingthingthingthingthing 9 days ago

    I love channels that explain complicated things with small steps and examples that are understandable

  • @spegee5332
    @spegee5332 1 year ago +22

    oh man, this was my idea for SoME3! guess I gotta find a new one :P awesome to see this cool concept well explained though. Hope you keep making videos, you def know what you're talking about and the visualizations are great too.

    • @bobater
      @bobater  1 year ago +8

      Haha that's a cool coincidence! I was surprised to see that not many people had done videos on this topic when I was first thinking of making this video, maybe because it's somewhat more obscure and less important in the modern day. Thanks for the support!

  • @wesleytaylor-rendal5648
    @wesleytaylor-rendal5648 9 months ago +1

    I'm really impressed. I'm currently implementing a CORDIC calculation in an FPGA, both HLS and HDL implementations for two separate projects, but I have to admit this was a great refresher. Please keep it up. You're doing God's work.

  • @OmegaQuark
    @OmegaQuark 1 year ago +1

    Glad to be here before this channel blows up. Such amazing visuals and intuitions are a scarce thing

  • @Nim2
    @Nim2 1 year ago +1

    Kudos. This is one of my favorites of this SoMe so far.

  • @anon_y_mousse
    @anon_y_mousse 11 months ago +1

    I think you've just given me the insight I needed to figure out how the equations were derived in the first place. Taking the simple ratios of a unit circle and making them more complicated by stepping makes so much sense I don't know why I didn't think of it before, but hopefully I'll build up a better understanding of math because of this.

  • @herbie_the_hillbillie_goat
    @herbie_the_hillbillie_goat 1 year ago +2

    Thanks for making this. I've been trying to understand CORDIC for quite some time now.

  • @squ1dd13
    @squ1dd13 1 year ago +2

    fantastic video. i was really surprised to see you don’t have that many subscribers! you’ll be pleased to know you gained my subscription, and hopefully more people will find your channel soon. ❤ keep it up

  • @literallyjustayoutubecomme1591
    @literallyjustayoutubecomme1591 1 year ago +2

    Thank you for this video. I remember seeing a knowledgeable person on Reddit mention some techniques for trigonometric function computations, which included the CORDIC algorithm as well as Padé approximants. I had not seen an explainer on CORDIC aimed at non-experts yet.

  • @katefick4246
    @katefick4246 10 months ago

    This is awesome and so are you!!! Excellent graphics and clear explanation!

  • @sloshy1840
    @sloshy1840 1 year ago +1

    Such a beautiful presentation!
    I've always been curious about this

  • @marcfruchtman9473
    @marcfruchtman9473 1 year ago +1

    When I first started watching this video, I didn't expect that I would also learn some interesting pieces of knowledge re: Limits and series convergence. Thanks for the video.

  • @vari1535
    @vari1535 1 year ago +15

    Beautiful presentation! I especially liked how you built a visual intuition for the sine and cosine angle addition formulas and for L'Hôpital's rule rather than just presenting them as established facts (hell, you didn't even mention their traditional names--and there was no need!).

    • @byronwatkins2565
      @byronwatkins2565 1 year ago

      Presenting L'Hospital's work as his own is called Plagiarism; it is illegal.

    • @huvarda
      @huvarda 1 year ago +3

      @@byronwatkins2565 It's L'Hopital not hospital. Also, L'Hopital didn't discover that formula, his teacher Bernoulli did. Also also he didn't present it as if he discovered it, he just described how it works. This reply has to be one of the silliest things I've read today

  • @gunhasirac
    @gunhasirac 1 year ago +2

    Very impressive video. Every college-level concept is explained in simple words and graphs. Incredible work, sir.

  • @arijeetsarkar1512
    @arijeetsarkar1512 1 year ago +1

    What an amazing channel
    Hope that you grow to exponential heights.
    You help in visualization so well❤

  • @CMRTUBE
    @CMRTUBE 1 year ago +3

    This is insane. Love it!!

  • @chrisd561
    @chrisd561 6 months ago

    I've been trying to find this. Thanks!

  • @kaustubhpandey1395
    @kaustubhpandey1395 11 months ago +2

    Really cool entry.
    P.S. Love the effort in the 8-bit computer you made. I have seen Ben's series on it.

  • @osama_zn
    @osama_zn 1 year ago +5

    Wonderful video!! please keep going!

  • @walkingturtle6646
    @walkingturtle6646 1 year ago +1

    Excellent video, glad I discovered your channel. Hope to see more videos in the same style.

  • @MissPiggyM976
    @MissPiggyM976 1 year ago +1

    Very well done, thanks !

  • @Murderbot2000
    @Murderbot2000 1 year ago +1

    I really liked this. The solution that I arrived at (and a solution that I’ve used for other transcendental functions) is to use a Taylor series expansion with just enough terms to be within the precision of the LSB of the data type that I’m calculating with.

  • @zandrillon4764
    @zandrillon4764 1 year ago +2

    Very awesome video! Educational and funny!

  • @mr.fishfish570
    @mr.fishfish570 1 year ago +1

    You surprised me when you whipped out that breadboard computer.
    I actually made one following Ben's tutorial too!
    Although I have bugs to squash...

  • @EPMTUNES
    @EPMTUNES 1 year ago +1

    Nice video! Awesome song choice.

  • @yashshah7066
    @yashshah7066 1 year ago +1

    Incredible video!

  • @calvindang7291
    @calvindang7291 1 year ago +2

    i've never seen this proof for the trig identity before, that's cool

  • @someone_who_is_in_this_wor4536
    @someone_who_is_in_this_wor4536 1 year ago +3

    Now this is what I would like to actually like!

  • @wangpixu234
    @wangpixu234 1 year ago +1

    what a channel, please do a series on directed search algorithms.

  • @loszhor
    @loszhor 1 year ago +1

    Thank you for the information.

  • @Speak4Yourself2
    @Speak4Yourself2 1 year ago +1

    Excellent video.

  • @martinnguyen9720
    @martinnguyen9720 1 year ago +1

    Pretty good video and lots of good feedback in the comments. A few expanded explanations (now covered in the comments but would be better in the video itself) and slowing down the pacing a tad would be greatly beneficial. Great job!

    • @bobater
      @bobater  1 year ago

      Thanks for the feedback! This is only my 2nd video, so obviously there is still a lot to learn and improve on, but those are all things I can do better in my next video.

  • @ingenuity23
    @ingenuity23 1 year ago +8

    you just dropped an intuitive proof of l'hôpital's rule and an explanation of cordic together, loved the video

    • @artsmith1347
      @artsmith1347 1 year ago +1

      I thought what he did at least rhymed with l'hôpital's rule -- which has always seemed to be magic. This deserves a re-watch because this may be the best explanation anywhere on how and why l'hôpital's rule works.
      Likewise, the explanation of the CORDIC Algorithm is beyond excellent -- in both words and graphics.
      Subscribed.

    • @ingenuity23-yg4ev
      @ingenuity23-yg4ev 1 year ago

      @@artsmith1347 I think 3blue1brown also did a video on l'hôpital's rule and it's a similarly intuitive approach

  • @oceannuclear
    @oceannuclear 1 year ago +1

    Thanks for this! I've been meaning to understand CORDIC for a while but there's just too much fluff/ computer science background that I couldn't be bothered to read into. This cuts right through all the fluff (especially the (1/2)^n series at the beginning of the video) and helps me understand it!

    • @bobater
      @bobater  1 year ago +1

      That was exactly my goal in making this video, to just present the ideas and motives behind CORDIC without going too far into the technicalities of it. Glad you liked it!

  • @user-pd2lx8fb3n
    @user-pd2lx8fb3n 7 months ago

    This video is AWESOME

  • @Astr0B
    @Astr0B 1 year ago +1

    Great video

  • @sdspivey
    @sdspivey 1 year ago +7

    Your computer is not multiplying, it is repeated adding. That's why it is slow. Using a Shift-Add or similar algorithm, it should be much faster.
    At a minimum, adding "16" seven times would be twice as fast as adding "7" sixteen times.
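    For reference, a minimal sketch of the shift-add idea (binary long multiplication) in C; this is an illustration of the suggestion above, not code from the video. It adds one shifted copy of a per set bit of b instead of adding a over and over.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t shift_add_mul(uint32_t a, uint32_t b) {
        uint32_t product = 0;
        while (b != 0) {
            if (b & 1)        /* lowest remaining bit of b is set */
                product += a;
            a <<= 1;          /* next partial product: a * 2 */
            b >>= 1;          /* move on to the next bit of b */
        }
        return product;
    }

    int main(void) {
        printf("%u\n", shift_add_mul(7, 16));  /* 112, using 5 loop passes */
        return 0;
    }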

    • @bobater
      @bobater  1 year ago +9

      You are correct. Modern computers have multiply instructions that can complete in just a few clock cycles. The main purpose of showcasing the slower repeated addition algorithm was to show how bit-shifting is faster than multiplication by numbers that aren’t powers of 2, and to give motivation for that optimization

    • @Pence128
      @Pence128 1 year ago

      @@bobater Not the point. A computer with a parallel adder can multiply in linear time using grade 4 math. Your code runs in _exponential_ time.

    • @dsdsspp7130
      @dsdsspp7130 1 year ago

      @@Pence128 That most certainly isn't true. By that logic, since computers have parallel processors, everything parallelizable could be done in linear time.
      Obviously there is a finite number of these parallel adders, so the time complexity can never be linear. And what do you mean his code runs in exponential time? Multiplication by repeated addition runs in polynomial time.
      Plus, that is exactly the point: the comment says "Using a Shift-Add or similar algorithm, it should be much faster", which is literally explained in the video and used to explain the CORDIC algorithm. What's the problem here exactly?

  • @piman406
    @piman406 1 year ago +2

    You deserve more subscribers

  • @yolamontalvan9502
    @yolamontalvan9502 5 months ago

    This is cool stuff. Highly advanced for rocket engineers.

  • @RSLT
    @RSLT 1 year ago +1

    Impressive!

  • @morgan0
    @morgan0 1 year ago

    2:50 would it increase accuracy to split it into four quadrants each centered on an axis? (like +45 to -45, not 0 to 90) or would it simply make 90 degrees representable and nothing else?

  • @nalat1suket4nk0
    @nalat1suket4nk0 1 year ago

    lets go SoME3!!!!

  • @yewo.m
    @yewo.m 2 months ago

    Back when I was first learning to program (using C++), I was so enthusiastic that I wanted to make my own command-line calculator. At first I made it using just basic arithmetic operations. And then later on I added trig functions (along with the other typical math functions). But at the time I didn't know the standard library already provided built-in sine/cosine/tangent functions, so I ended up writing my own rudimentary ones using Taylor's series approximations 😅

  • @RyanFick1
    @RyanFick1 1 year ago +1

    Good video 👍

  • @NithinJune
    @NithinJune 1 year ago +1

    There’s a #SOME3 ??? i’m so excited?!!

  • @kodirovsshik
    @kodirovsshik 1 year ago +1

    Everybody's gangsta until bro casually pulls up a home-made CPU to run the algorithm

  • @sreyanchakravarty7694
    @sreyanchakravarty7694 1 year ago +2

    Thank you for this video. I have wanted to know this for over 10 years.

  • @medazizmhenni2249
    @medazizmhenni2249 1 month ago

    Even if the terms of a series are gradually getting smaller when n is getting bigger, the series can diverge. For example, the harmonic series (1/n) diverges even though the sequence 1/n converges to 0.
    In fact, if a series converges, then the corresponding sequence converges to 0. But, if a sequence converges to 0, the corresponding series can either converge or diverge.

  • @MrSilversMathSheets
    @MrSilversMathSheets 1 year ago

    This is a really nice video. I am excited about the content on this channel because it corresponds well to the content on my new channel. Your video is very professional and easy to understand - a feat considering the content.
    One quibble: the part about convergence misses a really important point - you need to cover all possible values. It turns out that the atan(.5^n) basis works, but not because the rate of convergence is 0.5. For example, tan(pi/4 * (.5^n)) would have the same asymptotic rate of convergence but would leave the range of computable values a Swiss cheese of holes. The property you need is that the sum of all remaining basis numbers must equal or exceed the last included basis number.

    • @bobater
      @bobater  11 months ago

      Good point. I'll admit this is something I missed when researching the video, but it makes intuitive sense as to why that property needs to be satisfied. Thanks for bringing this up.

  • @byronwatkins2565
    @byronwatkins2565 1 year ago

    How do you compute the atan(1/2^n) ?

  • @umedmaru4445
    @umedmaru4445 3 months ago

    Quick question. I was wondering if computers could use the complex plane to rotate points(by multiplying complex numbers), instead of using a rotation matrix. Is there a reason this isn't used?

  • @loweffortdev
    @loweffortdev 1 year ago +1

    🥲 Very cool video. You explain well. Best wishes man.

  • @Tynach
    @Tynach 1 year ago +1

    I'd be curious if there are any hardware-based optimizations for this. For example, with multiplication, you don't have to add one number over and over again.
    You can instead AND the two values together several times simultaneously, each time at a different offset, which gives you 'partial products' in O(1) time. Then, you can have a series of 'reduction stages', where the partial products with the same place value are added together. Each reduction stage runs in O(1) as well, but you do need O(log(n)) reduction stages, and they do need to run in series. Finally, after the reduction stages, you're left with two numbers that you simply add together to get the final value.
    Because 1, generating the partial products is O(1); 2, you only need O(log(n)) reduction stages; and 3, the final adder can be implemented as a carry-lookahead adder (or other fast adder), which also runs in roughly O(log(n)), it's said that this brings multiplication's time complexity down to O(log(n)). As for how many gates it takes, that depends on how you build your reduction stages. If you use the Wallace method, you'll end up with fewer bits to add at the very end (making even a ripple-carry adder viable). However, you'll have a significantly higher number of total gates than if you use the Dadda method, which instead has more bits at the end to add up. A compromise is the 'Reduced Complexity Wallace' (or RCW) method.
    It's worth noting, however, that this doesn't mean that O(log(n)) computations are performed with this method, just that so many are done in parallel that the duration of time it takes to perform these computations is O(log(n)).
    However, using multidimensional wizardry, Joris van der Hoeven and David Harvey finally (announced in 2019, published in 2021) devised an algorithm for binary multiplication using only O(n log(n)) binary operations. This leads me to suspect they may not be human, but instead advanced transdimensional aliens who perceive time and space non-linearly. This would explain why their method isn't feasible with current manufacturing techniques, and would require a substrate with more than 3 spatial dimensions.

  • @premkumarsr4021
    @premkumarsr4021 21 days ago

    May I know if this algorithm is better than Taylor series computation of sine and cosine?

  • @pcdwarf9152
    @pcdwarf9152 1 year ago

    Is it the algorithm used by the C math library on CPUs that do not have hardware acceleration of these functions? sin and cos are still pretty slow on small CPUs like 8-bit Arduinos. BTW, it seems to me that modern CPUs (after the Intel "Core" microarchitecture on PC) have an assembly instruction that does both sin and cos at the same time, and really fast. I think it's using hard-wired lookup tables and maybe linear interpolation. In fact, I do not really know how they do it so fast and I would be interested if you could explain it the same way you did for CORDIC.

  • @arielfuxman8868
    @arielfuxman8868 1 year ago +1

    Nice Vid

  • @tobysuren
    @tobysuren 1 year ago +9

    I'm not sure if I just missed it, but if to calculate inverse tangent you need to use this method and you need inverse tangent to use this method, how are the inverse tangents of 2^-n calculated?

    • @bobater
      @bobater  1 year ago +9

      Good question! There are actually other ways besides this algorithm to calculate these functions. They are usually more complex hardware-wise and might take more operations, which is where the advantage of using CORDIC over one of those comes from. One way to calculate those values in advance, though, would be to use a Taylor series, which is a way to approximate functions with polynomials. Another thing to note is that modern, more powerful computers probably don't use the CORDIC method, but some of the ideas from it are still used.

    • @aleksandersabak
      @aleksandersabak 1 year ago +3

      @@bobater So to actually use the method you presented, we need precomputed values of all the inverse tangents of powers of two we're going to use, and the final product of cosines?

    • @bobater
      @bobater  1 year ago +2

      @@aleksandersabak Yes, we need both the cos constant and each angle, so that we can scale the vector correctly at the end and know what total angle we've rotated by.

    • @3snoW_
      @3snoW_ 1 year ago +2

      @@bobater Then how do we know whether to add or subtract the next angle? How do we iterate to the desired angle if we don't know the values of the angles we're adding/subtracting to reach it?

    • @bobater
      @bobater  1 year ago +2

      @@3snoW_ You're correct, we do need the angles, we just don't need to actually calculate their tangents. Sorry for the confusion.

  • @FinnCB
    @FinnCB 1 year ago +1

    New sub here! Very interesting video.

  • @redf1sh
    @redf1sh 11 months ago

    I didn't understand how we should get the tan(x) values. Do we have a precalculated table of tan(45), tan(22.5), ...?

  • @MrSilversMathSheets
    @MrSilversMathSheets 10 months ago

    I was disappointed that this did not receive a mention in the final announcement. I hope you continue to make videos though.

  • @alekseikarlovich9262
    @alekseikarlovich9262 1 year ago +1

    awesome video! always wondered how these things worked

  • @dyld921
    @dyld921 1 year ago +1

    A quicker way to prove convergence of the inverse tangent series: Note that tan^-1(x) < x for x > 0, therefore the sum of tan^-1(2^-n) is less than the sum of 2^-n, which is finite.
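    Written out (just restating that comparison in LaTeX):
    \tan^{-1}(x) < x \ \text{for } x > 0 \quad\Rightarrow\quad \sum_{n=0}^{\infty} \tan^{-1}(2^{-n}) \;<\; \sum_{n=0}^{\infty} 2^{-n} \;=\; 2.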

  • @yolamontalvan9502
    @yolamontalvan9502 5 months ago

    Thank you for this lesson. I learned something beyond my capability. You forgot to mention the software you use to make those circles and triangles.

  • @RayanMADAO
    @RayanMADAO 1 year ago +1

    So magic

  • @Wolf-if1bt
    @Wolf-if1bt 1 year ago +1

    How do you handle the accumulation of errors at each step? And how do you compute sin(z) where z is complex?

    • @wafikiri_
      @wafikiri_ 10 months ago

      For your second question, I have an answer. A complex number is the sum of a real part and an imaginary part, so its sine is a sum of products of sines and cosines of those two parts, by the formulae shown in the video for the sine and cosine of a sum of angles. The sine and cosine of the real part are not a problem. But what are the sine and cosine of imaginary values? They are just the hyperbolic sine and hyperbolic cosine of the corresponding real value as argument, i.e., the imaginary value divided by i. So you need a way of computing hyperbolic functions in order to compute the sine (or the cosine) of a complex number.
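      Spelled out as a tiny C sketch (added here for illustration; the expansion sin(a + bi) = sin(a)cosh(b) + i*cos(a)sinh(b) is the standard identity being described):

      #include <math.h>
      #include <stdio.h>

      /* The real-argument sin/cos could come from CORDIC; the hyperbolic parts
         need a separate routine (e.g. a series, or CORDIC's hyperbolic mode). */
      static void complex_sin(double a, double b, double *re, double *im) {
          *re = sin(a) * cosh(b);
          *im = cos(a) * sinh(b);
      }

      int main(void) {
          double re, im;
          complex_sin(1.0, 0.5, &re, &im);
          printf("sin(1 + 0.5i) ~ %f + %fi\n", re, im);
          return 0;
      }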

  • @vikingthedude
    @vikingthedude 1 year ago +1

    These graphics are beautiful. Were they made in Manim?

  • @RealMusicHype
    @RealMusicHype 1 year ago

    00:24 Very good!

  • @Petch85
    @Petch85 1 year ago +1

    Great video

  • @IshanBanerjee
    @IshanBanerjee 1 year ago +1

    Thats amazing work ❤

  • @gigantopithecus8254
    @gigantopithecus8254 8 months ago

    I found my own method for calculating cos(x):
    1. Choose accuracy n.
    2. Divide the input number by 2^n-1.
    3. Square the number and subtract 2 from that.
    4. Repeat step 3 n times.
    5. Divide the answer by two.
    This works due to the double-angle formula and Chebyshev polynomials.
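    A hedged sketch of one reading of this recipe (my interpretation, not verified against the commenter: taking "2^n-1" to mean 2^(n-1), and running step 3 n times in total). The first squaring turns the scaled-down angle t into roughly t^2 - 2 ≈ -2cos(t), and every squaring after that doubles the angle via (2cos t)^2 - 2 = 2cos(2t).

    #include <math.h>
    #include <stdio.h>

    static double cos_by_doubling(double x, int n) {
        double y = x / ldexp(1.0, n - 1);   /* t = x / 2^(n-1), a tiny angle */
        for (int i = 0; i < n; i++)
            y = y * y - 2.0;                /* ~ -2cos(t), then 2cos(2t), 2cos(4t), ... */
        return y / 2.0;                     /* 2cos(x) / 2 = cos(x) */
    }

    int main(void) {
        /* Moderate n works best; cancellation near +/-2 limits very large n. */
        printf("%.6f vs %.6f\n", cos_by_doubling(1.0, 12), cos(1.0));
        return 0;
    }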

  • @sifodyas_
    @sifodyas_ 1 year ago +1

    Awesome, what do you use for the animation?

    • @bobater
      @bobater  1 year ago

      Thanks! This video was made with Manim CE

  • @RisetotheEquation
    @RisetotheEquation 1 year ago +1

    Hey Bobater, great job! I was wondering if any of this might help explain the super cool phenomenon of tan 89, tan 89.9, tan 89.99, etc. Why they keep going up by factors of 10 has always been something I've been curious about.

    • @Noname-67
      @Noname-67 10 months ago

      It's easier to see when you convert them to cotangents, using the fact that cot x = tan(90 - x). This gives cot 1, cot 0.1, cot 0.01. They scale by approximately a factor of 10 because cot x is close to 1/x for small x, using sin x ≈ x and cos x ≈ 1.
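      Worked out numerically (my own arithmetic, converting the small complementary angles to radians):
      \tan(89.9^\circ) = \cot(0.1^\circ) \approx \frac{1}{0.1 \cdot \pi/180} \approx 573, \qquad \tan(89.99^\circ) = \cot(0.01^\circ) \approx \frac{1}{0.01 \cdot \pi/180} \approx 5730,
      so dividing the complement by 10 multiplies the tangent by roughly 10.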

  • @user-vf2lx5eo6p
    @user-vf2lx5eo6p 10 months ago

    Thanks.

  • @inyobill
    @inyobill 1 year ago +1

    "How CAN computers compute (trig functions)" vs. "How DO computers ...". The hard real-time systems I worked on used Taylor Series approximation. 3 or 4 terms are adequte to achieve required accuracy (usually LSB).

  • @hydropage2855
    @hydropage2855 1 year ago +1

    My calculus teacher last year taught us about Taylor series expansions, and he based the explanation off wondering how calculators get trig values, implying they use Taylor series. I spoke with him later in private to explain that there exists this algorithm and calculators really don’t use series, which he was surprised about

    • @mattiaviola7152
      @mattiaviola7152 1 year ago

      Maybe your teacher was not totally wrong. In the video the author says that this method is only one option, used in old computers.

    • @hydropage2855
      @hydropage2855 1 year ago +1

      @@mattiaviola7152 I just didn’t appreciate that he didn’t point out that it was an outdated or primitive way of doing it, he asserted it was how all calculators did it, which is misinformation. I spoke to him in private about CORDIC later

  • @gokaytaspnar1355
    @gokaytaspnar1355 1 year ago

    Do we precalculate the tan values like this ?
    tan(45) = 1
    tan(26.565) = 1/2
    tan(14.036) = 1/4
    tan(7.1250) = 1/8
    tan(3.5763) = 1/16
    tan(1.7899) = 1/32
    tan(0.8951) = 1/64
    tan(0.4476) = 1/128
    tan(0.2238) = 1/256
    tan(0.1119) = 1/512

  • @mattiaviola7152
    @mattiaviola7152 1 year ago +1

    Thanks, really very nice, I love manim. One more clarification needed: I would expect K to depend on the number of iterations, not be constant. Do you store in memory a vector of the first, say, 10 K-Values?

    • @bobater
      @bobater  1 year ago +1

      Thank you! Yes, the K value changes slightly depending on the number of iterations used to calculate it. However, it converges very quickly, so storing the calculated value for something like 10 iterations is sufficient as long as we do about that many iterations when we're running the algorithm.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 4 months ago

    In the old days, I calculated the arctan in assembler using CORDIC. There is a mathematical proof of CORDIC somewhere.

  • @shuba5173
    @shuba5173 1 year ago +1

    animations are great

  • @aguyontheinternet8436
    @aguyontheinternet8436 1 year ago +2

    I wish you had made this video a year ago, when I searched for this in my first year of high school geometry.
    At the start, you joked about being told that the answer was "wizard magic", but that is genuinely what I was told in school. Why does YouTube have a better curriculum than the education system for math anyways?
    Oh, this is for SoME3. So I guess the answer to my question is 3B1B.

  • @BB-mr3vy
    @BB-mr3vy 5 months ago

    how do you keep track of what theta is in order to compare it to the target angle? i understand how you track the x and y coordinates, but not the angle.

    • @bobater
      @bobater  5 months ago

      Good question! We can store the total angle that we’ve rotated by in memory. Each time we iterate the algorithm, we can add or subtract the angle that we’ve just rotated by with our total angle so that we know what angle we are currently at and can compare it with the target angle.

    • @BB-mr3vy
      @BB-mr3vy 5 months ago

      Thanks for your response. My confusion was more about how you actually know what angle you're adding, since we only know them as arctan of 2^{-k}, not an approximate decimal value which we could sum and keep track of. Do you just start by calculating decimal approximations of arctan(2^{-k}) in some other way for a sufficiently large number of values of k, and then store them for use in the CORDIC algorithm? @@bobater

    • @bobater
      @bobater  5 months ago +1

      Yep, that's exactly right! We can precalculate the angles using some other method and then store them in a lookup table in memory, which we can reference while using the algorithm.

  • @CesarGrossmann
    @CesarGrossmann 3 months ago

    Very interesting.
    Is that how scientific calculators do this kind of calculations? Like the venerable TI-30?

  • @manameisjeffie656
    @manameisjeffie656 1 year ago

    I have a question: how do you know when to rotate a vector backwards or forwards (subtract or add the angle)?
    EDIT: actually this sounds kinda dumb, since you can check if the currentAngle is bigger or smaller than the desiredAngle and add/subtract the next iteration of the angle

    • @arnoldn2017
      @arnoldn2017 1 year ago

      One simple way of distinguishing between forward and backward rotation is to use complex numbers, forward rotations can be done by multiplication of two complex numbers and backward rotations by division

    • @manameisjeffie656
      @manameisjeffie656 1 year ago

      @@arnoldn2017 That does work, but how do you know whether to multiply or divide to increase/decrease the angle? That was the question I was asking, until I realised how it worked. It is still pretty interesting how you can find sin(x) and cos(x) by looking at complex number properties (cos(x) + i*sin(x) = e^(i*x)).
      Also, doesn't this remind anyone of how conditionally convergent series work?

  • @alexciobanu3819
    @alexciobanu3819 1 year ago

    ty )

  • @walkingpizza1796
    @walkingpizza1796 1 year ago +1

    WOW!

  • @yoyobom1130
    @yoyobom1130 1 year ago +2

    This video is really really really mind-blowing

  • @tantuz1128
    @tantuz1128 1 year ago +4

    Nice and informative video. However, this is not how modern computers compute trig functions. Multipliers (even floating point) are cheap and abundant. A few layers of small lookup tables combined with complex multiplication will give you a lot of angle resolution and precision. Or a small LUT + a Chebyshev polynomial.

    • @TrickyNekro
      @TrickyNekro 2 months ago

      Watch until the end. And sure, unless your FPU has explicit trig functions (if you have an FPU at all), this method is definitely still used today, albeit in hardware-restricted environments.

  • @gabedarrett1301
    @gabedarrett1301 1 year ago +1

    12:08 Each term in the harmonic series is smaller than the one before, but it doesn't converge

    • @loganpockrus6882
      @loganpockrus6882 1 year ago

      However, taking the limit as n approaches infinity of [1/(n+1)] / [1/n], which is equivalent to [n / (n+1)], leads to 1, and 1 is not less than 1. Hence, the ratio test does not say the harmonic series converges.
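      For contrast, the CORDIC angle series passes the same test, since \tan^{-1}(x)/x \to 1 as x \to 0:
      \lim_{n\to\infty} \frac{\tan^{-1}(2^{-(n+1)})}{\tan^{-1}(2^{-n})} = \frac{1}{2} < 1,
      so the ratio test does guarantee convergence there, while for the harmonic series the limit is 1 and the test is inconclusive.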

  • @nomzz1
    @nomzz1 10 months ago

    Can you make one explaining how a logarithm is calculated? I've always wanted to know.

  • @zxborg9681
    @zxborg9681 5 months ago

    Super interesting, I always wondered how CORDIC worked. Question, though: at 12:19, I'm not sure that the ratio of a[n+1]/a[n] is really sufficient to prove convergence, since the latter terms of the harmonic series (1/2 + 1/3 + 1/4 + ...) satisfy that test but the series still doesn't converge. Maybe I'm missing something though. Anyways, great video!

    • @bobater
      @bobater  5 months ago

      Thanks for watching! The key with the ratio test is that we're evaluating the ratio as a limit as n approaches infinity. For the harmonic series, that ratio approaches 1. Since the ratio isn't less than 1, the test does not tell us that the terms of the harmonic series are shrinking relative to each other in the limit, and therefore doesn't imply that the harmonic series converges, which is why there is no contradiction in using the test.