"for calculation purposes, let asume this cow is perfectly round"
@@danielyuan9862 Considering the digestive system, isn't a cow more related to a donut?
let’s assume this cat is a cube
@@thepiratepeter4630 but aren't there more than one orifice?
@@sleepycritical6950 But the other orifices aren't "tubes"
@@thepiratepeter4630 still counted. A sphere with a hole is no longer a sphere. A torus with a hole on the surface is also not a torus.
I'm an engineer
I see approximation
I click
Same too me!!
Same for me!!
and it was just what I expected
"Approximations"
Oh cool
"The Engineering way"
_oh boi this is gonna be good_
I'm only at 0:16 and I'm already having numerical computing class flashbacks (took that class ten years ago). Newton-Raphson, Regula Falsi, Runge-Kutta. It's all coming back.
Gauss-Seidel, Picard aaaaah
Just finished it two weeks ago...
AAAHHHHH
Bisection method :D
I learned FORTRAN in uni when doing this stuff! I'd forgotten I once knew FORTRAN!
Just had this Yesterday 😂
Math never fails to surprise me, I could not even think such a thing could exist
The original special case for square roots is called "The Babylonian Method" because it was invented by a Greek mathematician living in Egypt.
I think it was named by an engineer who decided "Greece and Egypt ≈ Babylon"
We’re doing this in my calc class rn and I swear to god you explain it better than my professors
Fun fact: The number of correct digits roughly *doubles* with each iteration of Newton's method. So for example you could compute 1 billion digits of sqrt(17) with about 30 iterations.
While this is true, it only works when you already know at least one correct digit. If the initial guess is way off, you'll only get halfway closer to the solution. That's why having a good initial guess is important.
The number of correct digits depends on how tight the margins are. Whether the margins are loose or tight, you'll have to vary your input. Sometimes a single-digit input works; sometimes a handful of digits are needed.
The relevant theorem here is that there is a small domain about any attractive fixed point in which convergence is quadratic.
@@rsa5991 yes but we know the first digit of every square root
@@Naverb proof?
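The digit-doubling claim in this thread is easy to check numerically. A minimal sketch (Python, using the stdlib `decimal` module for extra precision; the starting guess of 4 and the iteration count are arbitrary choices for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80
c = Decimal(17)
ref = c.sqrt()            # high-precision reference for sqrt(17)

digits = []
x = Decimal(4)            # one correct digit to start, since 4^2 = 16
for _ in range(5):
    x = (x + c / x) / 2   # Newton step for f(x) = x^2 - c
    err = abs(x - ref)
    # -log10(error) is roughly the number of correct decimal digits
    digits.append(int(-err.log10()))

print(digits)  # the count roughly doubles each iteration
```

Running it shows the correct-digit count climbing roughly geometrically, which is what quadratic convergence looks like once the iterate is near the root.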
Please keep making these so I can make it through college.
Mathematicians: We need exact solutions!
Engineers: Nah, "close enough" is good enough.
Right! We first determined what percentage of error is acceptable, then we stopped iterating. Btw, they went to the moon calculating with a slide rule: only 3 decimal places, 4 with estimation.
But those numbers are irrational; we will never have an exact solution. The estimate becomes synonymous with the exact value for any actual application, and for anything abstract we just keep it as is, sqrt(a)
Applied Mathematicians: We need to get exactly close enough!
The Forbidden Math
Nice Clock and Watch, where can I get one of deeze, Zach? :^D
Hi Papa flammy
Papa
father
Daddy
I would highly recommend them you can get them on stemerch.com :) papa flammy
That's a really cool formula
"Why be right when you can approximate?"
Why get a girlfriend when you can get a proxy mate.
That square root approximation is elegantly simple. Each guess is just the average of the previous guess, and the number over that previous guess. As you approach the root, it becomes the average of the root and the number over the root (number over root is the root). So beautiful
Good observation!
Ah, the fundamental theorem of engineering.
2 = e = π = 3
this is the first thing you learn in engineering college
@@Ryanisthere haahhahahhahhahha awesome😁😁😁😂😂😂 engineer for ever😎😎😎
and sin(x) = x 😂😂
@@Ryanisthere i don't quite get these jokes. Aren't engineers supposed to be precise so that buildings don't fall and circuits don't burn? Using pi=3 would be a fukin travesty, right?
@@black_jack_meghav r/woooosh
dude I was just expecting to get some stuff like pi = 3 = 3 or g^2 = 10 or something like that, but I actually learned a lot!
I had to use the newton raphson method in my engineering career a few years ago to approximate a function (solving a Civil Engineering equation backwards with multiple square roots in weird places) that otherwise converges on a few nonreal/negative answers and one real, positive one I was looking for. I never thought I would actually apply it in my life when I learned it, but it felt so cool to have a real world application for it! Made me realize that weird, theoretical math part of my degree wasn't quite such a waste of time after all!
I did a Bachelors thesis partly on this, when I finally got how it worked when I saw it, it was almost magical.
Interesting topic! This reminds me of programming in a BASIC interpreter 40 years ago. At that time the value of PI was not implemented; the solution was 4*arctan(1), which gave PI with the accuracy of the device's BASIC.
What a great video 👌
It would have been such a great starting point for me a while back when I was writing GPU algorithms for fast square and cube roots of float 32 and float 64 values.
Managed to get them super fast combining Taylor series expansions, the power laws and the good old Newton raphson iteration. If I remember correctly, about 3ns to compute cube root to fp64 precision.
We ❤ approximations!
Honestly, sometimes wanting an exact solution is lazy. People don't realize how much math goes into designing numerical methods and proving their convergence and stability.
Most square roots can only be approximations since they are irrational. There is no exact solution unless you write it with the square root symbol. If you want to use just digits, it is going to be an approximation. To anyone who says “just use a calculator”: guess what? The calculator uses an algorithm to find the square root up to the number of digits the calculator can work with.
Really cool to see these real world applications- the way you teach math makes it fun and interesting!
The effort put into these videos is just amazing. And the educational content, truly first class. Keep up the good work Zach!
As an Engineer I relate to these useful approximations. Thank you so much for theses examples and explanations!
We’re literally on this exact topic in calculus right now
Quickly becoming my favorite youtube channel!
first law of engineering: everything is linear
Sinx=x
@@fgvcosmic6752 0?
quality content
as always
Chebyshev Approximations are also very useful.
Zach : It's possible to get stuck in an infinite loop.
Float error : IT'S MY TIME TO SHINE
Great timing. I'm starting my numerical analysis class at uni tomorrow
Doing it at engineering school, and very happy to find it on YouTube! Thanks
Thank you for bringing context to an otherwise "insignificant" topic covered for 15 mins in a first year calculus course! I thought I hated math, but I've just been missing out on how much fun it can be once you wrap your head around the concepts
Holy crap thanks for explaining this, the random pdfs that I found on the internet are confusing as hell.
I have a Numerical Analysis midterm in 8 hours so i clicked on this as soon as i saw it in my sub box, thanks ^^
could have used this video last semester during numerical methods. you explained it better in 14 minutes than my prof did in 3 lectures
Absolutely beautiful. I learned that stuff year ago at the university, but you described it so so much better.
Numerical analysis is the coolest class of functions that have already been written for you
Looking forward to a video about Numerical Analysis, I'm taking it in the fall!
Long ago I wrote an integer square root on a DSP processor. It used the DSP's single-cycle multiplier to create the square, then it compared it and set one output bit. After 16 loops I had a 16-bit result.
Great video! I was wondering if you would mention the Quake fast inverse square root and then bam! Awesome. Keep up the great work!
The way your computer calculates square roots (assuming it's a recent computer) is using a related method, Goldschmidt's algorithm. Let Y be an approximation to 1/sqrt(n). Set:
x_0 = Y*n
h_0 = Y*0.5
And iterate:
r_i = 0.5 - x_i * h_i
x_{i+1} = x_i + x_i * r_i
h_{i+1} = h_i + h_i * r_i
Then x_i converges to sqrt(n) and h_i converges to 1/(2*sqrt(n)). As hinted at in the video, some approximations have advantages over others. In this case, the advantage is that the "inner loop" is three copies of the same operation a + b * c, called a "fused multiply-add". This saves on circuitry compared to Newton-Raphson methods.
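A sketch of that iteration in Python (the 5% seed error and the iteration count are illustrative choices, not part of the comment; real hardware would take the seed from a lookup table):

```python
import math

def goldschmidt_sqrt(n, iters=5):
    y = (1.0 / math.sqrt(n)) * 1.05  # seed approximating 1/sqrt(n), deliberately off by ~5%
    x = n * y                        # x converges to sqrt(n)
    h = 0.5 * y                      # h converges to 1/(2*sqrt(n))
    for _ in range(iters):
        r = 0.5 - x * h              # all three updates share the a + b*c shape
        x = x + x * r
        h = h + h * r
    return x

print(goldschmidt_sqrt(17.0))  # ≈ 4.1231056...
```

Note the invariant x/h = 2n: both variables are scaled by the same (1 + r) each pass, so once the product x*h reaches 1/2, x^2 = (x*h)*(x/h) = n.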
This is so incredibly helpful. I literally had a numerical analysis assignment last week where we had to use Newton Raphson
Dude, I really have to watch all your videos about engineering’s stuff. im in my second year and there is a lot of things i have to be familiar with
The quake 3 fast inverse square root video got me into watching these kinds of videos. Now that's a meme you'll want to see.
Applied Numerical Methods. I don't remember the exact name, but I remember a technique which converts a definite integral into a weighted sum of two (or any natural number of) terms. Gauss quadrature rule, was it? I honestly was intrigued by this method.
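For what it's worth, the 2-point Gauss-Legendre rule the comment seems to recall is small enough to sketch (nodes at ±1/sqrt(3) with weight 1 on [-1, 1]; it is exact for polynomials up to degree 3):

```python
import math

def gauss2(f, a, b):
    # 2-point Gauss-Legendre quadrature, mapped from [-1, 1] onto [a, b]
    mid, half = (a + b) / 2, (b - a) / 2
    t = 1 / math.sqrt(3)
    return half * (f(mid - half * t) + f(mid + half * t))

# integral of x^3 from 0 to 2 is 4; two function evaluations reproduce it
print(gauss2(lambda x: x**3, 0.0, 2.0))
```

Two evaluations matching a degree-3 integral exactly is the "two terms" trick: n Gauss points integrate polynomials up to degree 2n - 1.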
Just to contribute an interesting point here. Arguably the most significant piece of evidence we have when it comes to the global regularity problem for the Navier-Stokes equations is Terence Tao’s work on the subject. His biggest paper on the subject showed that for an approximated form of the Navier-Stokes equations (one that has been averaged in an extremely specific and accurate way) blow-up results occur.
The relevance of this is two fold
1. This may very well be one of if not the most complicated approximations ever thereby showing how approximations are an important part of math and science at every level
And 2. It shows that even pure mathematicians can use approximations to create partial progress on the toughest problems ever. That result was huge as it showed both that there is a possible pathway toward a full solution and it also showed that any attempt at proving global regularity in the positive would require methods which delve into the finer nonlinear structures with the full pde that got averaged out in the approximation. In many ways, this paper is why most of the community believes that global regularity for Navier Stokes is going to be solved in the negative whenever it happens.
This video would have been glorious half a year ago... Had a University course in evolutionary game theory and literally all of it was linear approximation because biological/evolutionary models are only estimations and I did not understand what a fixed point was. Seems so easy now...
Thanks a lot!
Damn it's been ages since I did maths "properly", but this was really accessible and a good reminder of how it all slots together. Thank you!
Right after watching this video, I listened to Bob Dylan singing "Queen Jane Approximately" from "Blonde On Blonde". Dylan really sucks at rigorous explanation, and Newton-Raphson is also well-presented elsewhere ad nauseam. I understand that going beyond the basics is more difficult, which makes producing lots of videos less likely, and maybe no one will ever even look for the next steps. That is the dilemma of the youtube STEM educator, and is in large part why MIT's OCW series and similar stuff exists and is valuable. That said, it's great that you are reaching out to learners who are just starting out. Well done, man. L'chaim.
Some approximations are quite good. If you use 22/7 for the value of Pi then on a 100 ft. diameter circle the circumference error is ~one and a half inches.
I heard about it before but was thinking why isn't it too famous thanks for elaborating it. I always wanted to know more about it keep it up😀😀😀👍👍🙏🙏
I read this under the heading computational methods
TODAY!!
Thanks a lot for the amazing info dude, it's satisfying to get stuff explained by you
10:43 that iteration method is just computing the finite simple continued fractions of the golden ratio, and will converge to its simple continued fraction.
A great opportunity to bring up that topic :D
Thanks for the timely video and inspiration! Just finished related rates in Stewart's calculus and the literal next section is linear approximations. Loved this video and can't wait to be thoroughly confused by that coming numerical analysis video lol
Yas! You posted something on your OG profile! LIT 🔥
That was one of the coolest videos about a table on my calculus book that I took as magic
One that I can remember (I picked it up from one of Clive Sinclair's companies) is that Pi to 6 decimal places is 355/113. Dates back to the early calculators of the 70s, before scientific calculators were available at affordable prices.
BTW - 3550001/1130001 does this to 8 decimal places.
Very awesome video Zack.. Keep up the good work..
Another example of numerical approximation of things that are hard to arithmetically calculate is the matrix inverse. Similar to the iteration of the square root, there is a simple iteration process that leads to a good approximation of the matrix inverse, which takes way longer to compute than the square root, both on a calculator and by hand
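Assuming the comment means something like the Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), here is a hedged sketch; the seed A.T / (||A||_1 * ||A||_inf) is a standard choice that guarantees convergence for an invertible A, and the 2x2 matrix is just an example:

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    # X_{k+1} = X_k (2I - A X_k); converges quadratically once ||I - A X|| < 1
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe seed
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(newton_schulz_inverse(A) @ A)  # ≈ the 2x2 identity matrix
```

Each step costs two matrix multiplications, which is why this is "way longer" per iteration than the scalar square-root update.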
I'm currently taking a numerical analysis course right now, and this 10 minute video made more sense than the whole class has this semester -.-
Please make videos like this.
It was a wonderful video.
Numerical Methods was one of the more rigorous and work-intensive courses in my mechanical engineering workload so far
ooh boi i am going through these in my current semester and already coded the fn for the iterative method and Newton-Raphson, loved to know more on it😊
0:00 I've been studying and practicing English for the last 22 of my 25 years of age, but only now I found out that, unlike my mother tongue, English has two separate words for clocks and watches, even though I've known and used both words for years now.
11:53 both solutions of the equation are the golden ratio, but one is the longer side/shorter side and the other one is the reciprocal, shorter side/longer side
i also have an amazing approximation technique. it's done like this: "hmm, root of 17 has to be more than 4 since 4^2 = 16 but less than 5 since 5^2 = 25; it's closer to 4, so for all intents and purposes it's 4"
There is a better equation for finding the square root: X_{n+1} = A/2 + C/(2A), where A is our first estimate and later the subsequent values as we go on solving for X_2 and so on... If we get our first estimate near the true value (that shouldn't be too difficult), we can solve the square root of C in 2-3 lines only !!
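That is the same Babylonian/Newton update from the video in a different arrangement. A quick sketch (the seed 4 and the 3 steps follow the comment's "2-3 lines" suggestion; they aren't fixed requirements):

```python
def babylonian_sqrt(c, a, steps=3):
    # X_{n+1} = A/2 + C/(2A), feeding each result back in as the new A
    x = a
    for _ in range(steps):
        x = x / 2 + c / (2 * x)
    return x

# with a first estimate near the true value, 3 steps are already very close
print(babylonian_sqrt(17.0, 4.0))  # ≈ 4.1231056...
```

Algebraically A/2 + C/(2A) = (A + C/A)/2, i.e. the "average the guess with the number over the guess" rule mentioned elsewhere in these comments.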
You should definitely talk about the finite element method. Approximating differential equations is a huge deal in engineering (especially civil/mechanical/aerospace)
The clock looks awesome
Diophantine approximation is a surprisingly interesting area of number theory too.
I went through the first 4 minutes of this just thinking huh, this reminds me a lot of Newton-Raphson that I learned last summer in Numerical Comp.
Everyone in the comments section: it was about time that you decided to finally make a video on this
Thank you very much for this video.
there are some really cool algorithms. First order methods that use only the derivative and second order methods that need fewer iterations but are damn expensive. @Zach Star Please make a video on gradient descent. Hopefully some of my students will see the simple version and we can move directly into the more involved variants. There is plain gradient descent, smooth gradient descent, accelerated gradient descent, mirror descent, coordinate descent, BFGS and L-BFGS.
Finally, a good way of writing down Heron's method
I took numerical analysis in uni (I think it was called numerical methods) and they recommended to have two scientific calculators to iterate calculations more efficiently (if we're not going to bring our laptops to use excel in class)
Awesome video!
This is the Newton-Raphson method. Edit: I commented this when I was at 0:30 into this video.
I already know how this method works but this video was a really good visual explanation . :D
Video would have helped so much in understanding my numerical methods class if it was a year ago
The thing is, if you take x_{n+1}=1+1/(x_n), you'll eventually converge to the golden ratio, but if you use Newton's method on f(x)=x^2-x-1, you have x_{n+1}=((x_n)^2+1)/(2*x_n-1), and if you try this, you'll also reach the golden ratio, but it's MUCH faster. This is because, as the approximation gets closer to the correct answer, the graph more closely resembles a line, and since Newton's method assumes the graph is a line at every step, the rate at which you reach the answer increases drastically.
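A quick comparison of the two iterations (the 1e-12 tolerance and the starting point 1.0 are arbitrary choices):

```python
phi = (1 + 5 ** 0.5) / 2  # golden ratio, the common limit of both iterations

x, fp_steps = 1.0, 0
while abs(x - phi) > 1e-12:
    x = 1 + 1 / x                    # fixed-point iteration: linear convergence
    fp_steps += 1

y, newton_steps = 1.0, 0
while abs(y - phi) > 1e-12:
    y = (y * y + 1) / (2 * y - 1)    # Newton on x^2 - x - 1: quadratic convergence
    newton_steps += 1

print(fp_steps, newton_steps)  # the fixed point needs dozens of steps, Newton a handful
```

The fixed-point error shrinks by a constant factor of about 1/phi^2 per step, while Newton's error is roughly squared per step, which is exactly the "MUCH faster" the comment describes.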
can't wait for the numerical analysis examples that took way longer than expected :-)
i remember this when i took numerical method class. we used loop method to program this
approximations the engineering way: 𝝅=e=3, g=10m/s²=9=𝝅²=e²
Ok, so I just tried the square root formula on excel and it is so damn satisfying.
Beautiful!
Good stuff. Nice job.
Wait a minute… back in calc 1 I only learned (f(x+h) - f(x)) / h. Holy crap, this brought a whole new meaning lol
great vid as always
THANK YOU.
Where was this video at the start of the semester. Could have saved me so much time trying to get the early chapters in the numerical analysis class I am taking...
This video came out approximately on my birthday
This reminds me of another kind of approximation. Take 50% of a number. In the first iteration, let's call it a distance e.g., 1", the first iteration is 1/2", then 1/4", then 1/8", next 1/16". Notice that the distance covered in just 4 iterations is almost 94% of the total distance of 1. This same logical/simple process applies to almost any subject. If you are learning to play a musical instrument, your skills will increase exponentially in the beginning. However, if your objective is to become an expert, the skill level will increase very slowly, after the 4th iteration. Therefore, from a business point of view, one must define the point of diminishing returns.
Just prove lim x_n = sqrt(c), but that wouldn't be engineering style
A decent introduction to Numerical Analysis
A control theory and applications would be cool to control systems as well as machine learning applications also love the video applications to engineering with algorithms used in matlab and simulink modeling and simulation is such a great field!
I only learned something similar, I think. It was based on Taylor series, and you got out of it the approximate value after iteration and how big the error was.
I don't remember it very well - it was years ago - but I think if you wanted, for example, sqrt(10), you just used sqrt(9) and sqrt(16) - so basically bracketed the number - or used (I don't remember if you needed a bigger and a smaller one, or just 1 number that was close and known) an easy number with the same function (in this case sqrt(x)) to start the approximation. Sorry - I learned it while learning Calculus. It looks a bit like the second method at 12:25 - but I'm not 100% certain.
To simplify calculations let e = 3, pi = 3, and 3 = 2.9!
I knew this method but didn't think about using it like this