17:00 This just blew my mind. I never realised that this is where Euler's formula comes from, even though it has been right in front of my eyes so many times.
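(A minimal numerical check, not from the video: summing the Maclaurin series of e^(ix) reproduces cos(x) + i·sin(x); the helper name exp_series is just illustrative.)
import numpy as np
from math import factorial

def exp_series(z, n_terms=30):
    # Partial sum of the Maclaurin series of e^z: sum of z**k / k!
    return sum(z**k / factorial(k) for k in range(n_terms))

x = 0.7
approx = exp_series(1j * x)
print(abs(approx.real - np.cos(x)))  # ~1e-16: real part matches cos(x)
print(abs(approx.imag - np.sin(x)))  # ~1e-16: imaginary part matches sin(x)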
I'm digging the subjects that you're presenting on lately. They're much more my level and I'm super excited to look over them all. Just wish that there was more time in the day to "do it all".
Apart from the really thorough explanations, his ability to write so well in reverse always amazes me
Thanks!
@@Eigensteve Got your book Data-Driven Science and Engineering today, really looking forward to reading it.
This marriage of math and programming is heaven. Thanks!
Hey Steve, just wanted to let you know that 'I love you' Platonically ;) thank you for everything you do.
All these years after college, I finally understand what the Taylor series really is! Prof. Brunton has supreme taste in presenting knowledge and makes the concepts really intuitive.
Steve, you're amazing! Your explanations are on point, with the perfect amount of detail and an impeccable choice of words. These are immensely helpful! Thank you so much!
This is why the small angle approximation works.
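(A two-line check of that, a sketch rather than the video's code: the leading Taylor terms give sin(x) ≈ x and cos(x) ≈ 1 - x²/2, with errors of roughly x³/6 and x⁴/24.)
import numpy as np
for x in (0.1, 0.01):
    # Errors shrink like x**3/6 and x**4/24, so the approximation is excellent for small angles
    print(abs(np.sin(x) - x), abs(np.cos(x) - (1 - x**2 / 2)))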
Best video on Taylor expansion.
Thank you very much...🌺🌺🌺
Homework (Exercise):
At 8:43 in the video:
1- Consider the Taylor series of f(x+dx) about the fixed point x.
2- Next, consider the Taylor series of f(x) about a fixed point a.
Now prove that these two Taylor series are identical.
Hint:
I think it is a good idea to introduce variables such as z1 = x and z2 = x + dx (in the first expansion), and y1 = a and y2 = x (in the second), and then show that the resulting Taylor polynomials are the same.
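One way to see the equivalence in this exercise (a sketch of the idea, with a symbolic check in Python's sympy; sin is just a concrete stand-in for f): setting dx = x - a turns the expansion of f(x) about a into f(a + dx) = f(a) + f'(a)dx + f''(a)dx²/2! + ..., which is exactly the f(x + dx) form with the fixed point relabeled from a to x.
import sympy as sp
x, a, dx = sp.symbols('x a dx')
s_about_a = sp.sin(x).series(x, a, 6).removeO()        # Taylor series of f(x) about x = a
s_step    = sp.sin(x + dx).series(dx, 0, 6).removeO()  # Taylor series of f(x + dx) about the point x
# Relabel the fixed point x -> a, then identify the step dx with x - a
print(sp.simplify(s_about_a - s_step.subs(x, a).subs(dx, x - a)))  # prints 0: same polynomial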
One version of the script for R users (a sketch with the polynom package):
library(polynom)
p <- polynomial(c(0, 1, 0, -1/6, 0, 1/120, 0, -1/5040))  # 7th-order Maclaurin series of sin(x)
x <- seq(-2*pi, 2*pi, length.out = 400)
plot(x, sin(x), type = "l"); lines(x, predict(p, x), lty = 2)  # sin(x) vs. its Taylor approximation
Thank you very much...🌺🌺🌺
STEVE I ABSOLUTELY LOVE HOW SHARP AND CLEAN AND PACKED FULL OF INFO YET STILL DIGESTIBLE EVERYTHING IS!! You are my favorite teacher. I am a self-learner and you are GOD MODE for that.
PS: please do a video on why all power series are Taylor series (without the heavy Borel machinery if possible, or by using Borel but breaking everything down)!
Thanks for this. When we were doing Taylor series in Calc 2, I was going less hard and dedicating more time to my now-wife. I got a C in the course, and I still got into the MS in AI program I'm about to finish. No regrets :)
I think the issue is that in Calc 2 they (Taylor/Maclaurin) are more of a curiosity...in Diff Eq, you see how powerful they are (as well as power series). Good luck to you!
The best doctor ever
I agree with you
Always great to hear you lecture; I've learned a lot from them. You look very lean, is everything OK with your health? Just a concern.
The polynomial does a poor job in the tails, BUT the function is periodic, so within the interval [-pi, pi] the 7th- and 9th-degree polynomials do very well. Why look at the tails when you can always reduce down to this interval?
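(A minimal Python sketch of that range reduction, with illustrative names, not the video's code: wrap x into [-pi, pi] using periodicity, then apply the 7th-degree polynomial.)
import numpy as np

def sin_taylor7(x):
    # 7th-degree Maclaurin polynomial for sin(x)
    return x - x**3/6 + x**5/120 - x**7/5040

def sin_reduced(x):
    # Wrap x into [-pi, pi) first, exploiting periodicity, then apply the polynomial
    r = (x + np.pi) % (2 * np.pi) - np.pi
    return sin_taylor7(r)

x = np.linspace(-20, 20, 1001)
print(np.max(np.abs(sin_taylor7(x) - np.sin(x))))  # enormous error in the tails
print(np.max(np.abs(sin_reduced(x) - np.sin(x))))  # about 0.08 at worst, for all x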
What about approximating a cosine function using Matlab? That would be great. Thanks so much for sharing so much with us. Cheers!
This is a great review! Thanx so much! 😊
Thank you Steve.
Don't MATLAB and Python have a Taylor command that gives you the Taylor expansion up to a given order, instead of your typing the terms in yourself?
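They do: MATLAB's Symbolic Math Toolbox has taylor(), and in Python the sympy library's series() method does the same job. For example:
import sympy as sp
x = sp.symbols('x')
print(sp.sin(x).series(x, 0, 10))
# x - x**3/6 + x**5/120 - x**7/5040 + x**9/362880 + O(x**10)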
Extremely fascinating, thank you.
Approximating sin(x) by way of the Maclaurin series? Not to be forgotten, there are also Newton's method and Simpson's rule. . . 🙄
One still remembers programming that kind of thing in Commodore BASIC during college years, dawdling one's precious time away on a C-64 boasting a 7 MHz clock speed and a 16-bit address bus. 🤔
PS: Nice code that makes decent graphical representations possible, compared to writing additional machine code to access an 8-bit graphics chip just to generate a couple of green lines.
I'd love to be able to sort your videos in order of difficulty :)
Feynman wrote somewhere that he did not understand why it takes such an enormous amount of calculation to predict, in quantum mechanics, what happens a moment later right next door.
It's the same feeling here: to see what happens at delta x, right next door, we need to deploy an infinity of terms.
Maybe mathematics is not the best tool to describe Nature...
Those kinds of doubts are better addressed in a "Philosophy of Science" video rather than a calculus course. What you're saying is true: there are no infinities in nature, but they arise in math nevertheless. That's just an artifact of math, and many such artifacts appear when applying math to science and engineering.
Despite that, math is currently the best tool we have to describe nature.
Do you know of anything better? 😜
@@nHans The big alternative is the CNN: the Convolutional Neural Network.
Yann LeCun, the French computer scientist who invented it, was mocked because, mathematically speaking, his neural network was not "convex"...
Yann LeCun says that too much theory is not a good thing for making real progress...
@@nHans "there are no infinities in nature" .. I feel very unsure about that, as if the opposite is true.
@@coraltown1 Do you know of any examples where infinities occur in nature? (Exclude "the size of the universe," because that's still an open question.)
9:06 Curly brace is a cuspy line.
Great video and explanation. I love the content, but someone please help me with a fundamental concept I'm missing. If we already know the original function (sin and cos in this case), why is a Taylor series used to approximate it? We can simply evaluate sin(x) or cos(x) directly to get an exact answer. Thanks for the help.
While looking at simple sin and cos functions it might not be apparent why we are using a Taylor series, but take x as something more complicated, like a matrix: what is the sine of a matrix? You can no longer use the conventional trigonometric definition of sine and cosine. That's where a Maclaurin series comes in handy.
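(A small illustration of that point, a sketch assuming numpy and scipy are available; the matrix A is arbitrary: plugging a matrix into the Maclaurin series defines sin(A), and it matches scipy.linalg.sinm.)
import numpy as np
from scipy.linalg import sinm
from math import factorial

A = np.array([[0.0, 1.0],
              [-1.0, 0.3]])

# sin(A) via the truncated Maclaurin series: sum of (-1)**k A**(2k+1) / (2k+1)!
S = np.zeros_like(A)
for k in range(10):
    S += (-1)**k * np.linalg.matrix_power(A, 2*k + 1) / factorial(2*k + 1)

print(np.allclose(S, sinm(A)))  # True: the series reproduces the matrix sine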
You might already have gotten the answer from the later video about linearizing a nonlinear equation.
The problem is that if the argument of the function is unknown and appears as part of an equation, then finding a solution is difficult. Take the pendulum for example: the equation is d²x/dt² = -sin(x). With the small-angle approximation it becomes d²x/dt² = -x, whose solution is a sine or cosine of t.
Also, near the expansion point, we can think of the function's behavior as proportional to the deviation, or as oscillating around the fixed point. That's hard to see when looking at the whole function.
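(A quick numerical check of that pendulum example, a sketch assuming scipy; the function names are illustrative: for a small initial angle the nonlinear and linearized equations stay very close.)
import numpy as np
from scipy.integrate import solve_ivp

def nonlinear(t, y):   # y = [theta, theta_dot]; theta'' = -sin(theta)
    return [y[1], -np.sin(y[0])]

def linearized(t, y):  # small-angle version: theta'' = -theta
    return [y[1], -y[0]]

t_eval = np.linspace(0, 10, 200)
y0 = [0.1, 0.0]  # small initial angle (radians), starting at rest
sol_nl = solve_ivp(nonlinear, (0, 10), y0, t_eval=t_eval, rtol=1e-9)
sol_li = solve_ivp(linearized, (0, 10), y0, t_eval=t_eval, rtol=1e-9)
# The linear solution is 0.1*cos(t); the nonlinear one stays within about 1e-3 of it here
print(np.max(np.abs(sol_nl.y[0] - sol_li.y[0])))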
For example, the calculator in your phone doesn't know how to evaluate sin() of an arbitrary number you've typed in; instead it uses a Taylor-style polynomial up to some power, which is much easier to implement in any programming language.
@@th1rt3nth A bit of a nitpick. 😅 IIRC, numerical sine is calculated from 5th- or 6th-order polynomials. The coefficients are not exactly the same as those from the Taylor series, but are adjusted to yield the desired accuracy on a given range.
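(That difference is easy to see numerically; this is a sketch, not a description of any particular math library: on the reduced range a plain Taylor polynomial is extremely accurate near 0 and worst at the interval ends, which is why production implementations re-tune the coefficients, e.g. with minimax fits, to spread the error evenly.)
import numpy as np

x = np.linspace(-np.pi/2, np.pi/2, 1001)
taylor7 = x - x**3/6 + x**5/120 - x**7/5040   # raw Taylor coefficients, degree 7
err = np.abs(taylor7 - np.sin(x))
print(err.max())                   # ~1.6e-4, concentrated at the interval ends
print(err[np.abs(x) < 0.5].max())  # ~5e-9 near zero: the Taylor error is very lopsided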
While plotting the expansions for sin/cos, I tried to simplify the sign calculation for the sin/cos terms... then 💡: it's a complex-number vector rotation by 90 degrees for each term. For sin we start with i and for cos we start with 1. Effectively it simplifies to sin(x) = ∑ Re(i^(k-1))·x^k / k! and cos(x) = ∑ Re(i^k)·x^k / k!
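(A quick check of that identity, a sketch with illustrative helper names: Re(i^k) cycles through 1, 0, -1, 0, which supplies both the alternating signs and the skipped powers.)
import numpy as np
from math import factorial

def sin_via_rotation(x, n_terms=20):
    # sin(x) = sum over k of Re(i**(k-1)) * x**k / k!
    return sum(((1j)**(k - 1)).real * x**k / factorial(k) for k in range(n_terms))

def cos_via_rotation(x, n_terms=20):
    # cos(x) = sum over k of Re(i**k) * x**k / k!
    return sum(((1j)**k).real * x**k / factorial(k) for k in range(n_terms))

x = 1.2
print(abs(sin_via_rotation(x) - np.sin(x)))  # agrees to machine precision
print(abs(cos_via_rotation(x) - np.cos(x)))  # agrees to machine precision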
Looks like this video is out of order in the playlist? Should it be moved up the playlist?
How are you writing everything laterally inverted?
Thank you very much, you're a great man, king!
Hey, how can I get the code?
Someone please help me out.
Thanks ..for 11 undecanic ...for 13 tridecanic
Why is EVERY power series a Taylor series (without having to use heavy analysis stuff I don’t understand)!?
It's kind of disappointing that pure math doesn't touch on significant figures. If you want to know how many terms of the Taylor series you should use, there is a real, objective answer when your numbers come from the real world.
Well, it's not very pure to chop the number off at a few decimal places.
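(That objective answer can come from the Lagrange remainder: for sin(x), the error of the degree-n truncation is at most |x|^(n+2)/(n+2)!, so you can solve for the smallest degree meeting your tolerance. A hedged sketch; terms_needed is just an illustrative helper.)
from math import factorial, pi, sin

def terms_needed(x, tol):
    # Smallest odd degree n whose Lagrange remainder bound |x|**(n+2)/(n+2)! is within tol
    n = 1
    while abs(x)**(n + 2) / factorial(n + 2) > tol:
        n += 2
    return n

x, tol = pi / 2, 1e-6   # roughly 6 significant figures on the reduced range
n = terms_needed(x, tol)
approx = sum((-1)**k * x**(2*k + 1) / factorial(2*k + 1) for k in range((n + 1) // 2))
print(n, abs(approx - sin(x)))  # degree 11 suffices; the actual error is well under 1e-6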
Sine is definitely NOT a mirrored image... it is cosine :-)