exp(-1/x^2) may look like a nice function on the reals. But in the complex numbers, it explodes into total craziness around 0.
So in the complex numbers it isn't actually infinitely differentiable, right? I noticed that and thought that is probably why the Taylor series breaks, so it'd be nice to make sure
@@felipeopazo8375 In the complex numbers it is totally infinitely differentiable everywhere except at 0. The image of any open set containing 0 is the whole complex plane except 0. The function is really wild around 0, and can't be defined continuously at 0.
@@felipeopazo8375 The point he "added", f(0) = 0, is only valid if you approach zero on the real axis (from the positive or negative direction). Approaching zero on the imaginary axis (from either direction) gives a limit of infinity. And at a 45 degree angle on the complex plane (the real and imaginary parts are equal and both approach zero), there's no limit, because it oscillates with infinite frequency. In complex variable theory, this is referred to as an "essential singularity." These are trouble and best avoided!
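For anyone who wants to see those three directional limits concretely, here is a minimal Python sketch (standard library only; the sample points are arbitrary choices of mine, not from the video):

```python
import cmath

# f(z) = exp(-1/z^2), approached toward 0 along three directions
f = lambda z: cmath.exp(-1 / z**2)

for t in [0.5, 0.2, 0.1]:
    on_real = f(t)             # exponent -1/t^2 -> -inf, so f -> 0
    on_imag = f(t * 1j)        # exponent -1/(it)^2 = +1/t^2 -> +inf, so |f| -> inf
    on_diag = f(t * (1 + 1j))  # exponent is purely imaginary, so |f| = 1 and the phase spins
    print(f"t={t}: real {abs(on_real):.2e}, imag {abs(on_imag):.2e}, diag {on_diag:.3f}")
```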
Can anyone else drop some nice resources that talk about this behavior? Hehehe, that sounds rad!
wait wat
When I was in analysis 1 and 2, a lot of my proofs would start with:
"Let f be a *Nice* function on R to R"
I never had to define what "Nice" meant or how it was different from "analytic" or "smooth". It didn't matter that I had different professors from different universities; I could just say "Nice" and they instantly knew what I meant, and thought its meaning would be obvious even to undergrads.
I still don't have a definition of "Nice function". I just know it's something that includes polynomials, sine functions, simple exponentials, and logs, but not square roots, complicated exponentials, infinitely discontinuous functions, or those weird toy functions.
Nice = infinitely differentiable everywhere? I haven't properly studied real analysis yet so idk if that's a good definition lol
@@mastershooter64 that's just smoothness
@@mastershooter64 I think by nice you mean a function that doesn't break my proof and obeys my implicit assumptions 😂
Nice = analytic, isn’t it?
But if you want nice, then ignore real analysis and only study complex analysis!
Wow, I learned about interval of convergence but didn't really connect the dots fully until I saw the actual graph and how it becomes inaccurate after 2. Awesome stuff!
Really? That’s what interval of convergence means
@@duckymomo7935 geez where are your friends
lol me too, I never understood radius of convergence properly
Same here. I never really gave much thought to what the interval of convergence means geometrically. I was just focused on getting the right answers. It's very nice to finally gain that appreciation, even if it took this video for me to learn it.
I freaking love how you are able to break down your explanations into clear, short segments. I'm getting my PhD so that I may become a professor back in my country, and your channel is also helping me learn how these concepts should be taught.
This video is a great demonstration of the difference between mathematicians and engineers. As a grad student working on the data side of electrical engineering, I would say that the linear Taylor series approximation of exp(-x^(-2)) is actually pretty good around zero, despite it being trivial and in a sense unrelated.
Hope you enjoyed this funky example of how a Taylor series can converge everywhere, but not to the original function! My thanks again to Maple for sponsoring today's video; don't forget to check out Maple Calculator for your phone: www.maplesoft.com/products/maplecalculator/download.aspx?p=TC-9857 or Maple Learn for your browser: www.maplesoft.com/products/learn/?p=TC-9857
I feel like this is kinda trivial once you think about it though- there’s no reason that a Taylor series for one part of a piecewise function should converge to the entire piecewise function… you just showed that the Taylor series of 0 is 0, which, yeah, it is. Convergence of the Taylor series for the ln(x) example was way more interesting
This is a great reminder to be cautious. Especially as a physicist it is so easy to just pretend any function is as nice as you need it to be...
One idea I think is interesting regarding this lack of caution relates to the way we teach analysis. Almost all of our analysis teaching is built on limits, so students get desensitised to the weirdness of them. Maybe it would be better if we taught non-standard calculus. If fewer things we did in calc classes were limits, maybe it would be easier to treat limits with the appropriate respect. Besides, I think it's easier to see that a->b and a=b are different when a and b are from completely different fields.
(also great example of a topological but not metric space)
The problem is not everyone is a math major. They have to limit the concepts somewhere.
exactly. Nonstandard analysis is actually easier to understand
This is not where people get confused; they get confused at the very idea of rigorous proofs. To fix that, there is the program of Homotopy Type Theory and proof assistants like Coq, which turn writing proofs into writing programs in a natural way.
physicists not being cautious is exactly why they generate new mathematics. because they're scientists, the only thing they care about is describing and modelling nature and coming up with testable predictions. a great example of this is the Feynman path integral: it doesn't even have a rigorous definition in mathematics, yet they can still use it to predict stuff which has been confirmed by experiment to incredible accuracy
I absolutely hated how hand-wavy some of the arguments in my uni physics classes were, and I got screwed over so many times because you can ignore boundary terms in this case, but not in that one
Wow, I didn't know such broken functions existed. It is amazing that you can build a function using elementary functions that doesn't want to be analytic (i.e. when we do the most natural completion, it isn't analytic)
Something that blew my mind is that you can construct a function, defined and infinitely differentiable on all of R, yet nowhere analytic. (Wikipedia lists it under "Non-analytic smooth function.")
Would have been interesting to explore the Taylor series for e^(-1/x^2) centered at values besides 0, and how those converge.
They would probably have the radius of convergence equal to their distance from 0, since someone said here that 0 is the only singularity in the complex numbers
This was explained really nicely. Thank you.
Very good topic! That's why, to understand it deeply, one needs to see it through the big window of complex numbers. From the narrow window of real analysis it is a smooth function, but from the wide window of complex analysis we can notice that this function is really ugly. Real analysis does not give the full picture. The same can be said, for example, about the function f(x)=1/(1+x^2). It is smooth, but why does its Taylor series not converge everywhere, unlike those of the common functions e^x, polynomials, cos x, etc.?
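For what it's worth, here is the standard complex-analysis answer to that question, sketched:

```latex
\frac{1}{1+x^2} \;=\; \sum_{n=0}^{\infty} (-1)^n x^{2n}, \qquad |x| < 1,
```

and the radius of convergence is exactly 1 because the complex extension 1/(1+z^2) has poles at z = ±i, at distance 1 from the origin. The real function is perfectly smooth, but the Taylor series cannot outrun singularities it can only "see" in the complex plane.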
Totally, I really want to do a mini series on complex numbers to help solidify so many of these concepts
@@DrTrefor Don't forget to cover complex numbers as a part of geometric algebra. GA is an eye-opener to how complex numbers, quaternions, dual quaternions, elliptic and hyperbolic spaces, conformal transformations, as well many other geometric concepts can be fit smoothly together into an intuitive picture that is insanely useful and practical.
I am currently learning Taylor expansions as part of highschool calculus.
Thanks Prof.
Great timing!
Nitpick: In your natural log Taylor series, you have the `k = 0` still hanging around at the bottom. I think the `k = 0` term is a special case that gets dragged out from the sum, and noticed to be equal to 0 itself.
@p s Yes, but it's just a typo, and changing it doesn't significantly change the video.
0:07 I believe there's a mistake: in the e^x series, the last term before the 3 dots should be x^4/4! instead of x^5/4!
hey Dr. Trefor amongst all the stuff available on YT, I love learning from you; and your personality. I studied biological sciences but my interest in maths and physics is more than that in biology. i have embedded your video in my website so other people from my country can also learn from your quality lectures. Respect from Pakistan
Absolutely love your content. I am an electrical engineer who loves math, but work and life often get in the way. Almost every time I watch your videos my passion is reignited and I want to learn more. So thank you!
That’s awesome!
Great video. This came up for me recently. The topic was whether it's a good idea to take e^x = 1 + x + x^2/2 +... as a definition (I say yes, absolutely). But people were describing this as "defining e^x by its Taylor series" which was leading to a lot of confusion. People thought the definition was circular - don't you need to take derivatives before you find the Taylor series? - and kept trying to analyze convergence using the Lagrange remainder and getting nowhere. Basically, people thought we were defining e^x as "the function with this Taylor series at 0" which is silly since, as you demonstrate, there are many such functions.
I prefer this definition:
f(x) is the unique function where f'(x) = f(x) and f(0) = 1. Then proceed to prove that:
1. there is a number e such that f(x) = e^x, and define e as THAT special number.
2. prove the taylor series expansion.
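To make the equivalence concrete, here is a rough numerical sketch in Python (the function names, step count, and number of terms are all my own arbitrary choices, and Euler's method is deliberately crude):

```python
import math

# Definition 1: solve the IVP f' = f, f(0) = 1 with Euler steps
def exp_ivp(x, steps=100_000):
    h, f = x / steps, 1.0
    for _ in range(steps):
        f += h * f  # f' = f
    return f

# Definition 2: partial sums of 1 + x + x^2/2! + x^3/3! + ...
def exp_series(x, terms=30):
    return sum(x**k / math.factorial(k) for k in range(terms))

print(exp_ivp(1.0), exp_series(1.0), math.e)  # all three agree to several digits
```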
@@dekippiesip Also a good definition. It's easy to get one from the other.
@@martinepstein9826 yeah the 2 definitions are basically equivalent. But the first definition seems better pedagogically because the taylor series definition just seems to come out of nowhere.
Asking if there is a function equal to its own derivative is a natural question that could quickly arise after having learned the concept of derivatives.
Now I think of it, there is a subtle problem in my approach. The function e^x or a^x in general is only clearly defined on the rational numbers. The extension is based on taylor series in itself. And that is a bit circular, because taylor series are also generally proved by assuming the functions are already defined on the reals.
So we should also need to come up with an alternative but equivalent way of extending a^x in general to all real values of x. But in order to do that you need to talk about the rationals as a 'dense subset of the reals', prove that every continuous rational-valued function has one and only one continuous extension to the reals, and use that to define a^x. But now it's even worse pedagogically, because this stuff is more advanced than Taylor series in the first place...
@@dekippiesip Hold up; the fact that irrational powers (and even worse, complex powers) are awkward to define in terms of repeated multiplication is _precisely_ the advantage of your approach (and mine). Let's call the solution to the initial value problem 'exp' and define e = exp(1). We know exp(a+x) = exp(a)*exp(x) because exp(a+x)/exp(a) solves the IVP and therefore equals exp(x). It follows that exp(p/q) = exp(1)^(p/q) = e^(p/q). For irrational or nonreal x writing exp(x) = e^x is just a formalism.
That example is a good demonstration for why Taylor expansion isn't sufficient. When you expand past Taylor series and into Laurent series, that function that is only ever 0 as a Taylor series is very much not 0 as a Laurent series.
you are an amazing teacher - yet again i learned something from your channel, and i was pretty sure you couldn't really surprise me with something as simple as Taylor series. But now i feel like i need to rethink my understanding of the whole idea of Taylor series, thank you
If you define the derivative asymptotically it's much easier to show that e^(-1/x^2) has 0 n'th derivative for all n. A function f is n-times differentiable at 0 iff f(x) minus an n'th degree (or less) polynomial is in o(x^n). Clearly e^(-1/x^2) is in o(x^n) for all n.
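Spelling that last claim out (just the standard substitution t = 1/x^2, nothing beyond the comment above):

```latex
\lim_{x \to 0} \frac{e^{-1/x^2}}{|x|^n}
\;=\; \lim_{t \to \infty} t^{\,n/2}\, e^{-t} \;=\; 0
\quad \text{for every } n,
```

since the exponential decays faster than any power grows. So e^(-1/x^2) is in o(x^n) for all n, and every derivative at 0 is forced to be 0.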
These infinitely differentiable but not analytic functions turn from annoyances to tools when we realize that, because they exist, it's possible to have an infinitely differentiable "bump function" that is only nonzero over a finite area. That can be really useful if you're using them to divide something into pieces in a smoothly feathered-off way that doesn't make any of the derivatives blow up. For instance, they're useful in differential topology when talking about describing a manifold in terms of an "atlas" of maps of pieces of it to Euclidean space.
This is also a big difference between real analysis and complex analysis--in complex analysis, anything whose complex derivatives are all defined is analytic. (And analyticity also becomes an incredibly powerful constraint.)
A function can be of class C(∞) [infinitely differentiable] but not analytic. A mismatch that can happen in the real field, but not in the complex field.
One of the gemstone-facts of analysis. [I was a little surprised you didn't mention this, even as a "teaser."]
Fred
I would like to take a moment to share with you a special kind of smooth but not analytic function called a 'bump function':
Define f(x) = exp( 1/( x - b ) - 1/( x - a ) ) (a real valued function) with the restriction that a < x < b, and define f(x) = 0 everywhere else.
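Here's a small Python sketch of that bump function (the endpoint values a = -1, b = 1 are just placeholders; the strict-inequality mask is what keeps the formula away from the division by zero at the endpoints):

```python
import numpy as np

def bump(x, a=-1.0, b=1.0):
    """exp(1/(x-b) - 1/(x-a)) on (a, b), identically 0 elsewhere.
    Smooth everywhere, positive exactly on (a, b), but not analytic at a or b."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    xi = x[inside]
    out[inside] = np.exp(1.0 / (xi - b) - 1.0 / (xi - a))
    return out

print(bump([-1.5, -0.99, 0.0, 0.99, 1.5]))  # zero outside, ~e^-2 at the center
```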
I have recently "invented" this function to use as a window function for a dsp FIR filter (or, to put it otherwise, a window function for spectral analysis). I was somewhat surprised such a function is not part of the standard library of window functions. It worked quite well, it turned out to be the best window function for my purposes compared to those defined in scipy.signal.window (tukey 0.3 was about as good, basically a draw).
oh, actually, my invented one was exp(-1/((x-a)*(b-x))), slightly different
very cool, thanks for sharing!
This is so cool, I wish I had seen this back when I was taking Calc 2, it would've cleared up a lot of confusion for me xD
I've seen the terms "the series converges very slowly" or "very fast" used quite a lot but I don't usually see authors defining these properly. So I'll leave this topic as a suggestion for future content (if you haven't done one already). Anyways, good video.
Ya this is a common issue. Ultimately it is answered in the next video linked at the end about remainders. We have an intuitive graphical sense, but when we are precise the remainder gives us the degree of the error
With ln x, the radius of convergence was related to how far away you are from a singularity. With e^(-1/x^2), you are at that singularity. It's surprising to me that it gives a radius at all there.
Also, with ln x, if you increase your "a" value, your radius increases, so I think I can do this:
ln x = lim(b->infinity)[ b - sum(n=1 -> infinity) [ (-1)^n (x-e^b)^n / (n*e^(nb)) ] ]
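For reference, that is the Taylor expansion of ln x centered at a = e^b (so ln a = b), which converges for 0 < x ≤ 2a:

```latex
\ln x \;=\; \ln a + \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\left(\frac{x-a}{a}\right)^n
\;=\; b \;-\; \sum_{n=1}^{\infty} \frac{(-1)^{n}\,(x-e^b)^n}{n\, e^{nb}},
```

so the formula above checks out; the catch is that the limit b -> infinity has to be taken for each fixed x, since no single series in the family converges everywhere.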
Maple on mobile really is the best thing for calculus. Step by step for derivative and such is a great way to learn. I will say that graphing and syntax are not the best so also have Desmos for your graphing needs
If you did the Taylor series for exp(-1/x^2) around x=1 instead, what would be the interval of convergence and would it approximate the function itself?
Yes, but it would break down at x=0
Thanks for the great video. I wasn't aware of this before. I used to think that any infinitely differentiable function would have a Taylor series representation. However, after watching this and doing some research, I learned that this is not always the case. This video has expanded my understanding of the limitations and complexities of mathematical analysis and reminded me of the importance of continuous learning and exploration in the field of mathematics.
One of the craziest things that I learned at university is that in complex analysis smooth functions and analytic functions are the same, but not in real analysis. And the difference is just the multiplication we define on complex numbers.
I understand the mathematics behind that but it is crazy ...
Great video btw 😊
Sorry for my terrible English 😅
As a followup maybe you could introduce the corollary of this in complex analysis whereby functions that are complex-differentiable at every point in a set (holomorphic) _are_ necessarily analytic on that set. Then you can bring in the Cauchy-Riemann equations in a natural fashion.
It's great this function exists. This or analogs of it means the test function space of C^\infty with compact support is dense in any sense one wants, and can even consist of piecewise analytic functions?
0:43 slight issue but not really: we should separate out the first term from the series to avoid any notion of dividing by zero (as well as dealing with (-1)!).
0:11 x^5 / 4!?
We literally just learnt about analytic functions and series solutions of differential equations. Great timing 😅
Also I always wondered: how do we prove whether the remainder will go to 0 as n->infinity? Since the remainder term R(n+1) involves a point other than the point we're evaluating the series about.
It would be great if you could show an example of how to deal with these 2 variables in the limit.
There are a range of formulas for the remainder and sometimes the methods are ad hoc; I have done at least one example in the video linked at the end.
@@DrTrefor thanks!! I'll check it out 😄
You mention looking at the remainder as a way to tell if the Taylor series actually converges to the function, but I'm curious if we can use complex analysis to also see this? For example, e^-1/z^2 has an essential singularity at z=0 and the real version e^-1/x^2 doesn't equal its Taylor series as you say. Log z has a branch cut, so non-isolated singularity, and we can't do a Taylor series at x=0 for log x. Is it true that all "problematic" real functions can be spotted this way by looking at the behaviour of the complex version of them, or is there a counterexample?
Absolutely! Complex analysis really solves many of these issues that come when we restrict to only reals
You should look into "holomorphic" functions and the "identity theorem" for holomorphic functions. Then see if you can spot why being holomorphic on some domain might be related to the function being equal to its own Taylor series.
Alternatively, if you don't feel like playing around with complex analysis, look at this wikipedia article:
en.wikipedia.org/wiki/Analyticity_of_holomorphic_functions
exp(-1/x^2) has an infinite Laurent series given by 1 - 1/x^2 + 1/(2*x^4) - 1/(6*x^6) + ...
Which means that exp(-1/x^2) has an essential singularity at x=0.
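Making the substitution explicit: plugging w = -1/x^2 into e^w = sum of w^n/n! gives

```latex
e^{-1/x^2} \;=\; \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\, x^{2n}}
\;=\; 1 - \frac{1}{x^2} + \frac{1}{2!\,x^4} - \frac{1}{3!\,x^6} + \cdots
```

valid for all x ≠ 0, and the infinitely many negative powers are exactly the signature of an essential singularity.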
In the Taylor series in the first seconds of the video, why is there an "a" on the RHS but not on the LHS?
The idea is we are approximating the function, but we can do that about any number of points. So there isn’t anything intrinsic to the function to have an a on the LHS
Please link the video where you show that the Taylor series of e^x converges to the function.
These are really high-quality videos!
The reason exp(-1/x^2) doesn't work for complex variable is that it blows up as x=it and t goes to zero. So the complex function has an essential singularity, not a removable singularity like it does as a real function. Then there's Picard's Theorem if you really want to get crazy... Their ongoing mission, to go everywhere, infinitely often, with at most one exception.
Cool video -- I actually learned something! It's been a hot minute since I took calculus but seeing stuff like this explained so well is awesome.
Very good video. Nice explanation
for the function at 9:40, couldn't it be said that the radius of convergence is actually undefined, since the ratio a(k+1)/a(k) is undefined because it would imply dividing by 0?
If it had been any non-zero real divided by zero then yes, undefined. But 0/0 is unpredictable. Most times when you reach a 0/0 case, you have to find a different method to arrive at results.
No, even though it may seem natural to think that, the ratio test actually doesn't work that way. If the limit of |a(k+1)/a(k)| exists and is less than 1, then we can conclude that our series converges, but the converse does not hold. There are infinite series that converge where the limit of |a(k+1)/a(k)| does not exist. Knowing that the limit used in the ratio test does not exist (or that it is equal to 1) gives us absolutely no information about the convergence of our series. In the specific case of the 0 series, we can use a stronger test called the root test to show that the radius of convergence is infinite.
There is a more general way to compute the radius of convergence called the root test. The ratio test gives the same answer as the root test *when it converges*.
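For the record, the root-test version is the Cauchy–Hadamard formula,

```latex
\frac{1}{R} \;=\; \limsup_{k \to \infty} |a_k|^{1/k},
```

with the conventions 1/0 = ∞ and 1/∞ = 0. For the zero series every a_k = 0, so the limsup is 0 and R = ∞; no 0/0 ratio ever has to be evaluated.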
Can we use this to make a function where before a point, the function is a constant 0, after a second value, the function is a constant 1, in between, the function is something, and the entire function is smooth at all points?
Absolutely this would work great for that
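A sketch of the standard construction in Python (the transition endpoints a, b and the function names are placeholders of mine; g is the usual smooth-but-not-analytic ingredient):

```python
import numpy as np

def g(x):
    """exp(-1/x) for x > 0, else 0: smooth on all of R."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x > 0, x, 1.0)  # avoid dividing by zero where we return 0 anyway
    return np.where(x > 0, np.exp(-1.0 / safe), 0.0)

def smooth_step(x, a=0.0, b=1.0):
    """Identically 0 for x <= a, identically 1 for x >= b, smooth everywhere."""
    t = (np.asarray(x, dtype=float) - a) / (b - a)
    return g(t) / (g(t) + g(1.0 - t))  # denominator is never 0

print(smooth_step([-1.0, 0.0, 0.5, 1.0, 2.0]))  # [0. 0. 0.5 1. 1.]
```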
I have one question,
-> Why is there a hippopotamus standing on the hypotenuse?
Because the two words sound alike.
@@carultch you are a genuine born mathematician. No jokes only truths!
Very interesting. I didn't know about this. I only knew about analytic functions of complex variables.
It would converge for all positive values and all negative values depending on which A you choose. You could choose a very very large A and essentially get the function for a very very large space
Not sure how you managed to talk about this without mentioning anything (even in passing) about Stokes wedges / the Stokes phenomenon - !! Probably the most important/interesting consequence of this. Next topic suggestion? :P
Better late than never - did a BA in maths over 50 years ago - never before seen a Taylor series misbehave like that.
Thanks - maths is still fun.
question though, doesn't the taylor series of e^(-1/x^2) always equal zero in some small range, so the zero series taylor expansion does "approximate" it within that window where it's always zero? Obviously this isn't really all that useful in practice, but...
Gotta say it, I love that shirt. I've gotta buy one of those and support the channel.
Thank you!! Would mean a lot:)
Dr. Bazett, will you make a series on differential geometry? like riemannian manifolds, differential forms, how vectors and vector fields are differential operators (or derivations?), affine connections, the Levi-Civita connection, stuff like that! maybe even some complex differential geometry?
Due to existence of bump function, I always thought if analytic, then radius of convergence makes sense, otherwise no guarantee. I didn't realize exp(-1/x^2) is a simpler example. I always thought of analytic meaning differentiable in complex domain rather than it converges with taylor series.
Thank you sir.... You have made me interested in algebra....
out of curiosity, what's the limit of the radius of convergence as the centre of the taylor series goes to 0? it doesn't have to be constant, right? cause the ROC is dependent on the coefficients of a general power series, it's not a taylor-poly-specific thing
Ya this is just the zero series regardless of its origins as a Taylor series, so infinite radius of convergence
@@DrTrefor really? the graph clearly has non-0 first and second derivatives in a lot of places, so if you centre the series there then as far as i can tell you get a legitimate taylor series, no?
Note: the series for ln(x) should start at k=1, not k=0.
Other than that, nice video though!
So if I'm asked for the radius of convergence of e^(-1/x^2), is it infinity or 0? Because 0 is the only point where the Taylor series agrees with the function.
It only makes sense to ask for the radius of convergence of the series, which in this case is infinite, but that's a sort of useless observation
@@DrTrefor well that's exactly what I'm asked to find in my calculus sheet. I am supposed to do it with the fact that the radius of convergence is at least the distance of z0 to the nearest singularity in the complex plane, so that Br(z0) does not include a singularity. But I'm a bit confused here: if Br(z0) should include no singularity and the distance between z0 and the nearest singularity is r, then shouldn't r be the maximum radius of convergence and not the lowest? And I'm a bit confused that, as you said, the function is not defined for z=0, so the distance to the nearest singularity when you expand from z0=0 will be 0. Would be nice if you could help.
Terrific treatment... as always!
May you help and explain why the residue of Riemann Zeta may give complex number values at point near 0.9? Anything wrong or may be my mistake using software like mathematica or else or any suggestion? Here comes the code:
For [
j = 0,
j
AH YES! IN MY ESTIMATION OF THIS PROBLEM, I HAVE CONCLUDED THAT YOU HAVE FORGOTTEN TO INCLUDE EINSTEIN'S THEORY OF RELATIVITY. AHAHAHA AHAHAHA LOL
If I defined a function
f(x)=exp(x^2/(x^2-1))*(1-x^2+|1-x^2|)/2
would it be undefined at x=±1, like your example exp(-1/x^2) is at x=0 (so it needs to be split into a piecewise definition)?
Or is it implied that the "hidden" step function is defining it as zero there?
Is the function smooth in complex numbers?
One crazy property of complex functions is that if a complex function is once differentiable, then it is automatically infinitely differentiable. Most functions that you deal with in a school or real world environment are complex differentiable.
except the point 0
Let's plug in a small imaginary number like 0.1*i. If the function is smooth then the output should be close to f(0) = 0.
e^(-1/(0.1*i)^2) = e^(-1/(-0.01)) = e^100
Nope.
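The same check in two lines of Python (standard library cmath only):

```python
import cmath

print(cmath.exp(-1 / (0.1j) ** 2))  # ~2.7e+43: exp(+100), exploding off the real axis
print(cmath.exp(-1 / 0.1 ** 2))     # ~3.7e-44: exp(-100), the tame real-axis behavior
```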
the thing is, the taylor series perfectly approximates the function you defined at x=0; in a piecewise function, the taylor series by definition cannot extract information from regions that x is not in
What program were you using for those graphs at the beginning?
when i see the graph of exp(-1/x²) in the video, at some distance from zero (so far the graph is horizontal) the function starts to behave like an exponential, then like a logarithm. is the point where it starts to go up from being horizontal special?
Not particularly, we can compute the inflection point from second derivative fairly easily if we wish to
With the example, what if you take the limit of the sum rather than the sum of the limit?
Sir, please reply: which books do you recommend for multivariable calculus and linear algebra, for self-study?
Keep up the good work with videos!
I'm only 14 and the fact I can keep up with your videos is amazing, and your lessons online are really helping me master calculus one step at a time, so all I can say is thank you!
That’s awesome!
Excellent!
This is crazy. What is he going to tell us next: something insane like this function can be used in analysis as a test function to extract fundamental features?
Position from which to impress: PhD....impressed.
You have a division by zero at 5:20
True! The k=0 term is zero so I should have just excluded it by starting at k=1
What program do you use?
exp(-1/x^2) is not differentiable near zero! BTW, thank you for your other videos.
"Not all smooth functions are analytic"
-Laughs in complex analysis
so if exp(-1/x²) is not analytic, is it possible to approximate it with a series at all?
Replace z in the exponential series with z=-1/x^2, and done. An infinite series with negative power terms, also known as the Laurent series of an essential singularity (the number of terms with negative powers is infinite).
@@vascomanteigas9433 sounds like laurent series has more power
Yup with complex analysis we can give much more satisfying answers to these types of problems
Nice, could you plot this function near zero on the imaginary axis? :)
Absolutely, that would be good to do
We are talking about real x-values. No imaginary axis!
@@azzteke But extending to the complex plane is revealing. We see that the function is actually not so well behaved near 0.
@@martinepstein9826 Not so well behaved is underselling it - it takes on the value of any complex number (excepting zero itself) infinitely many times in any punctured neighbourhood of zero according to Great Picard's Theorem!
Here's a visualisation of e^-1/z to give you an idea for e^-1/z^2: ruclips.net/video/NaDhdCEvAxI/видео.html
Perhaps the point is that it is not the function and its derivatives that are equal to zero at the point x=0, but their limits.
Doesn’t this have more to do with how you defined the piecewise function though? It’s certainly a reasonable definition, since the limit is zero. But if you hadn’t defined the function that way, it wouldn’t be differentiable at zero (and hence not smooth at zero). Then we would know that no Taylor Series will approximate the function at all near zero. In fact, it’s no surprise that the Taylor Series of the piecewise function is zero everywhere (near zero), since you’re only approximating the constant function f(x) = 0, whose Taylor Series OUGHT to be just 0. Great video tho, I love the graphical demonstrations!
This dude's Canadian accent is bordering on the Letterkenny-esque.
00:15 Lol x⁵/4! seems like an off-by-one
Dude, your shirt's hilarious
That's what I am looking for, the caveats, and who shall explain them better than Dr. Trefor.
Glad you enjoyed!
3 sets of 3 dimensions.
We're 4D, not 3D.
1D, 2D, 3D are spatial
4D, 5D, 6D are temporal
7D, 8D, 9D are spectral
1D, 4D, 7D line/length/continuous
2D, 5D, 8D width/breadth/emission
3D, 6D, 9D height/depth/absorption
just after 9 minutes: you said the function is not flat... you meant the function does not have a constant derivative, which would make it a parabola.
Hello, I am from India and I like your videos!
Omg that was so cool
Let’s see some triad claws
We're 4D.
I keep hearing theories like "simulation", "holographic" or back to Leibniz' "contingent" universe.
Those theories all make me think of the i, j, k in quaternions.
Quaternion
MATHEMATICS
a complex number of the form w + xi + yj + zk, where w, x, y, z are real numbers and i, j, k are imaginary units that satisfy certain conditions.
RARE/biblical
a set of four parts, things or persons. (dimensions)
0:09 - Uhhhhh...
Actually taking the derivatives to derive the infinite sum for the logarithm seems like too much effort. Just take the sum of the geometric series with r=-x and integrate both sides. Voila, an infinite sum for ln(1+x).
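Spelled out, that derivation is just

```latex
\frac{1}{1+t} \;=\; \sum_{n=0}^{\infty} (-t)^n, \quad |t| < 1
\;\;\xrightarrow{\ \int_0^x dt\ }\;\;
\ln(1+x) \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n},
```

with term-by-term integration justified inside the radius of convergence.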
Succinct and to the point.
7:13 Please, please, please don't use fractions in exponents. It looks horrible. At 8:10 It looks even worse when it's smaller. It starts looking a whole lot like -1/(e*x^2), but with a wonky fraction line.
Did you not notice that your a_k is undefined at k=0? :)
1^2
something funky has to be happening in the complex world here, right?
Very much so. Try plugging in a small imaginary number.
What is so special about the ability to approximate a function using its Taylor series that an entire branch of mathematics is dedicated to it? Why not study functions that can be computed to arbitrary precision using any method involving arithmetic operations?
Misleading. This function has an essential singularity at x=0, and is not differentiable
I don't know what this function is. I bet it is some weirdo that can only appear in the instanton potential of a Yang-Mills theory or something.
i wont turn on the subtitles to see better