I think this is the first video I have seen that explicitly talks about the connection between Fourier series and Fourier transform, super clear!
EDIT: This post was about the general way to handle cases where transform methods produce complex solutions to real ODEs. The general reasoning about handling complex solutions is correct, so I won't delete the post, but it doesn't actually apply in this particular case.
The issue is that in this particular case, I missed that the apparently complex function Ai(x) is actually a real function in disguise, meaning Im{ Ai(x) } = 0. That leaves the calculations valid, but entirely trivial. Again, the reasoning and procedure are right, but because Ai(x) is already a real function, they say nothing here. In particular, there's still another solution not produced by this transform method.
His Fourier transform approach to y ' ' - x y = 0 might raise two questions for anyone not comfortable with this kind of technique:
1) It's a 2nd order linear ODE, but the solution, y(x) = C Ai(x), has only one arbitrary constant of integration C.
2) It's an ODE for a real-valued function y, but the solution is given as a complex function of x.
Those two are related, and the upshot is that the general real-valued solution to y ' ' - x y = 0 is:
y(x) = C1 Re{ Ai(x) } + C2 Im{ Ai(x) }, where C1 and C2 are arbitrary real constants.
For question 1, the single arbitrary constant C is a complex constant, and so represents two real constants. We do have two arbitrary real constants for the general solution given here.
For question 2, the linearity of the ODE carries over into complex values, and so you can separate out real and imaginary parts. Explicitly:
By the linearity of the ODE (an ODE with only real coefficients), a complex-valued solution y(x) = y1(x) + i y2(x) of y ' ' - x y = 0 must have both its real and imaginary components being real-valued solutions to the same ODE:
(y1 + i y2) ' ' - x ( y1 + i y2 ) = 0
=> ( y1 ' ' - x y1 ) + i ( y2 ' ' - x y2 ) = 0
=> both y1 ' ' - x y1 = 0 and y2 ' ' - x y2 = 0.
So a complex-valued function y(x) is a solution to y ' ' - x y = 0 if and only if both Re{ y } and Im{ y } are real-valued solutions to the same equation.
Since y(x) = C Ai(x) is the complex-valued solution we found, where C = a + bi (a, b both real constants), we have that
y(x) = C Ai(x)
= (a + bi) ( Re{ Ai(x) } + i Im{ Ai(x) } )
= [ a Re{ Ai(x) } - b Im{ Ai(x) } ] + i [ b Re{ Ai(x) } + a Im{ Ai(x) } ],
and so
Re{ y(x) } = a Re{ Ai(x) } - b Im{ Ai(x) } (and Im{ y(x) } = b Re{ Ai(x) } + a Im{ Ai(x) }).
That says that the real component of a complex-valued solution to y ' ' - x y = 0 is a (real) linear combination of the real-valued functions Re{ Ai(x) } and Im{ Ai(x) } (same goes for the imaginary component).
Because the real component Re{ y } of a complex-valued solution to y ' ' - x y = 0 is itself a real-valued solution to the ODE, a real-valued solution to the ODE is a (real) linear combination of the two real-valued functions Re{ Ai(x) } and Im{ Ai(x) }, and this also produces the 2 arbitrary constants expected for this 2nd order ODE.
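If you want to see this splitting concretely, here's a minimal numerical sketch (my own addition, not from the video; it uses SciPy's standard real-valued Ai and Bi, and takes C (Ai + i Bi) as a genuinely complex-valued solution, since as noted in the EDIT the video's Ai is already real):

```python
import numpy as np
from scipy.special import airy

x = np.linspace(-5.0, 2.0, 200)
h = 1e-4

def y(t):
    # A genuinely complex-valued solution: C * (Ai(t) + i Bi(t)), C = a + bi.
    ai, _, bi, _ = airy(t)
    return (1.5 + 0.7j) * (ai + 1j * bi)

# Second derivative by central differences, then the ODE residual y'' - x y.
ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
residual = ypp - x * y(x)

# Both the real and imaginary parts of y solve y'' - x y = 0 on their own.
print(np.max(np.abs(residual.real)))  # ~0, up to finite-difference error
print(np.max(np.abs(residual.imag)))  # ~0 as well
```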
Thank you for writing this down. This was very helpful!
In case anyone is wondering, Ai(x) as defined in the video is a purely real function, as can be checked from the definition by verifying (Ai(x))* = Ai(x) for real x. The solution here is not a "general" solution in the sense of two different solutions hiding in Re{Ai(x)} and Im{Ai(x)}. The reason for this is related to the convergence of the integral definition of Ai(x), coming from the constraint at 16:23. However, one can construct a complex-valued solution y(x), also with an integral definition, that DOES contain two linearly independent real-valued functions: one of them is Ai(x) and the other is called Bi(x). Both satisfy y'' - xy = 0, but Bi(x) -> \infty as x -> \infty.
@@Blackmuhahah
Yikes!
The imaginary part is symmetrically integrating sine of an odd function, so gives 0. I totally missed that.
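A one-line symbolic check of that oddness, for anyone who wants it (my addition, using SymPy):

```python
import sympy as sp

k, x = sp.symbols('k x', real=True)
f = sp.sin(k**3 / 3 + k * x)

# sin(k^3/3 + k x) is odd in k, so its integral over symmetric limits vanishes.
print(sp.simplify(f.subs(k, -k) + f))  # 0
```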
Thank you!
You're welcome!!!
Your question (1) is very important and quite valuable to think about. It just turns out here that the complexity does not "turn up" even if you go through everything you said rigorously, which is kind of unlucky, but no less valuable, I think.
This is really cool to see! I'd seen a lot of the set-up material when I learned the Fourier series, but I'd never seen how it relates to the Fourier transform and its inverse. When I learned those, they were taught as if they were just handed down from on high. It's so much more satisfying to see how they can be derived from the Fourier series, which can be motivated by concepts from linear algebra.
Very much looking forward to the Laplace transformation and Bi!
On general principles, I like the normalization where both the FT and inverse FT have a Sqrt[2π], but these days I'm used to physics conventions, where the inverse FT (from momentum to position) has a volume element dk/2π. BTW, I much prefer the physics inner product that is antilinear in the 1st variable and linear in the second (rather than the math convention, which is the opposite). This also works well with numerical analysis, where x dot y is written x* y or x^H y. All that being said, the FT is fun, but sometimes more fun is the Mellin transform. You should do some videos on that.
You literally speak in pure maths... Fascinating to watch you teach, you do it effortlessly.
Oh man, this is a treat. I never see any of the math channels actually talk about the Fourier transform.
As an EE student, I'm pretty familiar with how powerful the Fourier transform is, so it's nice to see you cover it and I hope you use it more in the future.
Stanford has a whole course on the FT online. The professor is great.
Going back to the Fourier series, I don't think I've ever seen a video explaining why the Fourier basis should be complete. Maybe a future video?
The proof I learned used the Stone-Weierstrass approximation theorem, but I'd love to see Michael give another proof
But is it well known for which set of functions the Fourier basis is complete? I believe (i.e., I've never seen a proof, yet I was kinda taught to believe) it's OK to use the series for any L2 function, but are there others?
One should also maybe mention that you can distribute that factor of 1/2π freely between the Fourier transform and its inverse. From my experience, the more typical choice in physics is to put the full factor in the k integral, while in math you often see the symmetrical definition with 1/sqrt(2π) for both integrals. It's also just a matter of convention where the minus sign in the exponential goes; there is no reason why it would or wouldn't be in one integral or the other.
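The discrete analogue of that freedom shows up in NumPy's FFT, which exposes the same choice through its norm parameter (a small sketch of my own, assuming NumPy >= 1.20 for the "backward"/"forward" options):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)

# The overall 1/N factor can sit on the inverse ("backward", the default),
# on the forward transform ("forward"), or be split as 1/sqrt(N) on each ("ortho").
for norm in ("backward", "ortho", "forward"):
    F = np.fft.fft(f, norm=norm)
    print(norm, np.allclose(np.fft.ifft(F, norm=norm).real, f))  # True every time
```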
I would love to see more Fourier Series and Fourier Transforms. The way you naturally progress through this video is awesome!
Another important detail to note --- that's likely been omitted for brevity and/or to avoid delving into deeper analysis --- is that one still has to prove that the set of complex exponentials is complete with respect to this inner product, i.e.: if < f, e^(ikx) > = 0 for all k, then f identically vanishes (almost everywhere, since we're integrating).
Before applying the transform to the differential equation at the end, it would be worth clarifying that the transform is a linear operator, so the transform of a sum is the sum of the transforms. I suppose it follows more or less straightforwardly from the definition of the transform as an integral.
I love these videos on special functions, they're extremely useful and a great way to look at each function holistically.
At 16:22, Michael uses the assumption that f(x) -> 0 at ±∞ to get rid of the leading term in integration by parts. And that assumption is totally valid, but super limiting. For example, the function f(x) = 1 violates that assumption, but the FT of f'(x) = 0 is 0. (Because F(k) = sqrt(2π) δ(k), ik F(k) = ik δ(k), which is ~0 at k = 0 if you squint a little.)
The same thing happens for functions like sin(x) where F(k) = i sqrt(π/2) δ(k - 1) = ik sqrt(π/2) δ(k - 1) = F(cos(k)). (You're making use of the delta function being 0 except at k=1)
Is there a more general form of the assumption Michael makes that allows you to use this analysis for all bounded functions, rather than all bounded functions that go to 0?
That bothered me as well. There are many ways to handle this assumption; Michael of course chose one of them. Check MathOverflow.
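For what it's worth, when f does decay the derivative rule is easy to check numerically. A quick sketch (mine, approximating the continuous transform with a Riemann sum via the FFT, for a Gaussian):

```python
import numpy as np

N, L = 2048, 20.0
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
f = np.exp(-x**2)
fp = -2.0 * x * f                       # f'(x), known in closed form

k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
phase = np.exp(1j * k * L)              # accounts for the grid starting at x = -L
F  = dx * phase * np.fft.fft(f)         # Riemann-sum approximation of FT{f}
Fp = dx * phase * np.fft.fft(fp)        # ... and of FT{f'}

# FT{f'}(k) = i k FT{f}(k) holds to machine-precision level for this decaying f.
print(np.max(np.abs(Fp - 1j * k * F)))
```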
I have just finished my research on general Fourier series and this beauty comes up.
You do the exact same thing when finding the QFT harmonic oscillator and I didn't even realize they were connected
Thank you, normal non-technical people don't see the difference between the discrete cosine transform and the sine one :)
I'm actually feeling sick today and very weirdly this comforts me haha
Guess it brings me back to my time in uni studying physics xD
I really love the derivation of the Fourier transform as a linear map, or a projection onto an orthogonal set.
EDIT: although I first learned using {sin m x, cos n x} as being the orthogonal set...
It is always a good time to discuss Fourier transform.
Fantastic Presentation, concise and clear.
Beautiful.
I actually understood this, even if I had to stop and think a bit a few times and watch some parts twice
Nice! Please consider doing a video introducing wavelets (eg, Daubechies' wavelets) sometime (if that isn't too much for one video). (I did my PhD EE dissertation on using wavelets for low energy signal detection back in the early 90's.)
Please do the Fourier transform over finite groups.🙏
The nice thing about the Laplace transform is that it makes it a little more obvious to set up initial conditions, but I normally used Fourier transforms to solve DEs where applicable. I learned Fourier first, and this saved on memorization of transform pairs.
Thanks for this presentation.
How do you show orthogonality of the basis functions once L went to infinity, since in that case the inner product integral as shown at 3:00 does not converge anymore for any pair of basis vectors? It would be really awesome if you could make a follow-up video which shows the connection/extension of the Fourier transform to generalized eigenfunctions / the rigged Hilbert space + the use of the dual pairing as an extension of the inner product!
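One concrete handle on the first question (a numerical sketch of my own, not a substitute for the rigged-Hilbert-space story): the truncated inner product (1/2π) ∫_{-L}^{L} e^(i(k-k')x) dx equals sin(Lu)/(πu) with u = k - k', and as L grows it acts like δ(u) when integrated against a smooth test function:

```python
import numpy as np

g = lambda u: np.exp(-u**2)   # smooth test function with g(0) = 1

u = np.linspace(-20.0, 20.0, 2_000_001)
du = u[1] - u[0]
for L in (1.0, 3.0, 10.0):
    # sin(L u)/(pi u), written via np.sinc so that u = 0 is handled cleanly
    kernel = (L / np.pi) * np.sinc(L * u / np.pi)
    print(L, (kernel * g(u)).sum() * du)  # -> g(0) = 1 as L grows
```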
You bring me back to my young days, from complex analysis to analysis to algebra.
Learnt a lot; one of the best videos on the Fourier transform.
*_Question:_*
_How is the transition from discrete points to a continuum made exactly?_
I understand that as L gets very large, Δk gets very small. But how is it that a line of discrete points, no matter how finely spaced, transitions to a continuum of points? I mean, as far as I can tell, the best we could do would be to make Δk = 0, in which case our series of finely spaced points would collapse to a single point, but alas, not a continuum.
I'm clearly missing _something_ here, I just don't know what that _something is._
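The missing piece is the Riemann sum: for each finite L, the sum over the grid k_n = nπ/L with spacing Δk = π/L is a Riemann sum, and the Fourier integral is the limit of those sums as L -> ∞; no finite stage is literally a continuum. A small numerical illustration (mine, with e^(-k²) standing in for the integrand):

```python
import numpy as np

# Sum of e^{-k^2} * dk over the grid k_n = n * pi / L approaches
# the integral of e^{-k^2} over R, which equals sqrt(pi), as L grows.
n = np.arange(-200_000, 200_001)
for L in (0.5, 1.0, 2.0, 5.0):
    dk = np.pi / L
    print(L, np.sum(np.exp(-(n * dk) ** 2)) * dk, np.sqrt(np.pi))
```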
I had never thought that the derivation of the Fourier transform could be so simple
Two things you could also have mentioned: Bessel functions of order 1/3, and bracket notation.
Let's say that this week has a very orthogonal start.
Cool stuff! Do you plan to show something on discrete transforms?
Is there a similar story or history for other integral transforms? For instance, the Laplace transform?
You build the Fourier transform from the Fourier series, but there doesn't seem to be any Laplace series?
@MichaelPennMath, I would also love to see a similar exposition on the Laplace transform. Great work.
I am having a telecommunications exam tomorrow. And we are gonna use a LOT of Fourier transforms
Good luck! 👍👍
@@BikeArea ty 😊
You should probably also check that your set of orthonormal functions is a complete "basis". You don't know a priori that it covers all relevant functions, and you also don't know whether you might only need a handful of the basis vectors. Who knows, maybe the space is finite-dimensional (spoiler: it's not).
Beautiful stuff!
At 3:50 how is that integral zero?
Use Euler's formula: the antiderivative evaluated at the endpoints gives
e^(±i(m-n)π) = cos((m-n)π) ± i sin((m-n)π).
Now, since cosine is an even function we have:
cos((m-n)π) = cos(-(m-n)π)
The sines both equal zero, since their arguments are integer multiples of π.
Thus, we are left with two equal cosines which cancel each other out when the endpoint values are subtracted.
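For anyone who'd rather let a CAS grind through it, here's the same computation in SymPy with concrete indices (my sketch):

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)

def inner(m, n):
    # <e^{i m pi x/L}, e^{i n pi x/L}> = integral of e^{i(m-n) pi x/L} over [-L, L]
    return sp.simplify(sp.integrate(sp.exp(sp.I * (m - n) * sp.pi * x / L), (x, -L, L)))

print(inner(3, 3))  # 2*L : same index gives the squared norm
print(inner(3, 1))  # 0   : distinct indices are orthogonal
```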
Can you talk about the discrete Fourier transform and how to compute it?
The title: The Fourier Transform from scratch. Mike you should scratch while doing this video as you do on others!
oooh being careful about an infinite basis
Felt like in my PDE first classes ❤
I'm wondering, is there a general rule for when using a Fourier transform to solve a differential equation is better than a Laplace transform? I've been out of school for a LONG time, and my very sketchy memory says they are very similar, with the exception that Fourier assumes periodicity back to negative infinity, while Laplace allows for unilateral analysis from a specific event forward in time. But when it comes to recognizing when to use one vs. the other, I really can't remember that much. I really appreciate these refresher courses, because it is so easy to forget this stuff when you don't use it every day.
I think it comes down to how easy it is to use the inverse transform. The inverse Fourier transform is very clean and you just assume the integral converges. With the Laplace transform, there's a consideration for the region of convergence and an actual contour integral for the inverse. It seems like typically you want the Laplace transform for linear equations with constant coefficients, because then it's just linear algebra that already uses the initial conditions.
1. The Laplace transform can be used only on a half-interval of the real numbers, while the Fourier transform is valid on the whole real line.
2. The Laplace transform works well if initial values are known, while the Fourier transform is good for finding general solutions.
Is there a way to also get the second Airy function using Fourier transforms? Also, the last function is odd so you can just turn it into a cosine.
I am a simple man, I see Fourier, I like it. 😊
Could you do a top-down approach to Fourier transform in terms of additive set function and measure?
Are you doing series on Fourier transform?
The usual development has the basis elements themselves normalized by dividing by √(2L), leaving the standard L² inner product intact, but it doesn't look as clean when the main point is developing the Fourier transform.
I think some details about function spaces should be mentioned. It isn't clear when these integrals are defined.
21:41 this was originally a 2nd order D.E. and should have 2 degrees of freedom in the solution, right? How did the Fourier transform reduce it to 1st order without introducing a constant?
It's transformed into a first order *_complex-valued_* ODE, so the constant of integration in its solution will be a complex constant, which means it has two real constants. I wrote this out in a separate post.
It would be nice to check whether the solution y at the end satisfies y' -> 0 as x -> ±∞, as assumed in the proof of the Fourier transform of derivatives.
Can't really see how that would be true though
Note: An inner product on an inner product space V over a field F is a map from VxV to F which satisfies conjugate symmetry (1), linearity in the first argument (2), and positive definiteness (3).
In the case of the inner product at the start of the video, V is the set of all complex-valued functions on the interval [-L,L] and F=C. (1) then follows from (ab*)*=a*b for all a,b in C, where * denotes the conjugate. The linearity of integration implies the given map is linear in its first argument (and conjugate-linear in its second), which gives (2). (3) is due to the fact that xx*>0 for all nonzero x in C.
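A quick numerical spot check of (1) and (3) on a grid, with the sum standing in for the integral (my sketch):

```python
import numpy as np

L = 1.0
x = np.linspace(-L, L, 10_001)
dx = x[1] - x[0]

# Discretized <f, g> = integral of f(x) * conj(g(x)) dx over [-L, L]
inner = lambda a, b: np.sum(a * np.conj(b)) * dx

f = x**2 * np.exp(1j * np.pi * x)
g = np.cos(x) + 1j * np.sin(3 * x)

print(np.isclose(inner(f, g), np.conj(inner(g, f))))        # conjugate symmetry (1)
print(inner(f, f).real > 0, abs(inner(f, f).imag) < 1e-12)  # positive definiteness (3)
```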
What is 'k' in this system? As in, if I transform from a complex-valued function f(x) to its Fourier transform F(k), what's the relationship between x and k?
In other situations I've seen it as f(t) transformed to F(w), where t is time (e.g. a cyclic function repeating every n microseconds) and w is frequency (e.g. sin(wt)).
Is he using “orthogonal” and “orthonormal” interchangeably? Because I thought orthonormal means the norm of the basis vectors has to be 1, in which case one needs to divide by sqrt(2L) in the Lemma?
I am trying to follow the proof, but around 10:26 he sets the limits of integration to (-infinity, infinity) when we are summing over positive n. Shouldn't the limits of integration be 0 to infinity…
Thanks for any help
n takes both positive and negative values, because it is defined to be an integer. The integers range from -infinity to +infinity
Pro-tip: watch at 0.5x speed to see Prof. Penn build the Fourier transform after coming home from the pub.
Regarding the last diff eq: as it is a second order linear diff eq, I expected there to be two independent solutions, but it ended up with one. Where's the other one gone?
Indeed, there are two linearly independent solutions, Ai(x) and Bi(x).
This method of solution assumes that the Fourier Transform of the solution exists, which it doesn't for the second solution, Bi(x), since it diverges at infinity, so you only get the first solution.
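To see that divergence concretely (my sketch, using SciPy's standard Ai and Bi):

```python
from scipy.special import airy

# Ai decays at +infinity while Bi blows up, so Bi has no Fourier transform
# and the transform method can only ever recover the Ai branch.
for x in (0.0, 5.0, 10.0, 15.0):
    ai, _, bi, _ = airy(x)
    print(f"x = {x:4.1f}   Ai = {ai:.3e}   Bi = {bi:.3e}")
```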
@@whonyx6680 Oh, that makes sense. Thank you.
The real and imaginary parts of his Ai(x) are two independent real-valued solutions whose (real) linear combination is a real-valued solution to the ODE. (I made a video-level comment to this effect that you can read & consider.)
I'm not as familiar with this as others, so maybe I'm missing something, but checking on Wikipedia, "the" definition of Ai(x) given in this video is not the same as given by Wikipedia's entry for "Airy function".
Specifically Wikipedia_Ai(x) = Re{ ThisVideo_Ai(x) }.
(The division by 2 goes away as the integral of an even function over all of R changes to one over the positive reals.)
However, it's not true that Wikipedia_Bi(x) = Im{ ThisVideo_Ai(x) }, but it instead includes an additional term in the integrand... so perhaps I'm missing something here.
Given these conflicting definitions for Ai(x), I don't know what the "standard" convention for Ai(x) is, or if there is a universal one.
Beautiful video, brother!
Speechless! Awesome piece of work.
I may be wrong, hope not, but it looks like Michael has slipped into Math as a wonderful thing rather than Math as an instituted educational thing. And there is no harm in that. Say it how it is dude!
Orthogonal poly'als
Finite and infinite vector spaces linked with polynomials add a dash of orthonormal spanning sets blending infinite sums and doubly infinite integrals with transforms glancing into Fourier transforms and Fourier series all in 23 minutes and 11 seconds including intro and extro.
What is not to like?
If you're talking her up, and she asks what you'd suggest for a fun first date, my advice is to say:
_How about we discuss finite and infinite vector spaces linked with polynomials add a dash of orthonormal spanning sets blending infinite sums and doubly infinite integrals with transforms glancing into Fourier transforms and Fourier series._
_What's not to like?_
@@mathboy8188 lol - and I hope she has a good appreciation of arts and poetry.
I did make a mistake in my original post: replace 'wonderful' with 'beautiful'
Where artists in paint use colors, thinners, brushes, media and surfaces artists in math use thought. Both use vision and beauty 🙂
It is a shame educational math is 'institutionalized' and taught in the main as applications of procedures.
Whatever happened to Natural Philosophy and Philosophy Mathematic?
The Airy equation is second order. You have solved it to give one solution. Where's the other?
15:55
great
pls someone explain in minecraft terms
The Prof has gone all Eric Meijer (of Haskell fame) with his groovy multi-coloured Tie-Dye! Time for some jolly hard Category Theory - or not?
Top
😂😂😂 SUCH A SMALL BRAIN..
Dude, I love your videos, and I'm sure I'm gonna like this one, but as a French person, I can't get over how you pronounce "Fourier" as "Foh-yeh". It's pronounced more like "Foor-yeh" (the "oo" in "Foor" is like the "oo" in "loom", and the "yeh" is like the "ier" in "Navier").
But yeah, love your stuff, I'm just being a stereotypical pedantic french guy here.
"Chaq'un a son gout" as they parlez dans La Belle France - n'est pas? 🤔🤣😁
@ChuffingNorah Well I can give you props for not using google translate 🙃
Correction :
"Chacun à son goût" comme ils disent dans la belle France - n'est-ce pas ?
(There would be better ways to say what you're trying to say, but that's alright)
Some things I wanna point out because I'm annoying:
-"Parlez" Is the present tense, plural second person of the verb "Parler". Here, you wanna use the plural third person "Parlent"
-The verb "Parler" is equivalent to the verb "To speak" in English, "Dire" would be the correct verb to use here (equivalent to the verb "To say")
-iirc, an apostrophe can never directly follow the letter "q", "q" is always followed by a "u", as in : "Quelqu'un" (Someone)
"qu'X" is a contraction of "que X" (than X), e.g. "Plus qu'une table" = "Plus que une table" (More than a table)
@@Nolys-bk4kd Quel Grand Yawnnnnnn! Bien-Sur? 😴😳😱😷😁So much time, so little to do! Have you ever considered Satire?
Hey, told you I was annoying lol @@ChuffingNorah
Well, you can't blame English-speaking mathematicians for pronouncing Fourier like the number four. Mathematicians love numbers!