Go to LEM.MA/LA for videos, exercises, and to ask us questions directly.
this guy not only teaches but also inspires. The way he explains, it feels nothing in the world is more important. Massive respect!! :)
This lecture has a very subtle argument (especially from 4 min 30 on) that initially went right past me. When you arrived at the point where you had ∫p(x) = ∫r(x), I told myself: Ah! Now we just need to go back to the previous lecture to solve the integral. I was initially confused about why you worried about the zeros of the Legendre polynomials since, by the inner product argument, that integral was zero. I then realized that the whole point of choosing the zeros of the Legendre polynomial was A) because r(x) = p(x) at those points and B) the coefficients ωi of Ln have been calculated once and for all. Therefore evaluating the integral of p(x) becomes simply the sum ωi * p(gi), where gi are the zeros of the Legendre polynomial Ln. I mention this in case someone else has the same hesitation as I did.
Last question - at the end of the previous lecture (11:40) you stated that the problem with the method presented there was the very significant variation in the magnitude of the coefficients. Is the use of the zeros of Ln better because the coefficients have been determined once and for all and we do not need to rederive them, or is there really less variation - in which case, why?
As always thank you for a very interesting presentation.
I'm still confused about the professor's argument, so thank you for elaborating.
You're saying that not only is ∫p(x)dx = ∫r(x)dx (this makes sense to me), but also p(gi) = r(gi) whenever gi is a zero of Ln, is that right? Why are we so interested in r(x)? From what I've seen, you don't have to *do* any polynomial division when you're actually using Gaussian quadrature to integrate, so I assume it's just part of the proof, but why?
Hi! r(x), of order n-1, is of lower order than p(x), of order 2n-1. Hence, for an (n-1)th order polynomial, we have fewer coefficients to evaluate. E.g. if p(x) is of order 3 (cubic, n=2), we only need to evaluate a linear polynomial r(x) at the same points. According to the professor's last lecture (the matrix), only two weights are required, since a linear polynomial has only 2 unknown coefficients (r(x) = w1 + w2·x). After obtaining the two coefficients, we can always use them to recover p(x) exactly -- and by integrating r(x), we can exactly integrate p(x).
Gideon Buckwalter As for why there is a need to divide by Ln(x): it sets up the orthogonality integral so that Ln(x) has higher order than q(x). Since we want the polynomial we actually integrate, r(x), to have the lowest possible order, the natural choice of order for Ln(x) is n, so that q(x) (of order n-1) has lower order than Ln(x) and the orthogonality argument applies.
I was wondering exactly about this. Thanks!!
@@gideonbuckwalter4128 I'm guessing you've since figured it out, but I thought I'd answer anyway for possible future viewers. You indeed don't need to do any polynomial division.
It's a bit more complicated to argue rigorously, but basically it boils down to this: remember that we have a set {c_i} and a set {w_i} such that our quadrature for any function is simply the sum of f(c_i)*w_i over all i, no matter what the function is. We know that this quadrature is exact for polynomials of degree up to 2n-1; the polynomial division only shows up in the proof of that exactness, never when you actually compute the quadrature.
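To make that "sum of f(c_i)*w_i" concrete, here is a minimal Python sketch (not from the lecture) of the 2-point Gauss-Legendre rule. The nodes ±1/√3 are the roots of L2(x) = (3x² - 1)/2, and both weights happen to equal 1; these values are standard for n = 2.

```python
import math

# 2-point Gauss-Legendre rule on [-1, 1]: the nodes c_i are the roots of
# L2(x) = (3x^2 - 1)/2, i.e. +/- 1/sqrt(3); for n = 2 both weights w_i are 1.
nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]
weights = [1.0, 1.0]

def gauss2(f):
    """Quadrature = sum of w_i * f(c_i); exact for degree <= 2n-1 = 3."""
    return sum(w * f(c) for w, c in zip(weights, nodes))

# A full cubic: p(x) = 4x^3 + 3x^2 + 2x + 1.
# Its exact integral over [-1, 1] is 4, and two evaluations recover it.
p = lambda x: 4*x**3 + 3*x**2 + 2*x + 1
print(gauss2(p))  # ≈ 4.0
```

Note that n = 2 evaluations suffice for a degree-3 polynomial, exactly the 2n-1 claim from the lecture.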
Just WOOOOOW!! How did these mathematicians have the vision to go that far?
I always think about this, too.
This is easily one of the most beautiful methods of numerical analysis 😭❤️
Thank you, Gauss, for revealing this incredible method to the world, and thank you, @MathTheBeautiful, for explaining it in such a beautifully amazing way.
I am a computer science student, but I really enjoy watching these lectures to understand terms that I hear all the time, such as Laplacian and Gaussian quadrature.
You are a very good teacher and you enjoy teaching with your heart!! Congratulations!!
Thank you - much appreciated!
Thanks, great video! Btw, I think I can give an analogy to explain why the weights behave nicely. In one of the previous videos (Why {1, x, x^2} is a terrible basis), you explained the nice feature of an orthogonal basis that makes it less error-prone compared to an arbitrary basis. The Legendre polynomials are also orthogonal with respect to the inner product, and thus they approximate better than arbitrary non-orthogonal polynomials. That may be the reason why the weights are nicer.
This is beautiful, I'm lost for words.
Logically building concepts step by step. Upload more videos.
I’m gonna try this tonight! I’m so excited to code this marvelous idea from scratch 😊🔥🙏🏽🎊❤️💯🙌🏽😭👏🏽🥳
Incredible explanation. Loved it
Awesome lecture. I studied this at class but I did not understand anything. Your videos are much more interesting
Great teaching, good video editing! Perfect
The amount of times I had to pause the video and go hold up hold up hOLD UP FOR A MOMENT
Thank you very much for your wonderful lecture. I wish I was one of your students.
Thank you for the kind words and you *are* one of my students :)
@@MathTheBeautiful You are right, Sir :)
Thank you, Carl Friedrich Gauss
Don't forget the apostle :-)
Gotta love Pavel Grinfeld
Haha, Gauss did all the work and I get all the love
@@MathTheBeautiful did Gauss make this video? No. Did he make Lemma? I think not. I certainly respect Gauss but I'm a huge fan of you and your work independent of Gauss
What an awesome idea!
Oh my god this is amazing!!!
I understood everything, but the Legendre polynomials look really arbitrary.
Thank you for your video by the way. You are a special teacher!
Konstantinos Nikoloutsps they aren't arbitrary. Watch the first lecture on the topic. The natural basis functions {1, x, x^2, ...} aren't orthogonal, so we apply the Gram-Schmidt procedure to them in order to produce an orthogonal basis. The Legendre polynomials are the result.
Thank you for that great lecture :)
Thank you! Part of the credit belongs to Gauss.
Thank you so much for this video !!!!
I am guessing that number theory may have guided Gauss in the derivation of this method. The idea of "dividing" by Ln may have come from two numbers being equal mod something - in this case mod Ln....just an afterthought.
Brilliant
it took me a min to get the Joke at 1:33 nice move lol
I couldn't figure it out
The spin move.
@@MathTheBeautiful Oh ok, I was looking for something in the content that related to basketball. Nice lecture, by the way. Gaussian quadrature is awesome!
That was pretty cool.
what a smart idea to use Legendre polynomials.
Is there also an optimal rule if we only assume continuity, like |x|? How much can we gain over the rectangular rule with equidistant samples of equal weight? Or what about integration from a to inf?
Kind of understand; an example would have really helped.
p(x) is the polynomial that we are trying to integrate, r(x) is the remainder. What about f(x)?
First of all, thanks for the lecture, sir. Can anyone please explain why we choose a polynomial of degree 2n-1?
The idea is that you have some function that isn't really a polynomial at all, but can be approximated by a polynomial if you choose a high enough degree. I don't think there's anything particularly magic about it being of an odd degree (since 2n-1 is odd), but that's just how the algorithm works out. If you thought your function was well-approximated by a 4th degree polynomial, it'll work just fine as a 5th degree, so go with n=3.
I have a question. Why can we use this for any interval of integration [a,b] and not only for [-1,1]? Thanks
You can map any interval to [-1, 1] but you have to use that one because that’s the interval over which the orthogonality for Legendre polynomials holds.
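A short Python sketch (my own illustration, not from the lecture) of that change of variables: substitute x = (b-a)/2·t + (a+b)/2, which carries [-1, 1] onto [a, b] and multiplies the integral by the Jacobian (b-a)/2.

```python
import math

# 2-point Gauss-Legendre nodes/weights on the reference interval [-1, 1].
nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]
weights = [1.0, 1.0]

def gauss2_ab(f, a, b):
    """Integrate f over [a, b] by mapping to [-1, 1]:
    x = (b-a)/2 * t + (a+b)/2, so dx = (b-a)/2 dt."""
    half = (b - a) / 2
    mid = (a + b) / 2
    return half * sum(w * f(half * t + mid) for w, t in zip(weights, nodes))

# Still exact for cubics on any interval: the integral of x^3 over [0, 2] is 4.
print(gauss2_ab(lambda x: x**3, 0.0, 2.0))  # ≈ 4.0
```

The orthogonality (and hence the tabulated nodes and weights) lives on [-1, 1]; the affine map and the Jacobian factor are all that is needed for a general [a, b].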
how do you get the roots of the legendre polynomials?
Good question. Numerically.
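For the curious, here is one standard numerical approach, sketched in Python (this is my illustration; the lecture doesn't cover it): build P_n(x) from the three-term recurrence (k+1)P_{k+1} = (2k+1)xP_k - kP_{k-1}, and run Newton's method from the classic cosine starting guesses.

```python
import math

def legendre_roots(n, iters=50):
    """Roots of the degree-n Legendre polynomial via Newton's method."""
    roots = []
    for i in range(1, n + 1):
        # Classic starting guess: roots interlace like Chebyshev points.
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))
        for _ in range(iters):
            # Evaluate P_{n-1}(x) and P_n(x) by the three-term recurrence.
            p_prev, p = 1.0, x  # P0, P1
            for k in range(1, n):
                p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
            # Derivative identity: P_n'(x) = n (x P_n - P_{n-1}) / (x^2 - 1).
            dp = n * (x * p - p_prev) / (x*x - 1)
            x -= p / dp  # Newton step
        roots.append(x)
    return sorted(roots)

print(legendre_roots(2))  # ≈ [-0.5774, 0.5774], i.e. +/- 1/sqrt(3)
```

Libraries package exactly this kind of computation (e.g. numpy's `leggauss` returns both the nodes and the weights), so in practice you just look them up.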
awesome
In the previous video, you used 4 points to integrate up to a 3rd degree polynomial. Why is it now that you use n points to integrate up to a 2n-1 polynomial?
Well, that's the brilliance of Gaussian quadrature. There are few points, but they are very cleverly placed.
[Note: I'm answering this question mostly because trying to explain it will help me understand it better, myself.]
He explains it pretty well, but you might need to watch it a few times to see where the trick is. The idea is that he takes a 2n-1 degree polynomial and re-represents it as the product of an n degree and an n-1 degree, plus some n-1 degree. But, since he chose the n degree to be a Legendre polynomial, and since an n degree Legendre polynomial is orthogonal with all polynomials of lower degree, we know that that part of the integral is zero, so we only need to calculate the integral of the remaining n-1 degree polynomial, and for that, we only need n points.
TL;DR: He used Legendre polynomials to reduce the problem from a 2n-1 degree polynomial to an n-1 degree polynomial.
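A worked instance of that decomposition may help (my own example, not from the video). Take the cubic p(x) = 4x³ + 3x² + 2x + 1 and divide by the Legendre polynomial L2(x) = (3x² - 1)/2; long division gives q(x) = (8/3)x + 2 and r(x) = (10/3)x + 2, so p = q·L2 + r with both q and r of degree n-1 = 1. At a root g of L2 the q·L2 term vanishes, so p(g) = r(g):

```python
import math

# p = q * L2 + r, where q and r come from polynomial long division of
# p(x) = 4x^3 + 3x^2 + 2x + 1 by the Legendre polynomial L2(x) = (3x^2 - 1)/2.
p  = lambda x: 4*x**3 + 3*x**2 + 2*x + 1
L2 = lambda x: (3*x**2 - 1) / 2
q  = lambda x: (8/3)*x + 2
r  = lambda x: (10/3)*x + 2

g = 1 / math.sqrt(3)  # a root of L2

# The identity holds everywhere, and at the roots of L2 the q*L2 term
# drops out, so p agrees with the lower-degree r there.
print(p(g) - (q(g) * L2(g) + r(g)))  # ≈ 0
print(p(g) - r(g))                   # ≈ 0
```

Orthogonality kills ∫q·L2, so ∫p = ∫r; sampling p at the roots of L2 is the same as sampling r, which is why n points handle a degree 2n-1 polynomial.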
I thank you for the video, but honestly it did not tell the whole story. Gauss found what we now call Gaussian quadrature for polynomials of order up to 7 using continued fractions (according to Wikipedia). Then Jacobi discovered the connection between the Gauss points x_i and the roots of Legendre polynomials. So Gauss did not use linear algebra!