Go to LEM.MA/LA for videos, exercises, and to ask us questions directly.
Your voice helps me understand the concept very clearly.
Thank you! It's nice when a Russian accent stands for something positive.
This is a great example, but it doesn't tell me anything about how Gaussian Quadrature works under the hood, or where the sampling points or weights come from.
+Brian Johnson That's right. It requires a fair deal of linear algebra to explain the technique.
You should receive the Oscar of teaching. Indeed, you are an amazing teacher. Your students are the luckiest out there...
@@MathTheBeautiful You made a video saying this method exists and is good. Thanks, but please don't waste my time before my exam anymore. Is this really a lesson?
@@bencehalmosi8919 ruclips.net/video/65zwMgGZnUs/видео.html
@@bencehalmosi8919 I just don't like the way you talk to our professor like that.
His lectures mean the world to me, and the professor doesn't have a duty to teach me or you.
Well that definitely contributed quite a lot to my excitement for the course. That accuracy is insane....
Gauss was a beast!
Only Euler could stand 1 min in that math fight. 😂
Thank you for allowing me the chance to benefit from your knowledge.
Thank you for explaining this simply and precisely. Great stuff!
But how do you KNOW that the Gaussian quadrature method is a better approximation than the rectangle method? For that we would have to know the exact area beforehand right?
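One practical way to answer this (my own sketch, not from the video): for this test function the exact integral is known in closed form, and when no closed form exists you can increase n and watch the Gaussian estimates converge - the spread between successive estimates serves as a practical error estimate. A minimal sketch using NumPy's `leggauss` (variable names are mine):

```python
import numpy as np

f = lambda x: np.cos(x)**2
# Closed form for this test: the integral of cos^2(x) over [-1, 1] is 1 + sin(2)/2
exact = 1.0 + np.sin(2.0) / 2.0

# Gaussian estimates for increasing numbers of points
estimates = {}
for n in (2, 4, 6, 10):
    x, w = np.polynomial.legendre.leggauss(n)
    estimates[n] = np.sum(w * f(x))

# When no exact value is available, the spread between successive
# estimates is a practical stand-in for the true error
spread = abs(estimates[10] - estimates[6])
```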
I first learned of Gaussian Quadrature before computers (well, not quite!), but at any rate, I remember being struck by the lecturer's claim that (for a numerical method) this one was 'exact', which seemed inexplicable. It turns out that GQ is 'exact' but only for polynomials - so if your function IS a polynomial, the method is able to find the integral exactly for degree two times the number of sample points used. Here the transcendental function cos^2(x) was still being approximated, but in effect by a polynomial of degree 20.
(Nice demonstration though)
I think it's 2n-1, so degree 19.
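The 2n-1 rule is easy to check numerically. A sketch of my own, using NumPy's `leggauss`: a 10-point rule (exact through degree 19) reproduces the integral of x^18 essentially to machine precision, while x^20, just past the limit, shows a small but real error:

```python
import numpy as np

x, w = np.polynomial.legendre.leggauss(10)  # exact for degrees 0..19

# Degree 18, within the exactness limit: true integral over [-1, 1] is 2/19
approx18 = np.sum(w * x**18)

# Degree 20, just past the limit: true integral is 2/21, and an error appears
approx20 = np.sum(w * x**20)
```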
Thank you for this perfect introduction! It is really intuitive, appreciate it
I am a 19-year-old physics student from the US. Thanks for providing an intuitive and world-class math education for those who can't afford it. Thank you so much.
The amazing thing is, I thought the “a billion times better” statement was clickbait, or more likely, just a figure of speech. Turns out it really is that much better, at least in the example he showed us. I’m mind blown, this is amazing! I just came from his three videos on how this all works under the hood, and I’m not disappointed. I’m gonna use Gaussian Quadrature to compute the E&M fields in Maxwell's equations for a material whose permittivity and permeability vary in time and space. So excited to give this a try! 😁🎊💯🔥🙌🏽
I love these, thank you so much! Please don't stop, these are really good.
A click baity video title that is actually true! Loved it. Thanks
More click baity than true, to be honest
Any idea how to use this method for a data set that is not between -1 and 1?
shifting and scaling
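Concretely, the substitution t = (b - a)/2 · x + (a + b)/2 maps the standard nodes on [-1, 1] onto [a, b], and the Jacobian contributes a factor of (b - a)/2. A minimal sketch using NumPy's `leggauss` (the function name `gauss_quad` is mine):

```python
import numpy as np

def gauss_quad(f, a, b, n=10):
    """Approximate the integral of f over [a, b] with n-point Gauss-Legendre."""
    # Nodes x_i and weights w_i for the standard interval [-1, 1]
    x, w = np.polynomial.legendre.leggauss(n)
    # Shift and scale: t = (b - a)/2 * x + (a + b)/2, so dt = (b - a)/2 dx
    t = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * f(t))

# Example: the integral of cos^2(x) over [0, pi] is exactly pi/2
approx = gauss_quad(lambda x: np.cos(x)**2, 0.0, np.pi)
```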
Thank you for your work!! Just amazing what you have done for us.
Thank you - much appreciated!
Wonderful example! An astonishing technique! :O
I'm leaving my jaw dropped here. I'll come back for it later. Thank you.
Hello, how did you get the x and w values for each point in Gaussian quadrature?
ruclips.net/video/65zwMgGZnUs/видео.html
Awesome video! Thank you!
That was so awesome!
Is there any way to calculate those numbers? Or can you do a video on it? I haven't been through your videos yet; if you have, I will check though.
SlykeThePhoxenix No, that will come later in the course in the Part on Inner Products.
ruclips.net/video/65zwMgGZnUs/видео.html
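For the impatient: the standard recipe (not spelled out in this video) is that the n sampling points are the roots of the degree-n Legendre polynomial P_n, with weights w_i = 2 / ((1 - x_i^2) · P_n'(x_i)^2). A sketch of my own with NumPy, checked against the library's built-in `leggauss`:

```python
import numpy as np

n = 10
Pn = np.polynomial.legendre.Legendre.basis(n)   # the Legendre polynomial P_n

# Sampling points: the n roots of P_n (real parts taken defensively)
x = np.sort(np.real(Pn.roots()))

# Weights: w_i = 2 / ((1 - x_i^2) * P_n'(x_i)^2)
w = 2.0 / ((1.0 - x**2) * Pn.deriv()(x)**2)
```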
Fantastic lecture!
Very informative, thanks
super excited...
And with good reason! (Also see ruclips.net/video/65zwMgGZnUs/видео.html where this topic is developed)
Thank you sir
Good. Keep it up.
4:19
Perfect!!!
awesome
wow, that's crazy!
Tell me more X3
The last column should be f(x)·w and not f(x) 😅
Yes! Thank you
My GOD ... Mathematics shows the way to all the Sciences ...........
KARTICK MANNA May I quote you? :) I love your comment!
Yeah but 10 points with Gaussian quadrature with that much floating point arithmetic it is way more computationally intensive than approximating by left point rectangles, which makes the example misleading with regards to how useful gaussian quadrature actually is.
Why would you say it's more computationally intensive? I think it's about the same amount of computation since each method evaluates the function 10 times.
Wrong word, sorry. I meant computationally expensive.
Floats, especially high-precision ones, require more cycles (I am referring to the c_i * f(x_i) part of the calculation here), so from a computer-hardware perspective Gaussian quadrature cannot compute an equal n in approximately the same number of cycles.
That being said, Gaussian quadrature is still faster (fewer cycles) for equal precision, because you need fewer operations. Ergo, you could get the same precision with n = 2 or 3 compared to a 10-point numerical integration using left points.
What I am getting at is that from a numerical-computing perspective either method can reach arbitrary precision simply by increasing n, so the real question is which one gets the most precision for the fewest cycles.
I wasn't trying to say Gaussian quadrature is a bad solution; in fact it is strictly superior in any case I can think of. But it is not as superior as a simple n = 10 comparison for both would make it seem.
I don't quite follow the details of your argument, but it sounds like an interesting point.
Another common way to look at it is this: the rectangle rule is exact only for 0-th degree polynomials, while the 10-point Gaussian is exact for up to 19-th degree polynomials.
I'm guessing Gaussian quadrature may take twice the flops for the same number of sampling points, which doesn't really offset the spectacularly higher convergence rate.
I don't quite get it either: In both cases there are 10 points, 10 evaluations of the function, and 10 floating-point multiplications + 9 additions. The only difference between them is how you choose the points and their weights, which at most could require twice as much work, but the algorithmic complexity is still the same, and this is quite good for a million-times improvement in precision.
Brian Durbin The quality of a numerical approximation that was invented 200 years ago is not to be evaluated in terms of how fast specific current CPU architectures can compute it. That's preposterous.
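For what it's worth, the per-point cost question in this thread is easy to measure: both rules do n function evaluations and n multiply-adds, yet the errors differ enormously. A sketch of my own comparing the two on cos^2(x) over [-1, 1] (variable names are mine):

```python
import numpy as np

f = lambda x: np.cos(x)**2
exact = 1.0 + np.sin(2.0) / 2.0   # closed form of the integral over [-1, 1]

n = 10
# Left-point rectangle rule: n evaluations, n multiplications
xs = -1.0 + (2.0 / n) * np.arange(n)
rect = (2.0 / n) * np.sum(f(xs))

# 10-point Gauss-Legendre: the same n evaluations and n multiplications
x, w = np.polynomial.legendre.leggauss(n)
gauss = np.sum(w * f(x))

rect_err = abs(rect - exact)
gauss_err = abs(gauss - exact)
```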
Math is really very beautiful. Thumbs up to @MathTheBeautiful
4:20
Let freedom reign!