Finally someone that actually derives the PCA without just reporting the algorithm, great work!
A well-designed animation surpasses thousand words!
I am very thankful that I found your video.
I was learning PCA but wasn't able to picture it in 3D space, but you explained it really well.
Kudos to you, mate.
PCA is like 'magic', never really understood it but it is so useful! thanks for the great video.
This video is very good! I like how you labeled the first component as "power". I think it is important to clarify that PCA loses the distinction of original features unless you keep all the principal components, and this new labeling explains this very well.
That is an absolute masterpiece. Thank you for your plain, visualizing video.
Best video on PCA I've seen out the hundreds
Really well explained and amazing visualizations! Thanks.
Glad you liked it!
Amazing, Bachir Khadir! You explained it visually in much less time. Keep developing the visual world; waiting to see more from you.
Such a clear explanation of PCA! Giving the example really helps a lot to understand the meaning and how to use it.
Awesome visual and intuitive way to explain PCA, loved the graphics too :)
Very well put in such a short time.. conveyed the essence very well.. I'll go ahead and subscribe to you..
Keep up the awesome work..
Best video out here about PCA!
Beautiful and well explained :) Hello from Singapore!
I'm wondering: what animation software do you use to produce this?
Hello! :) I used the software Blender3D for creating the 3D animations, the library manim for 2D, and premiere/after effect for putting everything together.
@@VisuallyExplained amazing! Thanks much :) keep doing what you do
Extraordinary video! I will show this to my students in all my linear algebra classes. One very minor comment: it is worth mentioning that your data is centered before beginning the analysis. That is, each column vector has its mean subtracted. That is why C (as you defined it) is a covariance matrix.
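The centering point above can be checked with a minimal NumPy sketch (toy data, not the video's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 100 samples, 3 features (toy data)
Xc = X - X.mean(axis=0)              # center: subtract each column's mean
C = Xc.T @ Xc / (len(Xc) - 1)        # now C is the sample covariance matrix
assert np.allclose(C, np.cov(X, rowvar=False))
```

Without the centering step, `Xc.T @ Xc` would mix the means into the second moments and C would not be a covariance matrix.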
Perfectly explained.
Brilliant video, thanks very much.
You are great, sir. I struggled to find out what PCA is; everyone's way of explaining it is complicated. Your way can help people understand PCA in Python. Thanks.
great visuals, thanks!
Nice explanation: simple, fast, and effective. Good job with the example and the editing too.
amazingly clear explanation of PCA!
Amazing! But such a cliff-hanger! I want to see the kernel trick as well :)
Coming soon, stay tuned ...
Thank you, the video was fun to watch with clear explanations
Big ups from 🇲🇦 Keep up the great work 👏🏼👏🏼👏🏼
Thanks!
Excellent explanation with a beautiful aesthetic
Glad you liked it
Excellent brother 🇲🇦
Loved the visual depiction to explain the concept.. I wish to know which software was used for the animations ??
Really helpful video and channel overall! Hope you keep It up
It might be possible to explain the search for PCs even without explaining the Lagrangian optimization: there is simply a linear transformation that one wants to perform on the features such that the covariance matrix is as diagonal as possible. The reason is that when the non-diagonal terms are 0 or close to 0, it means the two corresponding new features are really independent. So the explanation can actually boil down to finding the best linear transformation. This vague explanation shows why we should search for eigenvectors. It doesn't, however, explain why the best eigenvector is the one with the largest eigenvalue.
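The diagonalization view described above can be verified numerically: projecting centered data onto the eigenvectors of its covariance matrix yields new features whose covariance is (numerically) diagonal. A sketch with toy correlated data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # correlated toy data
Xc = X - X.mean(axis=0)                       # center the data
C = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)          # eigenvectors of the covariance
Z = Xc @ eigvecs                              # new features = projections on PCs
C_new = np.cov(Z, rowvar=False)
# off-diagonal terms of C_new are ~0: the new features are uncorrelated
assert np.allclose(C_new, np.diag(np.diag(C_new)), atol=1e-8)
```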
Very well Explained!! Leaving a comment to increase the popularity of the video!
Thanks a lot 😊
Great video! But, how is happiness related to any of these factors? Based on the covariance matrix, I could only see how each factor is related to one another. Was there another vector in there based on ranking that is not included?
Great video and nice explanations! That's a lot of work on the animations and textures 😁👍
Yes it was. Thanks a lot for the encouragement!
We center the data to have a mean of 0, which allows us to match the form of the covariance matrix provided in the video
And thanks for the video btw. It is amazing.
I love it! Thanks for that. Can you share the code used for PCA in this video, please? I am trying to repeat it, but my results don't match yours, and I want to see where I'm going wrong (I didn't find the code in the description or on GitHub).
Thanks for the video.
Thank you! It brings back (mostly unpleasant) memories of college matrix algebra from 4 decades past... but I get the gist.
The only thing I could wish for would be a way to stop the video, and have a tool to re-orient the static 3D representation onto the 2D screen. That would greatly help me visualize what's being said (so well!)
Thanks for the feedback, that’s an interesting suggestion
Simple but not simplistic... this is the eigenTRUTH. THANKS FROM ALGERIA...
Thank you very much!
Just asking... am I right to say C is positive semidefinite?
BTW, Mind blowing video....noice
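Yes: since the centered covariance has the form X^T X (up to scaling), u^T C u = ||Xu||^2 / (n-1) ≥ 0 for every u, which is exactly positive semidefiniteness. A quick numerical check with toy data:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 4))
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance matrix
# u^T C u = ||Xc u||^2 / (n-1) >= 0 for every u, so C is positive semidefinite
for _ in range(100):
    u = rng.normal(size=4)
    assert u @ C @ u >= -1e-9
# equivalently: all eigenvalues of C are nonnegative
assert np.all(np.linalg.eigvalsh(C) >= -1e-9)
```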
Thank you for this amazing video. This has helped me a lot, but I am a little bit confused about 2:18 when you say that it can be solved via Lagrange multiplier -- is this a convex optimization problem? The form looks good but this is a maximization problem. How can we apply the Lagrange multiplier method to solve a problem if it is non-convex?
Great point! The problem as written is not convex. But this is one of the (very) few nonconvex problems that can be solved to optimality with techniques usually reserved for convex problems.
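The claim in the reply above can be verified numerically: the maximum of u^T C u over unit vectors is attained at the eigenvector of the largest eigenvalue, and no random unit direction beats it. A sketch with a random positive semidefinite matrix (not the video's data):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
C = A @ A.T                            # a random positive semidefinite matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns eigenvalues in ascending order
u_star = eigvecs[:, -1]                # eigenvector of the largest eigenvalue
best = u_star @ C @ u_star             # equals the largest eigenvalue
assert np.isclose(best, eigvals[-1])
# no random unit vector achieves a larger value of u^T C u
for _ in range(1000):
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    assert u @ C @ u <= best + 1e-9
```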
Thank you
Thanks!
SUPERAWESOME!!!
This is so good
Permission to learn, sir. Thank you.
thanks mate!
Nice
I know that Moroccan accent
how i imagine a typical united nations summit discussion to be
I am sorry for the criticism; please read it only if you want to improve, otherwise ignore it.
Your graphics are great, and you know it, but some steps are not explained at all. For example, at 4:45 you suddenly show 3 axes that stop being perpendicular; not only is it unclear why, but this unexplained "why" keeps the student's brain busy instead of following you. I am well familiar with PCA and I maybe understood what you were trying to say, but others probably either didn't get you or (most commonly) think they did, but they did not.
Thank you for taking the time to watch the video so carefully. I very much welcome your criticism to help improve the channel :-)
awesome animations!!! Thanks so much!
Very good explanation. Just how could you infer the meaning of the first two components that you called 'power' and 'balance'?
This is actually a very good question! One of the downsides of PCA is that it gives components that are not interpretable by default. The only way to give them meaning is to look at the coefficients of the component vectors and try to make sense of them (which is what I did for the video).
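The coefficient-inspection approach mentioned above can be sketched as follows (toy data; the feature names are borrowed from the video's example, the numbers are made up):

```python
import numpy as np

feature_names = ["GDP", "Social", "Life"]       # names from the video's example
rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))                   # toy data, not the real dataset
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pc1 = eigvecs[:, -1]                            # first principal component
# read off each feature's weight to interpret the component
for name, coef in zip(feature_names, pc1):
    print(f"{name}: {coef:+.2f}")
```

A component whose coefficients are all large and positive could be read as an overall "power" axis; one with mixed signs contrasts features, like the "balance" axis in the video.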
You are a very talented teacher !
Thank you! 😃
just wanted to thank you brother for this hard work! best explanations! saving me in grad school right now!
Awesome!!! Thank you so much! It's so fun to watch and so well explained!
Wow, simplified the entire concept of PCA. And also I love the example u gave. Thnx for the vid 💛🧡
Amazing explanation! Thank you
Absolutely loved the explanation
Super informative and so eloquently explained! Thankyou so much!
But where does the factor "happiness" play a role here? We usually optimize some dependent variable (e.g. "happiness") but it isn't represented anywhere here, while it's the question we're asking. I'm a bit confused here :/
excellent!!
dude! what an amazing channel! Super underrated man
1:55 The information preserved, i.e. the dot product, would be just x transpose u, wouldn't it? Why did we square it? Is it because we always take the root mean square?
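One way to see the squaring (a sketch, assuming the data is centered as in the video): the variance of the projected points equals the mean of the squared dot products, so maximizing total squared projections is maximizing preserved variance.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
Xc = X - X.mean(axis=0)                 # centered data: projections have mean ~0
u = np.array([1.0, 2.0, -1.0])
u /= np.linalg.norm(u)                  # unit direction
proj = Xc @ u                           # dot products x^T u for every sample
# variance of the projections = mean of the squared dot products
assert np.isclose(proj.var(), np.mean(proj**2))
```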
But what does it mean "to provide as much information as possible"? To maximize the variance?
At 4:57, "the happiest countries seem to be the most balanced ones" seems wrong; shouldn't it be "the most powerful ones"?
Great video!
Great video! For clarity: I've noticed that the features are color-coordinated; however, Social is green and Life is blue, which means the Life and Social labels in your equations for u1 and u2 should be swapped: u2 = (0.22 GDP + 0.55 Social) - 0.8 Life. Check the vectors as well. Could you please clarify?
Just an observation for clarity. Thank you. :)
thank you
Well explained video, but just a quick pointer. "Icelandic countries" is not a thing. Iceland is a country by itself. I am sure you must have meant Scandinavian countries. :) Otherwise, well made.
Very respectfully, please use correct maps of countries. For example India.
Great, great video; so much easier to understand.
This channel is really good.
Thank you! Very clear!
How did you take the gradient?
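For reference, the gradient of the objective u^T C u with respect to u is 2Cu when C is symmetric (which a covariance matrix always is); that is what the Lagrangian condition Cu = λu comes from. A quick finite-difference check with a toy symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(3, 3))
C = (A + A.T) / 2                     # symmetric matrix, like a covariance
u = rng.normal(size=3)
grad = 2 * C @ u                      # gradient of u^T C u for symmetric C
# verify each component against a central finite difference
eps = 1e-6
for i in range(3):
    e = np.zeros(3); e[i] = eps
    fd = ((u + e) @ C @ (u + e) - (u - e) @ C @ (u - e)) / (2 * eps)
    assert np.isclose(fd, grad[i], atol=1e-5)
```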
One word: grateful
Amazing explanation man
Excellent work.
Hmm - I don't think Norway would appreciate being called an "Icelandic" country. Iceland might not, either.
My bad… I meant Scandinavian, not Icelandic.
Nice!
0:00