Visual Explanation of Principal Component Analysis, Covariance, SVD
- Published: Sep 30, 2024
- Linearity I, Olin College of Engineering, Spring 2018
I will touch on eigenvalues, eigenvectors, covariance, variance, covariance matrices, the principal component analysis process and its interpretation, and singular value decomposition.
Great clarity. You clearly understand your stuff at a deep level, so it's easy for you to teach.
Wow, that was quite a good explanation.
PS: The video is targeted at people who already have deep knowledge of what it is trying to explain.
Well explained, you should do more videos
Clear explanation. Thank you for shedding more light, especially on the application of eigenvalues and eigenvectors.
Nicely done. It hit the right level for someone who understands the linear algebra behind Eigenvectors and Eigenvalues but still needed to make the leap of connecting a dot or two in the application of PCA to a problem. Again, thank you!
Beautifully explained! Thank you so much!
I do understand that eigenvalues represent the factor by which the eigenvectors are scaled, but how do they signify “the importance of certain behaviors in a system”, what other information do eigenvalues tell us other than a scaling factor? Also, why do eigenvectors point towards the spread of data?
If you consider a raw matrix or just geometric examples, eigenvalues are indeed just a scaling factor, and you cannot say much more. But here we have additional context: we are doing statistics and putting data into a covariance matrix, which means we can add more interpretations. The eigenvector is not just some eigenvector of some matrix; it's the eigenvector of a *covariance matrix*. In the context of statistics, we've put data into a matrix whose elements measure all the possible spread of the data, which is why we can now say an eigenvector points towards the spread of the data and its length (eigenvalue) relates to the importance of that spread.
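A minimal sketch of that interpretation, assuming NumPy and some made-up correlated 2D data: the top eigenvector of the covariance matrix lines up with the direction of greatest spread, and its eigenvalue is the variance along that direction.

```python
import numpy as np

# Made-up 2D data, stretched along the 45-degree direction.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
data = np.column_stack([x, x + 0.3 * rng.normal(size=500)])

cov = np.cov(data, rowvar=False)   # 2x2 covariance matrix
vals, vecs = np.linalg.eigh(cov)   # eigh returns eigenvalues in ascending order

print(vecs[:, -1])  # eigenvector of the largest eigenvalue: roughly ±[0.7, 0.7],
                    # i.e. it points along the direction of greatest spread
print(vals[-1])     # its eigenvalue is the variance along that direction
```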
Your explanations are awesome! Thank you!
Believe it or not, I've been wondering a lot about the concept of covariance because every video seems to miss the reason behind the idea. But I think I kind of figured it out today before watching this video, and I drew the exact same thing that is in the thumbnail. So I guess I was thinking correctly : ))
Really amazing lecture! It made my conception of eigenvalues and eigenvectors clear. Thanks a lot!
Around 1:36, you said "we divide by n for covariance", but we divide by n-1 instead. Please do check on that. Thanks for the video. Maybe I should say the estimated (sample) covariance has the n-1 division.
Love this video, great work!
Awesome explanation!! Nobody did it better!
Extremely helpful. Thank you!
1:37 shouldn’t the covariance be divided by (n-1)?
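For what it's worth, here's a quick NumPy check of the two conventions; np.cov uses the n-1 (sample) divisor by default, and bias=True switches to the n (population) divisor the video uses:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 4.0, 3.0])

sample = np.cov(x, y)                 # divides by n-1 (unbiased estimator)
population = np.cov(x, y, bias=True)  # divides by n

print(sample[0, 1], population[0, 1])  # 1.0 vs 0.75 for these numbers
```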
Nice visual explanation of covariance!
Excellent video, thank you!
Graphical interpretation of covariance is very intuitive and useful for me. Thank you.
Great job explaining this
thank you for this amazing video
Would love to request an in person version
Great concise presentation, much appreciated! 👍
Why did you stop making videos?
Thank you. It was beautiful
poggers explanation, thank you
ty
Good job, no wasted time
Good explanation
Plz do more videos
How do u find eigenvalues and eigenvectors from the covariance matrix?
Same as usual, right? Find lambda using det(Sigma - lambda * I) = 0: subtract lambda from the main diagonal of the covariance matrix, take the determinant of that, and you're left with a polynomial in lambda, which you then solve; each root is an eigenvalue.
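To make that concrete, here is a hedged sketch for a 2x2 case (the matrix values are made up): the characteristic polynomial of a 2x2 matrix is lambda^2 - trace*lambda + det, and its roots should match NumPy's eigenvalues.

```python
import numpy as np

sigma = np.array([[2.0, 1.0],    # made-up 2x2 covariance matrix
                  [1.0, 2.0]])

# Characteristic polynomial of a 2x2 matrix:
# det(sigma - lambda*I) = lambda^2 - trace(sigma)*lambda + det(sigma)
tr, det = np.trace(sigma), np.linalg.det(sigma)
roots = np.roots([1.0, -tr, det])   # the two eigenvalues

print(sorted(roots))                # [1.0, 3.0]
print(np.linalg.eigvalsh(sigma))    # same values straight from NumPy
```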
Good lecture
Thanks for concisely explaining that PCA is just SVD on the covariance matrix.
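A small sanity check of that equivalence, assuming NumPy and invented data: since a covariance matrix is symmetric positive semi-definite, its SVD and its eigendecomposition agree up to ordering and sign.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # made-up data
cov = np.cov(data, rowvar=False)

U, s, Vt = np.linalg.svd(cov)     # SVD of the covariance matrix
vals, vecs = np.linalg.eigh(cov)  # its eigendecomposition

# Singular values equal eigenvalues (eigh sorts ascending, svd descending),
# and the columns of U match the eigenvectors up to sign (assuming no
# repeated eigenvalues, which random data makes very unlikely).
print(np.allclose(s, vals[::-1]))                     # True
print(np.allclose(np.abs(U), np.abs(vecs[:, ::-1])))  # True
```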
1:28 I personally visualise covariance like this. I always thought I was wrong; I have never seen others doing this. How come??
Very well done!
awesome explanation! thx!
Very nice video. I plan to use it for my teaching. What puzzles me a bit is that the PCs you give as an example are not orthogonal to each other.
Great video! Can anyone tell me how she decided that PC1 is spine length and PC2 is body mass? Should we guess (hypothesize) this in real-world scenarios?
I have always dreaded statistics, but this video made these concepts so simple while connecting it to Linear algebra. Thank you so much ❤
I LOVE YOU !!!!! whattay explanation... thank you so much
This video needs a golden buzzer.
Agreed!!!
Best PCA Visual Explanation! Thank You!!!
Incredible video. Genuinely exactly what I needed.
thank you for this amazing and simple explanation
I thought PCA was a hard concept. Your video is so great!
Thank you! very nice video, well explained!
This is excellent, Emma... I will subscribe to your videos!
Great explication. Thank you.
Hello Emma, Great job! Very nicely explained.
thanks for this simple yet very clear explanation
You are awesome... you make a mediocre out of a know-nothing.
I love the way u spelled "data" at [3:34]😁😁
Very nice... pls keep posting
Nice job, was always kinda confused by this.
A very good explanation.
Dear ma'am,
How did you obtain the matrix at 5:30?
Find the covariance matrix of these variables, as at 2:15, and take its eigendecomposition (find its two dominant eigenvectors). The matrix at 5:30 is made of those two dominant eigenvectors; each column is an eigenvector.
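A hedged sketch of that recipe in NumPy (the data here is invented, since the video's actual numbers aren't in the comments):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(100, 4))  # made-up data: 100 samples, 4 variables

cov = np.cov(data, rowvar=False)
vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

W = vecs[:, -2:][:, ::-1]         # two dominant eigenvectors as columns,
                                  # PC1 first (the role of the 5:30 matrix)
projected = (data - data.mean(axis=0)) @ W  # data in PC1/PC2 coordinates
print(W.shape, projected.shape)             # (4, 2) (100, 2)
```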
Emma Freedman thank u!
The video's explanation was great and covered all the fundamentals required to fully understand PCA !! 😃
awesome explanation! make more vids pls
Very nice explanation!
Thank you, Ma'am!
investigate hedge/hogs
Great video, thank you!
great explanation
I just love the voice🙄😸
beautiful, thanks a lot!
Great explanation!
thank you so much!! :)
Thank you for this great lecture.
thanks A LOT
AWESOMEEE 🤘🤘🤘
Best explanation I have heard of PCA. Thank you
5:12 was very good
Awesome!
nicely explained
Tnks
Well explained
4:35
Congratulations Emma, your work is excellent!
Babe, var(x,x) makes no sense. Either you say var(x) or cov(x,x).
No one explains why they use the covariance matrix. Why not use the actual data and find its eigenvectors/eigenvalues? I have been through hundreds of videos and books, and no one explains that. It just doesn't make sense to me to use the covariance matrix. Covariance is a very useless parameter; it doesn't tell you much at all.
No, it does, especially in PCA. But you are right, you need actual data. Say the data are 3D points of some 3D object: if you use this technique (build a covariance matrix from the 3D points and do PCA on it), you will find a vector aligned with the overall direction of the shape. For instance, you will find the main axis of a 3D cylinder. That is quite useful information.
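A minimal sketch of that cylinder example, assuming NumPy and made-up geometry:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
theta = rng.uniform(0, 2 * np.pi, n)
# Points on a cylinder of radius 1 whose axis is the z-axis, height 10.
points = np.column_stack([np.cos(theta), np.sin(theta),
                          rng.uniform(-5, 5, n)])

cov = np.cov(points, rowvar=False)
vals, vecs = np.linalg.eigh(cov)

axis = vecs[:, -1]        # eigenvector of the largest eigenvalue
print(np.round(axis, 2))  # ≈ [0, 0, ±1]: aligned with the cylinder's main axis
```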