19. Principal Component Analysis
- Published: Feb 7, 2025
- MIT 18.650 Statistics for Applications, Fall 2016
View the complete course: ocw.mit.edu/18-...
Instructor: Philippe Rigollet
In this lecture, Prof. Rigollet reviewed linear algebra and talked about multivariate statistics.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
already falling in love with this professor and his classes...
If you find this lecture challenging, it might be because you've forgotten some basic linear algebra. Don't be discouraged by the somewhat trivial algebraic calculations. The prof does a very good job of explaining the intuition and statistical foundation for doing PCA. PCA is so commonly used in psychology studies, yet no one in my psych department seems to have a clue where PCA comes from.
extremely helpful with building the basics and then moving forward
For people who are whining that the lecture is too hard or that they can't follow: I think you guys don't have the prerequisites for the course. His lecture illustrates PCA from the statistical perspective, and anybody who is serious about data science should know that statistics and linear algebra share a lot of the same ideas seen from different perspectives.
Gave me some insight, thanks. I liked the part about how u^T S u is the variance of the X's along the u direction. Good to know as an alternative viewpoint to singular value decomposition for PCA.
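(Not from the lecture, just a quick numpy check of that claim — the data, the direction u, and the 1/n covariance convention below are my own choices for illustration:)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))        # n = 500 observations in d = 3 dimensions
Xc = X - X.mean(axis=0)              # center the data
S = Xc.T @ Xc / X.shape[0]           # empirical covariance (1/n convention)

u = np.array([1.0, 2.0, -1.0])
u = u / np.linalg.norm(u)            # unit-length direction

proj = X @ u                         # scalar projection of each observation onto u
print(np.allclose(u @ S @ u, proj.var()))  # True: u^T S u is the variance along u
```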
I was fast-forwarding like crazy until I heard something and thought, "Damn, not only the first minute without audio." Just to realise my sound was muted.
nice example of seeing matrices from a statistical perspective
Audio starts at 1:14
Thanks.
Great lecture. Thanks so much, Professor.
H is an n-by-n matrix and v is a d-element column vector, so H cannot multiply v.
He corrected the idea but didn't clean up the board. v is n-dimensional.
But I have a doubt here: n is the number of examples and d is the dimension of the space, so v should be of size (d×1), and then Hv should not be feasible??
Can someone pls answer this??
@@danishmahajan6901 Same doubt, have you got an answer?
@yashwanth4120 no
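(A small shape check in numpy, under my reading of the lecture's notation: H = I_n − (1/n)·11^T is n×n, so it multiplies the n×d data matrix X, or any n-vector, but not a d-vector. The toy data below is made up.)

```python
import numpy as np

n, d = 6, 3
X = np.arange(n * d, dtype=float).reshape(n, d)   # n x d data matrix
H = np.eye(n) - np.ones((n, n)) / n               # centering matrix, n x n

print((H @ X).shape)                              # (6, 3): H acts on the n rows
print(np.allclose(H @ X, X - X.mean(axis=0)))     # True: each column gets centered

v = X[:, 0]                                       # an n-vector; H @ v is well-defined
print(np.allclose(H @ v, v - v.mean()))           # True
```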
Is the professor teaching PCA by writing key concepts on a cloudy, poorly erased blackboard so that we extract the key features out of the entire volume and identify the most significant ones? Is that why he is writing on a blackboard like this?
He is pretty good actually
Absolutely precious! Excellent in explaining details! Thank you.
How do you prove that the eigenvectors are the columns of the projection matrix?
49 min in and still hoping he'll get to PCA soon hahaha... great lecture though
A good opportunity to burn calories would be to wipe the blackboard properly.
Clean the blackboard PROPERLY
Can anyone pls help me with how the prof arrived at the final result from the multiplication Hv?? I am a little confused by the steps.
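(Not the prof's board work, just my own expansion, assuming the usual definition H = I_n − (1/n)11^T with v an n-vector:)

```latex
Hv \;=\; \Big(I_n - \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top}\Big)v
   \;=\; v - \tfrac{1}{n}\,\mathbf{1}\,(\mathbf{1}^{\top} v)
   \;=\; v - \bar{v}\,\mathbf{1},
\qquad \bar{v} = \tfrac{1}{n}\sum_{i=1}^{n} v_i .
```

So multiplying by H just subtracts the average of v's entries from each entry, i.e. it centers v.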
Concept of Eigenvector at 1:02
Wonderful teacher and everything. But what's with the horrible board erasing?
Shouldn't the empirical covariance matrix be divided by n-1 and not n?
both definitions work well
@@professorravik8188 May I ask why? Is it because we suppose we are calculating the empirical covariance matrix for the whole population, whereas if we wanted to estimate it from a sample of the population, we would have to divide by n-1?
@@vojtechremis3651 Dividing by n-1 gives an unbiased estimator, while dividing by n gives the MLE, which is the definition of the empirical variance.
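(A tiny numpy illustration of the two conventions, not from the lecture: ddof=0 divides by n, the MLE / empirical variance; ddof=1 divides by n-1, the unbiased estimator. The data is made up.)

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
n = len(x)
print(x.var(ddof=0), ((x - x.mean()) ** 2).sum() / n)        # divide by n
print(x.var(ddof=1), ((x - x.mean()) ** 2).sum() / (n - 1))  # divide by n-1
```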
Can anyone explain how he got the term in the parentheses at 39:07? Why does v^T·1 equal 1^T·v (where 1 is the all-ones vector)?
They are transposes of each other, and since each is a scalar (a 1×1 quantity equals its own transpose), they are the same.
horrible. Don't max out your volume. There's nothing till you get a huge surprise at 1:15.
One of the cameras is tracking the movement of the lecturer, and it makes me dizzy. The view of the blackboard is enough. Even in 2016, the cameraman at OCW still couldn't master how to record good video lectures.
I don't know why he lets I_d rather than I_n denote the n-by-n identity matrix starting at 32:10.
oh, he corrects this mistake around 40:30
thanks ♥️🤍
Is he deliberately making the writing hard to read by leaving the blackboards so poorly erased and writing on those instead of the clean black ones?
At 1:08:10, shouldn't those lambdas be the eigenvalues of Sigma (the covariance matrix)?
The only bothersome thing in this video is the dirty blackboard.
Can anyone explain how he is multiplying the identity matrix I_d, which is d×d, with the all-ones matrix, which is n×n?????
Never mind... he clears it up around 40:00; it was a gigantic mess.
wtf, why is he writing on such a messed-up board???
Absolutely great. If you have trouble getting this, maybe read a book first.
47:25 bottom left: How is Var(u^T X) defined? What does the "variance" of a random vector mean? Thank you so much
X is a vector, not a matrix, in this case, so u^T X is just a scalar.
Try working backwards from the result u^T Σ u.
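(For the question above — a short derivation I'm adding myself, using that u^T X is a scalar random variable and that Σ denotes the covariance matrix of X:)

```latex
\mathrm{Var}(u^{\top}X)
 = \mathbb{E}\big[\big(u^{\top}(X-\mathbb{E}X)\big)^{2}\big]
 = u^{\top}\,\mathbb{E}\big[(X-\mathbb{E}X)(X-\mathbb{E}X)^{\top}\big]\,u
 = u^{\top}\Sigma\,u .
```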
Can you share your slide please?
The lecture slides are available on MIT OpenCourseWare at: ocw.mit.edu/18-650F16. Best wishes on your studies!
For such an important concept, you would think MIT would've fixed this issue by now.
Good lecture, but bad handling by the cameraman.
I understand nothing...
I'd rather watch one of the lectures on PCA by Prof. Ali Ghodsi.
Link please
@@NphiniT ruclips.net/p/PLehuLRPyt1Hy-4ObWBK4Ab0xk97s6imfC
This is the full playlist.
Man this video is such a torture! :D
why?
No sound?
It has sound... it's just really low. Sorry!
11:00 There is a ghost on the board, in its lower-right corner.
I see why this was made free.
He should learn how to teach from Gilbert Strang.
Bit rude.
@@aazz7997 but true
Lectures by both professors are awesome. It may help with understanding this course if the prerequisite courses (18.600, 18.06, 18.100, etc.) are completed first. It may also be helpful to study the slides before listening to the lectures.
my computer is so smart
Is this really MIT?
He makes PCA way more complicated than it should be, wow...
Most of what he is doing is introducing the linear operator formalism. The gravy here is this side stuff, not the minimalist way to explain PCA
no
He doesn't even care to erase the board properly 😅😅😅😅
Thanks for the video! Question: can someone explain the difference between big Sigma and S? One is the covariance matrix, one is the sample covariance matrix. Are they not the same thing? Thanks!
Big Sigma is for the whole population. S is computed from a sample drawn from the population; S is an estimate of Sigma. If the sample is big enough, S approaches Sigma, but it may not be exactly equal to the population parameter. I hope this is clear!
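(Putting the two side by side — my own summary in standard notation, using the 1/n convention discussed in the thread above:)

```latex
\Sigma = \mathbb{E}\big[(X-\mathbb{E}X)(X-\mathbb{E}X)^{\top}\big]
\quad\text{(population covariance)},
\qquad
S = \frac{1}{n}\sum_{i=1}^{n}\big(X_i-\bar{X}\big)\big(X_i-\bar{X}\big)^{\top}
\quad\text{(sample covariance)}.
```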
This guy is so cute
Insane :)
this is helpful ♥️🤍
This is really not the quality I expected from MIT; pretty sloppy instructor.
Dude needs better erasers
1:13:51 v1 ZULUL
this guy is a complete mess...
He looks rather insecure.
Terrible
Why have you published such a mess? Shame on you!
Audio starts at 1:15