19. Principal Component Analysis

  • Published: 7 Feb 2025
  • MIT 18.650 Statistics for Applications, Fall 2016
    View the complete course: ocw.mit.edu/18-...
    Instructor: Philippe Rigollet
    In this lecture, Prof. Rigollet reviewed linear algebra and talked about multivariate statistics.
    License: Creative Commons BY-NC-SA
    More information at ocw.mit.edu/terms
    More courses at ocw.mit.edu

Comments • 85

  • @ZiLi-h9f
    @ZiLi-h9f 2 months ago +1

    already falling in love with this professor and his classes...

  • @bowenzheng8580
    @bowenzheng8580 4 years ago +11

    If you find this lecture challenging, it might be because you have forgotten some basic linear algebra. Don't be discouraged by the somewhat trivial algebraic calculations. The prof does a very good job of explaining the intuition and the statistical foundation for doing PCA. PCA is so commonly used in psychology studies, yet no one in my psych department seems to have a clue where PCA comes from.

  • @gouravkarmakar9606
    @gouravkarmakar9606 4 years ago +7

    Extremely helpful for building the basics and then moving forward.

  • @user-ff5sx6pg3d
    @user-ff5sx6pg3d 7 months ago +3

    For the people whining that the lecture is too hard or that they cannot follow: I think you do not have the prerequisites for the course. His lecture illustrates PCA from a statistical perspective, and anybody who is serious about data science should know that statistics and linear algebra share a lot of the same ideas seen from different perspectives.

  • @linkmaster959
    @linkmaster959 4 years ago +4

    Gave me some insight, thanks. I liked the part about how u^T S u is the variance of the X's along the direction u. Good to know as an alternative viewpoint to singular value decomposition as PCA.
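
    A minimal NumPy sketch of that point (the data X and the direction u below are made up for illustration): the variance of the projections Xu equals the quadratic form u^T S u.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))      # n = 500 observations in d = 3 dimensions
        u = np.array([1.0, 2.0, -1.0])
        u /= np.linalg.norm(u)             # unit-norm direction

        Xc = X - X.mean(axis=0)            # center the data
        S = Xc.T @ Xc / len(X)             # empirical covariance matrix (divide by n)

        proj_var = np.var(Xc @ u)          # variance of the 1-d projections onto u
        quad_form = u @ S @ u              # u^T S u
        assert np.isclose(proj_var, quad_form)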

  • @vegeta23121992
    @vegeta23121992 5 years ago +6

    I was fast-forwarding like crazy until I heard something and thought, "Damn, it's not only the first minute without audio"... just to realize my sound was muted.

  • @김경환-p2z3c
    @김경환-p2z3c 4 years ago +2

    Nice example of seeing matrices from a statistics perspective.

  • @2357y1113
    @2357y1113 6 years ago +10

    Audio starts at 1:14

  • @aungkyaw9353
    @aungkyaw9353 4 years ago +2

    Great lecture. Thanks so much, Professor.

  • @zhenhuahu9814
    @zhenhuahu9814 5 years ago +3

    H is an n-by-n matrix, and v is a d-element column vector. H cannot multiply v.

    • @brucereinhold9564
      @brucereinhold9564 5 years ago +4

      He corrected the idea but didn't clean up the board; v is n-dimensional.

    • @danishmahajan6901
      @danishmahajan6901 8 months ago

      But I have a doubt here: n is the number of examples and d is the dimension of the space, so v should be of size (d x 1), and then Hv should not be feasible?? (See the sketch after this thread.)

    • @danishmahajan6901
      @danishmahajan6901 8 months ago

      Can someone please answer this??

    • @yashwanth4120
      @yashwanth4120 2 months ago

      @@danishmahajan6901 Same doubt, have you gotten any answer?

    • @danishmahajan6901
      @danishmahajan6901 2 months ago

      @yashwanth4120 no
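
    For readers still stuck on this thread: in the lecture, H = I_n - (1/n) 1 1^T is the n x n centering matrix, so it acts on n-vectors (one entry per observation), not on d-vectors; that is the correction the professor made on the board. A quick numerical check (the vector v below is just an example):

        import numpy as np

        n = 5
        v = np.array([3.0, 1.0, 4.0, 1.0, 5.0])  # an n-vector: one entry per observation

        H = np.eye(n) - np.ones((n, n)) / n       # H = I_n - (1/n) * 1 1^T, an n x n matrix

        # Hv subtracts the mean of v from every entry:
        assert np.allclose(H @ v, v - v.mean())

    Applied to the n x d data matrix X, HX centers every column at once, which is why the sample covariance can be written as (1/n) X^T H X (using that H is a projection, H^2 = H).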

  • @END-ch24
    @END-ch24 2 months ago

    Is the professor teaching PCA by writing key concepts on a cloudy, half-erased blackboard so that we extract the key features from the entire volume and identify the most significant ones? Is that why he writes on a blackboard like this?

  • @pedagil4570
    @pedagil4570 5 years ago +1

    He is pretty good actually

  • @ahmadmousavi495
    @ahmadmousavi495 5 years ago +1

    Absolutely precious! Excellent in explaining details! Thank you.

  • @MsKouider
    @MsKouider 2 years ago

    How do you prove that the eigenvectors are the columns of the projection matrix?
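
    One way to read this question (a sketch, assuming it refers to the orthogonal projector used in PCA): if the columns of V_k are the top-k unit eigenvectors of S, the projector onto their span is P = V_k V_k^T, and each of those eigenvectors is an eigenvector of P with eigenvalue 1.

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 3))
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc / len(X)            # empirical covariance matrix

        eigvals, V = np.linalg.eigh(S)    # columns of V are eigenvectors of S
        Vk = V[:, -2:]                    # top-2 eigenvectors (eigh sorts ascending)
        P = Vk @ Vk.T                     # orthogonal projector onto their span

        assert np.allclose(P @ Vk, Vk)    # retained eigenvectors are fixed by P
        assert np.allclose(P @ P, P)      # P is idempotent, as a projector must be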

  • @yasmineguemouria9099
    @yasmineguemouria9099 4 years ago +13

    49 min in and still hoping he'll get to PCA soon hahaha... great lecture though

  • @tilohauke9033
    @tilohauke9033 4 years ago +9

    A good opportunity to burn calories would be to wipe the blackboard properly.

  • @MrPolinesso3
    @MrPolinesso3 a year ago +4

    Clean the blackboard PROPERLY

  • @danishmahajan6901
    @danishmahajan6901 8 months ago

    Can anyone please help me with how the prof arrived at the final result of the multiplication Hv?? I am a little confused about the steps.

  • @zknolz
    @zknolz 5 years ago +2

    Concept of Eigenvector at 1:02

  • @shashanksharma1498
    @shashanksharma1498 6 months ago

    Wonderful teacher and everything. But what's with the horrible blackboard erasing?

  • @StatelessLiberty
    @StatelessLiberty 5 years ago +4

    Shouldn't the empirical covariance matrix be divided by n-1 and not n?

    • @professorravik8188
      @professorravik8188 4 years ago +2

      Both definitions work well.

    • @vojtechremis3651
      @vojtechremis3651 5 months ago

      @@professorravik8188 May I ask why? Is it because we assume we are computing the empirical covariance matrix for the whole population, whereas if we wanted to estimate it from a sample of the population, we would have to divide by n-1?

    • @xuchuan6401
      @xuchuan6401 20 days ago

      @@vojtechremis3651 Dividing by n-1 gives an unbiased estimator, while dividing by n gives the MLE, which is also the definition of the empirical variance.
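
    A small sketch of the distinction (the sample x below is made up): dividing by n gives the empirical variance, which is also the Gaussian MLE, while dividing by n-1 gives the unbiased estimator; NumPy exposes both through the ddof argument.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(loc=0.0, scale=2.0, size=10)

        var_mle = x.var(ddof=0)        # divide by n:   empirical variance / MLE
        var_unbiased = x.var(ddof=1)   # divide by n-1: unbiased estimator

        # the two differ only by the factor n / (n - 1):
        assert np.isclose(var_unbiased, var_mle * len(x) / (len(x) - 1))

    For large n the factor n / (n - 1) is close to 1, which is why both conventions work well in practice.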

  • @wnualplaud2132
    @wnualplaud2132 3 years ago

    Can anyone explain how he got the term in the parentheses at 39:07? Why does v^T 1 = 1^T v?

    • @asterkleel3409
      @asterkleel3409 3 years ago +1

      They are transposes of each other, and anyway you are going to take the expectation of those two, so it will be the same.
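
    In symbols: v^T 1 is a 1 x 1 quantity, i.e. a scalar, and a scalar equals its own transpose, so v^T 1 = (v^T 1)^T = 1^T v. A one-line check (v is an arbitrary example):

        import numpy as np

        v = np.array([1.0, 2.0, 3.0])
        ones = np.ones(3)
        assert v @ ones == ones @ v    # v^T 1 and 1^T v are the same scalar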

  • @lazywarrior
    @lazywarrior 5 years ago +1

    Horrible. Don't max out your volume: there's nothing until you get a huge surprise at 1:15.
    One of the cameras is tracking the lecturer's movement, and it makes me dizzy. The view of the blackboard is enough. Even in 2016, the cameraman at OCW still couldn't master how to record good video lectures.

  • @阿明打code打羽球
    @阿明打code打羽球 6 years ago

    I don't know why he lets I_d rather than I_n denote the n-by-n identity matrix from 32:10 on.

  • @not_amanullah
    @not_amanullah a month ago

    thanks ♥️🤍

  • @rw7154
    @rw7154 8 months ago

    Is he deliberately making the writing hard to read by leaving the blackboards so poorly erased, and specifically writing on those poorly erased boards instead of the clean black ones?

  • @tomasjurica9624
    @tomasjurica9624 5 years ago

    At 1:08:10, shouldn't those lambdas be the eigenvalues of Sigma (or of the covariance matrix)?

  • @huzefaghadiyali5886
    @huzefaghadiyali5886 3 years ago

    The only bothersome thing in this video is how dirty the blackboard is.

  • @milindyadav7703
    @milindyadav7703 4 years ago

    Can anyone explain how he is multiplying the identity matrix I_d, which is d x d, with the all-ones matrix, which is n x n?????

    • @milindyadav7703
      @milindyadav7703 4 years ago

      Never mind... he clears it up around 40:00. It was a gigantic mess.

  • @viniciusviena8496
    @viniciusviena8496 4 years ago +2

    wtf, why is he writing on such a messed-up blackboard???

  • @吴舒晨
    @吴舒晨 4 years ago +1

    Absolutely great. If you have trouble getting this, maybe read a book first.

  • @sAAgit
    @sAAgit 4 years ago

    47:25 bottom left: How is Var(u^T X) defined? What does the "variance" of a random vector mean? Thank you so much.

    • @zongmianli9072
      @zongmianli9072 4 years ago

      X is a vector, not a matrix, in this case, so u^T X is just a scalar.

    • @quantlfc
      @quantlfc 2 years ago

      Try working backwards from the result u^T Sigma u.
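
    A Monte Carlo sketch of that answer (Sigma and u below are made up): u^T X is a scalar random variable, and its ordinary variance equals the quadratic form u^T Sigma u.

        import numpy as np

        rng = np.random.default_rng(2)
        Sigma = np.array([[2.0, 0.5],
                          [0.5, 1.0]])   # an example covariance matrix
        u = np.array([0.6, 0.8])         # a unit direction

        X = rng.multivariate_normal(mean=[0, 0], cov=Sigma, size=200_000)
        print(np.var(X @ u))             # Monte Carlo estimate of Var(u^T X)
        print(u @ Sigma @ u)             # u^T Sigma u = 1.84; the estimate should be close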

  • @uploaddeasentvideo402
    @uploaddeasentvideo402 2 years ago

    Can you share your slides, please?

    • @mitocw
      @mitocw  2 years ago +2

      The lecture slides are available on MIT OpenCourseWare at: ocw.mit.edu/18-650F16. Best wishes on your studies!

  • @thedailyepochs338
    @thedailyepochs338 4 years ago +1

    For such an important concept, you would think MIT would've fixed this issue by now.

  • @srikanthmyskar5610
    @srikanthmyskar5610 a year ago

    Good lecture, but bad handling by the cameraman.

  • @melihaslan9509
    @melihaslan9509 4 years ago +1

    I understand nothing...

  • @pranavchat141
    @pranavchat141 5 years ago +2

    I'd rather watch one of the lectures on PCA by Prof. Ali Ghodsi.

    • @NphiniT
      @NphiniT 5 years ago

      Link please

    • @pranavchat141
      @pranavchat141 5 years ago

      @@NphiniT ruclips.net/p/PLehuLRPyt1Hy-4ObWBK4Ab0xk97s6imfC
      This is the full playlist.

  • @highartalert6927
    @highartalert6927 5 years ago +8

    Man, this video is such torture! :D

  • @luluW199005
    @luluW199005 6 years ago

    No sound?

    • @mitocw
      @mitocw  6 years ago +1

      It has sound... it's just really low. Sorry!

  • @AndreyMoskvichev
    @AndreyMoskvichev 4 years ago

    11:00 There is a ghost on the board, in its lower right corner.

  • @Illinoise888
    @Illinoise888 4 years ago +1

    I see why this was made free.

  • @罗文山-t5v
    @罗文山-t5v 5 years ago +18

    He should learn how to teach from Gilbert Strang.

    • @aazz7997
      @aazz7997 4 years ago +10

      Bit rude.

    • @MrJ691
      @MrJ691 a year ago

      @@aazz7997 but true

    • @freeeagle6074
      @freeeagle6074 a year ago

      The lectures of both professors are awesome. It may help to complete the prerequisite courses (18.600, 18.06, 18.100, etc.) before taking this one. It may also help to study the slides before listening to the lectures.

  • @djangoworldwide7925
    @djangoworldwide7925 a year ago

    my computer is so smart

  • @chaitanya.teja.g
    @chaitanya.teja.g 6 years ago +3

    Is this really MIT?

  • @mswoonc
    @mswoonc 7 years ago +5

    He makes PCA way more complicated than it should be, wow...

    • @brucereinhold9564
      @brucereinhold9564 5 years ago +4

      Most of what he is doing is introducing the linear operator formalism. The gravy here is this side material, not the minimalist way to explain PCA.

    • @joelwillis2043
      @joelwillis2043 a year ago

      no

  • @out_aloud
    @out_aloud 3 years ago +1

    He doesn't even care to erase the board properly 😅😅😅😅

  • @Tyokok
    @Tyokok 6 years ago

    Thanks for the video! Question: can someone explain the difference between big Sigma and S? One is the covariance matrix, the other is the sample covariance matrix. Are they not the same thing? Thanks!

    • @MrTdib
      @MrTdib 5 years ago +10

      Big Sigma is for the whole population; S is computed from a sample selected from the population, and it is an estimate of Sigma. If the sample is big enough, S approaches Sigma, but it may not be exactly equal to the population parameter. I hope this is clear!
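
    A sketch of that convergence (the population Sigma below is made up): the sample covariance S computed from n draws gets closer to Sigma as n grows.

        import numpy as np

        rng = np.random.default_rng(3)
        Sigma = np.array([[1.0, 0.3],
                          [0.3, 2.0]])               # population covariance matrix

        for n in (10, 1_000, 100_000):
            X = rng.multivariate_normal([0, 0], Sigma, size=n)
            S = np.cov(X, rowvar=False)              # sample covariance from n draws
            print(n, np.abs(S - Sigma).max())        # the error typically shrinks with n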

  • @jivillain
    @jivillain 4 years ago +2

    This guy is so cute

  • @quantlfc
    @quantlfc 2 years ago

    Insane :)

  • @not_amanullah
    @not_amanullah a month ago

    this is helpful ♥️🤍

  • @WowIan2012
    @WowIan2012 5 years ago +6

    This is really not the quality I expected from MIT; pretty sloppy instructor.

  • @danielketterer9683
    @danielketterer9683 4 years ago

    Dude needs better erasers

  • @georgeivanchyk9376
    @georgeivanchyk9376 4 years ago

    1:13:51 v1 ZULUL

  • @carloszg8398
    @carloszg8398 5 years ago

    This guy is a complete mess...

  • @AndresSalas
    @AndresSalas 7 years ago +2

    He looks rather insecure.

  • @looploop6612
    @looploop6612 5 years ago

    Terrible

  • @zbynekba
    @zbynekba 4 years ago

    Why have you published such a mess? Shame on you!

  • @Mau365PP
    @Mau365PP 3 years ago +4

    Audio starts at 1:15