PCA, SVD

  • Published: Oct 23, 2024

Comments • 24

  •  10 years ago +4

    I found this video very helpful as a quick intro to PCA, especially the Eigen-face example gave a good intuition for PCA applications. Thanks a lot!

  • @gt3726b
    @gt3726b 11 years ago +7

    Best explanation of these concepts I've ever seen. Thank you!

  • @vedhaspandit6987
    @vedhaspandit6987 9 years ago +4

    Thanks! You made it intuitive, especially the dimensionality reduction and PCA slides at the beginning! Awesome teacher you are!! :)
    At 6:30 you say 'recall that' the covariance matrix sigma decomposes the variance into a set of directions. While I *understood* the statement from the definition of variances/covariances, I would like to know which video of yours you are referring to when you say 'recall'... It would be good for me to go through the basics, and your videos especially, since you teach really well!
    I would also like to know about the eigendecomposition you mention next at 6:37, and to develop *intuition* about eigenvectors! Please consider making a video, if you have not already done one!

    • @AlexanderIhler
      @AlexanderIhler  9 years ago +2

      I think the recording of my lecture on Gaussians is ruclips.net/video/eho8xH3E6mE/видео.html ; hopefully that covers what you're interested in. I'm afraid I do a lot of lecture material in my course that isn't fully recorded; that's just the reality of online vs. in-person teaching. But I do try my best to put most of it up, as reference material. Hope that helps.

  • @luizkfox
    @luizkfox 11 years ago +2

    Nice explanation. It's just what I needed to better understand SVD. Thanks!

  • @adityacs007
    @adityacs007 7 years ago +3

    Excellent videos. Thanks!

  • @KulvinderSingh-pm7cr
    @KulvinderSingh-pm7cr 2 years ago

    Simply best explanation!!

  • @patrickmullan8356
    @patrickmullan8356 3 years ago

    What is X0' (note the dash after the 0)?
    You use it in the equation at 8:50 on the right-hand side, without having defined it before.

  • @Enerdzizer
    @Enerdzizer 5 years ago +1

    Good explanation of PCA, but for SVD only the formal definition and theory. What is the actual connection between the component vectors in PCA and the SVD decomposition? Why are the vectors in the SVD the same as those from PCA?

  • @viacheslavgusev3377
    @viacheslavgusev3377 7 years ago +1

    Alexander, can you explain please: why is (X^T)*X the covariance matrix?

  • @seggsroland5486
    @seggsroland5486 9 years ago +2

    Very good and helpful explanation. Thanks!

  • @dariusdaro555
    @dariusdaro555 11 years ago +4

    I agree, best explanation!

  • @esissthlm
    @esissthlm 10 years ago +2

    Thank you, interesting to see this applied to images!

  • @KulvinderSingh-pm7cr
    @KulvinderSingh-pm7cr 2 years ago

    Beautiful

  • @skilstopaybils4014
    @skilstopaybils4014 9 years ago

    Different places give very different explanations of SVD; some produce m x n, m x m, and n x n matrices from an m x n data matrix. Super irritating. It would also be great if someone showed how exactly to decompose (see the sketch below).
    Also, when you first introduce dimensionality reduction you use z^(i) to represent the new data point, but on the next slide you seem to switch to a^(i). I hope I'm correct in assuming that is just a change of notation?
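
A quick illustration of the two conventions (this sketch is mine, not from the video; the matrix is made up): the "full" SVD returns square m x m and n x n singular-vector matrices, while the "reduced" (economy) SVD keeps only the first min(m, n) columns, and both reconstruct the same m x n data matrix.

```python
import numpy as np

# Hypothetical m x n data matrix: m = 6 examples, n = 4 features.
X = np.random.randn(6, 4)

# Full SVD: U is m x m, s holds the min(m, n) singular values, Vt is n x n.
U_full, s, Vt_full = np.linalg.svd(X, full_matrices=True)
print(U_full.shape, s.shape, Vt_full.shape)    # (6, 6) (4,) (4, 4)

# Reduced (economy) SVD: the factors are thinner, but the product is identical.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(U.shape, s.shape, Vt.shape)              # (6, 4) (4,) (4, 4)

print(np.allclose(X, U @ np.diag(s) @ Vt))     # True: reconstructs X exactly
```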

  • @zrmsraggot
    @zrmsraggot 2 years ago

    Careful with that... as the saying goes, simple, but not simpler.

  • @celisun1013
    @celisun1013 1 year ago

    Please publish more videos, Professor Ihler!

  • @konstantinburlachenko2843
    @konstantinburlachenko2843 8 years ago

    Thanks. At 07:13, U contains eigenvectors. But for which matrix/operator does it contain the eigenvectors?

    • @AlexanderIhler
      @AlexanderIhler  8 years ago

      +Konstantin Burlachenko
      "U" at 7:13 are the eigenvectors of the DxD centered covariance matrix, Sig = (X-mu)^T * (X-mu), of the data
      At 9:30 or so I show the connection between these and the left/right singular vectors of the NxD data matrix, X.
      Unfortunately, due to standard naming conventions, there is a notational change here: "U" from PCA is actually "V" in standard SVD notation.
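
A small NumPy sketch of that correspondence (mine, on made-up data, not from the lecture): the eigenvectors of the centered scatter matrix (X - mu)^T (X - mu) match the right singular vectors of the centered data matrix up to sign, and its eigenvalues equal the squared singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # hypothetical N x D data matrix
Xc = X - X.mean(axis=0)                # centered data, X - mu

# Eigendecomposition of the D x D scatter matrix Sig = (X - mu)^T (X - mu)
Sig = Xc.T @ Xc
evals, evecs = np.linalg.eigh(Sig)
evals, evecs = evals[::-1], evecs[:, ::-1]     # sort largest eigenvalue first

# SVD of the centered N x D data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

print(np.allclose(evals, s**2))                # eigenvalues = squared singular values
# Each eigenvector matches a right singular vector up to sign:
print(np.allclose(np.abs(np.sum(evecs * Vt.T, axis=0)), 1.0))
```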

  • @panweihit
    @panweihit 9 years ago +1

    very clear

  • @merepapa350
    @merepapa350 6 years ago +1

    Good sir

  • @muhittinselcukgoksu1327
    @muhittinselcukgoksu1327 5 years ago

    Thank you so much.

  • @videofountain
    @videofountain 10 years ago

    Thanks. The viewer does not know the use and context of the reduced facial images or the original facial images. It's difficult to agree or disagree that the reduced facial image is useful. Casual observation might consider the result to be just a blurred image.

    • @AlexanderIhler
      @AlexanderIhler  10 years ago +1

      The "use" of unsupervised learning is really up to the user; more directly, all it is is a form of lossy compression -- but hopefully one that is helpful in some way.
      PCA reduces the number of real-valued numbers required to represent the data, by allowing some distortion. This can be used to visualize the data (for example, by reducing to 3 dimensions), or to improve the quality of interpolation, or to improve nearest neighbor or kernel-based predictions (which suffer in high-dimensional settings).
      So the blurred images themselves are probably not useful -- but identifying the linear subspace in which the original images lie may be, such as using that lower-dimensional representation to reduce the complexity of some other step.
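
To make the lossy-compression view concrete, here is a minimal PCA sketch in NumPy (the data and the choice k = 3 are made up for illustration): project each example onto the top k principal directions, keep only those k coefficients, and reconstruct back in the original D dimensions with some distortion.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))         # hypothetical N x D data (e.g., flattened images)

mu = X.mean(axis=0)
Xc = X - mu

# Principal directions = right singular vectors of the centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                                  # keep only k real numbers per example
Z = Xc @ Vt[:k].T                      # N x k compressed representation
X_hat = Z @ Vt[:k] + mu                # lossy reconstruction in the original D dimensions

print(Z.shape, X_hat.shape)            # (200, 3) (200, 50)
print(np.mean((X - X_hat) ** 2))       # distortion introduced by the compression
```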