7. Eckart-Young: The Closest Rank k Matrix to A

  • Published: 11 Sep 2024

Comments • 83

  • @yupm1
    @yupm1 4 years ago +58

    What a wonderful lecture! I wish Prof. Gilbert a very long life.

  • @adolfocarrillo248
    @adolfocarrillo248 5 years ago +66

    I love Prof. Gilbert Strang, he is a man dedicated to teaching mathematics. Please receive a huge hug on my behalf.

  • @kirinkirin9593
    @kirinkirin9593 5 years ago +22

    10 years ago I took OCW for the first time and I am still taking it. Thank you professor Gilbert Strang.

  • @xXxBladeStormxXx
    @xXxBladeStormxXx 3 years ago +32

    It's funny that this video (lecture 7) has vastly fewer views than both lectures 6 and 8. But if the title of this video were PCA instead of Eckart-Young, it would easily be the most viewed video in the series.
    That's why, kids, you should do the entire course instead of just watching 15 minutes of popular concepts.

    • @prajwalchoudhary4824
      @prajwalchoudhary4824 3 years ago

      well said

    • @neoblackcyptron
      @neoblackcyptron 3 years ago +2

      He has not explained anything about PCA in this lecture. He had barely started on it at the end when the lecture wrapped up.

    • @oscarlu9919
      @oscarlu9919 3 years ago +1

      That's exactly what I was thinking. I just followed the sequence of videos and was surprised to notice that this one is about PCA, which is closely connected to the previous videos. And watching the previous videos makes the understanding of PCA far deeper!

  • @neoblackcyptron
    @neoblackcyptron 3 years ago +7

    Really deep lectures, I learn something new every time I watch them again and again. These lectures are gold.

  • @tempaccount3933
    @tempaccount3933 2 years ago +3

    Gil at 3:30.
    Eckart & Young in 1936 were both at The University of Chicago. The paper was published in the (relatively new?) journal Psychometrika. Eckart had already worked on the foundations of QM with some of the founders, and went on to work in Fermi's section on the Manhattan Project. If I recall correctly, Eckart married the widow of von Neumann, and he ended up at UCSD. He was very renowned in applied physics, including oceanography/geophysics.
    Mr Gale Young was a grad student at Chicago. He also had a successful career, taking his Master's from Chicago to positions in academia & the US nuclear power industry.

  • @amirkhan355
    @amirkhan355 5 years ago +11

    Thank you for being who you are and touching our lives!!! I am VERY VERY grateful.

  • @mitocw
    @mitocw  5 years ago +24

    Fixed audio sync problem in the first minute of the video.

  • @eljesus788
    @eljesus788 3 years ago +1

    Gil has been my math professor for the last 12 years. These online courses are so amazing.

  • @dmitriykhvan2035
    @dmitriykhvan2035 3 years ago +3

    you have changed my life Dr. Strang!

    • @Aikman94
      @Aikman94 3 years ago +2

      His passion, knowledge and unique style. He's such a treasure. An amazing professor and wonderful mathematician.

  • @JuanVargas-kw4di
    @JuanVargas-kw4di 2 years ago +1

    In the least-squares vs. PCA discussion that starts at 37:44, he's comparing minimizing the sum of squares of vertical distances to minimizing the sum of squares of perpendicular distances. However, each vertical error is related to its perpendicular error by the same multiplicative constant (the cosine of the angle made by the estimated line), so in a way, minimizing one is tantamount to minimizing the other. Where the two methods do seem to differ is that least squares allows for an intercept term, while the PCA line goes through the origin. However, when we look at the estimate of the intercept term ( b_0 = mean(y) - b_hat*mean(x) ), least squares appears to be performing a de-meaning similar to the first step in PCA. In summary, I think we would need a more thorough discussion than we see in the video in order to conclude that least squares and the first principal component of PCA are different.
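    A minimal numerical sketch of the comparison under discussion (assuming NumPy; the data and variable names are purely illustrative): fit the same 2-D point cloud by ordinary least squares, which minimizes vertical errors, and by the first principal component of the centered data, which minimizes perpendicular errors, and compare the slopes.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 0.5 * x + rng.normal(scale=2.0, size=200)       # noisy linear data

    # Ordinary least squares: minimize the sum of squared vertical distances.
    A = np.column_stack([x, np.ones_like(x)])
    slope_ls, intercept_ls = np.linalg.lstsq(A, y, rcond=None)[0]

    # First principal component: minimize the sum of squared perpendicular
    # distances to a line through the mean of the data.
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    slope_pca = Vt[0, 1] / Vt[0, 0]

    print(slope_ls, slope_pca)                           # the slopes differ in general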

  • @SalarKalantari
    @SalarKalantari 1 year ago +2

    33:54 "Oh, that was a brilliant notation!" LOL!

  • @georgesadler7830
    @georgesadler7830 3 years ago

    Professor Strang, thank you for a great lecture involving norms, ranks, and least squares. All three topics are very important for solid linear algebra development.

  • @KipIngram
    @KipIngram 3 years ago +3

    44:40 - No, it's not making the mean zero that creates the need to use N-1 in the denominator. That's done because you are estimating population mean via sample mean, and because of that you will underestimate the population variance. It turns out that N-1 instead of N is an exact correction, but it's not hard to see that you need to do *something* to push your estimate up a bit.

    • @obarquero
      @obarquero 3 years ago +1

      Well, indeed I guess both are saying more or less the same thing. This is called Bessel’s correction. I prefer to think that dividing by N-1 yields an unbiased estimator, so that on average the sample cov matrix is the same as the cov matrix from the pdf.
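      A quick simulation sketch of this point (assuming NumPy; the sample size and variance are illustrative): averaged over many small samples, dividing the sum of squared deviations by N-1 recovers the population variance, while dividing by N underestimates it by the factor (N-1)/N.

      import numpy as np

      rng = np.random.default_rng(1)
      N, trials = 5, 100_000
      samples = rng.normal(scale=2.0, size=(trials, N))      # population variance = 4

      centered = samples - samples.mean(axis=1, keepdims=True)
      ss = (centered ** 2).sum(axis=1)                       # squared deviations from each sample mean

      print(ss.mean() / (N - 1))   # ~4.0, unbiased (Bessel's correction)
      print(ss.mean() / N)         # ~3.2, biased low by (N-1)/N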

  • @nikre
    @nikre 2 years ago

    A privilege to take part in such a distilled lecture. No confusion at all.

  • @tusharganguli
    @tusharganguli 2 years ago

    Protect this man at all costs! Now we know what an angel looks like!

  • @JulieIsMe824
    @JulieIsMe824 3 years ago +1

    Most interesting linear algebra lecture ever!! It's very easy to understand even for us chemistry students

  • @xc2530
    @xc2530 1 year ago

    27:00 Multiplying matrix A by an orthogonal matrix doesn't change the norm of A
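    A small check of this claim (assuming NumPy; Q and A are random and purely illustrative): multiplying A by an orthogonal matrix Q changes neither the Frobenius norm nor the spectral norm, because Q preserves the length of every vector.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(5, 3))
    Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # random orthogonal matrix

    for ord_ in ["fro", 2]:
        print(ord_, np.linalg.norm(A, ord_), np.linalg.norm(Q @ A, ord_))
        # both norms agree up to rounding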

  • @GeggaMoia
    @GeggaMoia 3 years ago +2

    Anyone else think he talks about math with the same passion Walter White has for chemistry? Love this guy.

  • @Zoronoa01
    @Zoronoa01 3 years ago +1

    Is it my computer, or is the sound level a bit low?

  • @jayadrathas169
    @jayadrathas169 4 years ago +7

    Where is the follow-up lecture on PCA? It seems to be missing from the subsequent lectures.

    • @philippe177
      @philippe177 4 years ago

      Did you find it anywhere? I am dying to find it.

    • @krakenmetzger
      @krakenmetzger 4 years ago +2

      @@philippe177 The best explanation I've found is in a book called "Data Mining: The Textbook" by Charu Aggarwal. The tl;dr: imagine you have a bunch of data points in R^n, and you just list them as rows in a matrix.
      First assume the "center of mass" (the mean of the rows) is 0. Then PCA = SVD. The biggest eigenvalue/eigenvector points in the direction of largest variance, and so on for the second, third, fourth, etc. eigenthings.
      In the case where the center of mass is not zero, SVD gives you the same data as PCA; it just takes into account that the center of mass has moved.
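      A minimal sketch of that recipe (assuming NumPy; the data are synthetic and the names are illustrative): put the points as rows, subtract the center of mass, take the SVD, and the squared singular values divided by N-1 match the eigenvalues of the sample covariance matrix, i.e. the variances along the principal directions.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 3))    # rows = data points in R^3

      Xc = X - X.mean(axis=0)                    # subtract the center of mass
      U, s, Vt = np.linalg.svd(Xc, full_matrices=False)          # rows of Vt = principal directions

      C = Xc.T @ Xc / (len(X) - 1)               # sample covariance matrix
      eigvals = np.linalg.eigvalsh(C)[::-1]      # its eigenvalues, largest first

      print(s**2 / (len(X) - 1))                 # variances along the principal directions
      print(eigvals)                             # the same numbers, from the covariance matrix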

    • @vasilijerakcevic861
      @vasilijerakcevic861 4 years ago

      It's this lecture

    • @justpaulo
      @justpaulo 4 years ago

      ruclips.net/video/ey2PE5xi9-A/видео.html

  • @Nestorghh
    @Nestorghh 4 years ago +1

    He’s the best.

  • @mathsmaths3127
    @mathsmaths3127 4 years ago +1

    Sir, you are a wonderful and beautiful mathematician. Thank you so much for teaching us and for being with us.

  • @k.christopher
    @k.christopher 5 years ago +1

    Thank you Prof Gilbert.

  • @KapilGuptathelearner
    @KapilGuptathelearner 5 years ago +3

    At around 37:15, when Prof. Strang is talking about the difference between least squares and PCA: I think the minimization will lead to the same solution, since the perpendicular length is proportional to the vertical one, hypotenuse * sin(theta), where theta is the angle between the vertical line and the line of least squares, which must be fixed for a particular plane (line). I could not understand where I am going wrong.

    • @AmanKumar-xl4fd
      @AmanKumar-xl4fd 5 years ago

      Where are you from?

    • @KapilGuptathelearner
      @KapilGuptathelearner 5 years ago

      @@AmanKumar-xl4fd ??

    • @AmanKumar-xl4fd
      @AmanKumar-xl4fd 5 years ago

      @@KapilGuptathelearner just asking

    • @shivammalviya1718
      @shivammalviya1718 5 years ago +2

      Very nice doubt, bro. The catch is in the theta. Suppose you first use least squares and find the line whose error is minimal, equal to E. Then, as you said, the error in the PCA case should be sin(theta) * E. But theta changes with the line, so it directly affects the PCA error, since it is in the product. So minimizing just E will not work; you have to minimize the whole product, and sin(theta) is part of it. I hope you get what I want to say.

    • @AmanKumar-xl4fd
      @AmanKumar-xl4fd 5 years ago

      @UCjU5LGbSp1UyWxb8w7wPE6Q do you know about coding?

  • @vivekrai1974
    @vivekrai1974 1 year ago

    28:50 Isn't it wrong to say that Square(Qv) = Transpose(Qv) * (Qv)? I think Square(Qv) = (Qv) * (Qv).

  • @haideralishuvo4781
    @haideralishuvo4781 3 years ago +1

    Can anyone explain the relation between the Eckart-Young theorem and PCA?

  • @Enerdzizer
    @Enerdzizer 5 years ago +4

    Where is the continuation? It must have been on Friday, as the professor announced. But lecture 8 is not that lecture, right?

  • @zkhandwala
    @zkhandwala 5 years ago +4

    Good lecture, but I feel it only just starts getting into the heart of PCA before it ends. I don't see a continuation of the discussion in the subsequent lectures, so I'm wondering if I'm missing something.

    • @rahuldeora5815
      @rahuldeora5815 5 years ago

      Yes, you are right. Do you know any other good source of this quality to learn PCA? I'm having a hard time finding one.

    • @ElektrikAkar
      @ElektrikAkar 4 years ago

      @@rahuldeora5815 This one seems pretty nice for more information on PCA: ruclips.net/video/L-pQtGm3VS8/видео.html

    • @joaopedrosa2246
      @joaopedrosa2246 4 years ago

      @@ElektrikAkar Thanks for that, I've wasted a huge amount of time looking for a good source.

    • @DataWiseDiscoveries
      @DataWiseDiscoveries 3 years ago

      @@ElektrikAkar Nice lecture, loved it.

  • @micahdelaurentis6551
    @micahdelaurentis6551 3 years ago

    I just have one question not addressed in this lecture...what actual color is the blackboard?

  • @lavalley9487
    @lavalley9487 1 year ago

    Thanks, Prof... Very helpful!

  • @yidingyu2739
    @yidingyu2739 4 years ago +1

    It seems that Prof. Gilbert Strang is a fan of Gauss.

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 1 month ago

    0:37 What is PCA?

  • @johnnyhackett199
    @johnnyhackett199 2 years ago

    @2:48 Why'd he have the chalk in his pocket?

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 1 month ago

    5:56 Vector and matrix norms

  • @dingleberriesify
    @dingleberriesify 4 years ago

    I always thought the N-1 was related to the fact that the variance of a single object is undefined (or at least nonsensical), so the N-1 ensures this is reflected in the maths? As well as something related to the unbiasedness of the estimator etc.

    • @obarquero
      @obarquero 3 years ago

      This is called Bessel’s correction. I prefer to think that dividing by N-1 yields an unbiased estimator, so that on average the sample cov matrix is the same as the cov matrix from the pdf.

  • @user-kn4oj4ze7b
    @user-kn4oj4ze7b 2 years ago

    How can I find the proof of the Eckart-Young theorem mentioned in the video? Where is the link?

    • @mitocw
      @mitocw  2 years ago

      The course materials are available at: ocw.mit.edu/18-065S18. Best wishes on your studies!

  • @xc2530
    @xc2530 1 year ago

    44:00 covariance matrix

  • @naterojas9272
    @naterojas9272 4 years ago +1

    Gauss or Euler?

    • @sb.sb.sb.
      @sb.sb.sb. 3 years ago +2

      Ancient Indian mathematicians knew about the Pythagorean theorem and Euclidean distance.

  • @xc2530
    @xc2530 1 year ago

    31:00 PCA

  • @pandasstory
    @pandasstory 4 years ago

    Great lecture! Thank you so much, Prof. Gilbert Strang. But can anyone tell me where to find the follow-up part on PCA?

  • @Andrew6James
    @Andrew6James 4 years ago

    Does anyone know where the notes are?

    • @mitocw
      @mitocw  4 years ago

      Most of the material is in the textbook. There are some sample chapters available from the textbook, see the Syllabus for more information at: ocw.mit.edu/18-065S18.

  • @TheRossspija
    @TheRossspija 4 years ago

    16:55 There was a joke that we didn't get to hear :(

  • @manoranjansahu7161
    @manoranjansahu7161 2 years ago

    Good, but I wish a proof had been given.

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 1 month ago

    7:09

  • @xc2530
    @xc2530 1 year ago

    4:26 norm

    • @xc2530
      @xc2530 1 year ago

      Minimise: use the L1 norm

    • @xc2530
      @xc2530 1 year ago

      18:00 nuclear norm: completing a matrix with missing data

  • @mr.soloden1981
    @mr.soloden1981 5 years ago +1

    Didn't understand a damn thing, but left a like anyway :)

  • @kevinchen1820
    @kevinchen1820 2 years ago

    20220526 signing in