Computing the Singular Value Decomposition | MIT 18.06SC Linear Algebra, Fall 2011

  • Published: 1 Feb 2025

Comments • 266

  • @SnackPacks10
    @SnackPacks10 8 years ago +568

    5 min of this video taught me more than two lectures, two chapters of my book, one office hour with the TA, and 4 hours trying to figure it out on my own.

    • @jugsma6676
      @jugsma6676 8 years ago +3

      Same here :))), haha

    • @TheR971
      @TheR971 7 years ago +1

      nice thought!

    • @sunghunet
      @sunghunet 7 years ago +1

      Yes. It saved me time in understanding the difficult steps of SVD. Very helpful to me.

    • @oguztopcu9048
      @oguztopcu9048 6 years ago +5

      dude you can accomplish anything by watching 2 youtube videos.

    • @deepgee9214
      @deepgee9214 6 years ago +1

      Hahaha 😂🤣

  • @jacobfields8111
    @jacobfields8111 8 years ago +125

    You explained singular value decomposition better in 11 minutes than my linear algebra professor did in 50. Thanks.

    • @carl6167
      @carl6167 6 years ago +6

      I am afraid, mate, that then you don't understand anything but how to compute it... See, SVD is much more than that: it is a way of decomposing a homomorphism into three separate maps. One rotates in the domain space, one scales vectors accordingly when bringing them from one dimension to another, and then the vector gets rotated once again in the output space (unitary matrices can be decomposed into rotation matrices).

    • @sathasivamk1708
      @sathasivamk1708 5 years ago +5

      @@carl6167 I don't think the first one necessarily rotates; orthogonal doesn't mean it's a rotation, it can be a reflection/inversion too.

  • @ArdianUmam
    @ArdianUmam 8 years ago +32

    I'm always amazed by MIT OCW videos. The way they teach is just ideal: writing on the board that is clear and big enough, a systematic explanation, and comfortable to follow.

  • @Georgesbarsukov
    @Georgesbarsukov 5 years ago +52

    I vocally say out loud 'Minus lambda' at 3:44, he then turns around and says 'minus lambda, thank you'. You're welcome.

  • @김서현-n2r5n
    @김서현-n2r5n 8 years ago +15

    I sincerely thank MIT for giving me this wonderful lecture for free.
    It's really helpful for learning SVD.

  • @D3tyHuff
    @D3tyHuff 6 years ago +102

    7:24 No need to look for me man, I'm right here.

  • @muhammaddanialshafqat4576
    @muhammaddanialshafqat4576 7 years ago +1

    MIT OCW is a big reason I pass courses like Linear Algebra at my university...
    THANKS MIT_OCW

  • @PaMS1995
    @PaMS1995 8 years ago +8

    This is the best svd tutorial I could wish for, thanks for making it easy

  • @anuraagnarang8982
    @anuraagnarang8982 8 years ago +93

    This is a bit confusing. If anyone wants to know how to find U, have a look at my workings. (S=sigma)
    Firstly, when calculating the eigenvector for the eigenvalue 20, swap the signs around so that the vector is (3/root10, -1/root10).
    Note that this is also a correct eigenvector for the given matrix and its eigenvalue 20. You can find out why by reading up on eigendecomposition.
    Secondly, swap the order of the columns around in S, so that the values go from high to low when looking from left to right. This is the conventional format of the sigma matrix.
    Now when finding U, I'm not sure why he's done the unit length thing, and I can't even see how he's got his final answer from it.
    Anyway, we know that CV = US, which means CVS^-1 = USS^-1.
    Since S is diagonal, SS^-1 = I, the identity matrix i.e. it can be ignored.
    So now we have CVS^-1 = U.
    To find S^-1: Since we have a diagonal matrix, just invert the values along the diagonal i.e. any value on the diagonal, x, becomes 1/x.
    Now multiply your CV by your S^-1 and you should get the same result for U as in the video, but with the columns swapped around i.e. in the correct format.
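The recipe in this comment can be checked numerically. A minimal numpy sketch, assuming the video's matrix is C = [[5, 5], [-1, 7]] (an assumption reconstructed from the eigenvalues 20 and 80 discussed throughout the thread):

```python
import numpy as np

# Assumed matrix from the video: its C^T C has eigenvalues 20 and 80.
C = np.array([[5.0, 5.0], [-1.0, 7.0]])

# V and the squared singular values from the eigen-decomposition of C^T C.
evals, V = np.linalg.eigh(C.T @ C)      # eigh returns eigenvalues in ascending order
order = np.argsort(evals)[::-1]         # conventional order: largest first
evals, V = evals[order], V[:, order]

S = np.diag(np.sqrt(evals))             # Sigma = diag(sqrt(80), sqrt(20))
U = C @ V @ np.linalg.inv(S)            # U = C V Sigma^{-1}, as in the comment

print(np.allclose(U.T @ U, np.eye(2)))  # True: U comes out orthogonal
print(np.allclose(U @ S @ V.T, C))      # True: U Sigma V^T reconstructs C
```

Since S is diagonal, `np.linalg.inv(S)` just reciprocates the diagonal entries, exactly as the comment says.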

    • @cdahdude51
      @cdahdude51 6 years ago +2

      Excellent, thanks. Was wondering about how he calculated U

    • @Yustynn
      @Yustynn 5 years ago +5

      Regarding getting U after he retrieves US, there are 2 ways.
      Both rely on the following: notice that S, being a diagonal matrix, stretches U; each column gets scaled by the corresponding diagonal entry of S.
      Method 1 (need to know the value of S): divide each column j by the corresponding S_jj value.
      Method 2 (no need to know the value of S): observe that
      i) S is by definition diagonal => all that happened to U was a scaling of each column vector;
      ii) U is by definition orthogonal => each column vector has magnitude 1.
      So just normalize all the columns.
      He also made a mistake: he put the minus sign on U_21 instead of U_11.
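Method 2 above is a one-liner in numpy. A sketch, where both the matrix C and the eigenvector matrix V are assumptions reconstructed from this thread:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])                  # assumed matrix from the video
V = np.array([[1.0, -3.0], [3.0, 1.0]]) / np.sqrt(10)    # unit eigenvectors for 80 and 20

US = C @ V                             # CV equals U Sigma
U = US / np.linalg.norm(US, axis=0)    # Method 2: just normalize each column
print(U)                               # columns (1,1)/sqrt(2) and (-1,1)/sqrt(2)
```

Normalizing works precisely because a diagonal S can only rescale U's columns, and U's columns must have length 1.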

    • @forthrightgambitia1032
      @forthrightgambitia1032 4 years ago

      When he gets the eigenvectors, did he just have them prepared, or is there a mental trick to get the vectors without Gaussian elimination?

    • @KnowledgeGuide859
      @KnowledgeGuide859 3 years ago +1

      Excellent. I spent an hour trying to work out how he calculated U. Thanks again.

  • @rockfordlines3547
    @rockfordlines3547 4 years ago +30

    Thanks for this, Mclovin.

  • @teddydrewski
    @teddydrewski 6 years ago +3

    Explained a complicated problem in a simple way. Amazing work.

  • @Rajbhandari88
    @Rajbhandari88 12 years ago +5

    Note to viewers: the sigma is wrong. The elements s in sigma should satisfy s1 > s2.

  • @lilo7387
    @lilo7387 1 month ago

    Finally, I understand Singular Value Decomposition now!! Thank you a lot!!

  • @aggressivetourist1818
    @aggressivetourist1818 4 years ago

    One of the best ways to teach people is to show them an example.
    I couldn't find an example like this in my LA class; the teacher just uses MATLAB to calculate U, Sigma, V.
    Thanks to this man. He must have earned a PhD already, I hope.

  • @rafaellima8146
    @rafaellima8146 10 years ago +73

    Just some advice: next time, try to compute without skipping steps. When you compute step by step calmly, your chances of success increase a lot. Anyway, thank you for the lecture. It was very helpful.

  • @jasont7604
    @jasont7604 5 years ago +1

    Thank you, most examples I found for this were simple examples, this helped me figure out the more complex problems.

  • @nathanpatera9836
    @nathanpatera9836 4 days ago

    This is the video I come back to when I need to review how it's done.

  • @vincenzosolinas6495
    @vincenzosolinas6495 3 years ago +9

    There is a problem: the \Sigma matrix must have positive, decreasing values, so it can't be diag([2*sqrt(5) 4*sqrt(5)]); it must be \Sigma = diag([4*sqrt(5) 2*sqrt(5)]). Indeed, if you compute the matrix product of your U, Sigma and V', you don't get C; you obtain [ -1 7; 5 5] instead.
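The reconstruction failure described here is easy to reproduce: swapping the diagonal of Sigma without also swapping the matching columns of U and V breaks C = U Sigma V^T. A sketch, again assuming the video's matrix:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])   # assumed matrix from the video
U, s, Vt = np.linalg.svd(C)               # s comes back ordered: [sqrt(80), sqrt(20)]

print(np.allclose(U @ np.diag(s) @ Vt, C))        # True: consistent ordering
print(np.allclose(U @ np.diag(s[::-1]) @ Vt, C))  # False: Sigma reordered on its own
```

Reordering is fine as long as the same permutation is applied to the singular values, the columns of U, and the columns of V together.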

    • @adeoluoluwadara3240
      @adeoluoluwadara3240 27 days ago

      Yes, you are right, because the values of the sigma matrix should be decreasing.

  • @parthgadoya5690
    @parthgadoya5690 7 years ago +18

    Last-minute mistake: he put the wrong signs in the u11 and u21 positions.
    Correction: u11 = -1/root(2), u21 = 1/root(2)
    Correct me if I am going wrong somewhere.

    • @GarbageMan144
      @GarbageMan144 3 years ago +3

      No, you're right. These mistakes can be so annoying sometimes.

    • @dpccn6969
      @dpccn6969 3 years ago +1

      I think the same as you.

  • @yaweli2968
    @yaweli2968 5 years ago +4

    There is a much simpler way of doing this, with fewer errors, on one of the MIT pages on SVD decomposition. U comes from the unit eigenvectors of C Ctranspose, and V transpose from the unit eigenvectors of Ctranspose C. The eigenvalues are the same for both matrices. Just find your diagonal eigenvalue matrix and you are done.
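This route can be sketched in numpy as well; the sign-matching step at the end is the catch that other comments in this thread point out (eigenvectors are only determined up to sign). The matrix C is assumed to be the video's [[5, 5], [-1, 7]]:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])   # assumed matrix from the video

wl, U = np.linalg.eigh(C @ C.T)           # left singular vectors, up to sign
wr, V = np.linalg.eigh(C.T @ C)           # right singular vectors, up to sign
U = U[:, np.argsort(wl)[::-1]]            # order both by decreasing eigenvalue
order = np.argsort(wr)[::-1]
V, s = V[:, order], np.sqrt(wr[order])    # shared eigenvalues 80, 20 -> sigma

for i in range(2):                        # flip u_i if it disagrees with C v_i
    if U[:, i] @ (C @ V[:, i]) < 0:
        U[:, i] = -U[:, i]

print(np.allclose(U @ np.diag(s) @ V.T, C))   # True once the signs are matched
```

Without the sign-matching loop, the two independently computed eigenbases can fail to reconstruct C, which is exactly the pitfall discussed further down the thread.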

  • @mehmetkazakli956
    @mehmetkazakli956 6 years ago +18

    Singular values need to be ordered decreasingly. When you write sigma, shouldn't the 1st and 4th values be switched?

  • @antonivanov3830
    @antonivanov3830 4 years ago +8

    Thank you for this video! Very nicely done! Just one correction: U11 and U21 seem like they should be swapped :)

  • @justtrynagetthisms
    @justtrynagetthisms 1 month ago

    thanks lil homie just saved my life

  • @yangong7277
    @yangong7277 7 years ago

    Gotta save my life for the rest of the quarter!!! So lucky to find this tutorial right before the midterm tomorrow LOL

  • @iasm
    @iasm 11 years ago +1

    Directly computing CC* and C*C will definitely work, but an important issue arises: the eigenvectors are not unique! Since negating any eigenvector also gives a valid solution, the U and V obtained from CC* and C*C might not be able to reconstruct the original C. However, if one uses the 2nd equation mentioned in this video, this potential issue is avoided, because U is derived from C, V and sigma!

    • @kuzminkg
      @kuzminkg 2 years ago

      Underrated comment! Indeed, the demonstrated method finds the matrix U that corresponds to the previously found matrix V.

  • @Kitiara306
    @Kitiara306 10 years ago +60

    U is wrong. First entry is negative, not third.

    • @ortollj4591
      @ortollj4591 5 years ago +5

      I highly recommend watching this other video about SVD, very well explained:
      ruclips.net/video/EokL7E6o1AE/видео.html
      Yes, a little sign error in the U matrix:
      sagecell.sagemath.org/?q=yuwvgr

    • @fxelixe
      @fxelixe 5 years ago +3

      Thank you, finally I can go to sleep. He didn't seem to notice it at all.

    • @mdichathuranga1
      @mdichathuranga1 3 years ago

      @@ortollj4591 Thanks for sharing this link; it seems like the clearest explanation I've found so far.

  • @sophiaxmc2167
    @sophiaxmc2167 5 years ago +19

    I think the V and the Sigma need to be ordered with the largest singular vector/value on the left?

    • @kanakambaran
      @kanakambaran 4 years ago

      Yeah, this is what I thought as well. But if you do that and rearrange V also, you don't get the original matrix when you multiply the 3 components. So there must be some reason why he has put it like that.

    • @heathmatthews2643
      @heathmatthews2643 3 years ago

      They don't need to be, but it's super useful for things like Principal Component Analysis, so arrange the sigma from greatest to least :)

  • @gregorybehm4890
    @gregorybehm4890 11 years ago

    I just wanted to let you know that AA* or A*A works in the SVD and the other way results in a far easier computation of the SVD.

  • @totetnhitnhitjt
    @totetnhitnhitjt 6 years ago +3

    Perhaps there's a faster way. Let's write C' for the transpose of C and S for the sigma matrix.
    Use these two equations: C'C = V S'S V'
    CC' = U SS' U'
    You can compute S'S by finding the eigenvalues of C'C, which happen to equal the eigenvalues of CC'. So after finding V, instead of using the video's second equation, you can just find the eigenvectors of CC' by solving:
    CC' - 20I
    CC' - 80I
    You'll find the vectors of U faster and without inverting S.
    Also, factoring out 1/sqrt(10) makes the computation easier.

  • @luislobo6986
    @luislobo6986 10 years ago +21

    To find U it is simple. The professor found U*Sigma from CV, didn't he?
    So, to find U we use U*Sigma = CV.
    U*Sigma = [a11 a12; a21 a22]*[2*sqrt(5) 0; 0 4*sqrt(5)] = [-sqrt(10) 2*sqrt(10); sqrt(10) 2*sqrt(10)].
    So we have a11 = -sqrt(10)/(2*sqrt(5)) = -1/sqrt(2)
    a12 = 2*sqrt(10)/(4*sqrt(5)) = 1/sqrt(2)
    a21 = sqrt(10)/(2*sqrt(5)) = 1/sqrt(2)
    a22 = 2*sqrt(10)/(4*sqrt(5)) = 1/sqrt(2)

    • @SKCSK792
      @SKCSK792 10 years ago +1

      Perfect

    • @plavix221
      @plavix221 9 years ago

      +Luis Lobo Isn't the order of the matrices in the first equation, CtC, wrong?
      I don't get why V is at the front and Vt at the end. I thought matrices were not commutative...
      Hope someone can help me out.

    • @GasMaskedMidget
      @GasMaskedMidget 8 years ago +2

      This still doesn't make sense to me. How is -sqrt(10)/2*sqrt(5) not equal to -sqrt(2)/2???

    • @danielwenhao
      @danielwenhao 7 years ago +1

      I just came across your comment; hope you have already solved your problem, but I'll give you the reason behind it with this link :) www.wikihow.com/Divide-Square-Roots

  • @companymen42
    @companymen42 11 months ago

    Sometimes the old ways are better. All this newfangled technology and we forget how to teach along the way. Blackboard and chalk is the way to go!

  • @valdizos
    @valdizos 12 years ago

    Yes. Notice that it's simple to invert sigma, since it is a diagonal matrix.

  • @mariestolk3794
    @mariestolk3794 4 years ago +2

    This video is really great! Thank you for walking through this example so clearly, I really get it now! :)

    • @eevibessite
      @eevibessite 2 years ago

      ruclips.net/video/0Ahj8SLDgig/видео.html

  • @rivaldijulviar1765
    @rivaldijulviar1765 5 years ago +4

    Great way of teaching!! You've taught me SVD in 11 minutes. A few errors at the end, but that's understandable.

  • @valdizos
    @valdizos 12 years ago

    You can find an SVD of a non-square matrix, but you cannot find the determinant of a non-square matrix... because you take C^TC, which is a square matrix. If you exclude the zero eigenvalues and the eigenvectors you obtain for them, you get the SVD of a non-square matrix.

  • @수수-v2m
    @수수-v2m 2 years ago

    Best explanation ever ! Finally I got it😍😍 Thanks

  • @gregorybehm4890
    @gregorybehm4890 11 years ago

    Yeah, makes sense. I haven't had to calculate this problem, I only understand the two methods, and I see why you need to be careful. We mainly used Schur's triangulation/decomposition, which more or less does the same thing, since you're making U a unitary matrix; it will result in an upper triangular matrix, unless the matrix is diagonalizable, in which case it's sigma here. But this seems to have a little more freedom, since U and V* are not the same as U and U*. Excuse any typos, I'm on my phone and autocorrect sucks.

  • @arceuslegend4605
    @arceuslegend4605 5 years ago

    To find U, it would've been a lot easier to multiply each side on the right by A^T, so this time you get another diagonalization equation, for AA^T, and all you need to do is find the eigenvectors of AA^T. Much faster imo.

  • @byronhuang6244
    @byronhuang6244 4 years ago

    Much appreciated! You saved my life!

  • @sanjaykrish8719
    @sanjaykrish8719 7 years ago

    You are a born teacher. Fitting, since you are a student of Mr. Gilbert Strang.

  • @ethanbartiromo2888
    @ethanbartiromo2888 1 year ago

    I’m going to be completely honest. You are exactly like me! I am a masters recipient, but I still get confused multiplying matrices in my head because it’s just so much to keep track of.

  • @jhabriel
    @jhabriel 8 years ago +1

    The correct values according to MATLAB are:
    u = [0.7071 0.7071
    0.7071 -0.7071]
    s = [8.9443 0
    0 4.4721]
    v = [0.3162 0.9487
    0.9487 -0.3162]
    Regards!
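For anyone without MATLAB, numpy reproduces the same decomposition (the input matrix is an assumption from the thread; numpy may flip the signs of a matched pair of columns in U and V, which is equally valid):

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])      # assumed matrix from the video
U, s, Vt = np.linalg.svd(C)                  # full SVD, singular values descending

print(np.round(s, 4))                        # [8.9443 4.4721] = sqrt(80), sqrt(20)
print(np.allclose(U @ np.diag(s) @ Vt, C))   # True
```

The singular values 8.9443 and 4.4721 match the MATLAB `s` above exactly.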

    • @eiriklid
      @eiriklid 7 years ago

      You have just changed the order of the eigenvectors/-values and set v_2 equal to minus the v_1 from the video. But the values are equally correct!

  • @poiu850
    @poiu850 11 years ago +1

    V1 and V2 are orthonormal vectors, via the Gram-Schmidt process :)

  • @sakshimantri7548
    @sakshimantri7548 3 years ago +1

    Something I've noticed: only a student can teach a student well; most professors complicate stuff for no reason.

  • @jahbini
    @jahbini 1 year ago

    Wonderful hand-waving at 1:57.

  • @speedbird7587
    @speedbird7587 2 years ago

    very short, fast, and easily understandable

  • @samrap5
    @samrap5 12 years ago

    v1 is the first eigenvector (corresponding to the first eigenvalue) after it is normalized, etc.

  • @kstahmer
    @kstahmer 12 years ago

    A Singular Value Decomposition (SVD) computation is easy to screw up. It's wise to take this video reciter's (Ben Harris') advice: please try a few SVD computations on simple 2x2 matrices. Once you have an answer, check that it works. This will help demystify SVD.
    This link may help:
    ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-29-singular-value-decomposition/
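That advice is easy to automate. A small checker (a hypothetical helper, not from the video) that computes an SVD and then verifies it by testing orthogonality and reconstruction:

```python
import numpy as np

def check_svd(A):
    """Compute an SVD of square A and verify it: U, V orthogonal, U Sigma V^T == A."""
    U, s, Vt = np.linalg.svd(A)
    return (np.allclose(U.T @ U, np.eye(U.shape[0]))
            and np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0]))
            and np.allclose(U @ np.diag(s) @ Vt, A))

# Try it on a couple of simple 2x2 matrices, as suggested.
print(check_svd(np.array([[5.0, 5.0], [-1.0, 7.0]])))   # True
print(check_svd(np.array([[2.0, 0.0], [0.0, -3.0]])))   # True
```

The same three checks applied to a hand computation will immediately flag the sign and ordering slips discussed in this thread.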

  • @DuuAlan
    @DuuAlan 8 years ago +8

    You should go over the case where some of the eigenvalues are 0.

    • @andibritz
      @andibritz 8 years ago

      Eigenvalues can never be trivial.

    • @carl6167
      @carl6167 6 years ago

      Well, you just subtract 0 from the starting matrix, so you basically just compute Ker(C^TC).

  • @harshitarawat1411
    @harshitarawat1411 4 years ago +1

    Why did he take -3 at 5:03 and not 3 when forming V1, when there are no negatives, and 3 when forming V2, when there are negatives?

  • @vaga8bnd
    @vaga8bnd 4 years ago +1

    @7:04 Are you sure your diagonal matrix should be like that? I think the diagonal elements have to be monotonically decreasing from upper left to lower right, so it has to be [sqrt(80) 0; 0 sqrt(20)].

    • @peter2485
      @peter2485 2 years ago

      You are right. The order has to decrease from upper left to lower right. So v_1 and v_2 need to be swapped around, and the same with \sigma_1 and \sigma_2.

  • @ramonmassoni9657
    @ramonmassoni9657 7 years ago

    You're the real MVP man

  • @vedprakashdubey5839
    @vedprakashdubey5839 5 years ago +4

    In the final step there is a mistake: in the first column of U, the signs of the elements are reversed.

  • @ajayjadhav5699
    @ajayjadhav5699 5 years ago +3

    I think he skipped the step where the product CV is multiplied by the inverse of the sigma matrix.

  • @samytee3282
    @samytee3282 3 years ago +4

    Very helpful. You delivered it very nicely, stay blessed :) ... But I am confused about the V1 and V2 part: how did you get values like -3 and 1?

    • @Sammyd123samuel
      @Sammyd123samuel 1 year ago

      It's part of finding the eigenvectors: set (CtC - 20I)v = 0 and v will be (-3, 1).

  • @DivinityinLove
    @DivinityinLove 1 year ago +4

    At 5:08 I don't understand how you get -3 and 1? Then again at 6:04 how do you get 1 and 3?

  • @AjaySharma-pg9cp
    @AjaySharma-pg9cp 5 years ago

    Nice, both the video and the way you raise your eyebrow in the last second.

  • @cosminanton5419
    @cosminanton5419 3 years ago +1

    I really thought he was going to leave for a whole minute.

  • @MrCigarro50
    @MrCigarro50 2 years ago

    Great example, thank you very much.

  • @rocioalvarado8116
    @rocioalvarado8116 11 months ago

    Thank you! You are a genius!

  • @dw61w
    @dw61w 2 years ago

    It's confusing to call the matrix C, as if it were the covariance matrix. In fact, C is not; C.T @ C is the covariance matrix you would pass to eig().

  • @royxss
    @royxss 7 years ago

    The sigma matrix values should have been switched so that a11 >= a22. That's what the SVD convention says. Correct me if I am wrong.

  • @vishnus2567
    @vishnus2567 8 years ago +5

    Why did he swap the negative sign in the last section when finding U (in column 1)?
    -(1/root(2)) changed to +(1/root(2)) and +(1/root(2)) changed to -(1/root(2)) in column 1?
    I got confused by the last section.
    Please help, thank you.

    • @ausamahassan9559
      @ausamahassan9559 4 years ago +1

      You are right, though this is too late a reply!! u11 should be negative rather than u21; you can check by multiplying U(Sigma)V^T to get C.

  • @arteks2001
    @arteks2001 6 years ago

    Excellent video. Quite clear.

  • @sarowarhossain9029
    @sarowarhossain9029 5 years ago

    saved me a lot of time!

  • @soundman1992
    @soundman1992 8 years ago +3

    Great tutorial, but in the Sigma matrix the values are the wrong way round. The max singular value should be top left. This leads to an incorrect U matrix, I think.

    • @PaMS1995
      @PaMS1995 8 years ago +1

      It's actually arbitrary what order you choose, but yeah, the convention is to go in decreasing value afaik.

    • @Evan490BC
      @Evan490BC 5 years ago

      @@PaMS1995 Exactly.

  • @gregorybehm4890
    @gregorybehm4890 11 years ago

    I'm not sure what you're saying, Alex. There is no problem using AA* and A*A; that is the way to solve it. Are you implying not to use AA* and A*A?
    Can you give an actual example where using A*A and AA* to solve for U and V separately doesn't work? Because as I see it, solving for each by construction gives you the solution, just as going one way, getting just U or just V, and solving for the other.

  • @1996Pinocchio
    @1996Pinocchio 5 years ago

    This was very helpful indeed.

  • @madhuw1642
    @madhuw1642 7 years ago

    I think he made a mistake when writing the final V. It should be written with the eigenvector of the largest eigenvalue as column one, the eigenvector of the next largest eigenvalue as column two, and so on, until the eigenvector of the smallest eigenvalue is the last column of the matrix. Right?

  • @ausamahassan9559
    @ausamahassan9559 4 years ago

    In the U matrix, the minus sign should move from u21 to u11.

  • @WillWolfrick
    @WillWolfrick 13 years ago +1

    Thank you for the lecture!

  • @skilstopaybils4014
    @skilstopaybils4014 9 years ago +2

    This is the closest I've come to understanding the process of SVD to date! So thank you for that, but it would be great if you didn't use such a convenient example for the V matrix. Anybody know of a video that works through the algebra?
    Also, the identity matrix is Sigma*Sigma.T, right?

    • @hminhph
      @hminhph 9 years ago

      +skilstopaybils Sigma*Sigma^T equals a matrix with the eigenvalues of (C^T)*C on the diagonal.
      In this case it's 20 and 80 on the diagonal.

  • @DrAndyShick
    @DrAndyShick 1 year ago

    Did anyone here confirm the values of U? Because I'm not confident they were correct. Also, he was doing a bunch of things that didn't seem to make sense. Edit: I did it myself. It is correct, but I'm still unclear on the logic, and on why the columns of sigma and V are switched. Also confused by that unit vector thing at the end.

  • @lockejetYY
    @lockejetYY 6 years ago

    Thank you for this other method to get U.

  • @StriderMKz
    @StriderMKz 5 years ago +1

    God, I love MIT.

  • @leilabagheri2024
    @leilabagheri2024 4 years ago

    Really enjoyed it!!

  • @ccss31319
    @ccss31319 10 years ago +16

    At the end, in the answer for U, did he make a mistake with the signs of u11 and u21?

  • @johngillespie8724
    @johngillespie8724 8 years ago +2

    Good job. Very helpful.

  • @GarbageMan144
    @GarbageMan144 3 years ago +1

    Dude, when I compute the U Sigma V^T you have, I don't get the C matrix back.

  • @saadsaeed354
    @saadsaeed354 12 years ago +1

    Shouldn't you multiply U(Sigma) by Sigma^-1 to find U at the end?

  • @TorresVr1
    @TorresVr1 7 years ago

    When finding the eigenvectors for a given eigenvalue, why is it necessary that we find a single element of the nullspace, instead of the whole nullspace?

  • @uthmanzubairoluwatos
    @uthmanzubairoluwatos 7 years ago

    What if C is not a square matrix? You need to show that you have to find C^TC and CC^T, then find their corresponding eigenvectors.

  • @inhonoroflagrange1308
    @inhonoroflagrange1308 4 years ago

    Simple, but very good.

  • @hhafizhan
    @hhafizhan 7 years ago

    Why can we just put the square roots of 20 and 80 in the sigma matrix? I mean, shouldn't it be just 20 and 80?

  • @JacobSmithodc
    @JacobSmithodc 5 years ago

    Great video!

  • @SaddamHussain-ju7so
    @SaddamHussain-ju7so 6 years ago

    Thank you, It was very helpful.

  • @jacksonmckenzie2172
    @jacksonmckenzie2172 7 years ago +1

    I thought that for the sigma matrix the eigenvalues were listed in descending order, so it should be sqrt(80) then sqrt(20). Is this true, or does the order not matter?

  • @TankNSSpank
    @TankNSSpank 2 years ago

    Thank you Sir, it is

  • @Joujau9
    @Joujau9 6 years ago +1

    Can anyone explain to me precisely what is happening at 10:00? How does he get this first matrix? What is he dividing it by? I'm a bit confused.

    • @incognito5438
      @incognito5438 5 years ago

      I am also confused about that moment :(

  • @morganlytle4892
    @morganlytle4892 5 years ago

    Very helpful! Thank You!

  • @SanjuktaDawn
    @SanjuktaDawn 7 years ago +2

    Hi, this video was very helpful. However, I think the eigenvectors for the corresponding eigenvalues have been interchanged somehow. That is, the eigenvector for lambda = 20 is the one that was shown against lambda = 80.

  • @peop.9658
    @peop.9658 2 years ago +1

    How did we write down the values for v1 and v2 at 5:07?

  • @anirudh7137
    @anirudh7137 2 years ago

    Is it necessary for the diagonal of the sigma matrix to contain the square roots of the eigenvalues of A*A? If I try to find U, then I need to go with AA*, which suggests the diagonal elements of the sigma matrix should now contain the square roots of AA*'s eigenvalues.

  • @soumodeepsen3448
    @soumodeepsen3448 3 years ago

    How did the negative sign in U move from the 1st element to the 3rd element? I didn't understand that part.

  • @debbya.8277
    @debbya.8277 8 years ago +9

    Thanks for the video. Kinda confused about the signs in the final answer for U, though. Did we need to change the signs of a11 and a21?

    • @giovanni-cx5fb
      @giovanni-cx5fb 8 years ago +18

      No, he just screwed up.

    • @moshiurchowdhury3673
      @moshiurchowdhury3673 7 years ago

      He just screwed up at the end with the U matrix! The minus sign should be in front of u11 instead of u21.

  • @priyanshuverma9844
    @priyanshuverma9844 5 years ago +1

    At 5:08, can you explain how you calculated the value of V1?

    • @MrCorpsy6
      @MrCorpsy6 5 years ago

      The second row is 3 times the first one, so you can drop it, and you find that y = -(1/3)x (or alternatively, x = -3y); therefore you can take v1 = (-3, 1) (or v1 = (3, -1)).
      Basically, you're solving the linear system C^TC * X = 20X, where X = (x, y).
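The same null-space solve can be done numerically; a sketch, where C is assumed to be the video's [[5, 5], [-1, 7]]:

```python
import numpy as np

C = np.array([[5.0, 5.0], [-1.0, 7.0]])   # assumed matrix from the video
M = C.T @ C - 20 * np.eye(2)              # [[6, 18], [18, 54]]; row 2 = 3 * row 1

# A null vector of the rank-deficient M is the eigenvector for lambda = 20.
# One robust way to get it: the right singular vector for M's zero singular value.
_, s, Vt = np.linalg.svd(M)
v1 = Vt[-1]                               # unit vector, i.e. +-(-3, 1)/sqrt(10)

print(np.allclose(M @ v1, 0))             # True: it really lies in the null space
```

This reproduces the hand computation: the dependent rows force one free variable, and (-3, 1) normalized gives the unit eigenvector from the video.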

  • @SophiaZhang-j2u
    @SophiaZhang-j2u 11 months ago

    Thank you sooooooo much

  • @a3uu
    @a3uu 9 years ago

    Great tutorial

  • @CalculusCraze1
    @CalculusCraze1 6 years ago

    Excellent effort by this young man! But I have a little confusion: if the eigenvectors V1 and V2 do not happen to be orthonormal to each other, then what? We always require our V to be orthogonal.

    • @hoangminhle8485
      @hoangminhle8485 2 years ago

      The eigenvectors V1 and V2 are eigenvectors of the symmetric matrix Ctranspose.C.
      Therefore, V1 and V2 are always orthogonal.

  • @GoJMe
    @GoJMe 9 years ago +3

    How does he get U????? Can someone show me how he did it? This just makes it more complicated...

    • @alierencelik2188
      @alierencelik2188 8 years ago

      +GoJMe He probably did not want to lose time on it, as the matrix U is just obvious.

    • @GasMaskedMidget
      @GasMaskedMidget 8 years ago +9

      +Ali Eren Çelik Judging by the fact that he says he got U by "dividing" CV by Sigma, no, it's obviously not clear. You can't divide matrices, so saying this just creates confusion. I myself can't figure out how he turned CV into unit vectors, and then how multiplying that by Sigma simply changed some negative signs. So if it's so obvious to you, why don't you stop being pompous towards other people and explain?

    • @mubashshirali571
      @mubashshirali571 8 years ago +11

      +GasMaskedMidget U*Sigma = CV. After computing CV, multiply both sides by Sigma inverse and that will give you U. Or, instead of computing Sigma inverse, just divide each column of CV by the square root of the corresponding eigenvalue; it's the same thing. Hope it's clear.

    • @giovanni-cx5fb
      @giovanni-cx5fb 8 years ago +2

      Your comment was SO helpful to me! As soon as I read it, it came back to me that the inverse of a diagonal matrix is just another diagonal matrix whose diagonal entries are the reciprocals of the ones in the initial matrix, and BAM! All clear.