Singular Value Decomposition : Data Science Basics

  • Published: 2 Nov 2024

Comments • 72

  • @VR-fh4im
    @VR-fh4im 4 years ago +18

    I looked all over for SVD for 3 hours, and your video in 10 minutes explained it, so nicely. Thanks.

    • @ritvikmath
      @ritvikmath  4 years ago +1

      no problem! happy to help

  • @JD-jl4yy
    @JD-jl4yy 3 years ago +13

    You have a gift for explaining things clearly! This is so much better than the 5 SVD videos I watched prior to this haha

  • @SupremeChickenx
    @SupremeChickenx 4 years ago +8

    bros who make youtube math tutorials are the real MVPs

  • @saidisha6199
    @saidisha6199 3 years ago +10

    Just amazing explanation. I had a blurry understanding of svd after taking a class and your video made the concept absolutely clear for me. Thanks a lot.

  • @SLopez981
    @SLopez981 10 months ago

    Been watching your videos for months now. I very much enjoy how accessible your videos are for someone outside of data science. I generally like watching math videos from non-math educators because they strike a great balance in their explanations.
    One thing I really enjoy about your videos is that at the end you bring it back to your field and explain why this is useful in your world.
    The reduction in stored entries, whether for storage or for further calculations, makes the real-world application very tangible to see.

  • @subandhu39
    @subandhu39 1 year ago +1

    I think this video is amazing. I have been wanting to watch this channel's videos for the past 2 years but never could, because I lacked the basic knowledge needed to benefit from the explanations here. When this concept was taught very poorly in my class, I immediately knew I could finally come here. The way this ties in with PCA (if I am correct), and the ease with which the mathematical kinks were explained, was phenomenal. Glad to finally benefit from this channel. Thanks a ton.

  • @EdouardCarvalho82
    @EdouardCarvalho82 4 years ago +10

    Absolutely love your videos! Just to clear up possible confusion for learners at 4:05: V^T V = I holds because of orthonormality, not merely independence, which is only a consequence. Great job!
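
    A quick numerical check of the point above (a minimal sketch assuming NumPy; the matrix is made up for illustration and is not the one from the video):

      import numpy as np

      # Any small matrix works for this check; this one is made up for illustration.
      M = np.array([[3.0, 1.0],
                    [1.0, 3.0],
                    [0.0, 2.0]])

      # np.linalg.svd returns U, the singular values, and V transpose.
      U, s, Vt = np.linalg.svd(M, full_matrices=False)

      # The columns of V (the rows of Vt) are orthonormal, so V^T V = I.
      print(np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0])))  # True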

  • @scorpio19771111
    @scorpio19771111 2 years ago

    Thank you. Short video that packs all the right punches.

  • @haiderali-wr4mu
    @haiderali-wr4mu 1 year ago

    You explained the most important things about SVD. Thank you

  • @amritpalsingh6440
    @amritpalsingh6440 3 years ago +1

    I was struggling to understand the concept in the class and this video made it very clear for me. Thank you so much. Keep them coming :)

  • @chunchen3450
    @chunchen3450 4 years ago +3

    I really like your videos! 👍 The methods are very clearly and concisely explained, and explaining the applications of the methods at the end also helps a lot to remember what they really do. The length of the video is also perfect! Thanks, and I hope to see more videos from you

  • @bittukumar-rv6rx
    @bittukumar-rv6rx 20 days ago

    Thanks man for posting! Loved the explanation!

  • @rugahun
    @rugahun 2 years ago

    I am glad for the note about 4:06; I freaked out when that was said. GREAT VIDEO!!

  • @TrangTran-dw2vw
    @TrangTran-dw2vw 3 years ago +4

    Once you learn PCA and revisit this video, everything really makes sense!!

  • @AkshayRakate
    @AkshayRakate 2 years ago

    Thanks so much for clearing up the concepts. Now I can connect the dots on why we use SVD in recommender systems. 👍

  • @hamade7997
    @hamade7997 3 years ago +2

    I like this explanation too, thank you. Wish I had discovered you earlier during my data analysis course, but oh well :P

  • @nujelnigsns5376
    @nujelnigsns5376 3 years ago

    I was really struggling with linear algebra; your videos are really a saviour

  • @hameddadgour
    @hameddadgour 2 years ago

    Wow, now I have a real appreciation for SVD :)

  • @msrizal2159
    @msrizal2159 4 years ago +1

    Thanks Ritvik, I'm a PhD candidate from Malaysia. Your videos are helping me a lot.

  • @Mertyy3
    @Mertyy3 3 years ago +1

    Unbelievable that it has only 9k views... The video is great!

  • @matthewchunk3689
    @matthewchunk3689 4 years ago +1

    Very relevant subject right now. Thanks

  • @emptyjns
    @emptyjns 2 years ago +3

    The columns of M, before SVD, could mean features. Do the columns of U and V (the left and right singular vectors) carry any physical meaning? The video keeps two singular values. How many do people usually keep?

  • @Andynath100
    @Andynath100 4 years ago +1

    Thanks for the regular quality content !!

  • @IrfanPeer
    @IrfanPeer 2 years ago

    marker flip on point

  • @srs.shashank
    @srs.shashank 9 months ago

    Thanks, the applications I can think of right away are PCA and matrix factorization. What could be other possible applications?

  • @iWillieDR
    @iWillieDR 3 years ago

    Great vid man, keep up the good work!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 4 years ago

    Your explanation is awesome

  • @lingtan6742
    @lingtan6742 2 years ago

    Super clear! Thank you so much!

  • @Fat_Cat_Fly
    @Fat_Cat_Fly 4 years ago +1

    Super well explained.

  • @sarvagyagupta1744
    @sarvagyagupta1744 1 year ago

    This is really, really informative. Just one question: what are the sigmas? Are they eigenvalues from the SVD or something else? How did you get 2 and 3 in your example?

  • @Josh-di2ig
    @Josh-di2ig 2 years ago

    Thanks for a great video. Do you also have a video on how to find those lambda values?

  • @Actanonverba01
    @Actanonverba01 4 years ago +1

    I usually hear SVD used synonymously with PCA. The way you described it, SVD is like a compression of the data, but how is that different from PCA?

  • @saraaltamirano
    @saraaltamirano 4 years ago +16

    I am a big fan of your videos, but I think I liked the old format better, where you do the math step-by-step and write it on the whiteboard :/

    • @ritvikmath
      @ritvikmath  4 years ago +10

      Thanks for the comment! I was debating whether to go back to the marker and paper style where I would write more stuff in real time. This suggestion is very helpful to me.

    • @saraaltamirano
      @saraaltamirano 4 years ago

      @@ritvikmath thanks for the reply! I am especially grateful for your PCA math video since I am currently doing research with a functional data analysis algorithm that uses multivariate functional PCA and I've looked EVERYWHERE for an easy explanation. Your PCA video (and the required videos) is hands down the best explanation out there. I am forever grateful :-)

    • @teegnas
      @teegnas 3 years ago

      @@saraaltamirano I initially had the same point of view, but after consuming the whiteboard-type content for a while I've gotten used to it, and recently he has started moving away from the board at the end of the video so that we can pause and ponder it.

    • @kristofmeszaros5505
      @kristofmeszaros5505 3 years ago +2

      I greatly prefer this current style: there's no need to spend time writing, so he can concentrate more on explaining.

  • @kickfloeb
    @kickfloeb 3 years ago

    I feel like such a baby because I laughed every time you said u p and sigma p. Anyway, great video as always :).

  • @juanguang5633
    @juanguang5633 1 year ago +1

    Can we say that using SVD we are extracting significant features?

  • @sanjeetwalia5077
    @sanjeetwalia5077 3 years ago +1

    Hi Ritvik, thanks for the very explanatory video. Really very helpful for understanding. However, when you say that you achieved a 75% computation reduction in this case, was it really because we assumed sigma(3) onwards to be approximately equal to zero? Does this assumption sway away from reality, or is this how it always happens? Eager to hear your thoughts. Happy to learn from this video. (A small numerical sketch of this point follows after the reply below.)

    • @sanjeetwalia5077
      @sanjeetwalia5077 3 years ago

      Also, it would be great if you could do a Moore-Penrose pseudoinverse video as well. TIA
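
    A small numerical sketch of the truncation question above (assuming NumPy; the matrix is made up for illustration and is not the one from the video). The worst-case error of a rank-2 truncation equals the largest singular value that was thrown away, so the assumption only strays from reality when the discarded sigmas are not actually small:

      import numpy as np

      rng = np.random.default_rng(0)
      # A made-up 6x4 matrix that is nearly rank 2: two strong directions plus small noise.
      M = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4)) + 0.01 * rng.normal(size=(6, 4))

      U, s, Vt = np.linalg.svd(M, full_matrices=False)
      print(np.round(s, 3))  # the first two singular values dominate; the rest are tiny

      # Keep only the first two singular values/vectors (the rank-2 truncation).
      r = 2
      M_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

      # The spectral-norm error of the truncation equals the first discarded singular value.
      print(np.linalg.norm(M - M_r, 2), s[r])  # these two numbers agree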

  • @kurtji8170
    @kurtji8170 1 year ago

    Hi Ritvik, what if the data is well constructed and there are 10 significant non-zero singular values? What can we do with such data?

  • @qiushiyann
    @qiushiyann 4 years ago +1

    This is really helpful

  • @kancherlapruthvi
    @kancherlapruthvi 3 years ago

    The explanations are good, but for linear algebra the best videos are from Prof. Gilbert Strang

  • @mohamedhossam1717
    @mohamedhossam1717 3 years ago

    I don't understand how to find the independent vectors of a matrix in order to get its rank, or what is meant by an independent vector

  • @abhyuditgupta7652
    @abhyuditgupta7652 2 years ago

    Thank you so much!

    • @abhyuditgupta7652
      @abhyuditgupta7652 2 years ago

      I have always thought about such applications of matrices but never worked on finding out how. The beauty of maths.

  • @robertleo3561
    @robertleo3561 3 years ago +1

    Wait, what? At 4:06 you said that matrices with full rank always have their transposes as inverses?

    • @ritvikmath
      @ritvikmath  3 years ago

      Almost; it's a small but important distinction.
      A full-rank matrix has columns that are linearly independent of each other.
      An orthogonal matrix (like the ones in this video) is full rank, but it also satisfies the property that its rows and columns are orthonormal: any pair of different rows/columns has dot product 0, and each row/column has unit length.
      So for an orthogonal matrix, like you said, its transpose is its inverse. But that's not generally true for any full-rank matrix. Looking back, I did say the wrong thing, and I'll go correct it in the comments. Thanks for pointing it out!
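
      A quick numerical illustration of that distinction (a minimal sketch assuming NumPy; both matrices are made up for illustration):

        import numpy as np

        # A generic full-rank matrix: columns are linearly independent but not orthonormal.
        A = np.array([[2.0, 1.0],
                      [0.0, 1.0]])
        print(np.linalg.matrix_rank(A))          # 2, so A is full rank
        print(np.allclose(A.T @ A, np.eye(2)))   # False: A's transpose is not its inverse

        # An orthogonal matrix (here a rotation): columns are orthonormal.
        theta = 0.3
        Q = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q's transpose is its inverse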

  • @mohammadmuneer6463
    @mohammadmuneer6463 3 years ago

    Awesome !!

  • @yingguo3683
    @yingguo3683 3 months ago

    All the 'u' vectors are linearly independent of each other, which supposedly means that if we multiply the matrix U by its transpose, we get the identity matrix. I don't get why linear independence leads to orthonormality?

    • @yingguo3683
      @yingguo3683 3 months ago

      Forget it, I just saw the *note*

  • @yulinliu850
    @yulinliu850 4 years ago

    Awesome 👍

  • @Pukimaxim
    @Pukimaxim 4 years ago

    Hi Ritvik, I might have missed it in your video, but how do you get sigma?

  • @_kage_
    @_kage_ 1 month ago

    How do we get U and V?

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    But doesn't an orthonormal matrix have to be square?

  • @erlint
    @erlint 11 months ago

    Isn't this the thin form of the SVD? And aren't you using the numerical rank in the practical example? Because A = U [Σ; 0] V^T, but since Σ is n x n and the zero block is (m-n) x n, then U = [U_1 U_2] where U_1 is m x n and U_2 is m x (m-n), and thereby A = U_1 Σ V^T. Also, U_2 will be in the null space of A (?). And you skip to the rank-truncated matrix instead of explaining how u_(r+1), ..., u_m form a basis for N(A^T) and v_(r+1), ..., v_n form a basis for N(A).
    Also, I'm still unsure how the eigenvectors of A^T A and A A^T tell you the most important information in A. Are we projecting the data onto the eigenvectors like in PCA?
    The eigendecomposition and SVD videos are some of the most compact and understandable videos I have found on those topics; they made the link between changing basis to the eigenbasis, then applying the linear transformation, then going back to the original basis much clearer to me. Thanks.
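
    A short sketch of the full-versus-thin shapes mentioned above (assuming NumPy; the matrix is an arbitrary rank-1 example, not the one from the video):

      import numpy as np

      # An illustrative tall matrix (m = 3 > n = 2) of rank 1, just to show the shapes.
      A = np.array([[1.0, 2.0],
                    [2.0, 4.0],
                    [3.0, 6.0]])

      # Full SVD: U is m x m, Vt is n x n.
      U_full, s, Vt = np.linalg.svd(A, full_matrices=True)
      print(U_full.shape, s.shape, Vt.shape)       # (3, 3) (2,) (2, 2)

      # Thin (reduced) SVD: U is only m x n.
      U_thin, s_thin, Vt_thin = np.linalg.svd(A, full_matrices=False)
      print(U_thin.shape)                          # (3, 2)

      # The left singular vectors beyond the rank span the null space of A^T.
      r = np.linalg.matrix_rank(A)                 # 1
      print(np.allclose(A.T @ U_full[:, r:], 0))   # True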

  • @singnsoul6443
    @singnsoul6443 1 year ago

    tysm

  • @zigzagarmchair2367
    @zigzagarmchair2367 8 months ago

    OMG again!!!

  • @SoumyajitGanguly_ALKZ
    @SoumyajitGanguly_ALKZ 4 years ago +1

    This is a ~75% reduction, from 1000 to 222 (worked through in the sketch after this thread).

    • @ritvikmath
      @ritvikmath  4 years ago

      True! Thanks for pointing that out. I think I meant that now you only need around 25% of the storage.
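
      One way to arrive at those numbers (a minimal sketch; the 100 x 10 matrix shape and the rank-2 truncation are assumptions inferred from the 1000 and 222 figures, not stated here):

        # Assumed shapes: M is 100 x 10, truncated to rank r = 2.
        m, n, r = 100, 10, 2

        full_storage      = m * n              # 1000 entries to store M itself
        truncated_storage = m * r + r + r * n  # U_r (100x2) + the r sigmas + V_r^T (2x10)

        print(full_storage, truncated_storage)   # 1000 222
        print(truncated_storage / full_storage)  # 0.222 -> keep about 22% of the entries,
                                                 #          i.e. roughly a 75-80% reduction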

  • @nine9nineteen193
    @nine9nineteen193 2 months ago

    Bro, open your head, put the information in, and close it

  • @pixieofhugs
    @pixieofhugs 1 year ago

    Thank you! You have been a life saver 🛟

  • @xinyuan6649
    @xinyuan6649 2 years ago

    The best! This is a 101 on how to have fun with math in DS 🫰