Visualizing Diagonalization

  • Published: 22 Nov 2024

Comments • 48

  • @sebastiantruijens7176
    @sebastiantruijens7176 1 year ago +20

    I am usually a silent observer of RUclips videos, but this is special. Enjoyed every second of it. Thank you for making this.

  • @wyboo2019
    @wyboo2019 9 months ago +5

    there's a great book called Linear and Geometric Algebra by Alan Macdonald. while a good portion of it is about building the foundation of geometric algebra (a very clean way of unifying many parts of linear algebra by defining a new operation on vectors), the best part about the book is that it teaches linear algebra and linear transformations without much matrix usage; there's like one or two chapters covering matrices, as they are important, but most discussion of linear transformations is matrix-free. i really like it because i think matrices are so heavily tied with linear transformations that the two tools can get conflated with one another

    • @qualitymathvisuals
      @qualitymathvisuals  9 months ago

      What a great observation! Macdonald is one of the greats when it comes to abstract algebra. I believe linear transformations are an incredible artifact of the human brain, coming from the more general idea of morphisms; matrices are just a way of describing their details in a well-understood situation. Thank you for the thoughtful comment!

  • @rahman3405
    @rahman3405 1 year ago +7

    Great explanation. The animations and visuals were amazing.
    Answers:
    1) The direction-invariant vectors are called eigenvectors.
    2) A matrix is diagonalizable if it has enough linearly independent eigenvectors to span the space.
    3) The diagonal entries are the eigenvalues. Correct me if I am wrong. Thanks!

    • @qualitymathvisuals
      @qualitymathvisuals  1 year ago

      Thank you for the kind words! It seems you are quite well studied since all of your answers are indeed correct!
      Bonus question: can every onto linear transformation be diagonalized?

    • @AolAlpha
      @AolAlpha 8 months ago +3

      @@qualitymathvisuals No sir, not every onto linear transformation can be diagonalized. Diagonalizability is a property of square matrices or linear transformations that have a full set of linearly independent eigenvectors.

    • @qualitymathvisuals
      @qualitymathvisuals  8 months ago

      Excellent!
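
The three answers in this thread can be checked numerically. Below is a quick sketch using numpy; the matrix A is an illustrative example chosen here, not one from the video:

```python
import numpy as np

# Illustrative 2x2 example (eigenvalues work out to 5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# 1) The columns of P are the direction-invariant vectors: the eigenvectors.
eigenvalues, P = np.linalg.eig(A)

# 2) A is diagonalizable exactly when its eigenvectors span the space,
#    i.e. when P has full rank (is invertible).
assert np.linalg.matrix_rank(P) == A.shape[0]

# 3) The diagonal entries of D are the eigenvalues, and A = P D P^(-1).
D = np.diag(eigenvalues)
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```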

  • @5ty717
    @5ty717 9 months ago +3

    This shows your very deep intuition

  • @raypanzer
    @raypanzer 1 year ago +10

    This is a very high quality math visual! Never knew my homework was interesting 👍

  • @jonkazmaier5099
    @jonkazmaier5099 7 months ago +4

    HOW does this not have more views?? Best visualization of this concept I have ever seen

  • @sanchitaagte8215
    @sanchitaagte8215 11 days ago

    absolutely stunning. deserves to have more views and you more subscribers

  • @DimitrijeĆirić-x1x
    @DimitrijeĆirić-x1x 9 months ago +3

    Thanks! Great video

  • @edwardgongsky8540
    @edwardgongsky8540 6 days ago

    This made diagonalization more confusing. thanks chump!

  • @guiguio2nd1er
    @guiguio2nd1er 1 year ago +1

    Great video, as usual

  • @Fish-vs6jf
    @Fish-vs6jf 21 days ago

    Didn't understand a single word of this, but it was pretty!

  • @spyral2108
    @spyral2108 7 months ago +1

    Wow, this is incredible. I must say you have done a very good job with this video, and you explained the concepts of diagonalization very concisely. Thanks!

  • @tune_m
    @tune_m 8 months ago +2

    Very insightful! Question: when you read the equation at 4:27 you read it from left to right, but aren't the matrices composed from right to left?

    • @tune_m
      @tune_m 8 months ago

      As a consequence I read it as "align the eigenvectors with the standard basis" -> "scale standard basis" -> "move the eigenvectors back"

    • @tune_m
      @tune_m 8 months ago

      But I'm unsure whether my interpretation is correct

    • @qualitymathvisuals
      @qualitymathvisuals  8 months ago

      Excellent question! Yes, given two matrices A and B, their product can be interpreted as the composition of the linear transformation of A with the linear transformation of B. So AB is the transformation that applies B and then A. So yes, the order of highlighting used in the animation is not helpful for this understanding, good catch!

    • @tune_m
      @tune_m 8 months ago

      @@qualitymathvisuals Thanks for the prompt response! I'm currently a TA for an undergrad LinAlg course so this video serves me (and my students) well.
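
The right-to-left reading discussed in this thread is easy to verify with a tiny numerical experiment. The rotation and scaling matrices below are illustrative choices, not matrices from the video:

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
B = np.array([[2.0, 0.0], [0.0, 1.0]])    # scale x by 2
v = np.array([1.0, 0.0])

# (A B) v applies B first, then A: matrix products compose right to left.
assert np.allclose(A @ B @ v, A @ (B @ v))

# B first: (1,0) -> (2,0); then rotate: (2,0) -> (0,2).
assert np.allclose(A @ B @ v, [0.0, 2.0])

# The other order rotates first: (1,0) -> (0,1); then scaling leaves (0,1).
assert np.allclose(B @ A @ v, [0.0, 1.0])
```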

  • @guillermogarcia8912
    @guillermogarcia8912 9 months ago +1

    Great video , greetings from Spain !

  • @buzzbuzz20xx
    @buzzbuzz20xx 5 days ago

    beautiful

  • @cementedlightbulb
    @cementedlightbulb 1 month ago

    I am surprised that this has only 15k views

  • @OpPhilo03
    @OpPhilo03 9 months ago +1

    Great video sir.
    Thank you so much Sir❤

  • @alexmathewelt7923
    @alexmathewelt7923 9 months ago +1

    Fun fact: to calculate the largest power of a matrix whose exponent still fits in a 64-bit unsigned long, only 128 multiplications are needed. Example: you want to calculate 5^14. We split the exponent in binary: 5^(2¹+2²+2³) = 5² × (5²)² × ((5²)²)² = 6,103,515,625.
    We only have to set x := x², and if the current bit is on, we multiply our result by the current power, then we square x again... So to calculate powers up to about 4 billion, you only need at most 64 multiplications: 32 for the squaring and at most 32 for the result. Since computers do not struggle much more with larger numbers, that reduces the amount of calculations by an insane amount.

    • @qualitymathvisuals
      @qualitymathvisuals  9 months ago +1

      What a spectacular insight! The algorithm you are describing is called the “square and multiply algorithm” and is one of the main tools needed for computational cryptography. Hopefully I can talk about it soon in an upcoming video!
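
A minimal sketch of the square-and-multiply idea in plain Python (this is not code from the video; the same loop works for matrices if `result` starts as the identity matrix):

```python
def square_and_multiply(base, exponent):
    """Compute base**exponent with O(log exponent) multiplications."""
    result = 1
    x = base
    while exponent:
        if exponent & 1:      # current binary digit of the exponent is 1
            result *= x       # fold the current power into the result
        x *= x                # square: base, base**2, base**4, base**8, ...
        exponent >>= 1
    return result

# The commenter's example: 5**14 via its binary exponent 1110.
assert square_and_multiply(5, 14) == 6_103_515_625
```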

  • @AbuMajeed3957
    @AbuMajeed3957 8 months ago +1

    Thank you

  • @Sarah-pu8un
    @Sarah-pu8un 5 months ago

    Wow! Extremely helpful

  • @B-Ted
    @B-Ted 5 months ago

    Truly Underrated 🌟

  • @sulbhasupriya4180
    @sulbhasupriya4180 1 year ago +1

    Thank you🫡

  • @anonymoususer4356
    @anonymoususer4356 7 months ago

    Superb video!

  • @theodoreshachtman7360
    @theodoreshachtman7360 7 months ago

    Amazing video 🎉

  • @SailorUsher
    @SailorUsher 6 months ago

    Thank you so much!!!

  • @starcrosswongyl
    @starcrosswongyl 6 months ago +1

    Hi, with regards to PDP^-1: P^-1 converts to the new basis, after which D scales, and then P rotates back to the standard basis. Am I correct?

  • @zedentee5652
    @zedentee5652 6 months ago

    Wth you are so underrated

  • @diamondredchannel8024
    @diamondredchannel8024 4 months ago

    Sir, can you send me an example of a diagonalizable 5×5 matrix?

  • @Nalber3
    @Nalber3 6 months ago

    I was thinking the other day what was used before analytical geometry. And then discovered synthetic geometry. I think there's a need for a balance between analytical and synthetic geometry. What do you think?
    Lovely animation, btw ❤

    • @Nalber3
      @Nalber3 6 months ago

      I see a lot of potential in blender as a game changer to do simulations using interconnected nodes 😊

  • @TheJara123
    @TheJara123 1 year ago

    Happy like a Hippo!! Thanks, man

  • @buirabxs
    @buirabxs 6 months ago

    Guys, as I understand it, we are dividing a linear transformation into different steps that are easier to calculate: our P matrix helps us change basis, D changes sizes, and P inverse finishes the work. Now I have a question:
    Is it correct to say that P realizes some rotation that we need and D just changes sizes?
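
The step-by-step reading in the question above can be checked directly. One caveat: P is a genuine rotation/reflection only in the special case where A is symmetric, so its eigenvectors come out orthonormal; for a general diagonalizable A, P can shear as well. A small numpy sketch with an illustrative symmetric matrix (not one from the video):

```python
import numpy as np

# Symmetric example, so eigh returns orthonormal eigenvectors.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)
P_inv = np.linalg.inv(P)

v = np.array([1.0, 2.0])
# Apply the factors of P D P^(-1) right to left:
step1 = P_inv @ v     # express v in the eigenvector basis
step2 = D @ step1     # scale each coordinate by its eigenvalue
step3 = P @ step2     # return to the standard basis
assert np.allclose(step3, A @ v)

# Here P is orthogonal (P^T P = I), i.e. a true rotation/reflection.
# For non-symmetric diagonalizable A this need not hold.
assert np.allclose(P.T @ P, np.eye(2))
```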

  • @Abcdeee_p
    @Abcdeee_p 7 months ago

    wow!