I am usually a silent observer of RUclips videos, but this is special. Enjoyed every second of it. Thank you for making this.
There's a great book called Linear and Geometric Algebra by Alan Macdonald. While a good portion of it is about building the foundation of geometric algebra (a very clean way of unifying many parts of linear algebra by defining a new operation on vectors), the best part about the book is that it teaches linear algebra and linear transformations without much matrix usage; there are one or two chapters covering matrices, since they are important, but most discussion of linear transformations is matrix-free. I really like it because I think matrices are so heavily tied to linear transformations that the two tools can get conflated with one another.
What a great observation! Macdonald is one of the greats when it comes to abstract algebra. I believe linear transformations are an incredible artifact of the human brain, coming from the more general idea of morphisms; matrices are just a way of describing their details in a well-understood situation. Thank you for the thoughtful comment!
Great explanation. The animations and visuals were amazing.
Answers:
1) The direction-invariant vectors are called eigenvectors
2) A matrix is diagonalizable if it has enough linearly independent eigenvectors to span the space
3) The diagonal entries are the eigenvalues. Correct me if I am wrong. Thanks!
Thank you for the kind words! It seems you are quite well studied since all of your answers are indeed correct!
Bonus question: can every onto linear transformation be diagonalized?
@qualitymathvisuals No sir, not every onto linear transformation can be diagonalized. Diagonalizability is a property of square matrices or linear transformations that have a full set of linearly independent eigenvectors.
Excellent!
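For a concrete counterexample to the bonus question, here is a minimal sketch using sympy (the 2×2 shear matrix below is a standard illustrative choice, not one from the video): the shear is invertible, hence onto, yet its single repeated eigenvalue supplies only one independent eigenvector.

```python
import sympy as sp

# A shear matrix: invertible (so the transformation is onto),
# but the repeated eigenvalue 1 has only a one-dimensional eigenspace.
A = sp.Matrix([[1, 1],
               [0, 1]])

print(A.det())                # 1 -> invertible, hence onto
print(A.is_diagonalizable())  # False
print(A.eigenvects())         # [(1, 2, [Matrix([[1],[0]])])]: only one eigenvector
```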
This shows you have very deep intuition
This is a very high quality math visual! Never knew my homework was interesting 👍
@raypanzer Glad you enjoyed it!
HOW does this not have more views?? Best visualization of this concept I have ever seen
Absolutely stunning. Deserves to have more views, and you more subscribers
Thanks! Great video
This made diagonalization more confusing. Thanks, chump!
Great video, as usual
Didn't understand a single word of this, but it was pretty!
Wow, this is incredible. I must say you have done a very good job with this video, and you explained the concepts of diagonalization very concisely. Thanks!
Glad you liked it!
Very insightful! Question: when you read the equation at 4:27 you read it from left to right, but aren't the matrices composed from right to left?
As a consequence I read it as "align the eigenvectors with the standard basis" -> "scale standard basis" -> "move the eigenvectors back"
But I'm unsure whether my interpretation is correct
Excellent question! Yes, given two matrices A and B, their product can be interpreted as the composition of the linear transformation of A with that of B: AB is the transformation that applies B and then A. So the order of highlighting used in the animation is not helpful for this understanding, good catch!
@qualitymathvisuals Thanks for the prompt response! I'm currently a TA for an undergrad LinAlg course so this video serves me (and my students) well.
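To make the composition order above concrete, here is a small sketch with numpy (the rotation and scaling matrices are arbitrary illustrative picks): (AB)x applies B first and A second, and swapping the order changes the result.

```python
import numpy as np

A = np.array([[0, -1],
              [1,  0]])     # 90-degree rotation
B = np.array([[2, 0],
              [0, 1]])      # scale the x-axis by 2
x = np.array([1, 0])

print(A @ B @ x)    # [0 2] -- scale first, then rotate
print(A @ (B @ x))  # [0 2] -- identical: AB means "apply B, then A"
print(B @ A @ x)    # [0 1] -- rotate first, then scale: a different vector
```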
Great video, greetings from Spain!
Thank you very much!
beautiful
I am surprised that this has only 15k views
Great video sir.
Thank you so much Sir❤
Thank you for the kind words :)
Fun fact: to calculate the largest power of a matrix whose exponent still fits in a 64-bit unsigned long, only 128 multiplications are needed. Example: you want to calculate 5^14. We split the exponent into binary: 5^(2¹+2²+2³) = 5² × (5²)² × ((5²)²)² = 6,103,515,625.
We only have to square, x := x², and if the current bit is on, we multiply our result by the current power, then we square x again... So to calculate powers with exponents up to about 4 billion, you only need at most 64 multiplications: 32 for the squaring and at most 32 for the result. Since computers take no longer to multiply larger (fixed-width) numbers, that reduces the amount of calculation by an insane amount.
What a spectacular insight! The algorithm you are describing is called the “square and multiply algorithm” and is one of the main tools needed for computational cryptography. Hopefully I can talk about it soon in an upcoming video!
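For anyone curious before that video arrives, here is a minimal sketch of square-and-multiply in Python (written for plain integers; the same loop works for square matrices if result starts as the identity matrix):

```python
def power(base, exponent):
    """Square-and-multiply: O(log exponent) multiplications instead of exponent - 1."""
    result = 1                # for matrices, start from the identity instead
    while exponent > 0:
        if exponent & 1:      # current binary bit of the exponent is on:
            result *= base    #   fold the current power into the result
        base *= base          # square to reach the next power of two
        exponent >>= 1
    return result

print(power(5, 14))  # 6103515625, matching the 5^14 example above
```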
Thank you
Wow! Extremely helpful
Truly underrated 🌟
Thank you🫡
Superb video!
Thank you very much!
Amazing video 🎉
Thank you so much!!!
Hi, with regards to PDP^-1: P^-1 converts to the new basis, after which we scale by D and then rotate back to the standard basis by P. Am I correct?
Yes!
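To watch those three steps happen numerically, here is a short sketch with numpy (the 2×2 matrix is an arbitrary diagonalizable example, not one from the video):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # eigenvalues 5 and 2

eigenvalues, P = np.linalg.eig(A)     # columns of P are the eigenvectors
D = np.diag(eigenvalues)

x = np.array([1.0, 1.0])
in_eigenbasis = np.linalg.inv(P) @ x  # P^-1: coordinates of x in the eigenbasis
scaled = D @ in_eigenbasis            # D: scale along each eigenvector
back = P @ scaled                     # P: return to the standard basis

print(np.allclose(back, A @ x))       # True: P D P^-1 acts exactly like A
```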
Wth you are so underrated
Sir, can you send me an example of a diagonalizable 5×5 matrix?
I was thinking the other day about what was used before analytical geometry, and then discovered synthetic geometry. I think there's a need for a balance between analytical and synthetic geometry. What do you think?
Lovely animation, btw ❤
I see a lot of potential in Blender as a game changer for doing simulations using interconnected nodes 😊
Happy like a hippo!! Thanks, man
Guys, how I understand it: we are dividing some linear transformation into different steps that are easier to calculate. I mean, our P matrix helps us change basis, D changes sizes, and P inverse finishes the work. Now I have a question:
Is it correct to say that P realizes some rotation that we need and D just changes sizes?
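On the rotation part of that question, here is a quick check with numpy (reusing an arbitrary non-symmetric example): P is a change of basis but generally not a pure rotation, since eigenvectors need not be orthonormal; for a symmetric matrix they can be chosen orthonormal, and only then is P orthogonal (a rotation/reflection).

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])               # generic, non-symmetric
_, P = np.linalg.eig(A)
print(np.allclose(P.T @ P, np.eye(2)))   # False: P is not orthogonal, so not a rotation

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric
_, Q = np.linalg.eig(S)
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: orthonormal eigenvectors -> rotation/reflection
```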
wow!