Square root of a matrix
- Published: Feb 9, 2025
- Square root of a matrix: definition and calculation using eigenvalues. What does it mean for a matrix to have a square root?
Check out my Eigenvalues playlist: • Diagonalize 2x2 matrix
Subscribe to my channel: / @drpeyam
I never learnt this until now....
Linear Algebra by Hoffman and Kunze, Theorem 13 in Section 9.5
This is exploited in Quantum Mechanics.
I got this in my linear algebra course in my undergrad years. We did it like this, as I recall:
1) Find the eigenvalues, λ, & eigenvectors of A
2) Use the eigenvectors to make a matrix, S, & its inverse
3) Diagonalize A by similarity transformation, B = SAS⁻¹. The diagonal elements of B will be the λ's
4) √B is then just the diagonal matrix whose diagonal elements are √λ's [Either sign can be taken, ±√λ, with each λ]
5) Reverse the similarity transformation, √A = S⁻¹(√B)S. Done!
NB1: Not every matrix is diagonalizable; not every matrix has a square root.
I don't recall whether either set (or both!) is a subset of the other.
Every symmetric matrix has both properties, however.
NB2: This can also be done with (some) complex matrices.
Fred
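The five-step recipe above translates almost line for line into numpy. A sketch under the comment's convention B = SAS⁻¹ (so S is the inverse of the column-eigenvector matrix that `numpy.linalg.eig` returns); this is an illustration, not code from the video:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])

# 1) eigenvalues and eigenvectors of A
lams, vecs = np.linalg.eig(A)        # eigenvectors are the columns of vecs

# 2) build S and its inverse; with B = S A S^-1, S is the inverse of vecs
S = np.linalg.inv(vecs)
S_inv = vecs

# 3) diagonalize: B = S A S^-1 has the eigenvalues on its diagonal
B = S @ A @ S_inv

# 4) take the square root of the diagonal (choosing + for each eigenvalue)
sqrt_B = np.diag(np.sqrt(np.diag(B)))

# 5) reverse the similarity transformation
sqrt_A = S_inv @ sqrt_B @ S
```

Flipping the sign of any √λ on the diagonal in step 4 gives the other square roots.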
Damnnnnn. Even the Calculus God didn't know about this linear Algebra topic
@Safwan.Hossain I know, right? He's brilliant, but I was surprised he hadn't seen the Taylor-series expansion of a function with a matrix argument.
I swear this man's too friendly
I hate to be that guy, but: at 4:02 the matrix should be [4 -4; -4 4].
Great video doctor ;-)
same row reduction
Now natural log of matrix Peyam ;)
so we can get sneaky introduction into Lie Groups :P
The method he uses to take the square root of a matrix works for any function f: f(A) = P f(D) P⁻¹. This works for every function that can be expanded in a Taylor series.
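A minimal sketch of this f(A) = P f(D) P⁻¹ idea in numpy, with a hypothetical helper name `matrix_func` (assuming A is diagonalizable and f is defined at its eigenvalues):

```python
import numpy as np

def matrix_func(A, f):
    """Apply a scalar function f to a diagonalizable matrix A
    via the eigendecomposition: f(A) = P f(D) P^-1."""
    lams, P = np.linalg.eig(A)
    return P @ np.diag(f(lams)) @ np.linalg.inv(P)

A = np.array([[5.0, 4.0], [4.0, 5.0]])

sqrt_A = matrix_func(A, np.sqrt)   # the square root from the video
exp_A = matrix_func(A, np.exp)     # same machinery gives e^A
```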
A very enthusiastic math teacher! Very rare these days!!!
Your enthusiasm is fun to watch. If I had only √ of it, I would be fine.
this is the most excited dude in the whole YT
With spectral calculus you can define any function of matrix. Either formally or by converging power series. 💪
[[1,2],[2,1]] is also a solution, and so are the negatives of both. So there are 4 solutions in total.
this man must love teaching so much
There is a small mistake in the number of "square roots" of a diagonalizable positive semi definite matrix A.
It would be 2^k, where k is the number of non-zero eigenvalues of A.
The way I found one of the square roots of A = [[5, 4],[4, 5]] was by reasoning that it is some matrix R whose determinant, times itself, is |A| = 9, which means |R| = 3 or -3. Then I guessed that [[1, 2],[2, 1]] worked, checked its determinant (which is -3), multiplied the matrix by itself, and it worked. This method of diagonalization you have demonstrated is a lot more intuitive!
I love the way you combine knowledge of both linear algebra and calculus to help solve new math problems every time. I sure hope I can be as calm as you in the future, and thanks for this video.
You can do this in other cases too, such as a rotation matrix a = [[cosø, -sinø],[sinø, cosø]] -> √a = [[cos(ø/2), -sin(ø/2)],[sin(ø/2), cos(ø/2)]], but it is unclear when you can do this.
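The half-angle claim is easy to sanity-check numerically (a sketch; `rot` is just a hypothetical helper that builds the 2×2 rotation matrix, and rotation matrices only diagonalize over ℂ, yet the half-angle root is real):

```python
import numpy as np

def rot(t):
    """2x2 rotation matrix by angle t (radians)."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

theta = 1.1                 # any angle
half = rot(theta / 2)       # candidate square root of rot(theta)

# rotating by theta/2 twice is rotating by theta
ok = np.allclose(half @ half, rot(theta))
```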
You can also do this for other matrices, but in general the answer is not unique. You need to use subspace-stability arguments to get all the answers.
Hello Dr.Peyam
Thank you so much for your interesting lessons and excellent explanation; I do appreciate your work.
I wish you peace and happiness under the sky of prosperity.
All the best.
Take care and have a good time.
This put a smile on my face, thank you :)
Wow diagonalization really does wonders - and now you can generalize this for any fractional power of a matrix.
Try cubing it and call it the matrix trilogy😂
love the enthusiasm bro
There is another positive solution, which is the matrix: [1 2 ; 2 1].
At 0:33 it should have been "if and only if y ≥ 0"; otherwise the square root of 4 can be -2 according to your definition.
Where was this when I was taking Linear Algebra lol. Blame YouTube for not showing me your channel sooner.
Find a polynomial p(x) such that p(y) = f(y) for every eigenvalue y of A; then p(A) = f(A), where f is any function defined at the points x = y.
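A sketch of this interpolation trick for the video's matrix A = [[5, 4],[4, 5]] (eigenvalues 9 and 1): with only two eigenvalues, the polynomial interpolating √y at y = 9 and y = 1 is just a line, p(y) = (y + 3)/4, so p(A) = (A + 3I)/4 should be a square root of A:

```python
import numpy as np

A = np.array([[5.0, 4.0], [4.0, 5.0]])   # eigenvalues 9 and 1

def p(M):
    """Lagrange line through (9, 3) and (1, 1): p(y) = (y + 3)/4,
    applied to a matrix argument."""
    return (M + 3 * np.eye(2)) / 4

sqrt_A = p(A)   # p agrees with sqrt on the spectrum, so p(A) = sqrt(A)
```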
Dr. Peyam should do a course in a graduate math study series.
Notice that f(A), where A = [[a, b],[b, a]], is equal to ½ [[f(a+b)+f(a−b), f(a+b)−f(a−b)], [f(a+b)−f(a−b), f(a+b)+f(a−b)]].
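That closed form is easy to verify numerically; `f_of_symmetric_2x2` below is a hypothetical helper name, and a + b, a − b are the two eigenvalues of [[a, b],[b, a]]:

```python
import numpy as np

def f_of_symmetric_2x2(a, b, f):
    """f([[a, b], [b, a]]) via the closed form above; a+b and a-b
    are the eigenvalues, with eigenvectors (1,1) and (1,-1)."""
    s, d = f(a + b), f(a - b)
    return 0.5 * np.array([[s + d, s - d],
                           [s - d, s + d]])

B = f_of_symmetric_2x2(5, 4, np.sqrt)   # square root of [[5,4],[4,5]]
```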
Just started an introduction to Time Series and found out that somehow no one has taught me how to calculate square roots of matrices. Wish me luck on my colloquium tomorrow lol
Good luck!!!
Thank you. Your videos are helping me a lot.
You can extend this to non-diagonalizable matrices with the Taylor series of square root
Or Jordan form :)
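For a Jordan block J = I + N with N nilpotent, the binomial series √(I + N) = I + N/2 − N²/8 + … terminates, so a square root exists even though J is not diagonalizable. A small sketch (here N² = 0, so only the first two terms survive):

```python
import numpy as np

# Non-diagonalizable Jordan block J = I + N with N nilpotent (N^2 = 0)
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = J - np.eye(2)

# binomial series sqrt(I + N) = I + N/2 - N^2/8 + ... truncates here
sqrt_J = np.eye(2) + N / 2
```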
I didn't even know what an eigenvalue is, yet I still watched it till the end
There's a nice YouTube series by 3Blue1Brown where he explains the essence of linear algebra.
It took me a little while to get to grips with it, but it was REALLY satisfying when I got concepts like the eigenvalue
In a physical system (mechanics, acoustics, quantum mechanics...) the eigenvalues are related (squared) to the natural frequencies of the system - i.e. where it tends to oscillate more easily by itself (without being forced - think of a swing, a metal rod, a string, a membrane or a bridge etc.).
The associated eigenvectors then describe the various possible oscillating states - where each point of the system is at its extreme position - before returning to the opposite position (ideally, when there is no energy lost to internal heat, damping or air resistance etc.)
For a one-dimensional string:
A set of possible sine waves (eigenfunctions) with fixed nodes at the end points, which depend on the length of the string and its density etc.
In this case the number of nodes (the positions of EACH coordinate of these eigenvectors!) becomes infinite as the eigenvector turns into a (continuous) eigenfunction in the limit (depending on how accurate a mathematical description we want of our physical system).
I love your passion. Big ups
I used another method and got 4 matrices for the square root of A; all of them work
"What I learning day by day is that everything is possible in math and physics"
Hello Dr Peyam, I really enjoy your linear algebra videos. Could you do a video talking about the matrix exponential? I know what it is but I don't think I understand it completely. Cheers!
Sunday 😉
Haha, Monday actually, I’m gonna take a rest day tomorrow
@@drpeyam A while curvature? 😂
Thank you for the video. At 1:13 you mention there are N^2 square roots. Are there not 2^N square roots, obtained by independently choosing between the positive and negative square roots of each of the N eigenvalues?
Yeah, 2^n, my bad
You are really crazy
Is there such a thing as a matrix that has matrices as components?
Sadly not; the elements of a matrix have to be in a field, and we can't divide by an arbitrary nonzero matrix
Why demand that the matrix is positive definite? If not, you just get a complex matrix as the root, but I feel like that is a valid answer... if someone asks you what the square root of minus 1 is, you answer that it is i or minus i. Likewise there is no shame in returning a complex answer when asked for the square root of a non-positive-definite matrix...
Because having complex values in your matrix is tensor algebra.
I guess he was only referring to the normal (real) square root function, not the more advanced complex square root function, which just "happens" to coincide with the former on the non-negative real axis.
After 40 years of using eigenvectors in my work in optics I never once thought of taking a square root of a matrix. We have the equivalent but it is not called the "square root". In a periodic transport system we would call that quantity one half of the transport matrix.
Since you said this can be extended to all real roots for "non negative" matrices, could a "negative" matrix have an odd root?
It could have an imaginary root, like the square root of -1 is i
@@drpeyam oh, nice!
Very interesting shortcut method for the matrix inverse
I love his linear algebra, especially the fun and clarity
I'm here because of BPRP and wow I'm loving these !!!!
❤️❤️❤️
U seem really happy teaching this :)
Listen to me very carefully... you are the maaaaaaaaan! Omg, never been so satisfied by mathematics!!!!
So if your matrix is of complex numbers, the requirement for the determinants to be zero goes away, just like with numbers.
The determinant is still zero by definition of an eigenvalue
@drpeyam I'm dumb, what I wrote wasn't anything like what I meant to say. What I meant to say is that the requirement for the eigenvalues to be positive goes away when the matrix is of complex numbers.
That makes more sense :)
This 2x2 matrix (which is positive definite) has 4 different choices of square-root matrices, and of the four possible choices he showed the one that is positive definite.
In fact, the unit matrix has infinitely many square roots.
For anyone interested in gravity, there is a quite recent formulation of general relativity due to Krasnov where the Lagrangian for the theory involves the trace of the square root of a matrix!
How about considering an n by n matrix in general?
Same idea
That is so cool! Thanks for this linear algebra refresher.
I do have a question, how would you go about finding the other matrices that give B^2 = A?
Use minus the square roots of the eigenvalues, and rearrange the eigenvectors in any order you wish. That should pretty much give you all of them
oh i really forgot the concepts... thanks for the revision
I learnt it from u thanks
What if one of the eigenvalues is negative?
You’d get imaginary matrices, which are technically also square roots
Doesn't it leave something like n^2 possibilities of square roots for an n×n matrix (just by this definition, over the complex numbers)? Also, why eigenvalues and not the determinant, after all?
The interesting thing is that a 2 by 2 matrix can have 4 square roots! That is because each eigenvalue has 2 roots and there are 2 eigenvalues, thus 4 possibilities. They can be found much easier using Cayley Hamilton and completing the square with respect to the constant instead of the linear term in the Characteristic equation. And to make things crazier, the unit matrix has infinitely many square roots!
I would like to understand this but I'm not prepared even if I don't, I have to say that watching the video was interesting and fun
0:35 why is this still a condition XD - that condition works just as well allowing negative x values, and doing this with matrices would certainly make the distinction very blurry, because you haven't defined comparing matrices, as far as I've seen (edit: you then go on to define that, but I don't see why eigenvalues can't be negative or complex)
@dr Peyam What is the reason we ignore the negative sign of the square root of the matrix?
THANK YOU VERY MUCH, IT WAS A GREAT HELP. I'M SUBSCRIBING
You're welcome :)
is this analogous to the square root of a differential operator?
Yep
Holy cow, I encountered this when I was studying Unscented Transform for UKF...
Cool! I am using it too.
How to calculate the square root of a 3x3 matrix? I can't solve it
Same idea, diagonalize it
@drpeyam For some matrices it gives complex numbers in the square root. Then if I square that matrix, it doesn't give the real matrix back.
It should give the real matrix back
wow, this was very different than I would have guessed. I thought you were just going to take B = [a b; c d], square it, and set the entries equal to the matrix you're taking the root of, i.e. 4 equations with 4 unknowns.
Well, I'm too lazy to do it, but would this work?
I think it might be possible, but a huge mess!
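The "4 equations, 4 unknowns" approach is indeed a mess by hand, but a computer algebra system handles it; a sketch with sympy (assuming sympy is available), which for [[5, 4],[4, 5]] should recover all four square roots:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
B = sp.Matrix([[a, b], [c, d]])
A = sp.Matrix([[5, 4], [4, 5]])

# four polynomial equations (one per entry of B*B - A), four unknowns
sols = sp.solve(list(B * B - A), [a, b, c, d])
roots = [B.subs(dict(zip([a, b, c, d], s))) for s in sols]
```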
me some months ago when the course started: Pff, I already did a linear algebra course years ago, how boring this will be
me after the limit of a matrix and this: O.O holy shit, this course is sick!
Also a nice video would be to find a square root through direct decomposition into subspaces and apply it to non-diagonalizable matrices, especially nilpotent matrices. E.g. you can find the square root of the (finite) first-derivative operator matrix applied to polynomials and how to interpret its coefficients. Another way to handle it would be using the exponential of an operator matrix.
A nilpotent matrix can never get back nonzero diagonal elements, because sqrt(N) has to have all eigenvalues 0 again, so the square-root predecessor of a nilpotent matrix is always nilpotent, if it exists. After each sqrt(N), the dimension of its null space is at least 1 lower, so once we reach the end of the chain with only one eigenvector, I suppose the sqrt of that matrix doesn't exist, because it would turn a nilpotent matrix into a regular one, which cannot have any eigenvalue 0.
that was awesome.... thank you!
I really enjoy these "something of a matrix" ideas. What about "cos of a matrix"?
Already done ✅
At 1:14 you say there are at least n^2 values, that should be 2^n instead.
Yeah
Why find the modal matrix and the diagonal matrix if we can find the square root, or any higher power of a matrix, with the eigenvalues from the characteristic equation alone?
But isn't the matrix
[1 2]
[2 1]
an answer also?
Yep! That’s because the columns of P are not unique, your matrix comes from P but flipping the columns, which is also ok. So more than 4 square roots
@Chig Bungus Sort of
They are the "- sqrt(A)" solutions, since they have negative eigenvalues - i.e.
B = -sqrt(A) is not positive definite.
@Chig Bungus It's like sqrt(4) has 2 as its unique solution, but x^2 = 4 has solutions -2 and 2.
An amusing way to find the square root of matrices [a b; b a]: Let I be the identity matrix and let J be [0 1; 1 0]. So you want a matrix M such that M^2 = 5I + 4J. If you try M = xI + yJ, then M^2 = (x^2 + y^2)I + (2xy)J. So you want to solve x^2 + y^2 = 5 and 2xy = 4, which leads to (x,y) = (2,1), (-2,-1), (-1,-2), and (1,2), giving the four matrix square roots.
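Verifying the four solutions from this xI + yJ ansatz numerically (a sketch):

```python
import numpy as np

I = np.eye(2)
J = np.array([[0.0, 1.0], [1.0, 0.0]])
A = 5 * I + 4 * J                  # the matrix [[5, 4], [4, 5]]

# (xI + yJ)^2 = (x^2 + y^2) I + 2xy J, so solve x^2+y^2 = 5, 2xy = 4
pairs = [(2, 1), (-2, -1), (-1, -2), (1, 2)]
roots = [x * I + y * J for x, y in pairs]

for M in roots:
    assert np.allclose(M @ M, A)   # each of the four is a square root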
Thank you very much
But the negative of the matrix that was found is also a solution; what happened to that matrix?
Yes, they are "-sqrt(A)" 😎
Thanks for your great teaching. How can we compute A^(2^(1/2))? I used this method, but my teacher said it's not valid here.
[2 1 1 2] works, but so does [1 2 2 1], and so does [-2 -1 -1 -2] and [-1 -2 -2 -1]. All four matrices can be squared and produce [5 4 4 5]. Why can’t all four matrices be considered solutions to sqrt([5 4 4 5])?
I see Orang found these solutions as well...
Sure they can! They are the same up to rearrangement of the eigenvectors
what about the other square roots? I can see at least 3 more.
There are at least 4, up to conjugation
I want to be a square root
Generally speaking, I want to be a cube root, because that is much better to be than merely a square root, but the grand champ has to be all the complex roots of __.
then don't be a perfect square
I have a question:
for the matrix to be positive, is it required that all its eigenvalues be positive?
If it has both positive and negative eigenvalues, can the definition not be applied?
Thank you for the video, very interesting (and sorry for my bad English)
In that case the matrix is neither positive nor negative (some people say it’s hyperbolic)
I think it can still be done, but only in matrices over the complex field, ℂ.
And keep in mind that, in complex vector spaces, the dot product always has a complex conjugate on one of the vectors.
(When you transpose, e.g., to turn a row vector into a column vector, or vice versa, you must also conjugate.)
Is this about right, Dr. ∏M?
Fred
The answer to your question is: not necessarily, unless the matrix is self-adjoint
Thank you!
Can someone refresh my memory? Is matrix multiplication associative? That is, does (AB)C = A(BC) for matrices A, B, & C?
Yeah
Thanks!
Associative yes, but AB =/= BA usually
Thank you so much. Very nice video.
Can we use sqrt(A) = e^(0.5 ln(A))? Then we have to calculate ln(A) with its Taylor series, and the same argument applies for the exponential
That works too
I'm looking forward to linear algebra.
What is that "NUL"? ("noss"??) 3:25
Ok. But for the square roots of 1 and 9 we get ±1 and ±3. So we have 4 roots for matrix A. Why do we have only one solution?
When the original matrix is not positive semi-definite, can you extend this method to have the square root be complex numbers, or does it break down?
Complex numbers, like the square root of -1
So now you can show people how to compute the exponential of a matrix (by expanding e^A in a taylor series). Then you can talk about Lie Groups and Lie Group actions !
Already done ✅
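A sketch of e^A via a truncated Taylor series, checked against the eigendecomposition e^A = P e^D P⁻¹ (illustrative only; production code uses better-conditioned methods such as scaling-and-squaring):

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Matrix exponential by truncating e^A = sum_k A^k / k!.
    Fine for small matrices of modest norm; not production-grade."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ A / k       # A^k / k!, built incrementally
        result = result + term
    return result

# cross-check against the diagonalization route e^A = P e^D P^-1
A = np.array([[0.0, 1.0], [1.0, 0.0]])
lams, P = np.linalg.eig(A)
via_eig = P @ np.diag(np.exp(lams)) @ np.linalg.inv(P)
```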
(D)^1/2 may be [1 0; 0 3] or [-1 0; 0 3] or [1 0; 0 -3] or [-1 0; 0 -3]
Great video, but I have a question:
When you get to D^(1/2), aren't you using the square root of a matrix to find the square root of a matrix?
I understand that this method works and all, but it looked like you used the concept to prove itself
The only thing he didn't prove is that the square root of the diagonal matrix is equal to the matrix with the square root of each term on the diagonal. But that's pretty trivial to prove. In general, it's easy to show that ANY function of a diagonal matrix is equal to the matrix where that function is applied to each term on the diagonal.
Well...Jordan decomposition is good at something here
I know that you said real numbers, but I get angry when people say you can't sqrt a negative number
Because you can't... (don't bring up the complex numbers; their algebraic definition involves the i-squared property, not any square root of complex numbers, which doesn't mean anything. The root notation only makes sense for real positive numbers. Using it as an analogy for matrices or complex numbers is but a display of a lack of understanding of the main idea behind it.)
Square root as defined in real number set cannot take negative number as an argument. The same way division cannot take zero as a second argument. Limits and complex numbers are not hacks that allow you to cheat on those restrictions, they are extensions which allow you to take those operations.
@NessHX Of course. Hence, you can't use the square-root notation wherever you want. That's why you won't find a single serious course that writes the square root of a matrix (even if we commonly talk about the "square root of a matrix", the correct way to phrase it is: find a matrix B in Mn(IR) that satisfies B.B = A). You'll find the same idea with the "exponentiation" of a matrix, where you refer to the infinite-series property rather than writing it as e^M.
Plus, giving lectures to high-school students, I've always found that writing sqrt(-1) is not only false but counter-intuitive (forcing students to look for a real solution), and it doesn't give them a handle on the algebraic computation of rotation. This causes them to believe complex numbers are only imaginary...
Very nice
so....clear !!!!!
Watching your videos on 1.5x 😂😂
Guessed the result straight away 😎
There has to be a certain condition on this. [1 2 ; 2 1] may also apply.
Well, up to choice and rearrangement of the eigenvectors
The method is obvious. I could use symbolic processors to get general solutions for cube roots too, but why? I have never seen the need, and I work with matrices in simulations
Ok
Wouldn't the square root of a matrix have infinitely many solutions because you wind up algebraically creating a bunch of parabolae?
No, A^2 = B might have 2^n solutions but sqrt(A) is unique
@@drpeyam Oh, wait, I was initially thinking circle and then I changed it.
@@drpeyam Actually, wait. If A^2=B then you can create two equations: sqrt( b11 + b22 ) = a11 + a22 and sqrt( b12 + b21 ) = a12 + a21
Since you can have an infinite number of decimal places and you can just combine virtually any two numbers, it seems to me that there are an infinite number of solutions. Then going by that, you could say the solution set is a pair of lines with a slope of negative one. What am I forgetting? It's been a while.
great vid!
Thanks, doc!