7:10
Small correction. You can't order the matrices inside the trace function however you want. You can only rearrange them cyclically. So, for instance, you can do JP⁻¹P or P⁻¹PJ but not JPP⁻¹.
Can the rearrangement be arbitrary if the matrices involved in the multiplication (whose product we are taking the trace of) all have the same dimensions? The cyclical rearrangement is obviously required if the dimensions are different.
@@slavinojunepri7648 No, even if they are all square matrices, they have to be rearranged cyclically (unless they otherwise commute).
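For anyone who wants to see this concretely, here is a quick numerical check (a minimal sketch using numpy; the matrices are just random 3x3 examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3, 3))  # three random 3x3 matrices

# Cyclic rotations of the product leave the trace unchanged:
print(np.trace(A @ B @ C))  # tr(ABC)
print(np.trace(B @ C @ A))  # tr(BCA), same value
print(np.trace(C @ A @ B))  # tr(CAB), same value

# A non-cyclic reordering generally changes it:
print(np.trace(A @ C @ B))  # tr(ACB), different in general
```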
From Morocco: thank you very much, son. You took me back 40 years to my university studies.
Thanks. I understood it. Excellent presentation.
What a beautiful result!
I actually have this formula tattooed on my left forearm:) I remember seeing it for the first time in 3b1b video, and immediately trying to prove it in my head. I managed to do it for the case of the diagonal matrix when I was falling asleep, and when I woke up I knew how to generalize it for any square matrix. After I got the tattoo, this formula started appearing everywhere in my studying, which is funny)
Thanks for the video
I just started linear algebra this semester, and this video makes me excited about the subject.
Always loved me some linear algebra
superb, loving it. subscribing.
Note that f(PJP^-1) = P f(J) P^-1 for any f(•) that has a Taylor series.
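The one-line reason, sketched in LaTeX: similarity passes through every term of the power series, since

```latex
(PJP^{-1})^{k} = P J^{k} P^{-1}
\quad\Longrightarrow\quad
f(PJP^{-1}) = \sum_{k\ge 0} a_k (PJP^{-1})^{k}
            = P\Big(\sum_{k\ge 0} a_k J^{k}\Big) P^{-1}
            = P\, f(J)\, P^{-1}.
```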
Alternatively, tr = ln○det○exp.
And conversely, det = exp○tr○ln.
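Both compositions are easy to sanity-check numerically (a sketch using scipy.linalg.expm and logm; the matrix is a random example, and B is built as a matrix exponential so that a matrix logarithm is guaranteed to exist):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = expm(A)  # B is a matrix exponential, so log(B) exists

# tr = ln ○ det ○ exp : log(det(exp(A))) should equal tr(A)
print(np.log(np.linalg.det(expm(A))), np.trace(A))

# det = exp ○ tr ○ ln : exp(tr(log(B))) should equal det(B)
print(np.exp(np.trace(logm(B))), np.linalg.det(B))
```

The two numbers in each line agree up to floating-point error.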
Ah, what an elegant way of calculating trace!
Excellent explanation! Only the notation for elements of a matrix is a bit off; most books use lowercase letters.
Very nice and clear explanation
Very clear. Good presentation.
Very nice and easy to follow
I think you can prove it with this method too: (lambda, v) is an eigenvalue/eigenvector pair of A iff (e^lambda, v) is an eigenvalue/eigenvector pair of exp(A) (the forward direction is provable by applying the definition of eigenvalue to the Taylor expansion; the converse is a bit more difficult). The other ingredients are det(A) = lambda_1 ··· lambda_n and tr(A) = lambda_1 + ... + lambda_n (provable from a root factorization of the characteristic polynomial of A). Finally, plug exp(A) into the det, use the last two equalities, and you have the result.
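Written out, the chain of equalities in this argument looks as follows (a LaTeX sketch; the multiplicity bookkeeping is what the spectral mapping theorem handles):

```latex
Av = \lambda v \;\Longrightarrow\;
e^{A}v = \sum_{k\ge 0}\frac{\lambda^{k}}{k!}\,v = e^{\lambda}v,
\qquad\text{hence}\qquad
\det(e^{A}) = \prod_{i=1}^{n} e^{\lambda_i}
            = e^{\lambda_1+\cdots+\lambda_n}
            = e^{\operatorname{tr}(A)}.
```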
very clear, thank you very much
I believe that you can prove this directly by using the formula exp(xA) = lim_{N->infinity} (1 + xA/N)^N and expanding the determinant in x/N for small x/N. The leading order would be related to the trace, and the rest goes away as N -> infinity (see the sketch below this thread).
Of course, the Jordan decomposition provides an elegant proof if you assume the existence of the Jordan decomposition.
Jordan decomposition is always possible over an algebraically closed field, for example the complex numbers.
@@m4gh3 Yes, I did not say it was wrong.
Then you have to explain how to get the algebraic closure, which is not as easy as this exercise...
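Returning to the limit argument that opened this thread (taking x = 1), a sketch of the computation, with the error bookkeeping suppressed into O(1/N^2):

```latex
\det\!\left(\left(I+\tfrac{A}{N}\right)^{N}\right)
= \det\!\left(I+\tfrac{A}{N}\right)^{N}
= \left(1+\tfrac{\operatorname{tr}(A)}{N}+O\!\left(\tfrac{1}{N^{2}}\right)\right)^{N}
\;\longrightarrow\; e^{\operatorname{tr}(A)}
\qquad (N\to\infty),
```

using the multiplicativity of the determinant and the expansion det(I + εA) = 1 + ε·tr(A) + O(ε²).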
This is a very good video. I really enjoyed it.
Take the function
det(e^(At)).
The derivative of det at a matrix A, applied to a matrix X, is det(A)·tr(A^-1·X) (this is Jacobi's formula). And so the derivative of this function is
det(e^(At))·tr(e^(-At)·A·e^(At)) = det(e^(At))·tr(A).
Also, det(e^(A·0)) = 1.
And so we get
det(e^(At)) = e^(t·tr(A)),
and substituting t = 1, we get
det(e^A) = e^(tr(A)).
(A fuller write-up is sketched below the replies.)
I suppose you are using that both functions satisfy the same linear differential equation?
f'(t) = trA f(t) and g'(t) = trA g(t)
f(0)=g(0)=1
Otherwise, I don't see where you show that their derivatives are equal.
@@98danielray Yes, they are equal because the second function is the unique solution of the differential equation that the first function satisfies.
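For readers following the thread above, the computation in full (a sketch; Jacobi's formula, d/dt det M(t) = det M(t) · tr(M(t)^{-1} M'(t)), is taken as given):

```latex
f(t) := \det\!\big(e^{At}\big), \qquad
f'(t) = \det\!\big(e^{At}\big)\,\operatorname{tr}\!\big(e^{-At} A\, e^{At}\big)
      = f(t)\,\operatorname{tr}(A), \qquad f(0) = \det(I) = 1,
```

so by uniqueness of solutions of this linear ODE, f(t) = e^{t·tr(A)}, and t = 1 gives det(e^A) = e^{tr(A)}.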
Nice collection of topics
Good work ❤
Very clear and easy to follow
Order matters if there are more than two factors inside the trace; only cyclic rotation is allowed.
You’re back!
Love this topic
Fabulous
Thank you!! How I wish you had a video on Hermitian matrices :(
Is there a geometric explanation to this equation?
Does it only work for e or for any base?
The result holds for any positive base.
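Concretely, with b^A defined as e^{(ln b)A} for b > 0, the general-base version reduces to the base-e identity (sketch):

```latex
\det\!\big(b^{A}\big) = \det\!\big(e^{(\ln b)A}\big)
= e^{\operatorname{tr}((\ln b)A)}
= e^{(\ln b)\operatorname{tr}(A)}
= b^{\operatorname{tr}(A)}.
```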
It would be shorter if we allow eigenvalues into the picture: det(e^A) = (product of the eigenvalues of e^A) = e^L1 · ... · e^Ln = e^(L1 + ... + Ln) = e^(tr A),
where L1, ..., Ln are the eigenvalues of A.
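A quick numerical sanity check of this chain of equalities (a sketch using numpy and scipy.linalg.expm; the matrix is a random example, and all four printed values agree up to floating-point error):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
L = np.linalg.eigvals(A)  # L1, ..., Ln (possibly complex)

print(np.linalg.det(expm(A)))    # det(e^A)
print(np.prod(np.exp(L)).real)   # e^L1 * ... * e^Ln
print(np.exp(np.sum(L)).real)    # e^(L1 + ... + Ln)
print(np.exp(np.trace(A)))       # e^(tr A)
```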
Wow!
Nice...
Thank You
👍🏻👍🏻👍🏻👍🏻👍🏻🔥🔥🔥🔥
It is only valid for square matrices and not for rectangular matrices.
Hmm... I don't think J is *always* upper-triangular, as you have assumed. For example, if A is 2x2 with complex eigenvalues, then I don't think J can be upper-triangular.
In fact the Jordan canonical form is always upper triangular. We permit J to have complex entries, as the rest of the proof still holds.
@@MuPrimeMath OK I missed the "complex entries" thing although you did state it in the video. Makes perfect sense now, thanks!
When we write A as PJP^-1, you said P is _any_ matrix. Just to clarify, I'm assuming you meant P is any _invertible_ matrix?
More specifically, P is a change-of-basis matrix, which is always invertible.
Yeah, it's a slight mistake.
I'm not great at linear algebra, but isn't P just the matrix of eigenvectors?
Dude, you see P^-1 in the formula; that already says P is invertible.
Very cool video. Interesting, to the point, well explained...
You should get more views
Sir? I hope this doesn't sound insulting, but I was wondering if you could give me a tip on how to find the limits when finding an area using integration. Please don't mind me asking. I really have a hard time figuring out the limits. Lots of love and support! :-)
Sorry, the question is not clear. It's hard for me to give advice on such general topics. I wish you the best of luck.
@@MuPrimeMath it's okay, no worries. Thanks for your reply. Keep uploading. Lots of love!
@@sirshabiswas3010 If the limits aren't given, you might be looking for intersection points. Equate the functions and solve for the intersections, is my guess. There are good resources online for these topics. Good luck!
@@alejrandom6592 thanks! good luck too!
Note that we could call this a Simply Beautiful solution, BUT not as beautiful as a Cowboy cheerleader.
Me on a first date
The infinite sum for e^A starts with I, the identity matrix, not the number 1.
Yes. Here 1 denotes the multiplicative identity in the ring of matrices, which is the identity matrix.
He keeps saying "any matrix", but the power of a matrix is not, in general, well defined. For square matrices it is (A^n always exists), but that's not the case for arbitrary m x n matrices. This is a major flaw of the video, in particular for a formal mathematics one.
Further assumptions are needed. For example, that A is a square matrix is never explicitly stated in the video; otherwise det(e^A) doesn't make any sense, as the determinant is defined only for square matrices. A proof that the power expansion of e^A converges is also needed; it is valid because A is a bounded linear operator in finite dimensions.
No, he did fine. What you describe isn't in the domain of the determinant to begin with; your objection is excessive.
@@wargreymon2024 First, study a little about endomorphisms and then share your opinion on social media.
ambiguous and excessive
The series expansion of e^A starts with the identity matrix, not 1.
Yes, here 1 denotes the multiplicative identity of the matrix ring.
I do not know why, but these math/CS geeks creep me the f out; they give off a serial killer vibe.
This proof is wrong in so many aspects that I can't cover them all in a comment, so I will just leave a dislike and tell viewers not to use this proof in their HW or any actual work they are doing!
I 100% agree, and I'll do it myself: Given a matrix A, considered as a complex matrix, its characteristic polynomial has n roots counted with multiplicities (Gauss's theorem, i.e. the fundamental theorem of algebra), so A is triangularizable over C, and the diagonal coefficients of the triangular form are its eigenvalues lambda_i. Thus exp(A) is similar to a triangular matrix whose diagonal coefficients are exp(lambda_i). Taking the determinant gives the result. No need to talk about Jordan form (an absolutely non-trivial result which can't be used as if it were nothing!!)
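A compact way to write this argument (a sketch; over the complex numbers, Schur's theorem gives the triangularization directly, avoiding Jordan form entirely):

```latex
A = QTQ^{-1},\ T \text{ upper triangular},\ T_{ii} = \lambda_i
\;\Longrightarrow\;
e^{A} = Q\,e^{T}Q^{-1},\ (e^{T})_{ii} = e^{\lambda_i},
\qquad
\det(e^{A}) = \prod_{i=1}^{n} e^{\lambda_i} = e^{\operatorname{tr}(A)}.
```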