Audio channels fixed!
Please provide a link to the playlist with the audio channels fixed. Thanks.
Dr. Strang is precious, protect him at all costs.
Ain't no one coming after him don't worry
The best course of linear algebra on the entire internet. I have been enjoying the course from the beginning. It helped me a lot.
1:49 Sometimes I watch his classes several times to make things settle in my mind, but sometimes just because I want to enjoy the humor.
I have never heard him say anything remotely funny. He is as dry as they come.
@@godfreypigott he sometimes does tickle the funny bone in me and make me giggle
00:12 Symmetric matrices have real eigenvalues and perpendicular eigenvectors.
03:33 In the symmetric case, the eigenvector matrix becomes orthonormal.
10:23 Symmetric matrices have a unique property when it comes to eigenvalues and eigenvectors.
13:53 The video discusses the relationship between symmetric matrices and positive definiteness.
20:21 Eigenvalues of a symmetric matrix
23:18 Symmetric matrices are good matrices, whether they are real or complex.
29:40 Finding eigenvalues of a symmetric matrix is a complex and time-consuming task.
32:22 Symmetric matrices have a connection between the signs of the pivots and the eigenvalues.
38:44 When a symmetric matrix is positive definite, its eigenvalues are positive.
41:41 Symmetric matrices have positive sub determinants and a positive big determinant.
Crafted by Merlin AI.
This is another fantastic lecture by the grandfather of linear algebra. Symmetric and positive definite matrices pop up in systems and control engineering.
and in statistics!
A living master of linear algebra who is not intimidated by spontaneous insights as he articulates the deeper meanings hidden in the mysterious mathematical creatures called matrices.
positive definite matrices start at 35:14
Is anyone else amazed at how he lets you see both the forest AND the trees... Simply the most elegant exposition of mathematics I have ever seen...
Where exactly in the lecture did you relate to understanding trees and forest?
I'm a beginner so I couldn't get it
@@shabnamhaque2003 I think he meant that Mr. Strang does a good job at explaining particular topics (trees) as well as how they relate to and fit in with each other (forest).
The perfect metaphor.
Thanks MITOpenCourseWare for uploading these beautiful lectures. Even remote students get taught by Prof. Strang. :)
Professor Strang- a gentleman and a scholar!
1:50 My favorite part of this video.
"PERPENDIC|| ULA ||R"
=========== ====
9:00: "That's what to remember from this lecture..."
Me: "Ight boys n gals. We can skip to the next lecture"
Finishes lecture. Never mind... Lecture (as always) was awesome.
1:49 "PERPENDIC---ULA---R"
ULA - Understanding Linear Algebra
A= LU
@@didyoustealmyfood8729 No ... I didn't steal your food
The best lecture Ive ever seen, Thank you very much!!!
@5:57 - looks like class rooms at MIT have ledges to jump off from if you don't understand anything :-)
@E 😂😂
43:00 - Summary
His move at 1:50 is legendary. Gang
this guy is a genius.. holy moly he has a quick mind
He is an absolute genius, loved the way he teach 😊
highly sympathic ... I would have loved to study at the MIT .. great, really
In Linear Algebra, Professor Strang is God.
He never erased "ULA" off the wall.
27:28 I don't understand why they are considered projection matrices. Projection matrices, from my limited understanding, satisfy P = P^n for any positive integer n (equivalently, P^2 = P). Projection matrices project a vector onto a certain subspace. Back in lecture 15, he derived P = A (A^T A)^-1 A^T. In the context of this lecture, A is an orthogonal matrix. Since A^T = A^-1, P = A A^T. Does he therefore mean that q q^T are projection matrices in this sense?
He probably means that q q^T is the projection matrix onto the subspace spanned by the vector q (for each eigenvector q_i, i = 1, 2, ...). In that case, each projection matrix P will be
q(q^T q)^-1 q^T,
where actually (q^T q) denotes the dot product of q and q (i.e., the squared length of the vector q), which is the real number 1, since q is a unit vector. Thus, (q^T q)^-1 denotes the inverse of the real number 1, which is of course the real number 1 itself. Consequently the projection matrix P gets reduced to
q q^T .
That's what I think. ■
Okay, you're almost right. If you remember he taught that projection on the line through a vector a is (a a^T)/(a^T a). This is the projection matrix. This is the equivalent result when you're projecting on 1-D space.
Now imagine when a=q (a unit vector). The denominator which is a scalar quantity is just 1 since (q^T q)=||q||^2=1. So projection matrix is nothing but (q q^T). I hope this helps you.
@@Joshiikanan Thnks a lot
@@Joshiikanan The space it's projecting onto is spanned by the eigenvector: each projection (P1, P2, ..., Pn) projects onto its associated eigenvector, which is *one* vector, so the space generated by that vector is one-dimensional, even though the vector itself is of dimension "n", n being the number of eigenvalues of the matrix A.
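For anyone who wants to check this numerically, here is a minimal numpy sketch (my own, not from the lecture; the symmetric matrix A is just an example). It verifies that each P_i = q_i q_i^T is idempotent and that A equals the eigenvalue-weighted sum of these rank-one projections:

import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 3.0]])            # any real symmetric matrix works here

lam, Q = np.linalg.eigh(A)            # eigh returns orthonormal eigenvectors
for i in range(len(lam)):
    q = Q[:, [i]]                     # i-th eigenvector as a column, shape (2, 1)
    P = q @ q.T                       # rank-one projection onto span{q_i}
    assert np.allclose(P @ P, P)      # idempotent: projecting twice = projecting once

# spectral decomposition: A = sum_i lambda_i * q_i q_i^T
A_rebuilt = sum(lam[i] * Q[:, [i]] @ Q[:, [i]].T for i in range(len(lam)))
assert np.allclose(A, A_rebuilt)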
The number of positive pivots may not equal the number of positive eigenvalues. Take the matrix [1,0;-1,0] for example: without row exchange, it reduces to [1,0;0,0], but with row exchange it reduces to [-1,0;0,0]. An odd number of row exchanges will change the sign of the determinant and therefore change the number of negative pivots. Assume that there is no row exchange and no multiplication of a row by a (negative) scalar; then the result holds.
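Here is a small numpy sketch (mine, with a made-up symmetric example) of the sign count under that no-exchange assumption: it runs plain elimination with no row exchanges and compares the number of positive pivots with the number of positive eigenvalues.

import numpy as np

def pivots_no_exchange(A):
    # diagonal pivots from Gaussian elimination without row exchanges
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n):
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return np.diag(U)

A = np.array([[3.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, 1.0]])       # symmetric, with mixed-sign eigenvalues

piv = pivots_no_exchange(A)           # [3, -7/3, 10/7]
lam = np.linalg.eigvalsh(A)
print((piv > 0).sum(), (lam > 0).sum())   # both print 2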
30:57 "Matlab will do it, but it will complain", what a sense of humour xd
symmetric matrices
(A = A conjugate transpose)
have real eigenvalues, and an orthogonal basis of eigenvectors can be chosen
symmetric matrices can be perceived as a combination of projection matrices onto that basis
still, in symmetric matrices
number of positive pivots = number of positive eigenvalues
for positive definite matrices
all pivots are positive (the test) and all eigenvalues are positive (the outcome)
all subdeterminants are positive
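Those three tests are easy to check numerically. A hedged numpy sketch (the matrix is my own example, chosen with trace 8 and determinant 11, not necessarily the lecture's): eigenvalues, leading subdeterminants, and pivots (each pivot is the ratio of successive leading subdeterminants) all come out positive.

import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 3.0]])            # symmetric and positive definite

print(np.linalg.eigvalsh(A))          # 4 - sqrt(5), 4 + sqrt(5): both positive

subdets = [np.linalg.det(A[:k, :k]) for k in range(1, 3)]
print(subdets)                        # [5.0, 11.0]: both positive

dets = [1.0] + subdets
pivots = [dets[k] / dets[k - 1] for k in range(1, 3)]
print(pivots)                         # [5.0, 2.2] = [5, 11/5]: both positive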
The linalg GOAT!
Are those empty seats?? Please let me sit in one of those and I swear I'll attend every day!!
35:35 He seems to claim that positive definite matrices must be symmetric. But that can't be true: [2,0;2,2] is positive definite but not symmetric!
12:54 Lol, the professor could actually do that, but a little differently: instead of the conjugate equation, we can use the original equation. He actually pointed it out but slipped a little. Just multiply both sides of the transposed equation by x, change A*x to lambda*x, and we end up with an equation where lambda = conjugate(lambda). I followed his hint at that moment and it worked, but he ended up with a mess XDD.
Thank you, sir Strang!
deep insight with deep humour
What's with the claim that repeated eigenvalues have eigenvectors that are independent/span a plane? Not always; only if the matrix is diagonalizable.
@39:29 how did he get rad 5 so quickly? I heard "16-11" but I don't know how he got the 16. If he used the quadratic formula, that was some light-speed calculation of b^2-4ac, sqrt, and divide by 2.
Nvm. After mulling over it I have figured it out
Lisa Dinh never thought of doing it like that. Now I’m always gonna use it haha
@@RenanRodrigues-yj5tz ikr. He pulled 4 out of (b^2 - 4ac) right away and took its square root to cancel the 2 in the 2a of the denominator: (b^2 - 4ac) = 4((b^2)/4 - ac) ---> (64 - 4(11)) = 4(16 - 11).
He promptly recognized that 4 goes into 64 sixteen times.
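Spelled out, assuming the characteristic equation at that point in the lecture is lambda^2 - 8*lambda + 11 = 0 (trace 8, determinant 11, consistent with the numbers quoted above): lambda = 4 ± sqrt(4^2 - 11) = 4 ± sqrt(5). That is the half-trace shortcut -b/2 ± sqrt((b/2)^2 - c) with b = -8 and c = 11.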
A doctor as wonderful as can be.
"forgive me for doing such a thing" (looks at book)
which is again written by the legend himself :D
What if an eigenvalue is repeated???
I guess that we still get
n orthogonal eigenvectors.
The reason:
We can relate it to the algebraic multiplicity and geometric multiplicity of an eigenvalue. 🙂
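A quick concrete case, for what it's worth: the 2x2 identity matrix has the eigenvalue 1 repeated twice, and any orthonormal basis of the plane serves as a set of eigenvectors. For symmetric matrices the algebraic and geometric multiplicities always agree, so a repeated eigenvalue just means a whole subspace of eigenvectors, from which an orthogonal set can always be chosen.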
😅
what a funny way to open an exciting class
Can anybody explain why the number of positive pivots is equal to the number of positive eigenvalues?
I don't get it. Since symmetric matrices are always diagonalizable, it looks like they should always be invertible too (since it's easy to say, e.g., A = QΛQ^T and so A^-1 = QΛ^-1 Q^T). But they're not; for example, a matrix of all ones or all zeroes is symmetric (and obviously not invertible). What am I missing here?
OK, I'm missing that it would have a zero eigenvalue, which means that there's no way to construct Λ^-1.
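Concretely: the all-ones 2x2 matrix [1,1;1,1] has eigenvalues 2 and 0, so A = QΛQ^T still holds (it is diagonalizable), but Λ has a zero on its diagonal and Λ^-1 does not exist. Diagonalizable and invertible are independent properties.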
energetic professor.
Hi, at 39:00 how did he so quickly find the roots of the equation?
He used the quadratic formula for solving the equation I believe
The trace (sum of the diagonal values) is equal to the sum of the two lambdas.
When a=1, the quadratic formula reads: -b/2 +- sqrt( (b/2)^2 - c )
What about the decomposition into Hermitian and skew-Hermitian parts? How could we visualize that?
28:30
Wait, now I have a question: suppose I got the eigenvalues, then used elimination and got the eigenvalues again. Would they be the same?
Your eigenvectors will definitely change. This is how I understood it: A*x = lambda*x. Now suppose you change A by multiplying on the left by an elimination matrix E; then E*A*x = lambda*E*x, so x is generally no longer an eigenvector of E*A, and the eigenvalues may change as well.
@@dennisjoseph4528 thanks dude from México.
Dr Strange ALWAYS THE BEST
16:20:"where did he put his good god white foot on lol🤣"
Man, you know why the views sink from around lecture 23 or so 🤣: you have to read the book to clarify for yourself the important points Prof. Strang has left there purposely, which is actually elegant 😀. Now I'm off to read the book to find out why the signs of the pivots are the same as those of the eigenvalues.
I think it's because these are new videos with the audio channel fixed; I don't think the views from before 9 months or so were counted here.
Excellent!
Eigenvalue lambda = 1.0 leads to a term exp(lambda*t) = exp(t), which grows out of bound. Or am I missing the point? In the last lecture lambda = 0 became the steady-state value.
lambda=0 is the steady state for differential equations;
lambda=1 is the steady state for difference equations.
I thought the "cular" was a projection. NO! He wrote it on the wall lol
16:21 Blonde Guy with mohawk places his foot on the chair in front. Do this in a SE Asian country and have the duster come flying at your face. XD
35:17
Can anybody help me see how a vector times its transpose is a projection? Thank you very much in advance :)
Btw, amazing courses, you're truly lighting the way, Mr. Strang!
please read chapter 4.2 projection. project onto a line
If q has length 1, P = q q^T is symmetric and P^2 = P
Think of a vector as a row vector and its transpose as a column vector. When you do the multiplication you are doing the dot product of two vectors, which is a scalar. If you recall from an introductory course like calculus I, precalculus, or college physics I, when you dot two vectors, say a.b = |a||b|cos(theta), theta is the angle between the two vectors a and b. The smaller theta is, the bigger cos(theta) is, that is, the bigger the projection of the vector a onto vector b. Think of the projection as the length of the shadow of one vector on the ground. Hope that helps.
All matrices matter, no such thing as a good or a bad matrix :P
Good ones are those in which we easily see beautiful patterns in instances where others show no such patterns.
@@adhoax3521 is this not a clear case of matrix discrimination. Or is this how we get discriminants. :P
What does "sines of the eigenvalues" mean? Thanks.
Not sines but signs; it's a captioning error.
Good catch! Thank you for pointing that out. The caption will be corrected.
@@이승훈-u8f Thanks
31:27😂😂😂
For symmetric matrices, the signs of the pivots are the same as the signs of the eigenvalues.
What's a pivot?
Oh dear ... back to the beginning for you.
For anyone else who needs this:
Strang is talking about reducing the matrix to echelon form without row-reducing all the leading entries to 1; the pivots are those leading entries.
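A quick worked example (my own, using a symmetric matrix with trace 8 and determinant 11): for A = [5,2;2,3], subtract 2/5 of row 1 from row 2 to reach the echelon form [5,2;0,11/5]. The pivots are the diagonal entries 5 and 11/5; both are positive, matching the two positive eigenvalues 4 ± sqrt(5).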
here i am, still seven videos so far,
What 😳😱
Wonderful!
When he doesn't have enough space to write "perpendicular" 😂😂😂😂😂
20:32 I FuXX
excellent :o
Wow
Prof. Strang is a myth
Sooooo ..... he doesn't exist?
Vietnamese student: easy peasy
Nah dude, hard af
@@phanthh yeah
loll haha, u funny
Vietnamese student here. Not that easy for me.
May God burn linear algebra.