This is so cool!
When I started this lecture series in order to understand PCA better, I had no idea it would also relate to least squares regression! This blew my mind!
Thank you so much for making these. They must be a lot of work, but they are so appreciated!
Am I right that you write on real glass in front of the camera, and the image is just mirrored in editing? If so, it's brilliant.
Damn, I had just concluded he was an expert at mirrored writing.
The real video will look like this: www.mirrorthevideo.com/watch?v=PjeOmOz9jSY
He is also using his left hand.
Thank you for explaining hard-to-grasp concepts in a distilled, simple manner for us to understand. Your lectures are a great complement to Prof. Strang's; both are high-quality content.
please keep going with the numerical linear algebra/numerical analysis/scientific computation/applied math stuff thanks :)
I'm always impressed by how clean the board is, so it looks like there's nothing at all.
Pseudoinverse? More like "Super videos for us!" Thank you so much for making all of them.
Exceptionally clear explanation, crisp hand-written notes, wonderful!
Thank you very much for the clear explanation of pseudo-inverse.
Thank you, Steve, for the video. We make the assumption that it is an economy SVD at 6:28. Then how can we guarantee that V multiplied by V* becomes the identity matrix, especially for the underdetermined system?
can't thank you enough for sharing your knowledge with entire world
Since A has fewer rows than columns, A is an m×n matrix with m < n.
so, he made a mistake?
@@ahsanahmed2505 He is using n by m in the videos and in his book instead of the usual convention of m by n which is quite confusing and contrary to most linear algebra resources.
Excellent Lecture ! So clear to understand! Thank You !
Having second thoughts about doing a master's because your videos are just too helpful.
with all the amazing resources on the internet, it seems like higher ed is turning into mostly gatekeeping
Thank you Professor for this valuable lecture
Thanks for the video, it was really nice! There are two points that seem important to me. The inverse of Σ is actually not always computable (if there exists a singular value = 0), so the nicer expression would be Σ+, where Σ+ is the matrix in which every nonzero singular value is inverted but the zeros are left as they are.
And why not follow the convention for naming matrices? Normally a matrix is called an m×n matrix. It seems you use an n×m matrix here, which I think is a bit confusing at first glance.
Thank you for the singular-value-matrix-plus thing. Now I understand the video.
The inverse is not defined by inverting every element, but by the requirement that its product with the matrix yields the identity matrix. In other words, if S_inverse is the inverse of S, then S*S_inverse = 1, where 1 is the identity matrix. Note, though, that a diagonal matrix with a zero on its diagonal is singular, so in that case only the pseudoinverse Σ+ exists.
In the case of an overdetermined matrix X, why is V V^T equal to the identity, since we are using economy matrices?
great video
This is the nicest lecture I've ever seen. I want to recommend this lecture for a basic engineering graduate class!! :-) Thank you so much~ I'll buy the book! ^-^
Awesome, thanks so much, and hope you like the book!
I know this is just about notation, but I think the majority of linear algebra texts use m by n rather than n by m. It's sometimes a little confusing here...
knowing that the pseudoinverse exists makes me feel really powerful
I love your lectures! They are so clear and save us from a mist of information. I noticed that you wrote that the Sigma matrix is invertible; if A is not invertible, shouldn't the Sigma matrix also be non-invertible, hence a pseudoinverse of the Sigma matrix? Thank you for the clarification.
Absolutely Great Content
am I just missing something, why is n
Thanks for the great video! One question: at 7:50, if you have a zero singular value, how do you compute Sigma inverse? Thank you!
This might be a year too late, but the Moore-Penrose pseudoinverse satisfies (AB)+ = B+A+. So I think there was a mistake, and it should have been Sigma+, not Sigma^{-1}, when you separate it from U and V. Since U and V are unitary, they are invertible, and thus their Moore-Penrose inverse is the regular inverse. As for Sigma, since it has elements only on the diagonal, its Moore-Penrose inverse is just the transpose with the reciprocal taken of each nonzero element (a "d" on the diagonal becomes a "1/d").
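For anyone who wants to see that Sigma+ construction concretely, here's a small NumPy sketch (my own toy matrix, not one from the video):

```python
import numpy as np

# A 2x3 (underdetermined) example matrix
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Full SVD: U is 2x2, s holds the singular values, Vt is 3x3
U, s, Vt = np.linalg.svd(A)

# Sigma+ has the transposed shape (3x2) and reciprocates
# only the nonzero singular values; zeros stay zero
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_plus = np.array([1.0 / x if x > tol else 0.0 for x in s])
Sigma_plus = np.zeros((A.shape[1], A.shape[0]))
Sigma_plus[:len(s), :len(s)] = np.diag(s_plus)

# A+ = V Sigma+ U*  (everything is real here, so * is just transpose)
A_pinv = Vt.T @ Sigma_plus @ U.T

print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```

So the "invert each SVD factor" recipe reproduces `np.linalg.pinv` exactly, with Sigma+ doing the work on the diagonal.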
In an underdetermined case, if you use the economy SVD, is V*V' equal to an identity?
Thank you for the awesome materials and everything is well explained!
One question I have is how we calculate the inverse of the singular-value matrix, which is non-square. Isn't that back to the problem we had in the first place, i.e. inverting the non-square A matrix?
I'm amazed. This is so clearly explained!!
My god, you just explained what my professor has been trying to explain for 5 lectures.
Great explanation, sir!
Yeah. This is a very good and useful lecture.
Damn, thank you, I finally got it thanks to you after 3h research xD
Why, in the definition of A-dagger, did you put Sigma-inverse rather than Sigma-dagger? Sigma is non-square and has only a pseudoinverse rather than an inverse.
Great content !
Wonderful lecture
Hello, thank you for this nice video series. It is so helpful, and I use it with your book for my master's thesis. While going through the equations, one question popped into my head: at 7:32 you use the inverse of Sigma, but for the SVD, Sigma is not a square matrix but an n×m matrix, and as such it is not invertible (in the "classic" sense of invertible matrices). I understand that if I use the economy SVD this matrix would be m×m, but I don't understand the n×m case. Is there a video or a page in the book where this case is discussed? Other than that, thank you very much for saving my master's degree :D
Sigma might be an n×m matrix, but not all rows and columns are nonzero. If you remove all zero columns and rows, you will get a square matrix. So it's essentially a square matrix padded with zeros.
He replied to another person saying "Usually we will invert the first "m x m" sub-block, which is square, and then only use the first "m" columns of "U". Or, we could be even more aggressive and only invert the first "r x r" sub-block of Sigma, and only use the first "r" columns of "U" and "V", where r is much less than m."
However, if you just consider when n > m, then we would use the economy SVD (seen in his next video), so Sigma would also be a square matrix.
I had the same question
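Here's a quick NumPy sketch of that "invert only the r×r sub-block" idea, on a synthetic rank-deficient matrix (sizes and names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 8, 5, 3
# Build an n x m matrix of exact rank r
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # economy SVD

# Invert only the first r x r sub-block of Sigma, and keep
# only the first r columns of U and V
A_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

# Matches NumPy's pinv once the near-zero singular values are cut off
print(np.allclose(A_pinv, np.linalg.pinv(A, rcond=1e-10)))  # True
```

The zero rows and columns of Sigma contribute nothing, so truncating to the square r×r block loses no information when rank(A) = r.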
Thanks. Will you please tell me what online classroom or what classroom app you used ? I’m a teacher from China.
Wonder if Steve Brunton can cover for under/over determined systems the nontrivial solutions of Ax = 0, using SVD.
Sir, by solving for x, I understand that A-dagger cannot be a true inverse of A, but since the SVD is an exact equality for A, and since multiplying by U-transpose, S^{-1}, and V is again an exact operation, I don't understand where the approximation step was introduced in the calculation.
Great question. I address this exact question in the first 2 minutes of the next video: ruclips.net/video/02QCtHM1qb4/видео.html
Indeed, thanks :)
Thanks, just had this in mind as well
Man. It was really awesome..
I have a set of 3D positions and vectors. I try to find their intersection, so I linearized the equations as written in academic papers, but I don't get the results. I don't have a clue what could be wrong.
I have Ax=b
I tried:
1) (A'*A)^-1*A'*b
2) pinv(A)*b
3) A\b
4) I've just tried the SVD method; I get the same results with all.
I checked my values 100 times. The vectors and directions are correct; I manually calculated them and confirmed with the MATLAB code.
Thanks a lot
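Not sure what's wrong with your data, but as a sanity check: the four approaches you list do agree on a well-posed synthetic problem (NumPy here instead of MATLAB; note the normal equations use A'*A, not A*A'):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 4))   # overdetermined, full column rank
b = rng.standard_normal(10)

x1 = np.linalg.inv(A.T @ A) @ A.T @ b        # normal equations
x2 = np.linalg.pinv(A) @ b                   # pseudoinverse
x3, *_ = np.linalg.lstsq(A, b, rcond=None)   # NumPy's analogue of A\b
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x4 = Vt.T @ np.diag(1.0 / s) @ U.T @ b       # economy SVD by hand

print(np.allclose(x1, x2), np.allclose(x2, x3), np.allclose(x3, x4))
```

If all four methods agree on your data but the answer is still wrong, the issue is probably in how A and b were set up, not in the solver.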
LOL. So the Moore-Penrose pseudoinverse is just inverting one SVD term at a time, and since U and V are orthogonal, they are just transposed.
Well that saved me a lot of effort looking into where it comes from.
I get a bit lost when he constructs the left pseudoinverse. I cannot grasp why the singular-value matrix is guaranteed to have an inverse.
I FOUND GOLD.
Excellent
I am confused about the dimensions of the A matrix. Shouldn't an overdetermined system have m > n?
Are you using the economy version of the SVD? Otherwise you are not able to take the inverse of Sigma, which is an n×m matrix. EDIT: YES >> see the next video in this list.
Thanks for the video, Professor. Could you help me with something, please? I am trying to fit a sinusoidal surface that depends on (x, y, t), but in my case b is not a vector but a matrix. What can I do in this case? Thank you.
Hi sir,
Can you solve one linear regression problem using SVD and upload it to RUclips?
Please, it would help me a lot, and others as well!
Check the playlist, I already have an example
@@Eigensteve
Respected sir, can you send me the link?
Sorry for troubling you...!
Extremely nice lectures Steve Brunton, thank you very much for all the effort of creating and sharing them!
Do any of you, or anyone who reads this :), know of any reference that explores the math of why you get min ||x||_2 in the underdetermined case? Thank you in advance!
This might be late.
The reason for the min ||x||_2 is that any other solution would be this x-hat plus something in the null space of A. That addition would be orthogonal to x-hat and thus could only increase the magnitude of the solution. I am essentially reading this right out of pages 404-405 of Gilbert Strang's "Introduction to Linear Algebra," fourth edition.
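Here's a tiny NumPy check of that null-space argument (synthetic A and b, names mine):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 6))    # underdetermined: 3 equations, 6 unknowns
b = rng.standard_normal(3)

x_hat = np.linalg.pinv(A) @ b      # the pseudoinverse (minimum-norm) solution

# Null-space basis of A: the last (6 - 3) rows of Vt from the full SVD
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[3:].T              # 6x3, and A @ null_basis is ~ 0

# Any other solution is x_hat plus a null-space component z
z = null_basis @ rng.standard_normal(3)
x_other = x_hat + z

print(np.allclose(A @ x_other, b))                        # still a solution
print(abs(x_hat @ z) < 1e-10)                             # orthogonal pieces
print(np.linalg.norm(x_other) >= np.linalg.norm(x_hat))   # Pythagoras
```

Since x_hat lives in the row space and z in the null space, ||x_other||^2 = ||x_hat||^2 + ||z||^2, so any other solution can only be longer.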
Thanks for your video. However, I'm confused about the proof of the theorem (why the norm is minimized). Can you give a simple proof? Thanks.
I believe it has to do with the Eckart-Young theorem. You can check it out on RUclips.
Hi Sir, I am quite curious about the name of the 'transparent' blackboard. I want to buy one; where can I get it? Thank you.
I think this is not a "transparent blackboard". It is just glass, and behind him is a black wall.
@@ahmaddarawshi91 I'm always curious how he writes. He must write everything in reverse; I mean, a "b" would look like a "d" from his perspective.
@@hanyingjiang6864 he writes normally as he would write on a whiteboard but then the video is flipped (reflected) digitally.
How does this board work?
Are you writing in the opposite direction?
Wow!! Thank you very much!
How can sigma be invertible when it's a rectangular matrix?
Great!!
What if we had a singular value of 0? Can't that happen when A has dependent columns? In that case we wouldn't have Sigma-inverse, correct? Do we first, I don't know, drop some columns of A so they are all independent?
Forget everything else: what visual setup do you use to write and record?
Wouldn't the error be ||x - x-dagger||_2, not just ||x-dagger||_2 by itself?
Is it not Sigma^-1 but Sigma^-T, i.e., the transpose of the inverse?
If we have the x and b vectors, how do we find the A matrix?
I don't know why this was never explained when I took econometrics.
This is cool, but the computational cost of determining the SVD is a nightmare.
Does this guy actually write backwards?
Why does U-transpose times U cancel to the identity? Don't they result in a square matrix???
Because U is an orthogonal matrix.
Under is dual to over, left is dual to right, up is dual to down, in is dual to out.
Thesis is dual to anti-thesis -- The Generalized or time independent Hegelian dialectic.
Alive is dual to not alive -- Schrodinger's/Hegel's cat.
Duality creates reality.
When you say thank you, I emphasize: thank you!!!! (not me) :)))