I find it very helpful to have an exercise video after each lecture so you actually know how to solve the problems.
Thanks, OCW.
I feel grateful that the internet exists; otherwise I would never have access to these fantastic lectures. Thank you, MIT!
Thanks for the help. I personally think you are the best instructor at MIT.
All of you guys are awesome. I have become a fan of this linear algebra tutorial series.
God, you saved my life. I couldn't go to class at my college because of COVID, and they went through all of this in linear algebra; now I understand. Thanks, MIT and the professor!
Excellent tutorial! Thank you David and MIT!
In the 4th and 5th editions of his Intro to Linear Algebra book, Professor Strang includes worked-out examples, like the ones in these recitation videos, in each chapter. It's nice to have a video record nonetheless.
David has done a great job tbh
Very clear, very concise. Thank you, David! Thank you, MIT!
Incidentally, he could've carried the row reduction further, to the full RREF, to get a matrix of the form [I F], in which I is the identity and F holds the coefficients of the free columns. The columns of [-F; I] then give your nullspace solutions directly, i.e., your basis for the orthogonal subspace.
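A minimal sympy sketch of that shortcut. The matrix A here is an assumption, reconstructed from the basis vectors (0, -1, 1, 0) and (-5, 1, 0, 1) quoted elsewhere in these comments:

```python
# Sketch of the [I F] -> [-F; I] shortcut. The matrix A is assumed
# (reconstructed from the basis vectors quoted in other comments).
import sympy as sp

A = sp.Matrix([[1, 2, 2, 3],
               [1, 3, 3, 2]])

R, pivots = A.rref()               # R = [I F]; pivots land in columns 0, 1
F = R[:, 2:]                       # coefficients of the free columns
basis = (-F).col_join(sp.eye(2))   # columns of [-F; I] span N(A) = S-perp

print(basis)      # columns (0, -1, 1, 0) and (-5, 1, 0, 1)
print(A * basis)  # zero matrix, so each column really lies in N(A)
```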
Exactly what I was wondering.
Yeah, simple and straightforward.
He has the cutest smile :)
damn, when he looks back at 8:13 lol
@ 5:58 is the first time he does it. And it's cheeky!
So why not just use the method Professor Strang taught in class: set the free variables [x3 x4] = I? The answer comes out immediately, and you never need to calculate something like -2(-a+b)-2a-3b, which is slow, convoluted, and easy to get wrong.
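Concretely, here is what that method looks like on the recitation's system, reconstructed (as an assumption) from the expression quoted above:

```latex
% Free-variable method, assuming the recitation's equations
% x_1 + 2x_2 + 2x_3 + 3x_4 = 0 and x_1 + 3x_2 + 3x_3 + 2x_4 = 0,
% which eliminate to x_2 = -x_3 + x_4 and x_1 = -2x_2 - 2x_3 - 3x_4.
\begin{aligned}
(x_3, x_4) = (1, 0):\quad & x_2 = -1,\ x_1 = -2(-1) - 2 - 0 = 0
  &&\Rightarrow (0, -1, 1, 0),\\
(x_3, x_4) = (0, 1):\quad & x_2 = 1,\ x_1 = -2(1) - 0 - 3 = -5
  &&\Rightarrow (-5, 1, 0, 1).
\end{aligned}
```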
Would it be the same to say that the matrix formed by combining S and the S complement has rank 4, and that you can therefore always rewrite any vector of R^4 within that space?
Yes. He pretty much said that in a different way, by saying that the 4x4 matrix composed of the S and S-perp basis vectors contains 4 linearly independent columns. Hence the rank is 4.
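A quick numerical check of that rank claim, with the four basis vectors assumed from the recitation and the comments here:

```python
# Check that the S and S-perp bases together have rank 4 and are
# mutually orthogonal (vectors assumed from surrounding comments).
import numpy as np

S = np.array([[1, 2, 2, 3],
              [1, 3, 3, 2]])
S_perp = np.array([[ 0, -1, 1, 0],
                   [-5,  1, 0, 1]])

M = np.vstack([S, S_perp])       # 4x4 matrix with all four vectors as rows
print(np.linalg.matrix_rank(M))  # 4, so they form a basis of R^4
print(S @ S_perp.T)              # zero matrix: every pair is orthogonal
```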
Use the beautiful fact discussed in Lecture 14 to solve part 1: "the nullspace and row space are orthogonal complements in R^4." The nullspace contains all vectors perpendicular to the row space, so the subspace perpendicular to S is simply the nullspace of A; now just give a basis for the nullspace of A.
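As a sketch of how short that makes part 1 (the matrix is assumed, reconstructed from the comments above):

```python
# S-perp = N(A), by the lecture-14 fact that the nullspace is the
# orthogonal complement of the row space. (Matrix A assumed.)
import sympy as sp

A = sp.Matrix([[1, 2, 2, 3],
               [1, 3, 3, 2]])
for v in A.nullspace():   # basis of N(A) = S-perp
    print(v.T)            # (0, -1, 1, 0) and (-5, 1, 0, 1)
```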
Wow, this is the mathematical background control theory uses to solve coupled differential equations. Am I right?
Nice lesson, but dude is jacked 😮
Good explanation.
want more examplessss
Anyone else think he has a nice voice? lol
Why didn't he take those vectors as columns?
Because the dot product is a row times a column, giving a number (a scalar). Writing the given vectors as the rows of A makes Ax = 0 say exactly that x is perpendicular to each of them; with the vectors as columns, the multiplication doesn't express those dot products. Hope you got it.
If you took the vectors as columns, you could then proceed to compute N(A transpose), which would be orthogonal to the column space of A, which by definition is the space spanned by the vectors you took as columns.
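A small sketch of that columns route, using the same (assumed) vectors. With them as the columns of B, the left nullspace N(B^T) is orthogonal to C(B) = S and comes out as the same complement:

```python
# Columns version: put the given vectors in the columns of B; then
# N(B^T) is orthogonal to C(B) = S. (Vectors assumed, as above.)
import sympy as sp

B = sp.Matrix([[1, 1],
               [2, 3],
               [2, 3],
               [3, 2]])        # the two given vectors, as columns

for v in B.T.nullspace():      # same basis as N(A) with rows
    print(v.T)                 # (0, -1, 1, 0) and (-5, 1, 0, 1)
```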
The explanation is good, but he could have reduced the matrix to RREF form, from which finding the nullspace is a simple step, rather than writing out the equations.
Good explanation
This guy sure does like to flex his muscles 😂 Just kidding, nice question.
This subject in the book is so confusing.
👀
?
Could S^⊥ be a subspace of c_1[0 -1 1 0] + c_2[-5 1 0 1]?
It wasn't specified that it has to be the largest space perpendicular to S, so can we also limit ourselves? Or does the notation (^⊥) automatically mean 'the largest'?
@CoeusQuantitative I meant that when they state S^⊥ is *a subspace* orthogonal to S, do they mean _any such subspace,_ or specifically the _one with the highest rank?_
@CoeusQuantitative Actually, I was mainly _asking_ what is the definition of S^⊥.
It's just that in ℝ^4, when S is two-dimensional, there are several other subspaces left. Assuming that two subspaces are orthogonal if all their vectors are orthogonal, even the null space would technically be ok.
But S^⊥ could also mean the one complementary to S, so that's why I was asking.
Indeed, the meaning of 'orthogonal complement' is perfectly clear. : )
@@mskiptr It's clear that the concept is not clear to you.
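For the record, the standard definition settles the thread: S^⊥ is defined as the set of *all* vectors orthogonal to every vector in S, so it is automatically the largest such subspace; a smaller subspace orthogonal to S (like the span of just one of the two vectors above) is merely contained in it.

```latex
% Standard definition of the orthogonal complement
S^{\perp} = \{\, v \in \mathbb{R}^{n} : v^{\mathsf{T}} s = 0
             \ \text{for all } s \in S \,\}
```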
Jacked nerds 🔥