Around 6:45-7:04 there are mistakes in the 3rd and 4th lines... Some of the B indexes are reversed.
It is true, but that is not what threw me off anyhow. I can usually handle small errors.
I think at 6:45 , the 2nd line should be [[B11 B21] [B12 B22]]....plz see
I am finding difficulty at this tensor product of two covectors point,plz explain
@soumyabratahazra7723 You're right, bro. Seems like you've understood it but aren't aware of it.
Yes, there is a mistake at 6:56 etc
As an aircraft engineer, I was only used to length-preserving transformations of three-dimensional vectors. But I recently started to think that tensors, which I had only a taste of long ago, may be a good tool for solving control problems of complex nonlinear systems. I decided to learn what they are, and these videos are amazing to me in many aspects. Everything explained is so easy to understand. Thank you!
This series is the best expanation of tensors and their notation, you make everything very clear.
I almost never comment on videos, but I just have to say: this series is awesome. Thanks a ton for putting your time and energy into producing these!
Thanks. They do take a lot of time, so I'm glad it was worth it.
I was so confused about how to formulate the metric tensor as being sorta double covectorish and now I finally understand! It was my reason for starting this series but now I'm gonna watch the whole thing
It's interesting that you say that because when I look back on these videos, I see the whole "row of rows" thing as being a bit ridiculous and convoluted. But I'm glad you got something out of it.
I tried to build a constitutive model for geotechnical material and promote it to general stress space, which requires a lot of tensor knowledge. I have been looking through different books and videos. These videos really informed me about what a tensor is and how it transforms. Your videos are highly appreciated!!!
I’ve literally been searching for an explanation like this for an entire week and now I’ve finally found it! Thank you so much!
Glad it helped!
Amazing job you have done here. This must have taken a long time to create. You put a lot of attention into using colors in a helpful manner when showing the math. Also, you have been so thorough in explaining all the concepts that those of us with patchy memories of our school math can still follow the video series. Very thorough and carefully planned.
Good pacing, editing, clear voice and everything. Very well done videos.
I feel like I had my first Eureka moment while watching this video. Thank you for the amazing content.
I like the idea of a "row of rows", which reveals an awkwardness I had never noticed before. Thanks.
I could not understand a single word of the last three or four videos, but strangely I am still enjoying them. Thank you!
Is that because of the audio quality, or just because of the nature of the math? Is there anything I can do to make it easier to understand?
eigenchris No, the sound is very good and the way that you explain it is great too. Actually, the math is hard to me. I need to watch the whole series from the beginning again. Thank you for taking your time to answer my silly comment and thank you for the classes. Obrigado.
I'm a little worried my tensor product videos haven't been as good as the previous ones... I'm just going to keep making them for now. Feel free to comment about anything that's unclear or confusing.
@@rlicinio1 And how thick is your exercise book on this series? You do understand that one viewing might not be enough and recalculating all the stuff along might not be sufficient either. Invent your own examples and do fill some pages with them. It might help as well to have some software that works with tensors (there are free packages). Good luck!
Thanks a lot for the video. It's literally a godsend for me, as I'm struggling with the tensor in general relativity course
Really nice the last part on row of rows!
Really enjoy your videos. Had the same question as below (at 6:45)... glad you said it was a typo. I thought I was very dense for about 30 minutes trying to figure out what I was missing. I feel oftentimes I am dense, but not that dense....
I apologize. That was a pretty bad typo. I'll pin a comment about the mistake so it's more visible. I wish RUclips hadn't removed the ability to make annotations.
Thanks for your response. I am a somewhat slow learner, and your explanations are so well presented and clear that they are helpful to me. At some point, will you cover how the metric tensor fits into Einstein's general relativity equations and how all this ties into curved-space geometry with changing derivatives and different bases? I am trying to understand some of this but it is very difficult for me....
My long-term plan is to talk about tensor calculus, curved spaces, and general relativity. But it will take a lot of time to get through all this... I have recently started posting videos on tensor calculus, and I mention the metric tensor and Einstein's equations briefly. If you like you can try asking me questions, but I am not very experienced with relativity yet.
Thank you very much. I will start studying your new videos.
Thank you sir I love your work I did understand tensor meaning from you for the first time
Around 6:50, we have a row of rows; that's a 1x2 matrix, right? When multiplied by a vector (a 2x1 matrix), it gives a 1x1 scalar, which is what we get when a covector acts on a vector. After that, the scalar is multiplied by another vector, (w1, w2); how does that give a scalar?
These are truly amazing.
Bro... this feels like completing my basic Linear Algebra knowledge from undergrad math. How did we learn none of this :(
Around 6:40, I cannot understand when you start to multiply B and the column vector v. There is a + sign that disappears on the next line. Could you explain more ? Thanks a lot.
Thanks, Professor. Keep it up; a very natural way of explaining.
I am struggling with this derivation between lines 2 and 3 where the BijVk products are being matrix multiplied and regrouped as a row "vector", with two entries of row vectors. I am aware that there is a typo in the B12V2 term which should be B12V1 and the B21V2 term should be B22V2.
I think you are right.
If Bilinear forms are covector-covector pairs, is there an equivalent for vector-vector pairs?
e.g. e₁ ⊗ e₂
I'm not sure there's a special name for them. There's a related concept called a "bivector", but that's made by combining two vectors using the wedge product, not the tensor product.
I have my Homer the poet moments, and I have my Homer Simpson moments. Homer Cross Product Homer. Maybe my mental battery will last longer this time.
Fantastic series of videos. Do you have anything on the affine connection? The symmetric, antisymmetric part... weyl vector and non-metricity? Thank you for your work!
I have been trying to see how the bilinear form, written as a row of rows, can work with the two forward transforms written as square matrices, to yield the "new" bilinear form. I can never get the same result as I get when I use the formula at 4:25. Using the formula, for B11~, the F terms would both have '1' in their lower index, but I always get some terms with a '2' there. Am I just being clumsy?
Your videos are excellent! I can be dense at times! Notwithstanding, the logical leap between 5:47 and 7:30 was huge! Your mistake did not particularly bother me; it is getting the correct stuff right that is hard.
Thanks a lot for the courses.
Can we say that linear maps themselves make a vector space over V x V*? And bilinear forms over V* x V*?
Yes
There is an error at 6:48. The third line in the second equation must be...
[B_11 * v1 + B_21 * v2, B_12 * v1 + B_22 * v2]
which can be proven with the first equation
v * B = [v1 * B_11 + v2 * B_21, v1 * B_12 + v2 * B_22]
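For what it's worth, this index pattern is easy to sanity-check numerically. Below is a quick NumPy sketch (my own example values, not from the video) verifying that left-multiplying B by the row vector v produces the components listed above:

```python
import numpy as np

# Hypothetical 2x2 bilinear form components and vector components,
# chosen arbitrarily just to check the index pattern.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # B[i-1][j-1] holds B_ij
v = np.array([5.0, 6.0])

# Left-multiplying by the row vector v gives
# [v1*B11 + v2*B21, v1*B12 + v2*B22]:
left = v @ B
expected = np.array([v[0] * B[0, 0] + v[1] * B[1, 0],
                     v[0] * B[0, 1] + v[1] * B[1, 1]])
# left == expected == [23.0, 34.0]
```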
Around 3:04 you say something along the lines of "obviously linear maps are matrices", but I think you meant the converse (at least in absolute, general terms)? I mean, it's true that every matrix is, and can be represented by, a linear map, but the converse is not always true.
You do not contradict yourself at 5:47 and at 7:43, but it can seem that way if you don't catch the transposing nature that takes place between B12 and B21 when vector v swaps sides. The mistakes aren't bad....it's the correct stuff that took me a while to understand.
Nevermind... no contradiction..but you did leap a bit.
Can someone explain how L(x) becomes Lee(x)? That is, why does the function variable in parentheses move specifically to the epsilon?
Hi
I'd like you to make a short video with a small example of its application.
Whether physics or some other area is not important.
Am I right in thinking that, in general, when you add two row vectors together, you simply add the corresponding terms to create a single row vector? In other words, is it entirely analogous to adding two column vectors together?
Yes. All array addition works element-by-element.
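A one-line check in NumPy (my own example, not from the video):

```python
import numpy as np

row_a = np.array([1.0, 2.0])
row_b = np.array([10.0, 20.0])

# Row addition works entry-by-entry, exactly like column addition:
total = row_a + row_b  # [11.0, 22.0]
```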
@@eigenchris Much obliged, Chris.
In quantum mechanics we tend to distribute the latter matrix into the former for the Kronecker product; how come here it's reversed?
I didn't think too hard about which matrix distributes into which when I made this 5-6 years ago. Just follow whichever convention your textbook uses.
Eigencris: a matrix is basically a row of columns
Me, a computer scientist: you mean I've been doing it backwards my whole life?
Jokes aside, would a column of rows and a row of columns be the same thing? ie, vⁱ ⊗ αⱼ = αⱼ ⊗ vⁱ? I'm guessing the tensor product doesn't commute vector with vector or covector with covector, but I feel like it would make sense that a covector and vector would commute in the tensor product.
The tensor product doesn't technically commute, although the spaces V⊗W and W⊗V are "very nearly the same"... the only difference is that the indices for the tensors are reversed. In a certain sense, one has the "transposed matrices" of the other.
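A quick numerical illustration of the "transposed matrices" point (my own sketch, using NumPy's outer product for the component arrays):

```python
import numpy as np

v = np.array([1.0, 2.0])  # a vector in V
w = np.array([3.0, 4.0])  # a vector in W

# Component arrays of v ⊗ w and w ⊗ v:
vw = np.outer(v, w)
wv = np.outer(w, v)

# Same numbers, but with the indices reversed: one array is the
# transpose of the other, so V⊗W and W⊗V are "very nearly the same".
```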
@@eigenchris wow that was a very fast response, thank you! I think that helps a bit. I think one thing that keeps holding me back here is I'm coming from a computer science background so I subconsciously assume a vector is an array.
@@rubixtheslime Yeah, normally CS people see "vectors" as just a data structure like a list. For mathematicians, vectors are more closely related to geometry, because they need to keep track of how to change coordinates. If you like, you can continue thinking of a vector as a column, but one that also gets left-multiplied by a row of basis vectors. This way, each basis vector gets matched with one of the numbers in the column, giving the "sum"-style vectors you see in my videos. So you can think of the vector-as-column as still being sort of true, but it's only half the story (the other half being the basis vectors).
Everybody seems to enjoy this video but I don't get so much from it:
What are bilinear forms useful for?
Why are bilinear forms linear combinations of covector-covector pairs (and not vector-vector, for example)?
This is largely a pure math video, explaining the mathematical concept of a bilinear map in pure math terms. I'm talking about any mathematical function that eats two vectors and outputs a scalar. The metric tensor, or dot product (which lets you measure lengths and angles in space), is one example of a function that eats two vectors and outputs a scalar. In Hamiltonian mechanics, there is a bilinear map called the "symplectic form" which helps you write out relationships between position vectors and momentum vectors.
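To make "eats two vectors, outputs a scalar" concrete, here's a minimal NumPy sketch (my own example; the dot product is just the bilinear form whose component matrix is the identity, assuming an orthonormal basis):

```python
import numpy as np

def bilinear(B, v, w):
    """Evaluate the bilinear form: B(v, w) = v^T B w, a scalar."""
    return v @ B @ w

g = np.eye(2)               # dot-product components in an orthonormal basis
v = np.array([3.0, 4.0])
w = np.array([1.0, 0.0])

length_sq = bilinear(g, v, v)   # |v|^2 = 25.0
# Linearity in each slot, e.g. B(2v, w) == 2 * B(v, w)
```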
At the 6:27 mark, in the last line... the operation is mixing tensor products with matrix multiplication, right?
Yeah, there should probably be a "kronecker product" operator between the column vectors.
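For reference, the Kronecker product of two 2-entry columns gives a 4-entry column. A small NumPy sketch (my own example values):

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])

# np.kron stacks copies of w scaled by each entry of v:
# [v1*w1, v1*w2, v2*w1, v2*w2]
col = np.kron(v, w)  # [3., 4., 6., 8.]
```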
A tensor product between columns yields a 4-entry column, and then the product combining the row and column gives a real number... that is an inner product (and a tensor product as well)... am I right?
I derived all these formulas considering "e" and "epsilon" as covariant and contravariant basis vectors. I'm not completely aware of the benefits of the covector notation.
Are you asking what covectors are for? I'm not clear on the question.
@@eigenchris No. It is just a different notation that I saw in a book which did not consider the covector as a "new" mathematical entity but as another basis vector called covariant. So they are also considered simply as vectors.
www.seas.upenn.edu/~amyers/DualBasis.pdf
I have a question. When a basis covector acts on a basis vector (say (1 0) and (1 0)^T), you get the Kronecker delta, like you defined somewhere in the beginning of the series. But when these two get multiplied by the tensor product, you get a matrix with the rows (1 0) and (0 0), and that is not the Kronecker tensor, right? I am a little confused about the distinction between these two things, especially when the notation seems to be the same.
I'm starting to regret writing it the way I did. Anytime vectors and covectors are written next to each other without function notation (using the brackets), you should imagine a circle-times symbol between them. I'm hoping the next video (#13) might clear some of this up. Let me know if you're still confused.
OK thanks!
The right one is then
[[B11 B21] [B12 B22]]
OK. But there is still a problem.
When the bilinear form is multiplied with vectors v & w, I found that the result is
B11v1w1 + B12v2w1 + B21v1w2 + B22v2w2
The problem is with the two middle terms. It seems the indexes don't match up.
Shouldn't they be B12v1w2 + B21v2w1 ??
Yes, your answer agrees with @Aliyu Bagudu.
Great tutorial, thank you so much for your kindness🙏. I am new to tensors, but have decent general mathematical intuition (or so I think), and this is a perfect fit for me...
I did have a slightly difficult time digesting this video. I couldn't quite internalize the logic at 04:50. Again, at 6:23, wouldn't a "row of rows" be the same as a row? I finally visualized bilinear form in the following manner, not sure if it makes sense.
B(v, w) = (B_ij epsilon^i epsilon^j) . (v w)
The first term of the dot product is a weighted sum of tensor products of 2d row vectors (co-vectors) giving the 4d co-vector [B11 B12 B21 B22]. And the second term is the tensor product of column vectors v and w giving a 4d column vector. This inner product in 4d would give the same result as that of the earlier quadratic operation using 2d matrix. i.e., the quadratic operation in n-dimensional space seems to be equivalent to a corresponding linear operation in n^2 dimensional space.
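That equivalence is easy to check numerically. Here is a small NumPy sketch (my own example values) comparing the 2-d quadratic evaluation v^T B w with the 4-d linear evaluation, where the flattened B acts on the Kronecker product of v and w:

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])
w = np.array([7.0, 8.0])

quadratic = v @ B @ w                   # bilinear evaluation in 2 dimensions
linear = B.flatten() @ np.kron(v, w)    # covector acting on a vector in 4 dimensions

# Both give the same scalar, so the n-dimensional quadratic operation
# matches a linear operation in n^2 dimensions.
```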
Watching the next video, I am not sure if I was mixing up tensor products and Kronecker products above. In fact, I am still not really sure what the difference is...
Am I correct that wherever I see two basis vector symbols together, I should envision the circle cross tensor multiplication symbol?
For these videos, yes. This is a notation I made up, however, and you probably won't see it anywhere else.
I guess I find that a little confusing. I will go back and envision the circle product symbols so I get used to it as well as knowing your symbol.
Sorry you find it confusing. I'll have to reconsider how I write in my future videos.
If two basis symbols are next to each other in any other situation... a situation that is NOT a tensor product... that would be confusing. I am still getting used to this. Sometimes the "e's" are paired with brackets and sometimes not, etc. I need to work on this. I DO like how you keep reviewing with summaries after each portion.
As I look over my notebook where I transcribe your lecture notes, I see ee or e(e) or e(e)e. (I cannot insert epsilon symbols here.) Is there a difference between these notations? Is this where the circle-multiplication symbol should be? I am trying here but finding that I am unsure as to whether I am confused. That sounds strange, I realize, but I wish I could see a clarification of this with a "translation" between your notation and conventional notation. Thanks for all your work.
Previous mic was much better
At 6:22, why are these 3 objects not associative?
Because of matrix multiplication. We multiply by rows and columns (in order), not in any other way.
Good saga!
Any upcoming intuitions on physics applications? Especially on electromagnetism and the vector product?
Missing more good examples in the vids.
ty!
I honestly wasn't planning on covering much physics in the near future... partly because I don't feel qualified/educated enough to cover it properly. Is there anything in particular you wanted to know about? I can maybe direct you to other places online to learn about it.
General Relativity to explain its main formula - maybe also show using geo examples on how the metric tensor can be applied to a few simple examples.
@7:04, the second line after "awkward" is wrong, and then the subsequent lines are correct. Is that true?
I believe I wrote that line the way I wanted... The first element of the row is [B11 B12] and that gets multiplied by the first element of the column V1. The second element of the row is [B21 B22] and that gets multiplied by the second element of the column V2.
Do you disagree? This is sort of something I made up myself. I apologize if it's confusing.
How would you then produce the subsequent line from this?
Distribute v1 to get [B11v1 B12v1] and distribute v2 to get [B21v2 B22v2]. After that, add the vectors together.
I see now that I probably should have included an extra line to make the steps clear. I apologize for that.
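The distribute-then-add step can be written out as a short NumPy check (my own sketch of the steps described above, with made-up example values):

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v1, v2 = 5.0, 6.0

row1 = v1 * B[0]        # distribute v1: [B11*v1, B12*v1]
row2 = v2 * B[1]        # distribute v2: [B21*v2, B22*v2]
total = row1 + row2     # add the rows element-by-element

# Same result as the usual row-vector-times-matrix product:
check = np.array([v1, v2]) @ B
```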
sorry actually @7:04
Look at it carefully; you made a mistake in the indices. The 3rd line after "awkward" should have [(B11V1 + B21V2) (B12V1 + B22V2)].
You sound younger than in the last video.
Thank you but we want more examples
I plan on including more examples in my future videos. I also plan on uploading another video on the tensor product in the next month or two to help explain it better.
This is the least comprehensible video of the series so far for me. Even the previous ones started at least with a problem, not that practical, but at least a clear one... but here, just the formal blah-blah. OK, I managed to get most of it, but how should I exploit my newly acquired knowledge? Please, if possible, start with a practical approach, after that let's have a numerical example, and then we can go formal.
Great series but dude, you're making so many mistakes!
Yeah, I'm sorry for that. I try to correct them in the video description.
From your video, it seems that a tensor is only a notational simplification, a tool that facilitates memorizing, or just a personal notational preference. But a tensor is more than that! For example, tensor contraction (a very important concept for tensors, but you did not mention a word about it) can be used to prove that the trace of a linear transformation does not depend on the choice of basis. Tensors can also be used in (nonlinear higher-order) label spreading in machine learning. But how can your video help in these use cases? It can do nothing but waste the audience's time. Don't harm the audience. I suggest you delete your videos, which is perhaps the best thing these videos can do for learners of tensors.