When I multiply 1.0 by the e1-tilde vector and add 2.0 times the e2-tilde vector, I get the vector (1.0, 1.5). When I use the e vectors, I get the vector (1.0, 2.0). What am I doing wrong? Thanks.
Are you sure the F and B matrices are wrong in the previous video?! It looks to me like they were right and the ones in this video are wrong. If we consider the old and new components as column vectors, then to get the new component column vector we need to multiply F by the old component column vector, so the components related to e1 and e2 need to be on the same row, so that e1-tilde is composed of e1 and e2. I'm not sure if I made my explanation clear.
Column vectors are for vector COMPONENTS, so in order to get the matrix multiplication rule to work, the F and B matrices must be arranged as we see in this video. e1 and e2 should not be arranged in a column vector, because they are actual vectors, not vector components.
Can someone help me understand how v = 1e1 + 2e2 around 7:24? Are the units in the two coordinate systems also different, along with the orientation? Also, how come the units in the x and y directions are different in the new coordinate system?
Do you know how to add vector arrows geometrically using "tip to tail addition"? You put the tail of one vector on the tip of the next. If you do this for e1~ + e2~ + e2~, the resulting arrow is v. Does this make sense?
@@eigenchris Thank you for your time, sir. I think I didn't make my question clear. In the figure, the 1e-tilde vector is bigger than the 1e vector... So my first question is: how is it that the base units in the two systems are different? Second, in the first basis the e1 vector has the same magnitude as the e2 vector... However, in the second system e1-tilde looks way bigger than e2-tilde... Are these units different?
The "full definition 3" is kind of abstract and I wasn't sure if most viewers would want to see it. If you look up "Vector Space" on Wikipedia you can see the definition in full. It defines a vector space as two sets (V and S) and then defines the + and . operations through properties like (a+b)+c = a+(b+c) and a+b=b+a. So the process is to define the vector space first, and then just call members of the set V the "vectors". So it is not circular.
Vectors are invariant under coordinate change. Of course, that fact stems from the very definition of a vector, which is a fixed magnitude associated with a fixed direction. To say that vectors are invariant when we change the coordinates relative to which we define them is also to say that the vector components are variant when we change the coordinate system. Again, this fact stems from the very definition of a coordinate system; that is, it is just a frame of reference. Since vector components have no absolute value across all frames of reference, such components only make sense relative to a specific coordinate system. For example, I am 50 years old. My oldest son is 25 years old and my youngest son is 10. I am twice as old as my oldest son, but five times as old as my youngest son. My age didn't change, but the way I described my age did, depending on which son's age I compared mine to.
Yup. A lot of math education that I've seen tends to take simple concepts and then makes them seem complicated. Ultimately, vectors shouldn't care about components at all, since they become invalidated the moment you shift your perspective.
@@angeldude101 Exactly. "In terms of" means relative to whatever the "of" is. "I am 6 feet 2 inches" means the same as "I am 74 inches", the same as roughly 74 × 2.54 ≈ 188 cm.
Hello Mr. Chris. You may remember me as the 12-year-old full of questions. If it won't bother you, will you have time to answer one of the biggest questions I've had for the past few days? It is not directly related to your video series, but I thought you might be able to answer this: has anyone ever solved Einstein's field equations (Google won't give me an answer)? And even if there is a solution, is it a straight answer or just a simplified algebraic equation? Thank you for your time! by some 12 year old
Yes, Einstein's Equations (EE) have been solved in some specific situations. When we say someone "solves" the EEs for a given situation, we mean they have found the "metric tensor" for a given distribution of mass and energy in some region of spacetime. The metric tensor is the lower-case "g" in the equations, and it tells us how spacetime curves, and therefore how gravity works for that mass-energy distribution (this is because gravity is the result of spacetime curvature). Using the metric, you can calculate the curved paths that light will take in spacetime... for example, you can figure out how light will bend around black holes. (So, in some sense, once you have solved the EEs and gotten the metric, you need to do more work with the metric to actually find how light and mass move.) The EEs were published by Einstein in 1915, and the first exact non-trivial solution was published in 1916. This was the Schwarzschild solution, a vacuum solution for the region outside a spherical mass like a star or black hole. It predicts that a black hole will have an event horizon and that light cannot escape once it goes inside. You can find links to other solution types on this page: en.wikipedia.org/wiki/Exact_solutions_in_general_relativity
I have a little confusion. The indices of the dual basis are raised; does that mean they are contravariant relative to the dual vector (covector) components?
When the basis gets twice as big, the dual basis gets half as dense. So the dual basis is contravariant, relative to basis vectors. Meanwhile, if the basis vectors get twice as big, covector components become twice as big, because the vectors will pierce twice as many stack lines. So covector components are covariant relative to basis vectors. So the dual (covector) basis and covector components transform in opposite ways.
@@muhammadtayyab4255 The dual basis is defined so that ϵ^i(e_i) = 1. If e_i gets twice as big, then ϵ^i must get half as dense to keep this formula true (so that the e_i arrow only pierces 1 stack line of ϵ^i).
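A rough numpy sketch of this relationship (the basis values are made-up examples; stacking the basis vectors as the columns of a matrix, the dual basis covectors show up as the rows of its inverse):

```python
import numpy as np

# Basis vectors as columns of E (made-up example values)
E = np.array([[2.0, -0.5],
              [1.0,  0.25]])

# Rows of the inverse act as the dual basis: eps^i(e_j) = delta^i_j
Eps = np.linalg.inv(E)
print(Eps @ E)  # identity matrix

# Doubling the basis vectors halves the dual basis (contravariant behaviour)
Eps_doubled = np.linalg.inv(2 * E)
print(np.allclose(Eps_doubled, Eps / 2))  # True
```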
Basically, everything in that video is "turned on its side". If you want all the standard matrix-vector multiplication to work out as expected, you need to "flip" all the indexes and matrices for it to work.
Many applications in math, physics, and computer programming don't require the ideas of vector multiplication and division. There are ways to define a "vector multiplication" rule, but there is more than one way to do it. There is the famous "vector cross product", but it only works in 3 dimensions. Another way to invent multiplication involves turning a vector space into a "geometric algebra", which you can google. Lie algebras are another way to invent a multiplication rule for vector spaces, but these are more advanced topics. Anyway, the answer is that vector spaces are interesting enough without inventing multiplication, so we don't force it as a requirement.
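For instance, the cross product mentioned above is easy to try in numpy (a throwaway example; note it is only defined for 3D vectors):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

# One possible "vector multiplication": the 3D cross product,
# which returns a vector perpendicular to both inputs.
print(np.cross(a, b))  # [0. 0. 1.]
```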
chris, are vector spaces different if we choose different basis vectors like (x,y) and (r,theta) for the same 2-d space? coordinate space is also a vector space right?
A 2D plane is a vector space and you can choose many different basis vectors for it. However, there is a difference between basis vectors and coordinate systems, especially when it comes to curved coordinates like polar coordinates. In rectangular coordinates, the coordinate lines always point in the same direction, just like a basis vector points in the same direction. But in curved coordinates the coordinate lines change direction. You might want to watch videos 1-2 in my "Tensor Calculus" series to learn more.
@@eigenchris Is dimensionality the only factor to distinguish between two vector spaces? Like a vector in 2-d can be expanded in any 2-d basis.....What about position and momentum vector space in 2-d....aren't they different from a 2-d plane vector space?
@@priyankaragini5883 For finite dimensional vector spaces, dimensionality is the only factor that distinguishes them, as you said. You can always invent an invertible linear map that goes from one vector space to another of the same dimension (or in the reverse direction), and for this reason they are basically the same. A position-momentum 2d vector space is what's called a "phase space", which is what you'd study in Hamiltonian mechanics. I think this is an average 2D vector space, but it also has an additional function called a "symplectic form" that helps describe the physics of the space. It's the symplectic form that makes this 2D vector space "different" from an average 2D vector space. I haven't studied this in detail though so I can't tell you much more.
Hi, great series! But actually vectors are not geometric objects, although they can be interpreted and visualized as such. They are algebraic objects... that's why it's called linear algebra!
bro the cliffhangers on this series are crazy
Agreed!
with this knowledge I shall conquer the world
I really appreciate that you are explaining vector concepts with human language.
This Vector thing has been haunting me for years... You made it damn very clear.. Thanks Bro... High regards, from India....
I know the feeling. The first time I was exposed to vectors was in high school. Instructors didn't even bother to explain it back then. They just displayed a column of numbers within a set of braces and said, "so this vector, blah blah blah..." Like, WTF! Didn't they think we needed an explanation before using it?
This is the best series of videos on tensors that I have ever come across! I really enjoy watching it and hope it will bring me to the next level of tensor calculus and then differential geometry! Finally, I hope I can understand what Einstein's equation is! Absolutely fabulous video series for beginners with basic mathematical knowledge! 💪💪💪
just curious, how did your journey go?
@@ehigieomokaro6462 Going very slowly. One of the reasons is the Ukraine 🇺🇦 war, which uses up a lot of my time following the news! 😫
"You'll have to watch the next video"
Biggest anime suspense ending
you are a phenomenal teacher. People explaining linear algebra as if it was some voodoo dark art scared me away from cool physics at first. I just wish I had you as a teacher at uni, would have saved me some time.
I am really grateful for these. You don't know how much these videos are helpful to someone who really needs to clarify his doubts. Thanks man.
I had trouble with tensors in linear algebra years ago. This would have saved me. It's in the Goldilocks zone between slow and difficult.
Thank you
THANK YOU SO MUCH for making tensor calculus crystal clear !!!
Hi Eigenchris, just so you know this video is still changing lives. Thanks. Like the guy below said, with this knowledge I'll rule the world!
Just wanted to agree w/ everybody, these are great videos. Thank you for putting so much time and effort into them! Your knowledge shows through.
Excellent videos! Thank you. I like the definitions, in increasing order of mathematical abstraction.
Note that vector spaces also support distribution of scalars across vector sums.
Sir, you are the best on tensors. The lecture is short, to the point, excellent.
Thank you very much for such brilliant lectures on tensors ;)
You are the best teacher I have ever had. Thank you.
Like the way you explain the difference between vectors and vector components
Best teacher ever, thank God for bringing you to me.
You are one of the best at explaining this topic. Congratulations. I will follow you from Spain.
Thank you very much for making these videos. They are very helpful for understanding tensors.
Awesome work Chris. Exceptional. ✔
Brilliant! I've tried a few times to learn tensors from text books but they just present the results without any explanation - hopeless!
Really the best anime series I've ever watched
Thanks for the videos. They are understandable
Note to future self rewatching this video: When he says "the forward transformation BRINGS US from the old basis to the new basis", he does not mean it brings VECTORS represented in the old basis to the new, he means it allows us to use the old basis vectors to write the new ones.
Just found this channel but already one of my favorites for math. Also how about these cliff hangers
Great review for this 60 year old topologist.
He: "In doing this I realized the previous video has some errors in it. Probably won't bother fixing it unless these get more than 100 views."
Me: Bro, you have no idea how much good you are doing for the students out here. You say "100 views"?! Just check the count; you have saved that many students' careers. YOU, yes YOU, ARE A SAVIOUR!
Dear Chris, I have now done the calculation on my own, and it is not true that the "forward transformation" brings us from the old basis into the new basis (and the other way around) while it is the contrary for all other vectors. You use different directions of thinking for basis vectors on one hand and all other vectors on the other hand, and this is how your "contradiction" between the transformation of basis vectors and of all other vectors arises. And you build up the entire tensor theory on this arbitrarily and willingly induced pseudo-contradiction. I am excited to see how this will continue. I am currently at the video "Tensors for Beginners 10".
Definition 3 -- a vector is an element of a vector space -- is the one you'd better adopt if you plan on learning any quantum mechanics or advanced math like Lie algebras, etc.
It's not a definition, though. It's a property.
This guy literally leaving us on cliffhanger 😂😂
In simple terms, the backward transformation matrix expresses the old basis in terms of the new basis. Therefore, to convert vector components from the old basis to the new basis, we use the backward transformation matrix.
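A quick numpy sketch of this (assuming, as in the video's example, that the columns of F hold the new basis vectors written in terms of the old basis):

```python
import numpy as np

# Columns of F: e1~ = 2*e1 + 1*e2,  e2~ = -1/2*e1 + 1/4*e2
F = np.array([[2.0, -0.5],
              [1.0,  0.25]])
B = np.linalg.inv(F)  # backward transformation

v_old = np.array([1.0, 1.5])  # components of v in the old basis
v_new = B @ v_old             # components transform with B, not F
print(v_new)                  # [1. 2.]  i.e. v = 1*e1~ + 2*e2~
```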
1:15: “These lists of numbers are vector components and not the vectors themselves. Vectors are invariant under a change of coordinates but vector components are not.”
But the abstract definition of “vector” at 4:35 does not depend on “components” or “coordinates systems”. In fact, “the set of all lists of N numbers” is a vector space, by that definition, and a list of N numbers IS a vector in that space.
I think that the problem is that physicists are interested in specific kinds of vectors and tend to say that anything else is not a vector. But what they mean is that anything else is not a vector or is not the kind of vector that they want to be included in their particular redefinition of “vector”.
@8:50 Anyway, the backward matrix is a linear map; it can't change the basis vectors. So the output vector after applying B to components in the old basis can't be components in the new basis. The output is also in the old basis, not in the new transformed one.
Excellent video thank you so much.
Vector spaces are spaces of vectors, plus some rules. Vectors are things in vector spaces. There is a loop here. The first two definitions, even though they may not be complete and suitable for everything, are better.
Isn't the "opposing logic" you mentioned just the equality between the matrix transformation and the vector you want to reach in the standard coordinate system?
If we want to see which coordinates in the new coordinate system form the original coordinates, we can use the inverse matrix, and we know that B and F are inverses of each other:
F · [x, y] = [1, 1.5]
B · F · [x, y] = B · [1, 1.5]
[x, y] = B · [1, 1.5]
I don't think it is random 😁
Hi Chris, is there a video where you go into further details about what it means for a vector to be Euclidean, and maybe also discuss other types of vectors ?
A common example is functions. You can add and scale functions, and put them together in linear combinations. A famous example is the Fourier series, where you write a function as a linear combination of sine and cosine waves.
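A rough sketch of this idea (a partial Fourier sum for a square wave; the resolution and number of terms are arbitrary choices):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)

# A function built as a linear combination of sine waves:
# square wave ~ (4/pi) * sum over odd n of sin(n*x)/n
f = sum((4 / np.pi) * np.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(50))

# Near x = pi/2 the partial sum is close to the square wave's value of 1
print(f[len(x) // 4])
```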
I'm finding your videos really helpful, and explaining at an excellent pace. Would you be able to perhaps write out the errors in the video description if you still don't want to edit the video itself?
Thanks for the compliments! I am very glad to hear these have helped.
I'll make sure something gets done about the errors... either re-uploading the video, or adding in some annotations...
The main error is that the matrices in the previous video are flipped sideways. It's not major, but it might confuse some people.
Whao a cliffhanger
Thanks 😃
Thank you for the series. You are the best ✌🏻🌸. Maybe a stupid question. What is the reason for writing vectors as columns and not rows?
The choice is arbitrary. But as you will see later, row vectors transform in the opposite way that column vectors do. So we consider them different objects.
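A small numpy check of this (reusing the example F and B matrices from the video; the covector components here are made up for illustration):

```python
import numpy as np

F = np.array([[2.0, -0.5],
              [1.0,  0.25]])   # forward transformation
B = np.linalg.inv(F)           # backward transformation

v = np.array([1.0, 1.5])       # column-vector components
a = np.array([[3.0, -2.0]])    # row-vector (covector) components, made up

v_new = B @ v                  # column components transform with B...
a_new = a @ F                  # ...row components transform with F

# The pairing a(v) stays the same in both bases:
print(a @ v, a_new @ v_new)
```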
Thanks a lot for your reply 😊🌸
Maybe the best video series about it (I'm just at 3, so let's see :p). Hard to find stuff like this next to university materials, which expect already-advanced knowledge XD.
My brain hurts trying to keep up with this. I don't even have any maths background whatsoever.
The best definition for a vector is member of vector space which is a set of vectors?
Surely it is possible to see that definition is a bit circular?
What is circular? It is something that has a circular shape. : )
It's probably better to learn the other two definitions first. A vector space is a collection of "things" that can be added together and scaled. The "things" are called vectors.
@@eigenchris Thanks for your response. I appreciate your videos and that you remain active here. I am on my third go at your Tensor for Beginners videos. They are very helpful.
Can a Euclidean vector be described by a column of numbers enveloped by braces?
You are just awesome
Explained i and j (and k). Thanks.
Hello, just a question and a suggestion at the same time: mark whether the vector is a column or a row, and whether it multiplies the matrix on the right or the left, so the matrix multiplication is clear. In the last video, with the correction, I didn't get why you changed it until I realized you were taking the vector as a row, not a column, and on the left.
Excellent video, but I don't understand why at 8:22 you obtain a vector completely different from the vector [1 2] in the tilde basis. In fact, when you apply the backward transformation B to the vector [1 2], you obtain the original vector [1 1.5] in the old basis. Can you explain this to me? Please.
i might have the reason in the comment after yours😊
In short, vectors obey the principle of linearity:
a(F+G) = aF + aG, where F, G are vectors and a is a scalar
very grateful, thanks
Please upload general theory of relativity video.
No one, the best.
THANKS !
I'm studying Vectors and Tensors (Fleisch). I read that vector components can be defined by parallel or perpendicular projection. Parallel-projection components correctly add up to the original vector, but perpendicular ones do not.
My question is: if perpendicular ones do not add up correctly, why are they a thing at all? It just makes us create the dual basis vectors as a workaround. I could additionally make up my own projection that would force me to create some workaround. Let's say we project at a 45-degree angle. I am sure I could create a method to make those components add up to the original vector. But why do any of this if parallel projection works fine? Maybe we only need one?
Fleisch teaches this a bit differently than me. In my case, with vectors I ALWAYS use the parallel method to get vector components. In videos 4-6 I introduce a new type of object called a covector, which is visualized as a stack instead of an arrow. With covectors I ALWAYS use the perpendicular method to get the components (you'll see I do this by counting the number of stack lines that an arrow pierces). Covectors can be useful because some things in physics are calculated with projections. For example, the work done on an object by a force equals the projection of the force vector onto the object's displacement vector. In my videos, force would be a covector and you count the number of stack lines that the displacement vector pierces.
@@eigenchris Sorry for not grasping this. You gave the example of work as a reason to use perpendicular projection (covectors), but I can more easily parallel-project to get the contravariant components... I drew an example but unfortunately I cannot add an image to my reply. I did parallel projection and the components add up to the original vector. I suspect I can do that in any local curvature scenario. What might I be missing here?
@@thevegg3275 If you want you can upload your picture to a photo-sharing website like imgur and then paste the link here. Normally the equation for work is (Force) · (Displacement), where the dot product gives perpendicular projection. You can instead interpret Force as a covector stack and count how many stack lines the displacement vector pierces.
@@eigenchris imgur.com/a/7fucvtN
@@thevegg3275 I guess you don't strictly *need* covectors in this example I gave for Work = (Force) · (Displacement). It is just a simple example where you could use a covector if you want. In physics you can always convert between covariant components and contravariant components, as I explain in video 16 of this series. Some more complicated objects in physics (particularly relativity) are "2-covariant" matrices. These include the metric tensor, the electromagnetic tensor, and the Ricci curvature tensor. In order to understand these, you need to know how covariant components transform. But these are more advanced applications.
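A sketch of the covariant/contravariant conversion mentioned here (the basis vectors, and hence the metric, are made-up example values):

```python
import numpy as np

# A non-orthonormal basis (made-up values); the metric is g_ij = e_i . e_j
e1 = np.array([2.0, 1.0])
e2 = np.array([-0.5, 0.25])
g = np.array([[e1 @ e1, e1 @ e2],
              [e2 @ e1, e2 @ e2]])

v_contra = np.array([1.0, 2.0])       # contravariant components v^i
v_co = g @ v_contra                   # lower the index: v_i = g_ij v^j
v_back = np.linalg.inv(g) @ v_co      # inverse metric raises it again
print(np.allclose(v_back, v_contra))  # True
```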
thank you
May I ask why we "measure" 2d tensors with 2d coordinates, while we measure 1d tensors with 2d coordinates?
You could measure a 1D tensor (vector) in any dimensional space. I'm just using 2D space to keep things simple.
Wasn't the flipped matrix from the last video the right one, and this one wrong? If e1* = 2e1 + 1e2 is right, then the F in this video should be wrong, because if you multiply (e1; e2) by F, for e1* you'll get e1* = 2e1 - 1/2 e2, if I got the matrix multiplication rule correct. So isn't the F in this video flipped, and the F of the last video right?
This can become confusing when you consider the set of column vectors as a vector space, in that case a vector is the same as its components (with respect to the natural basis).
8:50 in fact the transpose of F brings us from old to new basis (see correction video)
Trying to understand deep learning and AI brings me here. 😁
A few people have looked at this series because of the ML library "TensorFlow". This series probably won't help you with that. Sorry.
@@eigenchris Thank you for your quick comment. I'm not touching any code or any ML library yet. I'm focusing on learning mathematical concepts in my free time, and I wanted a clear explanation of tensors, which you've explained very well! I vaguely remember these concepts from when I was at university.
@@eigenchris but tensor products *are* used in quantum computing, so this series is still relevant to the field of computer science.
Thanks a lot!
The previous video has 50k views now; can you say where the errors are?
Check this video he uploaded a while back: ruclips.net/video/ipRrCPvftTk/видео.html
It's in the updated playlist: ruclips.net/p/PLJHszsWbB6hrkmmq57lX8BV-o-YIOFsiG
It's not clear whether, when using the new coordinate system, you determine the vector values by drawing perpendiculars to the respective axes or lines parallel to the new axes. Hope that's clear. In Euclidean geometry you do both, which, it turns out, are the same thing.
The method I use is the "parallel" method. You just count the number of basis vectors needed to construct the vector in question, and those are the components.
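The "parallel" method described above amounts to solving a small linear system: find the coefficients c1, c2 with v = c1·e1 + c2·e2. Here's a quick 2D sketch (the basis numbers are invented purely for illustration, not taken from the video):

```python
# Find the components of v in the basis {e1, e2} by solving
# v = c1*e1 + c2*e2, using Cramer's rule in 2D.
def components_2d(e1, e2, v):
    det = e1[0] * e2[1] - e1[1] * e2[0]   # must be nonzero for a valid basis
    c1 = (v[0] * e2[1] - v[1] * e2[0]) / det
    c2 = (e1[0] * v[1] - e1[1] * v[0]) / det
    return c1, c2

# Example basis (made up for illustration).
e1, e2 = (2.0, 1.0), (-0.5, 0.25)
v = (1.5, 1.25)   # happens to equal 1*e1 + 1*e2

assert components_2d(e1, e2, v) == (1.0, 1.0)
```

"Counting how many basis vectors you need" and "solving the linear system" are the same operation; the geometric picture just makes it visible.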
When I multiply 1.0 * the e1 tilde vector and add 2.0 * the e2 tilde vector, I get the vector (1.0, 1.5). When I use the e vectors, I get the vector (1.0, 2.0). What am I doing wrong? Thanks.
Are you sure that the F and B matrices are wrong in the previous video?! It looks to me like they were right, but the ones in this video are wrong. Because if we consider the old and new indices as column vectors, then to get the new-indices column vector we need to multiply F by the old-indices column vector, so we need the components related to e1 and e2 on the same row, so that e1 (tilde) is composed of e1 and e2. I'm not sure if I made my explanation clear.
Column vectors are for vector COMPONENTS, so in order to get the matrix multiplication rule to work, the F and B matrices must be arranged as we see in this video.
e1 and e2 should not be arranged in a column vector because they are actual vectors, not vector components.
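To make the convention concrete, here's a sketch using the 2x2 example discussed in these comments (ẽ1 = 2e1 + 1e2, ẽ2 = -1/2 e1 + 1/4 e2; treat the numbers as illustrative). The forward matrix F holds the new basis vectors in its columns, while vector COMPONENTS transform with the backward matrix B = F⁻¹, which is the "contravariant" behaviour:

```python
# Columns of F = new basis vectors written in the old basis.
F = [[2.0, -0.5],
     [1.0,  0.25]]

# B = F^{-1}, via the 2x2 inverse formula.
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
B = [[ F[1][1] / det, -F[0][1] / det],
     [-F[1][0] / det,  F[0][0] / det]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

v_old = [1.0, 2.0]          # components in the old basis
v_new = mat_vec(B, v_old)   # components in the new basis (use B, not F!)

# Transforming back with F recovers the original components.
assert mat_vec(F, v_new) == v_old
```

Notice the basis vectors themselves never appear in a column here; only components do, which is exactly why the matrices had to be "flipped" relative to the previous video.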
ok thanks!
Can someone help me understand how v = 1e1 + 2e2 around 7:24? Are the units in the two coordinate systems also different, along with the orientation? Also, how come the units in the x and y directions are different in the new coordinate system?
Do you know how to add vector arrows geometrically using "tip to tail addition"? You put the tail of one vector on the tip of the next. If you do this for e1~ + e2~ + e2~, the resulting arrow is v. Does this make sense?
@@eigenchris Thank you for your time, sir. I think I couldn't make my question clear.
In the figure, the e1 tilde vector is bigger than the e1 vector... So my first question is: how come the base units in the two systems are different?
Second, in the first basis the e1 vector has the same magnitude as the e2 vector... however, in the second system, e1 tilde looks way bigger than e2 tilde... Are these units different?
@@mr.chindo8570 It's possible to have basis vectors that are different lengths, even if it's a bit strange.
How come in your previous video you transposed the matrix, and now you're using conventional matrix multiplication?
Really useful videos - but definition 3 is circular? It defines vectors in terms of vectors?
The "full definition 3" is kind of abstract and I wasn't sure if most viewers would want to see it. If you look up "Vector Space" on Wikipedia you can see the definition in full. It defines a vector space as two sets (V and S) and then defines the + and . operations through properties like (a+b)+c = a+(b+c) and a+b=b+a. So the process is to define the vector space first, and then just call members of the set V the "vectors". So it is not circular.
eigenchris Thanks - so effectively a bit of group theory thrown in for good measure 😉
Vectors are invariant under coordinate change. Of course, that fact stems from the very definition of a vector, which is a fixed magnitude associated with a fixed direction. To say that we can change the coordinates relative to which we describe our vector is also to say that the vector components are variant when we change the coordinate system. Again, this fact stems from the very definition of a coordinate system; that is, it is just a frame of reference. Since vector components have no absolute value across all frames of reference, such components only make sense relative to a specific coordinate system. For example, I am 50 years old. My oldest son is 25 years old and my youngest son is 10. I am twice as old as my oldest son, but five times as old as my youngest son. My age didn't change, but the way I described it did, depending on which son's age I compared mine to.
Yup. A lot of math education that I've seen tends to take simple concepts and then makes them seem complicated. Ultimately, vectors shouldn't care about components at all, since they become invalidated the moment you shift your perspective.
@@angeldude101 exactly. In terms of means relative to whatever “of” is. I am 6 feet and 2 inches means the same as I am 74 inches the same as 74(2.5) cm approximately.
Hello Mr. Chris. You may remember me as the 12-year-old full of questions. If it won't bother you, will you have time to answer one of the biggest questions I've had for the past few days? It is not directly related to your video series, but I just thought that you might be able to answer this: has anyone ever solved Einstein's Field Equations (Google won't give me an answer)? And even if there is a solution to it, is it a straight answer or just a simplified algebra equation? Thank you for your time!
by some 12 year old
Yes, Einstein's Equations (EE) have been solved in some specific situations. When we say someone "solves" the EEs for a given situation, we mean they have found the "metric tensor" for a given distribution of mass and energy in some region of spacetime. The metric tensor is the lower-case "g" in the equations, and it tells us how spacetime curves, and therefore how gravity works for that mass-energy distribution (this is because gravity is the result of spacetime curvature). Using the metric, you can calculate the curved paths that light will take through spacetime... for example, you can figure out how light will bend around black holes. (So, in some sense, once you have solved the EEs and gotten the metric, you need to do more work with the metric to actually find how light and mass move.) The EEs were published by Einstein in 1915, and the first exact solution was published in 1916. This was the Schwarzschild solution, which describes the spacetime around a spherical mass distribution like a star or black hole. It predicts that a black hole will have an event horizon and that light cannot escape once it goes inside. You can find links to other solution types on this page: en.wikipedia.org/wiki/Exact_solutions_in_general_relativity
@@eigenchris Thank you very much Mr. Chris!
I have a little confusion
Can you tell me whether the indices of the dual basis are raised, and does that mean they are contravariant to the dual vector components?
When the basis gets twice as big, the dual basis gets half as dense. So the dual basis is contravariant, relative to basis vectors. Meanwhile, if the basis vectors get twice as big, covector components become twice as big, because the vectors will pierce twice as many stack lines. So covector components are covariant relative to basis vectors. So the dual (covector) basis and covector components transform in opposite ways.
@@eigenchris Can you please elaborate on how an increase in the basis vectors causes the dual basis to get less dense?
@@muhammadtayyab4255 The dual basis is defined so that ϵ^i(e_i) = 1. If e_i gets twice as big, then ϵ^i must get half as dense to keep this formula true (so that the e_i arrow only pierces 1 stack line of ϵ^i).
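The defining condition ϵ^i(e_i) = 1 from the reply above can be checked numerically. Here's a minimal 1D sketch (my own illustration): represent a covector by its stack density, so applying it to a vector just multiplies by that density.

```python
# A covector in 1D is just a "stack density"; applying it to a
# vector v counts how many stack lines v pierces: density * v.
def dual_covector(basis_length):
    # The dual basis covector is defined so that eps(e) = 1,
    # which forces its density to be 1 / basis_length.
    density = 1.0 / basis_length
    return lambda v: density * v

e = 1.0
eps = dual_covector(e)
assert eps(e) == 1.0          # defining condition

# Double the basis vector: the dual covector must halve its density
# so the new basis vector still pierces exactly one stack line.
e_big = 2.0 * e
eps_big = dual_covector(e_big)
assert eps_big(e_big) == 1.0  # still satisfied
assert eps_big(1.0) == 0.5    # half as dense on the old unit vector
```

So the dual basis transforms oppositely to the basis vectors (contravariantly), exactly to preserve ϵ^i(e_i) = 1.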
@@eigenchris Thanks
I got the point. Thanks for your hard work that you have done to make this wonderful series
btw why did you transpose the F and B matrices. They weren't like that in the last video
They are incorrect in the previous video (tried to state that in the description).
Sorry about this. Hopefully it's not too confusing for you.
Nah I suspected they were incorrect in the last video. Just wanted to make sure. Thanks dude
But why are they incorrect in the previous video? I don't see what is wrong.
Basically, everything in that video is "turned on its side". If you want all the standard matrix-vector multiplication to work out as expected, you need to "flip" all the indexes and matrices for it to work.
So, does it mean that in that video the basis vector should be the row vector, not the column vector? Or am I missing something.
How come a vector space has an addition operation and scalar multiplication, but no vector multiplication or division operation?
Many applications in math, physics, and computer programming don't require the ideas of vector multiplication and division. There are ways to define a "vector multiplication" rule, but there is more than one way to do it. There is the famous "vector cross product", but it only works in 3 dimensions. Another way to invent multiplication involves turning a vector space into a "geometric algebra", which you can google. Lie algebras are another way to invent a multiplication rule for vector spaces, but these are more advanced topics. Anyway, the answer is that vector spaces are interesting enough without inventing multiplication, so we don't force it as a requirement.
@@eigenchris Thanks!
chris, are vector spaces different if we choose different basis vectors like (x,y) and (r,theta) for the same 2-d space? coordinate space is also a vector space right?
A 2D plane is a vector space, and you can choose many different basis vectors for it. However, there is a difference between basis vectors and coordinate systems, especially when it comes to curved coordinates like polar coordinates. In rectangular coordinates, the coordinate lines always point in the same direction, just like a basis vector points in the same direction. But in curved coordinates the coordinate lines change direction. You might want to watch videos 1-2 in my "Tensor Calculus" series to learn more.
@@eigenchris Is dimensionality the only factor to distinguish between two vector spaces? Like a vector in 2-d can be expanded in any 2-d basis.....What about position and momentum vector space in 2-d....aren't they different from a 2-d plane vector space?
@@priyankaragini5883 For finite-dimensional vector spaces, dimensionality is the only factor that distinguishes them, as you said. You can always invent an invertible linear map that goes from one vector space to another of the same dimension (or in the reverse direction), and for this reason they are basically the same. A position-momentum 2D vector space is what's called a "phase space", which is what you'd study in Hamiltonian mechanics. I think this is an ordinary 2D vector space, but it also carries an additional function called a "symplectic form" that helps describe the physics of the space. It's the symplectic form that makes this 2D vector space "different" from an average 2D vector space. I haven't studied this in detail though, so I can't tell you much more.
Supeer coool
Would you say vector components ARE variant?
Under a change of coordinate system they are predictably variant, yes. It's in the tensor definition video.
👍
Do you have slides (without the errors) posted on a website for each video?
Yes: github.com/eigenchris/MathNotes/tree/master/TensorsForBeginners
@@eigenchris Thanks!
@@eigenchris Thanks a lot.
The transforms F, B here are different from the F, B in the previous video.
lol unless it gets more than 100 views. So it gets 120,000 views. 😳
Saying a Vector component is a vector is just as wrong as saying a bodypart is a body
Did you mean “rule”, then it’s a vector
The corrections you have made to the F and B matrices are wrong. The previous ones were correct.
yes you are right
The B matrix of video 2 is not the same as the B matrix of video 1.
Please look at video 1.5, where I correct this mistake.
@@eigenchris thanks !
what method did you use to solve the matrix?
Very confusing if you continue to use videos with errors. Otherwise OK thank you.
Hi, great series! But actually, vectors are not geometric objects, although they can be interpreted and visualized as such. They are algebraic objects... that's why it's called linear algebra!
Please please please 🙏🙏🙏🙏