Hell yesss. Clearest and most logical exposition on RUclips. Reasonable definitions, etc. This is a gold mine. Thank you!
Thanks for your comment. Much appreciated!
I was looking for this kind of explanation for a long time
Thank you for your comment.
Honestly, a very competent run through. Thanks!
Hello Logan and thank you for your comment. Much appreciated!
@@TensorCalculusRobertDavie Hi, you are very welcome. I browsed through some of your other titles just now, and I am excited to see a rich source of mathematics of my most favorite type. Would you mind if I cite you as a source in the text I am writing on general relativity (with a rigour in tensor calculus and differential/Riemannian geometry) -- especially for any instances that I am inspired to add to my work because of your content? If you would like, this is a hyperlink to my document. drive.google.com/open?id=1-MU7daeZ0Q8TefNOwzImcGD2uZIhktvZl3FO13R5UkQ
Thank you for the content!
You are welcome to cite my material, and good luck with your efforts.
Thank you Robert! I really enjoyed this video.
Hello Marina and thank you for your comment. Much appreciated.
The images at 1:59 and at 3:20 are good.
They are well organized and help us to get the whole picture of underlying concept.
Excellent!
Thanks a lot.
Thank you again!
Thank you! I really enjoyed this explanation :)
I like the ad placements on these videos. "Are you struggling with calculus?" If you're watching a video on curvature and differential geometry, then no, you're not struggling with calculus. You're struggling with something far beyond.
Yes, a bit ironic. I hope there aren't too many ads?
@@TensorCalculusRobertDavie No, it's not too bad. That's the price of posting stuff on youtube. They can put ads in your stuff and there's nothing you can do about it except not post videos. But I think it's a small price to pay for the freedom of being able to post mathematical content. I'm pretty grateful for youtube both as a viewer and as a poster.
What happens to the position vector when working with a manifold? How does one typically define a basis without a position vector?
Please have a look at the first few minutes of this video.
Excellent presentation.
In general, what is the punch line for working with both covariant and contravariant coordinates? They both represent the same objects, and the metric tensor is usually at hand anyway. At first it seems an unnecessary complication on the way to general relativity. How come they didn't just go with one or the other and leave the other as a fun-fact side note?
Thanks
Hello Benedek and thank you for your question. Wikipedia discusses this issue in the quote below and further in the link below that.
"The vector is called covariant or contravariant depending on how the transformation of the vector's components is related to the transformation of coordinates.
Contravariant vectors are "regular vectors" with units of distance (such as a displacement) or distance times some other unit (such as velocity or acceleration). For example, in changing units from meters to millimeters, a displacement of 1 m becomes 1000 mm.
Covariant vectors, on the other hand, have units of one-over-distance (typically such as gradient). For example, in changing again from meters to millimeters, a gradient of 1 K/m becomes 0.001 K/mm."
www.wikiwand.com/en/Covariance_and_contravariance_of_vectors
@@TensorCalculusRobertDavie So it doesn't matter whether you use contravariant or covariant; they are just used whichever is most convenient for transformations?
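To make the quoted units example concrete, here is a tiny numerical sketch (the numbers and variable names are my own, not from the video): under a change of units from meters to millimeters, contravariant components scale with the coordinates while covariant components scale inversely.

```python
import numpy as np

# Change of units: meters -> millimeters, i.e. x' = 1000 * x.
# For this constant rescaling the Jacobian is just the scale factor.
scale = 1000.0  # mm per m (toy 1-D example)

displacement_m = 1.0      # contravariant: components scale WITH the coordinates
gradient_K_per_m = 1.0    # covariant: components scale INVERSELY

displacement_mm = displacement_m * scale        # 1 m -> 1000 mm
gradient_K_per_mm = gradient_K_per_m / scale    # 1 K/m -> 0.001 K/mm

print(displacement_mm)    # 1000.0
print(gradient_K_per_mm)  # 0.001
```

Note that the product of the two, gradient times displacement, is 1 K in both unit systems; that invariance is exactly why the two transformation behaviors must pair up.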
Is linear algebra needed (I mean in a rigorous way, starting from defining vector spaces, dual spaces, and so on)
to fully understand tensors and general relativity? Some textbooks were pretty hard to read since they start from a very abstract point of view, not even mentioning differentials or the chain rule from calculus.
I really enjoyed the video, by the way; I really appreciate it.
Thank you!
Hello and thank you for your comment. The answer is no, because this video provides you with a basic introduction to basis vectors and one-forms (the objects with raised indices). However, the more you learn the better, so do continue to study linear algebra if you can.
Thank you for the feedback and good luck with your studies.
@@TensorCalculusRobertDavie Thank you!
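For anyone who does want the linear-algebra view, the dual basis mentioned above can be computed in a few lines; this is a sketch with an assumed non-orthonormal basis of my own choosing:

```python
import numpy as np

# Rows of E are a (non-orthonormal) basis e_1, e_2 of R^2 -- an assumed example.
E = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# The dual (reciprocal) basis e^i is defined by  e^i . e_j = delta^i_j.
# Collecting the dual vectors as rows of D, this condition reads D @ E.T = I,
# so D is the inverse transpose of E.
D = np.linalg.inv(E).T

print(D @ E.T)  # identity matrix, confirming e^i . e_j = delta^i_j
```

In an orthonormal (Euclidean) basis, D equals E, which is why the distinction between the two bases is invisible there.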
Hi, thanks for the video, but why is every vector written with covariant components against a contravariant basis, and vice versa? Intuitively, I thought the components and the basis would be of the same type.
The two bases are distinct, hence the upper and lower indices, and they behave in different ways, unlike in Euclidean space, where they really are just the same thing, hence no reason to raise or lower indices. Sorry for the short answer. Have a look at this article: en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors
and this video; ruclips.net/video/CliW7kSxxWU/видео.html
In General Relativity we use a metric to raise and lower these indices that is not the same as the Euclidean metric.
@@TensorCalculusRobertDavie Thank you for the reply, will take a look! :)
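The raising and lowering of indices mentioned in the reply can be checked numerically; here is a minimal sketch with an assumed non-Euclidean metric (the polar-coordinate metric at r = 2, my own example):

```python
import numpy as np

# A non-Euclidean metric: polar coordinates (r, phi) at r = 2 (assumed example).
g = np.array([[1.0, 0.0],
              [0.0, 4.0]])      # g_ij
g_inv = np.linalg.inv(g)        # inverse metric g^ij

v_up = np.array([3.0, 5.0])     # contravariant components v^i
v_down = g @ v_up               # lower the index: v_i = g_ij v^j
v_back = g_inv @ v_down         # raise it again:  v^i = g^ij v_j

print(v_down)  # [ 3. 20.]
print(v_back)  # [3. 5.]
```

With the Euclidean metric g would be the identity, v_down would equal v_up, and the distinction would disappear, which is the point made above.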
Very clear and precise summary.
Thank you David.
Hello Sjaak, the content covered here does assume some prior knowledge of vector calculus. The main point of the video is the two forms of basis vectors that can be formed, so could I suggest that a good starting point would be to focus on the meaning of the diagrams before moving on to the notation and what it is trying to express. Hope that helps.
At 12:24, u . v = g_ij u^i v^j, not the square root of it, I think (in the numerator).
You are right. Thank you for spotting that.
@@TensorCalculusRobertDavie Thank you for your all efforts, highly appreciated 👍👏👏
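The correction above can be illustrated with a quick numerical sketch (the metric and vectors are my own assumed example): the inner product itself carries no square root; the square roots belong only to the norms in the denominator of the angle formula.

```python
import numpy as np

g = np.array([[1.0, 0.5],
              [0.5, 2.0]])      # an assumed symmetric metric g_ij

u = np.array([1.0, 2.0])        # contravariant components u^i
v = np.array([3.0, 1.0])        # contravariant components v^j

dot = u @ g @ v                 # u . v = g_ij u^i v^j  (no square root here)
norm_u = np.sqrt(u @ g @ u)     # |u| = sqrt(g_ij u^i u^j)
norm_v = np.sqrt(v @ g @ v)
cos_theta = dot / (norm_u * norm_v)  # square roots only in the denominator

print(dot)  # 10.5
```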
5:12 thank you so much
You're welcome!
good presentation. you explain nicely.
Thank you.
Excuse me, what is nabla_i u at 12:53, actually? This notation is not clear: why does g^ij nabla_j u e_j = n? Could you explain it to me? Thanks for your great videos; I would recommend them to other people.
Hello Gary and thank you for your question.
The inverse metric is the g^ij part, and nabla_j u is the derivative giving us the rate of increase of the scalar u in each of the directions j. The inverse metric raises the j index on the result of nabla u so that we obey the Einstein summation convention and don't end up with two j's down below.
We CANNOT have (nabla u)_j e_j, but we can and must have (nabla u)^j e_j.
Hope that helps.
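The index gymnastics in that reply can be sketched numerically; this is my own assumed example (polar metric, made-up gradient components), not the scalar field from the video:

```python
import numpy as np

# Covariant gradient components (nabla u)_j = partial u / partial x^j
# at one point, in polar coordinates (r, phi) -- assumed example values.
grad_down = np.array([2.0, 6.0])   # (nabla u)_j

r = 2.0
g = np.array([[1.0, 0.0],
              [0.0, r**2]])        # polar metric g_ij
g_inv = np.linalg.inv(g)           # inverse metric g^ij

# Raise the index so the components can pair with the basis vectors e_i,
# as the reply requires: (nabla u)^i = g^ij (nabla u)_j.
grad_up = g_inv @ grad_down

print(grad_up)  # [2.  1.5]
```

Pairing grad_up with the lower-index basis e_i keeps each summed index appearing once up and once down, which is exactly the Einstein-convention constraint described above.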
At the beginning of the video, you have to assume that the coordinate transformation and its inverse are also differentiable.
Thank you Zoltan, that is a good point about differentiability, I should have mentioned it at the beginning.
At 11:45, line 3 should end up as u(covariant)v(contravariant). Otherwise, this is an excellent presentation.
Hello Ron, thank you for your comment, and you are correct. However, in this case, we have u(covariant)v(contravariant) = u(contravariant)v(covariant), which was the point I was trying to get across in lines 3 and 4: there are four different-looking ways to get the same result. At the time I umm-ed and ahh-ed about whether I should write it in the form you have pointed out, but my goal took precedence in the end.
Your videos are very good!!!!!
Vicente Matricardi
Many thanks.
Thank you for producing and sharing such good-quality information. I am writing to you in Spanish because I am glad for you to know that many people are interested in these topics. Greetings!
Vicente Matricardi
Thanks Vicente.
Thanks, Robert Davie
Thanks sir
You're welcome.
GREAT JOB!!!
Thank you Anthony.
correction...line 4
Which slide?