00:20 On notation
03:06 Extrinsic and intrinsic curvature
07:54 Back to the original question of a fake or real gravitational field if the space is intrinsically flat or curved
10:21 Tensor fields (starting with scalar fields)
14:01 Vector fields (covariant vs contravariant)
16:39 Vector basis (covariant and contravariant components of a vector)
19:46 Projection of a vector onto an axis
23:11 Definition of the metric in terms of the covariant coordinate basis
23:24 Length of a vector in terms of the metric (relating this notion of the metric back to our original notion of the metric as a product of partial derivatives, as seen in lecture 1)
25:12 Difference between covariant and contravariant indices (introduction to lowering the index)
29:26 Covariant and contravariant vectors (from lecture 1)
30:56 Back to tensor analysis (already seen in lecture 1)
34:16 Notice the symmetry of the covariant and contravariant indices with the partial derivatives
39:00 How a tensor transforms
43:47 The invariance of tensor-described laws (the equation T=0 holds in any frame of reference; T=W holds in any reference frame). This is WHY we specifically use tensors for GR. (Notice that this invariance happens because we define the way a tensor transforms in a linear way, as a summation of coefficients)
49:15 Operations on tensors (that yield new tensors)
52:24 1) Addition of tensors: yields a new tensor with the same rank as the two added tensors
54:24 2) Multiplication of tensors (also called the tensor product): yields a new tensor of higher rank than either factor (if T has rank (a,b) and W has rank (c,d), then TW has rank (a+c,b+d)). This can be seen as a generalization of the outer product (takes two vectors and yields a matrix)
1:01:19 3) Contraction of tensors (+ a lemma): yields a new tensor with two of its indices eliminated (one covariant and one contravariant); the contraction of two vectors, or of a 2-index tensor, yields a scalar. This can be seen as a generalization of the trace of a matrix or of the inner product of two vectors
4) Differentiation (covariant) of tensors (seen in lecture 3). (Another operation that can be applied to tensors is raising or lowering indices, but for this we first need to understand how the metric works)
1:16:36 The metric tensor (how many independent components it has)
1:23:54 How the metric tensor transforms: the metric tensor really is a tensor
1:28:52 The metric has an inverse (no zero eigenvalues; symmetric)
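The outer-product and trace analogies in the outline above can be sketched in a few lines of Python (a toy example of my own, not from the lecture):

```python
# Tensor product of two rank-1 tensors (vectors) as a generalized outer
# product, and contraction of the resulting rank-2 tensor as a trace.

def tensor_product(T, W):
    # rank (1) x rank (1) -> rank 2: the outer product
    return [[t * w for w in W] for t in T]

def contract(M):
    # contract the two indices of a rank-2 tensor -> a scalar (the trace)
    return sum(M[i][i] for i in range(len(M)))

T = [1, 2]
W = [3, 4]
TW = tensor_product(T, W)   # [[3, 4], [6, 8]], rank (1)+(1) = 2
s = contract(TW)            # 3 + 8 = 11, which equals the inner product T.W
```

Contracting the product of two vectors recovers their inner product, exactly as the outline says.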
He reminds me of the best professors we have in my faculty of Physics. They prepare their lessons, they lay it out step by step, calmly, solving questions and always open to discussion and delivering every bit of information in an ordered manner. This is a really hard part of Physics so having a person like this that can explain it so brilliantly is a miracle.
Learning math from a physicist is WAY easier than learning it from a mathematician. This video was super helpful, though. I've been reading Einstein's The Foundation of the General Theory of Relativity and I've been getting stuck in the Mathematical Aids section (for I am a first-semester sophomore; tensors of rank 2, Christoffels, etc. are still a bit outside my grasp). I became stuck on the expansion of equation 20a (Section B, Subsection 9), but this video (although not solving what I needed solved) gave me the knowledge I needed to do it on my own. My university's own physics and math professors either wouldn't or couldn't help me, so lately I thank this program that Stanford provided to help satisfy curious minds, and the professor, for practicing Einstein's principle of explaining things simply :)
What a brilliant man and a patient teacher. Every time a student asks a misconceived question he gets straight to the heart of the misconception without ever getting irritated. Despite the fact that I'm only passively watching, I can feel the structural logic of tensors solidifying in my mind. The whole upper/lower indices thing is about the deltas versus their reciprocals.
I bought a complete tensor calculus book and did 100% self-learning for one year, and still had some minor doubts. After watching one video lesson of this General Relativity course, all my previous doubts were fully cleared! Thanks for the ENLIGHTENING VIDEO and the knowledge sharing.
To supplement Susskind's explanation/motivation of contra vs covariant vectors: It's very easy to show that if you transform both vectors in a dot product using the same rule, you get a different number. Do this with a single vector (since length is calculated by taking dot product), and you find you've changed the length of a vector simply by making a coordinate transformation. That is clearly nonsense. Therefore, the two objects in the dot product cannot transform by the same rule, in fact, they must transform by factors which are inverse of each other, so they will cancel and give you the same number before and after the transformation. The fact that one 'copy' of the vector transforms under one rule while the other transforms under the inverse rule is what defines the one copy as contravariant and the other as covariant. In fact, you can probably get even simpler and just point out that the components of a contravariant vector must transform inversely (hence 'contra') from how the basis vectors transform in order for the object to be the same before and after the transformation. After all, this is what "covariant" means: the components "co-transform", i.e. transform by the same rule as, the basis vectors, i.e. inverse to how the components transform when referred to the basis vectors. In fact, you can really drive home the intuition by considering the simplest possible case: unit conversion. If you go from measuring a stick with certain units to measuring it with units half as long, the number you quote as the length of the stick must get twice as big in order for the stick to have the same length in both unit systems.
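The stick-measuring example above is easy to sanity-check in code (a minimal sketch with made-up numbers):

```python
# If the basis (unit) vectors shrink by a factor s, the components must
# grow by 1/s so the physical vector itself is unchanged.
s = 0.5                          # new unit is half as long
basis_old, comp_old = 1.0, 30.0  # a stick measuring 30 old units
basis_new = s * basis_old        # the basis transforms by s (the "co" rule)
comp_new = comp_old / s          # components transform by 1/s ("contra")

# the stick has the same length in both unit systems
same_length = basis_old * comp_old == basis_new * comp_new
```

Components and basis vectors transform by mutually inverse factors, which is the whole point of the contra/co distinction.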
"It's pretty clear which end the stick has to go into. It has to go into the thing with the hole. You can't try putting a hole into a hole or a stick into a stick. You can only put the stick into the hole, and another stick can slide into another hole and some more holes to put more sticks into... And general relativity is a lot like that..." Leonard Susskind, 2012
Oh wow, I've read several tensor analysis and tensor calculus books via self-study, and I've taken enough prerequisite material like vector calculus formally at my school, but this is by far the best explanation of all the maths you need for GR.
In large part, this seems to you like the best explanation precisely because you've been exposed to the subject before from all the material you've already read. You would not understand these lectures as much if you were new to the subject.
I am a 13-year-old who accidentally fell asleep with YouTube on, and somehow I've ended up here. Not complaining though; I've been interested in learning general relativity for a long time now. I probably won't be able to understand it, but I will try my best.
(rather, studying and learning about these and other science topics in general, but especially astronomy and the topics directly or loosely tied to it) ...the way you think will change, and it's awesome 😊😊 Never gets old! 🤘🏻
At 23:30 the Prof skips from geometry to the metric tensor for covariant components. See the geometric meaning by drawing the dot product V·e1 along e1: it is a length along e1 whose magnitude is the length of e1 times the length of V times the cosine of the angle between them. That is the projection of V on e1 if e1 has length 1. As e1 increases in magnitude, the dot product also increases in magnitude. So the covariant component is the projection times the magnitude of e1. Right?
1:31:40 Are we assuming here that "g" is the metric tensor, or do we assume that g^np is the inverse of g_mn? Because I don't see how this should in general equal the identity for arbitrary matrices. I see how one is the covariant and the other the contravariant version of the same matrix, but I do not see how that alone should result in the identity. As of now we haven't learned how to pull indices up or down, so...
pk, you do understand that the calculus preparation for this includes both single and multivariable calc; you also need to know linear algebra, differential equations, special relativity, basic analysis, and Lagrangian and Hamiltonian mechanics to even begin with GR. (PS: You might be an engineering student, but I am a physics student, so I know what I'm saying here.) Also, you do realise modern physics comprises quantum field theory and relativistic quantum mechanics as well? You can't learn those with only 'calculus', lol; you need to learn group theory, Lie groups, etc.
I recommend Carroll's Spacetime and Geometry as a supplement---my fave intro GR textbook. I'd also suggest learning QED first to get super used to index summations and covariant transformations. It becomes second-nature quicker than you might think, but you do have to put in the work. QED is easier than GR and just as fascinating (you can skip the Lie algebra bits if you wish).
The first time after years that I understood that stuff. Most books go into some level of unnecessary detail, thereby hiding the basic principles. REALLY good lecture - RESPECT! Symbolically, I would like to invite Professor Susskind for a cup of coffee :-)
I am enjoying listening to this lecture series. As a mathematics major I find his treatment of mathematical things almost blasphemous, in a good way. I think he thought to himself that General Relativity is so obvious that he could teach it to anybody, without them even needing to understand higher mathematics. It's just about manipulating notation... you can almost do this without thinking... I'm not so sure. He reminds me of my physics friends from university. As a math major I worked by rigorously defining and proving things, and then I'd hear my physics friends wielding these things (integration, differential equations, vector fields, gradients, curls, etc.) like they were hammers. It always made me laugh, and I'm still laughing. It works, and physics is being done this way. I prefer listening to Roger Penrose for my physics. That said, I am greatly enjoying these lectures. Thanks.
_"its just about manipulating notation …. you can almost do this without thinking"_ If you move around symbols without understanding what they mean then you do not do physics, but math. A physicists needs a real world understanding of the math in contrast to the rule based understanding mathematicians have.
Thomas is right. As a mathematics graduate, I would say Susskind's level of mathematical exactness is quite poor. To a math-oriented person, this is like scraping a fork on a plate. I personally go crazy. I'm sure Thomas does too. The point of physics is of course not mathematics, on the contrary. But this does not mean mathematics shouldn't be done properly when doing physics! Indeed, doing it properly is simply how you avoid many mistakes! And you may find you understand BETTER, not less, by having a rigorous understanding of the mathematical tools you are using. This may slow you down, usually because you have more things to check, but it is still absolutely necessary. Physicists may think they are smarter by using all these shortcuts without fully understanding the mathematical objects, but if you don't like how things work on a fundamental level, and you like shortcuts, you shouldn't be doing theoretical physics. Maybe engineering is more suited for you. Sorry to be cruel, really, but I believe if you use math in physics, do it right, or don't! Susskind actually often does lectures without writing math at all (where he just discusses concepts).
I think his notation is correct. Remember to sum over an index (say p), it has to appear twice; once as a superscript and once as a subscript (or "upstairs" and "downstairs" as he calls them)
+atrumluminarium Focus on learning the notation first; it makes everything easier. The notation will be overwhelming at first, but you will soon find that it is all very simple. Learning GR is next to impossible without the notation, and this notation is important for generalizing laws and such across coordinate systems (the importance of this is obvious: space is not exclusively Euclidean, and if one were to try, say, to define laws in a place where space is curved due to gravity, one would find these laws differ from the laws for a flat surface, but by a predictable amount). Think of scalars and vectors as special cases of tensors: A scalar is a rank-0 tensor; there is no index, and therefore a scalar will be the same in any coordinate system. To picture this, think about running 6 mph on a track vs running 6 mph on a straightaway; the concept of speed is scalar. Even though one will be moving in a different direction on the track when compared to the straightaway (due to turns and whatnot, which makes the velocity vector different on the track compared to the straightaway), one will still be running the same 6 mph, the scalar part of velocity. A vector is a rank-1 tensor. A contravariant component is one that is contracted with a basis vector (do you remember learning about i hat, j hat, and k hat? Those are basis vectors) to form a vector. A contravariant component has an upper index, and a basis vector has a lower index. These two indices are "contracted," which gives the vector (so the contravariant components V^i, when contracted with ei, give V^i ei = V). A covariant component is the vector dotted with a basis vector: V dotted with ei, or V·ei, gives Vi. A way to think of this is that the covariant component picks up the "extra" lower index from the basis vector.
I hope this helps; rewatch these lectures if you need to, and try to look for other sources on vector analysis besides these lectures. Once you understand this notation and vector calculus, GR is a breeze.
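The two kinds of components described above can be illustrated numerically (my own made-up, non-orthonormal basis, for illustration only):

```python
# Contravariant components V^i rebuild the vector via V = V^i e_i;
# covariant components are the dot products V_i = V . e_i.
e1, e2 = (1.0, 0.0), (1.0, 1.0)   # a non-orthonormal basis (assumed)
V1, V2 = 2.0, 3.0                 # contravariant components V^1, V^2

# the contraction V^i e_i gives the vector itself
V = (V1 * e1[0] + V2 * e2[0], V1 * e1[1] + V2 * e2[1])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# covariant components come from dotting V with each basis vector
V_1, V_2 = dot(V, e1), dot(V, e2)
```

With an orthonormal basis the two sets of components would coincide; the skewed basis here is what makes them differ.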
^This is really good advice. Once you learn the notations and how to manipulate indices it gets a whole lot easier to read equations without your brain trying to escape your head by pouring out through your nose.
Why didn't my professor of differential geometry explain things like this? It's really all so clear. I love his face. The prominent nose, huge ears, and the eyes. A caricaturist's dream.
If any of you are having trouble with this strange definition of a tensor, I would recommend reading chapters 2 and 3 in Robert Wald's "General Relativity" textbook. He gives a very good definition of what a tensor is and shows how to do algebra and calculus with tensors.
CurlBro15 I miss an explicit example and representation. In linear algebra courses you always get the vectors and matrices explicitly shown as some kind of collection of numbers. I wish the same were done with tensors; that would clear things up. The way I understand it, a rank-1 tensor is a vector, rank 2 a matrix, and rank 3 some sort of cube containing numbers. Is that a correct way of thinking of them?
Hello, from a mathematical point of view, can you please let me know if my understanding is correct? We have a vector space, let's say 2-dimensional, with two bases: B_1 = {e_1, e_2}, the standard canonical basis on the Cartesian plane, i.e. e_1 = (1,0), e_2 = (0,1), and B_2 = {f_1, f_2}, a linear transformation of B_1. We have a vector V that can be represented as V = a_1e_1 + a_2e_2 and V = b_1f_1 + b_2f_2. So here a_1 and a_2 are the so-called "covariant components of vector V", as they are the dot products of e_1 and e_2 (of length 1 each) with V (so projections onto the x and y axes), and b_1 and b_2 are called "contravariant components of vector V", as they are just the representation of V in basis B_2. Now we apply a transformation to our vector space (not necessarily linear) given by the functions y_1 = y_1(x_1,x_2) and y_2 = y_2(x_1,x_2), continuous and smooth, which gives us a primed space. And if we want the primed components of the transformed vector V, we can then use the formulas visible at 37:57 to calculate a_1', a_2', b_1' and b_2'?
Anyone who has wrapped a birthday or Christmas gift should understand what intrinsic flatness means. Try wrapping a book, as compared to a basketball or a saddle. Then try wrapping a cylinder or a cone.
Just wanted to say this: someone dare me to finish this series, and I will take you up on it, as I simply MUST understand Relativity. Plus, I've got to put numbers to the fact that time is going slower (faster?) at my feet than at my head!!! That's what keeps us on the Earth!!!
At first it was not clear to me, at 1:29:05, how we know that there are no zero eigenvalues of g. But now I understand: At a given point, P, in the manifold, the square of the length of a vector, v, in the tangent space of P, is given by v(g(P))v. If g(P) has a zero eigenvalue, then by definition there exists a non-zero vector, u, in the tangent space at P with (g(P))u = 0, implying the contradiction that u has length zero. Of course this argument will not work in a Lorentzian manifold where it is possible to have a non-zero vector with zero length.
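The argument above can be made concrete with a toy "metric" of my own choosing that does have a zero eigenvalue:

```python
# A symmetric matrix with a zero eigenvalue assigns length zero to a
# non-zero vector, so it cannot serve as a metric (and is not invertible).
g = [[1.0, 1.0],
     [1.0, 1.0]]      # symmetric, but det = 0; eigenvalues are 2 and 0
u = (1.0, -1.0)       # an eigenvector for the eigenvalue 0

# the squared length g_mn u^m u^n, summed over m and n
length_sq = sum(g[m][n] * u[m] * u[n] for m in range(2) for n in range(2))
# length_sq comes out 0.0 even though u is not the zero vector
```

Demanding that no non-zero vector have zero length is exactly what rules such matrices out, as the comment explains.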
At 29:00, if e1 is the unit vector along x1 but is not unity in magnitude and is changing, that means e1 is also changing along x1. So how can we take V = v1e1 + v2e2 + v3e3?
At 16:30, when you introduce the basis vector e, shouldn't it have two subscripts, since it changes from point to point along each axis? I.e., the e1 going from x0 to x1 will be different from the e1 vector going from x1 to x2, and so on, because you said the distances between successive xi's are not the same.
He was wrong when he said that as he clearly assumes in everything that follows that he's in a vector space. x1 and x2 (x and y) might not be perpendicular to one another but they live on straight lines and successive points are equally spaced. Calculus teaches us that a curved geometry has approximately this form when you consider a very small region. The earth is round but looks flat on a human scale. Of course this approximate 'flat (vector) space' may change as you move to different points.
17:30 Are you assuming the respective bases are constant? The contravariant components of a vector are the coefficients multiplying the basis vectors in the expansion of the vector.
Martín Villagra Every position in a coordinate system can always be represented as a direction vector multiplied by a magnitude (or an extension in that dimension). How multiple dimensions are related comes down to how distance is defined across dimensions. (A one-dimensional vector has distance defined on its own dimension only.)
Man...I was riding high after Lecture 1 on GR. 'Einstein Smeinstein...this isn't that hard' I told myself. Halfway through this lecture I feel like a 3rd grader that wandered into a multi variable calc lecture.
BringerOfBloood They are tensors, because even though they behave differently compared to scalars and vectors, they still have a well defined rule of transformation.
Great lecture! I do not follow the part where he derives that the metric tensor really does transform like a covariant tensor. A covariant tensor requires that all components transform as per the equation shown at 1:27. However, he derives this starting at 1:25 from a statement about the distance, which is a single number (i.e. g_mn dx^m dx^n, summed over m and n). So the actual equation derived at 1:27 is not a statement about each component of g, but rather that the equation holds if we sum over all m, n, p and q. Given that in general g is different at each point, and this equation must hold for all points, I can kind of grasp that the sums on each side can only be equal if every component is equal, but that does not seem to be what he derived. Am I missing something?
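For what it's worth, the transformation rule itself is easy to verify numerically. Here is a sketch (my own example, not from the lecture) that applies g'_pq = (dx^m/dy^p)(dx^n/dy^q) g_mn to the flat Cartesian metric and recovers the familiar polar-coordinate metric:

```python
import math

r, theta = 2.0, 0.7
# Jacobian dx^m/dy^p for x = r cos(theta), y = r sin(theta)
J = [[math.cos(theta), -r * math.sin(theta)],
     [math.sin(theta),  r * math.cos(theta)]]
delta = [[1.0, 0.0], [0.0, 1.0]]   # flat Cartesian metric g_mn

# g'_pq = sum over m, n of (dx^m/dy^p)(dx^n/dy^q) g_mn
g_polar = [[sum(J[m][p] * J[n][q] * delta[m][n]
                for m in range(2) for n in range(2))
            for q in range(2)]
           for p in range(2)]
# g_polar is approximately [[1, 0], [0, r**2]], i.e. ds^2 = dr^2 + r^2 dtheta^2
```

This only checks one point and one coordinate change, of course; it does not resolve the component-by-component question raised above.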
Does anyone have any tips on how to study this stuff? I can sort of get the logic but there is a lot to keep track of and be comfortable with in Tensor Analysis (I'm not even sure I understand fully the difference between covariant and contravariant tbh) :/
contravariant = vector / covariant = dual vector (a linear form that acts on vectors and gives scalars). This is related to the concepts of vector spaces and their dual spaces. You do not need to understand this to go through these lectures.
Watch lectures. Take notes. Solve problems that interest you just within your reach. Think about problems just out of your reach. Repeat until you are solving problems which were once far out of your reach. It is a huge topic and it is worth remembering that it took that Einstein bloke a decade to even state the theory, so the only way for lesser mortals like me to get anywhere at all is persistent effort, watching a variety of lectures by different lecturers, always returning to the subject, persisting persisting and more persisting. If anyone knows an easier way to learn this subject I'd love them to share that with me but I don't think there is. It's just a whole lot to take in.
I would say if you are serious about learning the material, go through it once so it feels familiar, then go back and listen to the lectures again. You will be surprised how much more you understand the second time.
It goes a bit fast; the covariant and contravariant tensor components are at the basis of tensor analysis and deserve a whole course chapter. They should first be introduced in a linear vector space (e.g. considering vector components vs. linear-form components) without any reference to functions and local coordinates; otherwise it gets too difficult.
And when you watch at 2x speed, you don't have the time to process the information (and there's a lot of it!) and let it sink into your memory. You're only left with a feeling of hearing something smart. In that case, there's a lot of "60 seconds physics" videos out there.
Ah, also on the wikipedia page when it talks about a 'Field' F, we're usually talking about the real numbers (or complex numbers). So the dual space is the set of maps from the vector space to the real numbers, equipped with some fancy addition and scalar multiplication operations
This lesson reminds me of a dream I had that was quite disturbing. An entity had a mass that could be held up or moved on or through molecules of any kind, through the molecules and on the surface. It was as if the entity could disobey time and space and molecules and dimensions. An entity that could sink or rise through air, solid or liquid. A tongue that could stay in its mouth or lick through a window pane to taste the middle of your brain or the surface of your forehead. No boundaries no place to hide from something opaque and transparent at the same time and space. That is a dimension curve.
This is better than his previous treatment of GR. I don't like his choices of notation. I prefer using over-bars instead of primes, and whether I use over-bars or primes, I put the coordinate system designation symbol on the indices. It makes things much clearer. MTW use this technique. Nonetheless, his presentation is well done. I would do things differently; e.g., I would define partial differentiation in the context of arbitrary coordinates, and discuss the implicit function theorem.
Try some of the earlier material in the Theoretical Minimum series - theoreticalminimum.com/courses I don't think you need any of the QM course for GR, and you could probably skip 5 through 8 of the classical mechanics course, since that is necessary only for QM.
Does anyone know what the name of the cake that he's eating is? I'm studying in the library and I'm kinda hungry and the way he eats them makes me want some :)
Sir, I watched it carefully, and I got a small idea of the calculus of relativity. I wish to get more. My question is: what was it you were eating? Cake, bread, or cheese?
Why does he say covariant components come from dot products with basis vectors? I thought covariant components are the vector components in the reciprocal basis, that is, the basis with contravariant basis vectors with upper indices. See Wikipedia article on covariance and contravariance of vectors. I don't see how you could construct the vector using his definition of covariant components and basis vectors.
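The two definitions in fact agree: since V = V^j e_j, dotting with e_i gives V·e_i = g_ij V^j, which is exactly the index-lowering that produces the reciprocal-basis components. A quick numeric check with a made-up basis of my own:

```python
e = [(2.0, 0.0), (1.0, 1.0)]   # an arbitrary (assumed) basis
Vc = [1.0, 2.0]                # contravariant components V^j

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

g = [[dot(e[i], e[j]) for j in range(2)] for i in range(2)]  # metric g_ij
V = tuple(sum(Vc[j] * e[j][k] for j in range(2)) for k in range(2))

lowered = [sum(g[i][j] * Vc[j] for j in range(2)) for i in range(2)]
dotted  = [dot(V, e[i]) for i in range(2)]
# lowered and dotted agree component by component
```

So "dot with the basis vectors" and "components in the reciprocal basis" are the same numbers, linked by the metric.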
The student at the end was freaking me out. Professor Susskind was consistently expressing genuine interest in his students' comprehension, and the student was edging on becoming a nuisance.
I see many comments asking for a good reference book to learn about tensors. I recommend the text by Heinbockel. I had a very hard time making any headway when I first wanted to start learning about tensors, in no small part because it seemed at the time that no one was offering good explanations of what tensors represented physically, and the seemingly circular definition "tensors are mathematical objects that transform like tensors." I started every introductory text on the subject I could find in my Uni library and found most incomprehensible (blind manipulation of indices works pretty well after some practice but that isn't really understanding). It's one of those subjects, like group theory, which has many logical starting points so any two given texts don't seem to cover the same material until you've made much progress in both of them. In any case, Heinbockel was the one that helped me over that initial hump. Once you get started the subject isn't so bad imo. Fleisch may be even better if you want something as fast, easy, and intuitive as possible (you could probably tear through it in 1-2 days), but half of his 200 page text is stuff you probably already know. His descriptions are better than Heinbockel's, but Hein will give you more quantity and depth of understanding. He almost reminds me of Griffiths (and Strang) in how well he chooses examples and problems, but the writing is much dryer. If you're smarter than me and/or learning tensors for pure mathematics, stay away from these and grab something more rigorous.
Most frustrating part of my GR course was having to learn tensor analysis without good resources. I eventually cobbled together enough tensor understanding from various online sources that I was able to mostly understand my GR text (Weinberg, Gravitation and Cosmology), but it was a rough few weeks. I wish my mathematical methods (for physics) class had introduced tensors instead of some of the other techniques that I haven't used since the course, but whatcha gonna do...
Simmonds book _A Brief on Tensor Analysis_ (published by Springer) is pretty good as an introduction, putting in a belated plug for that one. There is also a standalone Feynman lecture on tensors in his _Lectures on Physics_ (published by Addison Wesley) which is exceptionally clear. That one lecture would be an almost ideal starting point for a lot of people. Also worth a bit of effort is Arthur Schild's more terse and densely packed _Tensor Calculus_ (published by Dover), though that's not exactly introductory, it's good for a bunch of stuff. Same for Levi-Civita's _Absolute Differential Calculus_ (also published by Dover), it's not pitched at beginners.
Here is a simple way to understand contravariance and covariance. Contravariant components are the components of a vector along each axis. Suppose we change the scale of the axes; the components also change. For example, say we scale our axes using millimeters (mm), and say there is a vector with a 30 mm x-component and a 40 mm y-component. If we change the scale to cm, meaning we have INCREASED it 10-fold, we see the components DECREASE in value to 3 cm in the x-direction and 4 cm in the y-direction. The components change opposite (contra) to the scale of the basis vectors. On the other hand, covariant components are dot products with the basis vectors and change in scale the same way as the basis vectors, so they are co-variant. 13:55 The simplest way to determine whether a quantity is a scalar is to add a direction to the quantity and see if it makes sense. My glass has 200 mL of milk facing NORTH, and my brother has a glass with 200 mL of water facing SOUTH. Volume is obviously a scalar; there is no directionality involved in measuring volume. Other scalars are length, area, temperature, and time. One important point: length or distance is not the same as displacement. Displacement is a change of position and is not a scalar, because the direction is important. For example, I live in South Florida, so I will end up at Disney World if I travel 200 miles northeast, but I will end up in the Atlantic Ocean if I travel 200 miles east, or in Cuba after 200 miles south. 17:55 The unit-length e vectors are called basis vectors.
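The mm-to-cm example above, as a two-line sanity check (numbers taken from the comment):

```python
scale = 10.0                     # 1 cm = 10 mm: the basis unit grows 10-fold
v_mm = (30.0, 40.0)              # components measured in mm
v_cm = (v_mm[0] / scale, v_mm[1] / scale)   # components shrink 10-fold
# the physical vector is unchanged: v_mm[i] * (1 mm) == v_cm[i] * (1 cm)
```

The components scale by the inverse of the unit's scale factor, which is the "contra" in contravariant.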
Symmetric matrices can have a zero eigenvalue, so I'm wondering what additional property the metric tensor has that makes it not have zero eigenvalues as he suggests. Anyone?
An eigenvalue of zero would mean that by transforming coordinates, one whole dimension is collapsed to zero. Basically, a projection from three dimensions to two dimensions would have zero as an eigenvalue, with the eigenvectors being all the vectors perpendicular to the projected-on surface. We don't want that. We want to redescribe the existing space with new coordinates, not get rid of one basis vector and project the vector space onto something.
I think we want v·v > 0 for all non-zero v here (we want the length of a non-zero vector v to be positive), i.e. v·v = v^T G v > 0, which implies Gv ≠ 0 for v ≠ 0. This means that the columns of G are linearly independent, hence G is invertible.
Great lecture series I must say, but it bugs me that he keeps calling the whiteboard a "blackboard". It is really inconsequential for the overall quality though.
Is it only me who sees the list, Lecture Collection | General Relativity, in reverse order? That is, in my browser, the list of videos places the first lecture at the bottom. It would, of course, be neat to have the first lecture at the beginning of the list.
Normal velocity is not a tensor and so does not follow this rule. That's why we use 4-velocity which is a tensor. Only things moving at the speed of light have a 4-velocity of magnitude zero hence we agree on the speed of light in all reference frames.
Interestingly enough, from the way he calculated the "independent components", an n-dimensional space will have T(n) independent components, where T(n) is the nth triangular number in the sequence 1, 3, 6, 10, 15, ...
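That count is just the number of independent entries of a symmetric n x n matrix, n(n+1)/2, and a one-liner confirms the sequence:

```python
def independent_components(n):
    # independent entries of a symmetric n x n metric tensor:
    # n diagonal entries plus n*(n-1)/2 off-diagonal pairs
    return n * (n + 1) // 2

print([independent_components(n) for n in range(1, 6)])  # [1, 3, 6, 10, 15]
```

These are exactly the triangular numbers the comment mentions.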
In Susskinds own words: “A number of years ago I became aware of the large number of physics enthusiasts .... so I started a series of courses on modern physics ….. specifically aimed at people who know, or once knew, a bit of algebra and calculus, but are more or less beginners.”
What's happening is that he writes tensors in terms of components and a single component of a tensor is just a number. So when you write two tensors in terms of the components you can juggle the order without it mattering because all you are ever doing is multiplying two numbers. That is not the same thing as saying that two tensors are commutative under a tensor product, generally they will not be. It is partly why index notation is so cool, you can forget about all the non-commutative properties when dealing only with the components. The non-commutativity reappears when you come to actually write out a proper tensor product because you find that nth rank tensors are n-dimensional arrays in which location of the components does matter and suddenly you are forced by the index notation to compute the components and put them in the right place. So you lose and then also recover the non-commutativity by writing out tensors in terms of their components.
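A toy demonstration of that point (my own numbers): each component product commutes as a plain number, but the assembled product arrays differ.

```python
T, W = [1, 2], [5, 7]

# tensor products written out as component arrays
TW = [[t * w for w in W] for t in T]   # [[5, 7], [10, 14]]
WT = [[w * t for t in T] for w in W]   # [[5, 10], [7, 14]]

components_commute = TW[0][1] == W[1] * T[0]   # numbers multiply either way
products_differ = TW != WT                     # but T(x)W != W(x)T as arrays
```

The non-commutativity lives in where the components sit, not in the component-by-component arithmetic, just as the comment says.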
These lectures are the best thing I have ever found on RUclips. Nothing like becoming scientifically literate in my spare time. I still can't believe these are available for free online. I feel honored to be able to learn from the great mind of Leonard Susskind. I am in your debt.
I'm here 7 years later during Covid and I feel the same way 🙂
Me times n :)
My tensors agree with you
I read your comment 3 times before I found out it says spare time, not space-time
00:20 On notation
03:06 Extrinsic and Intrinsic curvature
07:54 Back to the original question of a fake or real gravitational field depending on whether the space is intrinsically flat or curved
10:21 Tensor fields (starting with scalar fields)
14:01 Vector Fields (covariant vs contravariant)
16:39 Vector basis (covariant and contravariant components of a vector)
19:46 Projection of a vector into an axis
23:11 Definition of the metric in terms of the covariant coordinate basis
23:24 Length of a vector in terms of the metric (relating this notion of the metric back to our original notion of the metric as a product of partial derivatives, as seen in lecture 1)
25:12 Difference between covariant and contravariant indices (introduction to lowering the index)
29:26 Covariant and contravariant vectors (from lecture 1)
30:56 Back to tensor analysis (already seen in lecture 1)
34:16 Notice the symmetry of the covariant and contravariant indices with the partial derivatives
39:00 How a tensor transforms
43:47 The invariance of tensor-described laws (the equation T=0 holds in any frame of reference -> T=W holds in any reference frame). This is WHY we specifically use tensors for GR. (Notice that this invariance happens because we define the way a tensor transforms in a linear way (a summation of coefficients))
49:15 Operations on tensors (that yield new tensors)
52:24 1) Addition of tensors: yields new tensor with same rank as the two added tensors
54:24 2) Multiplication of tensors (also called tensor product): yields new tensor with a higher rank than each of the tensors multiplied (if T is rank (a,b) and W is rank (c,d) then TW is rank (a+c,b+d)). This can be seen as a generalization of the outer product (takes 2 vectors and yields a matrix)
1:01:19 3) Contraction of tensors (+ a lemma): yields a new tensor with the elimination of two of its indices (one covariant and the other contravariant) (if it's the contraction between two vectors or the contraction of a 2-index tensor, then it yields a scalar). This can be seen as a generalization of the trace of a matrix or the inner product between two vectors
4) Differentiation (covariant) of tensors (Seen in lecture 3)
(Another operation that can be applied to tensors is raising or lowering indices, but for this we first need to understand how the metric works)
1:16:36 The metric tensor (how many independent components it has)
1:23:54 How the metric tensor transforms: the metric tensor is really a tensor
1:28:52 The metric has an inverse (no zero eigenvalues, symmetric)
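The operations listed at 52:24 and 1:01:19 can be sketched with numpy arrays standing in for tensor components (my own illustration; this ignores the upper/lower index distinction, which is harmless in Cartesian coordinates):

```python
import numpy as np

T = np.arange(6.0).reshape(2, 3)   # rank-2 components T[a, b]
W = np.arange(4.0)                 # rank-1 components W[c]

# 2) Tensor product: ranks add. (2-index) x (1-index) -> 3-index object.
TW = np.tensordot(T, W, axes=0)
assert TW.shape == (2, 3, 4)       # rank 2 + rank 1 = rank 3

# 3) Contraction removes a pair of indices; for a 2-index tensor it is
#    the trace, and for two vectors it is the inner product.
M = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.trace(M) == 5.0          # contraction of a rank-2 tensor -> scalar

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
assert np.dot(a, b) == 11.0        # contraction of two rank-1 tensors -> scalar
```

Addition (operation 1) is just element-wise `+` of same-shape arrays, which is why it preserves rank.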
you're an angel
thanks for the effort dude, thanks very much
sfs.mit.edu/undergraduate-students/types-of-aid/mit-scholarship/
Wow! you have put (almost) as much work into the lecture as Susskind! Useful - very well done sir.
He reminds me of the best professors we have in my faculty of Physics. They prepare their lessons, they lay it out step by step, calmly, solving questions and always open to discussion and delivering every bit of information in an ordered manner. This is a really hard part of Physics so having a person like this that can explain it so brilliantly is a miracle.
Having struggled with primes in tensor calculus, finally you have kindly simplified it in this beautiful presentation. Thank you.
Learning math from a physicist is WAY easier than learning it from a mathematician.
This video was super helpful, though. I've been reading Einstein's The Foundation of the General Theory of Relativity and I've been getting stuck in the Mathematical Aids section (for I am a first-semester Sophomore. Tensors of rank 2, Christoffels, etc are still a bit outside my grasp). I became stuck on the expansion of equation 20a (Section B, Subsection 9), but this video (although not solving what I needed solved) gave me the knowledge I needed to do it on my own.
My university's own physics and math professors either wouldn't or couldn't help me, so I lately thank this program that Stanford provided to help satisfy curious minds and the professor for practicing Einstein's principle of explaining things simply :)
What a brilliant man and a patient teacher. Every time a student asks a misconceived question he gets straight to the heart of the misconception without ever getting irritated.
Despite the fact that I'm only passively watching, I can feel the structural logic of tensors solidifying in my mind. The whole upper/lower indices thing is about the deltas versus their reciprocals.
Countless thanks to Prof. Susskind and Stanford University for making these highly informative materials accessible to the world with ease.
Professor Susskind is an awesome teacher. Never get tired of listening to him.
I bought a complete tensor calculus book and spent one year 100% self-learning, and still had some minor doubts. After watching one video lesson of this General Relativity course, all my previous doubts were fully cleared! Thanks for the ENLIGHTENING video and knowledge sharing.
To supplement Susskind's explanation/motivation of contra vs covariant vectors: It's very easy to show that if you transform both vectors in a dot product using the same rule, you get a different number. Do this with a single vector (since length is calculated by taking dot product), and you find you've changed the length of a vector simply by making a coordinate transformation. That is clearly nonsense. Therefore, the two objects in the dot product cannot transform by the same rule, in fact, they must transform by factors which are inverse of each other, so they will cancel and give you the same number before and after the transformation. The fact that one 'copy' of the vector transforms under one rule while the other transforms under the inverse rule is what defines the one copy as contravariant and the other as covariant.
In fact, you can probably get even simpler and just point out that the components of a contravariant vector must transform inversely (hence 'contra') from how the basis vectors transform in order for the object to be the same before and after the transformation. After all, this is what "covariant" means: the components "co-transform", i.e. transform by the same rule as, the basis vectors, i.e. inverse to how the components transform when referred to the basis vectors.
In fact, you can really drive home the intuition by considering the simplest possible case: unit conversion. If you go from measuring a stick with certain units to measuring it with units half as long, the number you quote as the length of the stick must get twice as big in order for the stick to have the same length in both unit systems.
Your unit stick example just defined the components of the covariant metric tensor and the inverse of the metric tensor.
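The unit-conversion intuition above can be sketched in a few lines (the numbers here are made up purely for illustration):

```python
# If the basis vector (the "unit") is halved, the components must double
# so that the vector itself, the stick, is unchanged. That inverse
# relationship is what "contra" refers to.

stick_length_m = 2.0        # the invariant object: a 2-meter stick

unit_old = 1.0              # measuring in meters
unit_new = 0.5              # switch to units half as long

component_old = stick_length_m / unit_old   # 2.0 in the old units
component_new = stick_length_m / unit_new   # 4.0: the component doubled

# component x unit (component x basis vector) is the invariant length:
assert component_old * unit_old == component_new * unit_new == 2.0
```

The product component-times-basis stays fixed while each factor transforms inversely to the other, which is the whole contra/co story in one dimension.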
"It's pretty clear which end the stick has to go into. It has to go into the thing with the hole. You can't try putting a hole into a hole or a stick into a stick. You can only put the stick into the hole, and another stick can slide into another hole and some more holes to put more sticks into...
And general relativity is a lot like that..."
Leonard Susskind, 2012
oh wow, I've read several tensor analysis and tensor calculus books via self studying and i've taken enough prerequisite materials like vector calculus formally at my school but this by far is the best explanation of all the maths you need for GR
+Things2doBeforeIdie This guy 'Physicalizes' every math concept he introduces. Something most textbooks and teachers refrain from doing
So you actually understand all of this? Can we be friends lol
I wouldn't go as far as saying that this is ALL of the math you need to GR though
In large part, this seems to you like the best explanation precisely because you've been exposed to the subject before from all the material you've already read. You would not understand these lectures as much if you were new to the subject.
@@shahidullahkaiser1159 that isn’t true; I haven’t been formally introduced to GR, but I understand what he’s saying.
I love the example he used with the triangles to explain the distinction between intrinsic and extrinsic curvature.
absolutely... YouTube university... so many great lectures, though few hold a candle to this man.
I am a 13-year-old who accidentally fell asleep with YouTube on, and somehow I've ended up here
Not complaining though, I've been interested to learn general relativity for a long time now.
I probably won't be able to understand, but I will try my best
that's awesome!! these topics only get cooler and cooler!!!
(rather, studying and learning about these and other science topics in general, but especially all the astronomy and topics directly or loosely tied to it) ...the way you think will change and its awesome 😊😊 never gets old! 🤘🏻
I also fell asleep and ended up here.
I ended up here, then fell asleep
There are no jobs; take sociology classes on the side.
My favourite lecturer and lectures , I can totally understand these calculations.
Brilliant
How clear the gentleman makes it, thank you Dr.
At 23:30 the Prof skips from geometry to the metric tensor, for covariant components. See the geometric meaning by drawing the dot product V · e1 along e1. It is a length along e1 of magnitude the length of e1 times the length of V times the cosine of the angle between them. That is the projection of V on e1 if e1 has length 1. As e1 increases in magnitude, the dot product also increases in magnitude. So the covariant component is the projection times the magnitude of e1. Right?
These lectures are old
... But they will never really become useless
1:31:40 Are we assuming here that g is the metric tensor, or do we assume that g^np is the inverse of g_mn? Because I don't see how it should generally be that this equals the identity for arbitrary matrices.
I see how one is the co- and one the contravariant version of the same matrix, but I do not see how that alone should result in the identity. As of now we didn't learn how to pull indices up or down, so...
I still cannot get myself out of the stick and hole metaphor. Great lectures.
Try to get out of the pop culture, it's mathematics :P
I'm sure everyone, including the lecturer, had the same (unspoken) associations to those statements.
lecture 1: 300k, lecture 2: 100k
lecture 5: 50k hahahah
2/3 dropped out when it got heavy
I am getting uni flashbacks
I don't understand any of this but I'm interested.... I guess that's a start.
me too haha
Bruh, you need to learn a lot of stuff before you can even begin this
pk, you do understand that calculus preparation for this includes both single and multivariable calc, plus you also need to know linear algebra, differential equations, special relativity, basic analysis, and Lagrangian and Hamiltonian mechanics to even begin with GR. (PS: You might be an engineering student, but I am a physics student, so I know what I'm saying here.)
Also, you do realise modern physics comprises quantum field theory and relativistic quantum mechanics as well? You can't learn them only with 'calculus' lol; you need to learn group theory, Lie groups, etc.
you will understand it, keep working on multivariable calculus
and differential and integral calculus, and you will understand
I recommend Carroll's Spacetime and Geometry as a supplement---my fave intro GR textbook. I'd also suggest learning QED first to get super used to index summations and covariant transformations. It becomes second-nature quicker than you might think, but you do have to put in the work. QED is easier than GR and just as fascinating (you can skip the Lie algebra bits if you wish).
0 to 26:00 the metric tensor, covariance and contravariance of vectors. 31-> tensors
The first time after years I understood that stuff. Most books are going to some level of unnecessary details, thereby hiding the basic principles. REALLY good lecture - RESPECT! Symbolically I would like to invite Professor Susskind to a cup of coffee :-)
these professors are amazing!! it’s so much easier to learn when the teacher is entertaining
Wow, he is seriously patient!
I am enjoying listening to this lecture series
As a mathematics major I find his treatment of mathematical things almost blasphemous, in a good way.
I think he thought to himself that General Relativity is so obvious that he could teach this to anybody and you don’t even have to understand higher mathematics.
Its just about manipulating notation …. you can almost do this without thinking …
I’m not so sure.
He reminds me of my physics friends from university. As a math major, I worked rigorously, defining
and proving things... and then hearing my physics friends working with these things (integration, differential equations, vector fields, gradients, curls, etc.) like they were hammers...
it always made me laugh, and I'm still laughing...
It works and physics is being done this way. I prefer listening to Roger Penrose for my physics.
That said I am enjoying greatly these lectures.
Thanks
+Thomas Bennett Physicists are never mathematically rigorous.
+Thomas Bennett If physicists start getting too involved with mathematics, then they would be called mathematicians XD
***** wow thanks for sharing that.. I'll remember that when I start getting discouraged by the complex mathematics
_"its just about manipulating notation …. you can almost do this without thinking"_
If you move around symbols without understanding what they mean, then you are not doing physics, but math. A physicist needs a real-world understanding of the math, in contrast to the rule-based understanding mathematicians have.
Thomas is right. As a mathematics graduate, I would say Susskind's level of mathematical exactness is quite poor. To a math-oriented person, this is like scraping a fork on a plate. I personally go crazy. I'm sure Thomas does too. The point of physics is of course not mathematics, on the contrary. But this does not mean mathematics shouldn't be done properly when doing physics! Indeed, this is simply to avoid many mistakes! And you may find you understand BETTER, not less, by having a rigorous understanding of the mathematical tools you are using. This may slow you down, usually because you have more things to check, but it is still absolutely necessary. Physicists may think they are smarter by using all these shortcuts without fully understanding the mathematical objects, but if you don't like how things work on a fundamental level, and you like shortcuts, you shouldn't be doing theoretical physics. Maybe then engineering is more suited for you.
Sorry to be cruel, really, but I believe if you use math in physics, do it right, or don't! Susskind actually often does lectures without writing math at all (where he just discusses concepts).
I think his notation is correct. Remember to sum over an index (say p), it has to appear twice; once as a superscript and once as a subscript (or "upstairs" and "downstairs" as he calls them)
possibly the only lecturer who can be allowed to take a big mouthful of cake immediately prior to speaking!
+atrumluminarium focus on learning the notation first, it makes everything easier. The notation will be overwhelming at first, but you will soon find that it will all be very simple; learning GR is next to impossible without the notation, and this notation is important to generalize laws and such across coordinate systems (the importance of this is obvious; space is not exclusively euclidean, and if one were to say try to define laws in a place where space is curved due to gravity, one would find these laws would differ when compared to the laws for a flat surface, but by a predictable amount).
Think of vectors and scalars as extensions of tensors:
A scalar is a rank 0 tensor; there is no index, and therefore a scalar will be the same in any coordinate system. To think of this, think about running 6mph on a track vs running 6mph on a straightaway; the concept of speed is scalar. Even though one will be moving in a different direction on the track when compared to the straightaway (due to turns and whatnot, this results in the velocity vector being different on the track when compared to the straightaway), one will still be running the same 6mph, the scalar part of velocity.
A vector is a rank 1 tensor.
A contravariant vector is one that is contracted with a basis vector (do you remember learning about i hat, j hat, and k hat? These are basis vectors) to form a vector. A contravariant vector has an upper index, and a basis vector has a lower index. These two indices are "contracted," which gives the vector (So the contravariant vector V^i, when contracted with ei, gives V^iei=V).
A covariant vector is a vector dotted with a basis vector. This is when V is dotted with e, or V*ei gives Vi. A way to think of this is the covariant vector is when the vector adds the "extra" lower index from the basis vector; the vector itself is unchanged since the basis vector has a value of one.
I hope this helps; rewatch these lectures if you need to, and try to look for other sources on vector analysis besides these lectures. Once you understand this notation and vector calculus, GR is a breeze.
^This is really good advice. Once you learn the notations and how to manipulate indices it gets a whole lot easier to read equations without your brain trying to escape your head by pouring out through your nose.
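The contravariant/covariant recipe in the comment above can be checked numerically. Here is a sketch with a non-orthonormal basis of my own choosing (not from the lecture): contravariant components are the expansion coefficients, covariant components are dot products with the basis vectors, and the metric built from the basis lowers the index.

```python
import numpy as np

# A non-orthonormal basis in the plane (illustrative choice).
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])

# Contravariant components V^i: coefficients in V = V^1 e1 + V^2 e2.
V_contra = np.array([2.0, 3.0])
V = V_contra[0] * e1 + V_contra[1] * e2          # the actual vector: (5, 3)

# Covariant components V_i = V . e_i (dot products with the basis vectors).
V_co = np.array([np.dot(V, e1), np.dot(V, e2)])  # (5, 8)

# The metric g_ij = e_i . e_j lowers the index: V_i = g_ij V^j.
g = np.array([[np.dot(e1, e1), np.dot(e1, e2)],
              [np.dot(e2, e1), np.dot(e2, e2)]])
assert np.allclose(g @ V_contra, V_co)
```

With an orthonormal basis g is the identity, and the two kinds of components coincide, which is why the distinction is invisible in ordinary Euclidean coordinates.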
Why didn't my professor of differential geometry explain things like this? It's really all so clear.
I love his face. The prominent nose, huge ears, and the eyes. A caricaturist's dream.
Are you asking why your professor didn't explain this with a large nose and huge ears?
If any of you are having trouble with this strange definition of a tensor, I would recommend reading chapters 2 and 3 in Robert Wald's "General Relativity" textbook. He gives a very good definition of what a tensor is and shows how to do algebra and calculus with tensors.
Achkan Salehi I'm a mathematics student so when I saw that definition I didn't really like it lol but hey whatever floats your boat!
CurlBro15 I miss an explicit example and representation. In linear algebra courses you always get the vectors and matrices explicitly shown as some kind of collection of numbers.
I wish the same were done with tensors; that would clear things up. The way I understand it, a rank-1 tensor is a vector, rank 2 a matrix, and rank 3 some sort of cube containing numbers. Is that a correct way of thinking of them?
From 37:24 one has that the derivatives are equal to the partial derivatives. Isn't that strange?
dy^m / dx^n = ∂y^m / ∂x^n
Hello, from a mathematical point of view, can you please let me know if my understanding is correct? We have a vector space, let's say 2-dimensional, with two bases: base B_1={e_1,e_2}, the standard canonical base on the Cartesian plane, i.e. e_1=(1,0), e_2=(0,1), and B_2={f_1,f_2}, a linear transformation of B_1. We have a vector V that can be represented as V=a_1e_1+a_2e_2 and V=b_1f_1+b_2f_2. So here a_1 and a_2 are the so-called "covariant components of vector V", as they are the dot products of e_1 and e_2 (of length 1 each) with V (i.e. projections on the x and y axes). And b_1 and b_2 are called "contravariant components of vector V", as they are just the representation of V in base B_2. Now we apply a transformation to our vector space (not necessarily linear) given by the functions y_1=y_1(x_1,x_2) and y_2=y_2(x_1,x_2), continuous and smooth, that gives us a new (primed) space. And we want the primed components of the transformed vector V, and then we can use the formulas visible at 37:57 to calculate a_1', a_2', b_1' and b_2'?
Anyone who has wrapped a birthday or Christmas gift should understand what intrinsic flatness means. Try wrapping a book, as compared to a basketball or a saddle. Then try wrapping a cylinder or a cone.
What is it that Leonard eats during lectures? It looks quite dense. Also, is that warm coffee? The lectures are quite long... important questions.
short bread
Just wanted to say this --- someone dare me to finish this series --- I will take you up as I simply MUST understand Relativity. Plus, I've got to put numbers to the fact that time is going slower (faster?) at my feet than at my head !!! That's what keeps us on the Earth !!!
I know right! Not that it at all diminishes the brilliance of these lectures, but why is this guy always eating??
I feel terribly hungry. Burned all the sugar and still didn't get anything in.
You too can be this good at Physics - but only if you consume 6 cookies and 2l of coffee every hour.
At first it was not clear to me, at 1:29:05, how we know that there are no zero eigenvalues of g. But now I understand: At a given point, P, in the manifold, the square of the length of a vector, v, in the tangent space of P, is given by v(g(P))v. If g(P) has a zero eigenvalue, then by definition there exists a non-zero vector, u, in the tangent space at P with (g(P))u = 0, implying the contradiction that u has length zero. Of course this argument will not work in a Lorentzian manifold where it is possible to have a non-zero vector with zero length.
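The argument in the comment above can be checked with a small numerical sketch (the example metric here is made up): a symmetric metric with no zero eigenvalues is invertible, and a zero eigenvalue would mean a nonzero vector of zero length.

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric, positive-definite example metric

# No zero eigenvalues, so the inverse exists.
eigvals = np.linalg.eigvalsh(g)
assert np.all(np.abs(eigvals) > 0)
g_inv = np.linalg.inv(g)
assert np.allclose(g @ g_inv, np.eye(2))

# Conversely, if g u = 0 for some nonzero u, then u's squared length
# u . (g u) would be zero; here every nonzero vector has nonzero length.
u = np.array([1.0, -1.0])
assert np.dot(u, g @ u) > 0
```

As the comment notes, in a Lorentzian signature the last step fails: null vectors have zero length without forcing a zero eigenvalue, so there the non-degeneracy of g is a separate assumption.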
At 29:00, if e1 is the basis vector along x1 but is not unit magnitude, and is changing, that means e1 is also changing along x1. So how can we take V = v1e1 + v2e2 + v3e3?
At 16:30, when you introduce the basis vector e, shouldn't it have two subscripts, since it changes from point to point along each axis? I.e., the e1 going from x0 to x1 will be different from the e1 vector going from x1 to x2, and so on, because you said the distances between successive x_i's are not the same.
He was wrong when he said that as he clearly assumes in everything that follows that he's in a vector space. x1 and x2 (x and y) might not be perpendicular to one another but they live on straight lines and successive points are equally spaced.
Calculus teaches us that a curved geometry has approximately this form when you consider a very small region. The earth is round but looks flat on a human scale. Of course this approximate 'flat (vector) space' may change as you move to different points.
I think my confusion may be embedded in a higher dimension of confusion.
17:30 are you assuming the basis respectively are constant?
Contravariant components of a vector are the scalar coefficients multiplying the basis vectors in the expansion of the vector.
Martín Villagra Every position on a coordinate system can always be represented as a direction vector multiplied by a magnitude (or an extension in that dimension). The way multiple dimensions are related to how distance is defined interdimensionally. (A unidimensional vector has distance defined on its own dimension only.)
Martín Villagra A vector has a magnitude which is what I was referring to. Yes, I’ve complete tensor analysis a while ago. But thanks.
Man... I was riding high after Lecture 1 on GR. 'Einstein Schmeinstein... this isn't that hard,' I told myself. Halfway through this lecture I feel like a 3rd grader that wandered into a multivariable calc lecture.
WATCHED ALL THE LECTURES, PRETTY GOOD TEACHER...
love that Stanford does this
BringerOfBloood They are tensors, because even though they behave differently compared to scalars and vectors, they still have a well defined rule of transformation.
Great lecture! I do not follow the part where he derives that the metric tensor really does transform like a covariant tensor. A covariant tensor requires that all components transform as per the equation shown at 1:27. However, he derives this starting at 1:25 from a statement about the distance, which is a single number (i.e. g_mn dx^m dx^n summed over m and n). So the actual equation derived at 1:27 is not a statement about each component of g, but rather that the equation holds if we sum over all m, n, p & q. Given that in general g is different at each point, and this equation must hold at all points, I can kind of grasp that the sums on each side can only be equal if every component is equal, but that does not seem to be what he derived. Am I missing something?
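One resolution: the summed equality holds for every displacement dx, and two symmetric quadratic forms that agree on all vectors must have equal components, so the component-wise statement follows. A sketch for a linear change of coordinates, where the Jacobian is constant (the example matrices are my own, not from the lecture):

```python
import numpy as np

# For a linear coordinate change x = A y, the Jacobian dx^m/dy^p is just A,
# and the covariant rule g'_pq = g_mn A^m_p A^n_q (i.e. g' = A^T g A)
# makes ds^2 come out the same in both coordinate systems.
rng = np.random.default_rng(0)

g = np.array([[2.0, 0.5],
              [0.5, 1.0]])             # made-up metric in x coordinates
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])             # invertible Jacobian dx/dy

g_prime = A.T @ g @ A                  # the covariant transformation rule

dy = rng.standard_normal(2)            # an arbitrary displacement in y
dx = A @ dy                            # the same displacement in x

ds2_x = dx @ g @ dx
ds2_y = dy @ g_prime @ dy
assert np.isclose(ds2_x, ds2_y)        # ds^2 is invariant
```

Because dy is arbitrary, invariance of ds^2 for every displacement pins down each symmetric component of g', which is the step the lecture leaves implicit.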
I'm only in 8th grade, but I watch this lecture imagining myself standing there 20 years later. By the way, I love this calm and consistent voice!
Oh hell yeah! Thanks interstellarmonkey! You just filled up all my free time for the next couple years.
I suggest excellent tensor calculus lessons by prof. Grinfeld; can be found on youtube according to the spirit of prof. Susskind of the essentials
Absolutely agree. Pavel Grinfeld is simply outstanding.
Does anyone have any tips on how to study this stuff? I can sort of get the logic but there is a lot to keep track of and be comfortable with in Tensor Analysis (I'm not even sure I understand fully the difference between covariant and contravariant tbh) :/
contravariant = vector / covariant = dual vector ( linear form that acts on vectors and give scalars) . This is related to the concepts of vector space and their dual vector spaces. You do not need to understand this to go through these lectures.
Watch lectures. Take notes. Solve problems that interest you just within your reach. Think about problems just out of your reach. Repeat until you are solving problems which were once far out of your reach. It is a huge topic and it is worth remembering that it took that Einstein bloke a decade to even state the theory, so the only way for lesser mortals like me to get anywhere at all is persistent effort, watching a variety of lectures by different lecturers, always returning to the subject, persisting persisting and more persisting. If anyone knows an easier way to learn this subject I'd love them to share that with me but I don't think there is. It's just a whole lot to take in.
I would say if you are serious about learning the material, go through it once so it feels familiar, then go back and listen to the lectures again. You will be surprised how much more you understand the second time.
@@cwldoc4958 Thanks. Since I placed this comment 6 years ago I started and finished a Maths & Physics degree so my understanding has improved a lot :)
24:47 V·V
Is this simply a scalar?
It goes a bit fast; the covariant and contravariant tensor components are at the basis of tensor analysis and deserve a whole course chapter. They should first be introduced in a linear vector space (e.g. considering vector vs. linear-form components) without any reference to functions and local coordinates; otherwise it gets too difficult.
Subtitles have not been included in the first five lessons of this video course. Why? Can you insert them? Thank you
The key to making it through these videos is to set playback to between 1.5x and 2x. These are really all one-hour lectures.
I think that's in part because sometimes he runs them like a press conference.
these lectures are aimed at people who left school a while ago so he's taking it slow
What are you talking about, I don't have to "make through" those LECTURES, I'm very interested in them!
And when you watch at 2x speed, you don't have the time to process the information (and there's a lot of it!) and let it sink into your memory. You're only left with a feeling of hearing something smart. In that case, there's a lot of "60 seconds physics" videos out there.
Anybody have a link to that paper on the board?
Ah, also on the wikipedia page when it talks about a 'Field' F, we're usually talking about the real numbers (or complex numbers).
So the dual space is the set of maps from the vector space to the real numbers, equipped with some fancy addition and scalar multiplication operations
I think you may be mistaken: they are two distinct usages of the word 'Field'.
Great day and enjoy each lecture
Awesome lesson!!! , great teacher. The technical stuff are in tons of books but what he does here, its not
This lesson reminds me of a dream I had that was quite disturbing. An entity had a mass that could be held up or moved on or through molecules of any kind, through the molecules and on the surface. It was as if the entity could disobey time and space and molecules and dimensions. An entity that could sink or rise through air, solid or liquid. A tongue that could stay in its mouth or lick through a window pane to taste the middle of your brain or the surface of your forehead. No boundaries no place to hide from something opaque and transparent at the same time and space. That is a dimension curve.
This is better than his previous treatment of GR. I don't like his choices of notation. I prefer using over-bars instead of primes, and whether I use over-bars or primes, I put the coordinate system designation symbol on the indices. It makes things much clearer. MTW use this technique.
Nonetheless, his presentation is well done.
I would do things differently; e.g., I would define partial differentiation in the context of arbitrary coordinates, and discuss the implicit function theorem.
You don't need the implicit function theorem here, because changes of coordinates are diffeomorphisms
Contravariant is North: it has an "n" in it. "Con" vs "Cov".
Thanks Mohamed, but I could not open the link you sent.
He makes General Relativity so easy to understand!
What are the prerequisites for a course like this?
Try some of the earlier material in the Theoretical Minimum series - theoreticalminimum.com/courses
I don't think you need any of the QM course for GR, and you could probably skip 5 through 8 of the classical mechanics course, since that is necessary only for QM.
Does anyone know what the name of the cake that he's eating is? I'm studying in the library and I'm kinda hungry and the way he eats them makes me want some :)
0:50 IM DYING!!!! I think I found that way more amusing then I should have....
what other end of the stick?
@2:58 The word he's looking for is "bent", or the one he used before, "rolled".@6:58 The word he's looking for is "flex".
7:21 It's a whiteboard, not a blackboard! The only error I could point out :p
To a flat earther that is enough to invalidate the entire lecture series.
Sir, I watched it carefully and got a small idea of the calculus of relativity. I wish to learn more. My question is: what were you eating? Cake, bread, or cheese?
Ches
Did the front of Stanford get changed since 1990?
@20:00~25:40 contravariant vs. covariant.
I have a few doubts but don't know where to post them, does someone have any idea?
i love that he's always eating something...and talking with his mouth full :)
Powered by cookies. (This -site- lecturer uses cookies click okay to accept)
Why does he say covariant components come from dot products with basis vectors? I thought covariant components are the vector components in the reciprocal basis, that is, the basis with contravariant basis vectors with upper indices. See Wikipedia article on covariance and contravariance of vectors. I don't see how you could construct the vector using his definition of covariant components and basis vectors.
The student at the end was freaking me out. Professor Susskind was consistently expressing genuine interest in his students' comprehension, and the student was edging on becoming a nuisance.
@I OFFER YOU THIS I don't care at all anymore. It doesn't bother me because it doesn't actually matter at all.
They are pseudo-tensors, which are a generalization of pseudo-vectors in the same way that tensors are of vectors.
great lecture, even though, what he said around the 1:00 Mark, could be misinterpreted very badly.
uhm no, why would it be? we aren't 12 years old anymore.
I see many comments asking for a good reference book to learn about tensors. I recommend the text by Heinbockel. I had a very hard time making any headway when I first wanted to start learning about tensors, in no small part because it seemed at the time that no one was offering good explanations of what tensors represented physically, and the seemingly circular definition "tensors are mathematical objects that transform like tensors."
I started every introductory text on the subject I could find in my Uni library and found most incomprehensible (blind manipulation of indices works pretty well after some practice but that isn't really understanding). It's one of those subjects, like group theory, which has many logical starting points so any two given texts don't seem to cover the same material until you've made much progress in both of them. In any case, Heinbockel was the one that helped me over that initial hump. Once you get started the subject isn't so bad imo.
Fleisch may be even better if you want something as fast, easy, and intuitive as possible (you could probably tear through it in 1-2 days), but half of his 200 page text is stuff you probably already know. His descriptions are better than Heinbockel's, but Hein will give you more quantity and depth of understanding. He almost reminds me of Griffiths (and Strang) in how well he chooses examples and problems, but the writing is much dryer.
If you're smarter than me and/or learning tensors for pure mathematics, stay away from these and grab something more rigorous.
Fleisch definitely works for me. Great substitute for my dysfunctional lectures!
Most frustrating part of my GR course was having to learn tensor analysis without good resources. I eventually cobbled together enough tensor understanding from various online sources that i was able to mostly understand my GR text (Weinberg, Gravitation and Cosmology), but it was a rough few weeks. I wish my mathematical methods (for physics) class would have introduced tensors instead of some of the other techniques that i haven't used since the course, but whacha gonna do...
Simmonds book _A Brief on Tensor Analysis_ (published by Springer) is pretty good as an introduction, putting in a belated plug for that one. There is also a standalone Feynman lecture on tensors in his _Lectures on Physics_ (published by Addison Wesley) which is exceptionally clear. That one lecture would be an almost ideal starting point for a lot of people. Also worth a bit of effort is Arthur Schild's more terse and densely packed _Tensor Calculus_ (published by Dover), though that's not exactly introductory, it's good for a bunch of stuff. Same for Levi-Civita's _Absolute Differential Calculus_ (also published by Dover), it's not pitched at beginners.
Thank you for providing these book recommendations!
Here is a simple way to understand contravariance and covariance. Contravariant components are the components of the vector along each axis. If we change the scale of the axes, the components also change. For example, say we scale our axes in millimeters (mm), and there is a vector with a 30 mm x-component and a 40 mm y-component. If we change the scale to cm, meaning we have INCREASED it 10-fold, the components DECREASE in value to 3 cm in the x-direction and 4 cm in the y-direction. The components change opposite (contra) to the scale of the basis vectors.
On the other hand, covariant components are dot products with the basis vectors and change in scale the same way as the basis vectors, so they are co-variant.
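Here's a quick numpy sketch of the mm-to-cm example above (my own toy numbers, just to make the two behaviors concrete):

```python
import numpy as np

# Vector expressed in a basis whose unit is 1 mm.
e_mm = np.eye(2)                        # basis vectors, 1 mm each
v_contra_mm = np.array([30.0, 40.0])    # contravariant components (mm)

# Switch to a basis 10x larger (1 cm = 10 mm).
scale = 10.0
e_cm = scale * e_mm

# Contravariant components shrink by the same factor the basis grew:
v_contra_cm = v_contra_mm / scale                 # [3., 4.]

# Covariant components (dot products with the basis vectors) grow
# along with the basis:
v = v_contra_mm @ e_mm                            # the actual vector
v_cov_mm = np.array([v @ e for e in e_mm])        # [30., 40.]
v_cov_cm = np.array([v @ e for e in e_cm])        # [300., 400.]

print(v_contra_cm, v_cov_cm)
```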
13:55 The simplest way to determine whether a quantity is a scalar is to add a direction to it and see if it makes sense. My glass has 200 mL of milk facing NORTH, and my brother has a glass with 200 mL of water facing SOUTH. Volume is obviously a scalar; there is no directionality involved in measuring volume. Other scalars are length, area, temperature, and time.
One important point: length or distance is not the same as displacement. Displacement is a change of position and is not a scalar, because the direction matters. For example, I live in South Florida, so I will end up at Disney World if I travel 200 miles northeast, but in the Atlantic Ocean if I travel 200 miles east, or in Cuba after 200 miles south.
17:55 The unit-length e vectors are called basis vectors.
Seriously, if anyone takes the time to watch this, the notes in Phil Boswell's link are a blessing. From enthusiast to expert in 10 hours ;-)
Symmetric matrices can have a zero eigenvalue, so I'm wondering what additional property the metric tensor has that makes it not have zero eigenvalues as he suggests. Anyone?
An eigenvalue of zero would mean that by transforming coordinates, one whole dimension is collapsed to zero. Basically, a projection from three dimensions to two dimensions would have zero as an eigenvalue, with the eigenvectors being all the vectors perpendicular to the projected-on surface.
We don't want that. We want to redescribe the existing space with new coordinates, not get rid of one basis vector and project the vector space onto something.
I think we want v·v > 0 (the length of a nonzero vector v should be positive) for all nonzero v here, i.e. v·v = v^T G v > 0, which implies Gv ≠ 0 for v ≠ 0. This means that the columns of G are linearly independent, hence G is invertible.
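A small numerical sketch of this point (my own example matrices, not from the lecture): a symmetric matrix with a zero eigenvalue fails as a metric because it is not invertible and assigns zero "length" to a nonzero vector.

```python
import numpy as np

g_good = np.array([[2.0, 1.0],
                   [1.0, 2.0]])    # symmetric, eigenvalues 1 and 3
g_bad = np.array([[1.0, 1.0],
                  [1.0, 1.0]])     # symmetric, eigenvalues 0 and 2

print(np.linalg.eigvalsh(g_good))  # no zero eigenvalues -> invertible
print(np.linalg.det(g_bad))        # 0: this one collapses a direction

# The vector killed by g_bad has "length" zero despite being nonzero:
v = np.array([1.0, -1.0])
print(v @ g_bad @ v)               # 0.0
```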
How long until the professor asks for a like and subscribe?
He has tenure. He don't give a fuck about likes.
Brilliant teacher! Thank you so much sir! Toda.
damn, these videos are 11 years old. Is this guy still alive?
Lenny is still going.
Great lecture series I must say, but it bugs me that he keeps calling the whiteboard a "blackboard". It is really inconsequential for the overall quality though.
Is it only me who sees the playlist, Lecture Collection | General Relativity, in reverse order? That is, in my browser, the list of videos places the first lecture at the bottom.
It would, of course, be neat to have the first lecture at the beginning of the list.
If a vector like velocity is zero in one frame, won't it be nonzero in another frame moving relative to it?
Normal velocity is not a tensor and so does not follow this rule. That's why we use 4-velocity which is a tensor. Only things moving at the speed of light have a 4-velocity of magnitude zero hence we agree on the speed of light in all reference frames.
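A quick numpy check of this (signature and units are my choice, with c = 1): a massive particle's 4-velocity has Minkowski magnitude -1, while a light-like 4-vector has magnitude exactly 0, in every frame.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-+++)

def minkowski_norm2(u):
    # "Squared length" of a 4-vector with respect to eta.
    return u @ eta @ u

# 4-velocity of a particle moving at speed v = 0.6 (gamma = 1.25):
gamma = 1.25
u = gamma * np.array([1.0, 0.6, 0.0, 0.0])
print(minkowski_norm2(u))              # -1.0 (timelike, normalized)

# Light-like vector: equal time and space parts.
k = np.array([1.0, 1.0, 0.0, 0.0])
print(minkowski_norm2(k))              # 0.0
```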
The last student's question on the identity matrix - wtf. How low is the standard to get into Stanford?
Interestingly enough, from the way he calculated the independent components, an n-dimensional space will have T(n) independent components, where T(n) is the nth triangular number in the sequence 1, 3, 6, 10, 15, ...
it's simply n*(n+1)/2, it can be proved in linear algebra ;)
@@NicoBattelli Also known as the sum from 1 to n.
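A one-liner to check the n*(n+1)/2 formula mentioned above (diagonal entries plus off-diagonal pairs of a symmetric matrix); for the 4D metric of GR this gives the familiar 10:

```python
def independent_components(n):
    # n diagonal entries + n*(n-1)/2 independent off-diagonal entries
    # of a symmetric n x n matrix = n*(n+1)/2
    return n * (n + 1) // 2

print([independent_components(n) for n in range(1, 6)])  # [1, 3, 6, 10, 15]
print(independent_components(4))                         # 10, the GR case
```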
It imbedded in temperature
hey, what do you mean with "dual to that space"?
I woke up watching. Guess sleepy me is smart
Me to its been 35 years sence I did calculate
What math you need for taking this course?
In Susskinds own words: “A number of years ago I became aware of the large number of physics enthusiasts .... so I started a series of courses on modern physics ….. specifically aimed at people who know, or once knew, a bit of algebra and calculus, but are more or less beginners.”
Its energy of existence is time there in there is no space. Because you can change one position or point in existence forever without space time.
I was confused at minute 59, when he seems to imply that tensor products are commutative; my understanding is that they are not.
What's happening is that he writes tensors in terms of components, and a single component of a tensor is just a number. So when you write two tensors in terms of their components, you can juggle the order without it mattering, because all you are ever doing is multiplying two numbers. That is not the same thing as saying that two tensors commute under a tensor product; generally they will not.

It is partly why index notation is so cool: you can forget about all the non-commutative properties when dealing only with the components. The non-commutativity reappears when you come to actually write out a proper tensor product, because nth-rank tensors are n-dimensional arrays in which the location of the components does matter, and suddenly you are forced by the index notation to compute the components and put them in the right place. So you lose and then recover the non-commutativity by writing out tensors in terms of their components.
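A tiny numpy sketch of this point (my own toy vectors): component-wise, T^i W^j = W^j T^i since they are just numbers, but the assembled arrays T⊗W and W⊗T are different objects (transposes of each other, not equal).

```python
import numpy as np

T = np.array([1.0, 2.0])
W = np.array([3.0, 4.0, 5.0])

TW = np.einsum('i,j->ij', T, W)   # rank-2 tensor with components T^i W^j
WT = np.einsum('i,j->ij', W, T)   # components W^i T^j

# Each individual product of components commutes...
assert TW[0, 2] == T[0] * W[2] == W[2] * T[0]

# ...but the tensors themselves differ:
print(TW.shape, WT.shape)         # (2, 3) vs (3, 2)
print(np.array_equal(TW, WT.T))   # True: one is the transpose of the other
```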
Why can't the eigenvalues of the metric tensor be zero?
Because if they were, some nonzero vector would have magnitude zero and the metric would not be invertible, which makes no sense here.
Please, can you add subtitles?
thanks old man !
Can we take a derivative of a vector?
Yes. That is, you can take the derivative of a vector-valued function.
You can use derivatives to find the gradient of a vector field, etc.
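A minimal numerical sketch (my own example): differentiating a vector-valued function r(t) component by component with a central finite difference.

```python
import numpy as np

def r(t):
    # Position vector tracing a unit circle.
    return np.array([np.cos(t), np.sin(t)])

def dr_dt(t, h=1e-6):
    # Central finite difference, applied to each component at once.
    return (r(t + h) - r(t - h)) / (2 * h)

t = 0.5
print(dr_dt(t))   # approximately [-sin(0.5), cos(0.5)]
```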