Comments •

  • @Gismho
    @Gismho 2 years ago +26

    Yet another FIVE STAR explanation. Thank you. Extremely instructive, most interesting and very well presented with good diagrams.

  • @virati
    @virati 6 years ago +57

    You've really got a gift. Let us know if we can support you somehow, would be great to do our part to keep these going!

    • @eigenchris
      @eigenchris 6 years ago +30

      Thanks. I am strongly considering making a Patreon. If I do, I'll upload a video announcement.

  • @moardehali
    @moardehali 1 year ago +3

    Perhaps the greatest teachings on tensors. How superior these videos are compared to tensor videos from universities such as Stanford or MIT.

  • @kansuerdem2799
    @kansuerdem2799 5 years ago +25

    What can I say? Thank you, like the rest? ... No, no ... I have never learned so much in such a short time... I am starting to believe that I am really smart... :) You are an "eigen-value" of teaching.

  • @alberto1854
    @alberto1854 4 years ago +18

    The best virtual class I have ever watched. Your entire course is just fantastic. Thanks a lot for sharing your deep understanding of such quite abstract concepts.

  • @luckyang1
    @luckyang1 6 years ago +14

    Perfect! Like every single video you made on the subject. I have never read anything so clear and detailed about the relationship of the different operators.

    • @philwatson698
      @philwatson698 6 years ago +3

      Loved this too. Great! Sorry - my public comment button won't work, so I have had to put this as a reply to someone else's.

  • @twistedsector
    @twistedsector 4 years ago +7

    This is some god-tier teaching right here.

  • @ΑυλίδηςΔημήτρης
    @ΑυλίδηςΔημήτρης 6 years ago +32

    The journey to the planet of tensors is still going on smoothly. Houston, I think we do not have a problem. The captain is the best. Roger and out.

    • @adityaprasad465
      @adityaprasad465 5 years ago +1

      LOL. I think you mean "over and out" though.

  • @fernandogarciacortez4911
    @fernandogarciacortez4911 3 years ago +6

    What a great video indeed. I have learned a fair share of differential geometry up to this point, but all my relativity books lack an explanation/clarification of this subject, the differential operator.
    I thought I was going to be fine leaving one of my books aside (Tensors, Differential Forms, and Variational Principles by Lovelock) since I was already reading Kreyszig's book on differential geometry, but they each have their strengths.
    As mentioned in this video, I looked for the video on Raising and Lowering indices and ended up making notes for:
    - What are covectors (Tensors for beginners 4)
    - Tensor product (Tensors for beginners 15)
    - Lowering/Raising (Tensors for beginners 16)
    - Differential forms are covectors (Tensor calculus 6)
    - Covector field components (Tensor calculus 7)
    - Covector field transformation rules (Tensor calculus 8)
    - Integration with differential forms (Tensor calculus 9)
    - And finally, the 13th video in the tensor calculus playlist, which brought me here; I will finally be able to take proper notes on it.
    Thanks a LOT, Eigenchris. I will surely go back and check out the other videos later, they are such a great companion to normal textbooks. Between the color-coded letters, ease of explanation, just perfect.
    Rethinking/Redefining old concepts is what makes these more advanced subjects a bit more complicated. We were told X thing has this and that. Your 'motivations' are awesome man.
    I'm sure many professors lack this understanding of concepts. What a gift to your viewers.

  • @mtach5509
    @mtach5509 2 years ago +2

    I LOVE YOUR LECTURES. YOU ARE A VERY GOOD TEACHER AND ALSO SHOW DEEP UNDERSTANDING.

  • @hushaia8754
    @hushaia8754 3 years ago +3

    Excellent videos! These videos really clear the fog and I'm a Math/Physics teacher (I have never studied GR). I agree with Diego Alonso's comment!

  • @chimetimepaprika
    @chimetimepaprika 4 years ago +7

    Don't get me wrong; I like Khan and 3B1B and many others, but EigenChris has my favorite teaching and visual style.

  • @gamesmathandmusic
    @gamesmathandmusic 1 year ago +1

    Accidentally hit dislike but corrected it to a like. Love the series.

  • @animalationstories
    @animalationstories 5 years ago +5

    I can't thank you enough, you are a gem.

  • @xueqiang-michaelpan9606
    @xueqiang-michaelpan9606 4 years ago +3

    I feel I finally understand the gradient. Thank you so much!

  • @operatorenabla8398
    @operatorenabla8398 3 years ago +2

    This is just incredibly clear

  • @VEVO.official-YT
    @VEVO.official-YT 7 months ago

    Well-balanced class, thank you. Hope there are more soon.

  • @mjackstewart
    @mjackstewart 3 years ago +2

    You’re doing the Lord’s work with these series! Bravo!
    I do have one question. I’m getting better at canceling the superscripts and subscripts, but I don’t immediately recognize when I’ve got the Kronecker delta. In which situations does this happen? Or am I lacking the intuition to see when this is actually happening?

    • @eigenchris
      @eigenchris 3 years ago

      Is there a particular example you can point to in this video or another video where it's not obvious to you? I just think of the Kronecker delta as the "identity matrix". It's what you get when you multiply a matrix with its inverse. (or in Einstein notation, when you sum matrix components with its inverse matrix components.)

    • @mjackstewart
      @mjackstewart 3 years ago

      @@eigenchris It’s at 19:07 and, ironically, where you multiply the vector metric tensor with the covariant metric tensor.
      I get that those should produce the identity matrix, or I guess, more properly, the Kronecker delta.
      However, I’m struggling to understand how the g(small)ij subscripts and the g(Fraktur)jk superscripts merge to form the Kronecker delta with ik subscripts and superscripts.
      Superficially, this occurs because of cancellation.
      I’m struggling with the intuition that the i column of the vector forms the identity matrix with the k row of the covector.
      Is what I’m saying making any sense? I’m really rusty with my linear algebra, and tensors are new to me-especially Einstein notation.
      Also, is there a video you recommend on the subscript/superscript cancelation rules for Einstein notation?
      Thank you, Jedi master Obi Wan! You ARE my only hope!
      And thank you so infinitely much for responding after all this time!

    • @eigenchris
      @eigenchris 3 years ago +1

      @@mjackstewart I think my "Tensors for Beginners 16" is where I introduce the idea of raising/lowering indices, and the inverse metric.
      The g with upper indices is DEFINED as the tensor that will sum with the lower-index g to give the Kronecker delta. In the language of matrices, the g-upper matrix is the inverse of the g-lower matrix. It might be good to review the formula for the inverse of a 2x2 matrix if you're not familiar with that (just google "2x2 matrix inverse" to find the formula), and maybe as an exercise: invent 3-4 matrices, calculate their inverses, and multiply each by the original to see that they give the identity matrix.
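
      A quick numerical version of that exercise (a minimal sketch, assuming numpy is available; the matrix is just an invented stand-in for the lower-index metric):

        import numpy as np

        g = np.array([[2.0, 1.0],
                      [1.0, 3.0]])   # an invented lower-index "metric" g_ij
        g_inv = np.linalg.inv(g)     # plays the role of the upper-index metric g^jk
        print(g @ g_inv)             # the identity matrix, i.e. the Kronecker delta
        print(g_inv @ g)             # identity again, in the other order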

  • @danlii
    @danlii 2 years ago

    Hi! I want to say I love your videos. One question: in video 11 you said that the metric tensor in polar coordinates is the identity when one uses the normalised definition of the theta basis vector. So, shouldn’t that mean that the gradient in polar coordinates should look the same as in cartesian coordinates when one uses the normalised definition of the theta basis vector?

    • @eigenchris
      @eigenchris 2 years ago +1

      If you start artificially normalizing basis vectors to length 1, you can no longer use the assumption that "basis vectors = tangent vectors along coordinate curves". So you can no longer expand a vector using the multivariable chain rule without adding extra "fudge" factors. The "r" factor is one of the 'fudge factors' you would need to add when expanding vectors in a basis like this. So the formula ends up being the same.
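
      As a worked sketch of that "fudge factor" in polar coordinates: with the coordinate basis e_r = ∂/∂r, e_θ = ∂/∂θ, the metric is diag(1, r²) and its inverse is diag(1, 1/r²), so
        ∇f = (∂f/∂r) e_r + (1/r²)(∂f/∂θ) e_θ.
      Substituting the normalized basis vector ê_θ = (1/r) e_θ (i.e. e_θ = r ê_θ) absorbs one factor of r and gives the familiar textbook form
        ∇f = (∂f/∂r) ê_r + (1/r)(∂f/∂θ) ê_θ.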

  • @robertprince1900
    @robertprince1900 2 years ago

    I think confusion arises because you look at the basis and components separately to see if a covariant or contravariant transformation applies, whereas your students are defining the whole VECTOR as one or the other depending on how the basis transforms. You also use "covariant vector field" for what looks like a scalar, df, but as defined by piercings it must be a covector, since it eats a vector; it is strange that its basis elements look like scalars too.

  • @KuroboshiHadar
    @KuroboshiHadar 3 years ago +1

    Hello, this might be lost because of how long it's been since the vid was posted but I'm a little confused with your use of notation... Like, previously it was more or less defined that del/del(x) (for example) is a basis vector in the cartesian coord system (in the x direction) and that dx is a basis covector field in the x direction. Yet, in this video, you use d(something) as covector basis, but e(something) as vector basis, instead of the usual del/del(something)... Are those two the same and you used e(something) for the sake of clarity or is there some difference I'm missing?

    • @eigenchris
      @eigenchris 3 years ago +2

      ∂/∂x and e_x are different notations for the same thing in tensor calculus. Does that clear things up?

    • @KuroboshiHadar
      @KuroboshiHadar 3 years ago +3

      @@eigenchris It does, thank you! Surprising you still replied after all this time, thank you very much =D

  • @orchoose
    @orchoose 4 years ago +2

    I was studying using Gravitation by MTW and it can get really confusing. IMO they use "gradient" in the sense that it's the "GENERAL" gradient, and in flat space the Levi-Civita connection coefficients are zero and the covariant derivative becomes the gradient. In the same sense that relativistic equations change to Newtonian ones at low speeds.

  • @vajis4716
    @vajis4716 2 years ago +1

    11:40 Can I simply divide the equation by the metric tensor instead of multiplying by the inverse metric tensor? Is it possible to do such an adjustment in Einstein summation notation, like with normal equations from elementary and high school? Thanks.

    • @eigenchris
      @eigenchris 2 years ago +1

      It's not that simple. You have to remember the Einstein notation represents a sum (in 3D, you could write it out as a sum of 3 individual terms, each with a different metric component). Since each term in the sum has a different metric component, you can't just "divide" to get rid of them. You need to use the special rule with the inverse metric.
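
      As a small worked case in 2D: if w_i = g_ij v^j, then w_1 = g_11 v^1 + g_12 v^2, so there is no single number to "divide" by. Summing against the inverse metric instead gives
        g^ki w_i = g^ki g_ij v^j = δ^k_j v^j = v^k,
      which is the raising rule that plays the role of division here.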

  • @JgM-ie5jy
    @JgM-ie5jy 6 years ago +2

    This lecture seemed less exciting than the previous one. Even though it is less inspiring, you managed to put in an absolute gem: reconnecting with the traditional way of defining the gradient as a linear combination of partial derivatives. This is yet another vivid example of teaching instead of mere telling.

  • @khalidibnemasood202
    @khalidibnemasood202 5 years ago

    Hi, at 10:19 you write df = (del_f/del_c^i)*dc^j. Is that correct? Shouldn't the indices on the c match? That is, I think it should be df = (del_f/del_c^i)*dc^i.
    In any case, your videos are awesome. Are you planning to make a series on differential geometry?
    Thanks for your good and hard work.

    • @khalidibnemasood202
      @khalidibnemasood202 5 years ago

      Ah, I see it is corrected at 10:42. I was just confused.
      Thanks

  • @mtach5509
    @mtach5509 1 year ago

    Another way to see the duality related to the gradient: times the position vector, which gives the direction along the position vector or along the covector, i.e. the gradient. But I think it is most used with reference to the direction of the differential position vector, i.e. dx and dy - as it is also a unit vector, and the meaning - again in the eye of the beholder - is the gradient (a number) along the unit dx, dy vector.

  • @robertforster8984
    @robertforster8984 3 years ago +2

    You are amazing. Do you have a Patreon page where I can make a donation?

    • @eigenchris
      @eigenchris 3 years ago

      I have a ko-fi page here: ko-fi.com/eigenchris
      Thanks!

  • @hotchmery
    @hotchmery 1 year ago

    I think there's a mistake at 10:44, the starred equation should have repeated indices instead of i and j. Great video, thanks!

  • @drlangattx3dotnet
    @drlangattx3dotnet 5 years ago

    At 9:03 you say this is the formula for the directional derivative. Do you mean for the components of the directional derivative?

  • @robertprince1900
    @robertprince1900 2 years ago

    Typically the gradient is covariant (unless you want to use the metric to make it contravariant) and you dot it with a vector to get the scalar df/ds, or just df if you want to leave out the denominator.
    It seems like you are taking the df part and simply defining an operation df(v), where you plot out the df level sets and count piercings to get the same answer as grad f dot v.
    I'm not sure what that accomplishes - or am I missing something? Is it just a reimagining using "piercings" instead of "increases most in the direction of the gradient" to envision the result?

  • @gaiuspliniussecundus1455
    @gaiuspliniussecundus1455 1 year ago

    Great videos, and courses in fact. Any good books on tensor calculus for beginners? A companion to these videos?

    • @eigenchris
      @eigenchris 1 year ago +1

      Sorry, but I can't recommend any. I learned from lots of random articles and pages.

  • @mtach5509
    @mtach5509 1 year ago

    The most important property of a covector is that it is aligned at 90 degrees to its paired vector - both start from the coordinate origin, (0,0).

  • @armannikraftar1977
    @armannikraftar1977 5 years ago +3

    Amazing video again.
    Just wanted to point out that at 10:20, df should equal (partial f/partial c^i)*d(c^i) instead of (partial f/partial c^i)*d(c^j).
    If not, then I have no idea what we're doing here :D.

    • @Salmanul_
      @Salmanul_ 3 years ago +1

      Yes, you're correct.

  • @astronomianova797
    @astronomianova797 3 years ago

    I don't think the notation description is quite right (I think the math is fine): The gradient is exactly what MTW's Gravitation defines it to be, a one-form (lower index). It is naturally a one-form (covariant) by its definition. The exterior derivative uses the same notation because a gradient is one example of the more general category of exterior derivative. A gradient takes a 0-form to a 1-form. What takes a 1-form to a 2-form or 2-form to a 3-form? The exterior derivative. So what is the del operator that everyone calls the gradient (only in 3-D vector calculus)?
    It is the gradient (simply defined as the partial of some field, f, with respect to some coordinates; covariant) raised to contravariant by contracting with the metric. So what is being done here is fine except what he's actually doing is contracting again with the metric to lower the index without mentioning the only way to get a gradient with an upper index (gradient vector) is to contract it with a metric first. (Said another way: The del operator is not defined in this video. If it was you would see it must be a partial derivative, with lower index, then raised by the metric to get an upper index.) Edit: yet another way: he should have just started with the equation he ends up with around 12:00
    Final note: in relativity you don't use the del operator for the gradient because that is commonly used for something else; the covariant derivative.

  • @ChienandKun
    @ChienandKun 5 years ago

    Hi there, I'm really enjoying your channel. Excuse this amateur question, but I'm wondering: What is the relationship between Helmholtz's decomposition theorem and the classification of Covariant and Contravariant vectors?
    One might assume that as a gradient is considered to be a covariant vector (designated with 'upper' indices), and that as you've shown, these vectors are products of acting on scalar fields, and cannot be 'raised' to a higher level tensor by the same operation. Contravariant vectors then, would be assumed to be 'sinusoidal' in nature, since they can be 'curled' into a 2nd rank tensor. My confusion here is that 1. I've never seen anyone make this association explicitly. 2. If, indeed according to Helmholtz every vector consists of a superposition of these two types of vectors, there does not seem to be a way to add covariant vectors to contravariant ones (with the same indices), in tensor notation.
    Thanks, sorry again for what might be a silly question.

    • @eigenchris
      @eigenchris 5 years ago +2

      I never actually covered Helmholtz decomposition in school. Wikipedia says it's about writing a vector field as a sum of a gradient plus a curl. Is that right? My understanding is that this is a statement purely about vectors (aka contravariant vectors), and doesn't relate to covectors. Adding vectors and covectors is not something you can really do, as they live in different vector spaces.
      I feel I have not answered your question. Can you be more specific about how you think covectors are related to the Helmholtz decomposition?

    • @ChienandKun
      @ChienandKun 5 years ago

      @@eigenchris Hi, thanks for the reply. In a lot of the material I've read, a gradient is given as an example of a 1-form, and is expressed with upper indices in the tensor notation (like you're doing here). The fact that the page you cited on gradients only used the word 'vector' does not necessarily indicate it meant 'contravariant vector', since that distinction isn't usually made when del operations are introduced, or in the realm of 3-D vector calculus in general, really. I think I'm trying to close that gap with this identification I'm assuming here. Helmholtz simply says that there are two kinds of vectors. The symmetry of these two kinds of vectors is on display within Maxwell's equations. The static electric field cannot be curled, but does have a divergence, whereas the magnetic field can be curled but has zero divergence. Helmholtz's theorem simply says that any vector is a combination of these two kinds of vectors. Now, since gradients are associated with always carrying upper indices, it's my assumption that lower indices are reserved for the other type of vectors. This should probably mean, too, that if one lowers the indices of a gradient, it should be possible to curl the resulting (contravariant) vector. Adding these two types of vectors is helpful in fixing gauges. The Lorentz gauge involves adding an arbitrary gradient to the magnetic vector potential. But, as I said, and you confirmed, if we are to consider the magnetic vector potential to be contravariant, and the gradient to be covariant, there would be no way to add them in tensor notation. It does seem that vectors in an orthogonal (or orthonormal) basis are superpositions of covariant and contravariant vectors, since they are 'equal'. Although, they are not equal in the sense that the vectors of the contravariant frame can be operated upon through the curl, and the covariant frame cannot.
      That's as far as I've thought this through lol, thanks for indulging me.

    • @ChienandKun
      @ChienandKun 5 years ago

      *Correction. According to Wiki, covectors are denoted by lower indices, and contravectors are denoted by upper ones. That is, unless you're referring to basis vectors. A lot of the studying I've done has been in Geometric Algebra, and I guess to be different, they switched this convention. This doesn't really affect my question regarding the nature of the two sorts of vectors though; I just wanted to point this out in order to mitigate confusion. Sorry.

    • @ChienandKun
      @ChienandKun 5 years ago

      The article does sort of make my point for me though, by listing velocity, acceleration and jerk as contravariant vectors. All of these vectors may be curled, and have no divergence, as I suspected. The magnetic vector potential is analogous to the velocity, and the acceleration is analogous to the dynamic electric field. In electrodynamics, the two electric fields, static (covariant) and dynamic (contravariant), would be added together.
      This leads me even more strongly to identify covectors with gradients and contravectors with 'sinusoidal' vectors.
      en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors

    • @eigenchris
      @eigenchris 5 years ago +2

      @@ChienandKun I'm sort of losing you again, when you suggest that static fields are covariant and dynamic fields are contravariant. Also not sure what you mean by "sinusoidal vectors".
      Let me take a step back... as you mentioned, when a student first studies vector calculus, they learn div, grad, curl, and they also only learn about vectors (no mention of covectors). All of their calculations focus on 3D space with vectors only. This is important because the cross product and curl operations only make sense in 3D space. The Helmholtz decomposition formula is also only true for 3D space, since it involves the curl.
      So if you're looking for the standard undergrad vector calculus interpretation of the Helmholtz formula, forget about covectors. Helmholtz decomposition is a statement about vector fields only, and it only works in 3D space. The output of the gradient is a 3D vector field, and the output of the curl is a 3D vector field. The sum of the two is another 3D vector field. No need to worry about covectors at all.
      Now, as a student studies more math, they will need to generalize vector calculus to higher dimensions. This means that they need to abandon the ideas of "curl" and "cross product", as they don't make sense in 4 dimensions or higher. In order to express the idea of "curls" and "rotations", instead of using the cross product, they will use the wedge product from exterior algebra. The difference between the electric field and magnetic field is not about covariant/contravariant vectors. Instead it is about the part of the exterior algebra that they live in. You might be interested in reading this article to learn more (particularly the parts about geometric algebra and differential forms): en.wikipedia.org/wiki/Mathematical_descriptions_of_the_electromagnetic_field
      I hope I haven't confused you too much, but in short, I don't think you should try to think of E and B in terms of covariant/contravariant vectors. In a first E&M course, they are just vector fields, full stop. In more advanced E&M courses, they can be treated differently (with exterior algebra).

  • @dsaun777
    @dsaun777 1 year ago

    Is this differential operator, d, different from the differentials used in the line element ds^2=g(dx,dx)?

    • @eigenchris
      @eigenchris 1 year ago +1

      I'm honestly not sure what the "d" in "ds^2" really means. I think it's mostly a notational shortcut, telling you that you can replace ds with sqrt(g_uv dx^u dx^v).
      However you can interpret the "d" in most integrals to indicate integration over a differential form. I think I show this in video 9 or 10.

  • @stevenhawkins9962
    @stevenhawkins9962 4 years ago +1

    At approx. 10 min 40 s, in the equation at the bottom of the screen, df = partial f/partial c^j = ..., why is the denominator partial c^j and not partial c^i? I've guessed they are the same thing in this equation but I wanted to check.

    • @active285
      @active285 3 years ago

      "c" here is the chosen coordinate system, so dc^j is just the corresponding basis 1-form. In "normal" Cartesian coordinates you might know the notation dx^1, ..., dx^n; it then just follows from the definition of the exterior derivative applied to a function f that
      df = ∂_i f dx^i (with Einstein notation),
      so here
      df = ∂f/∂c^j dc^j.
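
      As a minimal worked case: in Cartesian coordinates with f(x, y) = x²y, this recipe gives df = 2xy dx + x² dy.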

  • @robertprince1900
    @robertprince1900 2 years ago

    (Also thanks for great content!!)

  • @tomaszkutek
    @tomaszkutek 3 years ago

    At 13:44, the partial(f)/partial(c^k) are gradient components but are covariant. However, vector components should be contravariant.

    • @eigenchris
      @eigenchris 3 years ago

      If we want to be technically correct, we should leave that Kronecker delta in there, because both the partial(f)/partial(c^k) and the e_k are covariant, so we need a 2-contravariant Kronecker delta to make the Einstein summation make sense. However, I wanted to show the link with what you'd normally see in a calc 2 or 3 class, so I abused notation somewhat and cancelled the j indices.

    • @tomaszkutek
      @tomaszkutek 3 years ago

      @@eigenchris Thank you for the explanation.

  • @PM-4564
    @PM-4564 2 years ago

    I'm confused because I've read in multiple places that the gradient is a covariant vector, which would seem to indicate that Nabla(F) = df/dx_i * e^i, where e^i is the contravariant basis (contravariant basis for a covariant vector). But now I read on Wikipedia that the gradient uses the covariant basis, which would seem to indicate that it's a contravariant vector... If a gradient is a vector field, wouldn't it use the covariant basis so that its vector field is contravariant? Not sure why I keep reading that it's a covariant vector.

    • @eigenchris
      @eigenchris 2 years ago

      I tried to explain this at the beginning, but different sources use the word "gradient" to mean different things. Sometimes it's the covector field "df" (covariant), and sometimes it's the vector field "∇f" (contravariant).

    • @PM-4564
      @PM-4564 2 years ago

      @@eigenchris This just occurred to me: If ∇f is contravariant, then ∇f = (df/dx)e_x + (df/dy)e_y.
      But e_x = d/dx, so ∇f = (df/dx)(d/dx) + (df/dy)(d/dy)... strange.
      Is that technically correct? because it looks strange to have two sets of (d/dx) in the expression. (And thanks for the clarification that ∇f = contravariant).

    • @eigenchris
      @eigenchris 2 years ago +1

      @@PM-4564 ∇f is not equal to (df/dx)e_x + (df/dy)e_y. This is only true in the special case of Cartesian coordinates. The correct formula is ∇f = g^ij (df/dx^i)(∂/∂x^j), involving the inverse metric. The components are g^ij (df/dx^i), which are contravariant. In Cartesian coordinates, g^ij is the identity matrix, which is why we get the first formula above.
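
      A short symbolic check of this formula in polar coordinates (a sketch, assuming sympy is available):

        import sympy as sp

        r, theta = sp.symbols('r theta', positive=True)
        f = sp.Function('f')(r, theta)

        g_lower = sp.Matrix([[1, 0], [0, r**2]])            # g_ij for polar coordinates
        g_upper = g_lower.inv()                             # g^ij = diag(1, 1/r^2)
        df = sp.Matrix([sp.diff(f, r), sp.diff(f, theta)])  # covariant components of df
        print(g_upper * df)   # [df/dr, (1/r^2)*df/dtheta]: contravariant components of ∇f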

    • @PM-4564
      @PM-4564 2 years ago

      @@eigenchris Yeah, sorry, I should have said assuming Cartesian coordinates. Thanks for the reply - and thanks for this series.

  • @longsarith8106
    @longsarith8106 1 year ago

    Excuse me, teacher. What is the difference between the total derivative and the exterior derivative?

    • @eigenchris
      @eigenchris 1 year ago

      The "total derivative" treats "dx" as meaning "a little bit of x". I don't think it has a very formal meaning (at least, as far as I know). The exterior derivative treats "dx" as a covector whose stacks match up with the level sets of the "x" values throughout space. The formulas look the same, but their meaning is different.

  • @observer137
    @observer137 5 years ago

    I find an asymmetry that makes me uncomfortable and confused. At 16:18, why does del f have metric tensor components while df does not?

    • @eigenchris
      @eigenchris 5 years ago

      df is a covector field. It can be written as a linear combination of basis covector fields like dx and dy using the chain rule. No metric tensor is needed.
      To convert covector components into vector components, we need the metric tensor components to do the conversion. This is true for all covector/vector pairs. df and del f are just one example.

  • @qatuqatsi8503
    @qatuqatsi8503 1 year ago

    Hey, at 10:41 is it meant to be ∂f/∂c^j rather than ∂f/∂c^i?

    • @eigenchris
      @eigenchris 1 year ago

      Yes, my bad. That's a typo.

  • @andrewmorehead3704
    @andrewmorehead3704 3 years ago

    Does this have to do with Riesz Representation in Linear Algebra?

    • @eigenchris
      @eigenchris 3 years ago

      Sorry, but I don't know what that is.

  • @brk1953
    @brk1953 2 years ago

    It seems good, but long, to deal with tensors in this way, but I don't like the concept of a covector.
    The gradient is still a vector and df is an invariant scalar. What you try to do is to write the components of the gradient as covariant components by multiplying the metric tensor with the partial derivative of the scalar field with respect to a certain coordinate. Good luck.
    BASSEM FROM QATAR

  • @temp8420
    @temp8420 1 year ago

    In your notes you say df transforms like a contravariant object, but here you say it's a covariant object - maybe you are saying the covariant object has contravariant basis vectors, which is true, but I can't unpick the naming.

    • @eigenchris
      @eigenchris 1 year ago

      df is a covector... basis covectors transform contravariantly, but covector components transform covariantly. Whenever you change coordinates, the covariant/contravariant changes balance out so that the object remains invariant.
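
      As a sketch of how the two rules cancel, writing the chain rule in both directions:
        df = (∂f/∂x̃^j) dx̃^j = (∂f/∂x^i)(∂x^i/∂x̃^j)(∂x̃^j/∂x^k) dx^k = (∂f/∂x^i) δ^i_k dx^k = (∂f/∂x^i) dx^i,
      since the forward and backward Jacobians sum to the Kronecker delta: the components pick up one Jacobian (covariant) and the basis covectors pick up its inverse (contravariant), so df itself is unchanged.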

    • @temp8420
      @temp8420 1 year ago

      Many thanks for the reply - if I expand du into du^i * e^i, where the second index is up and the first is down, then I can get what you are suggesting. Does that sound right? I don't know how you find the time to get the notes so detailed and reply to comments. Thanks again.

    • @temp8420
      @temp8420 1 year ago

      Unfortunately, when I first learned this it was the old-style component notation, and I find the new notation very confusing - it used to be that a tangent was always a vector, or contravariant, and the gradient gave a covariant object.

    • @eigenchris
      @eigenchris 1 year ago +1

      Sorry, your reply got marked as "spam" so I didn't see it until now. I'm not sure if it's because your username is "Temp"? Maybe not, but I'm not sure why else it happened. Possibly something to consider when leaving comments on future videos.
      When we expand "du", we get du = (∂u/∂x^i) dx^i, where (∂u/∂x^i) has a lower index beneath the fraction line (covariant) and dx^i has an upper index (contravariant). Tangents are usually interpreted as vectors. The "gradient" of a function is something different--it is not produced from a curve; instead it is produced from a scalar field. The "df" level sets are a covector field and the "∇f" gradient is a vector field.

    • @temp8420
      @temp8420 1 year ago

      @@eigenchris Many thanks - it's making more sense now. Having someone respond is incredibly supportive and helpful.

  • @harrisonbennett7122
    @harrisonbennett7122 2 years ago

    Excellent

  • @TmyLV
    @TmyLV 5 years ago

    Great tensor lessons.

  • @deepbayes6808
    @deepbayes6808 5 years ago +1

    Have you considered editing Wikipedia pages?

    • @eigenchris
      @eigenchris 5 years ago +1

      I haven't, but there are several pages that use differing terminology, and the math/physics community doesn't really have a consensus on what's right, so I'm not sure what I'd write.

    • @deepbayes6808
      @deepbayes6808 5 years ago +3

      @@eigenchris If I want to read a book or lecture notes maximally consistent with your notation and definitions, what should I read?

  • @한두혁
    @한두혁 3 years ago

    I understand the gradient now, but is there a way to define divergence and curl?

    • @eigenchris
      @eigenchris 3 years ago +1

      It requires more study, and learning what the "Hodge dual" (star operator) is. Maybe Wikipedia can get you started? en.wikipedia.org/wiki/Exterior_derivative#Invariant_formulations_of_operators_in_vector_calculus
      en.wikipedia.org/wiki/Hodge_star_operator
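
      Roughly, in 3D Euclidean space the standard identities are (a sketch, writing ♭/♯ for lowering/raising an index with the metric and ⋆ for the Hodge star):
        grad: ∇f = (df)♯
        curl: ∇×F = (⋆ d(F♭))♯
        div:  ∇·F = ⋆ d(⋆ F♭)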

    • @한두혁
      @한두혁 3 years ago

      @@eigenchris Thank you!

  • @alexheaton2
    @alexheaton2 1 month ago

    Typo at 10:27. The indices i and j should be the same i and i, or j and j.

  • @drlangattx3dotnet
    @drlangattx3dotnet 5 years ago

    Is there another term for "v dot something"? That sounds a bit strange as a mathematical term.

    • @eigenchris
      @eigenchris 5 years ago

      Not that I'm aware of.

    • @thomasclark7493
      @thomasclark7493 4 years ago

      It's an operator, and it belongs in an operator space. This particular operator is linear and is the dual to the vector v. When I say dual, I mean quite simply that by applying a simple process to the operator, you can obtain the original vector v, and only v. It is in this way that the dual covector belongs to the vector v, and is the unique linear operator which is also the dual to v.
      In much the same way that every point in R^2 can encode a vector, every point in R^2 can also encode a unique operator which takes vectors and gives a scalar. This operator can be obtained by applying the hodge star operator to the vector, and vice versa. In G3, this is represented by taking the dot product of the dual of a bivector with a vector, although this is only one way to represent this.

  • @anthonysegers01
    @anthonysegers01 5 years ago

    Beautiful! Thank You.

  • @anthonyymm511
    @anthonyymm511 2 years ago

    I usually write “del f” to mean the 1-form written here as df, to stay consistent with the notation for covariant derivatives. That way the Hessian is written “del del f” instead of the uglier “del df”. When I want to talk about the vector instead of the 1-form I just write “grad f”.

  • @lt4376
    @lt4376 3 years ago

    15:00 Big point here, in fact I write my normalized basis/unit vectors with a ‘hat’ to emphasize their magnitude is 1, as opposed to an unnormalized basis/unit vector.

  • @dylanledermann8629
    @dylanledermann8629 1 month ago

    15:10 This is wrong. If they are normalized, then the metric tensor is already the identity matrix in that particular basis since the basis vectors in polar coordinates are orthogonal and due to the 1/r factor in front of the e(theta) basis vector, they will also be normalized. The inverse matrix of the identity is the identity. In this case, the ''gradient'' of a function will have the same components as its one-form. It doesn't even make sense for there to be one 1/r factor in front of d/d(theta) of f.

    • @eigenchris
      @eigenchris 1 month ago

      The issue here is that the standard θ variable increases from 0 to 2π for each circle everywhere in the 2D plane. If you want to change variables to θ_textbook (I'll write it as "θ_txt") everywhere, you would have to set θ_txt = r*θ, in all formulas. This means you have to make every coordinate circle have a different "length" in terms of θ_txt. For example, at a circle at r=2, θ_txt would increase from 0 to 4π. This is what is required to make the θ_txt tangent vectors have length 1 everywhere in the 2D plane. You have to "slow down" the increase of θ_txt for larger circles so that their tangent velocity vectors all have length 1.
      The formula at 15:10 uses a mixture of both θ and θ_txt, and that's why it doesn't look like you'd expect using your reasoning with the inverse metric. You'd have to change the ∂/∂θ derivative to a ∂/∂θ_txt derivative as well, which would absorb the extra factor of "1/r".

  • @pferrel
    @pferrel 1 year ago

    I'm still confused about what a covector and dual space actually are. Is a dual space the space of all functionals that can be associated with V or all "dot functionals" associated with vectors in V? Is the dual space a set of functions? When you called covectors column vectors this confused me because I'm not sure what the difference is between row and column vectors other than they fit into the linear algebra rules and functions/operators in different ways. Is a dual space just a set of rules that can be applied to V?
    Here you call df a covector, leading me to think covectors and the dual space define functionals, df being one. 3:43 But now I don't get why covectors can be seen as a stack of level sets. If you pick a particular function, I get that the output could be seen as level sets - is this what is meant?

    • @eigenchris
      @eigenchris 1 year ago +1

      The dual space V* is defined as the set of all linear maps that take vectors from V and output scalars. Members of the dual space go by many different names, such as "dual vectors", "covectors", "linear functionals", and "bra vectors".
      When it comes to rows and columns, by convention I write vector components (contravariant) as columns and I write covector components (covariant) as rows.
      Have you watched my "Tensors for Beginners #4" video? I go over how to draw the level sets for linear functionals. They always end up being a stack of equally-spaced planes.
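
      A tiny numerical illustration of that row/column convention (a sketch, assuming numpy; the numbers are invented):

        import numpy as np

        v = np.array([[2.0],
                      [1.0]])            # vector components, written as a column
        alpha = np.array([[3.0, -1.0]])  # covector components, written as a row
        print(alpha @ v)                 # [[5.]]: the scalar the covector assigns to v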

    • @pferrel
      @pferrel 1 year ago

      @@eigenchris Thanks, I'm starting to grok this. Yes I watched #4 several times and it's finally sinking in. I get the parallel stack analogy but any particular stack must be for a particular functional, not all (covectors) at once so I'll now be able to see the generalization (I hope :-))

  • @Mysoi123
    @Mysoi123 2 years ago

    Thanks!

  • @darkinferno4687
    @darkinferno4687 6 years ago

    When will you do a video about Christoffel symbols?

    • @eigenchris
      @eigenchris 6 years ago +1

      First one will be out this week. There will be at least 5 videos that deal with Christoffel symbols.

  • @michalbotor
    @michalbotor 5 years ago

    Beautiful!

  • @erikstephens6370
    @erikstephens6370 1 year ago

    10:36: I think the j on the dc^j should be an i.

  • @mtach5509
    @mtach5509 1 year ago

    I think every vector could be the covector of another vector, given by a linear map of this vector to another vector, and then the original vector becomes the covector of this linear-map vector.

  • @m.isaacdone5615
    @m.isaacdone5615 5 years ago

    Thanks bro!

  • @EmrDmr0
    @EmrDmr0 2 years ago

    df: covector (1-form)
    del(f): vector
    V: vector
    g( , ) = bilinear form
    Although g(del f, V) = del(f) . V and df(V) = del(f) . V, what's the difference between g(del f, V) and df(V)?
    Thank you!

    • @viliml2763
      @viliml2763 1 year ago

      There is no difference. g(del f, V) = del(f) . V and df(V) = del(f) . V means g(del f, V) = df(V) by transitivity of equality. They are the same scalar.

  • @MTB_Nephi
    @MTB_Nephi 6 years ago

    Please, more computing examples.

    • @eigenchris
      @eigenchris 6 years ago +1

      There's an example of computing the gradient in the next video (video 14).

  • @jeremiahlee6335
    @jeremiahlee6335 2 years ago

    Your convention is the same as in Michael Spivak.

  • @gtf753
    @gtf753 4 years ago

    Great👏👏

  • @kimchi_taco
    @kimchi_taco 1 year ago +1

    Mind blown 🤯 Why don't STEM degrees teach this to students? 😭

  • @Xbox360SlimFan
    @Xbox360SlimFan 6 years ago

    So basically the total differential is a covariant derivative?

    • @eigenchris
      @eigenchris 6 years ago

      I'm not totally sure what you mean by that. When I think of the covariant derivative, I normally think of expressions involving Christoffel symbols.

    • @Naverb
      @Naverb 5 years ago

      Yes. We start with our manifold, embed it in R^n, and note the Covariant derivative is essentially the orthogonal projection of the "total derivative" we know and love in R^n onto our surface (in the sense that the tangent bundle TS for our manifold S is merely the orthogonal projection of TR^n, which makes sense because tangent bundles are vector spaces). OK, so now to answer your question: if our manifold is R^n itself, the "orthogonal projection" is just the identity map, so we find the Covariant derivative really is the total derivative when working in R^n!

  • @smftrsddvjiou6443
    @smftrsddvjiou6443 5 months ago

    Interesting. For me, df was so far a number.

  • @mastershooter64
    @mastershooter64 2 years ago +1

    This "Dee" f you're talking about sounds a lot like the total derivative

    • @eigenchris
      @eigenchris 2 years ago

      Yeah, the formula looks basically the same. Tensor calculus takes symbols from ordinary calculus and re-interprets them to have different meanings. In ordinary calculus, "dx" loosely means "a small change in x", but in tensor calculus, the "d" is re-interpreted to be an operator called the "exterior derivative".

  • @oslier3633
    @oslier3633 4 years ago

    Now I see why the differential is independent of the coordinate system.

  • @klam77
    @klam77 4 years ago

    Ahh...! Brilliant. This is why they (Google) "do" neural nets with TENSORS. It would be super brilliant if you could do a video on tensors applied to neural net representation and optimization (training).

    • @eigenchris
      @eigenchris 4 years ago +2

      I only have a passing familiarity with machine learning and neural nets, but my understanding is that there isn't much in common between the "tensors" used in machine learning and the tensors used in physics/relativity. For ML/NN, I think tensors are simply arrays of data and there isn't much emphasis on any coordinate system changes, coordinate transformations, or geometry. I know TensorFlow is a popular NN coding library, but I doubt it has much in common with this video series. You can feel free to correct me if I'm wrong.

    • @klam77
      @klam77 4 years ago

      @@eigenchris Hi, yes, this is precisely the understanding I am trying to gain. Especially where "gradient descent" is concerned and it is taught as the process of trying to align the "weight vector" appropriately with the input vector via the dot product to achieve the trained output, etc. Unfortunately, I was never taught tensors in school, and so I am digging through slowly with your good videos. But I took a flying leap to the gradient video here. I will go back to the beginner ones. (PS: It scares me that Google would call their routine using the term "tensor" as a cool marketing thing).

    • @eigenchris
      @eigenchris 4 years ago +1

      I wouldn't say it was just a marketing thing. It's just that computer programmers and physicists sometimes use the same words to mean slightly different things. Tensors is one of those words. If you want a good introduction to machine learning (with neural nets, gradient descent, and more), check out professor Andrew Ng's playlist. It is on both YouTube and Coursera.

    • @klam77
      @klam77 4 years ago

      @@eigenchris Indeed, I suspect you are right.
      Still, I will try and absorb this tensor material to see if it provides any closed form analytic expression of backpropagation methods, applied to nested sigmoid functions of vector dot products (which is essentially what a NN is in analytic form). But I mostly realize you're right: if tensors (in the physics sense) were applicable to NN, we would have heard of it by now! Cheers to you, thanks.

  • @drlangattx3dotnet
    @drlangattx3dotnet 5 years ago

    "d c ^j " What the heck is that? Lost me there. How is "d anything" a basis vector?

    • @eigenchris
      @eigenchris 5 years ago +1

      "c" in this case can be any coordinate system. For example, in the Cartesian coordinate system dc^1 is dx and dc^2 is dy.
      You might want to watch videos 6-8 in this series on covector fields / differential forms to understand how dx and dy are a covector basis.

  • @mtach5509
    @mtach5509 1 year ago

    You're wrong; the gradient of a scalar (number) field f is actually a covector, or dual, or 1-form - depending on the context - but dx, dy and dz etc. all relate to the position vector from the coordinate origin. Thus multiplying the gradient (as I said, a covector) with dx and dy gives a dot or inner product, i.e. a number which is the size of the vector in the direction of the position vector. If they are aligned - cos phi = 1 - then this is the maximum vector size, i.e. the gradient is maximum. If the gradient = 0, then it is tangent to the position vector that intersects it at point p (a common point for the gradient and the position vector), and the gradient dot dx, dy is equal to 0.
    Nevertheless the gradient by itself - just by definition - can be considered a vector in itself - hence the duality. Nevertheless the importance and the use of the gradient come after multiplying it with dx and dy - so it is better always to refer to the gradient as a c o v e c t o r.

  • @Schraiber
    @Schraiber 2 years ago

    Wow this one was a mind fuck

  • @danielkrajnik3817
    @danielkrajnik3817 3 years ago

    tl;dr 17:57

  • @shuewingtam6210
    @shuewingtam6210 3 years ago

    At 10:44 there is a written mistake: c sup j, not c sup i.

  • @drlangattx3dotnet
    @drlangattx3dotnet 5 years ago

    At 7:28 the equation has v dot something = vector components times metric tensor times dual basis covectors. But doesn't the dot product yield a scalar?

    • @eigenchris
      @eigenchris 5 years ago

      The dot product with an empty slot is a covector (which is why it's written using the covector/epsilon basis). We get a scalar after we put in a vector input.
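
      A small numerical sketch of "v dot (empty slot)" as a covector (assuming numpy; the metric is just an invented example):

        import numpy as np

        g = np.array([[1.0, 0.0],
                      [0.0, 4.0]])   # an example metric g_ij
        v = np.array([1.0, 2.0])     # vector components v^j
        v_dot = v @ g                # row of covector components v^j g_ji for "v · ( )"
        w = np.array([3.0, 1.0])     # a vector to drop into the empty slot
        print(v_dot @ w)             # 11.0
        print(v @ g @ w)             # 11.0, the same scalar computed directly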

    • @drlangattx3dotnet
      @drlangattx3dotnet 5 years ago

      @@eigenchris Now I understand. Thanks very much for your patience with my questions. I did review lecture 16. Now I will plug away. Just a hobbyist 40 years removed from college math classes.