3/3 Contravariant and Covariant tensor

  • Published: 25 Nov 2024

Comments • 59

  • @renmas00
    @renmas00 3 years ago +27

    Best explanation I could find available online! Thanks a lot.

  • @johnholme783
    @johnholme783 2 years ago +10

    One word: brilliant! I’m currently studying general relativity and this video has been very helpful. First port of call for anybody wanting to grasp the mathematics of GR!

  • @edwardmacnab354
    @edwardmacnab354 8 months ago

    This is the first time I have seen a video that explains this, and CLEARLY explains it. Congratulations, and may you have further success!

  • @ArturoCastan
    @ArturoCastan 3 years ago +6

    Thanks a lot, a very clear demonstration!

  • @RD2564
    @RD2564 2 years ago +1

    You nailed it Fizică, well done.

  • @singkreality3041
    @singkreality3041 3 years ago +8

    Covariant components are calculated in a perpendicular manner; how do we relate this to the physical meaning? Why are covariant components not measured in parallel?

    • @pratikshadash6487
      @pratikshadash6487 4 months ago

      Consider a source of light falling on the vector in a direction perpendicular to one of the axes...

    • @thevegg3275
      @thevegg3275 26 days ago

      It could be, if the definition were changed. By the same argument, we could start calling cats dogs, and dogs cats. It’s just nomenclature tied to a method.

  • @afreenaa6984
    @afreenaa6984 10 months ago

    Good videos and nice presentation. Please go on with more physics areas

  • @guriyakum2971
    @guriyakum2971 1 year ago

    Best explanation ever ❤

  • @thevegg3275
    @thevegg3275 1 year ago +2

    On the graphs for covariant components, I see that i doubled (i to 2i), but I don’t see any component that doubled. I can’t understand your graph.

    • @lucasgroves137
      @lucasgroves137 6 months ago +1

      Is that around 7:30? The parallel and perpendicular components seem to both change contravariantly. I think the graphs in this clip are very poorly done, in particular the representation of unit vectors. But in the stupid era in which we live (Emperor's New Clothes), people wet their pants over mediocre content and call it excellent.

  • @hjpark2177
    @hjpark2177 1 month ago +1

    At 7:47, I think you are wrong. When using covariant components, the basis vectors should be transformed to contravariant basis vectors.

  • @bimalbahuguna215
    @bimalbahuguna215 2 years ago +1

    Nicely explained.
    Thanks

  • @thevegg3275
    @thevegg3275 6 months ago +1

    At 7:50 you state that the vector is a_1 times e_1 plus a_2 times e_2, but you are ignoring the dual basis vectors, which give the covariant expansion as a_1 times e^1 plus a_2 times e^2. I’m wondering whether what you say is true if you only consider the basis vectors e_1 and e_2 and do not consider the dual basis vectors e^1 and e^2.

  • @victoriarisko
    @victoriarisko 6 months ago

    Very nicely done!

  • @ripz7562
    @ripz7562 2 years ago +1

    Brilliant ❤️

  • @vayunandanakishore6652
    @vayunandanakishore6652 10 months ago

    very beautiful explanation..

  • @Dominic94St
    @Dominic94St 3 years ago +2

    Thank you for sharing!

  • @deniz.7200
    @deniz.7200 7 months ago

    Any numerical examples for both transformations?

  • @vivekcome98
    @vivekcome98 2 years ago

    Woaahhh...that was really helpful

  • @DargiShameer
    @DargiShameer 3 years ago +1

    Good explanation

  • @bablurana457
    @bablurana457 3 years ago

    Everyone just writes the formula but doesn't explain it, except you. Thanks!

  • @mdnkhan1964
    @mdnkhan1964 1 year ago

    Very good. Thanks.

  • @SteveRayDarrell
    @SteveRayDarrell 6 months ago

    Let's say you are representing a vector with covariant components in a given basis (e1, e2). You want to change to a new basis (e'1, e'2). Let's say the matrix of the change of basis is A. If the components of the vector are (a,b)^t in the first basis, to get the components in the new basis you have to multiply by A transpose on the left, so (a',b')^t = A^t * (a,b)^t. So the components change with the transpose of A and not with A itself. Is this correct?
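
The transformation rule asked about above can be checked numerically. The NumPy sketch below uses made-up basis vectors, a made-up vector v, and a made-up change-of-basis matrix M, with the convention that the basis vectors are the columns of a matrix; under that convention covariant components pick up M^T and contravariant components pick up M^-1 (whether one calls the factor "A" or "A transpose" depends entirely on this convention):

```python
import numpy as np

# Basis vectors are the columns of B; v is a fixed Euclidean vector.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])      # e1 = (1, 0), e2 = (1, 2): a skewed basis
v = np.array([3.0, 4.0])

c = np.linalg.solve(B, v)       # contravariant components: B @ c == v
a = B.T @ v                     # covariant components: a_i = v . e_i

# Change of basis: the new basis vectors are the columns of B @ M.
M = np.array([[2.0, 1.0],
              [0.0, 1.0]])
B2 = B @ M
c2 = np.linalg.solve(B2, v)
a2 = B2.T @ v

# Contravariant components transform with M^-1, covariant ones with M^T.
assert np.allclose(c2, np.linalg.inv(M) @ c)
assert np.allclose(a2, M.T @ a)
```

The two assertions are exactly the "co" and "contra" transformation laws: the covariant components change with the same matrix (transposed) as the basis, the contravariant ones with its inverse.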

  • @jn3917
    @jn3917 3 years ago +1

    Thanks a lot sir

  • @darkknight3305
    @darkknight3305 10 months ago

    Thank you very much

  • @nightshine751
    @nightshine751 3 years ago

    Very good method of explanation. But the quality of the video is blurry.

  • @physicsbhakt7571
    @physicsbhakt7571 2 years ago

    Easy
    To the point
    Adequate

  • @nipunaherath347
    @nipunaherath347 3 years ago +1

    thanks

  • @aniruddhabehere9836
    @aniruddhabehere9836 10 months ago

    Excellent

  • @thevegg3275
    @thevegg3275 6 months ago

    Hello. At minute 5:28 you said that doubling the X axis doubles the Y axis, but I’ve seen no evidence of that on your graph. It seems like you doubled the X axis, but the Y axis stays as in the original.

  • @thevegg3275
    @thevegg3275 1 year ago

    Echoing a previous comment, it is rather like comparing eggs to elephants: comparing parallel-projection contravariant components to the gradient, while ignoring the converse of parallel components, aka the perpendicular components of covariant vectors.
    I think we need a graphic with parallel projection finding contravariant components on the left side and perpendicular (dual basis vector) components on the right side. No need for talking about gradients.
    Furthermore, it would help if you could somehow connect this graph to the tensor components in a very specific example.
    For instance, in a skewed coordinate system, if the contravariant components are x=5 and y=3 and the covariant components are x=-12 and y=7,
    how are these numbers placed into the tensor components?
    This type of graphic would quickly define both contravariant and covariant components, as well as how the tensor is defined based on those components.
    If you could make a separate video with only that, that would be awesome. Love your work!
    Thanks in advance.
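
The kind of worked example requested above fits in a few lines of NumPy. The skewed basis and the vector below are invented for illustration (they are not the commenter's x=5, y=3 numbers); the point is that the metric g_ij = e_i · e_j is what "lowers the index", converting the contravariant numbers into the covariant ones:

```python
import numpy as np

# Hypothetical skewed basis: e1 along x, e2 at 60 degrees.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.5, np.sqrt(3) / 2])
B = np.column_stack([e1, e2])
v = np.array([2.0, 1.0])

# Contravariant components: expand v in e1, e2 (parallel projection).
c = np.linalg.solve(B, v)
# Covariant components: perpendicular projection, i.e. dot products with e1, e2.
a = np.array([v @ e1, v @ e2])

# The metric g_ij = e_i . e_j lowers the index: a_i = g_ij c^j.
g = B.T @ B
assert np.allclose(a, g @ c)

# Either set of numbers reconstructs v: the contravariant components with the
# basis vectors, the covariant components with the dual basis (columns of B^-T).
assert np.allclose(B @ c, v)
assert np.allclose(np.linalg.inv(B).T @ a, v)
```

So the two sets of components are different coordinates for the same invariant vector, related by the metric; in a tensor expression, lowering an index does change the stored numbers in exactly this way.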

  • @nosirovabdurahmon5964
    @nosirovabdurahmon5964 2 years ago

    Thank you a lot man.

  • @floend4810
    @floend4810 1 year ago

    Why is there no example for using the transformation laws???
    For example: You have a curve in the cartesian 2D system and a tangent on this curve in one point!
    Then there is another coordinate system E with one axis being rotated and the other not!
    Which of the two transformation laws will give you the correct components of the tangent vector in the new system E???
    Obviously you would have to show that it IS the correct representation but parallel and perpendicular projections should do that!
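
The tangent-vector scenario sketched above can be worked out concretely. The curve, the new coordinates, and the Jacobian below are all invented for illustration; the point is that tangent (velocity) components obey the contravariant law, i.e. they transform with the Jacobian of the coordinate change:

```python
import numpy as np

# Curve (x, y) = (t, t^2) in Cartesian coordinates; tangent at t = 1.
t = 1.0
tangent = np.array([1.0, 2 * t])     # (dx/dt, dy/dt)

# Hypothetical new coordinates: u = x + y (sheared axis), w = y (unchanged).
# The Jacobian d(u, w)/d(x, y) is constant because the map is linear.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Contravariant transformation: multiply the components by the Jacobian.
tangent_new = J @ tangent

# Check directly by reparametrising the curve in (u, w) and differentiating:
# u(t) = t + t^2, w(t) = t^2  =>  du/dt = 1 + 2t, dw/dt = 2t.
assert np.allclose(tangent_new, [1 + 2 * t, 2 * t])
```

The assertion is the requested verification: the transformed components agree with the tangent computed from scratch in the new system, so the contravariant law is the correct one for tangent vectors.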

  • @mukeshkumar-dt4mb
    @mukeshkumar-dt4mb 3 years ago

    Please upload complete series

  • @thevegg3275
    @thevegg3275 1 year ago

    You seem to say that gradients are necessarily covariant and velocity vectors are necessarily contravariant? How so? Do we not have the choice of covariant OR contravariant components to define ANY gradient or vector?

    • @AdrianBoyko
      @AdrianBoyko 26 days ago

      You would think so, if the story about parallel vs perpendicular projection told in this video is true.

  • @nomachinesinthisroom
    @nomachinesinthisroom 1 year ago

    Thanks man! Wondering why you chose the Romanian name for physics for the channel 👯‍♀

  • @AdrianBoyko
    @AdrianBoyko 1 month ago

    Video says vectors/tensors are invariant. Then at 3:30 it proposes a definition for “contravariant vector” and lists a few. This sort of careless language is one of the reasons why people find this subject difficult. The COMPONENTS can be covariant or contravariant, not the vector/tensor.

  • @thevegg3275
    @thevegg3275 1 year ago

    I think I’m asking what does needing the gradient to define covariant components have to do with dual basis vectors to find covariant components. I see no connection whatsoever. This is a question I’ve been asking people for years and no one’s ever been able to answer.

  • @e4eduspace281
    @e4eduspace281 3 years ago

    Nice class

  • @brianyeh2695
    @brianyeh2695 11 months ago

    thank you!!!!!!!

  • @thevegg3275
    @thevegg3275 1 year ago

    For contravariant, you talk about doubling a basis vector, and the component being half. But when you go to covariant, you don’t talk about doubling the basis vector and the component being double; for some reason you go into gradients, which is like comparing apples and oranges. Do you agree? Why don’t we discuss the gradient with contravariant as well?

    • @edwardmacnab354
      @edwardmacnab354 8 months ago

      Because the gradient is NOT a contravariant tensor?

    • @thevegg3275
      @thevegg3275 8 months ago

      @@edwardmacnab354 Not sure you understand my question.
      I can show both contravariant and covariant components visually in a sketch.
      What makes perpendicular projection of a vector onto the dual basis related to gradients, while contravariant (parallel projection) is not related to gradients?
      And when used in a tensor, I don't see how indices can be lowered unless you actually change the actual numbers (on a graph). I.e., if contravariant indices relate to the contravariant components from parallel projection, and one lowers them to covariant, then it seems the numbers also have to change to the covariant components from perpendicular projection.

    • @edwardmacnab354
      @edwardmacnab354 8 months ago

      @@thevegg3275 The covariant tensor and contravariant tensor are two completely different types of tensor, to my understanding, and so the components act differently when the basis is doubled or halved. But I must add I am also considerably confused by the whole concept. I can see the power of it when doing calculations, as it greatly simplifies very complex manipulations, and so I will persist in my struggle to understand. Good luck!

    • @thevegg3275
      @thevegg3275 8 months ago

      @@edwardmacnab354
      The story goes that if you increase a contravariant basis vector (in order to keep the numerical combination of the basis vector times the component equal to the parallel projection of the vector on that axis), the contravariant component varies inversely by decreasing... aka contra-variant.
      The problem comes when considering the covariant situation from perpendicular projection.
      Intuitively one thinks no matter the length of a covariant basis vector, surely the covariant component has to vary inversely. So it is confusing to read..."if you increase a covariant basis vector then the component will increase...aka co-variant".
      I believe the above is true because given that a covariant basis vector is actually calculated as the inverse of a contravariant basis vector (times the cos of theta)...as you increase a contravariant basis vector, you necessarily decrease the covariant basis vector, which means the covariant component increases.
      If I'm correct perhaps it should be taught as follows...
      A) As you increase or decrease a (contravariant basis vector), you decrease or increase a contravariant component, respectively.
      B) As you increase or decrease a (contravariant basis vector), you increase or decrease a contravariant component, respectively.
      Thoughts?

    • @nikopojani2133
      @nikopojani2133 5 months ago

      At the end of point B) you mean (I believe): ...covariant component, respectively.
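
The scaling behaviour debated in this thread can be checked directly. A minimal NumPy sketch with an invented vector and a standard basis: doubling one basis vector halves the corresponding contravariant component (parallel projection) and doubles the covariant one (dot product with the basis vector), which is exactly where the co/contra naming comes from:

```python
import numpy as np

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
v = np.array([3.0, 4.0])

def components(e1, e2, v):
    B = np.column_stack([e1, e2])
    contra = np.linalg.solve(B, v)     # expansion coefficients in the basis
    co = np.array([v @ e1, v @ e2])    # dot products with the basis vectors
    return contra, co

c, a = components(e1, e2, v)
c2, a2 = components(2 * e1, e2, v)     # double the first basis vector

assert np.isclose(c2[0], c[0] / 2)  # contravariant component halves ("contra")
assert np.isclose(a2[0], a[0] * 2)  # covariant component doubles ("co")
```

So no inversion of the basis vector needs to be invoked: the dot product v · e_1 grows with e_1 by itself, which is the "co" behaviour.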

  • @inquiringhuman2582
    @inquiringhuman2582 3 years ago

    Dil mange more! Can you share more on tensor calculus, please?

  • @mrigankasandilya4388
    @mrigankasandilya4388 3 years ago

    If lecture notes were provided, it would be much better.

  • @puremaths9679
    @puremaths9679 3 years ago

    Nice

  • @brunoaugustorodriguesalves4080
    @brunoaugustorodriguesalves4080 2 years ago

    Same here.

  • @thevegg3275
    @thevegg3275 1 year ago

    Ok, after months of thinking on this, I have an explanation of covariant vs contravariant that no one has ever uttered, as far as I know. Here goes. Hold my beer!
    ---
    You combine (contravariant components times their regular basis vectors) tip to tail to reach the tip of the vector you're defining.
    But with covariant, you combine (covariant components times their dual basis vectors) tip to tail to get to the tip of the vector you're defining.
    Why does no one explain it like this?
    But my question is how do covariant components and dual basis vectors relate to the dot product? Please correct me if I'm wrong on the following...
    DOT PRODUCT: A (vector) dot B (vector) = a scalar quantity
    CONTRAVARIANT: described by the combination of (contravariant components times regular basis vectors) added tip to tail of
    A (vector) dot B (vector).
    COVARIANT: described by the combination of (covariant components times dual basis vectors) added tip to tail of
    A prime (vector) dot B prime (vector).
    QUESTION:
    If we dot product A prime (vector) with B prime (vector), does that scalar quantity equal
    A lower 1 prime times e upper 1 prime PLUS A lower 2 prime times e upper 2 prime?
    If so, aren't we then saying that a scalar is equal to a vector???
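
The scalar-versus-vector worry above has a numerical resolution: the invariant scalar comes from pairing the covariant components of one vector with the contravariant components of the other (numbers times numbers), not from summing components times dual basis vectors, which would indeed give a vector. A minimal sketch with an invented skewed basis:

```python
import numpy as np

# Skewed basis as the columns of B.
B = np.array([[1.0, 0.5],
              [0.0, 1.0]])
v = np.array([2.0, 3.0])
w = np.array([-1.0, 4.0])

co_v = B.T @ v                     # covariant components of v: v . e_i
contra_w = np.linalg.solve(B, w)   # contravariant components of w

# Pairing covariant with contravariant components reproduces the
# basis-independent Euclidean dot product:
assert np.isclose(co_v @ contra_w, v @ w)
```

The identity holds because co_v · contra_w = (B^T v) · (B^-1 w) = v · w, so the pairing is the same scalar in every basis.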

  • @perdehurcu
    @perdehurcu 4 days ago

    Hello. Sir, I think what you are telling me is complete nonsense. There is so much uncertainty that you don't understand the issue yourself. You've wasted your time with these videos. Now I'm asking, sir: what are covariant and contravariant? No one can understand anything when you say this. You have made an explanation full of contradictions. Good luck.

  • @rashahameed4937
    @rashahameed4937 3 years ago

    Please translate to Arabic

  • @jameshopkins3541
    @jameshopkins3541 1 year ago

    YOU CAN NOT EXPLAIN IT!!!!!!

  • @shreyaskumarm3534
    @shreyaskumarm3534 2 years ago

    Thank you very much sir
