27. Positive Definite Matrices and Minima

  • Published: 2 Feb 2025

Comments • 142

  • @RC-bm9mf
    @RC-bm9mf 4 years ago +66

    This lecture is definitely a positive effect on my grasp of the matrix, and this lecture plays a pivotal role in the whole series. Thank you prof. Strang. Nobody has explained those concepts so clearly and coherently -- a whole new world is ahead of me. This is a must-see. A genuine human heritage.

    • @rolandheinze7182
      @rolandheinze7182 4 years ago +11

      Don't you mean... a positive definite effect? *cymbal crash*

    • @Arycke
      @Arycke 1 year ago +4

      The first 2 lines are the joke. He said positive definite as "definitely ... positive."
      This is for those who thought the last comment with the rimshot was explaining the original joke. For completeness' sake, no shade.

  • @PedroHernandez-gz4cn
    @PedroHernandez-gz4cn 12 years ago +96

    My calculus 3 professor taught us to think of a saddle point as a point on a "Pringles chip"; a lot of people know exactly what that looks like.

  • @IMadeOfClay
    @IMadeOfClay 1 month ago +2

    This lecture blew my mind. That linear algebra can be used to solve multivariable functions and multivariable calculus problems... amazing. At one point he was dropping one mind blowing fact after another.
    Thank you for these awesome lectures.

  • @rolandheinze7182
    @rolandheinze7182 4 years ago +35

    After viewing the 3blue1brown video on Duality, I am just seeing xT*A*x as the result of applying a linear transformation to a vector x and then projecting that new vector back onto x; if the vectors still point "in the same direction", i.e. the projection is positive, then A is positive definite

    • @dalisabe62
      @dalisabe62 3 years ago +7

      Yes. In fact, there is another way to look at a positive definite matrix: the transformation of a vector x is another vector that lies in the same quadrant as x. The angle between the vector and its transformation should always be acute.
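
The acute-angle claim above is easy to check numerically; a minimal pure-Python sketch (the matrix [[2, 6], [6, 20]] is the lecture's positive definite example, while the sampling of directions is my own):

```python
import math

A = [[2.0, 6.0], [6.0, 20.0]]  # symmetric positive definite (2x^2 + 12xy + 20y^2)

def apply(A, x):
    """Compute Ax for a 2x2 matrix."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def quad(A, x):
    """x^T A x."""
    ax = apply(A, x)
    return x[0]*ax[0] + x[1]*ax[1]

def angle_between(A, x):
    """Angle between x and Ax, in radians."""
    ax = apply(A, x)
    dot = x[0]*ax[0] + x[1]*ax[1]
    return math.acos(dot / (math.hypot(*x) * math.hypot(*ax)))

# Sample directions around the unit circle: x^T A x stays positive,
# so the angle between x and Ax stays acute (< pi/2).
samples = [(math.cos(t), math.sin(t)) for t in
           [k * 2 * math.pi / 36 for k in range(36)]]
results = [(quad(A, x), angle_between(A, x)) for x in samples]
ok = all(q > 0 and a < math.pi / 2 for q, a in results)
print(ok)  # True
```

The positive dot product x·(Ax) is exactly the "projection back onto x stays positive" picture from the parent comment.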

    • @youssefel-mahdy922
      @youssefel-mahdy922 3 years ago

      Thank you!

  • @rakolman
    @rakolman 8 years ago +47

    This is maybe the best lecture in the entire course.

    • @thej1091
      @thej1091 8 years ago

      Let's see! Gonna do it! There is also a recent close-up video of Gilbert doing a 20-minute bit on positive definite matrices!

    • @jhabriel
      @jhabriel 8 years ago +3

      I agree with you! What an incredible way to show how different branches of mathematics actually refer to the same thing.

    • @ja-qk4vd
      @ja-qk4vd 1 year ago

      Beautiful how it all comes together.

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +16

    I watched this once, taking all notes. Then watched it again (with break) without taking notes. Then I read my notes (after a break). The lecture's quite good! Btw for those that thought the second derivatives thing came out of left field, Du is the directional derivative operator, so you want Du( Du( f(x1,x2,...,xn) ) ), which gives the expressions for x^T*A*x for a connected f and A. This tells us that x^T*A*x is like a second order operation for the derivative of a function when A is the Hessian (matrix of second derivatives) of said function f(). This wasn't clear to me initially so I was kinda lost. Best!
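
The Du(Du f) observation above can be sanity-checked with finite differences. A sketch, assuming f(x, y) = 2x^2 + 12xy + 20y^2 (the lecture's positive definite example) with Hessian [[4, 12], [12, 40]]; the point and direction below are arbitrary choices of mine:

```python
import math

def f(x, y):
    # The lecture's positive definite example: f = 2x^2 + 12xy + 20y^2
    return 2*x*x + 12*x*y + 20*y*y

H = [[4.0, 12.0], [12.0, 40.0]]  # Hessian of f (constant, since f is quadratic)

def second_directional(f, p, u, h=1e-4):
    """Central finite-difference second derivative of f at p along direction u."""
    x, y = p
    return (f(x + h*u[0], y + h*u[1]) - 2*f(x, y)
            + f(x - h*u[0], y - h*u[1])) / (h*h)

u = (math.cos(0.7), math.sin(0.7))  # an arbitrary unit direction
p = (0.3, -0.2)                     # an arbitrary point

# u^T H u, written out for a 2x2 matrix:
uhu = (u[0]*(H[0][0]*u[0] + H[0][1]*u[1])
       + u[1]*(H[1][0]*u[0] + H[1][1]*u[1]))

fd = second_directional(f, p, u)
print(abs(fd - uhu) < 1e-5)  # True: Du(Du f) agrees with u^T H u
```

For a quadratic f the central difference is exact up to rounding, which is why the agreement is this tight.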

  • @rosh70
    @rosh70 2 years ago +4

    If I had teachers like Gilbert Strang, I likely would've had a Ph.D. in Math by now. No kidding! I love this guy! He made me fall in LOVE with Mathematics.

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +10

    The funny thing is, I'm also doing a general relativity series and got to covectors. I understand what they are computationally, but couldn't "visualize" them, beyond the typical "stacks". Then I stumbled on the interpretation of covectors as linear maps, which served as a connection to linear algebra. I tried so many things to build a geometric interpretation all night but nothing formed in my head properly, so I was like, "meh, I'm 80% done with Professor Strang's lectures, might as well do another one". He led with x^H*A*x > 0 and for some reason, everything just started to click (x^H*A is an implicit map, so I have ideas about how to analyze things in my other GR class). Funny how his lecture was what I needed to get my head turning again. Thank you, and awesome lecture! ☺️🙌🏽 Stay safe during #COVID19

  • @madsonpena2783
    @madsonpena2783 4 years ago +7

    This lecture is just amazing, what a beautiful thing, all coming together...

  • @iyalovecky
    @iyalovecky 10 years ago +75

    This lecture is especially beautiful..

    • @Nakameguro97
      @Nakameguro97 10 years ago +11

      Positive Definite connecting matrices, algebra, geometry, and calculus: priceless!

    • @sourabhdhere1124
      @sourabhdhere1124 5 years ago +4

      Oh Yeah, It's All Coming Together.

  • @georgesadler7830
    @georgesadler7830 3 years ago +1

    From this lecture, I really understand Positive Definite Matrices and Minima thanks to Dr. Gilbert Strang. The examples really help me to fully comprehend this important subject.

  • @Eschewy
    @Eschewy 11 years ago +2

    At 41:54, Strang gets the eigenvalues correct from memory. I'm impressed!

  • @RC-bm9mf
    @RC-bm9mf 2 years ago

    This lecture is a true masterpiece. I was in awe, with a feeling similar to watching a suspense thriller.

  • @straus1482
    @straus1482 9 years ago +15

    Never seen someone like this... Amazing!!!!

  • @dorupanciuc218
    @dorupanciuc218 6 years ago +1

    the explanation starting at 25:10 helped me understand what a positive definite matrix is

  • @SauravKumar-xg4zr
    @SauravKumar-xg4zr 6 years ago +3

    @15:16, "I apologize" ... That's how a matchless, renowned and extraordinary person respects everyone, everywhere.
    Love you, Prof. Gilbert Strang :)

  • @annawilson3824
    @annawilson3824 7 months ago

    31:08 how everything beautifully connects

  • @Slogan6418
    @Slogan6418 5 years ago

    31:40 In the case of A = [2, 6; 6, 7] for example, when setting z = 0, what we actually get is a cross (an 'X' shape); when z equals other values, then we get a hyperbola.
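
That 'X' shape can be computed directly: setting z = 2x^2 + 12xy + 7y^2 = 0 and dividing by x^2 gives a quadratic in the slope t = y/x with two real roots, i.e. two lines through the origin. A quick sketch (the tolerance and sample points are my own):

```python
import math

# 2x^2 + 12xy + 7y^2 = 0, with t = y/x: 7t^2 + 12t + 2 = 0
a, b, c = 7.0, 12.0, 2.0
disc = b*b - 4*a*c            # 144 - 56 = 88 > 0: two real slopes
t1 = (-b + math.sqrt(disc)) / (2*a)
t2 = (-b - math.sqrt(disc)) / (2*a)

def z(x, y):
    return 2*x*x + 12*x*y + 7*y*y

# The form vanishes along both lines y = t*x, so the z = 0 level set
# is a cross of two lines through the origin.
on_lines = [z(x, t*x) for t in (t1, t2) for x in (-3.0, 1.0, 2.5)]
print(all(abs(v) < 1e-9 for v in on_lines))  # True
```

A positive discriminant here is the same saddle test as ac - b^2 < 0 for the matrix [2, 6; 6, 7].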

  • @parkerhyde7514
    @parkerhyde7514 4 years ago +4

    "had better be more than 18" was the correct answer

  • @aditiprasad5549
    @aditiprasad5549 2 years ago

    From 1,012,547 views on the first lecture to 203,355 views on the 27th... You belong to the 20% who made it this far. Congrats!

  • @rochesterjezini317
    @rochesterjezini317 4 years ago +1

    Very good lecture, Professor Strang. Thank you from the Amazon Forest.

  • @kartikarora8877
    @kartikarora8877 3 years ago +1

    It is my dream to meet Prof. Gilbert Strang. His voice, his words, his actions touch my soul. Please, Prof., read my comment so that I can be satisfied just by this. And I pray you may live 1000 years.

  • @bearcharge
    @bearcharge 15 years ago +15

    fancy tie!!!!

  • @pontifexmaximus_e
    @pontifexmaximus_e 2 months ago +1

    Symmetric matrices represent ortho-scaling linear transformations; geometrically, an n-by-n symmetric matrix A scales vectors along n orthogonal axes by factors of the corresponding eigenvalues. The axes are the line spans of the orthonormal vectors of Q, which come from the diagonalization A = Q D Q^T. The quadratic form represented by A is a functional q_A : R^n → R defined by q_A(x) = x^T A x. If we set q_A(x) = 1, then {x : q_A(x) = 1} is the image under A of a circle {x' : ∥x'∥ = r1}; that is, the "football" outline is the image under the linear transformation A of the circle of vectors x' whose lengths all equal some real number r1.

    • @pontifexmaximus_e
      @pontifexmaximus_e 2 months ago +1

      The distinctions between matrices and linear transformations, and the interpretations of linear transformations, are important foundations for understanding linear algebra, abstract linear algebra, and normed vector spaces in analysis, point-set topology, and differential geometry.

  • @yazhouhao7086
    @yazhouhao7086 7 years ago +1

    Wow.....how beautiful it is! How beautiful!!!

  • @quirkyquester
    @quirkyquester 4 years ago +2

    Sorta got the general idea of this whole lecture; I'm still not so good at the details. Reading the book and summary might be a good idea to solidify the knowledge if needed. Thank you Professor Strang and MIT! Great lecture!

  • @yazhouhao7086
    @yazhouhao7086 7 years ago

    Dr. Gilbert Strang is really a master!

  • @SphereofTime
    @SphereofTime 5 months ago

    49:42 eigenvalues tell the lengths of the axes; eigenvectors tell the directions

  • @shashvatshukla
    @shashvatshukla 4 years ago +1

    My favourite sports team is the Seattle Submatrices

  • @shuravlasov
    @shuravlasov 11 years ago +2

    Thanks prof. This is one of the best lectures of the course!

  • @jessstuart7495
    @jessstuart7495 1 year ago

    This lecture is a good one to have Matlab or Octave pulled up to look at some of these surfaces.
    x=-10:1:10;
    y=-10:1:10;
    [XX, YY]=meshgrid(x,y);
    Z=2*XX.*XX + 12*XX.*YY + 7*YY.*YY;
    surf(XX,YY,Z)

  • @tiger3023381
    @tiger3023381 10 years ago +3

    thanks prof. now I can imagine what eigenvectors and eigenvalues are like.

  • @brainstormingsharing1309
    @brainstormingsharing1309 4 years ago +1

    Absolutely well done and definitely keep it up!!! 👍👍👍👍👍

  • @dalisabe62
    @dalisabe62 2 years ago

    Intuitively, a positive definite matrix is one that has more stretching weight than rotation weight. If it operates on a vector in the first quadrant, it guarantees that the output vector is also in the first quadrant. This is almost like a closure property under the transformation of a matrix. I need to examine the case when the x vector in Ax is not in the first quadrant. The entire underlying theme is creating something that is very close to an eigenmatrix, with more weight around the diagonal. The importance of eigenmatrices is well known in applications, where a matrix power has the same effect as a scalar power function. With big matrices, there is nothing more valuable than decomposing a matrix in terms of simpler matrices, and the eigen decomposition is one of those decompositions. Diagonalizing a matrix and the SVD are closely related to this theme. Any analysis that attempts to transform the effect of a matrix into one that resembles a scalar transformation is the goal.

  • @GiovannaIwishyou
    @GiovannaIwishyou 3 years ago

    The mathematics fills my heart with beauty.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 2 years ago

    Love these lectures

  • @ADITYAMISHRA-g1p
    @ADITYAMISHRA-g1p 2 months ago

    A great lecture.

  • @starriet
    @starriet 2 years ago

    Question) If we complete the squares of the eq. (at 44:53), I don't think the squares get multiplied by the three pivots(2, 3/2, 4/3)... Can anyone tell me why?
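
One way to probe this question numerically: assuming the matrix in the lecture at that point is [[2, -1, 0], [-1, 2, -1], [0, -1, 2]] (the one whose pivots are 2, 3/2, 4/3), completing the square does put the pivots in front of the squares. A sketch (the random sampling is my own):

```python
import random

A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]  # assumed matrix; pivots 2, 3/2, 4/3

def quad(x1, x2, x3):
    """x^T A x = 2x1^2 + 2x2^2 + 2x3^2 - 2x1x2 - 2x2x3."""
    return 2*x1*x1 + 2*x2*x2 + 2*x3*x3 - 2*x1*x2 - 2*x2*x3

def sum_of_squares(x1, x2, x3):
    """The same form, with each completed square multiplied by a pivot."""
    return (2     * (x1 - x2/2)**2
            + 3/2 * (x2 - 2*x3/3)**2
            + 4/3 * x3**2)

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(-5, 5))
       for _ in range(100)]
match = all(abs(quad(*p) - sum_of_squares(*p)) < 1e-9 for p in pts)
print(match)  # True
```

Since every square carries a positive pivot as its coefficient, the sum is nonnegative, which is the pivot test for positive definiteness in action.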

  • @Ilija_Ilievski
    @Ilija_Ilievski 2 years ago +1

    Like watching someone like Aristotle teach

  • @eccesignumrex4482
    @eccesignumrex4482 7 years ago +3

    Saddle Points: sometimes you gotta get up to get down.

  • @carlborgen
    @carlborgen 14 years ago

    Damn subtitles, my exam is in a couple of days and I need to watch this seriously, but the subtitles just keep making me giggle ^^haha

  • @darkkalimdor1311
    @darkkalimdor1311 7 years ago +1

    this man just related a rugby ball to the eigenvectors of a matrix. damn!

  • @tachiana3009
    @tachiana3009 9 months ago

    This was amazing!!!!!!!!!!!!!!!!

  • @dangidelta
    @dangidelta 4 years ago

    I think my mind just exploded !

  • @sihanchen1331
    @sihanchen1331 9 years ago

    The last function f(x1,x2,x3) >= 0 can be proved by completing squares.

  • @ChessMemer69
    @ChessMemer69 3 years ago +1

    Quite possibly the hardest lecture, so far.

  • @martintoilet5887
    @martintoilet5887 4 years ago

    There's one video on Khan Academy multivariable calculus talking about this; I didn't quite get it. Now, after his explanation, I finally know what that is. XD

  • @iharsh386
    @iharsh386 1 year ago

    prof. nice tie 👍😁

  • @adnansomani8218
    @adnansomani8218 13 years ago +4

    You just summarized my 12 hours of studying in 50 mins

  • @divykala169
    @divykala169 4 years ago

    Is a hyperbola in high dimensional spaces called a hyperhyperbola?

  • @jonahansen
    @jonahansen 6 years ago +1

    Damn he's good at teaching!

  • @dijo1469
    @dijo1469 7 years ago +3

    Dove chocolate is not as smooth as silk. This lecture is, indeed.

  • @DiegoAndrade
    @DiegoAndrade 10 years ago +1

    Tremendous explanation, thank you for sharing...

  • @Shauracool123
    @Shauracool123 3 years ago

    How will we define negative definite? Will it have everything negative, like:
    1) All pivots negative
    2) All eigenvalues negative?

  • @alkalait
    @alkalait 13 years ago +11

    It's a .... superbowl

  • @alijoueizadeh8477
    @alijoueizadeh8477 6 years ago

    Thank you.

  • @quirkyquester
    @quirkyquester 4 years ago

    Could someone tell me, at 6:48, why is this true? How can you calculate lambda with the trace? Thank youooouuu!

    • @quirkyquester
      @quirkyquester 4 years ago

      The trace of a matrix is the sum of its (complex) eigenvalues, and it is invariant with respect to a change of basis.
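
For a 2x2 matrix the trace fact is visible from the characteristic polynomial λ² − (trace)λ + det = 0: the two roots sum to the trace and multiply to the determinant. A small sketch (the example matrix is my own):

```python
import math

a, b, c, d = 5.0, 2.0, 2.0, 1.0      # symmetric 2x2 matrix [[5, 2], [2, 1]]
tr = a + d
det = a*d - b*c

# Characteristic polynomial: lam^2 - tr*lam + det = 0
disc = tr*tr - 4*det
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2

print(abs((lam1 + lam2) - tr) < 1e-12)   # True: eigenvalues sum to the trace
print(abs((lam1 * lam2) - det) < 1e-12)  # True: and multiply to the determinant
```

This is why Strang can read eigenvalues off quickly: for 2x2 he only needs the trace and determinant.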

  • @lucasm4299
    @lucasm4299 6 years ago +2

    ?? I understood everything except my pivots are 2, 3, 4 :(
    I love that last bit. Kind of like SVD.

    • @ortollj4591
      @ortollj4591 5 years ago +1

      maybe this (with Sagemath) could help you ?
      sagecell.sagemath.org/?q=dfqzji

    • @aaronpaulhughes
      @aaronpaulhughes 4 years ago

      The Pivots ARE 2,3,4. He just took it one step further by dividing each row by the previous pivot. So the last row was [0,0,4] and he divided by pivot 3 to get [0,0,4/3]. He then took the 2nd row which was [0,3,-2] and divided by pivot 2 to get [0,3/2,-1]. If you take the product of the pivots 2,3,4 you get 24 which is the determinant of this new matrix. But by dividing by the previous pivot the product of the pivots [2, 3/2, 4/3]=4, which was the determinant of the original matrix.
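
The two pivot conventions in this thread can be replayed with exact arithmetic: plain elimination (no row scaling) on the matrix [[2, -1, 0], [-1, 2, -1], [0, -1, 2]], which the pivots 2, 3/2, 4/3 suggest is the one in question, gives those fractional pivots directly, and their product is the determinant. A sketch:

```python
from fractions import Fraction as F

A = [[F(2), F(-1), F(0)],
     [F(-1), F(2), F(-1)],
     [F(0), F(-1), F(2)]]

def pivots(M):
    """Gaussian elimination without row scaling; returns the diagonal pivots."""
    n = len(M)
    U = [row[:] for row in M]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return [U[k][k] for k in range(n)]

p = pivots(A)
print([str(x) for x in p])     # ['2', '3/2', '4/3']
prod = p[0] * p[1] * p[2]
print(prod)                    # 4, the determinant of A
```

The integer pivots 2, 3, 4 come from the row-multiplied variant described above; dividing each by the previous pivot recovers 2, 3/2, 4/3.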

  • @jasio83
    @jasio83 15 years ago

    Is that video available or not? I'm following the course on YouTube but I can't view this particular lecture (n° 27).

  • @xiaoweidu4667
    @xiaoweidu4667 4 years ago

    amazing!

  • @vslaykovsky
    @vslaykovsky 2 years ago

    47:25 drawing an eye 101

  • @ravinm100
    @ravinm100 8 years ago

    Thanks!

  • @ChiGao-r8w
    @ChiGao-r8w 7 years ago

    Can someone please explain why the condition for pivot test is (ac-b^2)/a > 0? If I just do the elimination to figure out the pivot, shouldn't the second pivot just be (c-b^2)/a?

    • @pavlenikacevic4976
      @pavlenikacevic4976 7 years ago +2

      In order to make b (in the 21 position) 0, you have to multiply the first row by -b/a and add it to the second. When you do that with b (in the 12 position) and add it to c, you get -b²/a + c, which is the same as (ac-b²)/a
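
One elimination step makes this concrete; a sketch using a = 2, b = 6, c = 20 (the lecture's positive definite example):

```python
a, b, c = 2.0, 6.0, 20.0          # symmetric matrix [[a, b], [b, c]]

# Eliminate: add (-b/a) * row1 to row2; the (2,2) entry becomes the second pivot.
m = -b / a
second_pivot = c + m * b          # c - b^2/a

print(second_pivot)               # 2.0
print((a*c - b*b) / a)            # same thing: (ac - b^2)/a = 2.0
```

Both pivots positive (2 and 2) confirms the matrix is positive definite, matching the determinant test ac - b^2 = 4 > 0.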

    • @douglasespindola5185
      @douglasespindola5185 3 years ago

      @@pavlenikacevic4976 I know it has been 4 years, but I'd like to thank you, Mr. Pavle. I'm curious: how did you get this insight? Greetings from Brazil!

    • @pavlenikacevic4976
      @pavlenikacevic4976 3 years ago

      @@douglasespindola5185 you're welcome. I don't remember where that is in the video, and I've also been out of maths for some time, so I cannot provide you any useful information at this point 😅

    • @douglasespindola5185
      @douglasespindola5185 3 years ago

      @@pavlenikacevic4976 Thanks anyway. I'm studying for a job selection exam that covers data science content. It'll be helpful. Why are you out of math? It seems to me that you were a good student.

    • @pavlenikacevic4976
      @pavlenikacevic4976 3 years ago

      @@douglasespindola5185 now I'm doing research in quantum chemistry. Doesn't really require math on a day to day basis. I needed math during my studies, to help me understand physics better
      Good luck with studying for the exam!

  • @cupckae1
    @cupckae1 6 months ago

    Prof. Gilbert's definition of a saddle 😂

  • @MrSyrian123
    @MrSyrian123 6 years ago

    Nice tie prof

  • @lobisw
    @lobisw 9 years ago +16

    "What's the long word for bowl?"

    • @krille0o
      @krille0o 8 years ago +8

      +Lobezno Meneses elliptic paraboloid z=(x/a)^2 + (y/b)^2

    • @usmanhassan1887
      @usmanhassan1887 6 years ago

      It should be a function of three variables, not two variables.

    • @dangernoodle2868
      @dangernoodle2868 5 years ago +2

      I would have just called it a hyper bowl.

    • @marcinkovalevskij5820
      @marcinkovalevskij5820 5 years ago

      @@usmanhassan1887 f(x_1,x_2,x_3)

    • @usmanhassan1887
      @usmanhassan1887 5 years ago

      Yes, it is; the long word for bowl is a function of 3 variables. Thank you @@marcinkovalevskij5820

  • @feelgood9570
    @feelgood9570 4 years ago

    35:00
    recap

  • @azaz868azaz5
    @azaz868azaz5 11 months ago

    It seems that in mathematics everything at some point connects

  • @ArslanAlihanafi
    @ArslanAlihanafi 13 years ago

    Where is lecture no 28?

  • @SHOURYAPRAKASHIIITDharwad
    @SHOURYAPRAKASHIIITDharwad 1 year ago

    I believe the graph of 2x^{2}+12xy+20y^{2}=0 does not exist

  • @camanhbui9655
    @camanhbui9655 9 years ago

    Very nice! :)

  • @mohammedtarek9544
    @mohammedtarek9544 4 years ago +1

    Am I the only one watching the lectures at 1.5 or 1.25 playback speed?

  • @theflaggeddragon9472
    @theflaggeddragon9472 8 years ago

    This max and min method makes intuitive sense but does anyone have a proof showing that if the gradient is zero and the matrix is positive definite then the function is a local minimum?

    • @schrodingershat3063
      @schrodingershat3063 8 years ago +3

      First of all, in order for a function to have a minimum, the gradient has to be zero. If this were not the case, there would be a direction next to the point in question where the function decreases - the direction opposite to the gradient - and this would contradict the fact that it is a minimum.
      If one takes an arbitrary function and carries out a Taylor expansion around the minimum, all terms of order higher than two can be neglected if we are close enough to the minimum (and we just need to know the function values right next to the minimum point to know that it is a minimum). Since the gradient is zero, all first-order derivatives are zero at the minimum point, and then we just have the constant term - the function value at the minimum - and the second order derivatives. The Taylor approximation around the minimum can thus be written as f(x_0 + x) ≈ f(x_0) + 1/2 (x^T A x), where A is the matrix of second derivatives (the Hessian) and x_0 is the minimum.
      Thus, if x^T A x > 0 for all x - i.e., the matrix A is positive definite - this is in particular valid for small x-values (such that x_0 + x is close to x_0), where the Taylor approximation is a good approximation of f(x_0 + x). Thus, we can conclude that in a neighbourhood of the point x_0 (i.e. if we are close enough so that the Taylor approximation is good), all other points give function values such that f(x_0 + x) > f(x_0), and thus x_0 is a local minimum, since the function has higher values for all points in a neighbourhood around this point.
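
That conclusion can also be probed numerically: near a critical point with a positive definite Hessian, every sampled nearby point gives a larger function value. A sketch with f(x, y) = 2x^2 + 12xy + 20y^2 (positive definite, critical point at the origin; the sampling radius and grid are my own choices):

```python
import math

def f(x, y):
    # Positive definite quadratic: the gradient (4x + 12y, 12x + 40y)
    # vanishes at the origin, so (0, 0) is the critical point.
    return 2*x*x + 12*x*y + 20*y*y

x0 = (0.0, 0.0)
r = 1e-3  # radius of the test neighbourhood

# Sample points on a small circle around the critical point: every one
# gives a strictly larger value than f(x0), so x0 is a local minimum.
angles = [k * 2 * math.pi / 72 for k in range(72)]
larger = all(f(x0[0] + r*math.cos(t), x0[1] + r*math.sin(t)) > f(*x0)
             for t in angles)
print(larger)  # True
```

Since f is exactly its own second-order Taylor expansion here, the check is the Taylor argument above in miniature.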

  • @hamiltonianmarkovchainmc
    @hamiltonianmarkovchainmc 5 years ago

    It's real algebraist hours, my dudes.

  • @mohammedtarek9544
    @mohammedtarek9544 4 years ago

    that tie looks cool tho

  • @gowrithampi9751
    @gowrithampi9751 5 years ago

    A rugby ball has two out of three eigenvalues the same.

  • @FirstNameLastName-gf3dy
    @FirstNameLastName-gf3dy 4 years ago

    Hope we become neighbors in heaven Mr Strang.

  • @sameersharma4038
    @sameersharma4038 1 year ago

    awesome

  • @imegatrone
    @imegatrone 13 years ago

    I Really Like The Video Positive Definite Matrices and Minima From Your

  • @jiaqigan6398
    @jiaqigan6398 5 years ago

    Thank you Prof.Strang 🇨🇳

  • @slatz20
    @slatz20 14 years ago +1

    Ellipsoid :D
    It's like the UFO from the movie Independence Day :D

  • @papapap2
    @papapap2 13 years ago

    how about calling it an egg...

  • @adrianmh
    @adrianmh 4 years ago

    We're down from 1.2 million views on the first video :D

  • @cgu001
    @cgu001 15 years ago

    man that was tragic...

  • @des6309
    @des6309 4 years ago

    Strang is so funny

  • @cooking60210
    @cooking60210 1 year ago

    @2:50 Seattle submatrices lol

  • @SphereofTime
    @SphereofTime 7 months ago

    4:34

  • @eglintonflats
    @eglintonflats 4 years ago

    Trump learned from Him how to wear a tie.

  • @muhammadhelmy5575
    @muhammadhelmy5575 2 years ago

    21:10

  • @Mimi54166
    @Mimi54166 4 years ago

    35:15

  • @melissaallinp.e.5209
    @melissaallinp.e.5209 4 years ago

    What's a long word for a bowl? lol

  • @PhuongNam-ys6ru
    @PhuongNam-ys6ru 1 year ago

    .

  • @lewlafanz6932
    @lewlafanz6932 2 years ago

    Probably not the best instructor. There are a lot of instructors on YouTube who are much, much better teachers than him. Imagine the students at MIT have to go through this.

  • @seventyfive7597
    @seventyfive7597 8 years ago +8

    The absolute lack of rigor and the hand-waviness of the lesson makes this video a popular science segment rather than a math lesson. A nice popular science segment, but I certainly hope MIT students of Physics and engineers are directed to take something with a higher level, significantly so. Physics undergrads don't need total rigor, but such a total lack of it doesn't develop mathematical thinking.

    • @jean-fredericfontaine2695
      @jean-fredericfontaine2695 8 years ago +21

      There is a 300+ page book that comes with this course (Strang wrote it), used in the majority of top-level schools. I suggest you buy it and/or gtfo.

    • @bboysil
      @bboysil 7 years ago +16

      He strives to get the idea across, the intuition, which he does so masterfully. Very few teachers have this talent... most math teachers are too pedantic and they lose their students along the way.

    • @hcgaron
      @hcgaron 7 years ago +1

      You can only write so many proofs in 50 minutes. The proofs are very well laid out in the book. It's a good read (and a required read for students of the class)

    • @hcgaron
      @hcgaron 7 years ago

      I have 4th edition as well, but I believe this was with 2nd edition. I can't recall but I believe I saw that on the OCW course page.

    • @DrTymish
      @DrTymish 6 years ago +1

      This is intended to be an applied Linear Algebra class at MIT, for a more rigorous course on Linear Algebra (with proofs) Axler's or Friedberg are the best out there for the undergraduate level.

  • @yifuliu547
    @yifuliu547 8 years ago

    autodidact

  • @tchappyha4034
    @tchappyha4034 5 years ago

    Please prove the facts.

  • @muhammadhelmy5575
    @muhammadhelmy5575 2 years ago

    35:50