27. Positive Definite Matrices and Minima

  • Published: 9 Sep 2024
  • MIT 18.06 Linear Algebra, Spring 2005
    Instructor: Gilbert Strang
    View the complete course: ocw.mit.edu/18-...
    YouTube Playlist: • MIT 18.06 Linear Algeb...
    27. Positive Definite Matrices and Minima
    License: Creative Commons BY-NC-SA
    More information at ocw.mit.edu/terms
    More courses at ocw.mit.edu

Comments • 140

  • @RC-bm9mf
    @RC-bm9mf 4 years ago +62

    This lecture is definitely a positive effect on my grasp of the matrix, and this lecture plays a pivotal role in the whole series. Thank you prof. Strang. Nobody has explained those concepts so clearly and coherently -- a whole new world is ahead of me. This is a must-see. A genuine human heritage.

    • @rolandheinze7182
      @rolandheinze7182 3 years ago +11

      Don't you mean... a positive definite effect? *cymbal crash*

    • @Arycke
      @Arycke 10 months ago +2

      The first 2 lines are the joke: he said "positive definite" as "definitely ... positive."
      For those who thought the last comment with the rimshot was explaining the original joke. For completeness' sake, no shade.

  • @PedroHernandez-gz4cn
    @PedroHernandez-gz4cn 11 years ago +93

    My calculus 3 professor taught us to think of a saddle point as a point on a "Pringles chip"; a lot of people know exactly what that looks like.

    • @matiasmoanaguerrero8095
      @matiasmoanaguerrero8095 4 years ago +24

      or a saddle ?

    • @DeadPool-jt1ci
      @DeadPool-jt1ci 4 years ago +5

      @@matiasmoanaguerrero8095 lmao

    • @integralboi2900
      @integralboi2900 4 years ago +4

      (S)He’s the smartest person ever.

    • @mississippijohnfahey7175
      @mississippijohnfahey7175 2 years ago +3

      @@matiasmoanaguerrero8095 most American kids have seen way more pringles than they have saddles. Sorry John Wayne

    • @jasoncampbell1464
      @jasoncampbell1464 7 months ago +1

      I think mathematicians should start calling it the Pringles point.
      “The function either curves upwards, downwards, or you have a Pringles point.” Has a nice ring to it

  • @rolandheinze7182
    @rolandheinze7182 3 years ago +33

    After viewing the 3blue1brown video on duality, I now see xT*A*x as the result of applying a linear transformation to a vector x and then projecting that new vector back onto x; if the vectors still point "in the same direction", i.e. the projection is positive, then A is positive definite

    • @dalisabe62
      @dalisabe62 3 years ago +5

      Yes. In fact, there is another way to look at positive definite matrix; that is, the transformation of vector X is another vector that lies in the same quadrant as vector X. The angle between the vector and its transformation should always be acute.

    • @youssefel-mahdy922
      @youssefel-mahdy922 2 years ago

      Thank you!
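The projection reading of xᵀAx in the comment above is easy to check numerically. A minimal sketch in Python/numpy (my own example, not from the lecture; the matrix is an arbitrary positive definite choice):

```python
import numpy as np

# An example positive definite matrix (eigenvalues 1 and 3, both positive).
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

rng = np.random.default_rng(0)
xs = rng.standard_normal((1000, 2))
# x^T A x is the dot product of x with its image A x: it is positive
# exactly when A x still has a positive component along x.
values = np.einsum('ij,jk,ik->i', xs, A, xs)
print(values.min() > 0)   # True: every sampled x gives x^T A x > 0
```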

  • @rakolman
    @rakolman 8 years ago +48

    This is maybe the best lecture in the entire course.

    • @thej1091
      @thej1091 8 years ago

      Let's see! Gonna do it! There is also a recent close-up video of Gilbert doing a 20-minute bit on positive definite matrices!

    • @jhabriel
      @jhabriel 8 years ago +3

      I agree with you! What an incredible way to show how different branches of mathematics actually refer to the same thing.

    • @ja-qk4vd
      @ja-qk4vd 7 months ago

      beautiful how it all comes together.

  • @iyalovecky
    @iyalovecky 10 years ago +74

    This lecture is especially beautiful..

    • @Nakameguro97
      @Nakameguro97 9 years ago +11

      Positive Definite connecting matrices, algebra, geometry, and calculus: priceless!

    • @sourabhdhere1124
      @sourabhdhere1124 4 years ago +4

      Oh Yeah, It's All Coming Together.

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +15

    I watched this once, taking notes throughout. Then watched it again (with a break) without taking notes. Then I read my notes (after a break). The lecture's quite good! Btw, for those that thought the second derivatives thing came out of left field, Du is the directional derivative operator, so you want Du( Du( f(x1,x2,...,xn) ) ), which gives the expression x^T*A*x for the associated f and A. This tells us that x^T*A*x is like a second-order derivative operation on a function when A is the Hessian (matrix of second derivatives) of said function f(). This wasn't clear to me initially so I was kinda lost. Best!
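The Du(Du f) = uᵀHu connection described above can be verified with a finite difference. A sketch in Python/numpy (my own example function; its Hessian is constant, so the check is exact up to roundoff):

```python
import numpy as np

# f(x, y) = 2x^2 + 2xy + 3y^2 has constant Hessian H = [[4, 2], [2, 6]].
def f(p):
    x, y = p
    return 2*x**2 + 2*x*y + 3*y**2

H = np.array([[4.0, 2.0],
              [2.0, 6.0]])

u = np.array([0.6, 0.8])            # a unit direction
x0 = np.array([1.0, -2.0])          # any base point (Hessian is constant)
h = 1e-3
# Second directional derivative Du(Du f) via a central difference:
d2 = (f(x0 + h*u) - 2*f(x0) + f(x0 - h*u)) / h**2
print(d2, u @ H @ u)                # both are (close to) u^T H u = 7.2
```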

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +9

    The funny thing is, I'm also doing a general relativity series and got to covectors. I understand what they are computationally, but couldn't "visualize" them, beyond the typical "stacks". Then I stumbled on the interpretation of covectors as linear maps, which served as a connection to linear algebra. I tried so many things to build a geometric interpretation all night but nothing formed in my head properly, so I was like, "meh, I'm 80% done with Professor Strang's lectures, might as well do another one". He led with x^H*A*x > 0 and for some reason, everything just started to click (x^H*A is an implicit map, so I have ideas about how to analyze things in my other GR class). Funny how his lecture was what I needed to get my head turning again. Thank you, and awesome lecture! ☺️🙌🏽 Stay safe during #COVID19

  • @straus1482
    @straus1482 8 years ago +15

    Never Seen someone like this.... Amazing!!!!

  • @rosh70
    @rosh70 2 years ago +2

    If I had teachers like Gilbert Strang, I likely would've had a Ph.D in Math by now. No kidding! I love this guy! He made me fall in LOVE with Mathematics.

  • @madsonpena2783
    @madsonpena2783 4 years ago +7

    This lecture is just amazing, what a beautiful thing, all coming together...

  • @Eschewy
    @Eschewy 11 years ago +2

    At 41:54, Strang gets the eigenvalues correct from memory. I'm impressed!

  • @annawilson3824
    @annawilson3824 2 months ago

    31:08 how everything beautifully connects together

  • @bearcharge
    @bearcharge 14 years ago +15

    fancy tie!!!!

  • @georgesadler7830
    @georgesadler7830 3 years ago +1

    From this lecture, I really understand Positive Definite Matrices and Minima thanks to Dr. Gilbert Strang. The examples really help me to fully comprehend this important subject.

  • @RC-bm9mf
    @RC-bm9mf 2 years ago

    This lecture is a true masterpiece. I was in awe, with a feeling similar to watching a suspense thriller.

  • @parkerhyde7514
    @parkerhyde7514 4 years ago +4

    "had better be more than 18" was the correct answer

  • @SauravKumar-xg4zr
    @SauravKumar-xg4zr 6 years ago +3

    @15:16, "I apologize"... That's how a matchless, renowned, and extraordinary person respects everyone, everywhere.
    Love you, Prof. Gilbert Strang :)

  • @dorupanciuc218
    @dorupanciuc218 5 years ago +1

    the explanation starting at 25:10 helped me understand what a positive definite matrix is

  • @Slogan6418
    @Slogan6418 5 years ago

    31:40 In the case of A = [2, 6; 6, 7], for example, when setting z = 0, what we actually get is a cross (an 'X' shape); when z equals other values, we get a hyperbola.

  • @shashvatshukla
    @shashvatshukla 4 years ago +1

    My favourite sports team is the Seattle Submatrices

  • @tiger3023381
    @tiger3023381 10 years ago +3

    thanks prof. now I can imagine what eigenvectors and eigenvalues are like.

  • @Ilija_Ilievski
    @Ilija_Ilievski 2 years ago +1

    Like watching someone like Aristotle teach

  • @quirkyquester
    @quirkyquester 4 years ago +2

    sorta got the general idea of this whole lecture, I'm still not so good at the details. Reading the book and summary might be a good idea to consolidate the knowledge if needed. Thank you Professor Strang and MIT! great lecture!

  • @rochesterjezini317
    @rochesterjezini317 4 years ago +1

    Very good lecture Professor Strang thank you from Amazon Forest.

  • @yazhouhao7086
    @yazhouhao7086 6 years ago +1

    Wow.....how beautiful it is! How beautiful!!!

  • @dalisabe62
    @dalisabe62 2 years ago

    Intuitively, a positive definite matrix is one that has more stretching weight than rotation weight. If it operates on a vector in the first quadrant, it guarantees that the output vector is also in the first quadrant. This is almost like a closure property under the transformation of a matrix. I need to examine the case when the x vector in Ax is not in the first quadrant. The entire underlying theme is creating something that is very close to an eigenmatrix, with more weight around the diagonal. The importance of eigenmatrices is well known in applications where a matrix power has the same effect as a scalar power function. With big matrices, there is nothing more valuable than decomposing a matrix into simpler matrices, among which the eigenmatrix is one such decomposition. Diagonalizing a matrix and the SVD are closely related to this theme. The goal of any such analysis is to transform the effect of a matrix into one that resembles a scalar transformation.
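The acute-angle claim above holds for every positive definite A (it is exactly the condition xᵀAx > 0), but the first-quadrant claim is worth probing. A sketch in Python/numpy (my own example matrix, chosen positive definite):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])        # positive definite (eigenvalues 1 and 3)

rng = np.random.default_rng(1)
acute = True
for _ in range(1000):
    x = rng.standard_normal(2)
    # positive definiteness <=> x . (Ax) > 0, i.e. the angle is acute
    acute = acute and (x @ (A @ x)) > 0
print(acute)                        # True

# But quadrant preservation can fail even for this A:
x = np.array([1.0, 2.5])            # first quadrant
print(A @ x)                        # [-0.5, 4.0]: the image leaves the first quadrant,
                                    # though its angle to x is still acute (dot product 9.5 > 0)
```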

  • @kartikarora8877
    @kartikarora8877 3 years ago +1

    It is my dream to meet Prof. Gilbert Strang. His voice, his words, his actions touch my soul. Please, Prof., read my comment so that I can be satisfied just by this. And I pray you may live 1000 years.

  • @aditiprasad5549
    @aditiprasad5549 2 years ago

    From 1,012,547 views in the first lecture to 203,355 views in 27th...You belong to 20% who made it this far, Congrats!

  • @beriteri
    @beriteri 11 years ago +2

    Thanks prof. This is one of the best lectures of the course!

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 29 days ago

    49:42 eigenvalues tell the length of the axes; eigenvectors tell their directions

  • @GiovannaIwishyou
    @GiovannaIwishyou 3 years ago

    The mathematics fills my heart with beauty.

  • @ChessMemer69
    @ChessMemer69 2 years ago +1

    Quite possibly the hardest lecture, so far.

  • @yazhouhao7086
    @yazhouhao7086 6 years ago

    Dr. Gilbert Strang is really a master!

  • @alkalait
    @alkalait 13 years ago +11

    It's a .... superbowl

  • @jessstuart7495
    @jessstuart7495 a year ago

    This lecture is a good one to have Matlab or Octave pulled up to look at some of these surfaces.
    x = -10:1:10;
    y = -10:1:10;
    [XX, YY] = meshgrid(x, y);
    Z = 2*XX.^2 + 12*XX.*YY + 7*YY.^2;  % f = [x y]*A*[x; y] for A = [2 6; 6 7] (a saddle)
    surf(XX, YY, Z)

  • @brainstormingsharing1309
    @brainstormingsharing1309 3 years ago +1

    Absolutely well done and definitely keep it up!!! 👍👍👍👍👍

  • @eccesignumrex4482
    @eccesignumrex4482 7 years ago +3

    Saddle Points: sometimes you gotta get up to get down.

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 a year ago

    Love these lectures

  • @sihanchen1331
    @sihanchen1331 8 years ago

    The last function f(x1,x2,x3)>=0 can be proved by completing squares.

  • @darkkalimdor1311
    @darkkalimdor1311 7 years ago +1

    this man just related a rugby ball to the eigenvectors of a matrix. damn!

  • @dijo1469
    @dijo1469 6 years ago +3

    Dove chocolate is not smooth as silk. This lecture is, indeed.

  • @tanya-no4wi
    @tanya-no4wi 4 months ago

    This was amazing!!!!!!!!!!!!!!!!

  • @iharsh386
    @iharsh386 8 months ago

    prof. nice tie 👍😁

  • @dangidelta
    @dangidelta 4 years ago

    I think my mind just exploded !

  • @jonahansen
    @jonahansen 5 years ago +1

    Damn he's good at teaching!

  • @adnansomani8218
    @adnansomani8218 13 years ago +4

    you just summarized my 12 hours of studying in 50 mins

  • @DiegoAndrade
    @DiegoAndrade 10 years ago +1

    tremendous explanation thank you for sharing ...

  • @martintoilet5887
    @martintoilet5887 4 years ago

    There's one video on Khan Academy multivariable calculus talking about this; I didn't quite get it. Now after his explanation, I finally know what it is. XD

  • @cupckae1
    @cupckae1 a month ago

    His definition of saddle Prof Gilbert 😂

  • @carlborgen
    @carlborgen 14 years ago

    damn subtitle, my exam is in a couple of days and I need to watch this seriously, but the subtitle just keeps making me giggle ^^haha

  • @alijoueizadeh8477
    @alijoueizadeh8477 5 years ago

    Thank you.

  • @jasoncampbell1464
    @jasoncampbell1464 7 months ago

    Finally, lecture 27. Big data, machine learning, blockchain, artificial intelligence, digital manufacturing, big data analysis, quantum communication, and internet of things

  • @lucasm4299
    @lucasm4299 6 years ago +2

    ?? I understood everything except my pivots are 2, 3, 4 :(
    I love that last bit. Kind of like SVD.

    • @ortollj4591
      @ortollj4591 5 years ago +1

      maybe this (with Sagemath) could help you ?
      sagecell.sagemath.org/?q=dfqzji

    • @aaronpaulhughes
      @aaronpaulhughes 4 years ago

      The Pivots ARE 2,3,4. He just took it one step further by dividing each row by the previous pivot. So the last row was [0,0,4] and he divided by pivot 3 to get [0,0,4/3]. He then took the 2nd row which was [0,3,-2] and divided by pivot 2 to get [0,3/2,-1]. If you take the product of the pivots 2,3,4 you get 24 which is the determinant of this new matrix. But by dividing by the previous pivot the product of the pivots [2, 3/2, 4/3]=4, which was the determinant of the original matrix.
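Ordinary elimination (dividing as you go, as in the lecture) produces the pivots 2, 3/2, 4/3 directly, and their product is the determinant. A sketch in Python using exact fractions (the matrix is the lecture's tridiagonal example; no row exchanges are needed):

```python
from fractions import Fraction

# The lecture's matrix: tridiagonal with -1, 2, -1.
A = [[Fraction(2), Fraction(-1), Fraction(0)],
     [Fraction(-1), Fraction(2), Fraction(-1)],
     [Fraction(0), Fraction(-1), Fraction(2)]]

# Plain Gaussian elimination (no row exchanges needed for this matrix).
n = 3
for k in range(n):
    for i in range(k + 1, n):
        m = A[i][k] / A[k][k]          # multiplier
        for j in range(n):
            A[i][j] -= m * A[k][j]

pivots = [A[i][i] for i in range(n)]
print(pivots)                          # [Fraction(2, 1), Fraction(3, 2), Fraction(4, 3)]
print(pivots[0] * pivots[1] * pivots[2])  # 4, the determinant
```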

  • @vslaykovsky
    @vslaykovsky 2 years ago

    47:25 drawing an eye 101

  • @ravinm100
    @ravinm100 7 years ago

    Thanks!

  • @xiaoweidu4667
    @xiaoweidu4667 4 years ago

    amazing!

  • @lobisw
    @lobisw 8 years ago +16

    "What's the long word for bowl?"

    • @krille0o
      @krille0o 8 years ago +8

      +Lobezno Meneses elliptic paraboloid z=(x/a)^2 + (y/b)^2

    • @usmanhassan1887
      @usmanhassan1887 6 years ago

      It should be a function of three variables, not two variables.

    • @dangernoodle2868
      @dangernoodle2868 5 years ago +2

      I would have just called it a hyper bowl.

    • @marcinkovalevskij5820
      @marcinkovalevskij5820 5 years ago

      @@usmanhassan1887 f(x_1,x_2,x_3)

    • @usmanhassan1887
      @usmanhassan1887 5 years ago

      Yes, it is, the long word for bowl is a function of 3 variables. Thank you.. @@marcinkovalevskij5820

  • @sameersharma4038
    @sameersharma4038 11 months ago

    awesome

  • @feelgood9570
    @feelgood9570 3 years ago

    35:00
    recap

  • @user-wo3km2qe9d
    @user-wo3km2qe9d a year ago

    I believe the graph of 2x^{2}+12xy+20y^{2}=0 does not exist (only the single point at the origin satisfies it)

  • @azaz868azaz5
    @azaz868azaz5 6 months ago

    It seems that in mathematics everything at some point connects

  • @MrSyrian123
    @MrSyrian123 6 years ago

    Nice tie prof

  • @divykala169
    @divykala169 3 years ago

    Is a hyperbola in high dimensional spaces called a hyperhyperbola?

  • @camanhbui9655
    @camanhbui9655 8 years ago

    Very nice! :)

  • @starriet
    @starriet a year ago

    Question) If we complete the squares of the eq. (at 44:53), I don't think the squares get multiplied by the three pivots(2, 3/2, 4/3)... Can anyone tell me why?
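In answer to the question above: for the lecture's tridiagonal matrix the pivots 2, 3/2, 4/3 do multiply the completed squares (this is the LDLᵀ factorization in disguise). A numeric check in Python/numpy (my own sketch):

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])

rng = np.random.default_rng(2)
x1, x2, x3 = rng.standard_normal(3)
x = np.array([x1, x2, x3])

quad = x @ A @ x
# The pivots 2, 3/2, 4/3 multiply the squares; the numbers inside the
# squares are the elimination multipliers (A = LDL^T, D holding the pivots).
squares = 2*(x1 - x2/2)**2 + (3/2)*(x2 - 2*x3/3)**2 + (4/3)*x3**2
print(abs(quad - squares) < 1e-12)   # True: the two expressions agree
```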

  • @Shauracool123
    @Shauracool123 3 years ago

    How do we define negative definite? Will it have everything negative, like:
    1) All pivots negative?
    2) All eigenvalues negative?

  • @imegatrone
    @imegatrone 12 years ago

    I Really Like The Video Positive Definite Matrices and Minima From Your

  • @jiaqigan6398
    @jiaqigan6398 4 years ago

    Thank you Prof.Strang 🇨🇳

  • @slatz20
    @slatz20 13 years ago +1

    Ellipsoid :D
    Its like the UFO from the Movie Independence Day :D

  • @theflaggeddragon9472
    @theflaggeddragon9472 8 years ago

    This max and min method makes intuitive sense but does anyone have a proof showing that if the gradient is zero and the matrix is positive definite then the function is a local minimum?

    • @schrodingershat3063
      @schrodingershat3063 8 years ago +3

      First of all, in order for a function to have a minimum, the gradient has to be zero. If this were not the case, there would be a direction next to the point in question where the function decreases - the direction opposite to the gradient - and this would contradict the fact that it is a minimum.
      If one takes an arbitrary function and carries out a Taylor expansion around the minimum, all terms of order higher than two can be neglected if we are close enough to the minimum (and we just need to know the function values right next to the minimum point to know that it is a minimum). Since the gradient is zero, all first-order derivatives are zero at the minimum point, and then we just have the constant term - the function value at the minimum - and the second order derivatives. The Taylor approximation around the minimum can thus be written as f(x_0 + x) ≈ f(x_0) + 1/2 (x^T A x), where A is the matrix of second derivatives (the Hessian) and x_0 is the minimum.
      Thus, if x^T A x > 0 for all x - i.e., the matrix A is positive definite - this is in particular valid for small x-values (such that x_0 + x is close to x_0), where the Taylor approximation is a good approximation of f(x_0 + x). Thus, we can conclude that in a neighbourhood of the point x_0 (i.e. if we are close enough so that the Taylor approximation is good), all other points give function values such that f(x_0 + x) > f(x_0), and thus x_0 is a local minimum, since the function has higher values for all points in a neighbourhood around this point.
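The Taylor argument above can be sanity-checked numerically: pick a function with a zero gradient and positive definite Hessian at a point, and sample nearby. A sketch in Python/numpy (my own example function):

```python
import numpy as np

# f(x, y) = x^2 + xy + y^2: gradient is zero at the origin, and the
# Hessian [[2, 1], [1, 2]] is positive definite (eigenvalues 1 and 3).
def f(x, y):
    return x**2 + x*y + y**2

rng = np.random.default_rng(3)
pts = 0.1 * rng.standard_normal((1000, 2))       # random points near the origin
vals = np.array([f(px, py) for px, py in pts])
print(np.all(vals > f(0.0, 0.0)))                # True: every sample sits above f(0,0)
```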

  • @hamiltonianmarkovchainmc
    @hamiltonianmarkovchainmc 5 years ago

    It's real algebraist hours, my dudes.

  • @mohammedtarek9544
    @mohammedtarek9544 4 years ago

    that tie looks cool tho

  • @mohammedtarek9544
    @mohammedtarek9544 4 years ago +1

    am i the only one watching the lectures in 1.5 or 1.25 playback speed?

  • @FirstNameLastName-gf3dy
    @FirstNameLastName-gf3dy 4 years ago

    Hope we become neighbors in heaven Mr Strang.

  • @gowrithampi9751
    @gowrithampi9751 4 years ago

    A rugby ball has two out of three eigenvalues the same.

  • @jasio83
    @jasio83 14 years ago

    Is that video available or not? I'm following the course on YouTube but I can't view this particular lecture (no. 27).

  • @adrianmh
    @adrianmh 4 years ago

    We're down from 1.2mill views on the first video :D

  • @cgu001
    @cgu001 15 years ago

    man that was tragic...

  • @cooking60210
    @cooking60210 7 months ago

    @2:50 Seattle submatrices lol

  • @quirkyquester
    @quirkyquester 4 years ago

    could someone tell me 6:48 why is this true? how can you calculate lambda with trace? Thank youooouuu!

    • @quirkyquester
      @quirkyquester 4 years ago

      The trace of a matrix is the sum of its (complex) eigenvalues, and it is invariant with respect to a change of basis.
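Combining the trace fact above with the determinant pins down both eigenvalues of a 2x2: their sum is the trace and their product is the determinant. A quick check in Python/numpy (my own sketch; the matrix resembles the lecture's 2x2 examples):

```python
import numpy as np

A = np.array([[2.0, 6.0],
              [6.0, 7.0]])

lam = np.linalg.eigvalsh(A)      # approximately [-2, 11]
# Sum of eigenvalues = trace (here 9); product = determinant (here -22).
assert abs(lam.sum() - np.trace(A)) < 1e-9
assert abs(lam.prod() - np.linalg.det(A)) < 1e-9
print(lam)
```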

  • @des6309
    @des6309 4 years ago

    Strang is so funny

  • @muhammadhelmy5575
    @muhammadhelmy5575 2 years ago

    35:50

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 2 months ago

    4:34

  • @papapap2
    @papapap2 13 years ago

    how about calling it an egg...

  • @muhammadhelmy5575
    @muhammadhelmy5575 2 years ago

    21:10

  • @user-kx9ym3dk2e
    @user-kx9ym3dk2e 7 years ago

    Can someone please explain why the condition for pivot test is (ac-b^2)/a > 0? If I just do the elimination to figure out the pivot, shouldn't the second pivot just be (c-b^2)/a?

    • @pavlenikacevic4976
      @pavlenikacevic4976 6 years ago +2

      in order to make b (in the 21 position) 0, you have to multiply the first row by -b/a and add it to the second. When you do the same with b (in the 12 position) and add the result to c, you get -b²/a + c, which is the same as (ac-b²)/a

    • @douglasespindola5185
      @douglasespindola5185 2 years ago

      @@pavlenikacevic4976 I know it's been 4 years, but I'd like to thank you, Mr. Pavle. I'm curious: how did you get this insight? Greetings from Brazil!

    • @pavlenikacevic4976
      @pavlenikacevic4976 2 years ago

      @@douglasespindola5185 you're welcome. I don't remember where that is in the video, and I've also been out of maths for some time, so I cannot provide you any useful information at this point 😅

    • @douglasespindola5185
      @douglasespindola5185 2 years ago

      @@pavlenikacevic4976 thanks anyway. I'm studying for a job selection that will apply an exam about data science content. It'll be helpful. Why are you out of maths? Seems to me that you were a nice student.

    • @pavlenikacevic4976
      @pavlenikacevic4976 2 years ago

      @@douglasespindola5185 now I'm doing research in quantum chemistry. Doesn't really require math on a day to day basis. I needed math during my studies, to help me understand physics better
      Good luck with studying for the exam!
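The pivot question answered in this thread is easy to verify directly: eliminating on [[a, b], [b, c]] gives a second pivot of c - b²/a, which is algebraically identical to (ac - b²)/a. A sketch in Python using exact fractions (my own helper, values chosen to match the lecture's style of example):

```python
from fractions import Fraction

def second_pivot(a, b, c):
    # Eliminate on [[a, b], [b, c]]: subtract (b/a) * row 1 from row 2.
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    return c - b * b / a           # identical to (a*c - b*b) / a

p2 = second_pivot(2, 6, 20)
print(p2)                          # 2, i.e. (2*20 - 36)/2
assert p2 == Fraction(2*20 - 6*6, 2)
```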

  • @ArslanAlihanafi
    @ArslanAlihanafi 13 years ago

    Where is lecture no 28?

  • @eglintonflats
    @eglintonflats 3 years ago

    Trump learned from Him how to wear a tie.

  • @melissaallinp.e.5209
    @melissaallinp.e.5209 4 years ago

    What's a long word for a bowl? lol

  • @PhuongNam-ys6ru
    @PhuongNam-ys6ru 11 months ago

    .

  • @lewlafanz6932
    @lewlafanz6932 2 years ago

    Probably not the best instructor. There are a lot of instructors on YouTube who are much, much better teachers than him. Imagine the students at MIT have to go through this.

  • @yifuliu547
    @yifuliu547 7 years ago

    autodidact

  • @tchappyha4034
    @tchappyha4034 5 years ago

    Please prove the facts.

  • @seventyfive7597
    @seventyfive7597 7 years ago +8

    The absolute lack of rigor and the hand-waviness of the lesson makes this video a popular science segment rather than a math lesson. A nice popular science segment, but I certainly hope MIT students of Physics and engineers are directed to take something with a higher level, significantly so. Physics undergrads don't need total rigor, but such a total lack of it doesn't develop mathematical thinking.

    • @jean-fredericfontaine2695
      @jean-fredericfontaine2695 7 years ago +21

      there is a 300+ pages book that come with that course (Strang wrote it) used in the majority of top level schools. I suggest you buy it and/or gtfo.

    • @bboysil
      @bboysil 7 years ago +16

      He works hard to get the idea across, the intuition, and he does so masterfully. Very few teachers have this talent... most math teachers are too pedantic and they lose their students along the way.

    • @hcgaron
      @hcgaron 6 years ago +1

      You can only write so many proofs in 50 minutes. The proofs are very well laid out in the book. It's a good read (and a required read for students of the class)

    • @hcgaron
      @hcgaron 6 years ago

      I have the 4th edition as well, but I believe this was with the 2nd edition. I can't recall, but I believe I saw that on the OCW course page.

    • @DrTymish
      @DrTymish 6 years ago +1

      This is intended to be an applied Linear Algebra class at MIT, for a more rigorous course on Linear Algebra (with proofs) Axler's or Friedberg are the best out there for the undergraduate level.

  • @Mimi54166
    @Mimi54166 4 years ago

    35:15