22. Diagonalization and Powers of A

  • Published: Oct 3, 2024

Comments • 342

  • @bigfrankgaming2423
    @bigfrankgaming2423 1 year ago +117

    This man single-handedly saved my university algebra course: my teacher was just reading notes, while he's actually explaining things in a very clear manner.

  • @charmenk
    @charmenk 11 years ago +119

    Good professor with the good old blackboard-and-white-chalk teaching method. This is way better than all the fancy PowerPoints that many teachers use nowadays.

  • @Tutkumsdream
    @Tutkumsdream 11 years ago +74

    Thanks to him, I passed Linear Algebra. I watched his videos for 4 days before the final exam and got a 74 on the final. If I hadn't been able to watch Dr. Strang's lectures, I would probably have failed...

    • @snoefnone9647
      @snoefnone9647 9 months ago +2

      For some reason I thought you were saying Dr. Strange's lectures!

  • @kayfouroneseven
    @kayfouroneseven 12 years ago +58

    This is a bazillion times more straightforward and clear than the lectures I pay for at my university. :( I appreciate this being online.

    • @bsmichael9570
      @bsmichael9570 10 months ago +1

      He tells it like a story. It’s like he’s taking us all on a journey. You can’t wait to see the next episode.

  • @apocalypse2004
    @apocalypse2004 8 years ago +238

    I think Strang leaves out a key point in the difference equation example, which is that the n unique eigenvectors form a basis for R^n, which is why u0 can be expressed as a linear combination of the eigenvectors. (See the sketch at the end of this thread.)

    • @alessapiolin
      @alessapiolin 7 years ago +1

      thanks!

    • @wontbenice
      @wontbenice 7 years ago +5

      I was totally confused until you chimed in. Thx!

    • @seanmcqueen8498
      @seanmcqueen8498 6 years ago +1

      Thank you for this comment!

    • @arsenron
      @arsenron 6 years ago +17

      In my opinion it's so obvious that it isn't worth dwelling on.

    • @dexterod
      @dexterod 6 years ago +13

      I think Strang assumed that A has n independent eigenvectors since most matrices do not have repeated eigenvalues.
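
  A minimal numpy sketch of the point raised in this thread (the matrix is the lecture's Fibonacci A; the starting vector is an arbitrary choice): when the n eigenvectors are independent, S is invertible and any u0 expands as u0 = S c.

      import numpy as np

      A = np.array([[1.0, 1.0],
                    [1.0, 0.0]])     # the lecture's Fibonacci matrix
      evals, S = np.linalg.eig(A)    # columns of S are eigenvectors of A
      u0 = np.array([1.0, 0.0])      # arbitrary starting vector
      c = np.linalg.solve(S, u0)     # solve S c = u0; needs S invertible,
                                     # i.e. n independent eigenvectors
      print(np.allclose(S @ c, u0))  # True: u0 = c1*x1 + c2*x2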

  • @jollysan3228
    @jollysan3228 9 years ago +210

    I agree.
    > Just one small correction at 32:30: It should have been S * LAMBDA^100 * c instead of LAMBDA^100 * S * c.

    • @Slogan6418
      @Slogan6418 5 years ago +4

      thank you

    • @ozzyfromspace
      @ozzyfromspace 4 years ago +13

      The sad thing was, a few moments later he was struggling to explain things because, even though he hadn't pinned down the error, he somehow knew that something wasn't quite right. But he obviously had the core idea nailed.

    • @alexandresoaresdasilva1966
      @alexandresoaresdasilva1966 4 years ago +6

      thank you so much, was about to post asking about this.

    • @吴瀚宇
      @吴瀚宇 4 years ago +12

      I was stuck on this for like 10 minutes, until I saw the comments here...

    • @maitreyverma2996
      @maitreyverma2996 4 years ago +5

      Perfect. I was about to write the same.
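
  A quick numerical check of the correction discussed in this thread, again using the lecture's Fibonacci matrix: A^100 u0 equals S Λ^100 c, not Λ^100 S c.

      import numpy as np

      A = np.array([[1.0, 1.0],
                    [1.0, 0.0]])               # the lecture's Fibonacci matrix
      evals, S = np.linalg.eig(A)
      u0 = np.array([1.0, 0.0])
      c = np.linalg.solve(S, u0)               # u0 = S c
      L100 = np.diag(evals ** 100)             # Lambda^100

      lhs = np.linalg.matrix_power(A, 100) @ u0
      print(np.allclose(lhs, S @ L100 @ c))    # True:  S Lambda^100 c
      print(np.allclose(lhs, L100 @ S @ c))    # False: Lambda^100 S c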

  • @eye2eyeerigavo777
    @eye2eyeerigavo777 5 years ago +53

    Math surprises you every time...🤔 I never thought that the connections between growth rates in system dynamics, the Fibonacci series, and diagonalization with independent eigenvectors would finally boil down to the golden ratio appearing as an eigenvalue at the end! 😳

  • @albertacristie99
    @albertacristie99 14 years ago +16

    This is magnificent!! I have no words to express how thankful I am for this video being made available.

  • @christoskettenis880
    @christoskettenis880 9 months ago +1

    This professor's explanations of all those abstract theorems and opaque methodologies are simply brilliant.

  • @BirnieMac1
    @BirnieMac1 8 months ago +1

    You know you're in for some shenanigans when they pull out the "little trick".
    Professor Strang is an incredible teacher; I struggled with eigenvalues and eigenvectors in a previous course, and this series of lectures has really helped me understand them better.
    Love your work, Professor Strang.

  • @cozmo4825
    @cozmo4825 1 year ago +4

    Thank you so much, Prof. Strang; you really made this course clear to understand. The way you teach these topics is superior. What's really special about it is that you put yourself in the position of your students, answering their questions before they even ask, a skill acquired not only through decades of teaching but by having the mindset of a true teacher.
    Mr. Strang, words can't describe how good these lectures are; they are a true form of art.
    And thanks a lot to MIT OpenCourseWare for providing these lectures in good quality and free of charge. Hopefully one day I will pursue a master's degree in electrical engineering at MIT.

  • @georgesadler7830
    @georgesadler7830 3 years ago +6

    From this latest lecture, I am learning more about eigenvalues and eigenvectors in relation to the diagonalization of a matrix. Dr. Strang continues to increase my knowledge of linear algebra with these amazing lectures.

  • @eroicawu
    @eroicawu 14 years ago +5

    It's getting more and more interesting when differential equations are involved!

  • @Zumerjud
    @Zumerjud 9 years ago +24

    This is so beautiful!

  • @sathviktummala5480
    @sathviktummala5480 3 years ago +5

    44:00 well that's an outstanding move

  • @kanikabagree1084
    @kanikabagree1084 4 years ago +2

    This teacher made me fall in love with linear algebra, thank you ❤️

  • @meetghelani5222
    @meetghelani5222 9 months ago

    Thank you for existing MITOCW and Prof. Gilbert Strang.

  • @SimmySimmy
    @SimmySimmy 5 years ago +7

    Through a single matrix transformation, the whole subspace expands or shrinks at the rate of the eigenvalues in the directions of its eigenvectors. Suppose you can decompose a vector in this subspace into a linear combination of the eigenvectors; then after many repetitions of the same transformation, a random vector will ultimately land along the eigenvector with the largest eigenvalue. (A power-iteration sketch follows below.)
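
  That convergence is exactly power iteration; a minimal sketch (the symmetric test matrix is an arbitrary choice, with eigenvalues 3 and 1):

      import numpy as np

      def power_iteration(A, steps=50):
          """Repeatedly apply A; the iterate aligns with the dominant eigenvector."""
          v = np.random.default_rng(0).standard_normal(A.shape[0])
          for _ in range(steps):
              v = A @ v
              v /= np.linalg.norm(v)   # renormalize so the iterate doesn't blow up
          return v

      A = np.array([[2.0, 1.0],
                    [1.0, 2.0]])       # eigenvalues 3 and 1
      v = power_iteration(A)
      print((A @ v) / v)               # both components approach 3, the top eigenvalue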

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +3

    For the curious:
    F_100 = (a^99 - b^99) * b/sqrt(5) + a^99, where a = (1 + sqrt(5))/2 and b = (1 - sqrt(5))/2 are the two eigenvalues of our system of difference equations.
    Numerically, F_100 ≈ 3.542248482 * 10^20 ... it's a very large number, and the sequence grows like ~1.618^k 😲 (see the check below).
    Overall, great lecture Professor Strang! Thank you for posting, MIT OCW ☺️
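
  A quick check of those numbers, comparing the exact integer recurrence against the closed form built from the two eigenvalues:

      from math import sqrt

      # Exact F_100 via the integer recurrence (Python ints don't overflow)
      f_prev, f = 0, 1                    # F_0, F_1
      for _ in range(99):
          f_prev, f = f, f + f_prev
      print(f)                            # F_100 = 354224848179261915075

      # Closed form from the eigenvalues a, b of the Fibonacci matrix
      a = (1 + sqrt(5)) / 2
      b = (1 - sqrt(5)) / 2
      print((a**100 - b**100) / sqrt(5))  # ~3.5422e20, matches to float precision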

  • @rolandheinze7182
    @rolandheinze7182 5 years ago +4

    A hard lecture to get through personally, but it does illustrate some of the cool machinery for applying eigenvectors.

  • @cuinuc
    @cuinuc 14 years ago +19

    I love professor Strang's great lectures.
    Just one small correction at 32:30: It should have been S * LAMBDA^100 * c instead of LAMBDA^100 * S * c.

    • @starriet
      @starriet 2 years ago

      Nice catch!

    • @jeffery777
      @jeffery777 2 years ago

      haha I think so

    • @eyuptarkengin816
      @eyuptarkengin816 9 months ago

      Yeah, I thought of the same thing and scrolled down the comments for confirmation. Thanks mate :D

  • @neoneo1503
    @neoneo1503 3 years ago +2

    A*S = S*Lambda (using the column view of matrix multiplication, where A times each column x_i gives lambda_i x_i). That is brilliant and clear! Thanks!

    • @neoneo1503
      @neoneo1503 3 years ago

      Also expressing the states u_0 through u_k as linear combinations of eigenvectors (at 30:00 and 50:00)

    • @Wabbelpaddel
      @Wabbelpaddel 3 years ago

      Well, if you interpret it that way, it's just a change of basis from the standard basis to the eigenvector basis (up to isomorphism; you then just multiply by the transforms of the alternate basis).
      Provided, of course, that either the characteristic polynomial has distinct roots, or that the geometric and algebraic multiplicities match (because then the eigenspaces span the whole vector space up to isomorphism; if they didn't, you'd only have a generating system for a subspace).
      For anyone who wanted one more run-through.

    • @neoneo1503
      @neoneo1503 3 years ago +1

      @@Wabbelpaddel Thanks! =)

  • @alexspiers6229
    @alexspiers6229 7 months ago

    This is one of the best in the series

  • @nguyenbaodung1603
    @nguyenbaodung1603 3 years ago +5

    I read something on SVD without even knowing about eigenvalues and eigenvectors, then watched a YouTube video explaining that V is actually the eigenvector matrix of A^T A. Which is extremely insane, because when I got to see this video, oh my goodness - even without having watched your SVD lecture yet, I can already tell the precise concept of it. Oh my goodness, math is so perfect!!

  • @uzferry5524
    @uzferry5524 1 year ago

    Bruh, the Fibonacci example just blew my mind. Crazy how linear algebra just works like that!!

  • @aattoommmmable
    @aattoommmmable 13 years ago +5

    the lecture and the teacher of my life!

  • @Huayuan-p4z
    @Huayuan-p4z 10 months ago

    I learned about the Fibonacci sequence in high school, and it is so good to gain a new perspective on the magical sequence. I think the significance of learning lies in the collection of new perspectives. 😀

  • @go_all_in_777
    @go_all_in_777 6 months ago

    At 28:07, u_k = (A^k)u_0 can also be written as u_k = S*(Lambda^k)*(S^-1)*u_0. Also, we can write u_0 = S*c, as explained at 30:00. Therefore, u_k = S*(Lambda^k)*(S^-1)*S*c = S*(Lambda^k)*c.

    • @jeanpierre-st7rl
      @jeanpierre-st7rl 6 months ago

      Hi, at 29:46, u_0 = c1x1 + c2x2 + c3x3... Is u_0 a vector? If so, how can u_0 be split into a combination of eigenvectors? What is c_i? If you have any info please let me know. Thanks.
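
  A worked note on that question, in the lecture's notation: u_0 is indeed a vector in R^n. If A has n independent eigenvectors x_1, ..., x_n, they form a basis, so u_0 can be written uniquely as

      u_0 = c_1 x_1 + ... + c_n x_n = S c,

  where S has the x_i as its columns and c = (c_1, ..., c_n). The coefficients are c = S^(-1) u_0; each c_i is the coordinate of u_0 along the eigenvector x_i.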

  • @ccamii__
    @ccamii__ 1 year ago

    Absolutely amazing! This lecture really helped me better understand the ideas about linear algebra I already had.

  • @coreconceptclasses7494
    @coreconceptclasses7494 4 years ago +4

    I got 70 out of 75 on my final linear algebra exam. Thanks, MIT...

  • @syedsheheryarbokhari2780
    @syedsheheryarbokhari2780 1 year ago +2

    There is a small writing mistake at 32:30 by Prof. Strang. He writes (eigenvalue matrix)^100 multiplying (eigenvector matrix) multiplying the c's (constants). It ought to be (eigenvector matrix) multiplying (eigenvalue matrix)^100 multiplying the c's.
    At the end of the lecture Professor Strang does narrate the correct formula, but it is easy to miss.

    • @clutterbrainx
      @clutterbrainx 1 year ago

      Yeah I was confused for a very long time there

  • @eren96lmn
    @eren96lmn 8 years ago +55

    43:36 that moment when your professor's computational abilities go far beyond standard human capabilities

    • @BalerionFyre
      @BalerionFyre 8 years ago

      Yeah wtf? How did he do that in his head?? lol

    • @BalerionFyre
      @BalerionFyre 8 years ago +45

      Wait a minute! He didn't do anything special. 1.618... is the golden ratio! He just knew the first 4 digits. Damn that's a little anticlimactic. Bummer.

    • @AdrianVrabie
      @AdrianVrabie 8 years ago +2

      +Stephen Lovejoy Damn! :D Wow! AWESOME! I have no words! Nice spot! I actually checked it in Octave and was amazed the prof could do it in his head. But I guess he knew the Fibonacci sequence is related to the golden ratio.

    • @IJOHN84
      @IJOHN84 5 years ago +6

      All students should know the solution to that golden quadratic by heart.

    • @ozzyfromspace
      @ozzyfromspace 4 years ago +2

      Fun fact since we're all talking about the golden ratio: the Fibonacci sequence isn't that special. The recurrence F_(k+2) = F_(k+1) + F_k for any seeds F_0 = a and F_1 = b (as long as the seeds aren't proportional to the decaying eigenvector, i.e. b ≠ a(1 - sqrt(5))/2) generates a sequence that grows at the rate (1 + sqrt(5))/2 ... your golden ratio. Another fun way to check this: take the limit of the ratio of consecutive terms of your arbitrary sequence with your preferred software :)
      edit: that's a great excuse to write a bit of code lol
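
  Taking up that invitation, a tiny sketch (the seed values are arbitrary choices):

      # Generic seeds give golden-ratio growth, not just 0, 1
      def ratio_limit(a, b, steps=60):
          prev, cur = a, b
          for _ in range(steps):
              prev, cur = cur, cur + prev   # the Fibonacci recurrence
          return cur / prev

      print(ratio_limit(0, 1))    # 1.618033988749895 (Fibonacci seeds)
      print(ratio_limit(7, -3))   # same limit for generic seeds
      print((1 + 5 ** 0.5) / 2)   # the golden ratio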

  • @Mohamed1992able
    @Mohamed1992able 13 years ago +1

    A big thanks to this prof for his efforts to give us courses on linear algebra.

  • @RolfBazuin
    @RolfBazuin 11 years ago

    Who would have guessed: when this guy explains it, it almost sounds easy! You, dear Dr. Strang, are a master at what you do...

  • @florianwicher
    @florianwicher 6 years ago +3

    Really happy this is online! Thank you Professor :)

  • @dalisabe62
    @dalisabe62 4 years ago +8

    The golden ratio arose from the Fibonacci sequence and has nothing to do with eigenvectors or eigenvalues per se. The beauty of using the eigenvectors and eigenvalues of a matrix, though, is limiting the effect of the transformation to a change in magnitude only, which allows dynamic systems such as population growth, a function of several variables, to be encoded as a matrix computation without worrying about the rotation typically associated with matrix transformations. Since eigenvectors and eigenvalues change the magnitude of the parameter vector only, the idea of employing the eigen transformation is quite genius. The same technique could be used in any dynamic system that can be modeled as a matrix transformation but one that produces a change in magnitude only.

    • @Arycke
      @Arycke 11 months ago +1

      Hence the title of his *example* as "Fibonacci Example." Nowhere was it stated explicitly that the golden ratio didn't arise from the Fibonacci sequence, so I don't see where you got that from. The example has a lot to do with eigenvalues and eigenvectors by design, and it uses a simple recurrence relation to show a use case. The Fibonacci sequence isn't unique anyway.

  • @sharmabu
    @sharmabu 2 months ago

    absolutely beautiful

  • @shadownik2327
    @shadownik2327 8 months ago

    Now I get it. It's like breaking the thing (vector or matrix or system, really) that we want to transform into little parts and then transforming them individually, because that's easier since each part gets transformed along its own fixed direction, and then adding all those pieces back up. Eigenvectors tell us how to make the pieces, and eigenvalues tell us how the given matrix or system transforms each piece. Wow, thanks! Something fit into place in my mind and became very simple.
    Basically this is finding the easiest way to transform.
    Thanks to @MIT and Professor Strang for making this available online for free.

  • @cecilimiao
    @cecilimiao 14 years ago +2

    @cuinuc
    I think they are actually the same, because LAMBDA is a diagonal matrix, you can have a try.

  • @bastudil94
    @bastudil94 10 years ago +75

    There is a MISTAKE in the formula at minute 32:31. It must be S(Λ^100)c in order to work as it is supposed to. It is an excellent lecture nonetheless; thanks a lot. :)

    • @YaguangLi
      @YaguangLi 10 years ago +3

      Yes, I am also confused by this mistake.

    • @sammao8478
      @sammao8478 9 years ago

      @YaguangLi, agree with you.

    • @AdrianVrabie
      @AdrianVrabie 8 years ago

      +Bryan Astudillo Carpio why not S(Λ^100)S^{-1}c ???

    • @apocalypse2004
      @apocalypse2004 8 years ago +4

      u0 is Sc, so the S inverse cancels with the S.

    • @daiz9109
      @daiz9109 7 years ago

      You're right... it confused me too...

  • @abdulghanialmasri5550
    @abdulghanialmasri5550 2 years ago

    The best math teacher ever.

  • @kunleolutomilayo4018
    @kunleolutomilayo4018 5 years ago +1

    Thank you, Prof.
    Thank you, MIT.

  • @benzhang7261
    @benzhang7261 4 years ago +4

    Master Yoda passed on what he has learned via Fibonacci and 1.618.

  • @muyuanliu3175
    @muyuanliu3175 1 month ago

    32:42, it should be S Lambda^100 c. Great lecture; this is the 3rd time I've learned this.

  • @eugenek951
    @eugenek951 8 months ago

    He is my linear algebra super hero!🙂

  • @dexterod
    @dexterod 8 years ago +24

    I'd say if you play this video at speed 1.5, it's even more awesome!

  • @Afnimation
    @Afnimation 11 years ago +1

    Well, I was impressed at the beginning, but when he stated the second eigenvalue I realized it is just the golden ratio... That doesn't diminish him; he's great!

  • @LAnonHubbard
    @LAnonHubbard 13 years ago +1

    I've only just learned about eigenvalues and eigenvectors from Khan Academy and Strang's Lecture 21, so a lot of this went whoooosh over my head, but I managed to find the first 20 minutes useful. I hope to come back to this when I've looked at differential equations (which AFAIK are very daunting), etc., and understand more of it.

    • @rolandheinze7182
      @rolandheinze7182 5 years ago

      I don't think you need diff EQ at all to understand the algebra. Maybe for the applications.

  • @gomasaanjanna2897
    @gomasaanjanna2897 3 years ago +4

    I am from India. I love your teaching.

  • @dennisyangji
    @dennisyangji 15 years ago

    A great lecture showing us the wonderful secret behind linear algebra

  • @wendywang4232
    @wendywang4232 12 years ago +2

    Something is wrong in this lecture at 32:39: A^100 u_0 = S M^100 c, where I use M to denote the diagonal eigenvalue matrix. The professor wrote A^100 u_0 = M^100 S c, which is not correct.

  • @dwijdixit7810
    @dwijdixit7810 1 year ago +1

    33:40 Correction: the eigenvalue matrix should multiply S from the right. That correction has been made in the book; it probably slipped past Prof. Strang in the flow.

  • @jasonhe6947
    @jasonhe6947 4 years ago

    Absolutely a brilliant example of how to apply eigenvalues to a real-world problem.

  • @amyzeng7130
    @amyzeng7130 2 years ago +1

    What a brilliant lecture!!!

  • @mospehraict
    @mospehraict 13 years ago +1

    @PhilOrzechowski He does it to turn the second-order difference equation into a first-order system.

  • @zyctc000
    @zyctc000 9 months ago

    If anyone ever asks you why the Fibonacci numbers and the golden ratio phi are connected, point him/her to this video.
    Thank you, Dr. Strang.

  • @tomodren
    @tomodren 12 years ago

    Thank you for posting this. These videos will allow me to pass my class!

  • @maoqiutong
    @maoqiutong 1 year ago +1

    32:41 There is a slight error here. The result Λ^100 * S * c is wrong. I think it should be S * Λ^100 * c.

  • @starriet
    @starriet 2 years ago +1

    Notes for future reference:
    (7:16) There are _some_ matrices that do _NOT_ have n independent eigenvectors, but _most_ of the matrices we deal with do.
    (17:14) If all eigenvalues are different, there _must_ be n independent eigenvectors. But if some eigenvalues repeat, there might _not_ be n independent eigenvectors. (The identity matrix is an example of repeated eigenvalues with n independent eigenvectors nonetheless.)
    * Also, the positions of Lambda and S should be swapped (32:36). You'll see why just by thinking through the matrix multiplication, and it also follows from A^100 = S*Lambda^100*S^-1 and u_0 = S*c.
    Thus it should be S*Lambda^100*c, and this can also be thought of as a 'transformation' between two different bases, one of which is the set of eigenvectors of A.
    * Also, (43:34) how could Prof. Strang calculate that?? That number, _1.618033988749894..._, is the 'golden ratio'.
    * (8:15) Note that A and Lambda are 'similar'. S and S^-1 transform the coordinates; both A and Lambda can be thought of as the same "transformation" expressed in different bases, and S (or S^-1) converts coordinates between those two worlds.

    • @deveshvarma8531
      @deveshvarma8531 2 years ago

      I spent a few hours on the second point before figuring it out :(

  • @phononify
    @phononify 1 year ago

    Very nice discussion about Fibonacci ... great!

  • @technoshrink
    @technoshrink 9 years ago +6

    U0 == "you know it"
    First time I've heard his Boston accent c:

  • @praduk
    @praduk 15 years ago

    Fibonacci numbers being solved for as an algebraic equation with linear algebra was pretty cool.

  • @chiaochao9550
    @chiaochao9550 3 years ago

    46:08 It should be that F_100 is approximately c_1 * lambda_1^100 * x_1. The professor omitted x_1 here. But if you take the relevant component of x_1 to be 1 (which is the case here), then this is correct.

    • @starriet
      @starriet 2 years ago

      Not actually. F_100 is a number and x_1 is a vector.

  • @PaulHobbs23
    @PaulHobbs23 13 years ago +2

    @lolololort
    1/2(1 + sqrt(5)) is also the golden ratio! Math is amazing =] I'm sure the professor knew the answer and didn't calculate it in his head on the spot.

  • @richarddow8967
    @richarddow8967 1 year ago +1

    Beautifully simple how that Fibonacci example worked out.

  • @ozzyfromspace
    @ozzyfromspace 4 years ago +6

    Did we ever prove that if the eigenvalues are distinct, the eigenvectors are linearly independent? I ask because at ~32:00, taking u_0 = c1*x1 + c2*x2 + ... + cn*xn requires the eigenvectors to form a basis for an n-dimensional vector space (i.e., to span the column space of an invertible matrix). It feels right, but I have no solid background for how to think about it.

    • @roshinis9986
      @roshinis9986 11 months ago

      The idea is easy in 2D. If you have two distinct eigenvalues and their corresponding eigenvectors, you don't just have one eigenvector per eigenvalue; the whole span of that vector (its multiples, forming a line) also consists of eigenvectors for that eigenvalue. If the original eigenvectors were dependent, they would lie on the same line, making it impossible for them to be scaled by two distinct eigenvalues simultaneously. I haven't yet been able to extend this intuition to 3 or higher dimensions, though, since there dependence need not mean lying on the same line.

    • @jeanpierre-st7rl
      @jeanpierre-st7rl 6 months ago

      @roshinis9986 Hi, at 29:46, u_0 = c1x1 + c2x2 + c3x3... Is u_0 a vector? If so, how can u_0 be split into a combination of eigenvectors? What is c_i? If you have any info please let me know. Thanks.

  • @Hindusandaczech
    @Hindusandaczech 13 years ago

    Bravo!!! Very much the best and premium stuff.

  • @blondii0072
    @blondii0072 12 years ago +1

    Beautiful lecture. Thanks

  • @АлександрСницаренко-р4д

    MIT, thank you!

  • @jingwufang1796
    @jingwufang1796 9 years ago +5

    Great professor

  • @NisargJain
    @NisargJain 6 years ago +3

    Never in my entire life would I have been able to convert that Fibonacci sequence into matrix form. Until he did it.

    • @thedailyepochs338
      @thedailyepochs338 4 years ago

      Can you explain how he did it?

    • @NisargJain
      @NisargJain 4 years ago +1

      @@thedailyepochs338 Sure. To understand that, we must first understand what the Fibonacci sequence is: every term is the sum of the previous two (given the first two terms, starting from 0 and then 1). So the 3rd term is F3 = F2 + F1 = 1 + 0 = 1, and in general F(k+2) = F(k+1) + F(k). Now let u(k) be the 2-dimensional vector whose first component is F(k+1) and whose second is F(k); similarly u(k+1) = [F(k+2), F(k+1)]. The first component of u(k+1), namely F(k+2), is the sum of the components of u(k), and the second component of u(k+1) is just the first component of u(k). Hence we get the matrix

      A = [ 1 1 ]
          [ 1 0 ]

      If you multiply A by u(k),

      A u(k) = [ 1 1 ] [ F(k+1) ] = [ F(k+1) + F(k) ]
               [ 1 0 ] [ F(k)   ]   [ F(k+1)        ]

      you get a first component that is the sum of the components of u(k) and a second component that is just the first component of u(k), which is exactly what we deduced u(k+1) to be. (See the sketch after this thread.)

    • @thedailyepochs338
      @thedailyepochs338 4 years ago

      @@NisargJain thanks, really appreciate it
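
  A small sketch of that construction, stepping u(k+1) = A u(k) a few times:

      import numpy as np

      A = np.array([[1, 1],
                    [1, 0]])
      u = np.array([1, 0])   # u(0) = [F_1, F_0]
      for _ in range(10):
          u = A @ u          # u(k) = [F_(k+1), F_k]
      print(u)               # [89 55], i.e. F_11 = 89 and F_10 = 55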

  • @veronicaecheverria594
    @veronicaecheverria594 4 years ago

    What a great professor!!!

  • @hektor6766
    @hektor6766 5 years ago

    You can hear the students chuckling as they recognized the golden ratio. I didn't quite recognize it as (1 + root 5)/2.

  • @rambohrynyk8897
    @rambohrynyk8897 10 months ago

    It always gets me how quickly the students clamor to get out of the class... how are you not absolutely dumbfounded by the profundity of what this great man is laying down!!!!

  • @jojowasamanwho
    @jojowasamanwho 1 year ago

    19:21 I would sure like to see the proof that if there are no repeated eigenvalues, then there are certain to be n linearly independent eigenvectors.
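
  A standard proof sketch for two distinct eigenvalues, which extends to n by induction: suppose A x1 = lambda_1 x1 and A x2 = lambda_2 x2 with lambda_1 ≠ lambda_2, and suppose

      c1 x1 + c2 x2 = 0.

  Applying A gives c1 lambda_1 x1 + c2 lambda_2 x2 = 0; subtracting lambda_2 times the first equation leaves c1 (lambda_1 - lambda_2) x1 = 0. Since eigenvectors are nonzero and lambda_1 ≠ lambda_2, c1 = 0, and then c2 = 0 as well. So x1 and x2 are independent. With n distinct eigenvalues, assume the first k eigenvectors are independent and run the same argument on any vanishing combination of k+1 of them.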

  • @jamesmcpherson3924
    @jamesmcpherson3924 4 years ago

    I had to pause to figure out how he got the eigenvectors at the end. Plugging in phi works, but it wasn't until I watched again that I noticed he was pointing to the lambda^2 - lambda - 1 = 0 relationship to reveal the vector.

  • @zionen01
    @zionen01 14 years ago

    Great stuff. I was able to do my homework with this lecture. I will definitely be getting Strang's book.

  • @shamsularefinsajib7778
    @shamsularefinsajib7778 11 years ago

    Gilbert Strang, a great math teacher............

  • @sammatthew7
    @sammatthew7 1 month ago +1

    GOLD

  • @SamSarwat90
    @SamSarwat90 7 years ago

    I love you, professor!!!

  • @kebabsallad
    @kebabsallad 13 years ago

    @PhilOrzechowski, he says that he just adds it to create a system of equations.

  • @iebalazs
    @iebalazs 2 years ago

    At 32:32 the expression should actually be S*Lambda^100*c, not Lambda^100*S*c.

  • @ricardocesargomes7274
    @ricardocesargomes7274 7 years ago

    Thanks for uploading!

  • @iDiAnZhu
    @iDiAnZhu 11 years ago +4

    At around 32:45, Prof. Strang writes Lambda^100*S*c. Notation-wise, shouldn't this be S*Lambda^100*c?

  • @pelemanov
    @pelemanov 13 years ago +1

    @LAnonHubbard You don't really need to know about differential equations to understand this lecture. Just watch lessons 1 to 20 as well ;-). Takes you only 15h :-D.

  • @cooperxie
    @cooperxie 11 years ago

    Agree!
    That's what I plan to use in my teaching.

  • @davidsfc9
    @davidsfc9 12 years ago

    Great lecture!

  • @noorceen
    @noorceen 12 years ago +2

    Thank you :))
    You are amazing.

  • @stumbling
    @stumbling 8 years ago +11

    7:33 Surprise horn?

  • @lastchance8142
    @lastchance8142 2 years ago

    Clearly Prof. Strang is a master, and his lectures are brilliant. But how do the students learn without Q&A? Is this standard procedure at MIT?

  • @shavuklia7731
    @shavuklia7731 7 years ago

    Fantastic! Thanks for uploading.

  • @Zoro3120
    @Zoro3120 8 years ago +1

    In the computation of the eigenvalues of A², he used A = SΛS⁻¹ to derive that Λ² is the eigenvalue matrix of A². However, this can be true only if S is invertible for A², which need not always be true.
    For example, for the matrix A below, the eigenvalues are 1 and -1 (refer to the previous lecture). This would imply that A² has only the single eigenvalue 1, which would imply that S has 2 identical columns (if it had only one column it would no longer be square, and the inverse wouldn't apply) and hence is not invertible. This suggests the proof cannot be used for all cases of the matrix A.

    A = [ 0 1 ]
        [ 1 0 ]

    Is there something I'm missing here?

    • @hinmatth
      @hinmatth 7 years ago +2

      Please check 17:32
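
  A worked note on this example: the S in A = SΛS⁻¹ is built from A's eigenvectors, and the same S serves for A². Here Λ = diag(1, -1), so

      A² = S Λ² S⁻¹ = S diag(1, 1) S⁻¹ = S I S⁻¹ = I.

  A repeated eigenvalue means n independent eigenvectors are no longer guaranteed (the point at 17:32), not that they must fail to exist; A² = I has every nonzero vector as an eigenvector, so S certainly stays invertible.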

  • @alijoueizadeh8477
    @alijoueizadeh8477 5 years ago

    Thank you.

  • @khanhdovanit
    @khanhdovanit 3 years ago

    15:02 interesting information hiding inside the matrix: its eigenvalues

  • @ax2kool
    @ax2kool 12 years ago

    That was amazing and awe-inspiring. :)

  • @Stoikpilled
    @Stoikpilled 15 years ago

    awesome!! Greetings from Peru

  • @leothegreat3
    @leothegreat3 12 years ago

    thank you

  • @dadadada2367
    @dadadada2367 11 years ago

    the best of the best

  • @guyfromkerala3577
    @guyfromkerala3577 3 years ago +1

    If I were a cameraman at MIT, I would carry a notebook all the time.

  • @blankety345
    @blankety345 14 years ago +2

    yeah! the golden ratio!