Intuition and Examples for Lagrange Multipliers (Animated)

  • Published: 25 Aug 2024

Comments • 63

  • @poo2uhaha
    @poo2uhaha 3 years ago +52

    Nice video - I appreciate the 3B1B-esque animations. This video was shared in an Oxford University undergraduate physics group chat, so you are helping a lot of people!

    • @casualscience
      @casualscience  3 years ago +11

      Hey thanks so much, it really makes me motivated to make more videos knowing that I'm helping students. I too was once a physics undergrad!

  • @Kaepsele337
    @Kaepsele337 2 years ago +15

    Nice! I've convinced myself before that Lagrange multipliers work, but only at a symbolic level. The intuition that it just means that the gradients of f and g align was new to me, and gives me a much better understanding of why it works :)
    And should I ever forget how to do it, this will allow me to rederive it quickly, which is always useful. So, thanks! I'll check out your other videos.
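
    For reference, the alignment condition described above is the standard statement of the method: at a constrained extremum of f subject to g = 0,

    $$\nabla f(x) = \lambda\,\nabla g(x), \qquad g(x) = 0,$$

    which is exactly the stationarity condition grad L = 0 for the Lagrange function L(x, λ) = f(x) - λ g(x).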

  • @HuLi-iota
    @HuLi-iota 5 months ago

    Really helps; your video showed me not only that it works, but why it works

  • @fabriai
    @fabriai 1 year ago +1

    Very nice video... thanks a lot. I was struggling to wrap my head around plugging L into the gradient equations. Your explanation is easy to follow and sensible. Brilliant!

  • @monsieur910
    @monsieur910 2 years ago +2

    Wow, this is a great video! I remember studying this in undergrad (engineering), and when I was writing my PhD (physics) I decided to optimize numerically because I didn't want to be questioned about the method.

    • @casualscience
      @casualscience  2 years ago +1

      Hahaha, I know that feeling man, have to pick your battles! Thank you for the kind words!!

  • @youssefabsi6296
    @youssefabsi6296 1 year ago +1

    Thank you a lot. I wish you had a series solely on optimization

  • @cauetrindade1181
    @cauetrindade1181 1 month ago

    amazing video

  • @mauritzwiechmamnn7366
    @mauritzwiechmamnn7366 3 months ago

    Thank you, a big help for my bachelors thesis!

  • @algorithminc.8850
    @algorithminc.8850 2 months ago

    Thanks. Great video. I look forward to scoping your other videos. Subscribed. Cheers

  • @Dhruvbala
    @Dhruvbala 7 months ago +1

    Brilliant insight! I wish you hadn't glossed over the part at 9:40 so quickly -- as that's kind of the crux of the entire video. I appreciate the thought you put into the explanation, though

  • @tranhoanglong2000
    @tranhoanglong2000 2 years ago

    This really helped me a lot, thank you for your explanation. I don't know why it doesn't have more views by now; maybe it's because the video is still new. Great work ✨🇻🇳❤

    • @casualscience
      @casualscience  2 years ago +1

      Thanks for the comment Trần, really makes the work feel worthwhile to know I'm helping people learn!!

  • @gabrielpus-perchaud9063
    @gabrielpus-perchaud9063 1 year ago +1

    Thank you, it is very useful

  • @maxyazhbin826
    @maxyazhbin826 2 years ago +1

    I like your content; it is education, not entertainment with a bunch of music

    • @casualscience
      @casualscience  2 years ago

      Thank you max!

    • @theairaccumulator7144
      @theairaccumulator7144 2 years ago

      Exactly! 3blue1brown is possibly the least educative math channel. Why show pretty graphics without showing the math used to make them?

  • @hannahnelson4569
    @hannahnelson4569 1 year ago

    Thank you! This helped me understand what the Lagrange equation means!

  • @orandanon
    @orandanon 1 year ago +1

    Nice video.
    A few remarks:
    1. The Lagrange multiplier method gives a necessary condition for extrema, but not a sufficient one. Indeed, one can give examples where "just" running the algorithm yields a wrong solution (see the sketch below). One way to address this: show that the domain is compact (i.e., closed and bounded); then, by Weierstrass, the function attains a minimum and a maximum on the domain.
    2. In the final notes section, the generalization to several constraints requires that the gradients ∇g_1, ..., ∇g_k are linearly independent at the candidate point.
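
    As a concrete illustration of remark 1, here is a minimal sketch (assuming sympy is available; f and g are chosen purely for illustration) that computes all Lagrange candidates and then classifies them by comparing values of f:

    ```python
    # Minimal sketch (sympy assumed; f and g purely illustrative):
    # the Lagrange condition yields *candidates*, which must then be
    # checked, e.g. by comparing the value of f at every solution.
    import sympy as sp

    x, y, lam = sp.symbols('x y lam', real=True)
    f = x + y              # objective
    g = x**2 + y**2 - 1    # constraint: unit circle (compact domain)

    # Stationarity of L = f - lam*g in x, y, plus the constraint g = 0
    eqs = [sp.diff(f - lam*g, v) for v in (x, y)] + [g]
    candidates = sp.solve(eqs, [x, y, lam], dict=True)

    # Necessary but not sufficient: classify by comparing f
    for sol in candidates:
        print(sol, 'f =', f.subs(sol))
    # One candidate is the maximum (f = sqrt(2)), the other the minimum.
    ```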

  • @1Fortnite1
    @1Fortnite1 1 year ago

    phenomenal explanation!

  • @FFT3D
    @FFT3D 1 day ago

    Is there a reference to the statement made at 7:22?

  • @gijsjespers4868
    @gijsjespers4868 2 years ago

    beautiful video, thank you!

  • @olivierbegassat851
    @olivierbegassat851 1 year ago

    very nice explanation : )

  • @kashu7691
    @kashu7691 3 years ago

    this was perfect. thanks for making this

    • @casualscience
      @casualscience  3 years ago

      Thank you so much! I appreciate the kind words, and that you took the time to comment; it means a lot to me!

  • @yt-1161
    @yt-1161 2 years ago +1

    @2:57 n-1 degrees of freedom or 1 degree of freedom?

    • @casualscience
      @casualscience  2 years ago +2

      Hi, N-1 is correct there. You have one equation which relates a single variable to the N-1 other ones, so once you fix those N-1 numbers you immediately know the Nth, giving you N-1 DOFs. I do have a mistake at 2:48 though, I should say "choose the N-1 parameters", not "choose one of the ..." That might be the source of your confusion.
      thanks for the comment
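
      As a concrete instance of this counting (an illustrative constraint, not from the video): for the constraint

      $$g(x_1,\dots,x_N) = x_1 + \dots + x_N - 1 = 0 \quad\Longrightarrow\quad x_N = 1 - \sum_{i=1}^{N-1} x_i,$$

      choosing the N-1 values x_1, ..., x_{N-1} freely determines x_N, leaving N-1 degrees of freedom.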

  • @francoparnetti
    @francoparnetti 2 years ago +1

    I always wondered how to deduce the Lagrange function. Is there a way to prove (in an "elegant" way) that the function does what it does? Or did Lagrange just say "this just works and that's it"?

    • @casualscience
      @casualscience  2 years ago +2

      Hi Franco, I give a short proof in chapter: 9:23 - Types of Extrema. I also linked a short history in the description that might give some context: abel.math.harvard.edu/~knill/teaching/summer2014/exhibits/lagrange/genesis_lagrangemultpliers.pdf
      But I'll say it's pretty hard to know exactly what the old mathematicians were thinking when they came up with ideas; the culture around mathematics in the past was much more closed off. I will say, however, that Lagrange was an absolute master of finding ways to solve math problems by introducing a function whose derivatives give the solution. He was considered the best mathematician of his time, holding the chair of the Prussian Academy after Euler. The Lagrange multiplier is only one example of these "Lagrange functions". Another famous example is his Lagrangian from classical mechanics: en.wikipedia.org/wiki/Lagrangian_mechanics
      Unfortunately that's the edge of my knowledge on the history/development. Lagrange's work is all in French, and I remember having difficulty finding English translations in grad school. I agree there is still a small leap there that doesn't flow naturally; perhaps that is simply Lagrange's brilliance... or perhaps a better historian will come along and have more to say on the subject.
      Thanks for the comment!

    • @francoparnetti
      @francoparnetti 2 years ago

      @@casualscience Thank you!
      I asked because I tried to find a proof by integrating, but I'm not really sure if that's OK. I kind of hoped there was a smarter way to prove it. Also I'm not sure if I fully understand the paper, but thanks anyway!

  • @aashsyed1277
    @aashsyed1277 2 years ago +1

    How about maximizing a multivariable function with the constraint x²+y²< 1 ??

    • @casualscience
      @casualscience  2 years ago +2

      This would be similar to the situation at 1:44; here you are optimizing inside of a disk (in two dimensions that disk has nonzero area). You would need to do a free optimization, then manually check which solutions are within the unit circle. You might also have a maximum on the boundary, so you'd want to also include any solutions you get from using a Lagrange multiplier with g(x,y) = x²+y² -1 (see the sketch below)
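
      A minimal sketch of this two-step recipe (assuming sympy; the objective f = x*y is a hypothetical choice for illustration):

      ```python
      # Minimal sketch (sympy assumed; the objective f = x*y is a
      # hypothetical choice): free optimization inside the unit disk,
      # then Lagrange multipliers on its boundary circle.
      import sympy as sp

      x, y, lam = sp.symbols('x y lam', real=True)
      f = x*y                # illustrative objective
      g = x**2 + y**2 - 1    # boundary of the unit disk

      # Step 1: free critical points, kept only if strictly inside the disk
      free = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
      interior = [s for s in free if (x**2 + y**2).subs(s) < 1]

      # Step 2: boundary candidates from grad(f) = lam * grad(g), g = 0
      eqs = [sp.diff(f - lam*g, v) for v in (x, y)] + [g]
      boundary = sp.solve(eqs, [x, y, lam], dict=True)

      # Compare f over all candidates; the max of x*y on the disk is 1/2
      best = max(interior + boundary, key=lambda s: f.subs(s))
      print(best, 'f =', f.subs(best))
      ```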

    • @SuperMrMuh
      @SuperMrMuh 2 years ago

      You might want to check out the Karush-Kuhn-Tucker conditions, which generalize Lagrange multipliers to inequality constraints: en.m.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions
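
      For reference, in the single-constraint case the KKT conditions for minimizing f subject to g(x) ≤ 0 read

      $$\nabla f(x) + \mu\,\nabla g(x) = 0, \qquad g(x) \le 0, \qquad \mu \ge 0, \qquad \mu\,g(x) = 0,$$

      where the last condition (complementary slackness) says the multiplier is nonzero only when the constraint is active, recovering the two cases in the reply above (free interior optimum vs. boundary optimum).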

  • @muuubiee
    @muuubiee 2 years ago +2

    Eh... I had some slight understanding of Lagrange multipliers (do note, though, that gradients are NOT covered in the suggested course literature; not sure what they were thinking), but none of this really made sense.
    If you want to explain something, you have to explain it at a level lower than the current one. If a person understands concepts like gradients and constraints, they'll probably not have any problems understanding Lagrange multipliers. Therefore there's no point in assuming that the people watching understand gradients and constraints. Your video is essentially trying to teach Lagrange multipliers to people who already understand Lagrange multipliers (and may be revisiting this to understand a concept that builds on Lagrange multipliers).

  • @griffinbur1118
    @griffinbur1118 10 months ago

    Is there a mistake or two around 7:30? I follow most of this, but “there’s a constrained optimization in n dimensions which is the same as a free optimization problem in n-1 dimensions” seems to match my understanding more than what’s said. But I might be missing something.

    • @griffinbur1118
      @griffinbur1118 10 months ago +1

      Oh, OK, I was just listening and not watching. Yes, I’m right on one point (your slides show the dimension difference as being between n and n+1, not n+1 and n-1).

    • @casualscience
      @casualscience  10 months ago +1

      Yes, sorry, the text is correct; I misspoke. I should say N+1 and N, nice catch!
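
      For reference, the N vs. N+1 count comes from treating λ as an extra coordinate: the constrained problem in the N variables x_1, ..., x_N becomes the free stationarity problem

      $$\nabla_{x,\lambda}\,L = 0, \qquad L(x_1,\dots,x_N,\lambda) = f(x) - \lambda\,g(x),$$

      which is N+1 equations in the N+1 unknowns (x, λ); the ∂L/∂λ = 0 equation reproduces the constraint g = 0.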

    • @griffinbur1118
      @griffinbur1118 10 months ago

      @@casualscience Overall, an excellent video! (I think my other point is just empty semantics: in some sense, the lower-dimensional optimization problem is "unconstrained" once the constraint is set, but relative to some hypothetical higher-dimensional problem, it is "constrained"...it seems like both capture the same mathematical meaning).

  • @hosz5499
    @hosz5499 2 years ago

    nice geometric interpretation of extra dimensions! We live in a 5D space with a constraint x5=0.

    • @casualscience
      @casualscience  2 years ago +3

      My plane of existence is actually x5=42069

    • @hosz5499
      @hosz5499 2 years ago

      @@casualscience Seriously, what's the meaning of the amplitude of lambda in the extra dimension? Its thickness or rigidity, e.g.?

    • @casualscience
      @casualscience  2 years ago

      ​@@hosz5499 In Lagrangian mechanics in physics, it has the interpretation as a type of force multiplier for the constraint. physics.stackexchange.com/questions/47651/how-are-constraint-forces-represented-in-lagrangian-mechanics
      In general, I think all you can say is that at the zeros it's the ratio of gradients.
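
      For reference, the force interpretation mentioned above: in Lagrangian mechanics with L = T - V and a holonomic constraint g(q) = 0, the constrained equations of motion are

      $$\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = \lambda\,\frac{\partial g}{\partial q_i},$$

      so the term λ ∇g acts as the generalized constraint force holding the system on the surface g = 0.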

    • @hosz5499
      @hosz5499 2 years ago

      @@casualscience I think it means the largest eigenvalue, if f is locally a quadratic in (x,y) (near zero). So, local eigenvalues such that the f and g gradients align, or the rescaled contours coincide

    • @hosz5499
      @hosz5499 2 years ago

      See ruclips.net/video/6oZT72-nnyI/видео.html

  • @EricBrunoTV
    @EricBrunoTV 2 years ago

    Please, can you tell me the names of the software you used to make this video? Thank you

  • @ranam
    @ranam 2 years ago

    My question may be strange, but I have no one else to ask: can you tell me a Lagrange algorithm to find a minimum arbitrary volume within another volume which can contain it, maximizing it inside or minimizing it outside? 🙏🙏🙏

    • @casualscience
      @casualscience  2 years ago

      Hi, sorry, I'm not sure I understand the question: are you asking to maximize the volume of some shape given that it will fit within another shape? Because I don't believe that problem has a simple solution. Also, I would take a look at math.stackexchange.com; that's a good place to post these kinds of questions.

    • @ranam
      @ranam 2 years ago

      @@casualscience yes, any arbitrary volume inside another volume, telling whether it could fit or be maximized

  • @sfglim5341
    @sfglim5341 2 years ago +1

    Not related but the thumbnail looks like There Existed an Addiction to Blood by Clipping

    • @casualscience
      @casualscience  2 years ago

      haha, well it's the gradient field of circles centered at the origin, so I feel like it existed before Clipping.

  • @bocckoka
    @bocckoka 1 year ago

    Worst thing is that I once did, and I no longer do.

    • @casualscience
      @casualscience  1 year ago

      That doesn't bode well for the quality of my video 😨

  • @sridharbajpai2196
    @sridharbajpai2196 1 year ago

    Why can't we go in the direction of grad G? Please explain

    • @casualscience
      @casualscience  1 year ago

      Hi, this is covered from 7:20-9:40. But it's because you need G to have a fixed value (normally 0); if grad(G) is nonzero and you step along it, you are moving in a direction where G changes, e.g. it goes from 0 to not 0. Since we only want to explore the space where G is 0, we have to move only in the directions perpendicular to grad(G), i.e. along directions where G is not changing
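
      To first order, this is the statement that a small step dx changes G by

      $$dG = \nabla G \cdot dx,$$

      so staying on the constraint surface (dG = 0) forces dx to be perpendicular to grad(G); any component of dx along grad(G) changes the value of G.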

  • @user-hf5nz3pe9d
    @user-hf5nz3pe9d 1 year ago

    One of the most difficult to understand on this topic. Could have used simpler words.