Lagrange Multipliers: Data Science Basics

  • Published: 20 Sep 2019
  • How do we use Lagrange Multipliers in Data Science?
    ---
    Like, Subscribe, and Hit that Bell to get all the latest videos from ritvikmath ~
    ---
    Check out my Medium:
    / ritvikmathematics

Comments • 41

  • @robertleo3561 · 4 years ago · +51

    Hey man these videos are great, you deserve more attention

    • @matthewchunk3689 · 4 years ago · +5

      ritvik doesn't just throw a bunch of equations on the board: he puts everything into perspective, which is unique among mathy videos.

  • @jeonghwankim8973 · 3 years ago · +20

    Great explanation man. Your explanation is way more intuitive than most of the videos out there.

  • @milindyadav7703 · 3 years ago · +1

    One word... AMAZING... great job!

  • @fyaa23 · 4 years ago · +1

    Well explained!

  • @93mrFilip · 2 years ago

    Awesome video mate! It makes so much sense

  • @fabriai · 1 year ago

    Thanks a lot for the video. I hadn't realized the relationship between Lagrangians and eigenvectors.

  • @diegososa5280 · 3 years ago · +4

    This was fantastic. You have a gift for teaching.

  • @BhuvaneshSrivastava · 4 years ago · +7

    This is a great explanation. Thanks a ton.
    It would be great if you could make a video on the maths behind Logistic Regression as well.

    • @ritvikmath · 4 years ago · +2

      Oh, I have a video on that :) Please check my channel

  • @paulntalo1425 · 3 years ago · +11

    Now that you have shown us the theoretical part, I'd suggest also building practical Python code for each video. That would help theory meet code.
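
    A minimal Python sketch in that spirit (the matrix S below is a made-up symmetric example, not from the video): by the Lagrange-multiplier argument discussed in the thread, the maximizer of u^T S u subject to u^T u = 1 is the eigenvector of S with the largest eigenvalue.

      import numpy as np

      # Hypothetical symmetric matrix; any symmetric S works here
      S = np.array([[4.0, 1.0],
                    [1.0, 3.0]])

      # eigh is for symmetric matrices; eigenvalues come back in ascending order
      eigenvalues, eigenvectors = np.linalg.eigh(S)
      u_best = eigenvectors[:, -1]          # eigenvector of the largest eigenvalue

      print(u_best @ u_best)                # 1.0: the constraint u^T u = 1 holds
      print(u_best @ S @ u_best)            # equals eigenvalues[-1], the maximum value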

  • @DistortedV12 · 3 years ago · +5

    Just subscribed I like how you explain things and videos are very “snackable.”

  • @SupremeChickenx · 3 years ago · +5

    holy shit that was magical

  • @jameswang7362 · 3 years ago · +11

    Quick question: at 1:29, the matrix S you give is not symmetric, but when you do the Lagrange multipliers at 7:14 you use the result that the matrix derivative of u^T S u is 2Su, which (in your other video, "Derivative of a Matrix") you derived for the 2x2 case assuming the matrix was symmetric. Does this result hold in the non-symmetric case? If so, how would I go about showing that? If not, what can be done in the non-symmetric case?

    • @brendonanderson9058 · 3 years ago · +4

      When the matrix S is not symmetric, the gradient of the function f(u) = u^T S u is f'(u) = (S + S^T)u. From the Lagrangian stationarity condition, this implies that 2\lambda must be an eigenvalue of S + S^T. Therefore, the solution u* of the optimization is the eigenvector of S + S^T corresponding to its largest eigenvalue. (A numerical check of this appears after the thread.)

    • @alejrandom6592 · 2 years ago · +1

      Yeah I was thinking that too
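
    A quick numerical check of @brendonanderson9058's claim above, with hypothetical values (any non-symmetric S will do): the maximum of u^T S u over unit vectors matches what the top eigenvector of S + S^T achieves.

      import numpy as np

      rng = np.random.default_rng(0)
      S = rng.normal(size=(3, 3))                  # generically non-symmetric

      # Closed form: top eigenvector of the (symmetric) matrix S + S^T
      w, V = np.linalg.eigh(S + S.T)               # real eigenvalues, ascending order
      u_star = V[:, -1]

      # Brute force: evaluate u^T S u on many random unit vectors
      U = rng.normal(size=(200_000, 3))
      U /= np.linalg.norm(U, axis=1, keepdims=True)
      vals = np.einsum('ni,ij,nj->n', U, S, U)     # quadratic form per sample

      print(u_star @ S @ u_star)                   # closed form, equals w[-1] / 2
      print(vals.max())                            # brute force: approximately the same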

  • @ankitsrivastava175 · 3 years ago · +2

    Great videos, very well explained. Thank you!

  • @momaalim3086 · 3 years ago · +1

    Thank you very much sir. Really appreciate you making these super helpful videos.

  • @thirumurthym7980 · 3 years ago

    Thanks for this nice, crisp video. If it is a minimisation problem, does that mean we look for the minimum eigenvalue and its eigenvector? I'm looking forward to the video detailing the basics of Lagrange multipliers. Thanks once again.

  • @muhammadghazy9941 · 2 years ago

    Thank you man

  • @zimmri1 · 2 years ago

    Amazing

  • @1976turkish · 3 years ago · +1

    Very clear explanation. Good job, my friend.

  • @supriyamanna715 · 2 years ago

    Your intuition level is just A+. I think you should do DS and ML full time.

  • @alexrvolt662 · 4 years ago · +4

    I'd like to have a deeper view of this interpretation.
    The interpretation I understand intuitively is the one on Wikipedia, where the gradients of the objective function and of the constraint function have to be collinear.
    I like the eigenvalue-based interpretation, but something is unclear to me: in the present case, you chose the constraint function in such a manner that we get the simplification that lets us write things as an eigenvalue problem.
    It's harder for me to interpret it in a more general manner, with arbitrary objective/constraint functions.
    Moreover, what about the fact that we also need to set d(Lagrangian)/d(\lambda) to zero? I'm confused about how this could be related to eigenvalues.

    • @q0x · 3 years ago · +4

      From my point of view, the eigenvalue/eigenvector interpretation he uses here is just a SPECIAL CASE for the problem max u^T S u s.t. u^T u = 1 that he is discussing.
      While that problem is certainly interesting, the interpretation is misleading for the general case!
      There is no general connection between eigenvalues and Lagrange multipliers (as far as I know).
      For multiple possible interpretations of Lagrange multipliers, see Convex Optimization by Stephen Boyd & Lieven Vandenberghe.

  • @EW-mb1ih · 2 years ago · +4

    How do you choose the "order" of the constraint in your new function?
    I mean, how do you choose to write
    u^T S u + \lambda(1 - u^T u) and not
    u^T S u + \lambda(u^T u - 1)?

    • @barkinkaratay7951 · 1 year ago

      It doesn't matter. Both should lead to the same result.
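
    A worked check of that, assuming a symmetric S so that d(u^T S u)/du = 2Su (the non-symmetric case is discussed in another thread above):

      L_+(u, \lambda) = u^T S u + \lambda(1 - u^T u)  =>  dL_+/du = 2Su - 2\lambda u = 0  =>  Su = \lambda u
      L_-(u, \lambda) = u^T S u + \lambda(u^T u - 1)  =>  dL_-/du = 2Su + 2\lambda u = 0  =>  Su = -\lambda u

    In both cases dL/d\lambda = 0 recovers the constraint u^T u = 1. Only the sign carried by the multiplier flips; the stationary points u, and hence the optimizer, are identical.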

  • @HansKrMusic · 3 years ago · +1

    Thank you. To me, there is still something fishy going on: you say that you'd want to take the biggest eigenvalue of the matrix S and find its corresponding eigenvector, in order to "maximize the success". Those eigenvalues and eigenvectors can be complex-valued (a pair of a complex number and its conjugate), and we can't just compare complex numbers. So would the best direction be spiraling outwards?
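
    On that worry, a small check with made-up numbers (this 2x2 matrix is mine, not from the video): a non-symmetric S can indeed have complex eigenvalues, but the quadratic form u^T S u depends only on the symmetric part (S + S^T)/2, since the antisymmetric part contributes zero, and a symmetric matrix always has real eigenvalues. So there is a real best direction and no spiraling.

      import numpy as np

      S = np.array([[1.0, -2.0],
                    [3.0,  1.0]])
      print(np.linalg.eigvals(S))          # complex pair: 1 +/- i*sqrt(6)
      print(np.linalg.eigvalsh(S + S.T))   # real eigenvalues of the symmetric part: [1., 3.]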

  • @vinceb8041 · 3 years ago · +1

    u^T S u is a formula I've encountered several times; can anyone help me understand what it actually is? It seems we're projecting the linear transformation of u onto itself, but what does that tell us?
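
    One common reading, sketched with hypothetical data (my own toy numbers): u^T S u is a quadratic form, and in the case where S is a covariance matrix it equals the variance of the data projected onto the direction u.

      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.3],
                                                [0.3, 0.5]])   # toy data
      S = np.cov(X, rowvar=False)            # sample covariance (ddof=1)

      u = np.array([1.0, 1.0])
      u /= np.linalg.norm(u)                 # unit vector, so u^T u = 1

      print(u @ S @ u)                       # the quadratic form
      print(np.var(X @ u, ddof=1))           # variance of the projections: same number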

  • @MuammarElKhatib · 3 years ago · +1

    Great :)

  • @abmxnq · 2 years ago

    Live long and prosper.

  • @amnont8724 · 1 year ago

    6:00 I didn't understand why \lambda(1 - u^T u) isn't 0, since we defined u^T u = 1, didn't we?
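
    A short note on this point (standard Lagrangian reasoning, not specific to the video): \lambda(1 - u^T u) is zero at every feasible point, but its gradient with respect to u is not zero there:

      d/du [ \lambda(1 - u^T u) ] = -2\lambda u

    The Lagrangian is differentiated over all u, not only feasible ones, and that gradient term is exactly what produces the condition Su = \lambda u.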

  • @DestroManiak · 3 years ago

    What if the function that we're maximizing cannot be represented as a simple u^T S u? Only particular functions can. Or are we only interested in u^T S u in data science anyway?

  • @alejrandom6592 · 2 years ago · +2

    6:45 BIG mistake right here. This derivative ONLY holds when S is symmetric, which is NOT true in this case. The general solution is (S + S^T)u, which is easy to show. Other than that, good video ;)
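
    Since the comment notes this is easy to show, here is the index computation (standard, not from the video): write f(u) = u^T S u = sum_{i,j} u_i S_{ij} u_j. Then

      df/du_k = sum_j S_{kj} u_j + sum_i u_i S_{ik} = (Su)_k + (S^T u)_k

    so the gradient is (S + S^T)u, which reduces to 2Su exactly when S = S^T.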