Levenberg-Marquardt Algorithm

  • Published: 18 Nov 2024

Comments • 52

  • @janplechaty1702
    @janplechaty1702 1 month ago +1

    I usually don't look at videos longer than 30 minutes but WOW.. I saw it whole and it was amazing. Many thanks to you!

  • @capsbr2100
    @capsbr2100 1 year ago +4

    Fantastic. You made a complex subject seem easier to understand by your way of explaining it in a clear, intuitive, illustrative and easy language. Thank you very much.

  • @ut971
    @ut971 2 years ago +5

    Thank you so much for uploading this. It means A LOT to every engineering student in different parts of the world who is struggling to understand this algorithm.

    • @mehran1384
      @mehran1384 2 years ago +2

      You are welcome. Happy that you like the video. Please share this Channel with your friends.

  • @gabrielperez1369
    @gabrielperez1369 2 years ago +2

    Excellent explanation! Your English is very good and easy to understand! Thank you very much!

  • @thedanebear
    @thedanebear 11 months ago

    Incredibly intuitive and helpful. Easily the best way out there to spend an hour to better understand this topic

  • @martvald
    @martvald 9 months ago +1

    Thanks for the explanation. I will add that this is not LM though; this is a trust-region method using GD and NR, while LM is a trust-region-based method using GD and Gauss-Newton (GN). They look similar, but you would end up with x_(n+1) = x_n - (J^T*J + kI)^(-1)*J^T*E_n, where k is lambda, J is the Jacobian matrix, and E_n is the error vector (see GN). But other than that, the explanation of how the weights etc. are used is very descriptive.

    • @mauriciogonzalez1998
      @mauriciogonzalez1998 8 months ago

      Hi, where could I find an explanation this clear of the real LM method?

    • @eaglezhou1243
      @eaglezhou1243 5 months ago

      You are right. Strictly speaking, the LM method is a trust-region-based method that solves the nonlinear least-squares problem, in which the Hessian is approximated by J^T*J instead of the conventional second-order derivative, and the gradient is computed from the error vector as J^T*E.
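The damped update written out in this thread, x_(n+1) = x_n - (J^T*J + k*I)^(-1)*J^T*E_n, can be sketched in Python on a toy curve fit. The exponential model and the factor-of-10 lambda schedule below are illustrative assumptions, not taken from the video:

```python
import numpy as np

def lm_step(x, residual, jacobian, lam):
    """One Levenberg-Marquardt update: x - (J^T J + lam*I)^(-1) J^T E."""
    J = jacobian(x)          # m x n Jacobian of the residual vector
    E = residual(x)          # m-dimensional error vector
    step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ E)
    return x - step

# Toy problem: fit y = a * exp(b * t) to synthetic data by least squares.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)

def residual(p):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

p = np.array([1.0, 0.0])     # rough initial guess
lam = 1e-2
for _ in range(50):
    p_new = lm_step(p, residual, jacobian, lam)
    if np.sum(residual(p_new) ** 2) < np.sum(residual(p) ** 2):
        p, lam = p_new, lam / 10.0   # good step: trust Gauss-Newton more
    else:
        lam *= 10.0                  # bad step: lean toward gradient descent
print(p)   # should approach [2.0, 0.5]
```

With lambda near zero this is a pure Gauss-Newton step; with lambda large, (J^T*J + lam*I) is dominated by the identity and the step shrinks toward a small gradient-descent move, which is the blending both commenters describe.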

  • @neoneo1503
    @neoneo1503 2 years ago +2

    Thanks for your explanation!! The Levenberg-Marquardt method balances convergence speed (Newton's method) and convergence robustness (GD)

    • @mehran1384
      @mehran1384 2 years ago

      You are welcome. Happy to hear that you found the video useful. Please share this channel with your friends.

    • @neoneo1503
      @neoneo1503 2 years ago

      @@mehran1384 Yeah I will😊, Thanks!

  • @smchiew7708
    @smchiew7708 2 years ago +1

    Very clear explanation for the Levenberg-Marquardt algorithm. Thank you so much!

  • @justman7656
    @justman7656 1 year ago +1

    Great and very clear explanation! Thank you so much for your work

  • @pedrohenriquesiscato9768
    @pedrohenriquesiscato9768 3 months ago +1

    Thank you for that video. Excellent explanation!

  • @mokhaladhasan6937
    @mokhaladhasan6937 1 year ago

    Many thanks to you, it was a very clear and simple explanation from a professional. My understanding of this algorithm was stuck at some points (such as GD 😊😊) until this video.

  • @shafqatjabeen1104
    @shafqatjabeen1104 1 year ago

    Thank you so much for this video. Very clear information

  • @polinba
    @polinba 1 year ago +1

    Thank you for the amazing video! It helped me a lot!

  • @skymanaditya
    @skymanaditya 3 years ago +1

    Great video. Explained with utmost clarity!

    • @mehran1384
      @mehran1384 3 years ago +1

      thanks. happy you liked it.

  • @vlado.erdman
    @vlado.erdman 3 years ago +1

    Great, easy to understand explanation. Thank you.

    • @mehran1384
      @mehran1384 3 years ago

      Happy that you found the video easy to follow. Please share this channel with your friends.

  • @priyachimurkar6058
    @priyachimurkar6058 2 years ago +1

    Nice videos with excellent demonstrations

    • @mehran1384
      @mehran1384 2 years ago

      Happy to hear that you liked the video. Please share this channel with your friends.

  • @zheka47
    @zheka47 2 years ago +1

    Amazing explanations!

  • @workaccount6597
    @workaccount6597 3 years ago +1

    I have been binge-watching your videos about non-linear equations and their solvers and optimizers. With every video I am getting more clarity. Your background in teaching students at different levels really helps you explain very clearly. One question though: do you think we (as in viewers) can get the material from your videos?

    • @mehran1384
      @mehran1384 3 years ago

      Thanks. I am not sure if I understood your question about getting the material? Could you elaborate?

    • @workaccount6597
      @workaccount6597 3 years ago

      @@mehran1384 The one note notes are what I meant.

  • @RLDacademyGATEeceAndAdvanced
    @RLDacademyGATEeceAndAdvanced 2 years ago +1

    Excellent video

  • @kihoon2217
    @kihoon2217 2 years ago +1

    Great lecture

    • @mehran1384
      @mehran1384 2 years ago

      Thank you. Please share this channel with your friends.

  • @minute_machine_learning5362
    @minute_machine_learning5362 9 months ago

    Great talk, and highly informative.
    Can you provide the sheet that you are presenting?

  • @Chadwikj
    @Chadwikj 8 months ago

    Fantastic. Thank you!

  • @kleanthiskaramvasis9512
    @kleanthiskaramvasis9512 2 years ago +1

    Excellent presentation :) :)

    • @mehran1384
      @mehran1384 2 years ago

      Thank you. Please share this channel with your friends.

  • @tsalex1992
    @tsalex1992 1 year ago

    Thanks for the video! From my understanding, the most common heuristic for lambda is to have the increase factor be smaller than the decrease factor. However, I'm not sure that I understand the rationale, since we expect the algorithm to have more decreasing steps. At some point lambda will reach zero, or at least zero in the numerical sense. Can you elaborate a bit more on this point?

  • @tshipmatic
    @tshipmatic 3 years ago +1

    Awesome video! Easy to follow along. One question: is there a way to choose the initial value of lambda, or would any value work?

    • @mehran1384
      @mehran1384 3 years ago

      Sorry for the late response. Since lambda changes by an order of magnitude each time, its initial value is not so critical. An imperfect lambda just slows down the entire convergence by only a few iterations.
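One way to picture why the starting value barely matters: the damping factor moves by an order of magnitude every accepted or rejected step, so a poor start is corrected within a few iterations. A minimal sketch of such a schedule (the factors of 10 and the floor value are illustrative assumptions, not from the video):

```python
def update_lambda(lam, cost_old, cost_new, up=10.0, down=10.0, lam_min=1e-12):
    """Adapt the LM damping parameter by an order of magnitude per iteration.

    Returns (new_lambda, accept_step). The up/down factors and the floor
    lam_min are illustrative choices, not values taken from the video.
    """
    if cost_new < cost_old:
        # Step reduced the error: trust the Gauss-Newton model more.
        return max(lam / down, lam_min), True
    # Step increased the error: reject and lean toward gradient descent.
    return lam * up, False

# Starting at lam = 1.0 instead of, say, 1e-3 only costs ~3 extra halvings
# of this loop, since each successful step divides lambda by 10.
lam = 1.0
lam, accepted = update_lambda(lam, cost_old=5.0, cost_new=4.0)
print(lam, accepted)   # 0.1 True
```

The floor lam_min also addresses the concern raised elsewhere about lambda underflowing to numerical zero after many consecutive successful steps.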

  • @danielhelmanlee5126
    @danielhelmanlee5126 3 years ago +1

    Is this the least-squares and Levenberg-Marquardt algorithm? I see things like the Jacobian matrix in other resources...

    • @mehran1384
      @mehran1384 3 years ago

      This is the standard LM algorithm. It has least squares as a part of it.

  • @ИльяЧугунов-д1с
    @ИльяЧугунов-д1с 1 year ago

    That's great!

  • @mohammadsheikhpour6612
    @mohammadsheikhpour6612 2 years ago

    thank you so much

  • @DongIncheonExpress
    @DongIncheonExpress 2 years ago

    Great work! Thank you for the good explanation. Can I get the OneNote lecture notes that you showed us in this lecture?

  • @sephgeodynamics9246
    @sephgeodynamics9246 2 years ago +1

    thank you

    • @mehran1384
      @mehran1384 2 years ago

      You are welcome. Please share this channel with your friends.

  • @gianmarcoalarcon6185
    @gianmarcoalarcon6185 3 years ago +1

    Nice Video!!!

  • @wwefan9391
    @wwefan9391 2 years ago +1

    Thank you for this great video, but I'm just wondering: in the MATLAB code for the gradient descent method, why did you divide by norm(temp)? What's the purpose of it?

    • @mehran1384
      @mehran1384 2 years ago

      You are welcome. Dividing by the norm gives a unit vector (direction only) of the motion, and its magnitude is determined by alpha.

    • @wwefan9391
      @wwefan9391 2 years ago

      @@mehran1384 I'm a bit weak in linear algebra, so I'm not sure what alpha is. Also, norm(temp) is taking the norm of a 2×2 matrix, correct? Does dividing by the norm of a matrix also give us a unit vector, like dividing by the norm of a vector? Because I thought taking the norm of a matrix gives us info about how big the elements are.
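For readers following this exchange, a minimal Python sketch of the idea (the video's code is MATLAB; here temp is assumed to hold the gradient as a vector, not a matrix): dividing by the norm isolates the direction, so alpha alone controls the step length.

```python
import numpy as np

def normalized_gd_step(x, grad, alpha):
    """Move a fixed distance alpha along the negative gradient direction.

    Dividing the gradient by its Euclidean norm leaves a unit vector
    (direction only); the step length is then exactly alpha, regardless
    of how large or small the raw gradient is.
    """
    g = grad(x)
    return x - alpha * g / np.linalg.norm(g)

# Minimize f(x, y) = x^2 + y^2, whose gradient is (2x, 2y).
grad = lambda x: 2.0 * x
x = np.array([3.0, 4.0])
step = normalized_gd_step(x, grad, alpha=0.5) - x
print(np.linalg.norm(step))   # 0.5: step length equals alpha
```

If temp really were a 2x2 matrix, dividing by a matrix norm would only scale its entries, as the commenter suspects; it would not produce a unit direction vector.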