Converting Constrained Optimization to Unconstrained Optimization Using the Penalty Method

  • Published: 23 Oct 2024

Comments • 91

  • @ChristopherLum
    @ChristopherLum  4 years ago +19

    In case it is helpful, all my Optimization videos in a single playlist are located at ruclips.net/p/PLxdnSsBqCrrHo2EYb_sMctU959D-iPybT. Please let me know what you think in the comments. Thanks for watching!

  • @edwardmau5877
    @edwardmau5877 5 months ago +1

    [AE 512] Thanks for going in depth and defining every variable; it makes the material much clearer to follow. I also now understand the explicit differences between constrained and unconstrained optimization, and you showed how to use both together to exploit the efficiencies of each.

  • @darylfishback-duran3580
    @darylfishback-duran3580 4 years ago +5

    This was a fantastic video. I worked within MATLAB alongside the video and it was great to see all the ideas come together into the final plot showing the two constraints and the numerical minimum. The explanations are always clear and concise. Looking forward to the next ones!

    • @ChristopherLum
      @ChristopherLum  4 years ago +2

      Hi Daryl, I'm glad you liked it. Let me know what you think about the next few videos as well since they are going to build on this and use it to finally get our RCAM model flying the way we want it to.

  • @timproby7624
    @timproby7624 5 months ago +1

    [AE 512] The clear distinction between unconstrained and constrained optimization, and the purpose of each, is excellent.

  • @Gholdoian
    @Gholdoian 5 months ago

    AE 512: Wow such a powerful yet simple way to reframe optimization routines to use basic optimization schemes.

  • @koshiroyamaguchi9613
    @koshiroyamaguchi9613 2 years ago +2

    AA516: I had only vaguely understood constrained optimization ideas before, but this video cleared things up so much. Thank you Prof. Lum!

  • @Kumky605
    @Kumky605 8 months ago

    AA516: I have gone over optimization several times in my education and struggled through it at times. This video helped clear up a lot of confusion

  • @milesrobertroane955
    @milesrobertroane955 8 months ago

    AA516: All of the Matlab visualizations were so helpful in understanding how the distance from the constraint impacts how much it is attracted in that direction!

  • @AlexandraSurprise
    @AlexandraSurprise 8 months ago +1

    AA516: Allie S, THIS IS SO COOL! This type of mathematical manipulation is exactly what enticed me to go into math and engineering in the first place. I'm so excited to see the following videos!!

    • @ChristopherLum
      @ChristopherLum  8 months ago +1

      Optimization is one of the coolest math topics. Feel free to check out the other videos if you are interested.

  • @rowellcastro2683
    @rowellcastro2683 8 months ago

    AA516: These penalty functions are very nice and simple to implement in matlab using fminsearch. Thanks for the lecture Professor.
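
    For reference, a minimal sketch of the pattern this comment describes (a quadratic penalty added to the cost, then handed to fminsearch); the cost, constraint, and alpha value below are illustrative placeholders, not the ones from the video:

      % Minimal penalty-method sketch (made-up problem, for illustration only):
      % minimize f0(x) subject to f1(x) = 0 by penalizing f1(x)^2.
      f0    = @(x) (x(1)-2)^2 + (x(2)-1)^2;   % placeholder cost to minimize
      f1    = @(x) x(1) + x(2) - 1;           % placeholder equality constraint, f1(x) = 0
      alpha = 100;                            % penalty weight (assumed value)
      fhat  = @(x) f0(x) + alpha*f1(x)^2;     % penalized, unconstrained cost
      xhatstar = fminsearch(fhat, [0; 0]);    % unconstrained solve from x0 = [0; 0]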

  • @akshaymishra2918
    @akshaymishra2918 9 months ago

    One of the BEST videos to understand the topic.

  • @chayweaver.2995
    @chayweaver.2995 4 months ago

    AE512: This is a very cool visualization and new way of looking at constrained vs. unconstrained optimization.

  • @mayfu6508
    @mayfu6508 2 years ago

    this is just amazing, can't express how grateful i am.

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Hi May,
      Thanks for the kind words, I'm glad you enjoyed the video. If you find these videos to be helpful, I hope you'll consider supporting the channel via Patreon at www.patreon.com/christopherwlum. Given your interest in this topic, I'd love to have you as a Patron as I'm able to talk/interact personally with all Patrons. Thanks for watching!
      -Chris

  • @manitaregmi6932
    @manitaregmi6932 2 years ago +1

    AA 516 - Another great lecture, I like how you use both Mathematica and Matlab together along with your lecture to explain the material.

  • @yaffetbedru6612
    @yaffetbedru6612 8 months ago

    AA516: The visuals helped tons in my understanding of the constraints and their solutions.

  • @justinhendrick3743
    @justinhendrick3743 4 years ago +6

    Thanks for the lecture, Professor! One bit of constructive feedback. The audio is much louder while you're at the board than at the computer. Maybe doing some balancing of the loudness while editing the video together would help.

    • @ChristopherLum
      @ChristopherLum  4 years ago +2

      Hi Justin, thanks for the feedback. I'll look into this. I think the mic I use for the computer recordings is cleaner with less background noise which causes some of the issue in perceived loudness. How much of a volume difference do you perceive? Is it just this video or do others exhibit similar behavior?

  • @davidtelgen8114
    @davidtelgen8114 5 months ago

    AE 512: Great explanation, excited to use this on RCAM

  • @WalkingDeaDJ
    @WalkingDeaDJ 5 months ago

    Jason-AE512: This video appears to be a useful resource for understanding how to transform optimization problems, potentially valuable for students and professionals in fields like operations research or applied mathematics.

  • @zaneyosif
    @zaneyosif 5 months ago

    AE512: Interesting to think about the differences between unconstrained and constrained. Depending on the alpha that is chosen, it looks like you can get a relatively similar solution to the constrained problem. I'm curious if there is a specific reason why we would choose to implement a penalty function/unconstrained optimization rather than a constrained optimization? Is it just simply easier to solve (numerically)? Great video!

  • @bingxinyan8103
    @bingxinyan8103 2 years ago +1

    It was beneficial for me to understand how to convert a constrained optimization problem into an unconstrained one, and the MATLAB implementations were very helpful. In applying cubic spline regression in engineering, I found a lot of papers using a penalty function to avoid overfitting, or using the integrated squared second derivative of the cubic spline as the penalty. I am confused about adding the "avoid overfitting" penalty and why one would choose that form of penalty. Would it be possible to give us a video about those? Also, would it be possible to provide a video about implementations in Python? Either way, I've learned a lot from this video. Again, thank you very much.

  • @petermay6090
    @petermay6090 8 months ago

    AA516: Useful and concise, thank you!

  • @disturbed_singer2758
    @disturbed_singer2758 4 years ago +1

    thank you for the lecture. the video was very helpful. keep up the good work. thanks again

  • @nikitatraynin1549
    @nikitatraynin1549 3 years ago

    Thank you! Great explanation and great video. Make sure to check your volume levels though: when you are screen sharing with MATLAB or Mathematica, the volume is much lower than when you are using the whiteboard.

  • @Mike-w6b4i
    @Mike-w6b4i 2 years ago

    Thanks for the lecture! It helps me a lot in my research on MOP!

  • @esanayodelebenjamin6875
    @esanayodelebenjamin6875 3 years ago

    Thank you for this video sir. It was really helpful for me.

  • @jia-hueiju264
    @jia-hueiju264 4 years ago

    great video!
    it's really really clear!
    Thanks, hope to see more great optimization lectures

  • @zhikunzhang8210
    @zhikunzhang8210 4 years ago +3

    Hi Professor Lum, I am wondering how to choose the penalty parameters alpha for the penalty functions in practice. In the example, it seems that a larger alpha is better.

    • @dboozer4
      @dboozer4 2 years ago

      Start with a small value to lessen the sharp edge created and then increase with each iteration.
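
      A sketch of that increasing-alpha schedule, reusing the kind of placeholder problem sketched earlier (the specific alpha values are assumptions, not from the video):

        % Grow the penalty weight each pass, warm-starting from the last solution:
        f0 = @(x) (x(1)-2)^2 + (x(2)-1)^2;      % placeholder cost
        f1 = @(x) x(1) + x(2) - 1;              % placeholder equality constraint
        x  = [0; 0];                            % initial guess
        for alpha = [1 10 100 1000]             % assumed schedule of penalty weights
            fhat = @(x) f0(x) + alpha*f1(x)^2;  % rebuild the penalized cost
            x    = fminsearch(fhat, x);         % warm-start from the previous solution
        end                                     % x approaches the constrained minimum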

  • @darksufer
    @darksufer 3 years ago

    this video was helpful to understand more about optimization applications.

  • @fanghsuanhsu7008
    @fanghsuanhsu7008 4 years ago +2

    Thanks for your lecture, it is really helpful~

    • @ChristopherLum
      @ChristopherLum  4 years ago

      You're very welcome, there are several other similar videos on the channel. Please feel free to check them out and let me know what you think. Thanks for watching!

  • @alijudi5103
    @alijudi5103 3 years ago

    Great explanation. Many thanks for the effort.

  • @anilcelik16
    @anilcelik16 4 years ago +2

    Thank you for the videos. Is it possible to share Matlab codes?

  • @burningbush2009
    @burningbush2009 2 years ago +1

    AE512: Thanks for the video Professor! Is there ever an advantage to using a higher-order term for the unconstrained part of fhat? i.e., fhat = f0 + alpha*f1^4 or similar?

    • @ChristopherLum
      @ChristopherLum  2 years ago

      You could, if you want to penalize more aggressively, as the 4th-power term grows faster than the 2nd-power term.
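
      One caveat worth noting: the quartic term only dominates for constraint violations larger than 1; for |f1(x)| < 1 it is actually gentler than the quadratic. A quick plot (illustrative only) makes this visible:

        v = linspace(-2, 2, 401);           % range of constraint violations f1(x)
        plot(v, v.^2, v, v.^4), grid on     % quadratic vs. quartic penalty (alpha = 1)
        legend('v^2', 'v^4'), xlabel('constraint violation v')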

  • @cupdhyaya
    @cupdhyaya 2 years ago

    Please discuss particle swarm for constrained optimization.

  • @tharunsankar4926
    @tharunsankar4926 4 years ago +2

    great vid professor!

    • @ChristopherLum
      @ChristopherLum  4 years ago

      Thanks Tharun. Standby for the last video that will actually get us trimming our aircraft model using this technique.

  • @sanjaykrkk
    @sanjaykrkk 4 years ago +1

    Thanks for the lecture, Professor Lum. For a large problem where we cannot compare the approximate solution with the actual solution, how do we decide the range of alpha values? As pointed out in one of the comments, it seems larger alpha is better. Is there any issue with that?

    • @jarekwatroba2663
      @jarekwatroba2663 3 years ago +1

      The issue is that there is a trade-off. The larger you make alpha, the higher your optimal function value will be, which is undesirable as you are looking for a minimum value. Your goal is to minimize the function given soft constraints. The closer you pull the solution to the constraint, the higher your end result, since the 1D parabola doesn't coincide with the local function minimum. Also, imagine the function has "sharp" turns, i.e., is highly non-linear, or you have many more variables. By imposing very high alpha, beta, etc. values, you potentially miss out on highly optimized solutions which exist if you are willing to relax your constraints just a little bit. That's why he's checking an entire range of alpha. It makes more sense in higher dimensions and/or more non-linear applications than just a 2D quadratic.

  • @idea9423
    @idea9423 2 years ago +1

    Thank you ☺️

  • @aijazsiddiqui1721
    @aijazsiddiqui1721 2 years ago

    Thank you for the video. Could you please share the notes you are referring to during the lecture? It would be quite helpful.

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Hi Aijaz,
      Thanks for the kind words, I'm glad you enjoyed the video. If you find these videos helpful, I hope you'll consider supporting the channel via Patreon at www.patreon.com/christopherwlum or via the 'Thanks' button underneath the video. Given your interest in this topic, I'd love to have you as a Patron as I'm able to talk/interact personally with all Patrons. I can also answer any questions and provide code/downloads on Patreon. Thanks for watching!
      -Chris

  • @reesetaylor3506
    @reesetaylor3506 5 months ago

    AE 512: Interesting technique for optimization. How would the inequality penalty function at 37:30 change if instead the constraint expression was f_i(x) >= 0 or just f_i(x) > 0? Won't the implementation using the max function fail to mimic this constraint properly?
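
    For what it's worth, one standard convention (which may or may not match the exact form used at 37:30) handles the flipped inequality by flipping the sign inside the max, as in this sketch with a made-up constraint:

      g    = @(x) x(1)^2 + x(2)^2 - 4;   % placeholder constraint function
      P_le = @(x) max(0, g(x))^2;        % enforces g(x) <= 0: nonzero only when g > 0
      P_ge = @(x) max(0, -g(x))^2;       % enforces g(x) >= 0: nonzero only when g < 0
      % A strict inequality g(x) > 0 cannot be distinguished from g(x) >= 0 by a
      % penalty of this form, since both penalties vanish on the boundary g(x) = 0.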

  • @milesbridges3547
    @milesbridges3547 1 year ago

    AA 516: This lecture really helped me better understand the power of optimization. What is the purpose of changing a constrained optimization problem to an approximate unconstrained problem using the penalty functions? Is the unconstrained problem just easier to solve numerically?

    • @ChristopherLum
      @ChristopherLum  1 year ago +1

      In general, yes, unconstrained is much easier than constrained. In particular, fminsearch is unconstrained.

  • @aimeepak717
    @aimeepak717 5 months ago +1

    AE512: I can see why having properly defined constraints is important to finding the approximately equivalent unconstrained optimization problem.

  • @Colin_Baxter_UW
    @Colin_Baxter_UW 8 months ago

    AA516: I see how using the multiple cost function constraints will translate over into setting constraints for different variables in our RCAM model, like roll angle, pitch angle, etc.

  • @willpope3151
    @willpope3151 3 years ago +1

    [AA 516] One of my favorite lectures, I was surprised at how simple the penalty method is to implement. Is there a reason the original cost function is written using matrices and transpose vectors? I wasn't sure if that was something unique to optimization.

    • @ChristopherLum
      @ChristopherLum  3 years ago

      You don't have to use matrices and vectors; I (and other people in the optimization field) like to write it like this, so I stuck with the standard convention.

  • @boeing797screamliner
    @boeing797screamliner 3 years ago +1

    AA516 - Great lecture as usual!

  • @princekeoki4603
    @princekeoki4603 8 months ago

    AA516: What's the penalty for the designer if they choose an exceedingly large value of alpha?

  • @PatrickGalvin519
    @PatrickGalvin519 8 months ago

    AA516: If it's known that the solution to the constrained problem exists, when it's converted into an unconstrained problem are there any drawbacks to just cranking alpha up to something like 1e6 to try to get very close to the exact solution?

  • @priyankadoiphode5
    @priyankadoiphode5 4 years ago

    Sir! I just wanted to ask whether static optimization is unconstrained optimization?

  • @arveanlabib5333
    @arveanlabib5333 2 years ago

    [AA 516] Great lecture! Would increasing the penalty parameter always improve the accuracy of the final converged value? If so, what is the point of using small penalty parameters?

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Arvean, great question. Not always. You need to make the penalty parameters relative to the magnitude of the constraints. Let's chat more at office hours and I can more fully explain.
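
      As a sketch of what "relative to the magnitude of the constraints" might look like in practice (all functions and scale factors below are assumed placeholders): normalizing each constraint by a typical magnitude lets a single alpha treat differently-scaled constraints comparably.

        f0 = @(x) (x(1)-2)^2 + (x(2)-1)^2;   % placeholder cost
        f1 = @(x) x(1) + x(2) - 1;           % constraint with values of order 1
        f2 = @(x) 1000*x(1) - 500;           % constraint with values of order 1000
        alpha = 100;                         % one shared penalty weight (assumed)
        fhat  = @(x) f0(x) + alpha*(f1(x)/1)^2 + alpha*(f2(x)/1000)^2;
        xhatstar = fminsearch(fhat, [0; 0]);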

  • @alirtha2020
    @alirtha2020 3 years ago +1

    Thank you very much, I wish you success.
    I have research concerning constrained and unconstrained optimization; could you possibly help?

  • @AJ-et3vf
    @AJ-et3vf 1 year ago

    Great video. Thank you

    • @ChristopherLum
      @ChristopherLum  1 year ago

      Hi AJ,
      Thanks for the kind words, I'm glad you enjoyed the video. If you find these videos helpful, I hope you'll consider supporting the channel via Patreon at www.patreon.com/christopherwlum or via the 'Thanks' button underneath the video. Given your interest in this topic, I'd love to have you as a Patron as I'm able to talk/interact personally with all Patrons. I can also answer any questions, provide code, notes, downloads, etc. on Patreon. Thanks for watching!
      -Chris

  • @bsgove
    @bsgove 4 months ago

    AE512: It's interesting that the teaching of optimization so often involves functions of 1 or 2 dimensions. I imagine this is because it's really hard to visualize optimization problems in higher dimensions... I wonder if people have tried visualizing beyond 3 dimensions somehow.

  • @paramjeetkaur9208
    @paramjeetkaur9208 3 years ago

    Great explanation, sir. Can you tell us the values of alpha1 and alpha2, and please explain the MATLAB coding for this?

  • @mayfu6508
    @mayfu6508 2 years ago

    thank you so much!

  • @chadigaali7680
    @chadigaali7680 3 years ago

    thank you very much

  • @knighttime19
    @knighttime19 3 years ago

    I have tried the same principle on my problem, but it didn't work unless one of the alphas was negative. Any comments are appreciated.

  • @tilio9380
    @tilio9380 3 years ago

    AA 516 This is a minor issue, but the last time stamp is incorrectly labeled.

    • @ChristopherLum
      @ChristopherLum  3 years ago

      Tim, thanks for catching this. I've updated it; does it look correct now? Please let me know if you find any other inconsistencies, thanks!

  • @Js_vici
    @Js_vici 3 years ago

    AA 516 - Thank you for the video! I am wondering why we use x0 for all iterations. Can we instead use the xhatstar values after the first iteration?

    • @ChristopherLum
      @ChristopherLum  3 years ago +1

      Chris, let's chat at office hours, it might be easier to talk about over Zoom.

  • @hasanhorata8381
    @hasanhorata8381 2 years ago

    AA 516 - Is there a reason why we square the penalty functions?

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Hasan, great question. Yes, if we didn't square them, then negative values would actually decrease the cost function and the optimizer would be incentivized to choose large negative values. Squaring the values gets around this.
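
      A quick numeric check of this point, with a made-up constraint (not from the video):

        f1 = @(x) x - 5;   % want f1(x) = 0, i.e., x = 5
        f1(-100)           % = -105: the unsquared term rewards a huge violation
        f1(-100)^2         % = 11025: the squared term correctly punishes it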

  • @alexzhen179
    @alexzhen179 3 years ago

    AA516: Great lecture! I have a question about alpha. It seems like the solution converges to the optimal solution as alpha increases. Does that mean we can directly set alpha to infinity (analytically) or a very large number (numerically) to get the answer? How do we know that alpha is large enough to get an approximately optimal solution? Another question is about when there are multiple constraints. In the example, alpha1 and alpha2 have the same value. Is that always the case? If not, how do we weigh different alphas for different penalties?

    • @ChristopherLum
      @ChristopherLum  3 years ago

      Alex, all good questions, let's talk at office hours as this is probably easier to go over in person.

    • @mohamedelgamal6333
      @mohamedelgamal6333 3 years ago

      Alex, would you brief us on the answers you got to your aforementioned questions?

  • @ojasvikamboj6083
    @ojasvikamboj6083 1 year ago

    AA 516: Ojasvi Kamboj

  • @ravinpech5220
    @ravinpech5220 2 years ago

    Can I ask for the code, teacher?

    • @ChristopherLum
      @ChristopherLum  2 years ago

      Hi,
      Thanks for reaching out. This is a benefit I provide to supporters on Patreon at www.patreon.com/christopherwlum. I'd love to have you as a Patron as I'm able to talk/interact personally with Patrons. Thanks for watching!
      -Chris

  • @kisitujohn6817
    @kisitujohn6817 25 days ago

    Kisitu john

  • @anilcelik16
    @anilcelik16 4 years ago

    Gradient descent and stochastic gradient descent algorithms with real-life applications would be very helpful, I guess.

    • @mohamedelgamal6333
      @mohamedelgamal6333 3 years ago

      I'd appreciate it if you could share a link to read more about gradient descent and stochastic gradient descent algorithms and how to apply them in real-life applications. Many thanks.

  • @rowellcastro2683
    @rowellcastro2683 8 months ago

    AA516: 12:11 Is that Veritasium lol

  • @shavykashyap
    @shavykashyap 8 months ago

    AA 516

  • @Po-ChihHuang
    @Po-ChihHuang 8 months ago

    AA516: Po

  • @aaroncapozella5365
    @aaroncapozella5365 8 months ago

    AA516
