Descent methods and line search: Golden section

  • Published: 30 Sep 2024
  • Bierlaire (2015) Optimization: principles and algorithms, EPFL Press. Section 11.2.2

Comments • 7

  • @reY-bk2vl
    @reY-bk2vl 1 year ago +1

    Fabulous work. Many thanks!

  • @pnachtwey
    @pnachtwey 4 months ago

    I would like to see a real example. In my case the line search begins at the current point, in the direction opposite to the gradient, and I must search along that line. I can just repeatedly step along the line and evaluate the cost function until it no longer gets smaller. You are suggesting searching between two points, but how far should the end point or "low" point be from the current "high" point? You are assuming the boundary of the end point is known and will bracket the minimum along the line search. What if it isn't?
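
    A common answer to this question (my own sketch, not from the lecture) is to bracket first: starting from the current point, grow the step geometrically until the cost stops decreasing; the minimum then lies between the last two or three trial points, and golden section can refine that interval. The function names and parameters below are illustrative, not from any particular library.

    ```python
    import math

    GOLDEN = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618

    def bracket(phi, step=1e-2, grow=2.0, max_iter=100):
        """Expand the step until phi stops decreasing.

        Assumes phi decreases just past t = 0 (a descent direction).
        Returns (a, c) such that the minimum of phi lies in [a, c].
        """
        a, b = 0.0, step
        fa, fb = phi(a), phi(b)
        if fb >= fa:          # already overshot: the minimum is in [0, step]
            return a, b
        for _ in range(max_iter):
            c = b * grow
            fc = phi(c)
            if fc >= fb:      # phi turned back up: minimum is in [a, c]
                return a, c
            a, fa, b, fb = b, fb, c, fc
        raise RuntimeError("no bracket found; phi may be unbounded below")

    def golden_section(phi, a, b, tol=1e-8):
        """Golden-section search for a minimizer of phi on [a, b]."""
        x1 = b - GOLDEN * (b - a)
        x2 = a + GOLDEN * (b - a)
        f1, f2 = phi(x1), phi(x2)
        while b - a > tol:
            if f1 < f2:       # minimum is in [a, x2]; reuse x1 as new x2
                b, x2, f2 = x2, x1, f1
                x1 = b - GOLDEN * (b - a)
                f1 = phi(x1)
            else:             # minimum is in [x1, b]; reuse x2 as new x1
                a, x1, f1 = x1, x2, f2
                x2 = a + GOLDEN * (b - a)
                f2 = phi(x2)
        return (a + b) / 2
    ```

    For a descent method, `phi(t)` would be `f(x0 - t * grad)`; e.g. with `phi = lambda t: (t - 1.3) ** 2`, `bracket(phi)` returns an interval containing 1.3 and `golden_section` then pins it down.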

  • @yassinebouzbiba1360
    @yassinebouzbiba1360 4 years ago +1

    Hi professor! Thank you for your explanation. I do however wonder why in your "proof" the ratio was (3-sqrt(5))/2 while the golden section is (1+sqrt(5))/2. Thank you from the Netherlands!!!
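
    One way to reconcile the two numbers (my reading, not necessarily the intended answer): with the golden ratio φ = (1+√5)/2, the ratio ρ = (3−√5)/2 ≈ 0.382 that appears in the proof satisfies ρ = 1/φ² = 1 − 1/φ, i.e. it is the fraction of the interval discarded at each step, while 1 − ρ = 1/φ ≈ 0.618 is the fraction kept. A quick numerical check:

    ```python
    import math

    phi = (1 + math.sqrt(5)) / 2   # golden ratio, about 1.618
    rho = (3 - math.sqrt(5)) / 2   # ratio from the proof, about 0.382

    # All three expressions coincide: rho = 1/phi**2 = 1 - 1/phi
    print(rho, 1 / phi**2, 1 - 1 / phi)
    ```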

  • @nabajyotimajumdar4511
    @nabajyotimajumdar4511 3 years ago +1

    Thanks!

  • @isaacnewton1545
    @isaacnewton1545 3 years ago

    When does strict unimodality hold?

    • @MichelBierlaire
      @MichelBierlaire  3 years ago +1

      Proving unimodality is often hard. If it does not hold, the method will converge to one of the local optima.
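
This reply can be illustrated numerically. The sketch below (my own, not from the lecture; the function is made up for the demonstration) runs golden-section search on an interval containing two local minima: a broad one near t ≈ 1 and a deeper, very narrow one near t ≈ 3.8. The search still converges, but to the broad local minimum, missing the global one.

```python
import math

def golden_section(phi, a, b, tol=1e-8):
    """Golden-section search for a minimizer of phi on [a, b]."""
    r = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = phi(x1), phi(x2)
    while b - a > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = phi(x1)
        else:
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = phi(x2)
    return (a + b) / 2

# Broad minimum near t = 1, plus a deep narrow dip near t = 3.8:
f = lambda t: (t - 1) ** 2 - 10 * math.exp(-50 * (t - 3.8) ** 2)

t_star = golden_section(f, 0.0, 4.0)
# The early probes never sample the narrow dip, so the search settles
# on the local minimum near t = 1, even though f is lower near t = 3.8.
print(t_star, f(t_star), f(3.8))
```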