MATLAB Session -- Steepest Ascent Method

  • Published: 18 Nov 2024

Comments

  • @josephtraverso5355 · 1 year ago +1

    Your idea to show the guess converging on the peak is brilliant!

  • @mazinalbulushi8142 · 1 year ago +1

    So when you're adding x1 and x2, when is the airplane taking off?

    • @empossible1577 · 1 year ago

      I think you misunderstood. You have to calculate the geometric mean of x1 and x2 so that the partial derivative of inertia during take off is accounted for. You see, the relationship of the variables falls within a hyperbolic paraboloid where the hypotenuse cannot be identified. I hope this clears things up.

  • @devenmhadgut2981 · 6 years ago +1

    How do I convert this to a steepest descent algorithm? Also, why can I not see the zig-zag pattern shown in various textbooks?

  • @Alex-bc3li · 6 years ago +1

    Great video! Can I use this if I have only one variable, x?

    • @empossible1577 · 6 years ago

      You can use it, but I would recommend using a method specifically tailored for single-variable optimization. These are covered under Topic 8 here:
      emlab.utep.edu/ee4386_5301_CompMethEE.htm

  • @LiMobileOfficial · 5 years ago +1

    First, thanks for the video. I have a question: what is the difference between surf(x,y,F') and surf(x,y,F)?

    • @empossible1577 · 5 years ago +1

      MATLAB is an acronym for "MATrix LABoratory." True to that name, MATLAB assumes everything is a matrix and always performs matrix calculations. The convention for accessing elements in matrices is A(row, column), so the first dimension of an array is vertical position and the second is horizontal position. For CEM, we must build a device onto an xy grid, and we like to think of a function as f(x,y), where the first argument is horizontal position and the second is vertical position. This is the exact opposite of what MATLAB does, and I have not come up with a clean way around it. I have found that students struggle the most with building geometries into arrays. Since it is easier to think of arrays as A(x,y) instead of A(y,x), we treat them all as if they were A(x,y). That works fine until it is time to plot the array. The ' calculates a transpose, so it flips the data around so that it displays in the sense in which we built our arrays. (A short example follows this thread.)

    • @LiMobileOfficial · 5 years ago +1

      @empossible1577 Thank you for the explanation 👍
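
The point about the transpose in surf(x,y,F') can be seen with a minimal sketch. The grid and function below are made up purely for illustration and are not taken from the video:

```matlab
% Build a grid and a function, treating the array as F(x,y):
% first index = x (horizontal), second index = y (vertical).
xa = linspace(-2, 2, 100);
ya = linspace(-1, 1, 50);
F  = zeros(length(xa), length(ya));      % F is 100x50, indexed F(nx, ny)
for nx = 1 : length(xa)
    for ny = 1 : length(ya)
        F(nx, ny) = exp(-xa(nx)^2 - ya(ny)^2);
    end
end

% MATLAB's plotting functions expect the first array dimension to be rows (y),
% so the array must be transposed before plotting.
surf(xa, ya, F');   % displays in the f(x,y) sense in which the array was built
% surf(xa, ya, F);  % errors here because the dimensions are swapped; on a
%                   square grid it would instead display the data transposed
```

Because the two grid lengths differ (100 vs. 50), surf(xa,ya,F) fails outright; on a square grid the mistake is more subtle, since the plot simply appears flipped about the diagonal.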

  • @susanwyfalamamanihuamani1817 · 4 years ago +1

    Where can I get the textbook?

    • @empossible1577 · 4 years ago +1

      Here is the book I have on my syllabus...
      www.amazon.com/Numerical-Methods-Engineers-7-Ed/dp/9352602137/ref=sr_1_1?crid=3LJW20VU4MG2W&keywords=numerical+methods+for+engineers&qid=1577633002&s=books&sprefix=numerical+methods%2Cstripbooks%2C184&sr=1-1
      While it is a good book, I don't use it much. The course material is essentially all in the notes for the class. Here is a link to the official course website:
      empossible.net/academics/emp4301_5301/

    • @susanwyfalamamanihuamani1817 · 4 years ago +1

      @empossible1577 Thank you very much!

  • @prestonharris7406 · 4 years ago

    Is it possible to do gradient descent with just one variable? I have the equation f(x) = (x-1)(x-2) with an initial guess of x = 2. I really do not understand how I am supposed to perform gradient descent on this equation.

    • @empossible1577 · 4 years ago

      Are you trying to find the roots (i.e. f=0) or are you trying to find minimums and maximums? The gradient descent method (GDM) is an optimizer intended to find minimums or maximums. You can use your multidimensional GDM on the function you gave. It should work without any modifications to your code. If you are looking to reformulate something specifically for 1D, I would not. 1D is a much simpler problem and better algorithms exist. The golden section search method is great. Check that one out.

    • @prestonharris7406 · 4 years ago

      @empossible1577 I am trying to find the minimum of the function. I would prefer to use the golden section search, but my professor wants me to use gradient descent for some reason.
      EDIT: Okay, so after some debugging I know that to find the minimum I must update the position to go in the negative direction, correct? But when I do x0 = x0 - gamma*gx; it just loops forever.
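
A minimal gradient-descent sketch for the one-variable example in this thread is shown below. The step size, tolerance, and iteration cap are illustrative guesses rather than values from the video; the convergence test and iteration cap are what keep the loop from running forever:

```matlab
% Gradient descent on f(x) = (x - 1)*(x - 2), whose minimum is at x = 1.5.
f     = @(x) (x - 1).*(x - 2);
df    = @(x) 2*x - 3;            % analytical derivative
x0    = 2;                       % initial guess
gamma = 0.1;                     % step size (assumed value)
tol   = 1e-6;                    % stop when the update becomes this small
for iter = 1 : 1000              % iteration cap guards against an infinite loop
    gx = df(x0);
    x1 = x0 - gamma*gx;          % minus sign -> descent; plus sign -> ascent
    if abs(x1 - x0) < tol
        break;                   % converged
    end
    x0 = x1;
end
fprintf('Minimum near x = %.6f (f = %.6f) after %d iterations\n', x0, f(x0), iter);
```

Without the tolerance test (or a cap on iterations), the update x0 = x0 - gamma*gx keeps moving toward x = 1.5 but the loop never terminates, which matches the behavior described in the comment above.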

  • @TombRaideR133 · 2 years ago +1

    Can you share the MATLAB code?

  • @abdulrahimshihabuddin1119 · 3 years ago

    How do I find alpha using the exact line search method?

    • @empossible1577 · 3 years ago

      In my experience, it is trial and error. I am not aware of any exact value for alpha. Maybe I am not understanding your question.

    • @abdulrahimshihabuddin1119 · 3 years ago

      @empossible1577 I meant the step size. Once the ascent direction is known, we have to find the step size, right? I thought there are different line search methods to find the step size, like exact line search and inexact line search methods.
      I'm trying to implement the steepest descent algorithm with an exact line search.

    • @empossible1577 · 3 years ago

      @abdulrahimshihabuddin1119 I am not aware of any technique that will give you the best value of alpha. In my experience, 0.1 works almost all of the time. You can also look at the magnitude of the slope: as it flattens out, smaller values of alpha may help, but be cautious about this. Otherwise, consider looking at conjugate gradients to determine the step size.
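
One common reading of "exact line search" is to minimize f along the descent direction numerically at every iteration. The sketch below does that with MATLAB's fminbnd; the test function, starting point, and search interval for alpha are all assumptions made for illustration:

```matlab
% Steepest descent with a numerical line search for the step size alpha.
f     = @(x) (x(1) - 1)^2 + 2*(x(2) + 0.5)^2;   % example function (assumed)
gradf = @(x) [2*(x(1) - 1); 4*(x(2) + 0.5)];    % its analytical gradient
x     = [3; 2];                                 % initial guess (assumed)
for iter = 1 : 100
    g = gradf(x);
    if norm(g) < 1e-6
        break;                                  % gradient is flat -> done
    end
    d = -g;                                     % steepest descent direction
    % "Exact" line search: pick the alpha that minimizes f(x + alpha*d)
    phi   = @(alpha) f(x + alpha*d);
    alpha = fminbnd(phi, 0, 1);                 % search interval is a guess
    x     = x + alpha*d;
end
fprintf('Minimum near (%.4f, %.4f)\n', x(1), x(2));
```

For an ascent problem the sign of d flips and phi is maximized instead (for example, by minimizing -phi). A fixed step size such as the 0.1 mentioned above avoids the inner 1D search entirely, usually at the cost of more outer iterations.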

  • @sabnambegam2088 · 3 years ago

    Please make videos on visualizing high-dimensional objects in low dimensions using multidimensional scaling in MATLAB.

    • @empossible1577 · 3 years ago +1

      Interesting...

    • @sabnambegam2088 · 3 years ago

      @empossible1577 Thank you... please make videos on manifold learning techniques.

    • @empossible1577 · 3 years ago

      @sabnambegam2088 What do you mean by "manifold"?

    • @sabnambegam2088 · 3 years ago

      @empossible1577 A dimensionality-reduction technique.

  • @cynthiacastillo3349 · 5 years ago

    Why did you use an approximation and not MATLAB's actual gradient() function?

    • @empossible1577 · 5 years ago

      Since the function is discrete to start with, even MATLAB's gradient() function is an approximation, not really different from what was done here.
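
To see that gradient() is itself a finite-difference approximation, here is a small 1D comparison; the sampled function and spacing are arbitrary choices for illustration:

```matlab
% Compare MATLAB's gradient() against a hand-written finite difference.
dx = 0.01;
x  = 0 : dx : 1;
f  = sin(2*pi*x);                    % the data is discrete from the start

% Built-in estimate
g1 = gradient(f, dx);

% Manual version: central differences inside, one-sided at the endpoints
g2          = zeros(size(f));
g2(2:end-1) = (f(3:end) - f(1:end-2)) / (2*dx);
g2(1)       = (f(2) - f(1)) / dx;
g2(end)     = (f(end) - f(end-1)) / dx;

max(abs(g1 - g2))                    % essentially zero: the same approximation
```

Both estimates differ from the true derivative 2*pi*cos(2*pi*x) by a truncation error that shrinks as dx decreases, which is the point made in the reply above.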