Your idea to show the guess converging on the peak is brilliant!
So when you're adding x1 and x2, when is the airplane taking off?
I think you misunderstood. You have to calculate the geometric mean of x1 and x2 so that the partial derivative of inertia during take off is accounted for. You see, the relationship of the variables falls within a hyperbolic paraboloid where the hypotenuse cannot be identified. I hope this clears things up.
How do I convert this to a steepest descent algorithm? Also, why can I not see the zig-zag pattern shown in various textbooks?
Just search for the minimum!
I need a steepest descent algorithm too. Have you found anything?
Great video! Can I use this if I have only one variable, x?
You can use it, but I would recommend using a method specifically tailored for single-variable optimization. These are covered under Topic 8 here:
emlab.utep.edu/ee4386_5301_CompMethEE.htm
Firstly, thanks for the video. I have a question: what is the difference between surf(x,y,F') and surf(x,y,F)?
MATLAB is an acronym for "MATrix LABoratory." Given that mission, MATLAB assumes everything is a matrix, and its convention for accessing elements is A(row, column). In this framework, the first dimension of an array is vertical position and the second is horizontal position. For CEM, we must build a device onto an xy grid, and we like to think of a function as f(x,y), where the first argument is horizontal position and the second is vertical position. This is the exact opposite of what MATLAB does, and there is no clean way around it that I have come up with. I have found that students struggle the most with building geometries into arrays. Since it is easier to think of arrays as A(x,y) instead of A(y,x), we treat them all as if they were A(x,y). That works fine until it is time to plot the array. The ' operator calculates a transpose, so it flips the data around to display in the sense in which we built our arrays.
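The same indexing mismatch can be sketched in NumPy (a stand-in here for the MATLAB arrays discussed above; the grid sizes and "device" below are made-up examples): libraries index arrays as A[row, col] = A[y, x], but it is often more natural to build geometry as A[x, y] and transpose just before plotting.

```python
# Build the array as A[x, y], then transpose for plotting conventions
# that expect A[y, x] (rows = vertical axis).
import numpy as np

Nx, Ny = 4, 3                # grid size: 4 points in x, 3 in y
A = np.zeros((Nx, Ny))       # treat the array as A[x, y]

# Build a simple "device": fill a block spanning x-indices 1..2, all y
A[1:3, :] = 1.0

# Transpose so the data displays in the sense it was built
A_plot = A.T                 # shape (Ny, Nx)

print(A.shape, A_plot.shape)  # (4, 3) (3, 4)
```

This mirrors the role of the ' operator in the surf(x,y,F') call asked about above.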
@@empossible1577 Thank you for the explanation 👍
Where can I get the textbook?
Here is the book I have on my syllabus...
www.amazon.com/Numerical-Methods-Engineers-7-Ed/dp/9352602137/ref=sr_1_1?crid=3LJW20VU4MG2W&keywords=numerical+methods+for+engineers&qid=1577633002&s=books&sprefix=numerical+methods%2Cstripbooks%2C184&sr=1-1
While it is a good book, I don't use it much. The course material is essentially all in the notes for the class. Here is a link to the official course website:
empossible.net/academics/emp4301_5301/
@@empossible1577 thank you very much!!!!
Is it possible to do gradient descent with just one variable? I have the equation f(x) = (x-1)(x-2) with an initial guess of x = 2. I really do not understand how I am supposed to perform gradient descent on this equation.
Are you trying to find the roots (i.e., f = 0), or are you trying to find minima and maxima? The gradient descent method (GDM) is an optimizer intended to find minima or maxima. You can use your multidimensional GDM on the function you gave; it should work without any modifications to your code. If you are looking to reformulate something specifically for 1D, I would not. 1D is a much simpler problem and better algorithms exist. The golden section search method is great. Check that one out.
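For reference, the golden section search recommended above can be sketched in a few lines (this is my own minimal version, not code from the video; the bracket and tolerance are arbitrary choices):

```python
# Golden-section search for the 1-D problem f(x) = (x-1)(x-2),
# whose true minimum is at x = 1.5.
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Narrow the bracket [a, b] around a minimum of a unimodal f."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

f = lambda x: (x - 1) * (x - 2)
x_min = golden_section_min(f, 0.0, 3.0)
```

Note it only needs function evaluations, no derivatives, which is part of why it is attractive for 1-D problems.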
@@empossible1577 I am trying to find the minimum of the function. I would prefer to use the golden section search, but my professor wants me to use gradient descent for some reason.
EDIT: Okay, so after some debugging I know that to find the minimum I must update the position to go in the negative direction, correct? But when I do x0 = x0 - gamma*gx; it just loops forever.
Can you share the MATLAB code?
This video is the MATLAB code.
How do I find alpha using the exact line search method?
In my experience, it is trial and error. I am not aware of any exact value for alpha. Maybe I am not understanding your question.
@@empossible1577 I meant the step size. Once the descent direction is known, we have to find the step size, right? I thought there are different line search methods to find the step size, like exact and inexact line search methods.
I'm trying to implement the steepest descent algorithm with an exact line search.
@@abdulrahimshihabuddin1119 I am not aware of any technique that will give you the best value of alpha. In my experience, 0.1 works almost all of the time. You can also look at the magnitude of the slope; as it flattens out, smaller values of alpha may help. Be cautious about this. Otherwise, consider looking at conjugate gradients to determine the step size.
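One case where an exact line search does exist in closed form is a quadratic objective f(x) = ½xᵀAx − bᵀx: along the steepest-descent direction the optimal step is α = (gᵀg)/(gᵀAg). A sketch under that assumption (the matrix and vector below are an arbitrary example, not from the thread):

```python
# Steepest descent with an exact line search on a quadratic
# f(x) = 0.5 x^T A x - b^T x, minimized where A x = b.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite (example)
b = np.array([1.0, 1.0])

x = np.zeros(2)
for _ in range(100):
    g = A @ x - b                   # gradient of the quadratic
    if np.linalg.norm(g) < 1e-10:
        break
    alpha = (g @ g) / (g @ (A @ g))  # exact line-search step size
    x = x - alpha * g               # x should approach the solution of A x = b
```

For general nonlinear functions there is no such formula, which is why fixed steps (like the 0.1 above) or inexact line searches are the usual practice.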
Please make videos on visualizing high-dimensional objects in low dimensions using multidimensional scaling in MATLAB.
Interesting...
@@empossible1577 thank you ... please make videos on manifold learning techniques
@@sabnambegam2088 What do you mean by "manifold?"
@@empossible1577 A dimension-reduction technique.
Why did you use an approximation and not MATLAB's actual gradient() function?
Since the function is discrete to start with, even MATLAB's gradient() function is an approximation, not really different from what was done here.