Awesome video man! Love your videos about numerical analysis as a numerical analysis enthusiast! You cover algorithms for solving nonlinear equations (and minimization, its related cousin) that aren't usually covered on most numerical analysis channels.
All of the minimization techniques that were described in this series work not just with unimodal functions, but also with *multimodal* functions. They can also be used to find critical points.
In that case you can expect to get one of many local minima, not necessarily the global one?
Thank you very much for posting this video, I found it incredibly useful!
Thanks for your videos! I like the way you explain things with visual illustrations; that's really helpful. It's also great that you cover the history of the algorithms you explore. Are you planning to record some videos about multidimensional minimization methods, starting from the basics and then, for example, exploring the Broyden-Fletcher-Goldfarb-Shanno algorithm?
I'm glad you've found these videos helpful. I do plan on covering multivariable minimization and optimization, although there is a lot in the queue ahead of them.
I found something else related to minimization: the Nelder-Mead method and Powell's method. I think those would be worth exploring. I also wonder about combining successive parabolic interpolation not just with golden section search, but also with dichotomous search, ternary search, or Fibonacci search... Those three methods are also guaranteed to find the result, though they are slower than successive parabolic interpolation or Brent's minimization method. Still, it could be worth a try.
I might add those two... we'll see if there is more interest from folks. I don't see much use in combining ternary search with SPI since GSS is just better hands-down. Fibonacci is slightly more efficient than GSS but adds the overhead of needing the sequence, so maybe not worth it. Dichotomous will save on iterations at the expense of function calls, but if the method works similarly to Brent's then it would eventually just switch to using SPI.
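For anyone curious why I say GSS wins hands-down, here is a quick back-of-the-envelope comparison (my own illustration, not from the video): GSS reuses one interior point each iteration, so one new function evaluation shrinks the bracket by about 0.618, while ternary search needs two fresh evaluations to shrink it to only 2/3.

```python
# Bracket shrink factor per function evaluation (smaller is better).
# Illustrative numbers only -- not code from the video or its repo.
gss = (5 ** 0.5 - 1) / 2    # golden section: 1 new eval per iteration, ~0.618
ternary = (2 / 3) ** 0.5    # ternary: 2 new evals shrink bracket to 2/3, ~0.816 per eval
print(f"GSS: {gss:.3f}/eval, ternary: {ternary:.3f}/eval")
```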
@@OscarVeliz There's also Newton's method: it finds the minimum of a function (also the roots/zeroes of the derivative). We can also extend Newton's method to higher variables, by using the Hessian matrix. If the Hessian is unavailable or too expensive to compute, then quasi-Newton methods such as Broyden's method, the Davidon-Fletcher-Powell (DFP) method, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm or the symmetric rank-one (SR1) method, can be used to approximate the Hessian at each iteration. Definitely something worth checking out.
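To make the one-variable case concrete, here is a minimal sketch of Newton's method for minimization (my own illustration, not from the video; the update divides the first derivative by the second, which is exactly the Hessian step reduced to one dimension):

```python
def newton_minimize(df, d2f, x, tol=1e-10, max_iter=100):
    """Find a critical point of f via Newton's method on f' (illustrative sketch)."""
    for _ in range(max_iter):
        step = df(x) / d2f(x)   # in n dimensions: solve H(x) @ step = grad f(x)
        x -= step
        if abs(step) < tol:     # stop once the Newton step is tiny
            break
    return x

# Example: a local minimum of f(x) = x^4 - 3x^2 + x
xmin = newton_minimize(lambda x: 4*x**3 - 6*x + 1,   # f'
                       lambda x: 12*x**2 - 6,        # f''
                       x=1.0)
```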
@@OscarVeliz Function calls are the least of my worries.
Thanks!
Thank you sooo much @guillaumeleclerc3346 for my first ever Super Thanks!!
@@OscarVeliz you deserve it for sure, this channel has replaced the reference textbook for me when I have to implement solvers 🙂
Thank you for your great tutorial. May I ask how to compute the p and q stated in the video?
Let p be the numerator from 3:22 and q be the denominator. The 2 from the ½ goes with q. You'll notice that some of those terms are repeated in both, so it is useful to save those to variables to avoid recomputing, which is what I did for the example code in the linked repo.
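Spelled out as code, it looks something like this (an illustrative sketch with my own variable names; the repo code may differ):

```python
def parabolic_pq(x, w, v, fx, fw, fv):
    """Numerator p and denominator q of the SPI step; trial point is u = x - p/q."""
    a = (x - v) * (fx - fw)   # term shared by p and q, computed once
    b = (x - w) * (fx - fv)   # likewise
    p = (x - v) * a - (x - w) * b
    q = 2.0 * (a - b)         # the 2 from the 1/2 folds into q
    return p, q
```

A full implementation also guards against q being zero (or tiny) before dividing.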
Hey Oscar! Question:
Where does the (3-sqrt(5))/2 come from at 2:50? That would mean that v, w, x are all bigger than b when initializing the method. That doesn't seem right, but I may be wrong
Never mind, I got the math wrong; we're good
That is from golden-section search. Take a look at 4:30 to see where c is placed. The factor (3-sqrt(5))/2 is about 0.381 -- v, w, and x go where c would normally be, while u goes where d would normally be.
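In code, that initialization looks something like this (a sketch; names follow the video and the bracket values are hypothetical):

```python
c = (3 - 5 ** 0.5) / 2       # ~0.381966, i.e. 1 - 1/phi from golden-section search
a, b = 0.0, 1.0              # example bracket (hypothetical values)
v = w = x = a + c * (b - a)  # all three start where GSS would place its point c
# u is the trial point chosen each iteration, landing where GSS would place d.
```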