Symmetric Rank 1 | Exact Line Search | Theory and Python Code | Optimization Techniques #7
- Published: 26 Jun 2024
- In the seventh lecture, we discuss a well-known optimization technique that falls under the category of quasi-Newton methods: the symmetric rank 1 (SR1) algorithm. This lecture contains everything you need to know about the SR1 optimization technique, and I will show you how to use SR1 combined with the exact line search method. The outline of this lecture is as follows:
⏲Outline⏲
00:00 Introduction
01:06 Symmetric Rank 1 Algorithm (SR1)
04:38 Exact Line Search
05:51 Python Implementation
20:02 Animation Module
35:34 Animating Iterations
40:22 Outro
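The SR1 step with exact line search described above can be sketched for a convex quadratic f(x) = ½xᵀAx − bᵀx, where the exact step size has a closed form. This is a minimal illustration, not the lecture's own code; the names (`sr1`, `A`, `b`) are assumptions:

```python
import numpy as np

def sr1(A, b, x0, tol=1e-8, max_iter=100):
    """SR1 quasi-Newton method with exact line search on f(x) = 0.5 x^T A x - b^T x."""
    x = x0.astype(float)
    H = np.eye(len(x0))                  # inverse-Hessian approximation, start at identity
    g = A @ x - b                        # gradient of the quadratic
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                       # quasi-Newton search direction
        alpha = -(g @ d) / (d @ A @ d)   # exact line search: closed form for a quadratic
        s = alpha * d                    # step taken
        x_new = x + s
        g_new = A @ x_new - b
        y = g_new - g                    # change in gradient
        r = s - H @ y
        denom = r @ y
        # standard SR1 safeguard: skip the update when the denominator is tiny
        if abs(denom) > 1e-10 * np.linalg.norm(r) * np.linalg.norm(y):
            H += np.outer(r, r) / denom  # symmetric rank-1 update
        x, g = x_new, g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = sr1(A, b, np.zeros(2))          # converges to A^{-1} b
```

For a quadratic, the exact step size minimizes f(x + αd) analytically; for general functions it must be found numerically, which the lecture's Python implementation covers.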
📚Related Courses:
- 📚 Convex Optimization Extended Course • Convex Optimization
- 📚 Python Programming Extended Course • Python Programming
- 📚 Convex Optimization Applications Extended Course • The Transshipment Prob...
- 📚 Linear Algebra Extended Course • Linear Algebra
- 📚 Python projects course • Python
🔴 Subscribe for more videos on CUDA programming
👍 Smash that like button if you find this tutorial useful.
👁🗨 Speak up and comment, I am all ears.
💰 If you are able to, donate to help the channel
Patreon - / ahmadbazzi
BTC wallet - 3KnwXkMZB4v5iMWjhf1c9B9LMTKeUQ5viP
ETH wallet - 0x44F561fE3830321833dFC93FC1B29916005bC23f
DOGE wallet - DEvDM7Pgxg6PaStTtueuzNSfpw556vXSEW
API3 wallet - 0xe447602C3073b77550C65D2372386809ff19515b
DOT wallet - 15tz1fgucf8t1hAdKpUEVy8oSR8QorAkTkDhojhACD3A4ECr
ARPA wallet - 0xf54bEe325b3653Bd5931cEc13b23D58d1dee8Dfd
QNT wallet - 0xDbfe00E5cddb72158069DFaDE8Efe2A4d737BBAC
AAVE wallet - 0xD9Db74ac7feFA7c83479E585d999E356487667c1
AGLD wallet - 0xF203e39cB3EadDfaF3d11fba6dD8597B4B3972Be
AERGO wallet - 0xd847D9a2EE4a25Ff7836eDCd77E5005cc2E76060
AST wallet - 0x296321FB0FE1A4dE9F33c5e4734a13fe437E55Cd
DASH wallet - XtzYFYDPCNfGzJ1z3kG3eudCwdP9fj3fyE
#optimizationtechniques #optimization #algorithm
This guy is the most underrated youtuber on planet earth.
It's rare for a less-viewed video to give the best explanation. Your presentations are almost like 3Blue1Brown or Khan Academy! I don't know why this video has so few views!!
Honestly, this guy is incredible. He explains everything soo precisely and efficiently without any unnecessary information. Thanks a lot for this video. You made my life easier.
I am a PhD student and I will be using optimization methods in my research.
Best lecture on quasi-Newton methods I have found on the internet so far!
Using LaTeX generated equations like a boss. Thank you sir Ahmad !
He did all this hard work and put it on the internet for free, and he doesn't get much in return. But what he gets is RESPECT, and credit for bringing new aspiring engineers into the world.
Hello Ahmad. Many thanks for your support! To be honest, I don't know much about gradient methods. I often use search-based optimization methods in my research such as GA, PSO ...
2:50 the animations are very nice. Thank you for taking time to record the lecture.
This guy sat for about an hour and talked about Newton in one video, and then released it for free. Legend.
I can't believe these types of courses are free here; it's amazing how education has changed.
Ten minutes of this video explains better than an hour of lecture in the course I’m taking🤣 thanks for saving my brain!
Understandable with examples, unlike those who explain at length using matrix formulas only. Thank you 🙏✨
I have been watching your videos regularly and they are very informative. Thank you for taking the time to enlighten us. Would you mind making videos on conventional optimization methods like conjugate gradient methods?
Thank you so much for wonderful series of videos. Can you please make a video to solve a bi-level optimization problem with a number of variables to solve using different optimization solvers, like GA etc.,? It will be very much appreciated.
Dude, I'm less than 2 minutes in and I just want to say thank you so much for creating this absolute monster of a video.
I love this video, I feel so privileged to be growing up in an era where knowledge is so easily available, Ahmad is really helping to improve my and many other's opportunities.
Thanks so much for posting!!
I've known this man only for 40 minutes, but I feel like I owe him 40 decades of gratitude. Thank you for this awesome tutorial!
Your explanation is awesome. The extension from the root-finding scenario to the minimum-finding problem was exactly my question.
To find this whole course freely available on YouTube is such a gift. Seriously, you cover a LOT of ground.
Just finished watching and following along with this. Ahmad, thank you so much! It took me about 12 hours to actually get through it because I kept pausing and going back to make sure I got everything right.
I'm here from yesterday's 3b1b video on Newton's method for finding roots, after wondering if there's any way to use it for minimizing a function, mainly to see why we can't use it instead of Stochastic Gradient Descent in linear regression. Turns out the Hessian of functions with many components can be large and computationally intensive, and also that if the function is far from a parabola, the method can lead you far away from the minimum. Still, it was nice to see how the operation works in practice, and you mentioned the same points about Hessians too. Good job 😊👍
ABSOLUTELY LOVE your 40 minute video series... Thanks a lot Ahmad :)😍
Thanks for posting these videos. They are quite helpful. So, to ensure that we minimize and not maximize, is it sufficient to ensure that the Newton step has the same sign (goes in the same direction) as the gradient? Is it OK to just change the sign of the step if that's not the case? (My experiments seem to indicate it's not, but what should be done then?)
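On the question above: the standard test is that a step d is a descent direction if and only if dᵀ∇f < 0, and when a (quasi-)Newton direction fails that test, the usual safeguard is to fall back to the negative gradient (or reset the Hessian approximation) rather than simply flip the sign. A minimal sketch, with illustrative names:

```python
import numpy as np

def safeguarded_direction(d, grad):
    """Return d if it points downhill (d^T grad < 0), else fall back to -grad."""
    if d @ grad < 0:      # descent condition holds
        return d
    return -grad          # steepest-descent fallback

grad = np.array([2.0, -1.0])
d_bad = np.array([1.0, 0.0])             # d^T grad = 2 > 0: points uphill
d_safe = safeguarded_direction(d_bad, grad)   # falls back to -grad
```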
Hi Ahmad, how are you doing? Thank you so much for your videos. Personally, they have been very eye-opening and educational. This might be a far-fetched request: as a graduate student, your videos have been very helpful, especially the implementation part, which is missing in classes, but I'd like to know if you have any plans for a full-blown project implementation on any of your playlists, be it ML or math optimization. Thank you.
Hello Gaffar, I'm doing well, I hope you are as well.
I'm very glad you found it useful. As a matter of fact, this is a great idea. I will give it some deep thought and act accordingly. Thank you for your idea :)
HOLYYYYY FKKK !!!! I really wish I had come across your video long before I took the painful route to learn all this… definitely a big recommendation for everyone I know who has just started optimization courses. Great work !!!!!
Can we just take a moment to appreciate this guy for providing this type of content for free ? great help, Thank you sir! 🙏🙏🙏
What an absolutely epic contribution to the world. Thank you!
Ahmad can really keep you hooked up on the way he explains things. What a legend.
This is wonderful!
Hats off ! Ahmad I have no words to let you know how grateful I am for this free course, it is not only well designed but also easy to follow, God bless you.
Excellent video, it really helps me for understanding the quasi Newton method, thank you very much!
This course has literally changed my life. 2 years ago i started learning optimization from this course and now i am a software engineer intern at a great startup. Thanks Ahmad !
Thank you very much, it was so helpful! Can I get the PDF version?!
Thank you for the amazing optimization algorithms tutorial! We Appreciate your time and the effort to teach us coding 😃
Super clear explanations and very well put together. Thank you!
Wonderful video clarifying Newton's method for finding the minima of functions in machine learning.
man, perfect explanation. clear and intuitive!
I love your videos! Having learnt all this in my GCSEs / A-levels, I'm just rewatching it 4 months after my exams.
The way you explain this is so helpful - love the comparison to the linear approximation. Thank you!
Amazing lecture! Muchas gracias!
Thanks for this tutorial. Awesome explanations perfect for beginners and experts.
Amazingly presented, thank you.
Thank you for the words of encouragement, I appreciate it!
Illuminating! Thank you
I can't even imagine how long it took to complete this video. Thanks a ton for your effort.
Awesome video! Thank you!
I really appreciate your precious effort, not to mention how fun and friendly it is to learn from. Thanks, Prof. Ahmad.
Gorgeous tutorial! I had never even seen the Python interface in my life before, but with the help of your videos I feel like I understand a lot.
Sir your way of explaining is really good.
Brilliant explanation, thank you so much.
An incredible work as usual. Congratulations for the whole video.
Really appreciate your course! Your tutorials are always so helpful.
WoW! This is amazing work, man. Thank you.
Amazing video. Looking forward to more.
Thank you very much for your suggestion! I will try my best.
Thank you, Ahmad, for the time and effort you took into making this marvellous tutorial. Much, much appreciated!
This was such an awesome explanation, so grateful thank you.
Superb, excellent, best video!
Awesome, thank you!
Amazing explanation! This is very helpful for understanding. Thanks a lot, sir.
This was exactly what I needed, thank you!
Loved the graphical presentation
Thank you soo much for the amazing lecture.
What the what?! Even I understood this. Killer tutorial!
Ahmad, you should write your own book; it would be really helpful for a lot of people out there.
I hope listening to this brings more positive YouTube channels like yours 💜
Amazing job! Thanks a lot!!
thanks , very informative
Another problem is that under negative curvature, the method climbs uphill. E.g., ML loss functions tend to have a lot of saddle points, which attract the method, so gradient descent is used instead, because it can find the direction down from a saddle.
OMG, this video just saved my homework
I think that the visualization makes sense if we think about approximating the function f(x) by its second order Taylor expansion around x_t. Taking the derivative of the second order Taylor expansion and setting it equal to zero leads us to the formula of the Newton's method for optimization. This operation is the same as minimizing the second order approximation of the function at x_t as depicted in the video.
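The derivation this comment sketches can be written out explicitly (standard notation, not taken from the video's slides):

```latex
% Second-order Taylor expansion of f around x_t:
f(x) \approx f(x_t) + \nabla f(x_t)^\top (x - x_t)
      + \tfrac{1}{2}\,(x - x_t)^\top \nabla^2 f(x_t)\,(x - x_t)
% Setting the gradient of the right-hand side to zero:
\nabla f(x_t) + \nabla^2 f(x_t)\,(x - x_t) = 0
% yields Newton's update for optimization:
x_{t+1} = x_t - \left[\nabla^2 f(x_t)\right]^{-1} \nabla f(x_t)
```

So each Newton iteration jumps to the exact minimizer of the local quadratic model, which is what the animation depicts.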
This is brilliant thank you, hope you give us more visual insight into calculus related things
Ahmad is a legend !
Thank you for the video!
Again amazing
really appreciate your work :)
Your videos are awesome!
This was actually quite helpful :)
Thank you Ahmad !
We appreciate you ❤️
very good. thank you
Very nice and clear explanations
Hi Dr. Ahmad. As far as I know, these methods are used in machine learning, where gradient descent is a classical algorithm for finding the minimum of a function (not always a zero). If you know the basics of ML, you will be familiar with the loss function: we have to minimize it, which means we need its derivative to be zero, and we use the gradient as the direction in which the function changes fastest. Now we have the direction but not the magnitude, so we use a constant learning rate, which is what a first-order method does. A second-order method gives us a magnitude with which the point where the derivative is zero can be reached in fewer iterations. A third-order method would ultimately find the minimum of the derivative of the loss function, but since we need the minimum of the loss function itself, it would be useless. Hope this was helpful!
Wow, this looks like a great course! 😀
This is great.
Very nice. Thank you for your inside information Ahmad. Always pleased to watch your content.
Thank you for your amazing comment :-)
Thank You so much Sir✨
Very great content
Thank you!
lovely explanation 🤩🤩🤩🤩🤩🤩
Good job. I am subscribing !
All the best!!
Very good explanation
Great content
Keep up the good work :)
Imagine disliking an optimization tutorial series that Ahmad released for free.
Imagine how many people will earn a better living because of the effort Ahmad put into this. Huge respect.