Symmetric Rank 1 | Exact Line Search | Theory and Python Code | Optimization Techniques #7

  • Published: 26 Jun 2024
  • In the seventh lecture, we talk about a well-known optimization technique that falls under the category of quasi-Newton methods and is called the symmetric rank 1 (SR1) algorithm. This lecture contains everything you need to know about the SR1 optimization technique. I will show you how to use SR1 combined with the exact line search method; a short illustrative sketch follows this description. The outline of this lecture is as follows:
    ⏲Outline⏲
    00:00 Introduction
    01:06 Symmetric Rank 1 Algorithm (SR1)
    04:38 Exact Line Search
    05:51 Python Implementation
    20:02 Animation Module
    35:34 Animating Iterations
    40:22 Outro
    📚Related Courses:
    - 📚 Convex Optimization Extended Course • Convex Optimization
    - 📚 Python Programming Extended Course • Python Programming
    - 📚 Convex Optimization Applications Extended Course • The Transshipment Prob...
    - 📚 Linear Algebra Extended Course • Linear Algebra
    - 📚 Python projects course • Python
    🔴 Subscribe for more videos on CUDA programming
    👍 Smash that like button, in case you find this tutorial useful.
    👁‍🗨 Speak up and comment, I am all ears.
    💰 If you are able to, donate to help the channel
    Patreon - / ahmadbazzi
    BTC wallet - 3KnwXkMZB4v5iMWjhf1c9B9LMTKeUQ5viP
    ETH wallet - 0x44F561fE3830321833dFC93FC1B29916005bC23f
    DOGE wallet - DEvDM7Pgxg6PaStTtueuzNSfpw556vXSEW
    API3 wallet - 0xe447602C3073b77550C65D2372386809ff19515b
    DOT wallet - 15tz1fgucf8t1hAdKpUEVy8oSR8QorAkTkDhojhACD3A4ECr
    ARPA wallet - 0xf54bEe325b3653Bd5931cEc13b23D58d1dee8Dfd
    QNT wallet - 0xDbfe00E5cddb72158069DFaDE8Efe2A4d737BBAC
    AAVE wallet - 0xD9Db74ac7feFA7c83479E585d999E356487667c1
    AGLD wallet - 0xF203e39cB3EadDfaF3d11fba6dD8597B4B3972Be
    AERGO wallet - 0xd847D9a2EE4a25Ff7836eDCd77E5005cc2E76060
    AST wallet - 0x296321FB0FE1A4dE9F33c5e4734a13fe437E55Cd
    DASH wallet - XtzYFYDPCNfGzJ1z3kG3eudCwdP9fj3fyE
    This lecture is part of a series covering many optimization techniques.
    #optimizationtechniques #optimization #algorithm
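    A minimal, self-contained sketch of the method the lecture builds (illustrative only; the variable names and the quadratic test problem are assumptions, not the code from the video): SR1 keeps an approximation H of the inverse Hessian, takes the direction d = -H g, picks the step size by exact line search, and then applies the symmetric rank-1 correction.

      import numpy as np

      # Quadratic test problem f(x) = 0.5 x^T A x - b^T x (assumed for illustration).
      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      grad = lambda x: A @ x - b          # gradient of the quadratic

      x = np.zeros(2)                     # starting point
      H = np.eye(2)                       # inverse-Hessian approximation, H_0 = I
      for k in range(20):
          g = grad(x)
          if np.linalg.norm(g) < 1e-8:    # stop once the gradient vanishes
              break
          d = -H @ g                      # quasi-Newton search direction
          alpha = -(g @ d) / (d @ A @ d)  # exact line search step for a quadratic
          x_new = x + alpha * d
          s, y = x_new - x, grad(x_new) - g
          v = s - H @ y
          # SR1 (symmetric rank-1) update, skipped when the denominator is tiny
          if abs(v @ y) > 1e-8 * np.linalg.norm(v) * np.linalg.norm(y):
              H += np.outer(v, v) / (v @ y)
          x = x_new

      print("approximate minimizer:", x)  # close to the exact solution A^{-1} b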

Comments • 181

  • @ardaerennaim182
    @ardaerennaim182 1 year ago +1

    This guy is the most underrated youtuber on planet earth.

  • @techguru4792
    @techguru4792 1 year ago +34

    It's rare that a less-viewed video gives the best explanation. Your presentations are almost like 3Blue1Brown or Khan Academy! I don't know why this video has so few views!!

  • @walak3955
    @walak3955 1 year ago

    Honestly, this guy is incredible. He explains everything so precisely and efficiently, without any unnecessary information. Thanks a lot for this video. You made my life easier.

  • @awesomegameplays3126
    @awesomegameplays3126 1 year ago +38

    I am a PhD student and I will be using optimization methods in my research.

  • @kerimetasc6981
    @kerimetasc6981 1 year ago +37

    Best lecture on quasi-Newton methods that I have found on the internet so far!

  • @gamingboychannel4992
    @gamingboychannel4992 1 year ago +43

    Using LaTeX generated equations like a boss. Thank you sir Ahmad !

  • @bollywoodtalkies5052
    @bollywoodtalkies5052 1 year ago

    He did all this hard work and put it on the internet for free. He doesn't get too much in return, but what he gets is RESPECT and credit for bringing new aspiring engineers to earth.

  • @utpalgaming8189
    @utpalgaming8189 1 year ago +37

    Hello Ahmad. Many thanks for your support! To be honest, I don't know much about gradient methods. I often use search-based optimization methods in my research such as GA, PSO ...

  • @yaglz4584
    @yaglz4584 1 year ago +27

    2:50 the animations are very nice. Thank you for taking time to record the lecture.

  • @zmd9678
    @zmd9678 1 year ago

    This guy sat for about an hour and talked about Newton in one video, and then released it for free. Legend.

  • @ayuuu2920
    @ayuuu2920 1 year ago

    I can't believe these types of courses are free here; it's amazing how education has changed.

  • @robloxeren7527
    @robloxeren7527 1 year ago +45

    Ten minutes of this video explains better than an hour of lecture in the course I’m taking🤣 thanks for saving my brain!

  • @furkanefebayrakc8080
    @furkanefebayrakc8080 1 year ago

    Understandable with an example, unlike those who explain at length using only the matrix formula. Thank you 🙏✨

  • @efey2605
    @efey2605 1 year ago +44

    I have been watching your videos regularly and they are very informative. Thank you for taking the time to enlighten us. Would you mind making videos on conventional optimization methods like conjugate gradient methods?

  • @patronkral7664
    @patronkral7664 1 year ago +35

    Thank you so much for the wonderful series of videos. Can you please make a video on solving a bi-level optimization problem with a number of variables, using different optimization solvers like GA, etc.? It would be very much appreciated.

  • @nihathatipoglu8936
    @nihathatipoglu8936 1 year ago

    Dude, I'm less than 2 minutes in and I just want to say thank you so much for creating this absolute monster of a video.

  • @Luka-lf8ht
    @Luka-lf8ht 1 year ago

    I love this video. I feel so privileged to be growing up in an era where knowledge is so easily available; Ahmad is really helping to improve my and many others' opportunities.

  • @mehmetakif2211
    @mehmetakif2211 1 year ago +28

    Thanks so much for posting!!

  • @ercansarusta677
    @ercansarusta677 1 year ago

    I've known this man only for 40 minutes, but I feel like I owe him 40 decades of gratitude. Thank you for this awesome tutorial!

  • @haktankoctv7426
    @haktankoctv7426 1 year ago

    Your explanation is awesome. The extension from the root-finding scenario to the minimum-finding problem was exactly my question.

  • @thekidvesly
    @thekidvesly 1 year ago

    To find this whole course freely available on YouTube is such a gift. Seriously, you cover a LOT of ground.

  • @nurettinefe537
    @nurettinefe537 1 year ago

    Just finished watching and following along with this. Ahmad, thank you so much! It took me about 12 hours to actually get through it because I kept pausing and going back to make sure I got everything right.

  • @origamianddiy4861
    @origamianddiy4861 1 year ago

    I'm here from yesterday's 3b1b video on Newton's method for finding roots, after wondering if there's any way to use it for minimizing a function, mainly to see why we can't use it instead of Stochastic Gradient Descent in linear regression. Turns out the Hessian of functions with many components can be large and computationally intensive, and also that if the function is not well approximated by a parabola, the method can lead you far away from the minimum. Still, it was nice to see how the operation works in practice, and you mentioned the same points about Hessians too. Good job 😊👍
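    To make the Hessian-size point above concrete (back-of-the-envelope numbers for illustration only, not figures from the video):

      n = 1_000_000              # parameters in a modest ML model
      hessian_entries = n * n    # a full n x n Hessian
      print(hessian_entries)     # 1e12 entries -- far too large to store or invert at every step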

  • @bunyamincc1177
    @bunyamincc1177 1 year ago

    ABSOLUTELY LOVE your 40 minute video series... Thanks a lot Ahmad :)😍

  • @purplerain1562
    @purplerain1562 1 year ago +29

    Thanks for posting these videos. They are quite helpful. So, to ensure that we minimize and not maximize, is it sufficient to ensure that the Newton step has the same sign as (goes in the same direction as) the gradient? Is it OK to just change the sign of the step if that's not the case? (My experiments seem to indicate it's not, but what should be done then?)
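    One standard safeguard for the question above (a sketch of common practice, not necessarily what the lecture recommends): a direction d is a descent direction only if g^T d < 0, so when the (quasi-)Newton step fails that test, a typical choice is to fall back to the negative gradient rather than simply flip the sign.

      import numpy as np

      def safeguarded_direction(g, H):
          """g: gradient, H: inverse-Hessian approximation (both assumed given)."""
          d = -H @ g               # (quasi-)Newton direction
          if g @ d >= 0:           # not a descent direction
              d = -g               # fall back to steepest descent
          return d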

  • @gaffarsolihu1617
    @gaffarsolihu1617 1 year ago +55

    Hi Ahmad, how are you doing? Thank you so much for your videos. Personally, they have been very eye-opening and educational. This might be a far-fetched request: as a graduate student, your videos have been very helpful, especially with implementation, which is missing in classes, but I'd like to know if you have any plan for a full-blown project implementation on any of your playlists, be it ML or math optimization. Thank you

    • @AhmadBazzi
      @AhmadBazzi  1 year ago +2

      Hello Gaffar, I'm doing well, hope you are as well.
      I'm very glad you found it useful. As a matter of fact, this is a great idea. I will give it some deep thought, then act accordingly. Thank you for your idea :)

  • @thevowtv1563
    @thevowtv1563 1 year ago

    HOLYYYYY FKKK !!!! I really wish I came across your video much before I took the painful ways to learn all this… definitely a big recommendation for all the people I know who just started with optimisation courses. Great work !!!!!

  • @zazasabah3378
    @zazasabah3378 1 year ago

    Can we just take a moment to appreciate this guy for providing this type of content for free ? great help, Thank you sir! 🙏🙏🙏

  • @unknowngone782
    @unknowngone782 1 year ago

    What an absolutely epic contribution to the world. Thank you!

  • @theworld-stoptime.5800
    @theworld-stoptime.5800 1 year ago

    Ahmad can really keep you hooked with the way he explains things. What a legend.

  • @frycomfort4002
    @frycomfort4002 1 year ago +17

    This is wonderful!

  • @suleymanozcan6093
    @suleymanozcan6093 1 year ago

    Hats off ! Ahmad I have no words to let you know how grateful I am for this free course, it is not only well designed but also easy to follow, God bless you.

  • @onlinestudylaksar1097
    @onlinestudylaksar1097 1 year ago

    Excellent video, it really helped me understand the quasi-Newton method. Thank you very much!

  • @fatihbiz4105
    @fatihbiz4105 1 year ago

    This course has literally changed my life. Two years ago I started learning optimization from this course, and now I am a software engineering intern at a great startup. Thanks Ahmad!

  • @essamsayedemam7078
    @essamsayedemam7078 1 year ago +28

    Thank you very much, it was so helpful. Can I get the PDF version?!

  • @pankajmittal3201
    @pankajmittal3201 1 year ago

    Thank you for the amazing optimization algorithms tutorial! We Appreciate your time and the effort to teach us coding 😃

  • @jaimahakaal65
    @jaimahakaal65 1 year ago

    Super clear explanations and very well put together. Thank you!

  • @flicksstudio4054
    @flicksstudio4054 1 year ago

    Wonderful video for clarifying the optimization version of Newton's method for finding the minima of functions in machine learning.

  • @user-mx5wi5dt8b
    @user-mx5wi5dt8b 1 year ago

    man, perfect explanation. clear and intuitive!

  • @ahmetmelihsanl9663
    @ahmetmelihsanl9663 1 year ago

    I love your videos! Having learnt all this in my GCSEs / A levels, I'm just rewatching it 4 months after my exams.

  • @Epicorstroys
    @Epicorstroys 1 year ago

    The way you explain this is so helpful - love the comparison to the linear approximation. Thank you!

  • @salihbeyy9864
    @salihbeyy9864 1 year ago

    Amazing lecture! Muchas gracias!

  • @jadolive-fan5103
    @jadolive-fan5103 1 year ago

    Thanks for this tutorial. Awesome explanations perfect for beginners and experts.

  • @RandomVideos-hl5kc
    @RandomVideos-hl5kc 1 year ago

    Amazingly presented, thank you.

  • @KarakterFilm4039
    @KarakterFilm4039 1 year ago

    Thank you for the words of encouragement, I appreciate it!

  • @islamiclife9391
    @islamiclife9391 1 year ago

    Illuminating! Thank you

  • @talhayavas6640
    @talhayavas6640 1 year ago

    I can't even imagine how long it took to complete this video. Thanks a ton for your effort.

  • @AJ-et3vf
    @AJ-et3vf 1 year ago

    Awesome video! Thank you!

  • @sajalshah6522
    @sajalshah6522 1 year ago

    I really appreciate your precious effort, not to mention how fun and friendly it is to learn from. Thanks Prof. Ahmad.

  • @VRCreations2O
    @VRCreations2O 1 year ago

    Gorgeous tutorial! I had never even seen the Python interface in my life before, but with the help of your videos I feel like I understand a lot.

  • @apptutorials2158
    @apptutorials2158 1 year ago

    Sir your way of explaining is really good.

  • @007AryanVlogs
    @007AryanVlogs 1 year ago

    Brilliant explanation, thank you so much.

  • @oyunking9012
    @oyunking9012 1 year ago

    Incredible work as usual. Congratulations on the whole video.

  • @gangadharparate531
    @gangadharparate531 1 year ago

    Really appreciate your course! Your tutorials are always so helpful.

  • @TvShow-ml3dz
    @TvShow-ml3dz 1 year ago

    Wow! This is amazing work man, thank you.

  • @Iamdevil.1
    @Iamdevil.1 1 year ago

    Amazing video. Looking forward to more.

  • @superiorarmy416
    @superiorarmy416 1 year ago

    Thank you very much for your suggestion! I will try my best.

  • @AbhishekKumar-kt6yp
    @AbhishekKumar-kt6yp 1 year ago

    Thank you, Ahmad, for the time and effort you took into making this marvellous tutorial. Much, much appreciated!

  • @eser_bodur9302
    @eser_bodur9302 1 year ago

    This was such an awesome explanation, so grateful thank you.

  • @batuhanuysl6905
    @batuhanuysl6905 1 year ago

    Superb, excellent, best video!

  • @lionjenkins2063
    @lionjenkins2063 1 year ago

    Awesome, thank you!

  • @MRBEASTFAN1102
    @MRBEASTFAN1102 1 year ago

    Amazing explanation! This is very helpful for understanding. Thanks a lot sir.

  • @dammnoe
    @dammnoe 1 year ago

    This was exactly what I needed, thank you!

  • @benhammouyt7300
    @benhammouyt7300 1 year ago

    Loved the graphical presentation

  • @husarr5111
    @husarr5111 1 year ago

    Thank you soo much for the amazing lecture.

  • @gamerhappyonline4175
    @gamerhappyonline4175 1 year ago

    What the what?! Even I understood this. Killer tutorial!

  • @user-rt3wl1wv4x
    @user-rt3wl1wv4x 1 year ago

    Ahmad, you should write a book; it would be really helpful for literally a lot of people out there.

  • @furkanatukk
    @furkanatukk 1 year ago

    I hope listening to this brings about more positive YouTube channels like yours 💜

  • @ihsan397
    @ihsan397 1 year ago

    Amazing job! Thanks a lot!!

  • @Brns_8399
    @Brns_8399 1 year ago

    Thanks, very informative.

  • @dinivideolarpaylasmlar8256
    @dinivideolarpaylasmlar8256 1 year ago

    Another problem is that for negative curvature, the method climbs uphill. E.g. ML loss functions tend to have a lot of saddle points, which attract the method, so gradient descent is used because it can find the direction down from the saddle.
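    A tiny illustration of the point above (an illustrative toy example, not taken from the video): for f(x) = -x^2, which has negative curvature everywhere, the Newton step lands exactly on the maximizer.

      f_prime  = lambda x: -2.0 * x     # f'(x) for f(x) = -x**2
      f_second = lambda x: -2.0         # f''(x), negative curvature
      x0 = 1.0
      x1 = x0 - f_prime(x0) / f_second(x0)
      print(x1)                         # 0.0 -- the stationary point, here a maximum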

  • @mehmetkaymak627
    @mehmetkaymak627 1 year ago

    OMG, this video just saved my homework

  • @critcalops3107
    @critcalops3107 1 year ago

    I think that the visualization makes sense if we think about approximating the function f(x) by its second-order Taylor expansion around x_t. Taking the derivative of the second-order Taylor expansion and setting it equal to zero leads us to the formula of Newton's method for optimization. This operation is the same as minimizing the second-order approximation of the function at x_t, as depicted in the video.
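    Spelling out the derivation in this comment (standard textbook steps, written in LaTeX):

      q_t(x) = f(x_t) + \nabla f(x_t)^\top (x - x_t) + \tfrac{1}{2}(x - x_t)^\top \nabla^2 f(x_t)\,(x - x_t)
      \nabla q_t(x) = \nabla f(x_t) + \nabla^2 f(x_t)\,(x - x_t) = 0
      \;\Longrightarrow\; x_{t+1} = x_t - \left[\nabla^2 f(x_t)\right]^{-1} \nabla f(x_t)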

  • @oyuntv7174
    @oyuntv7174 1 year ago

    This is brilliant, thank you. I hope you give us more visual insight into calculus-related things.

  • @herseyburada9288
    @herseyburada9288 1 year ago

    Ahmad is a legend !

  • @Abdullahqamar16
    @Abdullahqamar16 1 year ago

    Thank you for the video!

  • @ahmyahmy9269
    @ahmyahmy9269 1 year ago

    Again amazing

  • @acimasinhd9590
    @acimasinhd9590 1 year ago

    really appreciate your work :)

  • @furkansenol1450
    @furkansenol1450 1 year ago

    Your videos are awesome!

  • @madkar988
    @madkar988 1 year ago

    This was actually quite helpful :)

  • @troguz195
    @troguz195 1 year ago

    Thank you Ahmad !

  • @BrawlStars-hf8hm
    @BrawlStars-hf8hm 1 year ago

    We appreciate you ❤️

  • @user-bt5zx1li1i
    @user-bt5zx1li1i 1 year ago

    very good. thank you

  • @TechnicalRH
    @TechnicalRH 1 year ago

    Very nice and clear explanations

  • @shortsvideostrk
    @shortsvideostrk 1 year ago

    Hi Dr. Ahmad, as far as I know these methods are used in machine learning, where gradient descent is a classical algorithm to find the minimum of a function (not always zero). If you know the basics of ML you will be familiar with the loss function: we have to minimize that function, so we need its derivative to be zero, and to find that we use the gradient as the direction in which the function changes fastest. Now we have the direction but not the magnitude; for that we use a constant learning rate, which is what a first-order method does. A second-order method uses curvature to set the magnitude, so the point where the derivative of the function is zero can be reached in fewer iterations. A third-order method would ultimately find the minimum of the derivative of the loss function, but we need the minimum of the loss function itself, so it would be useless. Hope this was helpful.
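    A toy comparison of the two update rules described above (an illustrative sketch, not code from the lecture), on the one-dimensional loss L(w) = (w - 3)^2:

      dL  = lambda w: 2.0 * (w - 3.0)   # first derivative of the loss
      d2L = lambda w: 2.0               # second derivative (constant curvature)

      w_gd, w_newton = 0.0, 0.0
      lr = 0.1                          # fixed learning rate for gradient descent
      for _ in range(10):
          w_gd     -= lr * dL(w_gd)                 # first order: fixed step along -gradient
          w_newton -= dL(w_newton) / d2L(w_newton)  # second order: curvature sets the step size

      print(w_gd, w_newton)             # Newton reaches w = 3 in one step; GD only approaches it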

  • @roshan4542
    @roshan4542 1 year ago

    Wow, this looks like a great course! 😀

  • @openmusicnocopyright3249
    @openmusicnocopyright3249 1 year ago

    This is great.

  • @linkbox3117
    @linkbox3117 1 year ago

    Very nice. Thank you for your inside information Ahmad. Always pleased to watch your content.

  • @CANBAZ52
    @CANBAZ52 1 year ago

    Thank you for your amazing comment :-)

  • @snxgz2808
    @snxgz2808 1 year ago

    Thank You so much Sir✨

  • @sujunprodhanwordpress
    @sujunprodhanwordpress 1 year ago

    Very great content

  • @kapalhesap4913
    @kapalhesap4913 1 year ago

    Thank you!

  • @muhammadazan5540
    @muhammadazan5540 1 year ago

    lovely explanation 🤩🤩🤩🤩🤩🤩

  • @how2make208
    @how2make208 1 year ago

    Good job. I am subscribing !

  • @radiolight7793
    @radiolight7793 1 year ago

    All the best!!

  • @Mitchyyy92
    @Mitchyyy92 1 year ago

    Very good explanation

  • @afthabrazagaming3547
    @afthabrazagaming3547 1 year ago

    Great content

  • @funnyhens1849
    @funnyhens1849 1 year ago

    Keep up the good work :)

  • @newtech4730
    @newtech4730 1 year ago

    Imagine disliking an optimization tutorial series that Ahmad released for free.

  • @ponnappaprakash2387
    @ponnappaprakash2387 1 year ago

    Imagine how many people will earn a better living because of this effort put in by Ahmad. Huge respect.