Tutorial 13- Global Minima and Local Minima in Depth Understanding

  • Published: 10 Feb 2025
  • In mathematical analysis, the maxima and minima (the respective plurals of maximum and minimum) of a function, known collectively as extrema (the plural of extremum), are the largest and smallest values of the function, either within a given range (the local or relative extrema) or on the entire domain of the function (the global or absolute extrema). Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions; a small sketch of that recipe appears at the end of this description.
    Below are the various playlists created on ML, Data Science, and Deep Learning. Please subscribe and support the channel. Happy learning!
    Deep Learning Playlist: • Tutorial 1- Introducti...
    Data Science Projects playlist: • Generative Adversarial...
    NLP playlist: • Natural Language Proce...
    Statistics Playlist: • Population vs Sample i...
    Feature Engineering playlist: • Feature Engineering in...
    Computer Vision playlist: • OpenCV Installation | ...
    Data Science Interview Question playlist: • Complete Life Cycle of...
    You can buy my book on Finance with Machine Learning and Deep Learning from the URL below.
    Amazon URL: www.amazon.in/...
    🙏🙏🙏🙏🙏🙏🙏🙏
    YOU JUST NEED TO DO
    3 THINGS to support my channel
    LIKE
    SHARE
    &
    SUBSCRIBE
    TO MY YOUTUBE CHANNEL
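
    A minimal sketch of that derivative-based recipe, assuming SymPy is installed; the function f(x) = x**4 + x**3 - 2*x**2 is an arbitrary example chosen for illustration, not one from the video:

        import sympy as sp

        x = sp.symbols('x', real=True)
        f = x**4 + x**3 - 2*x**2          # arbitrary example with several extrema

        # Fermat's idea: extrema of a smooth function sit where the derivative is zero
        for p in sp.solve(sp.diff(f, x), x):
            curvature = float(sp.diff(f, x, 2).subs(x, p))   # second-derivative test
            kind = "minimum" if curvature > 0 else "maximum" if curvature < 0 else "inconclusive"
            print(f"x = {float(p):+.3f}, f(x) = {float(f.subs(x, p)):+.3f} -> local {kind}")

        # The smallest printed f(x) among the minima is the global minimum,
        # since this f grows without bound in both directions.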

Comments • 52

  • @saravanakumarm5647
    @saravanakumarm5647 4 years ago +10

    I am self-studying machine learning. Your videos are really amazing for getting the full overview quickly, and even a layman can understand them.

  • @sairaj6875
    @sairaj6875 1 year ago

    Stopped this video halfway through to say thank you! Your grasp on the topic is outstanding and your way of demonstration is impeccable. Now resuming the video!

  • @nithinmamidala
    @nithinmamidala 5 years ago +11

    Your videos are like a suspense movie: you need to watch another, and see it through to the end of the playlist... so much time spent to know the final result.

  • @shalinianunay2713
    @shalinianunay2713 4 years ago +2

    You are making people fall in love with deep learning.

  • @abhishek247ai6
    @abhishek247ai6 3 years ago +1

    You are awesome... one of the gems in this field, making others' lives simpler.

  • @hiteshyerekar2204
    @hiteshyerekar2204 5 years ago +27

    Hi Krish, all your videos are very good, but please add some practical examples to them so we can understand how to implement these ideas in practice.

    • @SundasLatif
      @SundasLatif 5 years ago +1

      Yes, adding how to implement this will make the series more helpful.

    • @aujasvimoudgil2738
      @aujasvimoudgil2738 4 years ago

      Hi Krish, please make a playlist of practical implementations of these theoretical concepts.

  • @poojarai7336
    @poojarai7336 6 months ago

    You are a blessing for new students, sir... God's gift to us students.

  • @harshstrum
    @harshstrum 5 years ago +2

    Krish bhaiya, you are just awesome. Thanks for all that you are doing for us.

  • @sudhasagar292
    @sudhasagar292 3 years ago +4

    This is so easily understandable, sir. I'm so lucky to have found you here. Thanks a ton for these valuable lessons, sir. Keep shining.

  • @liudreamer8403
    @liudreamer8403 3 years ago

    Very impressive explanation. I have now totally adapted to Indian English. So wonderful.

  • @CoolSwag351
    @CoolSwag351 3 years ago +8

    Hi Krish. Thanks a lot for your videos. You made me fall in love with DL ❤️ I took many introductory courses on Coursera and Udemy but couldn't understand all the concepts from them. Your videos are just amazing. One request: could you please make some practical implementations of the concepts, so that it would be easy for us to understand them in practical problems?

  • @muhammadshifa4886
    @muhammadshifa4886 2 years ago

    You are always awesome! Thanks Krish Naik

  • @SravaniGoud-b6r
    @SravaniGoud-b6r 27 days ago

    Thank you, sir. After seeing your videos, some of my doubts are cleared.

  • @sahilmahajan421
    @sahilmahajan421 2 years ago

    Amazing. Simple, short & crisp.

  • @mohdazam1404
    @mohdazam1404 5 years ago +2

    Ultimate explanation, thanks Krish

  • @vishaljhaveri7565
    @vishaljhaveri7565 3 years ago

    Thank you, Krish sir. Good explanation.

  • @vgaurav3011
    @vgaurav3011 4 years ago +1

    Very very amazing explanation thanks a lot!!!

  • @sarahashmori8999
    @sarahashmori8999 2 years ago

    I like this video; you explained it very well! Thank you!

  • @enoshsubba5875
    @enoshsubba5875 4 years ago +9

    Never Skip Calculus Class.

  • @ahmedpashahayathnagar5022
    @ahmedpashahayathnagar5022 2 years ago

    Nice explanation, sir.

  • @vikashverma7893
    @vikashverma7893 4 years ago

    Nice explanation, Krish sir.

  • @touseefahmad4892
    @touseefahmad4892 5 years ago +1

    Nice explanation, Krish sir...

  • @thealgorithm7633
    @thealgorithm7633 5 years ago +1

    Very nice explanation

  • @baaz5642
    @baaz5642 3 years ago

    Awesome!

  • @louerleseigneur4532
    @louerleseigneur4532 3 years ago

    Thanks Krish

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago +7

    Hi Krish, that was also a great video in terms of understanding. Please make a playlist of practical implementations of these theoretical concepts, and please provide the .ipynb notebook for download just below, so that we can practice in a Jupyter notebook.

  • @zzzmd11
    @zzzmd11 4 years ago +2

    Hi Krish, very informative as always. Thank you so much. Can you please also do a tutorial on the Fokker-Planck equation? Thanks a lot in advance.

  • @munjirunjuguna5701
    @munjirunjuguna5701 2 years ago +2

    Hello Krish,
    Thanks for the amazing work you are doing.
    Quick one: you have talked about the derivative being zero when updating the weights... so how do you tell it's the global minimum and not the vanishing gradient problem?

    • @sportsoctane
      @sportsoctane 1 year ago

      You check the slope. Say you start on a negative slope; that means the weights are decreasing. If, after reaching zero, the slope changes to positive, you have found your minimum. With a vanishing gradient, it just keeps shrinking. Correct me, anyone, if I'm wrong.
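
      A small numerical illustration of that slope-sign test, using a hypothetical one-dimensional loss L(w) = (w - 2)**2 + 1 invented for this example: the gradient is negative to the left of the minimum, zero at it, and positive to the right, whereas a vanishing gradient merely shrinks in magnitude without flipping sign.

          # Hypothetical loss L(w) = (w - 2)**2 + 1, so dL/dw = 2 * (w - 2)
          def grad(w):
              return 2.0 * (w - 2.0)

          for w in (1.0, 2.0, 3.0):
              g = grad(w)
              side = ("negative (still descending)" if g < 0
                      else "positive (ascending)" if g > 0
                      else "zero (critical point)")
              print(f"w = {w}: dL/dw = {g:+.1f} -> {side}")

          # The sign flip from - to + around w = 2 marks a minimum; a vanishing
          # gradient deep in a network shrinks toward zero without such a flip.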

  • @knowledgehacker6023
    @knowledgehacker6023 5 years ago +1

    very nice

  • @9999afshin
    @9999afshin 18 days ago

    Nice

  • @mscsakib6203
    @mscsakib6203 5 years ago

    Awesome...

  • @vishaldas6346
    @vishaldas6346 4 years ago

    I don't think the derivative of the loss function can simply be set to zero for calculating new weights, because when it equals zero the update gives W(new) = W(old), which looks like the vanishing gradient problem. Isn't it rather that the derivative of the loss is applied iteratively, so the weights are optimized until the actual y and the predicted y-hat become approximately equal? Please correct me if I'm wrong.
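
    For what it's worth, here is a toy sketch of the update rule under discussion, w_new = w_old - lr * dL/dw, on a made-up quadratic loss: the step shrinks together with the gradient, so w_new equals w_old only once the minimum is actually reached, not before.

        # Made-up quadratic loss L(w) = (w - 3)**2, with gradient 2 * (w - 3)
        w, lr = 0.0, 0.1                  # arbitrary starting weight and learning rate

        for step in range(200):
            grad = 2.0 * (w - 3.0)        # dL/dw at the current weight
            w_new = w - lr * grad         # gradient-descent update
            if abs(w_new - w) < 1e-6:     # w_new ~ w_old only near the minimum
                print(f"converged at step {step}: w = {w_new:.4f}")
                break
            w = w_new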

  • @xiyaul
    @xiyaul 4 years ago

    You mentioned in the previous video that you would talk about momentum in this video, but I am yet to hear about it...

  • @ohn0oo
    @ohn0oo 1 year ago

    What if I have a decrease from 8 to infinity? Would the lowest visible point still be my global minimum?

  • @rafibasha1840
    @rafibasha1840 3 years ago

    Hi Krish, since the slope is also zero at a local maximum, why don't we consider local/global maxima instead of minima?

  • @shefaligoyal3907
    @shefaligoyal3907 2 years ago

    At the global minimum, if the derivative of the loss function w.r.t. w becomes 0, then w_old = w_new and there is no change in the value, so how can the loss function value be reduced any further?

  • @Velnio_Išpera
    @Velnio_Išpera 3 years ago

    Why do we need to minimize the cost function in machine learning; what's the purpose of this? Yeah, I understand that there will be fewer errors etc., but I need to understand it from a fundamental perspective. Why don't we use the global maximum, for example?

    • @aritratalapatra8452
      @aritratalapatra8452 2 years ago

      You minimize the error of your prediction; the maximum is the point where the error function is highest.
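
      A tiny made-up example of that point: mean squared error measures how far predictions are from the targets, so the best parameters are those at the minimum of the loss, while the maximum corresponds to the worst possible fit.

          # Mean squared error for two hypothetical sets of predictions
          y_true = [1.0, 2.0, 3.0]

          def mse(y_pred):
              return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

          print(mse([1.1, 1.9, 3.2]))   # close to the targets -> small loss (0.02)
          print(mse([5.0, 0.0, 9.0]))   # far from the targets -> large loss (~18.7)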

  • @anindyabanerjee743
    @anindyabanerjee743 4 years ago +2

    If at the global minimum w_new is equal to w_old, what is the point of reaching it? Am I missing something? @krish naik

    • @bhagyashrighuge4170
      @bhagyashrighuge4170 3 years ago

      After that point the slope increases or decreases.

    • @KrishnaMishra-fl6pu
      @KrishnaMishra-fl6pu 3 years ago

      The whole point is to reach the global minimum... because at the global minimum you get the W at which the loss is minimal.

  • @ibrahimShehzadGul
    @ibrahimShehzadGul 4 years ago

    I think that at a local minimum ∂L/∂w is not 0, because the ANN output is not equal to the required output. If I am wrong, please correct me.

  • @jaggu_007i
    @jaggu_007i 4 years ago

    Krish bro, when w_new and w_old are equal, does that mean we have the vanishing gradient problem?

    • @alinawaz8147
      @alinawaz8147 2 years ago

      No bro, the vanishing gradient is a problem that occurs through the chain rule when we use sigmoid or tanh; to overcome that problem we use the ReLU activation function.
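
      A quick numeric sketch of that point (the depth and pre-activation value are made up): the sigmoid's derivative never exceeds 0.25, so the chain rule multiplies many small factors and the gradient shrinks layer by layer, while ReLU's derivative is 1 for positive inputs and the product does not decay.

          import math

          def sigmoid_deriv(z):
              s = 1.0 / (1.0 + math.exp(-z))
              return s * (1.0 - s)               # at most 0.25 (at z = 0)

          def relu_deriv(z):
              return 1.0 if z > 0 else 0.0

          layers, z = 10, 0.5                    # hypothetical depth and pre-activation

          print(f"sigmoid: {sigmoid_deriv(z) ** layers:.2e}")  # ~5e-07, vanished
          print(f"ReLU:    {relu_deriv(z) ** layers:.2e}")     # 1.0, preserved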

  • @mizgaanmasani8456
    @mizgaanmasani8456 4 years ago +1

    Why do neurons need to converge at the global minimum?

    • @ish694
      @ish694 4 years ago +5

      Neurons don't. The weights converge to values that represent the point at which the loss function is at its minimum. Our goal here is to formulate a loss function and find the weights, or parameters, that optimize (minimize) it. If we don't optimize it, our model won't learn any input-output relationship; it won't know what to predict when given a set of inputs.
      Also, I think when he said neurons converge at the end, he meant the parameters of a neuron, not the value of the neuron itself.
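
      A minimal sketch of that idea with made-up data: a single weight fit by gradient descent to learn the relationship y = 2x. As the weight converges to the minimum of the loss, the model starts predicting correctly, which is exactly what it cannot do without optimization.

          # One-parameter model y_hat = w * x fit to the invented relationship y = 2x
          xs = [1.0, 2.0, 3.0, 4.0]
          ys = [2.0, 4.0, 6.0, 8.0]

          w, lr = 0.0, 0.02
          for _ in range(200):
              # gradient of the MSE loss w.r.t. w: mean of 2 * (w*x - y) * x
              g = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
              w -= lr * g                        # move w toward the loss minimum

          print(f"learned w = {w:.3f}")          # converges near 2.0
          print(f"prediction for x = 5: {w * 5:.2f}")   # ~10.00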

  • @quranicscience9631
    @quranicscience9631 5 years ago

    nice

  • @prerakchoksi2379
    @prerakchoksi2379 4 years ago

    How do we deal with local maxima? I am still not clear.

    • @adityaanand3065
      @adityaanand3065 3 years ago

      Look up simulated annealing; you will get your answer. There are definitely many other methods, but this is the one I know.
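
      A bare-bones simulated-annealing sketch for intuition; the landscape f and the cooling schedule are invented for illustration. Unlike plain gradient descent, it sometimes accepts uphill moves, with a probability that shrinks as the temperature cools, which lets it hop out of a local minimum.

          import math, random

          def f(x):                     # invented landscape with several minima
              return 0.5 * x**2 + 3.0 * math.sin(2.0 * x)

          random.seed(0)
          x, temp = 4.0, 5.0            # arbitrary start and initial temperature
          while temp > 1e-3:
              candidate = x + random.uniform(-1.0, 1.0)    # random neighbor
              delta = f(candidate) - f(x)
              # downhill moves always accepted; uphill with prob exp(-delta/temp)
              if delta < 0 or random.random() < math.exp(-delta / temp):
                  x = candidate
              temp *= 0.99              # geometric cooling
          print(f"ended near x = {x:.3f}, f(x) = {f(x):.3f}")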