Machine Learning Tutorial Python - 4: Gradient Descent and Cost Function

  • Published: 10 Sep 2024

Comments • 710

  • @codebasics
    @codebasics  2 years ago +9

    Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced

  • @angulimaldaku4877
    @angulimaldaku4877 4 years ago +167

    3Blue1Brown is a great channel, and so is your explanation. Kudos to you!
    Also, it is quite appreciable how you positively promote and credit others' good work. That kind of genuineness is much needed.

  • @mdlwlmdd2dwd30
    @mdlwlmdd2dwd30 3 years ago +22

    For people who want to know what's behind the scenes:
    The reason the partial derivative of the cost function (MSE) with respect to m is -(2/n) Σ x_i (y_i - (m*x_i + b)) is the chain rule in calculus.
    We take the derivative with respect to m, and the chain rule lets us dissect the function.
    Suppose we have a function F(m) = (am + b)^2. We handle the outer square first: dF/dm = 2*(am + b) * d/dm(am + b) = 2*(am + b) * a. Likewise, applying the chain rule to the MSE, the inner expression is (y_i - (m*x_i + b)) and its derivative with respect to m is -x_i, which is exactly where the x_i factor and the minus sign come from, giving -(2/n) Σ x_i (y_i - (m*x_i + b)).
    Please don't accept it as it is, or you will never completely learn why things work and never come up with your own solution. The easy way never gets you where you want to go.
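
    A quick way to sanity-check this derivation is symbolic differentiation. A minimal sketch, assuming sympy is available (variable names are illustrative):

    import sympy as sp

    m, b, x_i, y_i = sp.symbols('m b x_i y_i')
    err = (y_i - (m*x_i + b))**2   # squared error of one sample; the MSE is (1/n) times the sum of these
    print(sp.diff(err, m))         # equals -2*x_i*(y_i - (m*x_i + b)): the x_i and the minus sign come from the inner derivative
    print(sp.diff(err, b))         # equals -2*(y_i - (m*x_i + b))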

    • @datalearningsihan
      @datalearningsihan 1 year ago

      No one really explains why we have -(2/n) instead of 2/n. If you do the calculation, even with the chain rule, you will get 2/n; you will never get negative values!

    • @awakenwithoutcoffee
      @awakenwithoutcoffee 3 months ago

      @@datalearningsihan I think the minus sign comes from the inner derivative: differentiating (y_i - (m*x_i + b)) with respect to m gives -x_i, and that minus sign carries through to the front.

  • @mukulbarai1441
    @mukulbarai1441 3 years ago +181

    It has become so clear that I am gonna teach it to my dog.

    • @codebasics
      @codebasics  3 years ago +21

      👍🙂

    • @farazaliahmad3257
      @farazaliahmad3257 3 years ago +3

      Just Do it....

    • @Austin-pw2ud
      @Austin-pw2ud 3 years ago +33

      Don't do it!
      He may become a threat to humanity!

    • @eresque7766
      @eresque7766 2 years ago +3

      @@Austin-pw2ud he may become the cleverest, but he'll remain a good boy

    • @Austin-pw2ud
      @Austin-pw2ud 2 years ago +5

      @@eresque7766 ouuuuuchh! That touched my ♥

  • @chokoprty
    @chokoprty 4 months ago +16

    Watching this at 2x, like if you are too 😂

  • @IVIRnathanreilly
    @IVIRnathanreilly 1 year ago +12

    I've been struggling with my online lectures on machine learning. Your videos are so helpful. I can't thank you enough!

  • @codebasics
    @codebasics  4 years ago

    How to learn coding for beginners | Learn coding for free: ruclips.net/video/CptrlyD0LJ8/видео.html

  • @codebasics
    @codebasics  4 years ago +1

    Stochastic vs Batch vs Mini gradient descent: ruclips.net/video/IU5fuoYBTAM/видео.html
    Step by step roadmap to learn data science in 6 months: ruclips.net/video/H4YcqULY1-Q/видео.html
    Machine learning tutorials with exercises:
    ruclips.net/video/gmvvaobm7eQ/видео.html

  • @mamtachaudhary5281
    @mamtachaudhary5281 3 years ago +9

    I have gone through so many materials and couldn't understand a thing from them, but this video is amazing. Thanks for putting up all your videos.

  • @waytosoakshya1127
    @waytosoakshya1127 1 year ago +2

    Finally found the best ML tutorials. Coding and mathematics combined and explained very clearly. Thank you!

  • @alidi5616
    @alidi5616 4 years ago +11

    This is the best tutorial I have ever seen. This is truly from scratch. Thank you so much.

  • @officesuperhero9611
    @officesuperhero9611 6 years ago +58

    I’m so excited to see you uploaded a new video on machine learning. I’ve watched your other 3 a couple of times. They’re really top notch. Thank you. Please keep this series going. You’re a great teacher too.

  • @SudiKrishnakum
    @SudiKrishnakum 3 years ago +1

    I followed tonnes of tutorials on gradient descent. Nothing came close to the simplicity of your explanation. Now I have a good grasp of this concept! Thanks for this, sir!

  • @vanlindertpoffertje3032
    @vanlindertpoffertje3032 5 years ago +8

    Thank you so much for the detailed explanation! I have difficulty understanding these theories, and most channels just explain without covering the basics. With your explanation, it is now soooo clear! Amazing!!

  • @ecgisamal
    @ecgisamal 2 years ago +1

    If there were an award for best teacher in the world, it would go to this person, Programming Hero, and Brackeys.

    • @codebasics
      @codebasics  2 years ago +2

      🙏 thanks for your kind words ayuro

    • @ecgisamal
      @ecgisamal 2 years ago

      @@codebasics Can you please make a tutorial playlist on OpenCV with Python?

  • @AYUSHKUMAR-dm1xg
    @AYUSHKUMAR-dm1xg 4 years ago

    Who are the people disliking these videos? These people work hard and make these videos for us. Please, if you don't like it, don't watch it, but don't dislike it. It is misleading to the people who come to watch these videos. I know many of us have studied some of these concepts before, but he is making videos for everyone and not for a small section of people. I feel that this channel's videos are amazing and don't deserve any dislikes.

    • @codebasics
      @codebasics  4 years ago

      Thanks, Ayush. I am moved by your comment and kind words. I indeed put a lot of effort into making these videos. Dislikes are fine, but if these people also gave a reason why they disliked it, it would help me a lot in terms of feedback and future improvements 😊

  • @valijoneshniyazov
    @valijoneshniyazov 3 years ago +9

    When you calculate partial derivatives, don't assume x or y is zero; treat them as constants instead.
    For example:
    f(x, y) = x*y
    If you plug in zeros, your partial derivatives will be 0,
    but they should be ∂f/∂x = y and ∂f/∂y = x.

    • @rajdipdas1329
      @rajdipdas1329 2 years ago

      No, why would the partial derivative be zero? We have to treat the other variable as a constant: ∂f(x,y)/∂x = x*∂y/∂x + y = y (since ∂y/∂x = 0), and likewise ∂f(x,y)/∂y = x + y*∂x/∂y = x.
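
      A two-line sympy check of the point above, as a minimal sketch:

      import sympy as sp

      x, y = sp.symbols('x y')
      print(sp.diff(x*y, x))   # y: y is held constant while differentiating with respect to x
      print(sp.diff(x*y, y))   # x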

  • @hfe1833
    @hfe1833 5 years ago +10

    Thank you, I think I found the right channel for machine learning

  • @sarikamishra7051
    @sarikamishra7051 3 years ago +2

    Sir, you are the best teacher I have ever had for machine learning.

  • @ishaanverma9asn523
    @ishaanverma9asn523 3 years ago +2

    This is the best ML course I've ever come upon!

  • @vishnusagubandi8274
    @vishnusagubandi8274 4 years ago +1

    I think this is the best gradient descent tutorial, even better than Andrew Ng sir's.
    I got stuck with Andrew sir's tutorial and later came here.
    Finally got it... Thanks a lot, bro 🙏🙏

  • @VijaykumarS7
    @VijaykumarS7 10 months ago +1

    You explained this complex concept in the simplest way. Best teacher in the world 🎉🎉

    • @codebasics
      @codebasics  10 months ago

      Glad you liked it! 😊

  • @jenglong7826
    @jenglong7826 5 years ago +2

    This was an excellent explanation! Not too technical, and explained in simple terms without losing the key elements. I used this to supplement Andrew Ng's machine learning course on Coursera (which got technical real quick), and it's been really helpful. Thanks!

    • @codebasics
      @codebasics  5 years ago +2

      Glad you found it useful Chia Jeng.

  • @ajinkyap4246
    @ajinkyap4246 4 years ago +6

    Thanks for these wonderful tutorials. Literally the best channel I have found on YouTube for data science. I have followed your guidelines since the beginning and they have helped me very much.
    Exercise output: learning rate = 0.0002, m = 1.0177381667350405, b = 1.9150826165722297, cost = 31.604511334602297, previous cost = 31.604511334602297, iteration = 415533
    PS: Try it in PyCharm; Jupyter gets stuck after 91915 iterations.

    • @ankitnayal6091
      @ankitnayal6091 2 years ago +1

      How did you arrive at this learning rate as your input?

    • @antimuggle_ridhi2565
      @antimuggle_ridhi2565 10 months ago

      How did you tweak the number of iterations?

    • @TY-zl1vw
      @TY-zl1vw 8 months ago

      Thanks for sharing your result; my most recent tweak only reduced the cost to 31.60451133489039. With the built-in LinearRegression (the sklearn module approach), the cost ends up as 31.60451133352958, so use that as the benchmark, I suppose.

  • @ayushlabh
    @ayushlabh 5 years ago +10

    It's the most helpful video on gradient descent I have seen so far. Great work. Looking forward to more videos on machine learning.

  • @sharathchandrachowdary6828
    @sharathchandrachowdary6828 1 year ago +3

    This video alone is enough to show the excellence of your explanation. Simply mind-blowing.

  • @MondayMotivations
    @MondayMotivations 4 years ago +1

    I don't know why you are so underrated. Only 73K subscribers. You deserve way more than that, given the way you clear up concepts. You're simply awesome, man.

    • @codebasics
      @codebasics  4 years ago

      I am happy this was helpful to you

    • @geekyprogrammer4831
      @geekyprogrammer4831 3 years ago

      Now he has 281K, and I expect it to be more in the future :D

  • @princeekanim1804
    @princeekanim1804 4 years ago +2

    This tutorial made me finally understand gradient descent and the cost function... I don't know how you did it, but you did... thanks, man. I really appreciate it.

    • @codebasics
      @codebasics  4 years ago +1

      You're very welcome Prince :) I am glad your concepts are clear now.

    • @princeekanim1804
      @princeekanim1804 4 years ago

      codebasics no problem, keep it up, you're a great teacher

  • @ramsawasthi
    @ramsawasthi 5 years ago +15

    Great tutorial, explained in very easy language in very little time.

  • @TheSocialDrone
    @TheSocialDrone 4 years ago +3

    This was a difficult topic for me; then I spent the time to watch your video. Thank you for making my learning easier! Very nice explanation.

    • @codebasics
      @codebasics  4 years ago +1

      👍😊

  • @daychow4659
    @daychow4659 1 year ago

    Omg!!! This is my first time seeing someone actually calculate how gradient descent works!!!!

  • @yashagarwal3999
    @yashagarwal3999 4 years ago +3

    You have explained a tough topic to beginners so calmly and nicely.

  • @fernandoriosleon
    @fernandoriosleon 4 years ago +11

    Finally I learned gradient descent, thank you so much 🙏

  • @fridayemmanueljames4873
    @fridayemmanueljames4873 1 year ago

    Waooo, for a long time I've struggled to really understand the gradient descent algorithm. Now I feel like a pro!

  • @How_About_This428
    @How_About_This428 2 years ago

    Indians, as always, such smart and brilliant people.
    Thank you for the video, it helped me a lot.

  • @rajarshiroy8794
    @rajarshiroy8794 1 year ago +1

    I tried it and it was fun.
    I took iterations = 1000000 and
    learning_rate = 0.0002.
    From the gradient_descent function: m = 1.0192813318173286, b = 1.8057225128259167, in 120760 iterations,
    while from the sklearn regression model: coefficient = 1.01773624 and intercept = 1.9152193111568891.

    • @antimuggle_ridhi2565
      @antimuggle_ridhi2565 10 months ago

      What should the range of iterations be? Because the lower the number, the better.

  • @hrithvikreddy6643
    @hrithvikreddy6643 3 months ago +1

    The exercise that you shared takes a huge number of iterations to get to the correct intercept and coefficient... my laptop hung many times doing that problem 😵‍💫😵‍💫😵‍💫😵‍💫

    • @Haven_Hue
      @Haven_Hue 3 months ago +1

      Code (try this):
      import math
      import pandas as pd
      import numpy as np

      data = pd.read_csv("D:\\Machine_learning\\Grad_des\\test_scores.csv")
      x = data['math'].to_numpy()
      y = data['cs'].to_numpy()

      def gradient_descent(x, y):
          m_curr = b_curr = 0
          iterations = 10000
          n = len(x)
          learning_rate = 0.001
          prev = 0
          for i in range(iterations):
              y_predicted = m_curr * x + b_curr
              md = -(2/n) * sum(x * (y - y_predicted))
              bd = -(2/n) * sum(y - y_predicted)
              m_curr = m_curr - learning_rate * md
              b_curr = b_curr - learning_rate * bd
              cost = (1/n) * sum([val**2 for val in (y - y_predicted)])   # MSE: multiply by 1/n
              if math.isclose(cost, prev, rel_tol=1e-09):
                  break
              print("m {}, b {}, cost {}, iteration {}".format(m_curr, b_curr, cost, i))
              prev = cost

      gradient_descent(x, y)

  • @Arun_Kumar_x86
    @Arun_Kumar_x86 3 years ago +2

    Guys, I have a doubt: when the cost is low we get higher accuracy, but with 10 iterations we had a cost near 0, while with 1000 iterations we got a cost around 1 yet better accuracy. Why is that?

  • @creator025
    @creator025 5 years ago +2

    This note in the solution --> ''' Good students always try to solve exercise on their own first and then look at the ready made solution
    I know you are an awesome student !! :)
    Hence you will look into this code only after you have done your due diligence.
    If you are not an awesome student who is full of laziness then only you will come here
    without writing single line of code on your own. In that case anyways you are going to
    face my anger with fire and fury !!!
    '''it really works :P
    thanks

    • @codebasics
      @codebasics  5 years ago

      Ha ha, nice. I knew it would work for someone at least 😊

    • @hussain5755
      @hussain5755 5 years ago

      Sumit, you are awesome

  • @sararamadan1907
    @sararamadan1907 3 years ago +1

    I wanted to thank you before even finishing the video, just to tell you that you made my day with this lesson.

    • @codebasics
      @codebasics  3 years ago

      Sara, I am glad you liked it, and thanks for leaving a comment :)

  • @clarizalook2396
    @clarizalook2396 4 years ago +1

    I'm confused. This is something very new to me even though I studied calculus in my undergrad years. I did not get it fully, but the code worked on my end. Perhaps, as I get into different models, I'll slowly understand this. Thanks for sharing all these.

    • @codebasics
      @codebasics  4 years ago +1

      Yup, Clarie. The tip here is to go slowly without getting overwhelmed. Don't give up, and slowly you will start understanding it 😊👍

  • @likhitakorla571
    @likhitakorla571 3 months ago

    Sir, your explanation is the best I have found for ML!!
    Rectification: in the Jupyter notebook code given on GitHub there is a small error in the plt.scatter() function; we should use linewidths=5.
    If we pass linewidth='5' it generates a type error. Do check it!!

  • @rishabkumar9578
    @rishabkumar9578 3 years ago +1

    You are the best teacher for data science... thanks

  • @saltanatkhalyk3397
    @saltanatkhalyk3397 3 years ago

    Thank you for such an easy explanation. I had read about gradient descent many times, but this is the first time I understood the math behind it.

  • @leooel4650
    @leooel4650 5 years ago +1

    Edited*
    I struggled a bit with the math.isclose() function.
    m = 1.017738166...
    b = 1.9152193111... with a learning rate of 0.0002.
    I had a 0.00018 learning rate in the beginning, but I found out that I had a small typo.
    Thank you for your time and the precious knowledge that you share with us free of charge!

  • @ishitasadhukhan1
    @ishitasadhukhan1 2 years ago +1

    The best tutorial on Gradient Descent !

  • @khushidonda7168
    @khushidonda7168 1 year ago +1

    I have a doubt.
    Can you help me with how to plot all the values of m and b on a chart?
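
    One way to do this, as a minimal sketch rather than the tutorial's own code: record m and b on every iteration and plot the histories afterwards (the data arrays are the video's toy example):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.array([1, 2, 3, 4, 5])
    y = np.array([5, 7, 9, 11, 13])

    m_curr = b_curr = 0
    learning_rate = 0.08
    n = len(x)
    m_history, b_history = [], []

    for i in range(1000):
        y_predicted = m_curr * x + b_curr
        md = -(2/n) * np.sum(x * (y - y_predicted))
        bd = -(2/n) * np.sum(y - y_predicted)
        m_curr = m_curr - learning_rate * md
        b_curr = b_curr - learning_rate * bd
        m_history.append(m_curr)   # keep every intermediate value for plotting
        b_history.append(b_curr)

    plt.plot(m_history, label='m')   # both curves flatten as they converge toward 2 and 3
    plt.plot(b_history, label='b')
    plt.xlabel('iteration')
    plt.legend()
    plt.show()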

  • @rifatislam2340
    @rifatislam2340 8 months ago +1

    Coursera: for certification!
    Random Indian video: for proper understanding!

  • @Opinionman2
    @Opinionman2 2 years ago +1

    Best video on the topic I’ve seen so far!
    Thanks

  • @stellarspacetraveler
    @stellarspacetraveler 2 years ago +1

    Note to the commenters below who say you don't need the for loop. The reason that the instructor is using a for loop with the variable "i" that is not used in the body of the loop, is that he wants to be able to print out the intermediate values of the variables so that you can see them changing while they converge to the solution. If he went directly to the solution without the for loop, you would not see the printed variables changing.

  • @GlobalDee_
    @GlobalDee_ 3 years ago +1

    Waoh, waoh. Codebasics to the world. You are such a great teacher, sir. Thanks for sharing this series.........

  • @radhekrashna2148
    @radhekrashna2148 1 year ago

    Thank you.
    Now it's crystal clear what the gradient function, the cost function (MSE), the slope, and the intercept are, what these terms mean, and how the scikit-learn library implements such machine learning functionality.

  • @akashmishra5553
    @akashmishra5553 3 years ago +1

    Hey, thanks for creating all these playlists. They are so good. I think viewers should at least like and comment to show some love and support.

  • @Pant10
    @Pant10 7 months ago +1

    Sir, at 25:40 you say that the optimum values for m and b are 2 and 3; how do we know that?
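
    For what it's worth, the optimum is known here because the video's toy data (x = 1..5, y = 5, 7, 9, 11, 13) lies exactly on the line y = 2x + 3, so m = 2, b = 3 gives zero cost. A one-line check:

    x = [1, 2, 3, 4, 5]
    print([2*xi + 3 for xi in x])   # [5, 7, 9, 11, 13] -- matches y exactly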

  • @hemantsrivastava3745
    @hemantsrivastava3745 3 years ago +6

    For plotting the graph, in the .ipynb remove the quotation marks around the linewidth value.

  • @achyutambharatam2116
    @achyutambharatam2116 3 years ago +1

    I got an error in Jupyter Notebook while plotting the graph... and I copy-pasted your entire ipynb code exactly 😑

  • @prathameshjoshi9199
    @prathameshjoshi9199 3 years ago +2

    Please help me, I have a doubt.
    While calculating the slope of the cost function, if we don't know the cost function beforehand, how can we calculate its slope? I mean, if I know that my cost function looks like a sigmoid (for example), then I can use the sigmoid derivative to find the slope of the cost function.
    But if I don't know what my cost function looks like, how can I decide which derivative formula to use to calculate the slope?
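
    When no closed form is known, a common workaround is a numerical (finite-difference) estimate of the slope. A minimal sketch, where cost() stands in for any function you can only evaluate:

    def numerical_slope(cost, m, h=1e-6):
        # central difference: approximates d(cost)/dm using two evaluations, no formula needed
        return (cost(m + h) - cost(m - h)) / (2 * h)

    print(numerical_slope(lambda m: (m - 2)**2, 0.0))   # approx -4.0, the true slope of (m-2)^2 at m=0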

  • @slainiae
    @slainiae 7 months ago +1

    Is this the same as:
    reg = linear_model.LinearRegression()
    reg.fit()
    that was shown in your previous two videos?
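
    Conceptually yes: LinearRegression computes the same best-fit m and b, but via a direct least-squares solve instead of iterative descent, so both should agree. A minimal comparison sketch on the video's toy data, assuming scikit-learn is installed:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    x = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)   # sklearn expects a 2-D feature matrix
    y = np.array([5, 7, 9, 11, 13])

    reg = LinearRegression().fit(x, y)
    print(reg.coef_, reg.intercept_)               # approx [2.] and 3.0, matching gradient descent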

  • @kumarc4853
    @kumarc4853 4 years ago +1

    StatQuest with Josh Starmer is also a great channel. He explains things slightly better, but without code.

    • @ayush9psycho
      @ayush9psycho 3 years ago

      Josh Starmer is the best person to look to if you want a solid understanding of algorithms!

  • @siddharthsingh6393
    @siddharthsingh6393 3 years ago +1

    Can we do this without using a for loop?
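
    For this particular cost there is no need to iterate at all: the least-squares line has a closed-form solution, so the loop can be replaced entirely. A minimal numpy sketch:

    import numpy as np

    x = np.array([1, 2, 3, 4, 5])
    y = np.array([5, 7, 9, 11, 13])

    X = np.column_stack([x, np.ones_like(x)])     # one column for m, one for b
    m, b = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares solve of X @ [m, b] = y
    print(m, b)                                   # 2.0 3.0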

  • @RashmiAshok-ky4nf
    @RashmiAshok-ky4nf 6 months ago

    Question: A formula is shown for MSE at 4:30 in this video, replacing y_predicted with the line equation. X_i is the input, so m*x_i + b would give y_i, not y_predicted. So (y_i - y_predicted) would be zero. I think you need to substitute y_i with the line equation and not y_predicted. Correct me if I am wrong.
    Great videos btw. I just started learning ML and your videos are amazing. Thanks for making these!

  • @krystianprogress4521
    @krystianprogress4521 2 years ago

    Thanks to you, I finally understood what gradient descent is.

  • @usmanriaz8396
    @usmanriaz8396 2 years ago

    Best video on gradient descent and the cost function; I understood the math pretty well. Excellent. Love from Pakistan.

  • @ayushgupta80
    @ayushgupta80 5 months ago

    In machine learning, we have input-output pairs; with the help of these values we derive an 'equation' known as the prediction function.
    First we draw the line which is most appropriate and passes nearest to the output points.
    Then we calculate the Mean Squared Error (a popular cost function): MSE = (1/n) * Σ_{i=1..n} (y_i - y_predicted)², i.e. MSE = (1/n) * Σ(actual - forecast)²,
    where y_predicted = m*x + b.
    Gradient descent is an algorithm that finds the best-fit line for a given training data set.
    Code:
    import numpy as np

    def gradient_descent(x, y):
        m_curr = b_curr = 0
        iterations = 10000
        n = len(x)
        learning_rate = 0.08
        for i in range(iterations):
            y_predicted = m_curr * x + b_curr
            cost = (1/n) * sum([val**2 for val in (y - y_predicted)])
            md = -(2/n) * sum(x * (y - y_predicted))    # partial derivative w.r.t. m
            bd = -(2/n) * sum(y - y_predicted)          # partial derivative w.r.t. b
            m_curr = m_curr - learning_rate * md
            b_curr = b_curr - learning_rate * bd
            print("m {}, b {}, cost {}, iteration {}".format(m_curr, b_curr, cost, i))

    x = np.array([1, 2, 3, 4, 5])
    y = np.array([5, 7, 9, 11, 13])
    gradient_descent(x, y)
    We are supposed to find a learning rate for which the cost decreases continuously.

  • @saifullahshahen
    @saifullahshahen 3 years ago

    I watched this video a third time, and after the third time it is now understandable.

  • @sukumarroychowdhury4122
    @sukumarroychowdhury4122 3 years ago

    Hey, you are absolutely excellent. I have seen many guys offering machine learning tutorials; none is as simple, as clear, and as educational as you are. Best regards, Sukumar Roy Chowdhury - ex Kolkata, Portland, OR, USA

    • @codebasics
      @codebasics  3 years ago

      Sukumar, I am glad this video helped 👍🙏

  • @piousringlearning
    @piousringlearning 5 years ago

    Hi Sir,
    learning_rate = 0.001
    m 3.893281222433439e+95, b 5.493754515745566e+93, cost 1.000639601934861e+193, iteration 102m
    This is my result.
    You are really a great teacher.

    • @piousringlearning
      @piousringlearning 5 years ago

      Dear Sir,
      Thank you for replying. Actually, that day after I posted my comment I realized that I had not reached the minimum: "cost 1.000639601934861e+193" is written in scientific notation so it looks very small, but it actually isn't. I then changed my learning rate and number of iterations, and now below is my result:
      m 1.0201403245621825, b 1.7448479267089132, cost 31.606177703903207, iteration 99999

  • @kalpavrikshika8256
    @kalpavrikshika8256 4 years ago +4

    Can anyone explain the math.isclose logic?
    My code ends with a cost of 31.6045, which still seems high, so why is it breaking and giving me the coefficient and slope?

    • @ritikpratapsingh9128
      @ritikpratapsingh9128 4 years ago

      The minimum cost depends on your values of x and y. Your cost may well be optimal at 31.6045, because you might have taken points such that the minimum achievable sum of squared errors is 31.6045.

    • @vinodreddy2303
      @vinodreddy2303 3 years ago +1

      If the cost values of two consecutive iterations are almost the same (isclose), it means your model has reached the optimum, so no further improvement is possible.
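
      A tiny illustrative demo of what math.isclose checks here: the loop stops once two consecutive costs agree to within rel_tol, regardless of how large the cost itself still is.

      import math

      print(math.isclose(31.6045, 31.6046, rel_tol=1e-3))   # True: relative difference is about 3e-6, well under 1e-3
      print(math.isclose(31.6045, 31.6046, rel_tol=1e-9))   # False: 3e-6 is still far larger than 1e-9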

  • @aayush135
    @aayush135 2 years ago +1

    Superb!!
    Your lectures are very good and make complicated things very easy. May you keep growing in your life.

  • @moududhassan3026
    @moududhassan3026 6 years ago +4

    The best one for gradient descent. Thank you.

  • @Otaku-Chan01
    @Otaku-Chan01 1 year ago +1

    I'm finding it so hard to find the optimal learning rate and number of iterations for the same set of x and y, in the same sequence but with 10 elements in it.
    Can anyone help me out!??
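
    One hedged way to search: sweep a few candidate learning rates and keep the largest one whose cost decreases monotonically; anything that makes the cost grow is too large. A minimal sketch (substitute your own 10-element x and y for the placeholders):

    import numpy as np

    def final_cost(x, y, learning_rate, iterations=1000):
        m = b = 0.0
        n = len(x)
        prev = float('inf')
        for _ in range(iterations):
            y_pred = m * x + b
            cost = (1/n) * np.sum((y - y_pred)**2)
            if cost > prev:                     # cost went up: this rate diverges
                return float('inf')
            prev = cost
            m -= learning_rate * (-(2/n) * np.sum(x * (y - y_pred)))
            b -= learning_rate * (-(2/n) * np.sum(y - y_pred))
        return prev

    x = np.arange(1, 11, dtype=float)           # placeholder 10-element data
    y = 2 * x + 3
    for lr in [0.1, 0.03, 0.01, 0.003, 0.001]:
        print(lr, final_cost(x, y, lr))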

  • @prajwal3114
    @prajwal3114 3 years ago

    One of the best tutorials on gradient descent.

  • @durgeshkpal
    @durgeshkpal 2 years ago +1

    How do you plot those iterations / visual representations at 26:42?
    Thanks for the machine learning playlist.

  • @ajaykushwaha-je6mw
    @ajaykushwaha-je6mw 3 years ago

    Best ever video on Gradient Descent.

  • @vivekkumargoel2676
    @vivekkumargoel2676 4 years ago +2

    Sir, I followed your tutorial but I am getting a runtime warning (overflow) in Python. How do I correct that?

  • @rajanalexander4949
    @rajanalexander4949 2 years ago +1

    Sharp, to the point, succinct. Great stuff!

  • @PollyMwangi-cp3jn
    @PollyMwangi-cp3jn 6 months ago +1

    What does the learning rate mean?
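
    In short, the learning rate scales how big a step each update takes along the gradient. A tiny illustrative sketch of a single update under two different rates (all numbers made up):

    gradient = -4.0                  # slope of the cost curve at the current m
    m = 0.0
    print(m - 0.01 * gradient)       # 0.04: small rate, small cautious step
    print(m - 0.5 * gradient)        # 2.0:  large rate, big jump that may overshoot the minimum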

  • @shahariarsarkar3433
    @shahariarsarkar3433 2 years ago

    Sir, your GitHub dataset shows as corrupted. Can you check that? Please help us; I couldn't practice because of the corrupted copy.

  • @i.t.878
    @i.t.878 2 years ago

    Such an excellent tutorial, the clearest I have seen on this topic. Kudos. Thank you.

  • @geekyprogrammer4831
    @geekyprogrammer4831 3 years ago

    I think this is the best explanation of gradient descent on the Internet!

    • @codebasics
      @codebasics  3 years ago +1

      I am happy this was helpful to you.

    • @GlobalDee_
      @GlobalDee_ 3 years ago +1

      You can say that again... It is the best I have seen. Thanks so much, sir.

  • @nithesh2011
    @nithesh2011 4 years ago +2

    Hi,
    Why do we only ever subtract in the gradient descent function?
    What if the m gradient is negative? Shouldn't we then write m_curr = m_curr + learning_rate * md?
    Thanks in advance

    • @cefax875
      @cefax875 3 years ago

      Subtraction covers both cases: if the gradient md is positive, m_curr decreases; if md is negative, subtracting a negative value increases m_curr. Either way the update moves m toward the minimum, so there is no need to flip the sign manually.
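
      A quick numeric check of why subtraction alone is enough (illustrative values only):

      learning_rate = 0.1
      m = 1.0
      print(m - learning_rate * 4.0)    # 0.6: positive gradient, m decreases
      print(m - learning_rate * -4.0)   # 1.4: negative gradient, m increases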

  • @king1_one
    @king1_one 7 months ago

    Sir, please make a video about the parameters used in machine learning, explaining each parameter and its usage,
    and the same for deep learning. Sir, please, it's a humble request. Thank you.

  • @ou8xa1vkk64
    @ou8xa1vkk64 3 years ago

    It's a little hard for me! I can't do the exercise myself. But I'm 100% sure no one teaches this more simply anywhere in the world. Keep doing it, love you lots!!!!!!!!!

  • @yourlifeonpower
    @yourlifeonpower 6 months ago

    Very clear, concise and helpful! Thank you !

  • @harryinatube
    @harryinatube 6 years ago +2

    Great video and teaching method. You have the art of keeping things simple while still teaching advanced concepts. I get a very good and quick overview and understanding from your videos. Thanks a lot.

  • @jhansisetty9429
    @jhansisetty9429 5 years ago

    It was a very useful video. After watching many other videos, I understood the concept best from yours. Keep making tutorials like this that make complex topics simple and easy to understand. Thank you.

  • @kasahunabdisa6022
    @kasahunabdisa6022 1 year ago

    A great and simple approach to learning gradient descent. Thank you for your effort.

  • @muhammadazamaslam5809
    @muhammadazamaslam5809 1 month ago

    I have a question at the end: how would we know that the expected values of m and b are 2 and 3 while adjusting the parameters? The teacher here is adjusting the parameters so that the values of m and b come out as 2 and 3, even though we got an even lower cost, around 0.00010, in his first run of the program. Confused. Another question: is the goal here to find the lowest cost for some expected values of m and b?

  • @yangfarhana3660
    @yangfarhana3660 3 years ago

    Clearly broken down concepts, very very good video, thank you for this amazing guide!

  • @user-ys1qg4cl9g
    @user-ys1qg4cl9g 11 months ago

    Best explanation I have ever seen. Love from Bangladesh.

  • @Naveensairaheem
    @Naveensairaheem 7 months ago

    Here it is. It will print the values of m and b only when prev_cost matches the current cost:
    import math

    def gradient_descent(x, y):
        m_curr = b_curr = 0
        iterations = 1000000
        learning_rate = 0.0002
        n = len(x)
        cost = 0
        prev_cost = 1
        for i in range(iterations):
            y_predicted = m_curr * x + b_curr
            cost = (1/n) * sum([val**2 for val in (y - y_predicted)])
            md = -(2/n) * sum(x * (y - y_predicted))
            bd = -(2/n) * sum(y - y_predicted)
            m_curr = m_curr - learning_rate * md
            b_curr = b_curr - learning_rate * bd
            if math.isclose(cost, prev_cost, rel_tol=1e-20):
                print("m {} b {} cost {} prev_cost {} iteration {}".format(m_curr, b_curr, cost, prev_cost, i))
                break
            prev_cost = cost

  • @julianandresgomezgomez7264
    @julianandresgomezgomez7264 3 years ago

    Hi, I just want to say that it doesn't need the list comprehension (23:18):
    cost = (1/n) * sum([val**2 for val in (y - y_predicted)]) is the same as (1/n) * sum((y - y_predicted)**2)
    I like your video, thank you very much!
    Blessings!

    • @codebasics
      @codebasics  3 years ago

      Yup, I know. I kept it that way to make the tutorial easy and simple for those who don't understand vectorized operations with numpy.

  • @AlonAvramson
    @AlonAvramson 3 years ago

    You provide this complex material in such a nice and easy way. Thank you!

  • @banelenyide9087
    @banelenyide9087 4 months ago

    This man is a godsend.

  • @jaiprathapgv2273
    @jaiprathapgv2273 3 years ago +2

    Sir, I have another doubt: we import and use LinearRegression from sklearn. Does gradient descent happen inside the linear regression model to give us the result, or should I use the gradient descent model separately?

  • @HananAhmed0311
    @HananAhmed0311 1 year ago

    19:38 Sir, why did you use -(2/n) in the code? Where does the minus sign in the formula come from? Please tell me!

  • @williamflores7658
    @williamflores7658 5 years ago +18

    Isn't this channel amazing?

  • @srishtikumari6664
    @srishtikumari6664 3 years ago

    Insightful!
    A deep understanding of ML is necessary. You explained it very well.

  • @yousufali_28
    @yousufali_28 5 years ago +1

    Thanks for taking a step-by-step approach and making it easy. 👍

  • @mayankjain24in
    @mayankjain24in 4 years ago

    Awesome explanation. Please keep it up... I also appreciate how you credit others for their work; that's very rare.