Gradient Descent in Neural Networks | Batch vs Stochastic vs Mini-Batch Gradient Descent

  • Published: Sep 5, 2024

Comments • 80

  • @ManasNandMohan
    @ManasNandMohan 3 months ago +16

    Sir, we are finding it difficult to crack data science placements, so please make a dedicated 100-day series of placement tips & tricks. Requesting everyone to upvote this so that sir notices it.

  • @ashishsinha5338
    @ashishsinha5338 5 months ago +4

    I was looking for a short, to-the-point video on gradient descent, and here it is: literally the best one so far. Great job and effort, I must say. Keep making short videos like these.

  • @Tariqkhan-i6v
    @Tariqkhan-i6v 24 days ago

    You are an excellent teacher; you explain everything in detail so that concepts become clear. Thank you so much for your efforts.

  • @swatipandey3942
    @swatipandey3942 5 months ago +1

    You are an excellent teacher; you explain everything in detail so that concepts become clear. Thank you so much for your efforts. You are truly a gem.

  • @pradeepkumarverma5226
    @pradeepkumarverma5226 2 years ago +4

    Sir, the videos are very good and there is a lot to learn from them. Just please keep the uploads consistent, sir; we end up waiting a long time for your videos.

  • @aishwaryap.s.v.s7387
    @aishwaryap.s.v.s7387 9 months ago +2

    I thank you from the bottom of my heart. I am a data engineer looking for a career transition, and you made the concepts very clear. Thank you!

  • @talibdaryabi9434
    @talibdaryabi9434 1 year ago +3

    What a fantastic teaching method; great job.

  • @gradentff39
    @gradentff39 9 months ago +4

    This video has good content, so please change the thumbnail to something more attractive, like your photo with the topic in the background.

  • @anitaga469
    @anitaga469 1 year ago +1

    Good Content, Great Explanation and an exceptionally gifted teacher. Learning is truly made enjoyable by your videos. Thank you for your hard work and clear teaching Nitish Sir.

  • @divyab592
    @divyab592 4 months ago

    Wow!! A clear explanation I never got anywhere else.

  • @kindaeasy9797
    @kindaeasy9797 2 months ago +1

    16:24 But I feel that SGD will take less time, because if we apply batch GD it will update weights using the whole dataset. Say the dataset has 2 million entries; that will take more time. Then again, SGD will also take a good amount of time in this case. I think we can't say which one will take more time in such cases; the time taken depends on a lot of factors, so we can't clearly say that batch GD will take less time.
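
For readers weighing this up: the cost difference is about how many updates happen per pass over the data, and how vectorized each update is. A minimal NumPy sketch (illustrative only, not the video's code) that counts updates per epoch for batch GD versus SGD:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2_000, 3))          # stand-in for a large dataset
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(2_000) * 0.1

def batch_gd_epoch(w, lr=0.01):
    # one update per epoch, but each update touches all rows at once
    grad = 2 * X.T @ (X @ w - y) / len(X)
    return w - lr * grad, 1                   # (new weights, update count)

def sgd_epoch(w, lr=0.01):
    # n updates per epoch, each touching a single row
    updates = 0
    for i in rng.permutation(len(X)):
        grad = 2 * X[i] * (X[i] @ w - y[i])
        w = w - lr * grad
        updates += 1
    return w, updates

w0 = np.zeros(3)
_, n_batch = batch_gd_epoch(w0)
_, n_sgd = sgd_epoch(w0)
print(n_batch, n_sgd)   # 1 update vs 2000 updates for one pass over the data
```

Batch GD's single update is one fast vectorized pass; SGD pays Python-loop overhead for 2,000 tiny updates but starts improving the weights immediately. So, as the comment says, which finishes sooner depends on dataset size, hardware, and implementation.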

  • @pradyumnsrivastava3845
    @pradyumnsrivastava3845 5 months ago

    Best explanation on gradient descent

  • @sameerabanu3115
    @sameerabanu3115 11 months ago +2

    Mind-blowing class

  • @pavantripathi1890
    @pavantripathi1890 3 months ago

    Thanks for the wonderful explanation!

  • @ParthivShah
    @ParthivShah 4 months ago +1

    Thank You Very Much Sir.

  • @SaiKiranAdusumilli
    @SaiKiranAdusumilli 1 year ago +4

    Awesome, man... big thanks. I was very confused about batch_size. After watching this video, I got to know that there is a relationship between batch size and gradient descent ❤️❤️ Now deep learning is getting interesting for me.

  • @Gudduyadav_1989
    @Gudduyadav_1989 1 month ago

    Excellent... 😊

  • @narendraparmar1631
    @narendraparmar1631 6 months ago

    Easy explanation, thank you.

  • @kuldeepsingh1121
    @kuldeepsingh1121 2 months ago

    I always like your video before watching it. ❤

  • @kindaeasy9797
    @kindaeasy9797 2 months ago

    What is the use of shuffling the dataset in SGD? Samples are being chosen randomly anyway. In mini-batch it makes sense.
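
On the shuffling question: in most implementations, "SGD" does not sample randomly with replacement; it shuffles the index order once per epoch and then walks through it. That is sampling without replacement, and it guarantees every sample is used exactly once per epoch. A small sketch of the difference (the two variants here are illustrative, not the video's code):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10  # toy dataset size

# Variant A: pure random picks (with replacement) — some rows may never be chosen
picked = rng.integers(0, n, size=n)

# Variant B: shuffle once per epoch — every row is visited exactly once
shuffled = rng.permutation(n)

print(sorted(set(picked)))   # may be missing some indices
print(sorted(shuffled))      # always the full [0..9]
```

Shuffling also breaks any ordering baked into the dataset (e.g. rows sorted by label), which would otherwise bias the sequence of updates.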

  • @khanmaheboob
    @khanmaheboob 4 months ago

    Sir, please provide short notes for this; it will be helpful for us.

  • @hossain9410
    @hossain9410 1 month ago

    How do we apply mini-batch gradient descent? Please show the implementation.
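
In case it helps, here is a minimal from-scratch mini-batch gradient descent for linear regression with MSE loss; the function name and hyperparameters are illustrative, not from the video:

```python
import numpy as np

def mini_batch_gd(X, y, lr=0.05, batch_size=32, epochs=50, seed=0):
    """Mini-batch gradient descent for linear regression (MSE loss)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        idx = rng.permutation(len(X))          # shuffle each epoch
        for start in range(0, len(X), batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            err = Xb @ w + b - yb              # residuals on this batch only
            w -= lr * 2 * Xb.T @ err / len(batch)
            b -= lr * 2 * err.mean()
    return w, b

# toy data: y = 3*x1 - 2*x2 + 1 plus a little noise
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1 + 0.01 * rng.standard_normal(500)
w, b = mini_batch_gd(X, y)
print(w, b)   # converges near [3, -2] and 1
```

With `batch_size=1` this reduces to SGD, and with `batch_size=len(X)` it reduces to batch GD, which is exactly the relationship the video describes.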

  • @elonmusk4267
    @elonmusk4267 2 months ago

    intuitive

  • @kindaeasy9797
    @kindaeasy9797 2 months ago

    19:15 I feel the results will be different for bigger datasets.

  • @rb4754
    @rb4754 3 months ago

    So good

  • @user-rw3rf1nl2b
    @user-rw3rf1nl2b 4 months ago

    Thank you so much sir

  • @user-ct3ok6dq9r
    @user-ct3ok6dq9r 6 months ago

    Sir, please suggest some deep learning projects for my college resume.

  • @ZuhaVibes
    @ZuhaVibes 11 months ago +1

    Sir, please compile your lectures into a book...
    I'd buy it.

  • @chandannelson
    @chandannelson 1 year ago +1

    Awesome

  • @AbcdAbcd-ol5hn
    @AbcdAbcd-ol5hn 1 year ago

    In a word: awesome, bhai!!

  • @belalvai6653
    @belalvai6653 8 months ago

    Nice video sir

  • @SaiSatya-t6v
    @SaiSatya-t6v 1 month ago

    Can I get the dataset, please?

  • @AkashBhandwalkar
    @AkashBhandwalkar 2 years ago +3

    Hey Nitish, can you please tell me which writing/tracking pad you used in your Machine Learning playlist videos?
    In the current ones you're using a Samsung tab. What about the previous videos?
    Also, could you please tell me its size?

  • @manishmaurya2365
    @manishmaurya2365 2 years ago +1

    Bhaiya, please add something to the ML projects playlist!!!

  • @dhirendrapratap9205
    @dhirendrapratap9205 2 years ago +2

    Sir, can you explain the difference between GD and the backpropagation algorithm? Both calculate derivatives (chain rule in neural nets) and then update weights. I know it's kind of a silly question to ask.

    • @atifali12191987
      @atifali12191987 2 years ago

      In backpropagation, gradient descent is used to reduce the calculated loss.

    • @sMKUMARSAISKLM
      @sMKUMARSAISKLM 1 year ago +2

      Backpropagation is a training method in which gradient descent is used.
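
To make the distinction in this thread concrete: backpropagation is the chain-rule procedure that *computes* the gradients, while gradient descent is the separate rule that *uses* those gradients to update the weights. A toy one-neuron sketch (illustrative only, not the video's code) that keeps the two steps in separate functions:

```python
import numpy as np

# One sigmoid neuron with squared-error loss: L = (sigmoid(w*x + b) - t)^2
def forward(w, b, x):
    z = w * x + b
    a = 1 / (1 + np.exp(-z))
    return z, a

def backward(w, b, x, t):
    # "backpropagation": chain rule yields dL/dw and dL/db
    z, a = forward(w, b, x)
    dL_da = 2 * (a - t)
    da_dz = a * (1 - a)
    dL_dz = dL_da * da_dz
    return dL_dz * x, dL_dz          # (dL/dw, dL/db)

def gd_step(w, b, grads, lr=0.5):
    # "gradient descent": consumes whatever gradients backprop produced
    dw, db = grads
    return w - lr * dw, b - lr * db

w, b, x, t = 0.5, 0.0, 1.0, 1.0
for _ in range(100):
    w, b = gd_step(w, b, backward(w, b, x, t))
_, a = forward(w, b, x)
print(a)   # close to the target 1.0
```

You could swap `gd_step` for Adam or momentum without touching `backward` at all, which is why the two are separate concepts.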

  • @KapilSharma56419
    @KapilSharma56419 11 months ago

    Best Explanation

  • @tusharbedse9523
    @tusharbedse9523 2 years ago +1

    You're great as always, Nitish!!
    Brother, hadn't you quit YouTube? Can anyone clarify for me what that video was actually about?

  • @life3.088
    @life3.088 2 years ago +2

    How can you share the OneNote notes with us?

  • @chetanchavan647
    @chetanchavan647 6 months ago

    Great video

  • @rajbir_singh0517
    @rajbir_singh0517 7 months ago

    fantastic....

  • @user-bt1vx7du3e
    @user-bt1vx7du3e 1 year ago

    Best explanation

  • @asmitpatel9746
    @asmitpatel9746 2 years ago

    Nice one, bhaiyya

  • @shubhankarsharma2221
    @shubhankarsharma2221 1 year ago

    Outstanding

  • @debojitmandal8670
    @debojitmandal8670 1 year ago +1

    Hi sir, I think you mixed up stochastic and batch when explaining the two graphs you plotted: batch will give you the less stable curve, and stochastic will give you the more stable, smooth graph.
    Because the more points you have, the smoother the graph; fewer points make it less smooth. And in batch you update w and b fewer times, so there are fewer points.
    So I didn't follow your logic.
    Also, sir, even stochastic gradient descent uses a dot product to calculate y_pred, so you are using a dot product in both places. Can you please explain why you say the dot product replaces the loop in batch GD, when you use a dot product in both algorithms?

    • @ali75988
      @ali75988 9 months ago

      He didn't. Google this topic on GeeksforGeeks: "ML | Stochastic Gradient Descent (SGD)".
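
On the dot-product point in the parent comment: both variants do use a dot product, but at different granularity. In SGD the product is row-by-row, inside a Python loop; in batch GD a single matrix-vector product computes every prediction at once, and that is what "replaces the loop". A quick check that the two produce identical numbers (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
w = rng.standard_normal(5)

# SGD-style: one small dot product per sample, inside a Python loop
preds_loop = np.array([X[i] @ w for i in range(len(X))])

# Batch-style: one matrix-vector product covering all samples at once
preds_vec = X @ w

print(np.allclose(preds_loop, preds_vec))   # True: same numbers, no per-row loop
```

The vectorized form is usually much faster in NumPy because the looping happens in compiled code rather than in the Python interpreter.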

  • @sanjaisrao484
    @sanjaisrao484 8 months ago

    Thanks 🎉

  • @ali75988
    @ali75988 9 months ago

    8:34 Will the loss for the 50 points be calculated together and averaged? Otherwise the loss will come out very large, I guess.

  • @jayantsharma2267
    @jayantsharma2267 2 years ago

    GREAT CONTENT

  • @ZuhaVibes
    @ZuhaVibes 11 months ago

    Thank you Man

  • @illusions8101
    @illusions8101 1 year ago

    Thanks sir

  • @rahulrajbhar3724
    @rahulrajbhar3724 2 years ago

    Sir, please make a Python interview series if possible.

  • @namanmodi7536
    @namanmodi7536 2 years ago +2

    25:20 😂🤣🤣🤣

  • @waheedweins
    @waheedweins 1 month ago


  • @ahmadtalhaansari4456
    @ahmadtalhaansari4456 1 year ago

    Revising my concepts.
    August 11, 2023😅

  • @yashjain6372
    @yashjain6372 1 year ago

    best

  • @codewithdanial1343
    @codewithdanial1343 2 years ago

    The very best 👌 sir

  • @bibhutibaibhavbora8770
    @bibhutibaibhavbora8770 1 year ago

    ❤❤❤

  • @rafibasha4145
    @rafibasha4145 2 years ago +1

    Please cover Adam and the other optimization techniques, and also the weight initialization process and learning rate decay.

  • @ShubhamSharma-gs9pt
    @ShubhamSharma-gs9pt 2 years ago +1

    Sir, I want to recommend your videos to my South Indian and international friends. Please make some videos in English only too. Thanks :)

    • @namanmodi7536
      @namanmodi7536 2 years ago +1

      No bhai, let him make them in Hindi. You can watch Krish sir's videos: ruclips.net/user/krishnaik06

  • @poojakumari2869
    @poojakumari2869 2 years ago

    ❤❤👏👏👏

  • @ajitnayak2919
    @ajitnayak2919 1 year ago

    Great explanation, sir

  • @YadavSachin01
    @YadavSachin01 2 years ago

    image processing

  • @life3.088
    @life3.088 2 years ago

    Kindly, sir

  • @youtubefanclub1595
    @youtubefanclub1595 2 months ago +1

    My Ambani 🦜🦜

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago

    Finished watching

  • @life3.088
    @life3.088 2 years ago

    Kindly, sir, if only we could get all your OneNote notes.

  • @znyd.
    @znyd. 3 months ago

    🤍

  • @shriqam
    @shriqam 2 years ago +1

    Please add English subtitles to your videos.

  • @zkhan2023
    @zkhan2023 2 years ago

    Thanks Sir