Gradient Descent in Neural Networks | Batch vs Stochastic vs Mini-Batch Gradient Descent

  • Published: 27 Jan 2025

Comments • 95

  • @ashishsinha5338 9 months ago +9

    I was looking for a perfect GD explanation in a short video, and here it is, literally the perfect one so far. Great job and effort, I must say. Keep making short videos like these.

  • @swatipandey3942 10 months ago +3

    You are an excellent teacher; you explain everything in detail so that concepts become clear. Thank you so much for your efforts. You are truly a gem.

  • @sohelshaikhh 4 months ago +4

    35:00 Here batch sizes are powers of 2 because that's how the GPU allocates memory chunks. If you set the batch size to 100, it would still reserve space for 128 and only use 100.
    Also, the allocated space that goes unused causes problems, since a GPU can only reserve a contiguous chunk, unlike a CPU, which can allocate chunks that are not necessarily contiguous.
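
    A minimal sketch of the power-of-two rounding described above (the helper name next_pow2 is illustrative, not from the video):

        def next_pow2(n: int) -> int:
            """Smallest power of two that is >= n."""
            return 1 << (n - 1).bit_length()

        print(next_pow2(100))  # 128: a batch of 100 can still occupy a 128-slot chunk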

  • @agrimkrishna166 1 month ago +6

    Batch gradient descent: an old man who walks with a stick, slowly but steadily towards his home.
    Stochastic: his grandchild, who is drunk but still heading home.
    Epochs: no. of steps
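
    In code terms, the three walkers differ in how many parameter updates one epoch performs; a minimal sketch (the row count and batch size are illustrative):

        import math

        n_rows, batch_size = 1000, 32

        updates_batch = 1                                # batch GD: one careful step per epoch
        updates_sgd = n_rows                             # SGD: many noisy steps per epoch
        updates_mini = math.ceil(n_rows / batch_size)    # mini-batch: in between

        print(updates_batch, updates_sgd, updates_mini)  # 1 1000 32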

  • @pradeepkumarverma5226 2 years ago +4

    Sir, the videos are really good and there is a lot to learn from them; just please keep the uploads regular, sir, we end up waiting a long time for your videos.

  • @PS_nestvlogs 2 months ago

    Very nicely explained. As I always feel and tell everyone, this is the best one-stop channel for data science: the content is real quality, and the way you explain, Nitish, makes it really easy for a beginner to understand even the complex topics. Thank you so much.

  • @Tariqkhan-i6v 5 months ago

    You are an excellent teacher; you explain everything in detail so that concepts become clear. Thank you so much for your efforts.

    • @hitanshramtani9076 4 months ago

      Can you explain what batch size is? Is it the number of rows?

    • @BaladityasaiChinni 3 months ago +1

      @@hitanshramtani9076 Yes, it is the number of rows (points) taken at a time: their losses are computed in one vectorised pass, and the parameters are updated once per batch.
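
      For instance, in Keras (assuming the notebook uses it; the data and model here are made up for illustration), batch_size is exactly the number of rows consumed per parameter update:

          import numpy as np
          from tensorflow import keras

          X = np.random.rand(1000, 5)
          y = np.random.rand(1000)

          model = keras.Sequential([keras.layers.Dense(1)])
          model.compile(optimizer="sgd", loss="mse")

          # 1000 rows with batch_size=32 -> ceil(1000/32) = 32 updates per epoch
          model.fit(X, y, epochs=5, batch_size=32)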

  • @talibdaryabi9434 1 year ago +3

    What a fantastic teaching method; great job.

  • @SHIVAMGUPTA-wb5mw 3 months ago

    Wow, I tried to understand gradient descent from 50 different places, and only now has it clicked.
    Great work

  • @kindaeasy9797 7 months ago +3

    16:24 But I feel that SGD will take less time, because if we apply batch GD it updates the weights using the whole dataset; for a dataset with 2 million entries that will take more time. Then again, SGD will also take a good amount of time in this case. I think we can't say which one will take more time in such cases; it depends on a lot of factors, so we can't clearly say that BGD will take less time.
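
    One way to test the timing claim empirically: per epoch, both variants touch every row, but batch GD does it in one vectorised pass while SGD pays per-row overhead n times. A minimal sketch on synthetic linear-regression data (sizes and learning rate are illustrative):

        import time
        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 200_000, 10
        X, y = rng.normal(size=(n, d)), rng.normal(size=n)
        w, lr = np.zeros(d), 0.01

        t0 = time.perf_counter()              # batch GD: one vectorised update per epoch
        w_batch = w - lr * 2 * X.T @ (X @ w - y) / n
        t_batch = time.perf_counter() - t0

        t0 = time.perf_counter()              # SGD: n single-row updates per epoch
        w_sgd = w.copy()
        for i in range(n):
            w_sgd -= lr * 2 * X[i] * (X[i] @ w_sgd - y[i])
        t_sgd = time.perf_counter() - t0

        print(f"one epoch -- batch: {t_batch:.3f}s, SGD: {t_sgd:.3f}s")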

  • @aishwaryap.s.v.s7387 1 year ago +2

    I thank you from the bottom of my heart. I am a data engineer looking for a career transition, and you made the concepts very clear. Thank you!

  • @vishnusit1 28 days ago

    Excellent, man... I salute you. Brilliantly explained the concepts behind these optimization techniques.

  • @anitaga469 2 years ago +1

    Good Content, Great Explanation and an exceptionally gifted teacher. Learning is truly made enjoyable by your videos. Thank you for your hard work and clear teaching Nitish Sir.

  • @abdulwahabkhan1086 3 months ago

    The explanation of that little nuance about which is faster, SGD or BGD ❤❤

  • @ManasNandMohan 7 months ago +23

    Sir, we are finding it difficult to crack data science placements, so please make a dedicated 100-days series of placement tricks & tips. I request everyone to upvote this so that sir notices it.

  • @pradyumnsrivastava3845 10 months ago

    Best explanation on gradient descent

  • @mohdimransiddiqui2566 3 months ago

    Great learning 😀😀😀

  • @Learnopia1 3 months ago

    While updating the weights of the first layer in SGD, we use the weights of the last layer, so which values do we take: the old ones or the already-updated ones?
    Please answer this doubt of mine.
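
    For what it's worth, the usual convention is that all gradients are computed from the old weights of the forward pass, and only then is every layer updated together. A minimal two-layer sketch (shapes and names are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        x, y = rng.normal(size=(4, 3)), rng.normal(size=(4, 1))
        W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))
        lr = 0.01

        h = np.tanh(x @ W1)                  # forward pass with the old weights
        y_hat = h @ W2

        d_out = 2 * (y_hat - y) / len(x)     # backward pass: W1's gradient uses the OLD W2
        gW2 = h.T @ d_out
        gW1 = x.T @ ((d_out @ W2.T) * (1 - h ** 2))

        W1 -= lr * gW1                       # both layers updated only after all grads exist
        W2 -= lr * gW2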

  • @chomskyinequality 26 days ago

    great, thanks a lot bro, quite detailed explanation!

  • @divyab592 8 months ago

    Wow!! A clear explanation I never got anywhere else.

  • @rockykumarverma980 1 month ago

    Thank you so much sir 🙏🙏🙏

  • @knowfact2 5 days ago

    From where can we get this Colab notebook?

  • @gradentff39 1 year ago +7

    This video contains good content, so please change the thumbnail and make it more attractive, like your photo with the topic in the background.

  • @kindaeasy9797 7 months ago

    What is the use of shuffling the dataset in SGD? Samples are being chosen randomly anyway. In mini-batch it makes sense.
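
    Shuffling matters because the usual SGD implementation sweeps the rows in order once per epoch rather than sampling with replacement; reshuffling keeps each epoch's visit order random. A minimal sketch (data and learning rate are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 1))
        y = 3 * X[:, 0] + rng.normal(scale=0.1, size=100)
        w, lr = 0.0, 0.01

        for epoch in range(5):
            order = rng.permutation(len(X))   # fresh row order every epoch
            for i in order:                   # one weight update per row
                err = w * X[i, 0] - y[i]
                w -= lr * 2 * err * X[i, 0]

        print(w)  # close to the true slope of 3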

  • @sameerabanu3115 1 year ago +2

    Mind-blowing class

  • @SaiKiranAdusumilli 2 years ago +4

    Awesome, man... big thanks. I was very confused about batch_size. After watching this video I came to know that there is a relationship between batch size and gradient descent ❤️❤️ Now deep learning is getting interesting for me.

  • @Vaishnavi-dz6ef 10 days ago

    Yes sir, very very helpful, thanks a lot.

  • @life3.088 2 years ago +2

    How can you share your OneNote notes with us?

  • @pavantripathi1890 8 months ago

    Thanks for the wonderful explanation!

  • @kuldeepsingh1121 7 months ago

    I always like your videos before watching them. ❤

  • @narendraparmar1631 11 months ago

    Easy explanation, thank you.

  • @ali75988 1 year ago

    8:34 Will the loss of the 50 points be calculated together and then averaged? Otherwise the loss would come out very large, I guess.
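
    That is the usual convention: the batch loss is the mean of the per-row losses, so it does not grow with the batch size. A minimal sketch, assuming MSE as the loss:

        import numpy as np

        rng = np.random.default_rng(0)
        y_true, y_pred = rng.random(50), rng.random(50)

        loss = np.mean((y_pred - y_true) ** 2)   # averaged over the 50 points, not summed
        print(loss)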

  • @ParthivShah 9 months ago +1

    Thank You Very Much Sir.

  • @hossain9410 6 months ago

    How do we apply mini-batch gradient descent? Please show the implementation.
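
    The video's notebook isn't linked in this thread, but a minimal NumPy sketch of mini-batch gradient descent for linear regression might look like this (all names and hyperparameters are illustrative):

        import numpy as np

        rng = np.random.default_rng(42)
        X = rng.normal(size=(1000, 3))
        y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)

        w, b = np.zeros(3), 0.0
        lr, batch_size, n_epochs = 0.05, 32, 20

        for epoch in range(n_epochs):
            order = rng.permutation(len(X))            # shuffle once per epoch
            for start in range(0, len(X), batch_size):
                idx = order[start:start + batch_size]  # one mini-batch of rows
                err = X[idx] @ w + b - y[idx]          # vectorised over the batch
                w -= lr * 2 * X[idx].T @ err / len(idx)
                b -= lr * 2 * err.mean()

        print(w, b)  # w approaches [1.5, -2.0, 0.5], b approaches 0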

  • @Gudduyadav_1989 5 months ago

    Excellent... 😊

  • @AkashBhandwalkar 2 years ago +3

    Hey Nitish, can you please tell me which writing pad you used in your Machine Learning playlist videos?
    In the current ones you're using a Samsung tab. What about the previous videos?
    Also, could you please tell me its size too?

  • @kindaeasy9797 7 months ago

    19:15 I feel like the results will be different in the case of bigger datasets.

  • @ABHISHEKKUMAR-i2b7t 11 months ago

    Sir, please suggest me some deep learning projects for my college resume.

  • @dhirendrapratap9205 2 years ago +2

    Sir, can you explain the difference between GD and the backpropagation algorithm? Both seem to calculate derivatives (via the chain rule in neural nets) and then update weights. I know it's kind of a silly question to ask.

    • @atifali12191987 2 years ago

      In backpropagation, gradient descent is used on the calculated loss.

    • @sMKUMARSAISKLM 2 years ago +2

      Backpropagation is part of the training method in which gradient descent is used: backprop computes the gradients via the chain rule, and gradient descent uses them to update the weights.
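
      To make the division of labour concrete, here is a minimal sketch (PyTorch is assumed purely for illustration): backpropagation fills in the gradients, and the gradient descent step consumes them:

          import torch

          x, y = torch.randn(32, 3), torch.randn(32, 1)
          model = torch.nn.Linear(3, 1)
          opt = torch.optim.SGD(model.parameters(), lr=0.01)

          loss = ((model(x) - y) ** 2).mean()
          loss.backward()   # backpropagation: chain rule computes p.grad per parameter
          opt.step()        # gradient descent: p = p - lr * p.grad
          opt.zero_grad()   # reset gradients before the next iteration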

  • @belalvai6653 1 year ago

    Nice video sir

  • @chandannelson 1 year ago +1

    Awesome

  • @AbcdAbcd-ol5hn 1 year ago

    As they say: fantastic, brother!!

  • @rb4754 7 months ago

    So good

  • @ZuhaVibes 1 year ago +1

    Sir, please compile your lectures into a book...
    I need to buy it.

  • @SiyaRajput-k8c 8 months ago

    Thank you so much sir

  • @SaiSatya-t6v 6 months ago

    Can I get the dataset, please?

  • @khanmaheboob 9 months ago

    Sir, please provide short notes for this; they would be very helpful for us.

  • @debojitmandal8670 1 year ago +1

    Hi sir, I think you mixed up stochastic and batch when you were explaining the two graphs you plotted: batch will give you the less stable curve and stochastic the more stable, smoother one. The more points you have, the smoother the graph; with fewer points it is less smooth, and in batch you update w and b fewer times, so there are fewer points. So I didn't follow your logic.
    Also, sir, even stochastic gradient descent uses a dot product to calculate y_pred, so you are using a dot product in both places. Can you please explain why you say the dot product replaces the loop in batch GD, when you use a dot in both algorithms?

    • @ali75988 1 year ago

      He didn't. Google this topic on GeeksforGeeks: "ML | Stochastic Gradient Descent (SGD)".
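
      On the dot-product point: in SGD each dot product is between a single row and the weights, while in batch GD one matrix-vector product replaces the whole loop over rows. A minimal sketch (shapes are illustrative):

          import numpy as np

          rng = np.random.default_rng(0)
          X, w = rng.random((1000, 3)), rng.random(3)

          # SGD-style: a Python loop of 1000 small row-by-weights dot products
          y_pred_loop = np.array([X[i] @ w for i in range(len(X))])

          # batch-style: one matrix-vector product handles all 1000 rows at once
          y_pred_vec = X @ w

          print(np.allclose(y_pred_loop, y_pred_vec))  # True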

  • @namanmodi7536 2 years ago +2

    25:20 😂🤣🤣🤣

  • @KapilSharma56419 1 year ago

    Best Explanation

  • @tusharbedse9523 2 years ago +1

    You are great as always, Nitish!!
    Brother, you had quit YouTube, right? Can anyone clear up for me what that video was actually about?

  • @asmitpatel9746 2 years ago

    Nice one, brother

  • @chetanchavan647 11 months ago

    Great video

  • @elonmusk4267 6 months ago

    intuitive

  • @sanjaisrao484 1 year ago

    Thanks 🎉

  • @rajbir_singh0517 11 months ago

    fantastic....

  • @shubhankarsharma2221 1 year ago

    Outstanding

  • @akashzingade 1 year ago

    best explanation

  • @jayantsharma2267 2 years ago

    GREAT CONTENT

  • @manishmaurya2365 2 years ago +1

    Brother, please add something to the ML projects playlist!!!

  • @bibhutibaibhavbora8770 1 year ago

    ❤❤❤

  • @ZuhaVibes 1 year ago

    Thank you Man

  • @zkhan2023 2 years ago

    Thanks Sir

  • @rahulrajbhar3724 2 years ago

    Sir, please make a Python interview series, if possible.

  • @poojakumari2869 2 years ago

    ❤❤👏👏👏

  • @ajitnayak2919 2 years ago

    Great explanation, sir

  • @yashjain6372 1 year ago

    best

  • @codewithdanial1343 2 years ago

    Very best 👌 sir

  • @rafibasha4145 2 years ago +1

    Please cover Adam and the other optimization techniques, and also the weight initialization process and learning rate decay.

  • @ShubhamSharma-gs9pt 2 years ago +1

    Sir, I want to recommend your videos to my South Indian and international friends. Please make some videos in English too. Thanks :)

    • @namanmodi7536 2 years ago +1

      No brother, let him make them in Hindi only; you can watch Krish sir's videos: ruclips.net/user/krishnaik06

  • @ahmadtalhaansari4456 1 year ago

    Revising my concepts.
    August 11, 2023😅

  • @YadavSachin01 2 years ago

    image processing

  • @sandipansarkar9211 2 years ago

    finished watching

  • @life3.088 2 years ago

    Kindly, sir, if only we could get all your OneNote notes.

  • @shriqam 2 years ago +1

    Please add English subtitles to your videos.

  • @znyd. 8 months ago

    🤍

  • @youtubefanclub1595 7 months ago +1

    Ambani mere 🦜🦜

  • @rupendraofficial4325 3 months ago

    💓

  • @illusions8101 1 year ago

    Thanks sir