How Backpropagation works in RNN | Backpropagation Through Time

  • Published: 11 Jan 2025

Comments • 105

  • @prodigy4836
    @prodigy4836 1 year ago +111

    I am currently a working AI engineer at a computer vision company. I have taken multiple deep learning courses throughout my journey, ranging from MIT and Stanford lectures to regular YouTube channels. And I am being completely honest here that I have found your way of teaching and explanations to be the MOST intuitive and easy to understand. Please keep up the good work. You are helping loads of students out there who, like me, find it difficult to grasp such complex concepts. Cheers!

  • @usmanyousaaf
    @usmanyousaaf 11 months ago +59

    Stanford ❌ campus x ✅

  • @srijandeo1620
    @srijandeo1620 1 year ago +24

    Andrew Ng would be proud of you.. keep up the good work

  • @danielgelfond8491
    @danielgelfond8491 1 year ago +5

    Even though I didn't understand half of the words, because English is not my first language, this video was so good and intuitive on the subject that it outmatches so many other articles on the internet. Truly a gem. Great job, and thank you.

  • @karanhadiyal65
    @karanhadiyal65 1 year ago +7

    Sir, your videos are amazing; you boil down complex topics with intuitive understanding. Thanks a lot...

  • @MuhammadAmirMoazzamKhanNiazi
    @MuhammadAmirMoazzamKhanNiazi 7 days ago

    Awesome, man... the best DL lectures I have found on YouTube so far, and very easy to understand.

  • @shakilkhan4306
    @shakilkhan4306 1 year ago +4

    You care about us a lot...
    Thanks, man.. God bless you

  • @memorable_bookmarks
    @memorable_bookmarks 2 years ago +13

    Sir, you are a blessing to us learners.. Eagerly waiting for the video; thanks for uploading, and hats off to your commitment.

  • @muralik98
    @muralik98 7 months ago +1

    Nitish sir always explains what people generally consider obvious but which in reality is not. He touches the nitty-gritty of every topic.

  • @narendraparmar1631
    @narendraparmar1631 10 months ago +3

    Thanks for all the hard work you're putting in to explain this.

  • @AryanSingh-fe5uy
    @AryanSingh-fe5uy 1 year ago +4

    Sir, you are a true gem. Thanks a lot for this state-of-the-art teaching 🙂

  • @deekshakushwaha-n4m
    @deekshakushwaha-n4m 3 months ago +1

    Sir, truly I love you so much from the depth of my heart. I respect you a lot; your videos just make me fall in love with AI subjects. Thank you so much for this, thank you, thank you so much for doing such selfless work for us and making these videos. Truly, sir, so much respect for you.

  • @rovo-k
    @rovo-k 2 years ago +3

    Sir, you say "today's video is really important" in every video; that's why I've watched them all 😅
    Thank you so much for all these videos.

  • @because2022
    @because2022 1 month ago

    Great way of teaching complicated concepts in an intuitive way. Amazing work. Keep rocking.

  • @rafibasha4145
    @rafibasha4145 2 years ago +4

    Thanks, bro; please be consistent with this playlist.

  • @anime_aura_01
    @anime_aura_01 1 year ago +1

    Best explanation on the whole of YouTube.

  • @rohitrannavre9382
    @rohitrannavre9382 2 years ago +6

    Sir, please complete this playlist as soon as possible.

  • @abrarkhan9462
    @abrarkhan9462 2 years ago +4

    Doing a great job, sir.

  • @mahfuzraihan8690
    @mahfuzraihan8690 7 months ago

    This is an outstanding explanation; I found it very easy and understandable. Thanks again.

  • @miscellaneoushub1956
    @miscellaneoushub1956 2 years ago +3

    Awesome video.
    Sir, one video please on LSTM and GRU intuition with hyperparameter tuning.

  • @_AmbujJaiswal
    @_AmbujJaiswal 8 months ago

    God-level content..... mad respect, broooo

  • @pravinshende.DataScientist
    @pravinshende.DataScientist 2 years ago +1

    Best one.. Thank you, Nitish sir.

  • @pranavgandhiprojects
    @pranavgandhiprojects 6 months ago

    You explained it so well! Thank you so much; I am in love with your channel!

  • @AbhijeetKumar-mc8fr
    @AbhijeetKumar-mc8fr 9 months ago

    Brilliant, man! Awesome!
    God bless you!

  • @garimadhanania1853
    @garimadhanania1853 8 months ago

    Really amazing content; appreciate all your efforts! So comprehensive, man, it's a joy to learn from you :)

  • @coder10796
    @coder10796 1 year ago +1

    Sir, at 10:20 you have not added +b in o1.

  • @ketanshinde583
    @ketanshinde583 2 years ago +1

    Sir, your channel is growing rapidly; glad to see that 😁

  • @rohitpotluri4378
    @rohitpotluri4378 2 months ago

    Amazing tutorial. CFBR!

  • @PavanTripathi-rj7bd
    @PavanTripathi-rj7bd 1 year ago +1

    Best explanation!

  • @dakshbhatnagar
    @dakshbhatnagar 2 years ago +1

    I am working on AI development for a real-world side project and the requirement there is RNN; in desperate need of these RNN videos.

  • @pranjalthakur3065
    @pranjalthakur3065 9 months ago

    This playlist is so amazing 👏👏

  • @vighneshgaikwad575
    @vighneshgaikwad575 1 year ago +1

    Sir, sorry, but at 17:39 I don't understand why we are branching out on O3 instead of Wo, because O3 comes after Wo and it is dependent on Wh, O2, X3, Wi. My only point is that branching from Wo makes more sense than from O3. Please, someone correct me if I am wrong here.

    • @AmartyaTalukdarch21b012
      @AmartyaTalukdarch21b012 1 month ago

      That is because we need to calculate the derivative of L w.r.t. Wi, hence we are trying to find all the terms that depend on Wi. Wo does not depend on Wi, hence we do not consider it. However, O3 does depend on Wi, hence we branch it out to reach Wi. On branching, it can be seen that not only O3 but also O2 depends on Wi, hence we branch O2 further, and this process goes on until we find no element that depends on Wi.
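The dependency chain this reply describes (L reaches Wi through O3, O3 through O2, and so on until nothing depends on Wi) can be sketched in a few lines of NumPy. The names Wi, Wh, Wo follow the comment above; the tanh activation, squared-error loss, and the tiny random dimensions are illustrative assumptions, not the video's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 3, 4                         # three time steps, hidden size 4
Wi = rng.normal(size=(d, d)) * 0.5  # input-to-hidden weights
Wh = rng.normal(size=(d, d)) * 0.5  # hidden-to-hidden weights
Wo = rng.normal(size=(d, 1)) * 0.5  # hidden-to-output weights
xs = rng.normal(size=(T, d))        # inputs X1..X3
y = 1.0                             # target

def forward(Wi):
    """Unrolled RNN: O_t = tanh(X_t Wi + O_{t-1} Wh), loss = 0.5 (y - O_3 Wo)^2."""
    os_, o = [], np.zeros(d)
    for t in range(T):
        o = np.tanh(xs[t] @ Wi + o @ Wh)  # O_t depends on Wi directly AND via O_{t-1}
        os_.append(o)
    y_hat = (os_[-1] @ Wo).item()
    return 0.5 * (y - y_hat) ** 2, os_, y_hat

# Analytic BPTT: start from dL/dO3 and walk back O3 -> O2 -> O1,
# collecting Wi's direct contribution at every step.
L, os_, y_hat = forward(Wi)
dL_do = (y_hat - y) * Wo[:, 0]            # dL/dO3 (the Wo branch is not expanded further)
dWi = np.zeros_like(Wi)
for t in reversed(range(T)):
    dpre = dL_do * (1 - os_[t] ** 2)      # back through tanh
    dWi += np.outer(xs[t], dpre)          # direct dependence of O_t on Wi
    dL_do = dpre @ Wh.T                   # hand the gradient on to O_{t-1}

# Central-difference check of one entry of dL/dWi
eps = 1e-6
Wp, Wm = Wi.copy(), Wi.copy()
Wp[0, 0] += eps
Wm[0, 0] -= eps
num = (forward(Wp)[0] - forward(Wm)[0]) / (2 * eps)
print(abs(dWi[0, 0] - num) < 1e-6)        # → True: BPTT matches the numerical gradient
```

Note that the loop accumulates into dWi once per time step, which is exactly the "keep branching until nothing depends on Wi" argument in the reply.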

  • @anishkhatiwada2502
    @anishkhatiwada2502 1 year ago +1

    He is the Indian Andrew Ng.

  • @AakashGoyal25
    @AakashGoyal25 5 months ago

    Best explanation :)

  • @pavangoyal6840
    @pavangoyal6840 1 year ago +1

    Excellent!!

  • @AnuragGupta-hb9du
    @AnuragGupta-hb9du 2 years ago +2

    Nitish Sir,
    When will you upload the next deep learning video? Sir, you are simply the best.

  • @pungliyavithika843
    @pungliyavithika843 1 year ago +1

    Best video.

  • @ronylpatil
    @ronylpatil 2 years ago +1

    Sir, please keep uploading videos on RNN.

  • @deepak_kori
    @deepak_kori 1 year ago +1

    You are the best, sir.

  • @talibdaryabi9434
    @talibdaryabi9434 1 year ago +3

    I wish the series could be complete. I am almost finished with the videos and am just wondering how I can learn the remaining topics without your videos.

  • @osho_magic
    @osho_magic 2 years ago +1

    Last year you mentioned a time series playlist; I haven't found it yet.

  • @rahulsbytes
    @rahulsbytes 6 months ago

    So, do we have to store the activation values of each step, o1, o2, ..., so that we can make the calculations?

  • @aniketjoshi7817
    @aniketjoshi7817 2 years ago +1

    Sir, we are all waiting for the next video. Could you upload it as soon as possible?

  • @Guruji-vz8eg
    @Guruji-vz8eg 4 months ago

    Y hat = sigma(oiwi), but I think you have written sigma(o3wo) by mistake, correct? (11:48)

  • @piyushpathak7311
    @piyushpathak7311 2 years ago +1

    Please upload one video daily to the deep learning playlist, please.

  • @Sidcr07
    @Sidcr07 2 years ago +2

    Sir, please increase the frequency of videos; at this rate I don't know how much time will be required to complete it.

  • @ParthivShah
    @ParthivShah 8 months ago +1

    Thank you, sir.

  • @kindaeasy9797
    @kindaeasy9797 5 months ago

    14:18 You haven't added bias to the output equations.

  • @amritajoshi8729
    @amritajoshi8729 5 months ago

    Sir, model.predict(sequences) throws an error in the embedding part. I don't know why.

  • @mrityunjayupadhyay7332
    @mrityunjayupadhyay7332 1 year ago +1

    Nice

  • @te_b4_73_sushant_yelurkar4
    @te_b4_73_sushant_yelurkar4 2 years ago +1

    Please complete the deep learning playlist.

  • @maheshdhaker112
    @maheshdhaker112 2 years ago +1

    🙏 Sir, please make a complete intuitive video on LSTM.

  • @usamakhan1795
    @usamakhan1795 2 years ago +1

    Please tell the time complexity of the backpropagation algorithm?

  • @darshanayenkar
    @darshanayenkar 2 years ago +1

    Sir, please complete this playlist, and please explain LSTM.

  • @paragbharadia2895
    @paragbharadia2895 4 months ago +1

    An RNN layer is like a group discussion: after the discussion, everyone passes their output forward.. haha

  • @hey.Sourin
    @hey.Sourin 8 months ago

    Thank you so much, sir!!!

  • @Faiz2k3
    @Faiz2k3 1 year ago +1

    Please share the link to this notebook for future reference.

  • @omkarfadtare3054
    @omkarfadtare3054 2 years ago +1

    Please upload the next videos.

  • @riyagupta1841
    @riyagupta1841 2 years ago +1

    Sir, please upload the next videos.

  • @rishabhsharma1685
    @rishabhsharma1685 10 months ago

    He pronounced "del" (d) so many times that, while watching the video, I went to sleep and dreamt of working at the DELL company.

  • @ritikgupta4175
    @ritikgupta4175 5 months ago

    Sir, why are you not considering the biases?

  • @mathclass1717
    @mathclass1717 1 year ago +1

    Sir, please make a video on LSTM 🙏🙏🙏🙏🙏🙏🙏

  • @Noob31219
    @Noob31219 2 years ago +6

    How do you manage your time, sir? 😅 Please make a video on time management 😁

    • @campusx-official
      @campusx-official  2 years ago +21

      It's just that I can't find the time to make a video on time management.

    • @osho_magic
      @osho_magic 2 years ago

      @@campusx-official 😂😂😂😂😂

    • @its._ankur
      @its._ankur 2 years ago

      @@campusx-official 🤣🤣🤣

    • @taufiq-ai
      @taufiq-ai 2 years ago

      @@campusx-official Dada rocks.

    • @dilkashejaz9072
      @dilkashejaz9072 5 months ago

      ​@@campusx-official 😂

  • @zainulabideen9758
    @zainulabideen9758 2 years ago +1

    Which day will the next video be uploaded?

  • @vamshiharshik6495
    @vamshiharshik6495 2 years ago +2

    Sir, please make a video on how to convert ML code into federated code, as there are no resources on YouTube. Please, sir.

  • @Naman_Bansal102
    @Naman_Bansal102 6 months ago

    Wow, sir.

  • @yashshrivastava1612
    @yashshrivastava1612 2 years ago +4

    Hi Nitish,
    I have been following your channel and have covered almost everything in ML and deep learning so far.
    I understand your time constraints.
    Can you help us with the topics left to cover after this in the data science journey, so we can learn them from other books and notes, since we have to complete the syllabus?

  • @zainulabideen9758
    @zainulabideen9758 2 years ago +3

    Brother, will the deep learning playlist be complete by the end of December?

    • @campusx-official
      @campusx-official  2 years ago

      Sorry, it won't be possible. It will take more time. Will try to cover RNN before Dec.

    • @muskangupta4864
      @muskangupta4864 2 years ago

      @@campusx-official Sir, by when will it be done? 🥺🥺 It's one of the best deep learning playlists and I am completely dependent on it 🥺🥺🥺 Please, sir, upload the videos as quickly as you can, and never stop uploading the deep learning videos.

    • @near_.
      @near_. 1 year ago

      @@muskangupta4864 If you find more videos in the DL series, please let me know. I am in the learning phase. 😊🙏

  • @yashjain6372
    @yashjain6372 1 year ago +1

    Best

  • @sachinbagale563
    @sachinbagale563 9 months ago +2

    Stanford who?

  • @rupeshkataria81
    @rupeshkataria81 2 years ago +1

    Sir, please upload the video.

  • @muhammadishaq-co4fu
    @muhammadishaq-co4fu 2 years ago +1

    Sir, hope you are feeling good.
    Sir, please tell me how to have a question-and-answer session with you.

  • @amritajoshi8729
    @amritajoshi8729 5 months ago

    Vocabulary size: len(tokenizer.word_index) gives the number of unique tokens (words) found in your text. If your text contains 10 unique words, len(tokenizer.word_index) will be 10.
    Padding token: Keras uses a special token (usually represented as 0) for padding. This token is not part of your vocabulary but is essential to ensure that all sequences have the same length. Therefore, we need to reserve an extra index for this padding token. So instead of 17 it should be 17 + 1 = 18 inside the embedding function, because there are now 18 unique representations of words.
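The point in this comment can be sketched without Keras. Below is a minimal pure-Python imitation of the Tokenizer's indexing behaviour (word indices start at 1; 0 is reserved for padding); the tiny corpus and the embedding dimension of 8 are made up for illustration:

```python
from collections import Counter
import numpy as np

texts = ["the cat sat", "the dog sat on the mat"]

# Mimic keras Tokenizer.word_index: most frequent word gets index 1, never 0.
counts = Counter(w for t in texts for w in t.split())
word_index = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

vocab_size = len(word_index) + 1  # +1 reserves index 0 for padding
sequences = [[word_index[w] for w in t.split()] for t in texts]

# Pad on the left with 0 so all sequences share the longest length.
maxlen = max(len(s) for s in sequences)
padded = [[0] * (maxlen - len(s)) + s for s in sequences]

# The embedding table must therefore have vocab_size rows (row 0 = padding),
# i.e. Embedding(input_dim=vocab_size, output_dim=8) in Keras terms.
embedding = np.zeros((vocab_size, 8))
vectors = embedding[np.array(padded)]  # lookup works for every padded index
print(vocab_size, vectors.shape)
```

With only len(word_index) rows, the highest word index would fall off the end of the table, which is exactly why the comment's 17 must become 17 + 1 = 18.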

  • @Aizaz_ali_shah
    @Aizaz_ali_shah 1 year ago +1

    Next, please!

  • @DRAGON-zf5qw
    @DRAGON-zf5qw 5 months ago +1

  • @abhishek_raghav
    @abhishek_raghav 1 year ago +1

    Sir, you should now rename yourself
    'MahaTeacher'.

  • @taufiq-ai
    @taufiq-ai 2 years ago +1

    Brother, 8%

  • @pratikpal5565
    @pratikpal5565 2 years ago

    Sir, no video today?

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago +1

    Finished watching.

  • @princekhunt1
    @princekhunt1 7 months ago

    🤯

  • @DeepakKumar-xo1hk
    @DeepakKumar-xo1hk 2 years ago +1

    test