PyTorch Tutorial - RNN & LSTM & GRU - Recurrent Neural Nets

  • Published: 28 Nov 2024

Comments • 111

  • @teetanrobotics5363
    @teetanrobotics5363 4 years ago +28

    The more code you explain, the more I love this channel. Just amazing. Keep it up.

    • @patloeber
      @patloeber  4 years ago +2

      Thanks 😊 really glad you enjoy it

  • @phatarapransaraluck3196
    @phatarapransaraluck3196 3 years ago +4

    I love you. Very clear explanation. I have been looking for this content for a while.

  • @yijiesun6098
    @yijiesun6098 1 year ago

    This video gives me a very clear picture of implementing an RNN with PyTorch. I really appreciate it!

  • @rigeltal8212
    @rigeltal8212 2 years ago +1

    Man.. You are really good at explaining.. Finally understood RNN, LSTM and GRU implementation from your video and the official documentation.

  • @jinyoungchoi3443
    @jinyoungchoi3443 3 years ago

    Thanks so much! I watched through all of your PyTorch tutorials, and it is the best PyTorch tutorial on YouTube!!

    • @patloeber
      @patloeber  3 years ago

      thanks! Glad you like it :)

  • @남노성민
    @남노성민 4 years ago +2

    I spent an hour wandering the internet struggling because my LSTM dimensions didn't match, and this video cleared it up. Good.

  • @MrAstor69
    @MrAstor69 2 years ago

    Very nice tutorial for PyTorch. Thanks for the initial code.

  • @username42
    @username42 4 years ago +1

    I am sure you will get 10K very soon, then 100K, and then 1M :) keep the vibe, dude

  • @AladdinPersson
    @AladdinPersson 4 years ago +6

    Nice video Patrick and big congratulations on 10k subscribers, there's been a lot of hard work for you to get to that point! I'm sure the best is yet to come too :) I know we make quite similar videos, but I am very happy that is the case because it drives me to make better videos and I learn a lot from you as well 👊 Also, the more people doing videos about PyTorch, TensorFlow and machine learning in general, the better it will be for people wanting to learn about these things, which is ultimately the goal.

    • @patloeber
      @patloeber  4 years ago +1

      Thank you! I just discovered your channel a few weeks back. Yes some topics are very similar, but I think this is good. I really enjoy your teaching style, and I hope you keep going! I'm sure you will also gain more followers soon! Would you be interested in some kind of collaboration in the future?
      Best,
      Patrick

    • @AladdinPersson
      @AladdinPersson 4 years ago +1

      @@patloeber For sure, if we find something we can collaborate on! :)

    • @syedmuhammaddanish8511
      @syedmuhammaddanish8511 4 months ago

      @AladdinPersson Bro, so did you collaborate? :)

  • @gradientO
    @gradientO 4 years ago

    From freeCodeCamp!! Thanks for your content man!

  • @jaaaaaang
    @jaaaaaang 1 year ago

    Your example is clear!! Thanks

  • @HeyImAK
    @HeyImAK 3 years ago +1

    Your tutorials are incredible! thank you so much!

  • @dipanwitamallick8793
    @dipanwitamallick8793 4 years ago

    I was waiting for you to upload some videos on deep learning... thanks so much!!!

  • @hpaghababa8111
    @hpaghababa8111 3 years ago +1

    Great tutorial. Thanks! Keep doing your job.
    I would appreciate it even more if you could add a small part to the video explaining how to implement the "many to many" case too.
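
    For context on the request above: a minimal, hedged sketch of one way to do "many to many" — keep the full output sequence and apply the linear layer at every time step instead of only the last one. The class name and dimensions here are illustrative assumptions, not from the video:

        import torch
        import torch.nn as nn

        class ManyToManyRNN(nn.Module):
            def __init__(self, input_size, hidden_size, num_layers, num_classes):
                super().__init__()
                self.hidden_size = hidden_size
                self.num_layers = num_layers
                self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
                self.fc = nn.Linear(hidden_size, num_classes)

            def forward(self, x):
                # x: (batch, seq_len, input_size)
                h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size, device=x.device)
                out, _ = self.rnn(x, h0)  # out: (batch, seq_len, hidden_size)
                return self.fc(out)       # (batch, seq_len, num_classes): one prediction per time step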

  • @waqasahmed8973
    @waqasahmed8973 4 years ago

    Thank you very much, Patrick!

  • @zhengguanwang4337
    @zhengguanwang4337 2 years ago +1

    Perfect courses!!!!!!!

  • @nasser-eddinemonir8443
    @nasser-eddinemonir8443 3 years ago +8

    Thank you for your great videos.
    15:02 little question: aren't we supposed to initialize the cell_state and the hidden_state (t-1) at each epoch instead of at each lstm_cell (inner loop)? Otherwise, the cell_state, which is supposed to play the memory role, will be useless...
    Thanks!

    • @SmartHDesigner
      @SmartHDesigner 1 year ago

      Bumping this question ;D

    • @googm
      @googm 1 year ago

      Why would you need to retain memory across batches in an image classification task?
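
      For context on this thread: a minimal sketch of the pattern being discussed, assuming the tutorial's MNIST setup — each batch of images is independent, so h0/c0 are freshly zeroed on every forward pass rather than carried between batches:

          import torch
          import torch.nn as nn

          class LSTMClassifier(nn.Module):
              def __init__(self, input_size, hidden_size, num_layers, num_classes):
                  super().__init__()
                  self.hidden_size = hidden_size
                  self.num_layers = num_layers
                  self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
                  self.fc = nn.Linear(hidden_size, num_classes)

              def forward(self, x):
                  # Fresh zero states on every forward pass: each image is an
                  # independent sample, so no memory is carried across batches.
                  h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size, device=x.device)
                  c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size, device=x.device)
                  out, _ = self.lstm(x, (h0, c0))  # out: (batch, seq_len, hidden_size)
                  return self.fc(out[:, -1, :])    # classify from the last time step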

  • @SyntharaPrime
    @SyntharaPrime 2 years ago

    great work. Thank you so much

  • @zhengguanwang4337
    @zhengguanwang4337 2 years ago +1

    Do you have a tutorial on hyperparameters for RNNs? That would be great!!!!

  • @prajwol_poudel
    @prajwol_poudel 2 years ago

    In out, _ = self.rnn(...), the discarded _ already holds the hidden state of the last time step, so there is no need to index the output with out[:, -1, :] in the forward pass. For the LSTM: out, (hn, cn) = self.lstm(x, (h0, c0)); for the RNN/GRU: out, hn = self.rnn(x, h0) / out, hn = self.gru(x, h0).
    For the forward pass, just use this last hidden state: out = self.fc(hn[-1]) (index the last layer when num_layers > 1).
    I am referring to 11:40.
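
    A quick check of the claim above (a hedged sketch; dimensions chosen to match the tutorial's MNIST setup): for a unidirectional, batch_first=True GRU (likewise for an RNN), the last time step of out equals the final hidden state of the last layer, hn[-1]:

        import torch
        import torch.nn as nn

        gru = nn.GRU(input_size=28, hidden_size=128, num_layers=2, batch_first=True)
        x = torch.randn(4, 28, 28)                    # (batch, seq_len, input_size)
        out, hn = gru(x)                              # out: (4, 28, 128), hn: (2, 4, 128)
        print(torch.allclose(out[:, -1, :], hn[-1]))  # True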

  • @easonjia4791
    @easonjia4791 2 years ago

    You're the best!

  • @navnisch1590
    @navnisch1590 4 years ago +1

    @10:33, the shape of the output tensor from the RNN model mentioned in the PyTorch documentation is (seq_length, batch, hidden_size), whereas @11:03 the shape of the output tensor from the RNN model is indicated as (batch_size, seq_length, hidden_size). Which one is correct? The output shape mentioned in the PyTorch documentation or the one in this tutorial?

    • @patloeber
      @patloeber  4 years ago +1

      Both can be correct, by default it’s the one mentioned in the docs. But we use the argument batch_first=True here which turns it around. I usually prefer to have the batches as first dimension
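
      To make the answer above concrete, a small shape check (a hedged sketch; dimensions match the tutorial's MNIST setup):

          import torch
          import torch.nn as nn

          rnn_default = nn.RNN(input_size=28, hidden_size=128)                    # batch_first=False, the docs default
          rnn_bf      = nn.RNN(input_size=28, hidden_size=128, batch_first=True)

          out1, hn1 = rnn_default(torch.randn(28, 4, 28))  # input: (seq_len, batch, input_size)
          out2, hn2 = rnn_bf(torch.randn(4, 28, 28))       # input: (batch, seq_len, input_size)

          print(out1.shape)            # (28, 4, 128) -> (seq_len, batch, hidden_size)
          print(out2.shape)            # (4, 28, 128) -> (batch, seq_len, hidden_size)
          print(hn1.shape, hn2.shape)  # both (1, 4, 128): batch_first does NOT affect hn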

  • @buttert5091
    @buttert5091 2 years ago

    Thanks a lot

  • @jieluo3736
    @jieluo3736 4 years ago

    You saved my life, best wishes to you ^_^

  • @mahdiha5104
    @mahdiha5104 3 years ago

    This is useful, thanks a lot

  • @brofessorsbooks3352
    @brofessorsbooks3352 3 years ago

    REALLY GOOD!

  • @xinjin871
    @xinjin871 3 years ago

    super understandable!

  • @rouhollahabolhasani1853
    @rouhollahabolhasani1853 4 years ago

    great tutorial. thank you!

  • @guccihabenero
    @guccihabenero 4 years ago +1

    Thank you for the video! How would I go about using the hidden state from the previous batch with the new input?

    • @patloeber
      @patloeber  4 years ago

      Hmm why would you want to do that?

    • @guccihabenero
      @guccihabenero 4 years ago

      If the sequence I am feeding the LSTM is not the whole item. E.g., I want to feed an LSTM a sequence of 11 frames: one current frame and 10 previous frames. The total number of frames in the item is ~155. Each batch of 11 frames needs to output 1 frame. I want to use the hidden state of the previous input to aid the learning of the next frame.
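
      For context: one common "stateful" pattern for this (a hedged sketch, not from the video; all dimensions are illustrative) is to feed the state returned by one call back into the next, detaching it so backprop does not reach through earlier batches:

          import torch
          import torch.nn as nn

          lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

          state = None                                  # None -> zero-initialized (h0, c0)
          for chunk in torch.randn(14, 8, 11, 32):      # e.g. 14 chunks of (batch=8, seq_len=11, features=32)
              out, state = lstm(chunk, state)           # previous state carried into the next call
              state = tuple(s.detach() for s in state)  # keep the values, cut the autograd graph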

  • @jinorohit2921
    @jinorohit2921 3 years ago

    amazinggg stuff!

  • @thuancollege
    @thuancollege 1 year ago

    Nice!

  • @user-mb3mf2og9k
    @user-mb3mf2og9k 3 years ago +1

    The h0 and c0 init should be in __init__, not in forward(). Right?

    • @JonasBalandraux
      @JonasBalandraux 5 months ago

      I agree that we would normally want to use memory since we're using RNNs, but here it's image classification, so in the end carrying memory across samples is useless.
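
      If one really did want the states created in __init__, a hedged sketch of the learnable-initial-state variant (an assumption, not what the video does):

          import torch
          import torch.nn as nn

          class LSTMWithLearnedInit(nn.Module):
              def __init__(self, input_size, hidden_size, num_layers, num_classes):
                  super().__init__()
                  self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
                  self.fc = nn.Linear(hidden_size, num_classes)
                  # Initial states created once in __init__ as learnable parameters
                  # (batch dim of 1, expanded to the real batch size in forward).
                  self.h0 = nn.Parameter(torch.zeros(num_layers, 1, hidden_size))
                  self.c0 = nn.Parameter(torch.zeros(num_layers, 1, hidden_size))

              def forward(self, x):
                  h0 = self.h0.expand(-1, x.size(0), -1).contiguous()
                  c0 = self.c0.expand(-1, x.size(0), -1).contiguous()
                  out, _ = self.lstm(x, (h0, c0))
                  return self.fc(out[:, -1, :])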

  • @zhengguanwang4337
    @zhengguanwang4337 2 years ago +1

    Do you have a video on hyperparameter tuning?

  • @AgentRex42
    @AgentRex42 4 years ago +1

    Hi, do you think you will make videos on reinforcement learning?

    • @patloeber
      @patloeber  4 years ago +1

      Yes! It’s a complex topic but I definitely want to make tutorials about this in the future!

    • @AgentRex42
      @AgentRex42 4 years ago +1

      @@patloeber Awesome!

  • @DanielWeikert
    @DanielWeikert 4 years ago

    Great work. Any best-practice ideas for figuring out the relevant tensor shapes at the various steps? That is challenging for me. Thanks

  • @dhananjaykansal8097
    @dhananjaykansal8097 4 years ago

    You’re tooo goooooodddddd

  • @uliliulili
    @uliliulili 3 years ago

    When you changed the code to LSTM in __init__, didn't you forget to change forward() to use the LSTM? It seems like you ran the GRU again.

  • @Sajacky1994
    @Sajacky1994 2 years ago

    Why did you take the first output of the RNN and then do [:, -1, :] on it? Why didn't you just take the second output of the RNN, which is the hidden state of the last sequence element?

  • @shaantanukulkarni5668
    @shaantanukulkarni5668 3 years ago

    thanks!

  • @cedar9124
    @cedar9124 3 years ago

    Hi, how can we use the pygad lib with PyTorch? Especially for optimization of RNNs.

  • @nicolasarayacaro94
    @nicolasarayacaro94 3 years ago

    How could it be done with an RGB image? [batch_size, 3, W, H]

  • @plusminuschirag
    @plusminuschirag 4 years ago

    The more I watch, the more I want to watch.

    • @patloeber
      @patloeber  4 years ago

      That's nice to hear!

  • @dasc000
    @dasc000 3 years ago

    This is what a teacher sounds like when the first grader still doesn't get it after explaining 5 times, but if he starts yelling the parents will complain

  • @yanfeng5519
    @yanfeng5519 2 years ago

    Great

  • @user-cv7io9tz4v
    @user-cv7io9tz4v 3 years ago

    Good video, but wouldn't the classification be better if you connected all the outputs of the recurrent layer to the classes via the linear layer?

  • @sankettgorey91
    @sankettgorey91 2 years ago

    I tried hard but my loss doesn't decrease... my code is the same as yours. When I run your code it's good, but when I run my code the loss stays the same. Do you know why?

  • @randomforrest9251
    @randomforrest9251 3 years ago

    Your videos are really helpful. For some weird reason, torchsummary states that there are no learnable parameters in my RNN layers... that must be a bug... or am I doing it wrong? XD

  • @iganarendra
    @iganarendra 3 years ago

    Great tutorial sir, thank you... I feel this one is also intermediate-friendly, since I am moving from Keras. BTW, outside of the tutorial: what is the IDE theme? It looks great, I like it!

    • @adenmustafa7101
      @adenmustafa7101 3 years ago

      i realize Im kind of off topic but does anybody know of a good website to watch new tv shows online ?

    • @patloeber
      @patloeber  3 years ago

      I think I used Monokai or Night Owl.

  • @HieuTran-rt1mv
    @HieuTran-rt1mv 4 years ago

    Can you talk about Transformer architecture/seq2seq model???

  • @parisabagherzadeh4
    @parisabagherzadeh4 3 years ago

    Hi, can I see the code for this video from the start?
    Thanks in advance

  • @zhengguanwang4337
    @zhengguanwang4337 2 years ago

    I have a question:
    loss = criterion(outputs, labels)
    gives "expected scalar type Long but found Float".
    Can anybody help me solve this problem? Thank you.
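
    For context: that error usually means the targets passed to nn.CrossEntropyLoss are floats instead of integer class indices. A hedged sketch of the typical fix (variable names and shapes assumed):

        import torch
        import torch.nn as nn

        criterion = nn.CrossEntropyLoss()
        outputs = torch.randn(4, 10)                 # (batch, num_classes) logits: float is correct here
        labels = torch.tensor([3.0, 1.0, 0.0, 7.0])  # float class labels trigger the error

        loss = criterion(outputs, labels.long())     # cast the targets to int64 (Long) class indices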

  • @cowwoc2022
    @cowwoc2022 3 years ago

    At 10:29 the documentation says that the output has shape [seq_len, batch, hidden_size] but at 11:00 you write that the output has shape [batch_size, seq_length, hidden_size] and proceed to code accordingly. Can you please confirm whether this is a bug in the video and code?

    • @cowwoc2022
      @cowwoc2022 3 years ago

      Ah, I figured it out. At 6:25 we see that setting batch_first = True causes the output shape to become [batch_size, seq_length, hidden_size], so the video and code are correct.

    • @patloeber
      @patloeber  3 years ago

      Yep! Be very careful with this argument and the correct order :D

  • @akother6521
    @akother6521 3 years ago

    Can someone please help me with an LSTM neural network model? I have to use the phm8 NASA dataset. I have preprocessed the data but I am not sure how to proceed. Please let me know!

  • @canernm
    @canernm 3 years ago

    Thanks for the great videos :) I have a quick question. At 4:52, why do we define self.hidden_size and self.num_layers, but don't do the same with the other inputs of the __init__ function?

    • @patloeber
      @patloeber  3 years ago +1

      because they are needed in the forward function as well, and the others aren't

    • @canernm
      @canernm 3 years ago

      @@patloeber Ok, thanks!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 4 years ago

    Can you do a tutorial on the Attention and Transformer model?

    • @patloeber
      @patloeber  4 years ago

      Should definitely be added in the future

  • @razi_official
    @razi_official 4 years ago

    It's the nicest tutorial I have ever come across.
    Sir, could you make a video on signature verification using a ResNet neural network and triplet loss functions?
    Please reply. Thank you.

    • @patloeber
      @patloeber  4 years ago +1

      Thanks! Will have a look at that...

  • @saurrav3801
    @saurrav3801 4 years ago

    Bro, what is your opinion about pycaret (the ML library)?

    • @patloeber
      @patloeber  4 years ago +1

      Haven't used it yet. But maybe it's worth checking out

  • @saurrav3801
    @saurrav3801 4 years ago

    Bro..... can you add nn.Sequential in upcoming videos? 😁

  • @flamboyantperson5936
    @flamboyantperson5936 4 years ago

    Hey, what screen recorder do you use? Please tell me

  • @harshraj22_
    @harshraj22_ 4 years ago

    Can we have a small video on Meshed-Memory Transformers pleeeaasseee 🥺🥺🥺

  • @abderrahmanebououden5173
    @abderrahmanebououden5173 4 years ago

    Thanks, sir, for this tutorial. Can you give us a tutorial about object detection (YOLO) in the future? Thanks, sir.

    • @patloeber
      @patloeber  4 years ago +1

      Yes this is already on my list

  • @yeshuang2226
    @yeshuang2226 4 years ago

    Hi sir: I just noticed in your pytorchtutorial GitHub repo, 13_feedforward.py, line 98: _, predicted = torch.max(outputs.data, 1).
    Since this line is under torch.no_grad(), I think _, predicted = torch.max(outputs, 1) is okay (the .data is unnecessary).

    • @patloeber
      @patloeber  4 years ago

      yep I think you are right
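
      A small sketch of the point above (assumed shapes): under torch.no_grad() the .data is redundant, and torch.argmax is an equivalent shorthand when only the indices are needed:

          import torch

          with torch.no_grad():
              outputs = torch.randn(4, 10)                  # (batch, num_classes)
              _, predicted = torch.max(outputs, 1)          # .data not needed inside no_grad()
              predicted_alt = torch.argmax(outputs, dim=1)  # same indices, one call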

  • @shivibhatia1613
    @shivibhatia1613 1 year ago

    Why is no one using a real dataset? Iris, MNIST and other toy datasets serve no purpose in the real world.

  • @TheThunderSpirit
    @TheThunderSpirit 1 year ago

    Wasted tutorial. Copied from the pytorch/examples GitHub, and you even removed the dropout. What a waste.

  • @decioashcar9801
    @decioashcar9801 3 years ago

    Thanks a lot! Great work!