L15.7 An RNN Sentiment Classifier in PyTorch

  • Published: 23 Dec 2024

Comments • 27

  • @bitdribble
    @bitdribble 2 years ago +1

    Great presentation. Have spent a couple of weeks now, every night, doing your videos and hands-on notebooks! And I feel I made a lot more progress than with other, less coding-oriented classes.
    Suggestion: define TEXT_COLUMN_NAME, LABEL_COLUMN_NAME as local variables, in all caps, and reference them as variable names everywhere.

  • @kafaayari
    @kafaayari 3 years ago +1

    Hello Prof. Raschka. What an amazing hands-on tutorial on RNNs!
    I have seen one issue. At 37:26, "packed", the return value of "pack_padded_sequence", is not passed to the next layer "self.rnn".
    But still this version is much better than the first one. As far as I've experimented, the reason is that when you enable sorting within batch, the sequence lengths in batches are very similar. This way RNN learns much better instead of learning dummy paddings.

    • @SebastianRaschka
      @SebastianRaschka  3 years ago +2

      Thanks for the note! You are right, it should have been "self.rnn(packed)" not "self.rnn(embedded)" -- updated it in the code. Interestingly, it worked similarly well before. This is probably due to the sorting (sort_within_batch) as you described.
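      The fix being discussed can be sketched roughly as follows (layer sizes, vocabulary size, and tensor values are made up for illustration; the point is that the packed sequence, not the raw embedded tensor, goes into the RNN):

      ```python
      import torch
      import torch.nn as nn
      from torch.nn.utils.rnn import pack_padded_sequence

      embedding = nn.Embedding(num_embeddings=100, embedding_dim=8, padding_idx=0)
      rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

      # Batch of 2 padded sequences with true lengths 4 and 2
      text = torch.tensor([[5, 7, 9, 3], [4, 6, 0, 0]])
      lengths = torch.tensor([4, 2])

      embedded = embedding(text)  # shape: (2, 4, 8)
      packed = pack_padded_sequence(embedded, lengths,
                                    batch_first=True, enforce_sorted=True)
      # Pass the packed sequence, not `embedded`, so padding steps are skipped
      output, (hidden, cell) = rnn(packed)
      print(hidden.shape)  # torch.Size([1, 2, 16])
      ```

      Note that `enforce_sorted=True` requires the batch to be sorted by length in descending order, which is what `sort_within_batch` takes care of in the video.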

  • @Rahulsircar94
    @Rahulsircar94 2 years ago +3

    For text preprocessing, you could have used a library like neattext.

    • @SebastianRaschka
      @SebastianRaschka  2 years ago +2

      Haven't heard about it before, thanks for the suggestion!

  • @donatocapitella
    @donatocapitella 1 year ago

    Thanks so much for this; I have been looking for examples of RNNs in PyTorch, and this is very clear. Has anybody figured out how to use the new torchtext API? They removed legacy, and the provided migration guide is also broken; it's been a challenge to figure out how to get this to run with the current API.

  • @randb9378
    @randb9378 3 years ago +1

    Great video! Does the <unk> token in the vocabulary indicate words that are not in our vocabulary? So in case our LSTM encounters an unknown word, it will be regarded as <unk>?

    • @SebastianRaschka
      @SebastianRaschka  3 years ago +1

      Yes, that's correct, all words that are not in the vocabulary will be mapped to the unknown word token '<unk>'
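      The unknown-word mapping can be illustrated with a tiny plain-Python sketch (the vocabulary entries are made up for illustration):

      ```python
      # Any token missing from the vocabulary falls back to the <unk> index.
      vocab = {"<pad>": 0, "<unk>": 1, "great": 2, "movie": 3}

      def lookup(token):
          # dict.get with a default implements the out-of-vocabulary fallback
          return vocab.get(token, vocab["<unk>"])

      print([lookup(t) for t in ["great", "blorp", "movie"]])  # [2, 1, 3]
      ```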

  • @akashghosh4766
    @akashghosh4766 2 years ago +1

    If I am not wrong, is this a single LSTM unit used in the model?

  • @sadikaljarif9635
    @sadikaljarif9635 1 year ago

    how to fix this??

  • @abubakarali6399
    @abubakarali6399 3 years ago

    Does nn.LSTM handle this itself, i.e., the previous output is the input to the next step in the network?
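    Yes: nn.LSTM iterates over the time dimension internally, feeding each step's hidden and cell state back in as the next step's input state, so you pass the whole sequence in one call. A small sketch (shapes are arbitrary) showing that a single call matches manual step-by-step unrolling:

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    lstm = nn.LSTM(input_size=4, hidden_size=3, batch_first=True)
    x = torch.randn(1, 5, 4)  # (batch, seq_len, features)

    # One call processes all 5 time steps internally
    out_all, (h_all, c_all) = lstm(x)

    # Equivalent manual unrolling, carrying the state forward ourselves
    state = None  # defaults to zero-initialized hidden/cell state
    for t in range(5):
        out_step, state = lstm(x[:, t:t + 1, :], state)

    print(torch.allclose(out_all[:, -1], out_step[:, -1], atol=1e-6))  # True
    ```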

  • @debabratasikder9448
    @debabratasikder9448 1 year ago

    AttributeError: module 'torchtext' has no attribute 'legacy'
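    This error appears because recent torchtext releases (around 0.12 and later) removed the `legacy` module. One workaround is to avoid torchtext entirely and build the vocabulary by hand; a minimal sketch, with all names and parameters illustrative:

    ```python
    from collections import Counter

    def build_vocab(tokenized_texts, max_size=20000, min_freq=1):
        counter = Counter(tok for text in tokenized_texts for tok in text)
        # Reserve index 0 for padding and index 1 for unknown tokens
        itos = ["<pad>", "<unk>"]
        itos += [tok for tok, freq in counter.most_common(max_size)
                 if freq >= min_freq]
        stoi = {tok: idx for idx, tok in enumerate(itos)}
        return stoi, itos

    def numericalize(tokens, stoi):
        # Out-of-vocabulary tokens map to the <unk> index
        return [stoi.get(tok, stoi["<unk>"]) for tok in tokens]

    texts = [["this", "movie", "was", "great"], ["great", "fun"]]
    stoi, itos = build_vocab(texts)
    print(numericalize(["great", "unseen"], stoi))  # [2, 1]
    ```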

  • @vikramsandu6054
    @vikramsandu6054 2 years ago +1

    Wonderful tutorial. Thanks.

  • @milanradovanovic3693
    @milanradovanovic3693 3 years ago +2

    Hello Sebastian. Love your books, just keep it up. As I have said many times, your book, along with Aurélien Géron's, is the best on the subject. I have read the second and third editions, and I always keep it on my desk, even though I've read it page to page... P.S. The pictures for the convolution types, "same" and "valid", where you explain them in the book, are swapped. It's an insignificant detail, but since it is repeated in the second and third editions, I thought I'd let you know. Best regards

    • @SebastianRaschka
      @SebastianRaschka  3 years ago +1

      Thanks for the kind words! Regarding the picture: Yeah, I agree. We fixed it in a reprint of the 2nd edition, but somehow the publishers reverted it back to the original version when they laid out the drafts for the 3rd edition. It's frustrating, but I will remind them to double-check this carefully next time. Thanks for the feedback!

  • @madhu1987ful
    @madhu1987ful 2 years ago +1

    This is really awesome stuff 🙂 Do you also have videos on the transformer/BERT architecture? And the code related to that?

    • @SebastianRaschka
      @SebastianRaschka  2 years ago +2

      Glad to hear you found it useful! And yes, I have some videos on transformers, incl. BERT, and a code video (DistilBERT, if I recall correctly). They should all be under L19 in this playlist. For easier reference to the individual videos, here's also an overview page: sebastianraschka.com/blog/2021/dl-course.html#l19-self-attention-and-transformer-networks

  • @DataTheory92
    @DataTheory92 3 years ago

    Hi, can I get the PDFs?

    • @SebastianRaschka
      @SebastianRaschka  3 years ago +1

      I made a page here with links to all the material. It's probably easiest to look it up from there: sebastianraschka.com/blog/2021/dl-course.html

  • @abderahimmazouz2088
    @abderahimmazouz2088 10 months ago

    "sm", I believe, means "small model"

  • @saadouch
    @saadouch 3 years ago

    thanks boss!