Lesson 4: Practical Deep Learning for Coders 2022

  • Published: 1 Aug 2024
  • 00:00:00 - Using Huggingface
    00:03:24 - Finetuning pretrained model
    00:05:14 - ULMFit
    00:09:15 - Transformer
    00:10:52 - Zeiler & Fergus
    00:14:47 - US Patent Phrase to Phrase Matching Kaggle competition
    00:16:10 - NLP Classification
    00:20:56 - Kaggle configs, insert python in bash, read competition website
    00:24:51 - Pandas, numpy, matplotlib, & pytorch
    00:29:26 - Tokenization
    00:33:20 - Huggingface model hub
    00:36:40 - Examples of tokenized sentences
    00:38:47 - Numericalization
    00:41:13 - Question: rationale behind how input data was formatted
    00:43:20 - ULMFit fits large documents easily
    00:45:55 - Overfitting & underfitting
    00:50:45 - Splitting the dataset
    00:52:31 - Creating a good validation set
    00:57:13 - Test set
    00:59:00 - Metric vs loss
    01:01:27 - The problem with metrics
    01:04:10 - Pearson correlation
    01:10:27 - Correlation is sensitive to outliers
    01:14:00 - Training a model
    01:19:20 - Question: when is it ok to remove outliers?
    01:22:10 - Predictions
    01:25:30 - Opportunities for research and startups
    01:26:16 - Misusing NLP
    01:33:00 - Question: isn’t the target categorical in this case?
    Transcript thanks to wyquek, jmp, bencoman, fmussari, mike.moloch, amr.malik, kurianbenoy, gagan, and Raymond Wu on forums.fast.ai.
    Timestamps thanks to RogerS49 and Wyquek on forums.fast.ai.

Comments • 35

  • @zzznavarrete
    @zzznavarrete 1 year ago +42

    As a 26-year-old data scientist with three years of industry experience, I closely follow your course, Jeremy. I want to express my gratitude for your excellent teachings and the enthusiasm you bring to every class. Thank you very much.

  • @DJcatamount
    @DJcatamount 9 months ago +4

    This is a legendary video! Within a year of this upload, human society is being transformed by these general-purpose "Transformers" 🚀

  • @ILikeAI1
    @ILikeAI1 8 months ago +7

    It’s great to see that the Hugging Face model hub is nearly 10X the size it was when this was recorded.

  • @jharkins
    @jharkins 1 year ago +15

    Going through the course - Jeremy, your teaching style is amazing. I *really* appreciate what you're doing. 41:42 was my mind-blown moment this class: the input format is arbitrary - you just have to do something consistent for the model to learn from. So amazing that we're at this point on the deep learning curve already. Thanks!

  • @vikramsandu6054
    @vikramsandu6054 1 year ago +8

    Amazing video. The thing I like most about it is the small hacks and tricks Jeremy provides in between the topics.

  • @TheAero
    @TheAero 11 months ago +12

    55k people watched, 5k finished this lesson, 1k will apply what they learned, and 100 will excel in their knowledge. Work hard and you will be a legend. Small steps, huge goals!

  • @mizoru_
    @mizoru_ 2 years ago +3

    Great to have an introduction to transformers from Jeremy!

  • @mukhtarbimurat5106
    @mukhtarbimurat5106 1 year ago

    Great lesson, thanks!

  • @tumadrep00
    @tumadrep00 1 year ago +2

    As always, an excellent video Jeremy

  • @Xxpat
    @Xxpat 1 year ago

    What an awesome lecture!

  • @DevashishJose
    @DevashishJose 1 year ago +2

    Thank you so much for this amazing lecture, Jeremy. As always, this was really insightful and a good learning experience.

  • @analyticsroot1898
    @analyticsroot1898 2 years ago

    Great tutorial

  • @erichlehmann3667
    @erichlehmann3667 1 year ago +1

    Favorite lesson so far 👌

  • @adamkonopka4942
    @adamkonopka4942 1 year ago

    Awesome lesson! Thanks for this series!

  • @tanmeyrawal644
    @tanmeyrawal644 1 year ago +3

    At 1:33:31, num_labels depends on the number of categories. So if we're treating this as a classification problem, it should have been num_labels=5.

    • @harshathammegowda1615
      @harshathammegowda1615 6 months ago +1

      The "labels" in num_labels mean something different here: think of a label as the feature/column being predicted, not the values that column can take. Here it's just the score, which is one column, hence num_labels=1, even though the score can take up to 5 values: 0, 0.25, 0.5, 0.75, 1.0.
      So if the model were to also predict something like patent acceptance/rejection, then num_labels=2 (the score plus that).
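
      A minimal sketch of the distinction, assuming the Hugging Face transformers API (the DeBERTa checkpoint is the one used in the lesson's notebook, but any sequence-classification checkpoint behaves the same):

      ```python
      from transformers import AutoModelForSequenceClassification

      model_nm = "microsoft/deberta-v3-small"

      # num_labels=1: a single-output head, i.e. one predicted score column
      # (regression-style), even though that score takes only five values.
      reg_model = AutoModelForSequenceClassification.from_pretrained(model_nm, num_labels=1)

      # Treating the five scores (0, 0.25, 0.5, 0.75, 1.0) as discrete classes
      # would instead require a 5-way classification head:
      clf_model = AutoModelForSequenceClassification.from_pretrained(model_nm, num_labels=5)
      ```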

  • @ucmaster2210
    @ucmaster2210 10 months ago

    I have a list of keywords, thousands of rows long. Which deep learning model should I use to classify them into different topics? The topics are not known in advance. Thanks!

  • @blenderpanzi
    @blenderpanzi 8 months ago

    Does it matter in what order the vocabulary is numbered? Say the vocabulary is just the English alphabet: does it matter for how the neural network works whether A B C is numbered 1 2 3 or e.g. 26 3 15? Given all the continuous mathematical operations in the network (floating-point math), does it matter which tokens are numerically next to each other and which are further apart?
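
    For what it's worth: token IDs are just row indices into a learned embedding table, so their numeric order carries no meaning; distances between tokens come from the learned vectors, not from the integers. A minimal PyTorch sketch, using an alphabet-sized vocabulary as in the question:

    ```python
    import torch
    import torch.nn as nn

    # 26 "letter" tokens, each mapped to a learned 8-dimensional vector.
    emb = nn.Embedding(num_embeddings=26, embedding_dim=8)

    # An ID is only a lookup index: renumbering the vocabulary just permutes
    # the rows of this table. How "close" two tokens are is determined by
    # their learned vectors, not by how close their integer IDs happen to be.
    a, b = emb(torch.tensor(0)), emb(torch.tensor(25))
    print(torch.dist(a, b))  # unrelated to |0 - 25|
    ```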

  • @juancruzalric6605
    @juancruzalric6605 4 months ago

    If you want to predict two classes (1 and 0) from a dataset, how can you add the F1_score metric?
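
    One common way, sketched here with scikit-learn and the Hugging Face Trainer's compute_metrics hook (the same hook the lesson uses for its Pearson metric); the function name is just an example:

    ```python
    import numpy as np
    from sklearn.metrics import f1_score

    # Passed as Trainer(compute_metrics=compute_f1); the Trainer calls it
    # with (logits, labels) after each evaluation pass.
    def compute_f1(eval_pred):
        logits, labels = eval_pred
        preds = np.argmax(logits, axis=-1)  # pick the higher-scoring class
        return {"f1": f1_score(labels, preds)}
    ```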

  • @stevesan
    @stevesan 1 year ago +2

    Great video. A question about validation and testing: at 58:44 he says you have to "go back to square one" if your test set result is bad. What does that mean in practice? Does it mean you have to delete your data, in the rm * sense? Or just re-shuffle train vs. test vs. validation (which may not be possible, e.g. in the time series case - in which case, get new data)? And even if your test result WAS good, there's still a chance THAT was a coincidence, right?

    • @NegatioNZor
      @NegatioNZor 1 year ago +2

      If you get a decent result on the validation set and then a bad result on your held-out test set, your solution (probably) has some flaw.
      "Going back to square one" in this sense just means you have to re-evaluate your solution. Often the best way to do that is to test the most basic model with the most basic data you have, just to see that it gives sensible answers in that case. It has nothing to do with deleting the data or re-shuffling train/test :)

  • @toromanow
    @toromanow 1 year ago

    Where can I find the notebook for this lesson? Chapter 4 of the book is about something different (an image classifier).

    • @chrgeo8342
      @chrgeo8342 7 months ago

      Check Chapter 10 of the book.

  • @DearGeorge3
    @DearGeorge3 1 year ago

    I ran into a lot of weird warnings from the transformers library while executing the script. It's absolutely unclear which of them can be ignored and which are critical. I could report them, but where?

    • @eftilija
      @eftilija 1 year ago

      You can always post questions or issues on the fastai forums.

  • @SarathSp06
    @SarathSp06 1 year ago +3

    Great content.
    I was wondering why the AutoTokenizer has to be initialized with a pretrained model if all it does is tokenization. How would it differ when different models are used?

    • @Slava705
      @Slava705 1 year ago +1

      A pretrained model has a vocabulary, and a tokenizer is based on that vocabulary. Also, I guess each model's tokenizer produces a slightly different data structure; that's why there is no single universal tokenizer.

    • @schrodingersspoon1705
      @schrodingersspoon1705 1 year ago +1

      Just my attempt to answer the question - I could be wrong. I believe it's because each pre-trained model has its own method of tokenization that it accepts, so each model has its own tokenizer. Given the model you're going to use, AutoTokenizer just fetches the corresponding tokenizer that works with that model.

    • @tharunnarannagari2148
      @tharunnarannagari2148 1 year ago +1

      I guess different tokenizers generate different tokens for the same sentence, and a pretrained model expects the incoming input tokens to match its embedding layer weights for the best fine-tuning, since the model weights are frozen.
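
      A quick way to see this, as a sketch (the two checkpoint names are just common examples): the same sentence comes out differently under different models' tokenizers, which is why the tokenizer must match the pretrained model.

      ```python
      from transformers import AutoTokenizer

      sentence = "A method for tokenizing patent phrases."

      # Each checkpoint ships its own vocabulary and splitting rules, so the
      # same sentence yields different tokens (and different token IDs).
      for name in ("bert-base-uncased", "microsoft/deberta-v3-small"):
          tok = AutoTokenizer.from_pretrained(name)
          print(name, tok.tokenize(sentence))
      ```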

  • @davidchen6087
    @davidchen6087 7 months ago

    I'm a bit confused about the predicted value being a continuous number between 0 and 1. I thought we were training a classifier that would categorize the inputs as identical, similar, or different.

    • @florianvonstosch
      @florianvonstosch 4 months ago

      From the Kaggle competition page:
      Score meanings
      The scores are in the 0-1 range with increments of 0.25 with the following meanings:
      1.0 - Very close match. This is typically an exact match except possibly for differences in conjugation, quantity (e.g. singular vs. plural), and addition or removal of stopwords (e.g. “the”, “and”, “or”).
      0.75 - Close synonym, e.g. “mobile phone” vs. “cellphone”. This also includes abbreviations, e.g. "TCP" -> "transmission control protocol".
      0.5 - Synonyms which don’t have the same meaning (same function, same properties). This includes broad-narrow (hyponym) and narrow-broad (hypernym) matches.
      0.25 - Somewhat related, e.g. the two phrases are in the same high level domain but are not synonyms. This also includes antonyms.
      0.0 - Unrelated.

    • @florianvonstosch
      @florianvonstosch 4 months ago

      Just noticed the "with increments of 0.25" part. I guess this makes the problem kind of a hybrid between classification and regression.
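
      One hypothetical way to exploit that hybrid nature in post-processing (not something shown in the lesson): train a single-output regression head, then snap its raw predictions to the nearest valid increment.

      ```python
      import numpy as np

      def snap_to_increments(preds, step=0.25):
          # Clamp raw regression outputs to [0, 1] and round to the nearest
          # valid score (0, 0.25, 0.5, 0.75, 1.0).
          return np.clip(np.round(np.asarray(preds) / step) * step, 0.0, 1.0)

      print(snap_to_increments([0.31, -0.05, 0.87, 1.12]))  # [0.25 0.   0.75 1.  ]
      ```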

  • @xychenmsn
    @xychenmsn 1 year ago +1

    Are you teaching at a college?