Lesson 4: Practical Deep Learning for Coders 2022
- Published: Aug 1, 2024
00:00:00 - Using Hugging Face
00:03:24 - Finetuning pretrained model
00:05:14 - ULMFit
00:09:15 - Transformer
00:10:52 - Zeiler & Fergus
00:14:47 - US Patent Phrase to Phrase Matching Kaggle competition
00:16:10 - NLP Classification
00:20:56 - Kaggle configs, inserting Python into bash, reading the competition website
00:24:51 - Pandas, NumPy, Matplotlib & PyTorch
00:29:26 - Tokenization
00:33:20 - Huggingface model hub
00:36:40 - Examples of tokenized sentences
00:38:47 - Numericalization
00:41:13 - Question: rationale behind how input data was formatted
00:43:20 - ULMFit fits large documents easily
00:45:55 - Overfitting & underfitting
00:50:45 - Splitting the dataset
00:52:31 - Creating a good validation set
00:57:13 - Test set
00:59:00 - Metric vs loss
01:01:27 - The problem with metrics
01:04:10 - Pearson correlation
01:10:27 - Correlation is sensitive to outliers
01:14:00 - Training a model
01:19:20 - Question: when is it ok to remove outliers?
01:22:10 - Predictions
01:25:30 - Opportunities for research and startups
01:26:16 - Misusing NLP
01:33:00 - Question: isn’t the target categorical in this case?
Transcript thanks to wyquek, jmp, bencoman, fmussari, mike.moloch, amr.malik, kurianbenoy, gagan, and Raymond Wu on forums.fast.ai.
Timestamps thanks to RogerS49 and Wyquek on forums.fast.ai.
As a 26-year-old data scientist with three years of industry experience, I closely follow your course, Jeremy. I want to express my gratitude for your excellent teachings and the enthusiasm you bring to every class. Thank you very much.
This is a legendary video! Just within a year after this upload human society is being transformed by these general purpose "Transformers" 🚀
It’s great to see that the Hugging Face model hub is nearly 10X the size it was when this was recorded
Going through the course - Jeremy your teaching style is amazing. I *really* appreciate what you're doing. 41:42 was my mind blown moment this class. It's arbitrary - you just have to do something consistent for it to learn from. So amazing that we're at this point in the deep learning curve already. Thanks!
Amazing video. The thing I like most about is the small hacks and tricks provided by Jeremy in between the topics.
55k people watched, 5k finished this lesson, 1k will apply what they learned, 100 will excel in their knowledge. Work hard and you will be a legend. Small steps, huge goals!
Great to have an introduction to transformers from Jeremy!
Great lesson, thanks!
As always, an excellent video Jeremy
What an awesome lecture..
Thank you so much for this amazing lecture, Jeremy. As always, this was really insightful and a good learning experience.
great tutorial
Favorite lesson so far 👌
Awesome lesson! Thanks for this series!
At 1:33:31, num_labels depends on the number of categories. So if we are treating this as a classification problem, then it should have been num_labels=5
The "labels" in num_labels means something different here: think of a label as the feature/column being predicted, not the category's possible values. So here it's just the score, which is 1 column, hence num_labels=1, although it can take up to 5 values: 0, 0.25, 0.5, 0.75, 1.0
So, if the model were to also predict something like patent acceptance/rejection, then num_labels=2 (the score plus this)
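To make that concrete, here's a minimal sketch (plain NumPy, not the actual transformers head, with an assumed hidden size of 768) of why num_labels sets the width of the model's output layer rather than the number of values the target can take:

```python
import numpy as np

hidden_size = 768  # assumed hidden width of the backbone


def make_head(num_labels):
    # The classification head is just a linear layer: one output per label
    rng = np.random.default_rng(0)
    W = rng.normal(size=(hidden_size, num_labels))
    b = np.zeros(num_labels)
    return lambda h: h @ W + b


regression_head = make_head(num_labels=1)  # one score column -> regression
clf_head = make_head(num_labels=5)         # five discrete classes -> classification

h = np.ones(hidden_size)          # a fake pooled hidden state
print(regression_head(h).shape)   # (1,)  a single continuous score
print(clf_head(h).shape)          # (5,)  one logit per class
```

With num_labels=1 the score's five possible values live in the training targets, not in the architecture, which is why the competition setup still works.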
I have a list of keywords, thousands of rows long. Which deep learning model should I use to classify them into different topics? The topics are not known in advance. Thanks!
Does it matter in what order the vocabulary is numbered? Assume the vocabulary is just the English alphabet: does it matter for how the neural network works if A B C is numbered 1 2 3 or e.g. 26 3 15? Given all the continuous mathematical operations in the network (floating-point math), does it matter which tokens are numerically next to each other and which have a bigger distance?
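The short answer (and the point Jeremy makes at 41:42) is that the numbering is arbitrary: the first layer is an embedding lookup, so a token ID only selects a row of a table and its numeric value never enters the arithmetic. A toy sketch (toy vocabulary and embedding sizes, plain NumPy):

```python
import numpy as np

# Two arbitrary numberings of the same three-letter vocabulary
vocab_a = {"A": 1, "B": 2, "C": 3}
vocab_b = {"A": 26, "B": 3, "C": 15}

rng = np.random.default_rng(42)
emb = rng.normal(size=(27, 4))  # 27 rows of 4-dim vectors (toy sizes)

# An ID is only a row index: emb[1] and emb[2] are no more related
# than emb[1] and emb[26], so "distance" between IDs is meaningless.
sentence = ["A", "B", "C"]
vecs_a = emb[[vocab_a[t] for t in sentence]]
vecs_b = emb[[vocab_b[t] for t in sentence]]

# Different numberings select different rows, but training updates
# whichever rows the numbering selects, so either numbering can learn
# the same representations.
print(vecs_a.shape, vecs_b.shape)  # (3, 4) (3, 4)
```

The only thing that matters is consistency: the same token must map to the same ID every time.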
If you want to predict two classes (1 and 0) from a dataset, how can you add the F1 score metric?
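One way, sketched in plain Python with made-up labels (with the transformers Trainer you would return a value like this from a compute_metrics function; in practice you'd usually reach for sklearn.metrics.f1_score instead of rolling your own):

```python
def f1_score(y_true, y_pred, positive=1):
    # F1 is the harmonic mean of precision and recall for the positive class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


print(round(f1_score([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]), 3))  # 0.667
```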
Great video. A question about validation and testing: at 58:44 he says you have to "go back to square one" if your test set result is bad. What does that mean in practice? Does it mean you have to delete your data, in the `rm *` sense? Or just re-shuffle train vs. test vs. validation (which may not be possible, as in the time-series case; in that case, get new data?)? And even if your test result WAS good, there is still a chance THAT was a coincidence, right?
If you got a decent result on the validation set, and then end up with a bad result on your held-out test set, this means that your solution probably has some flaw.
"Going back to square one" in this sense, just means that you have to re-evaluate your solution. Often the best way of doing that is testing the most basic model, with the most basic data you have, just to see that it gives sensible answers in that case. It has nothing to do with deleting the data or re-shuffling train/test :)
Where can I find the notebook for this lesson? Chapter 4 of the book is about something different (an image classifier).
Check chapter 10 of the book
I ran into a lot of weird warnings in the transformers block while executing the script. It's absolutely unclear which of them can be ignored and which are critical. I can report them, but where?
you can always post any questions or issues on the fastai forums
Great content.
I was wondering why the AutoTokenizer has to be initialized with a pre-trained model if all it does is tokenization. How would it differ when different models are used?
A pretrained model has a vocabulary and a tokenizer is based on a vocabulary. Also I guess each model's tokenizer produces a slightly different data structure, that's why there is no single universal tokenizer.
Just my attempt to answer the question, I could be wrong. I believe it is because each pre-trained model has its own method of tokenization that it accepts, so each model has its own tokenizer. AutoTokenizer, given the model you are going to use, just fetches the corresponding tokenizer that works with that model.
I guess different tokenizers generate different tokens for the same sentence, and a pretrained model expects the incoming input tokens to match its embedding layer weights for best fine-tuning, since the model weights are frozen.
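A toy illustration of the replies above (hypothetical tokenizers, not the real Hugging Face ones): each model ships with a tokenizer whose output IDs only make sense against that model's own embedding table, which is why AutoTokenizer.from_pretrained takes the model name: it fetches the matching tokenizer.

```python
# Two toy tokenizers showing why a tokenizer is tied to its model:
# each vocabulary splits the same text differently, and the resulting
# IDs only make sense against that model's embedding table.
def word_tokenize(text, vocab):
    return [vocab[w] for w in text.lower().split()]


def char_tokenize(text, vocab):
    return [vocab[c] for c in text.lower() if c != " "]


word_vocab = {"deep": 0, "learning": 1}
char_vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}

print(word_tokenize("Deep Learning", word_vocab))  # [0, 1]
print(char_tokenize("Deep Learning", char_vocab))  # 12 character IDs
```

Feeding the character IDs to a model trained on the word vocabulary would index the wrong embedding rows entirely, which is the mismatch the replies describe.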
A bit confused about the predicted value being a continuous number between 0 and 1. I thought we were training a classifier that would categorize the inputs as identical, similar, or different.
From the Kaggle competition page:
Score meanings
The scores are in the 0-1 range with increments of 0.25 with the following meanings:
1.0 - Very close match. This is typically an exact match except possibly for differences in conjugation, quantity (e.g. singular vs. plural), and addition or removal of stopwords (e.g. “the”, “and”, “or”).
0.75 - Close synonym, e.g. “mobile phone” vs. “cellphone”. This also includes abbreviations, e.g. "TCP" -> "transmission control protocol".
0.5 - Synonyms which don’t have the same meaning (same function, same properties). This includes broad-narrow (hyponym) and narrow-broad (hypernym) matches.
0.25 - Somewhat related, e.g. the two phrases are in the same high level domain but are not synonyms. This also includes antonyms.
0.0 - Unrelated.
Just noticed the "with increments of 0.25" part. I guess this makes the problem kind of a hybrid between classification and regression.
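A small sketch of that hybrid view (plain NumPy, made-up predictions): train as regression on the continuous score, evaluate with Pearson correlation as the competition does, and optionally snap predictions to the nearest 0.25 increment afterwards:

```python
import numpy as np


def pearson_r(x, y):
    # Pearson correlation: covariance normalized by both standard deviations
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))


preds = np.array([0.12, 0.48, 0.77, 0.95])   # hypothetical model outputs
labels = np.array([0.0, 0.5, 0.75, 1.0])     # the 0.25-increment targets

print(round(pearson_r(preds, labels), 3))    # 0.996

# Post-hoc snapping to the label grid, if you want discrete outputs
snapped = np.round(preds / 0.25) * 0.25
print(snapped)  # snaps to 0.0, 0.5, 0.75, 1.0
```

Since the metric is correlation rather than accuracy, leaving the predictions continuous usually costs nothing, which is why the lesson treats it as regression.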
Are you teaching in a college?
Yes, at the University of Queensland