In the next part we'll fine-tune BERT to classify toxic comments and show you a couple of fine-tuning tricks/hacks along the way.
Next part video: ruclips.net/video/UJGxCsZgalA/видео.html
Complete tutorial (including Jupyter notebook): curiousily.com/posts/multi-label-text-classification-with-bert-and-pytorch-lightning/
Thanks for watching!
After 5 months of being AWOL, you came back with a bang... Great video thanks
I was having such a hard time plugging hf with lightning, much clearer how they are plugged together. tx!
Thank you man , you helped me get through my master thesis.
Thanks for the great tutorial! Where can I find the notebook for this video? It looks like it's missing from the GitHub repo. Thanks!
Thanks for the tutorial!! This is so helpful! Would you share the link of the codes (the colab page)? Thankssss!!!
My pretrained BERT model is returning a tensor of shape (batch_size, max_token_length, classes), but the target is of size (batch_size,), so I'm not able to calculate the loss.
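A minimal sketch of one common fix for this shape mismatch: the model is returning per-token hidden states, but a sequence classifier needs one vector per example. Taking the [CLS] token (position 0) and projecting it to the class count gives logits that line up with (batch,) targets. The shapes below are stand-ins, not the tutorial's actual values.

```python
import torch
import torch.nn as nn

# Hypothetical shapes standing in for BERT's output
batch_size, max_len, hidden = 4, 32, 768
num_classes = 6

# last_hidden_state is (batch, seq_len, hidden) -- one vector per token,
# which can't be compared directly against (batch,) targets.
last_hidden_state = torch.randn(batch_size, max_len, hidden)

# Use the [CLS] token as a summary of the sequence, then project
# it down to one logit per class.
classifier = nn.Linear(hidden, num_classes)
logits = classifier(last_hidden_state[:, 0, :])  # (batch, num_classes)

targets = torch.randint(0, num_classes, (batch_size,))  # (batch,)
loss = nn.CrossEntropyLoss()(logits, targets)  # shapes now match
```

Mean-pooling over tokens (masking out padding) works similarly; the key point is collapsing the seq_len axis before computing the loss.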
Do more and regular videos brother🙌🏻hatsoff for your efforts😁
Thanks for the encouragement! I appreciate it 🙏
Was this supposed to end so suddenly like that? When or where will we get the rest of it? Or is there a 'rest of it'?!
Hey,
Yes, there is next part coming up. It will contain the actual model + some hacks/tricks that will make our model train faster and perform better.
Thanks for watching!
Great video! looking forward to BERT4Rec Fine Tuning
Hi Venelin, thank you for the great information. I have a question: how did you prepare the classes in numerical form? I have about 2K classes and they are text labels. Can you give me a hint?
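A minimal sketch of one way to do this with plain Python (the label names below are stand-ins for your 2K classes): build a stable string-to-integer mapping once, keep its inverse for decoding predictions back to names.

```python
# Stand-in data for a list of text labels
labels = ["sports", "politics", "sports", "tech"]

# Build a stable label -> id mapping (sorted so the ids are
# reproducible across runs) and its inverse for decoding.
label2id = {label: i for i, label in enumerate(sorted(set(labels)))}
id2label = {i: label for label, i in label2id.items()}

encoded = [label2id[l] for l in labels]  # integers you can feed to a model
```

`sklearn.preprocessing.LabelEncoder` does the same thing; with 2K classes the important part is persisting the mapping alongside the model so inference decodes consistently.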
Some more questions, if you have the time ^^ Is unsqueeze basically just the opposite of flatten? If so, why did we flatten the data in the first place?
Nice explanation ❤️. I love the way you explain things and go hands-on. It would be very helpful if you uploaded Named Entity Recognition using BERT.
It's awesome. Where is the second part of this?
Hi Venelin!! Great work:)
May I know whether continuous retraining is possible using BERT?
i.e., I have a fine-tuned model. Can I further tune it using an additional dataset, without merging the new dataset with the old one?
The gdown link is down. Is toxic_comment.csv the train.csv dataset from the Toxic Comment Classification challenge on Kaggle?
Your videos are really helpful! Would your example work just as well with BertForSequenceClassification, or is there a specific reason why you use the 'generic' BERT model?
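BertForSequenceClassification would indeed cover this use case: it wraps the generic BERT encoder with a pooling + classification head, and since transformers 4.x it even picks a multi-label loss for you via `problem_type`. A sketch with a tiny randomly initialised config so nothing is downloaded (in practice you'd call `from_pretrained("bert-base-cased", num_labels=...)`; all sizes below are made up):

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny hypothetical config -- avoids downloading real weights.
config = BertConfig(
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2,
    intermediate_size=64, num_labels=6,
    problem_type="multi_label_classification",
)
model = BertForSequenceClassification(config)

input_ids = torch.randint(0, config.vocab_size, (2, 16))
labels = torch.zeros(2, 6)  # multi-label targets as float 0/1 vectors
out = model(input_ids=input_ids, labels=labels)
# out.loss uses BCEWithLogitsLoss internally; out.logits is (2, 6)
```

The tutorial's generic `BertModel` + custom head mainly buys flexibility (e.g. swapping pooling strategies or plugging cleanly into a LightningModule); the prebuilt head saves that boilerplate.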
Amazing content and tutorials bro. Thank you so much . Could you please organise all your videos into proper playlists?
how do you save the model?
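A minimal sketch of the usual PyTorch way, with a small `nn.Linear` standing in for the fine-tuned model. With Lightning you could instead call `trainer.save_checkpoint("toxic.ckpt")`, or `save_pretrained(...)` on the underlying Hugging Face model; the `model.pt` path below is just an example.

```python
import os
import tempfile
import torch
import torch.nn as nn

model = nn.Linear(8, 2)  # stand-in for the fine-tuned model

# Saving the state_dict (weights only) is the recommended pattern.
path = os.path.join(tempfile.mkdtemp(), "model.pt")
torch.save(model.state_dict(), path)

# To load, rebuild the same architecture and restore the weights.
restored = nn.Linear(8, 2)
restored.load_state_dict(torch.load(path))
```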
pytorch_lightning has seed_everything() in utilities. Great vid :)
Didn't know that. Thanks!
How do I train a Hugging Face model on my own dataset? How can I start? I don't know the structure of the dataset. Help, please.
How do I store voice recordings and link each one with its text? How do I organize that?
I'm looking for anyone on this planet to help me.
Should I look for the answer on Mars?
Can you give me resources or a video on how to fine-tune my question answering model on my own dataset?
Haven't played with Question Answering yet. What type of Question Answering task do you have? Can you give me a sample of your dataset?
@@venelin_valkov The dataset is in the same format as SQuAD.
Is this script available somewhere?
Hi, when I try to download from your Google Drive with the link, I receive the following error message:
Permission denied: drive.google.com/uc?id=1VuQ-U7TtggShMeuRSA_hzC8qGD12LRkr
Maybe you need to change the permission to 'Anyone with the link'?
Just tried downloading from another account. It works fine. Use this in Google Colab:
!gdown --id 1VuQ-U7TtggShMeuRSA_hzC8qGDl2LRkr
Try it out
Thx a lot, it works :)
This video assumes deep familiarity with PyTorch. Otherwise you're just flying blind.
What happened to the rest of the video?
The final part is coming soon. Will show the fine-tuning and inference using the model :)
Thanks for watching!
Hahahaha, that toxic comment made my day lol
Cool