Great video, but I have one question. Is what you have shown in this video strictly tied to BertTokenizer from the transformers library? You talked a lot about word embeddings, and now I'm confused: what is the purpose of SentenceTransformer('all-MiniLM-L6-v2') then? Why not just use BertTokenizer?
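A minimal sketch of the difference, assuming the standard transformers and sentence-transformers APIs: BertTokenizer only converts text into integer token IDs, while SentenceTransformer runs a full model plus pooling to produce one fixed-size embedding per sentence.

```python
# Sketch: tokenizer vs. sentence embedder (assumes the transformers
# and sentence-transformers packages; calls follow their public APIs).
from transformers import BertTokenizer
from sentence_transformers import SentenceTransformer

text = "BERT turns text into vectors."

# BertTokenizer only maps text to integer token IDs -- no embeddings yet.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer(text)["input_ids"])  # e.g. [101, ..., 102]

# SentenceTransformer runs a full model plus pooling and returns
# one fixed-size vector per sentence.
model = SentenceTransformer("all-MiniLM-L6-v2")
print(model.encode(text).shape)  # (384,) for all-MiniLM-L6-v2
```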
Quite helpful. I watched it after reading the BERT paper, and it makes more sense now.
Great step-by-step. Easy to follow and helpful!
Great to know you liked it, @kevon!
Easy to understand. Thank you so much! 🙏
Glad it was helpful!
The video was very useful for me, thank you!
Great to know you liked it, @starsmaker!
To implement the BERT model, do I need to run it on Google Colab, where I have access to their GPU, or is my local Windows machine sufficient?
It's always better to run it on a machine with a GPU. On Windows without a GPU you may be able to run it, but it will be terribly slow.
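A quick way to check whether a GPU is actually visible before committing to a local run (a sketch, assuming PyTorch is installed):

```python
# Sketch: check for a CUDA GPU before running BERT (assumes PyTorch).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)  # "cuda" on a Colab GPU runtime; usually "cpu" on a plain Windows laptop
```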
Exactly what I needed, thanks! :D
Great to hear you liked it 🙂
Great video. Please explain how token_type_ids get generated; they're required for the tabular data.
Thanks @muhdbaasit8326. Good question; I will try to cover that very soon. Stay tuned.
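In the meantime, a minimal sketch of where token_type_ids come from, assuming the standard transformers tokenizer API: for a sentence pair, the tokenizer assigns segment ID 0 to [CLS] plus the first segment and its [SEP], and 1 to the second segment and its [SEP].

```python
# Sketch: token_type_ids for a sentence pair (assumes transformers).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer("first segment", "second segment")
print(enc["token_type_ids"])
# [0, 0, 0, 0, 1, 1, 1]: 0 covers [CLS] + first segment + [SEP],
# 1 covers the second segment + its [SEP].
```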
Are there sources available? Actually, I want to get the BERT embeddings and pass them to a BiLSTM. I want to learn how to do that.
Generally, the first learning resource for BERT is the official HuggingFace documentation. Then, for many implementation examples, you can search on Kaggle and GitHub.
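For the BERT-to-BiLSTM part specifically, here is a hedged sketch, assuming transformers and PyTorch; the hidden size of 128 is illustrative, not from the video:

```python
# Sketch: pass BERT's last hidden states through a bidirectional LSTM
# (assumes transformers + PyTorch; layer sizes here are illustrative).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bilstm = nn.LSTM(input_size=768, hidden_size=128,
                 batch_first=True, bidirectional=True)

inputs = tokenizer("an example sentence", return_tensors="pt")
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state  # (1, seq_len, 768)

out, _ = bilstm(hidden)  # (1, seq_len, 256): forward + backward states
print(out.shape)
```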
You nailed it, thank you for the video lesson.
My pleasure, and glad to know!
Very helpful! Thank you.
Thank you!