Clear and concise explanation! You have a great aptitude for teaching and I am so happy I came across your channel!
Please increase the volume... the content is extremely good, to the point, and detailed.
It cleared up my concepts.
I am listening at the highest volume, as the tutorial is very useful.
This is great! I love that it is so direct :)
Great introduction. A good follow-up might be introduction to more advanced, contextual embeddings like BERT. Something I would love personally would be a comparison between BERT, XLNet, Universal Sentence Encoder, etc and the best model to pick based on the use case. For example, BERT would work well for predicting next sentence or missing words in text whereas USE would work better for semantic similarity. Just a suggestion!
Clear and concise explanation!
very nice video and helpful in learning word embeddings.
What an amazing explanation. Thanks
❤ the videos. Thanks
Thank you; this is an excellent explanation!
Amazing content, have learned a lot.
Thanks for the video.
Great intro.
This was a great video! Thanks
Perfect! Thanks for a clear explanation
Great video, precise content! Thank you.
Really great! I love it so much. Could we get this presentation?
helped out a lot. thanks
Awesome tutorial @rctatman
If it has so many disadvantages and errors, then why is it still used today? (If it is used.)
The errors are ones you're likely to run into during implementation rather than flaws with the approach. Overall, the advantages (fast to train for new data, approximations of meanings) tend to outweigh the disadvantages for most applications.
Madam, please help me with the following:
the total number of unique words in T
the total number of training examples in T
the ratio of positive to negative examples in T
the average document length in T
the maximum document length in T
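The statistics above can all be computed directly from the training data. A minimal Python sketch, assuming (hypothetically) that the training set T is a list of (document, label) pairs with labels 1 (positive) and 0 (negative) and simple whitespace tokenization:

```python
# Toy training set T; in practice this would be your real labeled corpus.
T = [
    ("the movie was great", 1),
    ("terrible plot and acting", 0),
    ("great acting", 1),
]

docs = [doc.split() for doc, _ in T]  # whitespace tokenization
unique_words = set(word for doc in docs for word in doc)
n_examples = len(T)
n_pos = sum(1 for _, label in T if label == 1)
n_neg = n_examples - n_pos
lengths = [len(doc) for doc in docs]  # length in tokens

print("unique words:       ", len(unique_words))
print("training examples:  ", n_examples)
print("pos/neg ratio:      ", n_pos / n_neg)
print("avg document length:", sum(lengths) / len(lengths))
print("max document length:", max(lengths))
```

With a real corpus you would swap in your own tokenizer, but the counts themselves work the same way.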
Succinct and informative. @Rasa A bit nitpicky, but at 3:46 you probably meant homonyms and not homophones.
She did mean homophones, and used the term correctly.
Term       | Meaning   | Spelling         | Pronunciation
Homonym    | Different | Same             | Same
Homophone  | Different | (No requirement) | Same
I've watched the full playlist in reverse order 😂
Suggestion: Audio quality is poor
Could you please speak louder?