Semantics with Word2Vec | The Skip-Gram Model, Some Probability, and Softmax Activation
- Published: 14 Oct 2024
- In this video I give a detailed overview of the structure of the Skip-Gram model for creating word embeddings, go over the softmax activation function, and cover some important probability theory. In the next video, I will talk about entropy and cross-entropy, and how the loss for our network is calculated.
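As a rough companion to the description above, here is a minimal sketch of the softmax function and a single Skip-Gram forward pass. All names and sizes (`W_in`, `W_out`, a vocabulary of 5 words, 3-dimensional embeddings) are illustrative assumptions, not taken from the video itself.

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability;
    # softmax is invariant to adding a constant to all inputs.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Toy Skip-Gram forward pass (hypothetical sizes: vocab of 5, 3-dim embeddings).
V, d = 5, 3
rng = np.random.default_rng(0)
W_in = rng.normal(size=(V, d))   # input-side (center-word) embedding matrix
W_out = rng.normal(size=(d, V))  # output-side (context) weight matrix

center = 2                       # index of the center word in the vocabulary
h = W_in[center]                 # hidden layer is just the center word's embedding
probs = softmax(h @ W_out)       # probability distribution over context words
```

The `probs` vector is non-negative and sums to 1, which is what lets the next step in the series treat it as a probability distribution when computing cross-entropy loss.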
This is the best explanation of word2vec I found on the internet. Thanks!
Best lecture ever for word2vec basics.
Only the left earphone works for your video. I thought suddenly my earphone got damaged🤣
Great explanation but try to improve voice recording quality.
Great video. The best one out there for word2vec. One question: if the weights in the last layer are the same for all the context words, how are different context words at different positions predicted?
I have the same question
Amazing video! Really easy to understand and follow :D Would love a louder sound for the following videos, though.
Thank you so much! It helped a lot! :D
Great explanation
it's great but your voice is a bit low
Too low volume, needs a good 30% more. Not to mention that the speech comes from one side only instead of the center. Yeah, I'm just complaining, but I'd rather complain than walk away silently. Going to watch this now. ... Well, after several minutes of watching I have to say the audio really is a problem: player volume and system volume are both at 100%, and it's still barely more than a whisper. Thanks for trying, though; the video *seems* like a good one.