word2vec is a recipe for building a fairly shallow neural network: the input layer typically has about a million neurons (one per word) for the English language, but the second layer is much smaller (about 100 neurons). The size of the hidden layer is the size of the vector, and the vector itself is the values of those hidden neurons.
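A minimal numpy sketch of that layout (toy sizes, made-up random weights): because the input is one-hot, the hidden-layer values are just one row of the input weight matrix, and that row is the word vector.

```python
import numpy as np

vocab_size = 10      # toy value; ~1M for a real English vocabulary
embedding_dim = 100  # size of the hidden layer = size of the word vector

rng = np.random.default_rng(0)
W = rng.normal(size=(vocab_size, embedding_dim))  # input-to-hidden weights

word_index = 3                  # index of some word, e.g. "cat" (hypothetical)
one_hot = np.zeros(vocab_size)
one_hot[word_index] = 1.0

hidden = one_hot @ W            # hidden-layer values (word2vec's hidden layer is linear)
assert np.allclose(hidden, W[word_index])  # identical to simply reading row 3
```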
Thanks a lot for this wonderful explanation.
Go Switzerland! Great to see my fellow Swiss talking about one of my favorite algorithms.
Great video, really well structured.
Thanks a lot. I have a question: does gensim's similarity method work like cosine distance?
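Yes: gensim's similarity returns cosine similarity (cosine distance would be 1 minus that). A quick check, assuming gensim 4.x; the tiny corpus here is made up:

```python
import numpy as np
from gensim.models import Word2Vec

sentences = [["cat", "dog", "pet"], ["dog", "barks"], ["cat", "meows"]]  # toy corpus
model = Word2Vec(sentences, vector_size=10, min_count=1, seed=0)

v1 = model.wv["cat"]
v2 = model.wv["dog"]
cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

print(model.wv.similarity("cat", "dog"), cosine)  # the two values agree
```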
Can we convert a sentence to a vector using word2vec?
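word2vec itself only produces word vectors, but a common baseline is to average the vectors of the words in the sentence. A sketch, assuming `model` is a trained gensim Word2Vec model (e.g. the one above):

```python
import numpy as np

def sentence_vector(model, tokens):
    """Average the word2vec vectors of the in-vocabulary tokens."""
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    if not vectors:
        return np.zeros(model.wv.vector_size)  # no known words: zero vector
    return np.mean(vectors, axis=0)

# e.g. sentence_vector(model, ["the", "cat", "meows"])
```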
How do we know that the word "cat" is represented by these three numbers?
exactly... same doubt here...
You can scrape Wikipedia for the "co-occurrence matrix", i.e. for every occurrence of the word "cat" in Wikipedia, how many times does "dog" appear in the same sentence?
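A sketch of that counting, with a toy corpus standing in for scraped Wikipedia sentences:

```python
from collections import Counter
from itertools import combinations

sentences = [
    ["the", "cat", "chased", "the", "dog"],
    ["the", "dog", "barked"],
    ["a", "cat", "and", "a", "dog", "played"],
]

# For each pair of distinct words sharing a sentence, increment a counter.
cooccur = Counter()
for sentence in sentences:
    for w1, w2 in combinations(sorted(set(sentence)), 2):
        cooccur[(w1, w2)] += 1

print(cooccur[("cat", "dog")])  # sentences where "cat" and "dog" co-occur -> 2
```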
Really nice job on your content, thank you for taking the time. Let me know how I can help!
Great video, really clear!
Great explanation!
♫ One love, ONE-HOT
Let's get together and feel all right
Hear the children cryin' (one love)
Hear the children cryin' (ONE-HOT) ♫
Great explanation
Nicely and easily explained.
How do we represent "cat" by those three numbers?
Thanks for this video!
Excellent channel!
Like this if you're here thanks to N.D.!
Hey Lê!!