To learn more about Lightning: lightning.ai/
Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
NOTE: A lot of people ask for the math at 13:16 to be clarified. In that example we have 3,000,000 inputs, each connected to 100 activation functions, for a total of 300,000,000 weights on the connections from the inputs to the activation functions. We then have another 300,000,000 weights on the connections from the activation functions to the outputs. 300,000,000 + 300,000,000 = 2 * 300,000,000
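A minimal Python sketch of that arithmetic, using the vocabulary size and number of activation functions from the video (nothing here is trained, it just checks the counts):

    vocab_size = 3_000_000   # words and phrases in the vocabulary
    hidden_size = 100        # activation functions in the hidden layer

    # weights on the connections from the inputs to the activation functions
    input_side = vocab_size * hidden_size        # 300,000,000
    # weights on the connections from the activation functions to the outputs
    output_side = hidden_size * vocab_size       # 300,000,000

    total = input_side + output_side             # 600,000,000
    assert total == 2 * input_side               # same as counting one side and multiplying by 2
    print(f"{total:,} weights in total")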
In simple words, word embeddings are a by-product of training a neural network to predict the next word. By focusing on that single objective, the weights themselves (the embeddings) can be used to understand the relationships between the words. This is actually quite fantastic! As always, great video @statquest!
bam! :)
Not necessarily just the next word. Your statement is specific.
Probably the most important concept in NLP. Thank you for explaining it so simply and rigorously. Your videos are a thing of beauty!
Wow, thank you!
Josh, this is absolutely the clearest and most concise explanation of embeddings on YouTube!
Thank you very much!
totally agree
In the first 19 seconds my mans explains Word Embedding more simply and elegantly than anything else out there on the internet.
Thanks!
StatQuest is by far the best machine learning channel on YouTube for learning the basic concepts. Nice job
Thank you!
This channel is literally the best thing that has happened to me on YouTube! Way too excited for your upcoming video on transformers, attention and LLMs. You're the best Josh ❤
Wow, thanks!
Yes, please do a video on transformers. Great channel.
@@MiloLabradoodle I'm working on the transformers video right now.
@@statquest Can't wait to see it!
Was literally struggling to understand this concept, and then I found this goldmine.
Bam! :)
I was struggling to understand NLP and DL concepts, thinking of dropping my classes, and BAM!!! I found you, and now I'm writing a paper on neural program repair using DL techniques.
BAM! :)
Oh my gosh, StatQuest is surely the greatest channel I've found for learning the whole universe in a simple way. WOW!
Thank you! :)
The phrase "similar words will have similar numbers" in the song will stick with me for a long time, thank you!
bam!
Damn, when I first learned about this 4 years ago, it took me two days to wrap my head around these weights and embeddings and implement them in code. Just now, I needed to refresh myself on the concepts since I have not worked with them in a while, and your video illustrated what I learned (a whole 2 days in the past) in just 16 minutes!! I wish this video had existed earlier!!
Thanks!
It's so nice to google something and realize that there is a StatQuest about your question, when you were certain there hadn't been one some time before
BAM! :)
One of the best videos I've seen so far on embeddings.
Thank you!
This is by far the best video on embeddings. A whole university course is broken down into 15 minutes
Thanks!
So good!!! This is literally the best deep learning tutorial series I've found… after a very long search on the web!
Thank you! :)
That was the first time I actually understood embeddings - thanks!
bam! :)
this channel is the best ML resource on the entire internet
Thank you!
BAM!! StatQuest never lies, it is indeed super clear!
Thank you! :)
Wow!! This is the best definition I have ever heard or seen, of word embedding. Right at 09:35. Thanks for the clear and awesome video. You guy rock!!
Thanks! :)
This is the best explanation of word embedding I have come across.
Thank you very much! :)
This is the only video I could find that helped me understand this basic concept of NLP! Thanks!
Thanks!
When I watched this, I had only one question: why did all the others fail to explain this, if they fully understood the concept?
bam!
@@statquest Double Bam!
Bam the bam!
@@ah89971 A lot of people in this industry (even with a PhD) actually don't.
Thank you Josh, this is something I've been meaning to wrap my head around for a while and you explained it so clearly!
Glad it was helpful!
haha I love your opening and your teaching style! When we think something is extremely difficult to learn, everything should begin with singing a song; that makes the day more beautiful to begin with (heheh, actually I am not just teasing lol, I really like that). Thanks for sharing your thoughts with us
Thanks!
Bro, I have my master's degree in ML, but trust me, you explain it better than my teachers ❤❤❤
Big thanks
Thank you very much! :)
For those of you who find it hard to understand this video, my recommendation is to watch it at a slower pace and take notes as you go. It will really make things much clearer.
0.5 speed bam!!! :)
your work is extremely amazing and so helpful for new learners who want to go into the details of how Deep Learning models work, instead of just knowing what they do!!
Keep it up!
Thanks!
Keep up the amazing work (especially the songs) Josh, you're making life easy for thousands of people!
Wow! Thank you so much for supporting StatQuest! TRIPLE BAM!!!! :)
Hopefully everyone following this channel has Josh's book. It is quite excellent!
Thanks for that!
This video explains the source of the multiple dimensions in a word embedding, in the most simple way. Awesome. :)
Thanks!
This is one of the best sources of information.... I always find videos a great source of visual stimulation... thank you.... infinite baaaam
BAM! :)
Absolutely the best explanation that I've found so far! Thanks!
Thank you! :)
Hey Josh, I'm a Brazilian student and I love to see your videos; they're such good and fun-to-watch explanations of every one of the concepts. I just wanted to say thank you, because in the last few months you made me smile in the middle of studying. So, thank you!!! (sorry for the bad English hahaha)
Muito obrigado!!! :)
Thank goodness I found this channel! You've got great content and an excellent teaching methodology here!
Thanks!
Thank you for your excellent work. A video on negative sampling would be a valuable addition.
I'll keep that in mind.
I love all of your songs. You should record a CD!!! 🤣
Thank you very much again and again for the elucidating videos.
Thanks!
I promise I'll become a member of your channel when I get my first data science job
BAM! Thank you very much! :)
StatQuest is great! I learn a lot from your channel. Thank you very much!
Glad you enjoy it!
Hey Josh. A great new series that I, and many others, would be excited to see is bayesian statistics. Would love to watch you explain the intricacies of that branch of stats. Thanks as always for the great content and keep up with the neural-network related videos. They are especially helpful.
That's definitely on the to-do list.
@@statquest looking forward to it.
Thank you so much for this playlist! Got to learn a lot of things in a very clear manner. TRIPLE BAM!!!
Thank you! :)
the best video I've seen about this topic so far. Great content! Congrats!!
Wow, thanks!
Thank you statquest!!! Finally I started to understand LSTM
Hooray! BAM!
Way better than my University slides. Thanks
Thanks!
Great presentation! You saved my day after watching several videos, thank you!
Glad it helped!
Absolutely mind blowing and amazing presentation! For the Word2Vec's strategy for increasing context, does it employ the 2 strategies in "addition" to the 1-Output-For-1-Input basic method we talked about in the whole video or are they replacements? Basically, are we still training the model on predicting "is" for "Gymkata" in the same neural network along with predicting "is" for a combination of "Gymkata" and "great"?
Word2Vec uses one of the two strategies presented at the end of the video.
Thank you so much Mr. Josh Starmer, you are the only one that makes ML concepts easy to understand
Can you, please, explain GloVe?
I'll keep that in mind.
Machine learning explained like Sesame Street is exactly what I need right now.
bam!
Thanks for posting. It is indeed a clear explanation and helped me move forward with my studies.
Glad it was helpful!
Hey Josh! Loved seeing your talk at BU! Appreciate your videos :)
Thanks so much! :)
Thank you Josh for this great video. I have a quick question about the Negative Sampling: If we only want to predict A, why do we need to keep the weights for "abandon" instead of just ignoring all the weights except for "A"?
If we only focused on the weights for "A" and nothing else, then training would cause all of the weights to make every output = 1. In contrast, by adding some outputs that we want to be 0, training is forced to make sure that not every single output gets a 1.
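A rough PyTorch sketch of that idea (the vocabulary size, word indices, and the 5 sampled negatives below are made-up numbers, just to show the shape of negative sampling): the loss only looks at the one word whose output should be 1 plus a handful of sampled words whose outputs should be 0, and ignores the rest of the vocabulary.

    import torch
    import torch.nn.functional as F

    vocab_size, embedding_size = 10_000, 100
    W_in = torch.randn(vocab_size, embedding_size, requires_grad=True)    # input-side weights (the embeddings)
    W_out = torch.randn(vocab_size, embedding_size, requires_grad=True)   # output-side weights

    context_word = torch.tensor([42])                  # e.g. "aardvark"
    target_word = torch.tensor([7])                    # the word we want the output to be 1, e.g. "a"
    negatives = torch.randint(0, vocab_size, (5,))     # a handful of words we want the output to be 0

    h = W_in[context_word]                             # hidden layer values, shape (1, 100)
    pos_logit = (h * W_out[target_word]).sum(dim=1)    # score for the target word
    neg_logits = (h * W_out[negatives]).sum(dim=1)     # scores for the sampled negatives

    # push the target's output toward 1 and the sampled negatives toward 0,
    # without ever computing the other ~10,000 outputs
    loss = F.binary_cross_entropy_with_logits(pos_logit, torch.ones(1)) \
         + F.binary_cross_entropy_with_logits(neg_logits, torch.zeros(len(negatives)))
    loss.backward()   # only the rows for these 1 + 5 words get non-zero gradients in W_out

Without the sampled negatives, the easiest way to shrink the loss would be to push every output toward 1, which is why a few 0-targets are kept in the mix.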
You ARE the Batman and Superman of machine learning!
:)
BAM! Thanks for your video, I finally realize what the negative sampling means ~
Happy to help!
I admire your work a lot. Salute from Brazil.
Muito obrigado! :)
Thank you sir. Your explanation is great and your work is much appreciated.
Thanks!
Thank you very much for your excellent tutorials, Josh! Here I have a question: at around 13:30 of this video tutorial, you mentioned multiplying by 2. I am not sure why 2? I mean, if there are more than 2 outputs, would we multiply by the number of output nodes instead of 2? Thank you for your clarification in advance.
If we have 3,000,000 words and phrases as inputs, and each input is connected to 100 activation functions, then we have 300,000,000 weights going from the inputs to the activation functions. Then from those 100 activation functions, we have 3,000,000 outputs (one per word or phrase), each with a weight. So we have 300,000,000 weights on the input side, and 300,000,000 weights on the output side, or a total of 600,000,000 weights. However, since we always have the same number of weights on the input and output sides, we only need to calculate the number of weights on one side and then just multiply that number by 2.
@@statquest Thanks for explaining! I also had the same question.
Ohhhhhhhhh! I missed that the first time around! BTW: (Stat)Squatch and Norm are right: StatQuest is awesome!!
highly valuable video and book tutorial, thanks for putting this kind of special tutorial out here.
Glad you liked it!
Mr. Starmer, I think you really loved Troll 2 😅
:)
You are a beautiful human! Thank you so much for this video! I was finally able to understand this concept! Thanks so much again!!!!!!!!!!!!! :)
Glad it was helpful!
Thank you so much for these videos. It really helps with the visuals because I am dyslexic… Quadruple BAM!!!! lol 😊
Happy to help!
Great explanation. Please make a video on how to connect the output of an Embedding Layer to an LSTM/GRU for classification, say for Sentiment Analysis
I show how to connect it to an LSTM for language translation here: ruclips.net/video/L8HKweZIOmg/видео.html
@@statquest Thank You Professor Josh !
Thanks sooooo much for your videos. Let me not belabor the praise as it's been established that you are triple bam! 🙂
Meanwhile, I've understood every single thing in your deep learning series up till this video. I'm still a bit confused about the negative sampling thing. I don't understand the idea of how using "aardvark" to predict "a" and "abandon" somehow means we are excluding "abandon". The concept is the only thing I've not understood in the 17 videos of this neural network/deep learning playlist. I would appreciate your help.
The idea is that there is one word for which we want the final output value to be 1 and everything else needs to be 0s. However, rather than focusing on every single output, we just focus on the one word that we want the output to be 1 and just a handful of words that we want the output to be 0, rather than all of them.
@@statquest So does this mean negative sampling is implemented in each round of backpropagation optimization? I am not quite sure about this part either. I guess a more detailed (but simplified) demo would clarify this concept better. Or maybe some articles to reference?
@@oliverlee2819 Yes, you do negative sampling every single time.
@@statquest So the words that "we don't want to predict" are the words whose predicted output value (probability) we just want to be zero, right? Is this done via the teacher forcing method, forcing the output of one word to be 1 and the words that we don't want to predict to be zero?
@@oliverlee2819 The first part is correct. The second part is a little off. This isn't technically teacher forcing. We're just focusing on the 1 word we want the output to be 1 and a handful of words we want the output to be 0.
You literally saved my life
bam! :)
Does this mean the neural net used to get the embeddings can only have a single layer? I mean:
1. Say a total of 100 words in the corpus
2. A first hidden layer (where, say, I set the embedding size to 256)
3. Then another layer to predict the next word, which will be 100 words again.
Here, to plot the graph, or say to use the cosine similarity to get how close two words are, I will simply have to use the 256 weights of both words from the first hidden layer, right?
So does that mean we can only have a single layer to optimise? Can't we add 2, 3, 50 layers? And if we can, then which layer's weights should we take as the embeddings to compare the words? Will you please guide?
Thanks! You are a gem as always 🙌
There are no rules in neural networks, just guidelines. Most of the advancements in the field have come from people doing things differently and trying new things. So feel free to try "multilayer word embedding" if you would like. See what happens! You might invent the next transformer.
@@statquest Haha, yes but... then which layer's weights should be used? 🤔😅 Yeah, I can use any, as there are no strict rules, maybe take the mean or something... but if there are any embedding models... may I know what the standard is?
Thanks 🙏👍
@@enchanted_swiftie The standard is to use a single set of weights that go to activation functions.
@@statquest Oops, okay... 😅
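For what the reply above describes, a minimal PyTorch sketch (the layer sizes are just the hypothetical 100-word, 256-dimension numbers from the question, not the setup from the video):

    import torch
    import torch.nn as nn

    vocab_size, embedding_size = 100, 256

    # a word2vec-style network: one set of weights into the activation functions,
    # one set of weights out to the per-word outputs
    to_hidden = nn.Linear(vocab_size, embedding_size, bias=False)
    to_output = nn.Linear(embedding_size, vocab_size, bias=False)

    # after training, the embedding for word i is the set of weights connecting
    # input i to the activation functions, i.e. column i of the first weight matrix
    embeddings = to_hidden.weight.T        # shape (vocab_size, embedding_size)
    gymkata_vector = embeddings[3]         # hypothetical index for one word

So the "standard" in this sketch is just the input-side weight matrix; deeper layers' weights are not tied to one word each, which is why they are not normally used as embeddings.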
Fantastically simple, and complete!
Thanks!
the best channel ever.
Double bam! :)
I am sorry, but you do not turn on advertisements to get money from YouTube, do you? And thank you so much for your effort in making videos. You make the inequality in access to knowledge smaller and smaller. I am very grateful. Hope you always have the happiest life!
Thank you!
Why should he not turn on advertisements? Do you sell your services without any compensation? If you are so bothered by advertisements, there is a subscription for that.
@@anhnguyenvan5806 Membership is available :-)
@NoNonsense_01 they never meant that he should turn them on lol. They said they are grateful because there are no ads. Other videos often make money from ads. This channel doesn't do that. It helps by providing users with aid on many ML topics. That's why he is appreciating the channel.
I watched this video multiple times but was unable to understand a thing. I'm sure I am dumb and Josh is great!
Maybe you should start with the basics for neural networks: ruclips.net/video/CqOfi41LfDw/видео.html
This is the most challenging video of the series so far IMO. I've watched it several times too, but I understand everything apart from the last part on negative sampling. And yes, I've watched and understood every single video (16 of them on the playlist up to this point) before this one in the series. This is the first time I've experienced this with his videos.
Great vid. So you're going to do a vid on transformer architectures? That would be incredible if so.
Btw bought your book. Finished it in like 2 weeks. Great work on it!
Thank you! My video on Encoder-Decoders will come out soon, then Attention, then Transformers.
@@statquest When the universe needs you most, you provide
It would also be nice to have a video about the difference between LM (linear regression models) and GLM (Generalized Linear Models). I know they're different but don't quite understand that when interpreting them or programming them in R. THAAANKS!
Linear models are just models based on linear regression and I describe them here in this playlist: ruclips.net/p/PLblh5JKOoLUIzaEkCLIUxQFjPIlapw8nU Generalized Linear Models is more "generalized" and includes Logistic Regression ruclips.net/p/PLblh5JKOoLUKxzEP5HA2d-Li7IJkHfXSe and a few other methods that I don't talk about like Poisson Regression.
@@statquest Thanks Josh!! I'll watch them all 🤗
I love the way you teach!
Thanks!
Thank you Statquest!!!!
Any time!
Finally someone explains to me how the conversion actually happens. Everyone tells me "use a network that runs it for you automatically", but I want to know what that network is doing internally... finally!
Thank you very much!!! BAM!
Please make a video about the metrics for prediction performance: RMSE, MAE and R SQUARED. 🙏🏼🙏🏼🙏🏼 YOU'RE THE BEST!
The first video I ever made is on R-squared: ruclips.net/video/2AQKmw14mHM/видео.html NOTE: Back then I didn't know about machine learning, so I only talk about R-squared in the context of fitting a straight line to data. In that context, R-squared can't be negative. However, with other machine learning algorithms, it is possible.
I guess we can also take the weights from the activation function to the softmax function?
That would also be two weights per word and the intuition is the same --> similar words will have similar weights.
To be honest, I don't know if that would work out. It's possible that no one knows - I don't think they have worked out why word embedding networks work the way they do. Regardless, it sounds like a fun thing to try and see what happens.
I appreciate the knowledge you've just shared. It explains many things to me about neural networks. I have a question, though: if you are randomly assigning a value to a word, why not try something easier?
For example, in Hebrew, each of the letters of the Alef-Bet is assigned a value. These values are added together to form a sum for the word. It is the context of the word in a sentence that forms the block. Sabe? Take a look at Gematria; Hebrew has been doing this for thousands of years. Just a thought.
Would that method result in words used in similar contexts to have similar numbers? Does it apply to other languages? Other symbols? And can we end up with multiple numbers per symbol to reflect how it can be used or modified in different contexts?
I wish I could answer that question better than to tell you context is EVERYTHING in Hebrew, a language that has but doesn't use vowels, since all who use the language understand the consonant-based word structures.
Not only that, but in the late 1890s Rabbis from Ukraine and Azerbaijan developed a mathematical code that was used to predict word structures from the Torah that were accurate to a value of 0.001%.
Others have tried to apply it to other books like Alice in Wonderland and could not duplicate the result.
You can find more information on the subject through a book called The Bible Code, which gives much more information as well as the formulae the Jewish mathematicians created.
While it is a poor citation, I have included this Wikipedia link: en.wikipedia.org/wiki/Bible_code#:~:text=The%20Bible%20code%20(Hebrew%3A%20%D7%94%D7%A6%D7%95%D7%A4%D7%9F,has%20predicted%20significant%20historical%20events.
The book is available on Amazon if you find it piques your interest. Please let me know if this helps.
@@statquest
@statquest,
I had not heard back from you about the Wiki?
My favourite topic, it's magic. Bam!!
:)
Hey, Josh! Absolutely amazing series!!!
If I understand correctly, the input weights of a specific word (e.g., gymkata) are its coordinates in multi-dimensional space? The coordinates can be used to calculate cosine similarity to find similar meanings as well (e.g., girl/queen, guy/king)?
And is it true that the same philosophy applies to LLM embeddings such as GPT's? GPT's text-embedding-ada-002 has 1536 dimensions, which means there are 1536 nodes in the 1st hidden layer?
In theory it applies to LLMs, but those networks are so complex that I'm not 100% sure they do. And a model with 1536 dimensions has 1536 nodes in the first layer.
You mean 1536 dimensions, not 1546, right? @@statquest
@@张超-o2z yep
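As a side note on the cosine-similarity part of the question above, a small sketch with two made-up 1536-dimensional vectors (random numbers standing in for real embeddings, so the similarity value itself is meaningless):

    import torch

    king = torch.randn(1536)    # stand-in for one word's embedding coordinates
    queen = torch.randn(1536)   # stand-in for another word's embedding coordinates

    # cosine similarity = (a . b) / (|a| * |b|); values near 1 mean the words
    # were used in similar contexts and ended up with similar coordinates
    cos = torch.dot(king, queen) / (king.norm() * queen.norm())

    # the same thing with the built-in helper
    cos_builtin = torch.cosine_similarity(king, queen, dim=0)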
Thanks for enlightening us Master.
Any time!
Awesome video! This time, I feel I'm missing one step, though. Namely, how do you train this network? I mean, I get that we want the network to be such that similar words have similar embeddings. But what is the 'Actual' we use in our loss function to measure the difference from and use backpropagation with?
Yes
@@statquest haha I feel like I didn't ask the question well :D How would the network know, without human input, that Troll 2 and Gymkata are very similar, and so it should optimize itself so that ultimately they have similar embeddings? (What "Actual" value do we use in the loss function to calculate the residual?)
@@balintnk We just use the context that the words are used in. Normal backpropagation plus the cross entropy loss function where we use neighboring words to predict "troll 2" and "gymkata" is all you need to use to get similar embedding values for those. That's what I used to create this video.
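A toy sketch of what that looks like in code (the five-word vocabulary, layer sizes, and learning rate are made up; it just shows that the "Actual" in the loss is a neighboring word, and cross entropy plus backpropagation does the rest):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab = ["troll2", "gymkata", "is", "great", "awesome"]    # toy vocabulary
    vocab_size, embedding_size = len(vocab), 2

    to_hidden = nn.Linear(vocab_size, embedding_size, bias=False)   # these weights become the embeddings
    to_output = nn.Linear(embedding_size, vocab_size, bias=False)
    optimizer = torch.optim.SGD(list(to_hidden.parameters()) + list(to_output.parameters()), lr=0.1)

    # one training pair taken from the text "troll2 is great": input "troll2", Actual = "is"
    x = F.one_hot(torch.tensor([vocab.index("troll2")]), vocab_size).float()
    actual = torch.tensor([vocab.index("is")])

    logits = to_output(to_hidden(x))           # predicted values for every word in the vocabulary
    loss = F.cross_entropy(logits, actual)     # compare the predictions to the neighboring word
    loss.backward()
    optimizer.step()

Because "troll2" and "gymkata" keep getting trained against the same neighboring words ("is", "great", ...), gradient descent pushes their input-side weights toward similar values, with no human labelling involved.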
I struggled with this video series, and it's only been with 3Blue1Brown's incredibly comprehensive and clear videos on deep learning that I've been able to understand gradient descent, backpropagation and basic feed-forward networks. Just different learning and training styles I guess.
That makes sense to me. I made these videos because 3Blue1Brown's videos didn't help me understand any of these topics. So if 3Blue1Brown's videos work for you, bam!
@@statquest TBH, 3B1B videos are great but I often find it difficult to understand some concepts. I read the comments and see lots of positive reviews and then wonder if I'm the one who is dumb for not understanding some of the things he's explaining. I guess more than half of the people who positively review a math video just do it because of the crowd effect. I guess people get carried away by the cool graphics/visualizations which are often good but sometimes insufficient on their own to clearly explain concepts. That said, 3B1B is a great channel and I appreciate it being free. But, the truth still remains that his videos are not the clearest to understand.
I've understood every single thing in your deep learning series up till this video. I'm still a bit confused about the negative sampling thing. I don't understand the idea of how using "aardvark" to predict "a" and "abandon" somehow means we are excluding "abandon". The concept is the only thing I've not understood in the 17 videos of this neural network/deep learning playlist. I would appreciate your help.
@@vicadegboye684 The idea is that there is one word for which we want the final output value to be 1 and everything else needs to be 0s. However, rather than focusing on every single output, we just focus on the one word that we want the output to be 1 and just a handful of words that we want the output to be 0, rather than all of them.
Hey Josh, thanks for this amazing video. It was an amazing explanation of a cool concept. However, I have a question. If, in a corpus, I also have a document that states "Troll 2 is bad!", will the words "bad" and "awesome" share a similar embedding vector? If not, can you please give an explanation? Thank you so much for helping out
It's possible that they would, since it occurs in the exact same context. However, if you have a larger dataset, you'll get "bad" in other, more negative contexts, and you'll get "awesome" in other, more positive contexts, and that will, ultimately, affect the embeddings for each word.
@@statquest Thank you so much Josh for your quick reply
Do you have any discord groups or any other forum where I can ask questions?
@@nimitnag6497 Unfortunately not.
I thought one should be using ArgMax during the validation step, after the optimization of weights and bias was already made. Anyway, this was an interesting video, I learned a lot.
After optimization, you can use ArgMax, but SoftMax allows us to pick words based on a distribution and that can make things more interesting.
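A tiny sketch of that difference (the four output values below are made up):

    import torch

    logits = torch.tensor([2.0, 1.0, 0.5, -1.0])   # hypothetical raw output values for 4 words
    probs = torch.softmax(logits, dim=0)

    picked_argmax = torch.argmax(probs)                         # always picks the same, highest-probability word
    picked_sampled = torch.multinomial(probs, num_samples=1)    # picks at random, weighted by probability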
Great video for explaining word2vec!
Thanks!
I think there's a small mistake at 14:57. He says that we don't want to predict 'abandon' and yet he includes it in the list. I think he meant to say 'aardvark' instead.
[edit]: The video is correct! Read bottom reply if you have the same question.
The video is correct at that time point. At that point we are selecting words whose outputs we still calculate, but whose output values we want to be 0 instead of 1. However, we only select a handful of words whose predictions we want to be 0, instead of all of the words we do not want to predict.
@@statquest After carefully rewatching the video a couple of times, I noticed a misunderstanding of the word "predict" on my part. If I understand correctly, saying we don't want to predict specific words entails calculating their outcomes in the output layer so we can reduce their values through backpropagation. Before, I understood it as "we don't want to 'predict', as in calculate the values, for specific words".
@@paranoid_android8470 I agree - the wording could be improved since it is slightly ambiguous as to what it means to predict and not to predict.
Keep going statquest!!
That's the plan!
Can you do one for wav2vec? It seemingly taps into the same concept as word2vec but the equations are so much more complex.
I'll keep that in mind.
This was lovely! thank you.
Thanks!
Amazing lecture, congrats. The audio was also made with NLP (Natural Language Processing), right?
The translated overdubs were.
Love this :D Notifications gang here :)
Thanks!
This guy really loves Troll 2!
bam!
Great work. Thank you.
Thanks!
Great explanation my man!!
Thank you!
Hi, I love your videos! They're really well explained. Could you please make a video on partial least squares (PLS)
I'll keep that in mind.
Hi Josh, again the best explanation of the concept. However, I have a doubt. As per the explanation, word embeddings are the weights associated with each word between the input layer and the activation functions. These weights are obtained after training on a large text corpus like Wikipedia. When I train another model using these embeddings on another set of data, the weights (embeddings) will change during backpropagation while training. So the embeddings will not remain the same and will change with every model we train. Is that the correct interpretation, or am I missing something here?
When you build a neural network, you can specify which weights are trainable and which should be left as is. This is the basis of "fine-tuning" a model - just training specific weights rather than all of them. So, you can do that. Or, you can just start from scratch - don't pre-train the word embeddings, but train them when you train everything else. This is what most large language models, like ChatGPT, do.
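A short PyTorch sketch of those options (the 10,000 x 100 matrix is a random stand-in for embeddings pre-trained on something like Wikipedia):

    import torch
    import torch.nn as nn

    pretrained = torch.randn(10_000, 100)   # stand-in for pre-trained word embeddings

    # option 1: load the pre-trained embeddings and leave them as-is while training the rest
    frozen = nn.Embedding.from_pretrained(pretrained, freeze=True)

    # option 2: start from the pre-trained values but let backpropagation keep updating them
    fine_tuned = nn.Embedding.from_pretrained(pretrained, freeze=False)

    # option 3: skip pre-training entirely and train the embeddings along with everything else
    from_scratch = nn.Embedding(10_000, 100)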
Hi Josh, thanks for making such great videos like this. I wanted to ask why we don't have a bias here - couldn't it help in getting better word embeddings?
The bias would just be a constant offset for all of the word embeddings, so we might as well just add 0 (or not use any bias).
@@statquest got it, thanks for all the great videos, you helped me in getting my dream job again.
@@preet111 Congratulations!!! TRIPLE BAM!!!
i like the way he talks
:)
That's awesome! But how would a multilingual word2vec be trained? Would the training dataset simply include a corpus of two (or more) languages? Or would additional NN infrastructure be required?
Are you asking about something that can translate one language to another? If so, then, yes, additional infrastructure is needed and I'll describe it in my next video in this series (it's called "sequence2sequence").
@@statquest not exactly, it's more like having similar words from multiple languages to be mapped within the same vector spaces. so for example King and "King" in French, German and Spanish - would appear to be the same.
@@JohnDoe-r3m Hmmm.. I'm not sure how that would work, because the English word "king" and the Spanish translation, "rey", would be in different contexts (for example, the English "king" would be in a phrase like "all hail the king", and the Spanish version would be in a sentence that had completely different words, even if they meant the same thing).
Thank you, Sir!
:)
Love this channel.
Glad to hear it!