Great video. After going through several explanations and videos, yours is the clearest and I finally understand the use of the Embedding layer. Thank you.
I agree with this comment. This video is the clearest explanation for embeddings I've been able to find.
I don't have words to explain how great this series is! Speechless!
The best explanation of embeddings in TensorFlow that I've ever seen.
This was a very helpful video; most videos focus on the use case rather than what the embedding actually is. You nailed it with a very thorough explanation. Thank you.
The discovery of the year! Thank you for your lectures!
You're very welcome!
Best explanation of embedding layers ever !
This was awesome! I am hunting down videos on multinomial text classification, and this helped shed insight on when to use an embedding, why, and how, and also the production phase for a corpus. Exactly what I was looking for!
Really good video, very digestible. Thank you Jeff!
Thanks! Glad it was helpful.
Good video. I would have liked to see a single sentence input into the model at the end, to show how to evaluate single inputs.
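For anyone else wondering about this: the main gotcha with a single input is the batch dimension. Here is a minimal numpy sketch of the shape handling (the lookup table E and the encoded sentence are made up for illustration; with a real Keras model you would encode and pad the sentence exactly as in training, then call model.predict on a (1, seq_len) array):

```python
import numpy as np

# Hypothetical trained lookup table: 10 words, 4-dim embeddings
rng = np.random.default_rng(0)
E = rng.normal(size=(10, 4))

# A single encoded sentence of length 2 (e.g. from one_hot/Tokenizer)
sentence = np.array([3, 7])

# Models expect a batch, so wrap the sentence in an extra dimension
batch = sentence[np.newaxis, :]   # shape (1, 2)

# The embedding layer is a lookup: one row of E per word index
embedded = E[batch]               # shape (1, 2, 4)
print(embedded.shape)             # (1, 2, 4)
```

With a real model the equivalent would be `model.predict(np.array([encoded_sentence]))`, i.e. a batch of size one.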
Amazing video, beautifully explained! This is exactly what I was looking for to understand the Embedding layer. Great work! Please keep uploading more videos :)
Awesome, thank you! Subscribe so you do not miss any :-)
Great explanation Jeff.
This was great, esp. the 2nd half
Professionally done! Good job!
At 12:00, instead of one_hot, could we use tf.keras.preprocessing.text.Tokenizer with its fit_on_texts method? Please correct me if I am wrong.
My exact thought as well.
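Yes, Tokenizer works there too. As a rough idea of what fit_on_texts builds (this is a simplified plain-Python imitation, not the actual Keras implementation): it counts word frequencies, assigns lower integer indices to more frequent words, and texts_to_sequences then maps each text to those indices:

```python
from collections import Counter

def fit_on_texts(texts):
    """Build a word->index map, most frequent word first (index 1 up)."""
    counts = Counter(w for t in texts for w in t.lower().split())
    # Index 0 is reserved (as in Keras); most frequent word gets index 1
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

def texts_to_sequences(texts, word_index):
    """Replace each known word with its integer index."""
    return [[word_index[w] for w in t.lower().split() if w in word_index]
            for t in texts]

docs = ["Well done", "Great effort", "Well done indeed"]
word_index = fit_on_texts(docs)
print(word_index)
print(texts_to_sequences(docs, word_index))  # [[1, 2], [3, 4], [1, 2, 5]]
```

Unlike one_hot, this gives a stable, collision-free mapping, which is one reason people prefer Tokenizer in practice.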
Finally! My struggle ended 😁👍
An explanation of gradient descent and how the loss gradients are propagated back to the embedding layer would be nice.
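Agreed. The key point is that an embedding lookup only routes gradients back to the rows that were actually looked up. A toy numpy sketch of one SGD step (the upstream gradient here is a made-up stand-in for a real backprop signal):

```python
import numpy as np

E = np.ones((5, 3))        # tiny embedding table: 5 words, 3 dims
idx = np.array([1, 3])     # the two words in this training example
out = E[idx]               # forward pass: lookup, shape (2, 3)

# Pretend dLoss/dout is all ones (stand-in for a real gradient)
grad_out = np.ones_like(out)

# Backprop: scatter the gradient back into the looked-up rows only
grad_E = np.zeros_like(E)
np.add.at(grad_E, idx, grad_out)

E -= 0.1 * grad_E          # SGD update: only rows 1 and 3 change
print(E)
```

Rows 0, 2, and 4 are untouched; that sparsity is why embedding training stays cheap even with huge vocabularies.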
This was very helpful. Thank you
Glad it was helpful!
We already have the word2vec model, which can map words to vectors. I am wondering why we need to build the embedding layer ourselves, since the Embedding layer and word2vec do essentially the same thing, and word2vec models are already well trained.
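One common answer: pretrained word2vec vectors are generic, while a trainable Embedding layer can adapt to your specific task; and you can also get the best of both by initializing the layer from word2vec. A numpy sketch of that "initialize from pretrained" idea (the vectors here are fabricated; in Keras you would pass the assembled matrix as the layer's initial weights and optionally set trainable=False to freeze it):

```python
import numpy as np

vocab = ["pad", "good", "bad", "movie"]
# Pretend these came from a pretrained word2vec model (fabricated here)
pretrained = {"good": np.array([0.9, 0.1]),
              "bad":  np.array([-0.8, 0.2])}

# Build the embedding matrix: pretrained rows where available,
# small random vectors for out-of-vocabulary words
rng = np.random.default_rng(0)
E = np.stack([pretrained.get(w, rng.normal(scale=0.01, size=2))
              for w in vocab])

print(E.shape)                   # (4, 2): one row per vocabulary word
print(E[vocab.index("good")])    # the pretrained vector for "good"
```

If your domain vocabulary or word usage differs from the word2vec training corpus, leaving the layer trainable lets those rows fine-tune to your data.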
Awesome explanation!
Thanks!
Thank you very much. I'm working on a problem that involves sparse categorical data, and your explanation and practical examples were superb. I will be frequenting your channel often (subscribed) :) Thanks, Jeff.
Thank you for the great explanation! Further, I wanted to understand: is there a way to look up the embedding for each word in the corpus?
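Yes. After training, the Embedding layer's weight matrix has one row per vocabulary index, so the vector for a word is just the row at that word's index. A small numpy sketch (in Keras the matrix would come from model.layers[0].get_weights()[0]; the word_index mapping below is made up for illustration):

```python
import numpy as np

# Stand-in for model.layers[0].get_weights()[0]: 10 words, 4 dims
rng = np.random.default_rng(1)
weights = rng.normal(size=(10, 4))

word_index = {"well": 1, "done": 2, "great": 3}  # hypothetical mapping

def embedding_for(word):
    """Return the learned vector for a single word."""
    return weights[word_index[word]]

print(embedding_for("done"))     # the 4-dim vector in row 2
```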
Expecting more information
Now, how would you do a find_similar lookup using those embedding layer weights?
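One way: treat each row of the weight matrix as a word vector and rank words by cosine similarity. A sketch assuming a weights matrix already pulled out of the layer (the data here is random, so the "neighbors" are meaningless; with trained weights they would be semantically similar words):

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=(10, 4))   # stand-in for trained embeddings

def find_similar(index, weights, top_k=3):
    """Indices of the top_k rows most cosine-similar to row `index`."""
    norms = np.linalg.norm(weights, axis=1)
    sims = weights @ weights[index] / (norms * norms[index])
    sims[index] = -np.inf            # exclude the word itself
    return np.argsort(sims)[::-1][:top_k]

print(find_similar(5, weights))      # 3 nearest-neighbor word indices
```

Map the returned indices back through your word_index to get the actual words.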
Awesome.. Love it
Question: how could you use model persistence for subtasks when using two different datasets? I created a copy of the original dataset and substituted 3 labels in my target column with another label. For instance, I have an NLP multi-class classification problem where I need to classify x as one of 4 different labels: 1, 2, 3, or 4. Labels 1, 2, and 3 are related, so they can be substituted with 5, turning it into a binary classification problem. Now I only need to differentiate between 4 and 5, but I'm still left with the classification between 1, 2, and 3, and I'm not sure how to use the initial binary (4 vs. 5) classifier to help the second model. I can't find any information on whether scikit-learn allows this the way Keras does. Thanks for any suggestions.
Thank you so much. Washington U must be an awesome college. If you write model.add(Embedding(10, 4, input_length=2)), is the number of neurons in the embedding layer 10, 4, or 2? Also, is the embedding layer the same as the input layer? Thanks so much!
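On those arguments: 10 is input_dim (the vocabulary size, i.e. the number of rows in the lookup table), 4 is output_dim (the length of each word's vector), and input_length=2 means each input is a sequence of 2 word indices. The layer isn't really made of neurons; it's a 10x4 lookup table that sits right after the input. A numpy sketch of the shapes involved:

```python
import numpy as np

# Embedding(10, 4, input_length=2) amounts to a 10 x 4 table...
rng = np.random.default_rng(3)
E = rng.normal(size=(10, 4))     # input_dim=10 rows, output_dim=4 cols

# ...applied to length-2 sequences of word indices in [0, 10)
batch = np.array([[1, 2]])       # one sample, input_length=2

out = E[batch]                   # one row of E per index
print(out.shape)                 # (1, 2, 4): batch, input_length, output_dim
```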
I can't get it.
At 6:33 the input vector is [1, 2] and the output is 2 rows of the lookup table, but no row is multiplied by 2. How is this possible?
At 9:47, why is the input [[0, 1]] and the output 2 rows of the lookup table? Why is the input like this? The dimensions of the input and the lookup matrix do not match, so the multiplication seems meaningless. Or am I missing something?
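The thing that resolves this confusion for most people: the layer never multiplies by the raw indices. Each index selects a row of the table, which is mathematically equivalent to one-hot encoding the index and multiplying that one-hot vector by the table. A numpy check (table values are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
E = rng.normal(size=(10, 4))     # the lookup table

idx = np.array([0, 1])           # the input [[0, 1]] from the video

# Direct lookup: rows 0 and 1 of the table
looked_up = E[idx]

# Equivalent view: one-hot vectors times the table
one_hot = np.eye(10)[idx]        # shape (2, 10)
multiplied = one_hot @ E         # shape (2, 4)

print(np.allclose(looked_up, multiplied))   # True
```

So [1, 2] producing rows 1 and 2 is not a multiplication by 2; it is two row selections, and frameworks implement it as a lookup because the one-hot multiply would waste memory.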
You mention it's dimensionality reduction, but then you also say "not exactly." Can you elaborate?
Hi, will we cover attention models in the near future? Like LSTM with attention.
Attention is not currently in the course, but I may do a related video on it outside the course.
@@HeatonResearch Great. Thank you so much. Look forward to that tutorial.
While creating the Embedding layer, input_dim is the number of unique words in the vocabulary, which is 2 since input_data = np.array([1, 2]). So why do we set it to 10?
10 is the number of unique words we have.
Good video, and your simple coding examples are excellent (because I can replicate them and try them out). However, your explanation (narration) in the last 4 or so minutes gets compressed: you speak very fast and scroll very fast, including some scrolling that basically happens off-screen. Thanks for the lesson!