What are Embedding Layers in Keras (11.5)

  • Published: 3 Oct 2024
  • Science

Comments • 42

  • @raulsaenz6177
    @raulsaenz6177 5 years ago +18

    Great video. After going through several explanations and videos, yours is the clearest and I finally understand the use of the Embedding layer. Thank you.

    • @giovannimeono8802
      @giovannimeono8802 2 years ago

      I agree with this comment. This video is the clearest explanation for embeddings I've been able to find.

  • @suryagaur7440
    @suryagaur7440 5 years ago +6

    Don't have words to explain how great this series is!! Speechless!!

  • @nitroflap
    @nitroflap 4 years ago +5

    The best explanation of Embeddings in TensorFlow I've ever seen.

  • @WisamMechano
    @WisamMechano 3 years ago +2

    This was a very helpful video; most vids focus on the use case rather than what the embedding actually is. You nailed it with a very elaborate explanation. Thank you

  • @AlexeyMatushevsky
    @AlexeyMatushevsky 3 years ago

    The discovery of the year! Thank you for your lectures!

  • @ashishpatil1716
    @ashishpatil1716 4 years ago

    Best explanation of embedding layers ever !

  • @SatyaBhambhani
    @SatyaBhambhani 2 years ago

    This was awesome! I am hunting down videos for multinomial text classification, and this helped shed insight on when to use embeddings, why, and how! And also the production phase for the corpus? Exactly what I was looking for!

  • @alexanderk5835
    @alexanderk5835 2 years ago +1

    Really good video, very digestible. Thank you Jeff!

  • @FiveJungYetNoSmite
    @FiveJungYetNoSmite 2 years ago +1

    Good video. I would have liked to see a single sentence inputted into the model at the end to show how to evaluate single inputs
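    For reference, a minimal sketch of what that could look like, assuming the model was trained on padded integer sequences of length 10 and that a fitted Tokenizer named tokenizer is available (both names are assumptions, not from the video):

        from tensorflow.keras.preprocessing.sequence import pad_sequences

        sentence = "this movie was surprisingly good"
        seq = tokenizer.texts_to_sequences([sentence])  # words -> integer indices
        padded = pad_sequences(seq, maxlen=10)          # pad to the training length
        print(model.predict(padded))                    # one prediction for one sentence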

  • @amitraichowdhury8148
    @amitraichowdhury8148 3 years ago

    Amazing video, beautifully explained! This is exactly what I was looking for to understand the Embedding layer. Great work! Please keep uploading more videos :)

    • @HeatonResearch
      @HeatonResearch  3 years ago

      Awesome, thank you! Subscribe so you do not miss any :-)

  • @sambitmukherjee1713
    @sambitmukherjee1713 4 years ago

    Great explanation Jeff.

  • @davidporterrealestate
    @davidporterrealestate 2 years ago

    This was great, esp. the 2nd half

  • @netfission
    @netfission 4 years ago

    Professionally done! Good job!

  • @himanshutanwani_
    @himanshutanwani_ 4 years ago +3

    At 12:00, instead of one hot, can we use the tf.keras.preprocessing.text.Tokenizer and fit_on_texts methods? Please correct me if I am wrong.
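    For what it's worth, the Tokenizer route generally does work as a replacement for one_hot; a rough sketch with invented data (the corpus and vocabulary cap are made up for illustration):

        from tensorflow.keras.preprocessing.text import Tokenizer
        from tensorflow.keras.preprocessing.sequence import pad_sequences

        corpus = ["nice work", "great effort", "poor job", "could have done better"]

        tokenizer = Tokenizer(num_words=50)               # cap the vocabulary at 50 words
        tokenizer.fit_on_texts(corpus)                    # build the word -> index mapping
        sequences = tokenizer.texts_to_sequences(corpus)  # e.g. [[3, 4], [5, 6], ...]
        padded = pad_sequences(sequences, maxlen=4)       # uniform length for the Embedding layer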

  • @guzu672
    @guzu672 3 years ago

    Finally! My struggle ended 😁👍

  • @RH-mk3rp
    @RH-mk3rp 2 years ago

    An explanation of gradient descent and how the loss gradients are propagated back to the embedding layer would be nice

  • @mohajeramir
    @mohajeramir 4 years ago

    This was very helpful. Thank you

  • @beizhou2488
    @beizhou2488 5 years ago +2

    We already have the word2vec model that can map words to vectors. I am wondering why we need to build the word embedding layer ourselves? The Embedding layer and the word2vec model do exactly the same thing, and the word2vec model is already well trained.
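    A common answer is that the two are not mutually exclusive: a Keras Embedding layer can either be trained from scratch (so the vectors are tuned to your task and vocabulary) or initialized from pretrained word2vec/GloVe vectors and frozen. A rough sketch, assuming embedding_matrix is a (vocab_size, 300) NumPy array already built from a pretrained word2vec model (that variable is an assumption):

        import tensorflow as tf

        embedding_layer = tf.keras.layers.Embedding(
            input_dim=vocab_size,      # rows of the pretrained matrix
            output_dim=300,            # word2vec vector size
            embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
            trainable=False)           # freeze the pretrained vectors, or True to fine-tune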

  • @sebastian81ism
    @sebastian81ism 3 years ago

    Awesome explanation!

  • @franklyvulgar1
    @franklyvulgar1 1 year ago

    Thank you very much. I'm working on a problem that involves sparse categorical data, and your explanation and practical examples were superb; will be frequenting your channel often (subscribed) :) Thanks Jeff

  • @mukherjisandeep
    @mukherjisandeep 2 years ago

    Thank you for the great explanation! Further, I wanted to understand: is there a way we can look up the embeddings for each word in the corpus?
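    Yes; the embeddings are just the layer's weight matrix, indexed by each word's integer id. A minimal sketch, assuming the Embedding layer is the first layer of model and a fitted tokenizer holds the word index (both assumptions):

        embedding_matrix = model.layers[0].get_weights()[0]  # shape (vocab_size, embedding_dim)

        index = tokenizer.word_index["great"]  # integer id assigned to the word
        vector = embedding_matrix[index]       # that word's learned embedding
        print(vector)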

  • @ankitmaheshwari7310
    @ankitmaheshwari7310 4 years ago

    Expecting more information

  • @blasttrash
    @blasttrash 1 year ago

    Now how do you do find_similar using those embedding layer weights?
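    There is no built-in find_similar on the layer, but a nearest-neighbour search over the weight matrix is straightforward to sketch (embedding_matrix and tokenizer are assumed, as in the lookup example above):

        import numpy as np

        def most_similar(word, topn=5):
            # Cosine similarity between the query word's vector and every row.
            vec = embedding_matrix[tokenizer.word_index[word]]
            norms = np.linalg.norm(embedding_matrix, axis=1) * np.linalg.norm(vec)
            sims = embedding_matrix @ vec / np.maximum(norms, 1e-9)
            best = np.argsort(-sims)[1:topn + 1]  # skip the word itself
            index_to_word = {i: w for w, i in tokenizer.word_index.items()}
            return [(index_to_word[i], float(sims[i])) for i in best if i in index_to_word]

        print(most_similar("great"))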

  • @sanjaykrish8719
    @sanjaykrish8719 4 years ago

    Awesome.. Love it

  • @stackexchange7353
    @stackexchange7353 4 years ago

    Question: How could you use model persistence for sub-tasks when using two different datasets? I created a copy of the original dataset and substituted 3 labels in my target column with another label. For instance, I have an NLP multi-classification problem where I need to classify x as one of 4 different labels: 1, 2, 3, or 4. Labels 1, 2, and 3 are related, so they can be substituted with 5, making it a binary classification problem. Now I only need to differentiate between 4 and 5, but I'm still left with the classification between 1, 2, and 3, and I'm not sure how to use the initial (4 vs. 5) binary classification to help in the second model. I can't find any information on whether SKLearn allows this like Keras does. Thanks for any suggestions.

  • @tonycardinal413
    @tonycardinal413 3 years ago

    Thank you sooo much. Washington U must be an awesome college. If you write model.add(Embedding(10, 4, input_length=2)), is the number of neurons in the embedded layer 10, 4, or 2? Also, is the embedded layer the same as the input layer? Thanks so much!
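    For readers with the same question: none of those numbers is really a neuron count; the layer is a lookup table, and it sits right after the input rather than being the input layer itself. Roughly (annotations are mine, not from the video):

        from tensorflow.keras.layers import Embedding

        # Embedding(10, 4, input_length=2) means:
        #   10 -> input_dim: vocabulary size (valid word indices are 0..9)
        #    4 -> output_dim: length of the vector stored for each word
        #    2 -> input_length: number of word indices in each sample
        layer = Embedding(10, 4, input_length=2)
        # Input of shape (batch, 2) with integers -> output of shape (batch, 2, 4).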

  • @coobit
    @coobit 4 years ago

    I can't get it..
    6:33 The input vector is [1,2] and the output is 2 rows of the lookup table, but no row is multiplied by 2... how is this possible?
    9:47 Why is the input [[0,1]] and the output 2 rows of the lookup table? I mean, why is the input like this? The dimensions of the input and the lookup matrix do not match, so the multiplication is meaningless. Or am I missing something?
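    The confusion at both timestamps usually comes from expecting a matrix multiplication; the Embedding layer instead treats each integer as a row index into the lookup table, so the dimensions never need to match. A tiny illustration with invented values:

        import numpy as np

        lookup = np.array([[0.1, 0.2, 0.3],   # row 0
                           [0.4, 0.5, 0.6],   # row 1
                           [0.7, 0.8, 0.9]])  # row 2

        inputs = np.array([1, 2])   # two word indices, not a vector to multiply by
        outputs = lookup[inputs]    # rows 1 and 2 of the table, shape (2, 3)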

  • @apratimgholap2930
    @apratimgholap2930 4 years ago

    You mention it's dimensionality reduction, but then again say "not exactly"; can you elaborate?

  • @beizhou2488
    @beizhou2488 5 years ago +1

    Hi, will we learn the attention model in the near future? Like LSTM and attention.

    • @HeatonResearch
      @HeatonResearch  5 years ago

      Attention, not currently, but I may do a related video on it outside the course.

    • @beizhou2488
      @beizhou2488 5 years ago

      @@HeatonResearch Great. Thank you so much. Look forward to that tutorial.

  • @suryagaur7440
    @suryagaur7440 5 years ago

    While creating the Embedding layer, input_dim is the number of unique words in the vocabulary, which is 2 since input_data = np.array([1,2]). So why do we set it to 10?

    • @sachink7955
      @sachink7955 4 years ago

      10 is the number of unique words we have.
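      Put differently, input_dim only has to be larger than the biggest index that will ever be fed in; the example reserves room for 10 words even though this particular input only uses indices 1 and 2. A quick illustration (numbers are mine, not from the video):

          import numpy as np
          from tensorflow.keras.models import Sequential
          from tensorflow.keras.layers import Embedding

          model = Sequential()
          model.add(Embedding(input_dim=10, output_dim=4, input_length=2))

          model.predict(np.array([[1, 2]]))     # fine: both indices are < 10
          # model.predict(np.array([[1, 12]]))  # would fail: no row 12 in a 10-row table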

  • @ramonolivier57
    @ramonolivier57 4 years ago

    Good video, and your simple coding examples are excellent (because I can replicate them and try them out). However, your explanation (narration) in the last 4 or so minutes gets compressed: you speak very, very fast and scroll very fast, including some scrolling that basically happens off-screen. Thanks for the lesson!