Word embedding using keras embedding layer | Deep Learning Tutorial 40 (Tensorflow, Keras & Python)

  • Published: 23 Nov 2024

Comments • 62

  • @codebasics
    @codebasics  2 years ago +1

    Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced

  • @JIUSIZHENG
    @JIUSIZHENG 9 months ago +1

    It's definitely the best video to learn word embedding on RUclips.

  • @changqi
    @changqi 2 years ago +2

    I have to say you are an amazing mentor. Your tutorial gave me so many insights. Ten minutes ago I knew nothing about embedding tables, but now I understand them clearly.

  • @alifia276
    @alifia276 3 years ago +4

    Thank you so much for sharing! Just starting out with tensorflow, you have saved me a lot of time, please keep sharing :)

  • @tmorid3
    @tmorid3 1 year ago +1

    Thank you very much for this and the previous videos. You explain embedding very clearly.

  • @FragileAndFree
    @FragileAndFree 3 years ago +1

    So clean and to-the-point teaching 🙌

  • @ramandeepbains862
    @ramandeepbains862 2 years ago +1

    instead of randomly assigning the padding, use the code below to check the max length:

    arr = []
    for i in encoded_reviews:
        arr.append(len(i))   # length of each encoded review
    print(max(arr))          # the longest review; use this as the padding length

  • @koushik7604
    @koushik7604 2 years ago

    Amazing Dhaval... It gave me a very clear idea.

  • @prernasingh262
    @prernasingh262 1 year ago

    Thank you for such a nice video, it was very informative and easy to understand. Keep it up

  • @regivm123
    @regivm123 2 years ago

    It is a great tutorial. Thanks. You mentioned that you would paste the link to the Jason Brownlee article, but it is missing.

  • @MLDSInsights
    @MLDSInsights 3 years ago +3

    Please add more data science, machine learning and deep learning projects, from beginner to advanced level.

  • @vinaykumardaivajna5260
    @vinaykumardaivajna5260 1 year ago

    Great explanation, as always.

  • @personalac4562
    @personalac4562 3 years ago

    Everything you need to know about software. Every bit of information...

  • @แพรวเธียรเจนนาวิวัฒน์

    Thanks a lot for all the contents. Your explanation is really awesome.

  • @jyotikokate8478
    @jyotikokate8478 2 years ago

    That was a very valuable tutorial, thank you very much Sir!!!

  • @triularity
    @triularity 1 year ago

    Maybe this explains why some online product reviews have conflicting stars vs narrative. I always thought it was just people that didn't understand the rating scale, thinking 1-star was best, as in "number 1 rated" when paired with a very positive written review (and 5-stars paired with a bad written review). But now I wonder if it is just people trying to break this form of training, since it would otherwise be a good source of training data.

  • @aliksmshaik-x8t
    @aliksmshaik-x8t 2 months ago

    I have a question: where exactly did we take the words like "After", "Zonal", etc. from?

  • @hardikvegad3508
    @hardikvegad3508 3 years ago

    Sir, amazing explanation. Please make a video on GloVe. Thank you.

  • @himanshusirsat8474
    @himanshusirsat8474 2 years ago

    Thank you so much for this wonderful video

  • @girmayohannis4659
    @girmayohannis4659 7 months ago

    Bro, I liked your tutorials very much! Thank you very much, keep going. Next, I have a simple question: can we say BoW is a type of word embedding? Or other types, like unique numbers and so on?

  • @sergiochavezlazo5362
    @sergiochavezlazo5362 1 year ago

    Another amazing video! Thank you so much. How can I use this model to predict Y for new review comments that contain more or different words than the training dataset?
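
    One way to handle unseen words, sketched below under the assumption that a Keras Tokenizer with an oov_token is used (the toy corpus, maxlen and variable names are illustrative, not from the video): any word not seen during training maps to a reserved <OOV> index, and the padded sequence can then be passed to model.predict.

    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    # Hypothetical training corpus and settings (not from the video)
    train_reviews = ["nice food", "amazing restaurant", "horrible service"]
    tokenizer = Tokenizer(num_words=500, oov_token="<OOV>")   # unseen words map to the <OOV> index
    tokenizer.fit_on_texts(train_reviews)

    # A new review containing words never seen during training
    new_review = ["tasty food but slow service"]
    seq = tokenizer.texts_to_sequences(new_review)            # unknown words become the OOV index
    padded = pad_sequences(seq, maxlen=4, padding='post')     # use the same max length as in training
    # model.predict(padded) would now accept this review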

  • @mohammadkareem1187
    @mohammadkareem1187 1 year ago

    Great videos, keep it up. I was wondering, how do we create user or item embeddings? Let's say to calculate the similarity between two users in order to recommend a certain product?
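
    A minimal sketch of one common approach, assuming user IDs are fed through their own Embedding layer (the sizes, user indices and the untrained layer below are illustrative): after training, each row of the layer's weight matrix is that user's vector, and cosine similarity between rows gives a user-to-user similarity score.

    import numpy as np
    import tensorflow as tf

    # Illustrative sizes: 1000 users, 16-dimensional user vectors
    num_users, embed_dim = 1000, 16
    user_embedding = tf.keras.layers.Embedding(num_users, embed_dim)
    _ = user_embedding(tf.constant([0]))                 # build the layer so its weights exist

    weights = user_embedding.get_weights()[0]            # shape (1000, 16); trained jointly with the model in practice
    u, v = weights[42], weights[99]                      # vectors of two users (arbitrary indices)
    cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    print(cos_sim)                                       # closer to 1 = more similar users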

  • @knowfact2
    @knowfact2 10 months ago

    Thanks for this series 😍. I want to know how we can get the PDF you are using here to teach.

  • @smithaabraham980
    @smithaabraham980 6 months ago

    Sir, I would like to know: when we are performing one_hot encoding of all possible words in the sample, is it OK if we get the same number representing different words? As per the rule, each word should be represented by a unique number.
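
    For what it's worth, Keras one_hot hashes each word into the range [1, vocab_size), so different words can indeed collide when the vocabulary size is small; the snippet below is a rough illustration (the sentence and sizes are made up, and the exact numbers depend on the hash).

    from tensorflow.keras.preprocessing.text import one_hot

    sentence = "nice food amazing restaurant"
    print(one_hot(sentence, 5))     # small hash space: two words may get the same number
    print(one_hot(sentence, 500))   # larger hash space: collisions become unlikely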

  • @adityapradhan2842
    @adityapradhan2842 3 years ago

    Great videos Sir, very informative. Could you please add the next videos and complete the playlist a bit sooner, since many academic people are studying from this playlist? Thank you for making such videos. Eagerly waiting for the word2vec and BERT models.

    • @codebasics
      @codebasics  3 years ago +1

      word2vec video is coming up soon and yes I will try to wrap this up asap.

    • @adityapradhan2842
      @adityapradhan2842 3 years ago

      @@codebasics Thanks for replying sir. I will be waiting 😊🤗

  • @djs749
    @djs749 2 years ago

    Hi, thank you so much for all these wonderful videos. Can you kindly upload something related to the TimeDistributed layer and what it exactly does?

  • @vikashdas1852
    @vikashdas1852 3 years ago +1

    Hats off to you, Guruji

  • @ahmadalghooneh2105
    @ahmadalghooneh2105 3 years ago +1

    That was a rich tutorial, thank you!

  • @randb9378
    @randb9378 3 years ago

    Great video! Thanks! How do we choose which words will be in our vocabulary? For example, if our vocabulary size is 5000, do we choose the 5000 most frequent words?
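
    One way to do exactly that, as a sketch assuming the Keras Tokenizer is used (the toy corpus below is illustrative): Tokenizer counts word frequencies, and when num_words is set, texts_to_sequences keeps only the most frequent words and drops the rest.

    from tensorflow.keras.preprocessing.text import Tokenizer

    corpus = ["nice food nice", "amazing food", "food service"]
    tokenizer = Tokenizer(num_words=3)            # keep only the (3 - 1) = 2 most frequent words
    tokenizer.fit_on_texts(corpus)
    print(tokenizer.word_counts)                  # counts for every word seen
    print(tokenizer.texts_to_sequences(corpus))   # only "food" and "nice" survive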

  • @nksbits
    @nksbits 1 year ago

    @codebasics
    At 6:50 you spoke about pushing an 8x1 matrix into a sigmoid function and computing the loss. The output is a different 8x1 matrix, correct? And then you calculate the loss from the actual value? Isn't this the same as pushing two 4x1 matrices individually? Why are we merging two 4x1 matrices to form an 8x1 matrix?

  • @AlienAI23
    @AlienAI23 3 years ago +2

    delicious tutorial

  • @22prajwalgaikwad61
    @22prajwalgaikwad61 10 months ago

    If the whole model gives low accuracy, does that mean the word embedding the model learned wasn't great either?

  • @TheVerbalAxiom
    @TheVerbalAxiom 3 years ago

    THANK you so much for this tutorial. It's taught me a lot I really needed to know! I'm subscribing. I hope you'll continue to make more in depth videos about Tensorflow and all machine learning topics!

  • @ASHISHPONDIT
    @ASHISHPONDIT 2 years ago

    nice explanation, thank you a lot

  • @seyedalirezaabbasi
    @seyedalirezaabbasi 11 months ago

    Well played. Nice

  • @utkarshgoyal6742
    @utkarshgoyal6742 3 years ago +1

    Hi, this video was so informative! Thank you so much! At 6:15, why do you have to flatten the matrix? Can we do it without that?

    • @hardikvegad3508
      @hardikvegad3508 3 years ago +1

      We flatten it to create a single long feature vector, which makes our computation very efficient (see the sketch after this thread).

    • @nullvoid7543
      @nullvoid7543 3 years ago

      @@hardikvegad3508 right
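
      A minimal sketch of what Flatten does here, with illustrative sizes (a 50-word vocabulary, reviews padded to 4 words, 5-dimensional embeddings): the Embedding layer outputs a 4x5 matrix per review, and Flatten turns it into one 20-element feature vector that the Dense layer can take as input.

      import tensorflow as tf
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import Embedding, Flatten, Dense

      model = Sequential([
          Embedding(50, 5),                # per review: a 4x5 matrix of embedding vectors
          Flatten(),                       # per review: one flat 20-element feature vector
          Dense(1, activation='sigmoid')   # Dense expects a flat vector per sample
      ])

      dummy_batch = tf.zeros((1, 4), dtype=tf.int32)   # one padded review of 4 word indices
      print(model(dummy_batch).shape)                  # (1, 1): Flatten turned (1, 4, 5) into (1, 20)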

  • @girmayohannis4659
    @girmayohannis4659 7 months ago

    Dears, what is embedded_voca_size? How do I know it? Sorry, I didn't understand it!

  • @raychang4710
    @raychang4710 3 years ago

    I am not sure if I have a misunderstanding. If I input T words to an embedding layer whose output dimension is D, then I'll get a T*D matrix as output?
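
    For reference, that is indeed how the Keras Embedding layer behaves: each of the T word indices is mapped to a D-dimensional vector, giving a T x D output per sequence. A quick check with illustrative sizes (vocabulary 100, T = 6, D = 3):

    import tensorflow as tf
    from tensorflow.keras.layers import Embedding

    layer = Embedding(input_dim=100, output_dim=3)   # vocabulary of 100 words, D = 3
    words = tf.constant([[1, 5, 9, 2, 7, 4]])        # a batch of one sequence with T = 6 words
    print(layer(words).shape)                        # (1, 6, 3): a T x D matrix per sequence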

  • @marcusrose8239
    @marcusrose8239 3 years ago

    You're amazing, keep doing what you're doing. Also, do you have any reinforcement learning theory videos?

  • @yonahcitron226
    @yonahcitron226 2 years ago

    great stuff

  • @argha-qi5hf
    @argha-qi5hf 2 years ago

    Thank you soo muchh..!!

  • @debatradas9268
    @debatradas9268 2 years ago

    thank you so much

  • @tonycardinal413
    @tonycardinal413 3 years ago

    Awesome video! Thank you so much. If you write model.add(Embedding(1000, 500, input_length=X.shape[1])), is the number of neurons in the embedding layer 500, or is it 1000? Also, is the embedding layer the same as the input layer? Thanks so much!
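
    For context, in Embedding(1000, 500, ...) the first argument is the vocabulary size (input_dim) and the second is the length of each word's embedding vector (output_dim), so the layer's trainable weight matrix has shape (1000, 500). A small check, reusing the numbers quoted in the question above, is sketched here.

    import tensorflow as tf
    from tensorflow.keras.layers import Embedding

    layer = Embedding(input_dim=1000, output_dim=500)
    _ = layer(tf.constant([0]))                 # build the layer so its weights exist
    print(layer.get_weights()[0].shape)         # (1000, 500): one 500-dim vector per vocabulary word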

  • @jongcheulkim7284
    @jongcheulkim7284 3 years ago

    Thank you.

  • @mahalerahulm
    @mahalerahulm 3 years ago

    very nice !

  • @personalac4562
    @personalac4562 3 years ago

    How can I create my own software? Please share resources with every bit of information about software, from basic to advanced. My own software, not for companies. Please tell me everything. ????

  • @GirishKumar-ek7si
    @GirishKumar-ek7si 3 years ago

    Amazing videos, thank you Sir.
    Just a small correction in the video: I think the index into the weights matrix should be 10 and 2 instead of 11 and 3.

    • @jerrychuang81
      @jerrychuang81 1 year ago

      I agree. The indices into the embedding weights should be 10 and 2, corresponding to the one-hot encoded numbers 11 and 3.
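
      One way to check this for yourself, as a sketch with illustrative sizes: the Embedding layer's output for word index k is exactly row k of its weight matrix, so you can compare the rows directly against the one_hot numbers from the video.

      import numpy as np
      import tensorflow as tf
      from tensorflow.keras.layers import Embedding

      layer = Embedding(input_dim=20, output_dim=4)      # sizes are illustrative
      out = layer(tf.constant([11, 3]))                   # look up word indices 11 and 3
      weights = layer.get_weights()[0]
      print(np.allclose(out.numpy(), weights[[11, 3]]))   # True: index k maps to row k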

  • @EranM
    @EranM 1 year ago

    use Tokenizer, not one-hot

  • @himanshuyasav398
    @himanshuyasav398 3 years ago +2

    First comment

  • @owenmoogk
    @owenmoogk 3 years ago +2

    second lol

  • @jerkmeo
    @jerkmeo 3 years ago

    very awesome and easy to understand video...thanks mate

  • @roopagowda9271
    @roopagowda9271 1 year ago

    x=["amazing restaurant","nice food amazing"]
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(x)
    sequences=tokenizer.texts_to_sequences(x)
    print("from tokenizer:",sequences)
    vocab_size = len(tokenizer.word_index)+1
    print("from one hot:",[one_hot(i,5) for i in x])
    output from tokenizer: [[1, 2], [3, 4, 1]]
    output from one hot: [[1, 4], [4, 2, 1]]
    Hello sir,
    From the above example, i see that one hot has represented the same for "restaurant" and "nice". However tokenizer has taken
    into consideration the overall corpus and is able to differentiate. Please provide input on this case.