Word Embeddings, Word2Vec And CBOW Indepth Intuition And Working- Part 1 | NLP For Machine Learning

  • Published: 30 Sep 2024

Comments • 42

  • @parmoksha
    @parmoksha 3 months ago +2

    This is the first time I actually understood how embeddings are generated using word2vec. In most other tutorials on word2vec this exact thing was missing.

  • @venkyramakrishnan5712
    @venkyramakrishnan5712 3 months ago

    I like your videos and earnest style of speaking, but I was confused about king - man + queen = woman.
    Logically this seems more correct:
    king - man + woman = queen?

  • @mudumbypraveen3308
    @mudumbypraveen3308 11 months ago +1

    I think towards the end the explanation of window size is wrong. If you multiply (7x5) * (5x7), your output is basically a 7x7 matrix, so for each vocab word you have one vector of size 1x7 representing it. Also, I believe window size does not mean the feature-vector size; it just means how many words you are sampling before and after the context word. It is ultimately the final layer's output dimensions that hold the embeddings. For example, if the last hidden layer is of size (7x512), you would get (7x7) * (7x512), which would give you embeddings of 7x512. (A sketch of these shapes follows after this thread.)

    • @BenASabu
      @BenASabu 10 months ago

      I am not sure, but I think it's not a matrix multiplication. If it's analogous to matrix multiplication, then what you said seems to be correct.

    • @mudumbypraveen3308
      @mudumbypraveen3308 10 months ago

      @@BenASabu it is always matrix multiplication in deep learning, unlike classical ML algos.

    • @BenASabu
      @BenASabu 10 months ago

      @@mudumbypraveen3308 bro, could you please explain how the initial 7x5 matrix comes about for each input word, and how the machine is able to learn the feature representation during training?

    • @abhisheksharmac3p0
      @abhisheksharmac3p0 3 months ago

      Yup, even I didn't understand the last segment; it became a hotchpotch.
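
A minimal NumPy sketch of the shapes discussed in this thread (not the video's code; the 7-word vocabulary and 5-dimensional embedding are just toy values): the trainable parameters are two matrices, a word's embedding is simply its row of the input matrix, and the window size never appears in these shapes — it only controls how many neighbouring words form each training pair.

```python
import numpy as np

vocab_size, embed_dim = 7, 5          # embed_dim is the hidden-layer size (a hyperparameter)
rng = np.random.default_rng(0)

W_in = rng.normal(size=(vocab_size, embed_dim))    # input -> hidden weights (7x5)
W_out = rng.normal(size=(embed_dim, vocab_size))   # hidden -> output weights (5x7)

x = np.zeros(vocab_size)
x[2] = 1.0                            # one-hot vector for the word at index 2

hidden = x @ W_in                     # (1x7)·(7x5) = 1x5: exactly row 2 of W_in
scores = hidden @ W_out               # (1x5)·(5x7) = 1x7: one score per vocab word

print(hidden.shape, scores.shape)     # (5,) (7,)
print(np.allclose(hidden, W_in[2]))   # True: the embedding is a row of W_in
```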

  • @luffyd9724
    @luffyd9724 1 year ago +2

    Sir, in the Hindi batch there are 5 videos already uploaded, but in the English batch there is only one video. Why is there this difference?

  • @shriramdhamdhere7030
    @shriramdhamdhere7030 1 year ago +1

    You said that window size defines the length of the vector to which a word is transformed, but in the next video, while training, you had a window size of 5 yet got a vector of 100 dimensions? Please clarify.

  • @HarshPatel-iy5qe
    @HarshPatel-iy5qe 5 months ago +1

    Why should the number of hidden neurons equal the window size? It can be anything, right? The window size decides the number of input-word neurons, which is window size - 1.
    Correct me if I am wrong.

    • @mehdi9771
      @mehdi9771 1 month ago

      You are correct; there is no error in your explanation. One note on window size: it is a hyperparameter that can be fine-tuned based on results.

    • @shudhanshushrotriya6931
      @shudhanshushrotriya6931 7 days ago

      As for the hidden-layer size, it is also a hyperparameter. Even in the paper Google published, they used 5 as a typical window size, and for the features they experimented with embedding sizes of 100 to 1000 dimensions.
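
A small gensim sketch (assuming gensim >= 4.0 and the toy corpus below; not from the video) showing that the window and the embedding size are independent hyperparameters, as this thread says:

```python
from gensim.models import Word2Vec

corpus = [["krish", "teaches", "nlp"],
          ["word2vec", "learns", "word", "embeddings"]]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,   # embedding / hidden-layer size (100-1000 in the original paper)
    window=5,          # context words sampled on each side of the centre word
    min_count=1,
    sg=0,              # 0 = CBOW, 1 = skip-gram
)

print(model.wv["nlp"].shape)   # (100,) regardless of the window size
```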

  • @apurvakhomane4992
    @apurvakhomane4992 1 year ago +2

    Please teach in FSDS May batch also

  • @aszanisani
    @aszanisani 1 year ago +2

    I love your explanation, can't wait for the next part 🤩

  • @amiralikhatib4843
    @amiralikhatib4843 1 year ago +1

    I can't wait for the next part, in which the SkipGram approach will be discussed.

  • @encianhoratiu5301
    @encianhoratiu5301 4 months ago

    Window size doesn't give the embedding dimension.

  • @dineshshelke631
    @dineshshelke631 1 year ago +1

    Sir, please teach this in the FSDS batch too.

  • @abhisheksharmac3p0
    @abhisheksharmac3p0 3 months ago

    The last portion was confusing.

  • @CARevanthkumar
    @CARevanthkumar 1 year ago

    I want the link to that project on making ML models more efficient.

  • @rachakondaeshwar4129
    @rachakondaeshwar4129 5 days ago

    Great video

  • @cricjack9076
    @cricjack9076 1 year ago

    Hello sir, please reply. On every video I put a comment about the data science course, but you don't reply.

  • @apoorva3635
    @apoorva3635 1 year ago

    41:49 - What is the point of initializing the weights when all the 0s (which are n-1 in number) multiplied by any number will remain 0 anyway?
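
A tiny NumPy illustration (a sketch, not the video's code) of why random initialization still matters: each one-hot input selects exactly one row of the weight matrix, so that row needs a non-trivial starting value, and different words select — and during training update — different rows.

```python
import numpy as np

vocab_size, embed_dim = 7, 5
rng = np.random.default_rng(42)
W = rng.normal(size=(vocab_size, embed_dim))   # randomly initialized input weights

for word_index in [0, 3, 6]:                   # three different input words
    x = np.zeros(vocab_size)
    x[word_index] = 1.0                        # the n-1 zeros just mask out the other rows
    h = x @ W                                  # only row `word_index` of W survives
    print(word_index, np.allclose(h, W[word_index]))   # True for each word
```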

  • @swagatsanketpriyadarsan6231
    @swagatsanketpriyadarsan6231 1 year ago

    Can you please put out videos for computer vision (DL_CV)?

  • @ameybikram5781
    @ameybikram5781 1 year ago

    The same word can be present in different sentences!! So do we calculate the vector for that word in every sentence and take the average?

  • @AI-Brain-or-Mind
    @AI-Brain-or-Mind 1 year ago

    Great, sir.
    I learned so many things from your videos, thanks a lot sir.
    Sir, can you show us where to download a pretrained word2vec model?
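
One way to get pretrained vectors, assuming gensim and its downloader package are installed (the Google News word2vec archive is a large download, roughly 1.6 GB):

```python
import gensim.downloader as api

# Download (once) and load the pretrained 300-dimensional Google News vectors.
wv = api.load("word2vec-google-news-300")

print(wv["king"].shape)   # (300,)
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```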

  • @wenzhang5879
    @wenzhang5879 7 months ago

    Is the number of hidden neurons equal to the window size?

  • @bhesht
    @bhesht 1 year ago

    Excellent series, wonderfully explained mechanisms - you won't see this elsewhere. Thank you!

  • @raviv1752
    @raviv1752 1 year ago

    Thank you Krish for the wonderful explanation.

  • @khaderather
    @khaderather 1 year ago

    Hope your Dubai tour was good.

  • @nandansadiwala4113
    @nandansadiwala4113 3 months ago

    The videos are very helpful for me. Thanks Krish. Waiting for some more advanced conceptual videos related to deep learning.

  • @naveenkuruvinshetti6610
    @naveenkuruvinshetti6610 9 months ago

    Too many ada

  • @rafsankabir9152
    @rafsankabir9152 6 months ago

    Amazing!!!!

  • @farbodzamani7248
    @farbodzamani7248 1 year ago

    thanks a lot

  • @prateekcaire4193
    @prateekcaire4193 1 year ago +3

    Wrong in many ways. Window size and feature dimensions need not be the same. Word2Vec is a 2-layer NN; here only one layer is shown. Overall poorly explained. (A sketch of a full two-layer CBOW pass follows after this thread.)

    • @KushalSharma
      @KushalSharma 8 months ago

      I do agree, brother! It is wrongly explained.
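
For reference, a minimal two-layer CBOW forward pass in NumPy, a sketch of the standard formulation rather than the video's method: the first matrix embeds the words, the context embeddings are averaged into the hidden layer, and the second matrix plus a softmax scores every vocabulary word as the centre word.

```python
import numpy as np

vocab_size, embed_dim = 7, 5
rng = np.random.default_rng(1)
W_in = rng.normal(size=(vocab_size, embed_dim))    # layer 1: word embeddings
W_out = rng.normal(size=(embed_dim, vocab_size))   # layer 2: hidden -> vocab scores

context_ids = [1, 3, 4, 6]                         # e.g. a window of 2 words on each side
h = W_in[context_ids].mean(axis=0)                 # average of the context embeddings
scores = h @ W_out
probs = np.exp(scores) / np.exp(scores).sum()      # softmax over the 7 vocab words

print(probs.round(3), probs.argmax())              # predicted centre-word distribution
```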

  • @booksagsm
    @booksagsm 1 year ago

    Is this the first video lecture on NLP?

    • @raviv1752
      @raviv1752 1 year ago

      ruclips.net/p/PLZoTAELRMXVNNrHSKv36Lr3_156yCo6Nn

    • @apurvakhomane4992
      @apurvakhomane4992 1 year ago

      No

    • @booksagsm
      @booksagsm 1 year ago

      @@apurvakhomane4992 do you have the entire NLP playlist link, by any chance?

    • @everythingprohd1270
      @everythingprohd1270 1 year ago

      @@booksagsm
      Here's the playlist ruclips.net/p/PLZoTAELRMXVMdJ5sqbCK2LiM0HhQVWNzm

  • @Umariqbal-kp1nz
    @Umariqbal-kp1nz 1 year ago

    Deep learning road map