Understand Cosine Similarity | 2 Minute Tutorial

  • Published: 15 Jul 2023
  • This is a quick introduction to cosine similarity, one of the most important similarity measures in machine learning!
    Cosine similarity: meaning, formula, and an example! (A runnable sketch of the formula follows below.)
    If you like this video, hit like and subscribe!
    Paper icon used in the video:
    www.flaticon.c... - Paper icons created by monkik - Flaticon
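
To complement the video's formula, here is a minimal sketch of cosine similarity in plain Python. The word-count vectors `a` and `b` are made-up examples, not the ones from the video.

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a · b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))      # dot product a · b
    mag_a = math.sqrt(sum(x * x for x in a))    # magnitude |a|
    mag_b = math.sqrt(sum(x * x for x in b))    # magnitude |b|
    return dot / (mag_a * mag_b)

# Hypothetical word-count vectors for two short sentences.
a = [1, 1, 0, 1]
b = [1, 0, 1, 1]
print(cosine_similarity(a, b))  # 2 / 3 ≈ 0.667
```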

Comments • 35

  • @desecrator718 · 21 days ago · +1

    Thanks for the quick and crisp explanation

  • @muna4840 · 8 months ago · +7

    Such a simple yet on-point explanation... cheers mate!

  • @omkarrajmane9408 · 20 days ago

    Wow, wasn't expecting much since it was such a short video, but this was the most valuable one: so concise, and it made me understand the concept much better! Thanks a lot!

  • @992_cup · 2 months ago · +2

    Thank you for the straightforward examples. On point!

  • @maxreed9666 · 2 months ago · +3

    Brilliantly concise explanation. Have a like. :)

  • @amisharadinkhai · 1 month ago

    Simple and well explained! Thank you!

  • @simo_woman · 1 year ago · +3

    Great to have it explained so briefly! Thanks, and waiting for the next one!

  • @minlingg91 · 11 months ago · +6

    Useful video and clear explanation! Thank you, and keep up the good work!

    • @danielkrei · 10 months ago

      Glad it was helpful!

  • @galacticimaginarium · 3 months ago · +1

    Indeed a pretty amazing explanation, helped me a lot! Thanks.

  • @soberian · 10 months ago · +2

    Great and efficient explanations, you deserve a million views; this helped me a lot. Can you explain K-Nearest Neighbors? I would love to watch your explanation.

    • @danielkrei · 9 months ago · +1

      Great suggestion!

  • @exoticcoder5365 · 10 months ago

    Very useful to have a quick recall on the calculation part 👍

  • @EjazAhmed-pf5tz · 8 months ago

    Unbelievable. I have an exam tomorrow and I'm still writing this comment, which I don't usually do.
    Thank you so much for such a simple explanation and for covering a 2-hour lecture in just 2 minutes. Thank you once again!

  • @benny9794 · 6 months ago

    Super well explained - thanks man!

    • @danielkrei · 6 months ago

      Glad it was helpful!

  • @n.waitforit.z7182 · 10 months ago · +1

    Great stuff, thanks!

    • @danielkrei · 10 months ago

      Glad you liked it!

  • @aleefbilal6211 · 15 days ago

    Bro looks 18 and 30 at the same time.
    Anyway, great and quick explanation. Thanks.

  • @mrbacal7 · 8 months ago

    Thank you, good work!

    • @danielkrei · 8 months ago

      Glad it was helpful!

  • @muna4840 · 7 months ago · +1

    Is the magnitude of B ≈ 1.732 or ≈ 2.236?
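
For anyone checking that arithmetic: 1.732 ≈ √3 and 2.236 ≈ √5, so which value is right depends on B's components in the video. A quick sketch of the magnitude calculation, with illustrative vectors (not necessarily the video's B):

```python
import math

def magnitude(v):
    # Euclidean length: the square root of the sum of squared components.
    return math.sqrt(sum(x * x for x in v))

print(magnitude([1, 1, 1]))  # sqrt(3) ≈ 1.732
print(magnitude([1, 2, 0]))  # sqrt(5) ≈ 2.236
```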

  • @sriharinair227 · 9 months ago

    thanks!

  • @toastrecon · 10 months ago

    I was listening to a talk the other day, and someone mentioned that cosine similarity might eventually be replaced by something called "learned representation"? I may have it wrong, but I've been struggling to find any info on it. Have you heard of that?

    • @danielkrei · 10 months ago · +1

      Thanks for the question!
      The term "learned representation" is usually used when talking about embeddings. Those learned representations (or embeddings) are vectors produced by an algorithm that contain features extracted from the input. Similarity or distance measures are then used to measure how similar or different those embeddings are: for example, how similar two faces or two images of a car are (see the sketch below).

    • @toastrecon · 10 months ago

      @danielkrei Oh, interesting! I was thinking that the vector store held all of the embedded vectors, which were high-dimensional representations of what the model found in terms of similarity or relatedness, and that cosine similarity was the algorithm used to quantify relatedness between two ideas, words, or phrases. I didn't know if there was some new way of representing those connections, or whether adding to the vector store after the initial embeddings were generated would create issues. Maybe you'd be stuck recalculating the entire "matrix" if you added more info? Thanks for the video!

  • @nikola4628 · 10 months ago

    Can you explain why there are 3 vectors? Because there are three sentences? Then you have v1, v2, v3 and you compare v1 with v2 and v1 with v3; why is there no v2 with v3? Is it always the first vector against all the others?

    • @soberian · 10 months ago

      Isn't the task to find which one is most similar to the first one?

    • @danielkrei · 9 months ago

      In this example I was looking for the similarity between the first vector and the other two. Depending on the task, you may compute the score for other pairs too (see the sketch below).
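
A small sketch of what computing every pair would look like; the vectors v1, v2, v3 are placeholders, not the sentence vectors from the video.

```python
import math
from itertools import combinations

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    return dot / (mag_a * mag_b)

vectors = {"v1": [1, 1, 0], "v2": [1, 0, 1], "v3": [0, 1, 1]}
# combinations() yields every unordered pair: (v1, v2), (v1, v3), (v2, v3).
for (name_a, a), (name_b, b) in combinations(vectors.items(), 2):
    print(f"{name_a} vs {name_b}: {cosine_similarity(a, b):.3f}")
```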

  • @Wilhuf1 · 9 months ago

    Oh, I thought cosine similarity ranged only between 0.0 and 1.0.

    • @danielkrei · 9 months ago · +1

      Good point! It really depends on the data you are using, but the theoretical range is the same as the range of cosine: [-1, 1] (see the sketch below).
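
To see that [-1, 1] range concretely, a quick sketch with three hand-picked pairs: identical direction, orthogonal, and opposite direction.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    return dot / (mag_a * mag_b)

print(cosine_similarity([1, 2], [2, 4]))    #  1.0 (same direction)
print(cosine_similarity([1, 0], [0, 1]))    #  0.0 (orthogonal)
print(cosine_similarity([1, 2], [-1, -2]))  # -1.0 (opposite direction)
```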