6 - 6 - Multinomial Naive Bayes - A Worked Example.mp4

  • Published: 7 Jan 2025

Comments • 42

  • @helihobby
    @helihobby 6 years ago +10

    Seriously, this is a good example which is easy to understand.

  • @mhfateen
    @mhfateen 12 years ago +3

    How simple and helpful! It turns out that doing it practically and interactively makes it more understandable than just writing long equations. Thank you, Sir!!

  • @TeamTRAINIT
    @TeamTRAINIT 10 months ago

    really simple, finally I understand multinomial naive bayes

  • @bhaskargarai8371
    @bhaskargarai8371 2 years ago

    Such an awesome example - really helpful for understanding 👍👍

  • @faisalalaisaee6604
    @faisalalaisaee6604 5 years ago +5

    Could you please explain how you got the vocabulary size |V| of 6?

    • @hhvable
      @hhvable 5 years ago +6

      It's the total number of distinct words occurring in the given documents. Those six are Chinese, Beijing, Shanghai, Macao, Tokyo and Japan; the rest are repetitions of those words.
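
A minimal sketch of that count in Python, assuming the four training documents from the worked example (the strings below are the example's documents; the counting itself is just one possible way to arrive at |V|):

    # Collect the distinct words across all training documents.
    docs = [
        "Chinese Beijing Chinese",      # class c
        "Chinese Chinese Shanghai",     # class c
        "Chinese Macao",                # class c
        "Tokyo Japan Chinese",          # class j
    ]
    vocab = set()
    for d in docs:
        vocab.update(d.lower().split())

    print(sorted(vocab))  # ['beijing', 'chinese', 'japan', 'macao', 'shanghai', 'tokyo']
    print(len(vocab))     # 6, i.e. |V|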

  • @LutfarRahmanMilu
    @LutfarRahmanMilu 7 years ago +1

    Thank you. You know how to make things obvious!

    • @featuresky5084
      @featuresky5084 7 years ago +1

      Yes, I agree. I watched this video 2 years ago, and when I needed it today I searched all of YouTube for this specific video. It's such a nice example with a really nice explanation. Edited: punctuation

  • @bagasandriann
    @bagasandriann 1 year ago

    What's the difference between multinomial naive Bayes and basic naive Bayes?

  • @championsplace1646
    @championsplace1646 6 years ago +1

    this video really helped me...thanks!!

  • @gepliprl8558
    @gepliprl8558 8 years ago +1

    Dear Rafael Merino García, thank you!!

  • @piotrchodyko6278
    @piotrchodyko6278 6 years ago

    Wow, really good tutorial. Best wishes from Poland

  • @etaifour2
    @etaifour2 7 years ago

    very good explanation, very very good, thank you for posting this

  • @yawenzheng2960
    @yawenzheng2960 4 years ago

    It's a very nice video, thank you! If I may give a bit of advice, imho, if the "bag of words" were defined and the "features" of each document were written out explicitly, it might be easier for new learners to follow. Great video though, thanks!
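
Along the lines of that suggestion, a small sketch of what the bag-of-words features look like for each training document (the document strings are assumed to be the ones from the worked example; the representation is simply per-word counts over the shared vocabulary):

    from collections import Counter

    docs = {
        "d1": "Chinese Beijing Chinese",
        "d2": "Chinese Chinese Shanghai",
        "d3": "Chinese Macao",
        "d4": "Tokyo Japan Chinese",
    }
    # Shared vocabulary: every distinct word across all documents.
    vocab = sorted({w for text in docs.values() for w in text.lower().split()})
    for name, text in docs.items():
        counts = Counter(text.lower().split())
        # Feature vector = how often each vocabulary word occurs in this document.
        print(name, [counts[w] for w in vocab])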

  • @tuananhtran5071
    @tuananhtran5071 8 months ago

    Why do we have to apply smoothing for Chinese? It already appears in both classes.

  • @namhoang353
    @namhoang353 10 years ago +4

    Dear Rafael Merino García! Thanks for your presentation. I have a problem with multinomial naive Bayes: I can't fully understand the meaning of one of the terms in the formula for the probability of a document in the multinomial naive Bayes model:
    P(d_i | c_j) = P(|d_i|) * |d_i|! * prod_{t=1..|V|} [ P(w_t | c_j)^N_it / N_it! ]
    (prod is the product symbol; the comment box doesn't allow special symbols, so I can't write it properly.)
    My question:
    P(|d_i|) - what does this probability mean? How do I compute it?
    Please explain it to me! Thank you so much.
    Best regards,
    Nam.
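
For what it's worth, a sketch of how the product part of that formula can be evaluated. The smoothed class-c word probabilities and the test document below are taken from the worked example; the length term P(|d_i|) is left as an unspecified constant, since how to compute it is exactly what the comment asks:

    import math

    # P(d_i | c_j) = P(|d_i|) * |d_i|! * prod_t P(w_t | c_j)^N_it / N_it!
    # N_it = number of times word t occurs in document i.
    def multinomial_likelihood(word_probs, word_counts, length_prob=1.0):
        n = sum(word_counts.values())               # |d_i|
        result = length_prob * math.factorial(n)
        for w, count in word_counts.items():
            result *= word_probs[w] ** count / math.factorial(count)
        return result

    # Test document d5 = "Chinese Chinese Chinese Tokyo Japan" under the
    # smoothed class-c probabilities from the example (6/14, 1/14, 1/14).
    probs_c = {"chinese": 6/14, "tokyo": 1/14, "japan": 1/14}
    counts_d5 = {"chinese": 3, "tokyo": 1, "japan": 1}
    print(multinomial_likelihood(probs_c, counts_d5))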

  • @hhvable
    @hhvable 5 years ago

    Perfect explanation

  • @Favwords
    @Favwords 6 years ago

    how to compute P(d5)?

  • @koushikshomchoudhury9108
    @koushikshomchoudhury9108 6 years ago +1

    Why did you not include the word 'Shanghai'? Or did I miss you ignoring it intentionally, since I watched at 2x speed?

    • @ifargantech
      @ifargantech 3 years ago +1

      Why do you listen at 2x speed? hhhhh

  • @kadhumalii7231
    @kadhumalii7231 2 years ago

    where is the multinomial?

  • @Favwords
    @Favwords 6 years ago

    What if there is more than one feature?

  • @hombreazu1
    @hombreazu1 11 years ago

    Thanks for this. So helpful.

  • @sultanismail4970
    @sultanismail4970 3 years ago

    Thank you man...........

  • @adeeluet
    @adeeluet 11 years ago

    What if there is an unknown word in the test document?

    • @angelbeltre8022
      @angelbeltre8022 7 years ago +1

      Probability = 0

    • @hhvable
      @hhvable 5 years ago +1

      For future reference:
      If a word in the text we are trying to classify has never occurred in a class, its probability would be 0.
      However, that would make the probability of the entire sentence 0.
      To avoid this we add 1 - which he also does in the video - so there is some probability of the text belonging to any of the categories. Adding 1 is part of Laplace smoothing.
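
A small sketch of the add-one smoothing described above, using the class-c counts assumed from the worked example (8 word occurrences, vocabulary of 6); an unseen word such as Tokyo gets probability 1/14 instead of 0:

    # Add-one (Laplace) smoothing: every vocabulary word gets its count + 1,
    # so a word never seen in the class still has a small nonzero probability.
    vocab = ["chinese", "beijing", "shanghai", "macao", "tokyo", "japan"]
    class_c_counts = {"chinese": 5, "beijing": 1, "shanghai": 1, "macao": 1}

    total = sum(class_c_counts.values())   # 8 word occurrences in class c
    V = len(vocab)                         # |V| = 6

    smoothed = {w: (class_c_counts.get(w, 0) + 1) / (total + V) for w in vocab}
    print(smoothed["tokyo"])    # 1/14 - unseen in class c, but not zero
    print(smoothed["chinese"])  # 6/14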

  • @Ludibrolo
    @Ludibrolo 11 years ago

    Thank you, this was really helpful!

  • @YusufSaidCANBAZ
    @YusufSaidCANBAZ 7 years ago

    thank you sooo much.

  • @abirkolin4702
    @abirkolin4702 3 months ago

    thanks

  • @ismetozturk947
    @ismetozturk947 5 years ago

    very good

  • @ElizaberthUndEugen
    @ElizaberthUndEugen 5 years ago +1

    I don't see anything multinomial here.

  • @randythamrin5976
    @randythamrin5976 4 years ago

    I saw naive but not multinomial

  • @hiteshochani3990
    @hiteshochani3990 7 years ago

    Thanks!

  • @pavithraradhakrishnan8229
    @pavithraradhakrishnan8229 5 years ago

    to the point

  • @mariel871
    @mariel871 2 years ago

    How about giving credit to the author of the example and the slides (Dan Jurafsky)? You are explaining everything as if it were your own work.

  • @adisatriapangestu9815
    @adisatriapangestu9815 6 years ago +1

    How do you do multi-label classification with this classifier?

    • @koushikshomchoudhury9108
      @koushikshomchoudhury9108 6 years ago +1

      I'm not sure, just an idea: calculate the conditional probabilities of the words for the third, fourth, ..., nth class, then find P(c3|d5), P(c4|d5), ..., P(cn|d5) using the same approach. The P(ci|d5) with the maximum value gives the most probable class for the sample d5.
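
A sketch of that idea: score each class with its log prior plus the summed log of the smoothed word likelihoods, then take the argmax. The priors and per-class counts below are the two classes assumed from the worked example; extending to a third, fourth, ..., nth class just means adding more entries to the dictionaries:

    import math

    priors = {"c": 3/4, "j": 1/4}
    word_counts = {
        "c": {"chinese": 5, "beijing": 1, "shanghai": 1, "macao": 1},
        "j": {"chinese": 1, "tokyo": 1, "japan": 1},
    }
    vocab_size = 6
    test_doc = ["chinese", "chinese", "chinese", "tokyo", "japan"]  # d5

    def log_posterior(cls):
        total = sum(word_counts[cls].values())
        score = math.log(priors[cls])
        for w in test_doc:
            # Add-one smoothed conditional probability P(w | cls).
            p = (word_counts[cls].get(w, 0) + 1) / (total + vocab_size)
            score += math.log(p)
        return score

    scores = {c: log_posterior(c) for c in priors}
    print(scores, "->", max(scores, key=scores.get))   # class "c" wins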