Learning Decision Tree

  • Published: Sep 29, 2024

Comments • 23

  • @mandarmalekar6640
    @mandarmalekar6640 3 years ago +6

    This is probably one of the best tutorials I have found on the net. Thanks a lot, Madam, continue the good work.

  • @theacademician_cse
    @theacademician_cse 4 years ago +3

    Respected Madam, I am learning lots of things from your video. Madam, if P+ = 1 and P- = 0, then Entropy(S) = NaN, since 0*log2(0) is NaN (not a number), not zero. It is asymptotic. Thank You.

    • @sudiptoghosh5740
      @sudiptoghosh5740 4 years ago +4

      No. We take the limit of x*log2(x) as x -> 0. It can be found by L'Hôpital's rule and is equal to 0. It isn't the value of the function; it is the limit we consider as x -> 0 (see the sketch after this reply).
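
      A minimal sketch (not from the lecture) of the convention described in the reply above: in code, the 0*log2(0) term is simply treated as 0, matching the limit of x*log2(x) as x -> 0+, so a pure node gets entropy 0 rather than NaN.

        import math

        def entropy(p_pos):
            """Binary entropy H(S) = -p+*log2(p+) - p-*log2(p-), using the
            convention that 0*log2(0) = 0 (the limit as x -> 0+)."""
            def term(p):
                return 0.0 if p == 0 else -p * math.log2(p)
            return term(p_pos) + term(1.0 - p_pos)

        print(entropy(1.0))  # 0.0 -- a pure node (P+ = 1, P- = 0), not NaN
        print(entropy(0.5))  # 1.0 -- a maximally impure node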

  • @getfitwithakhil
    @getfitwithakhil 6 years ago +2

    Greetings Dr. Sudeshna Sarkar,
    At 21:25, why is Entropy([29+, 35-]) calculated as -29/64 log2 29/64 ...? Why divide by 64 when the formula does not divide by the total number of samples?

    • @nabanitapaul7581
      @nabanitapaul7581 6 years ago +6

      In the entropy formula, p+ is the probability of a positive sample, which is (no. of positive samples)/(total samples).
      Similarly, p- is the probability of a negative sample (a worked version appears after this thread).

    • @getfitwithakhil
      @getfitwithakhil 6 years ago +3

      Ohh gotcha. Thank you very much.
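
      Spelling out the reply above with the numbers from 21:25 (29 positive and 35 negative samples, 64 in total, so p+ = 29/64 and p- = 35/64):

        from math import log2

        # Entropy([29+, 35-]) with p+ = 29/64 and p- = 35/64
        H = -(29/64) * log2(29/64) - (35/64) * log2(35/64)
        print(round(H, 3))  # 0.994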

  • @ankurchanda1112
    @ankurchanda1112 4 months ago

    This is probably the best decision tree explanation I've come across. Thank you, madam.

  • @shivamtiwari8106
    @shivamtiwari8106 15 days ago

    Nice lecture.

  • @zinalpatel8962
    @zinalpatel8962 6 years ago +2

    In the topic 'when to stop', I can't understand the 3rd reason.

    • @konakoteswararao7892
      @konakoteswararao7892 3 years ago

      That means only one positive and one negative point remain... from those you can choose any one of the features.

    • @digvijaymahamuni7722
      @digvijaymahamuni7722 3 years ago

      The 3 reasons are: 1) one class is completely dominant, 2) one class is partially dominant, 3) we run out of attributes.

    • @rahulpramanick2001
      @rahulpramanick2001 1 year ago

      @zinalpatel8962
      According to my interpretation, the 3rd point means that when only a few examples fall under a split, those may be outliers or noisy examples. So, to avoid overfitting the decision tree, we should avoid that split (see the sketch after this thread).
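
      A hypothetical sketch (names and the min_samples threshold are assumptions, not from the lecture) of how the stopping reasons discussed in this thread could be checked in a decision-tree builder:

        from collections import Counter

        def should_stop(labels, remaining_attributes, min_samples=2):
            """Hypothetical stopping check for a decision-tree node."""
            counts = Counter(labels)
            # 1) One class completely dominates: the node is pure.
            if len(counts) == 1:
                return True
            # 2) Very few examples reach this node; splitting further would
            #    likely fit outliers or noise (overfitting), so stop here.
            if len(labels) < min_samples:
                return True
            # 3) We have run out of attributes to split on.
            if not remaining_attributes:
                return True
            return False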

  • @AbdulhakeemEideh
    @AbdulhakeemEideh 2 months ago

    Great Professor!!!

  • @amarsomani6596
    @amarsomani6596 5 years ago +1

    Where can I get the content or PPT?

    • @mctfellow
      @mctfellow 5 years ago +1

      Just enroll in the course on NPTEL
      (Introduction to Machine Learning).

  • @quagzlor
    @quagzlor 5 years ago +2

    Thanks ma'am, I have an exam tomorrow and this really helped.

  • @Sandoverwater
    @Sandoverwater 6 years ago

    Where are the slides?

  • @Uma7473
    @Uma7473 5 years ago

    Thank you, ma'am.

  • @saurabhshukla3080
    @saurabhshukla3080 6 years ago

    Nice series of tutorials.

  • @vickythechamp
    @vickythechamp 5 years ago

    thank you

  • @tapanjeetroy8266
    @tapanjeetroy8266 6 years ago

    Thank you, ma'am.

  • @anumolukumar585
    @anumolukumar585 5 years ago +1

    not good