Decision Trees Geometric Intuition | Entropy | Gini impurity | Information Gain

  • Published: 31 Dec 2024

Comments • 71

  • @AmarSharma60436 · 9 months ago +39

    I wish akinator would detect you someday 😢 😁

    • @mastercode1011 · 3 months ago +2

      You should build your own Akinator with CampusX data

  • @arpitchampuriya9535 · 2 years ago +40

Decision tree on categorical variables - 00:28
    Decision tree on numerical variables - 06:08
    Geometric Intuition - 07:57
    How decision tree works? - 10:49
    Terminology- 13:56
    Common doubts regarding Decision tree - 14:51
    Advantages and Disadvantages of decision tree - 16:12
    Interesting game to understand decision tree - 18:30
    Entropy - 21:42
    Entropy calculation - 25:28
    Entropy vs Probability graph - 31:30
    Entropy for continuous variables - 33:16
    Information gain - 36:12
    GINI Impurity - 41:40
    Why to use GINI over Entropy? - 48:36
    Handling numerical data - 50:28
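The entropy, Gini impurity, and information-gain topics listed above boil down to a few lines of arithmetic; here is a minimal Python sketch (the class counts are invented for illustration, not taken from the video):

```python
import math

def entropy(counts):
    """Shannon entropy (base 2) of a list of class counts."""
    total = sum(counts)
    return sum(-(c / total) * math.log2(c / total) for c in counts if c > 0)

def gini(counts):
    """Gini impurity: 1 - sum(p_i^2). No logarithm, so cheaper to compute."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

# A 50/50 node is maximally impure; a pure node scores 0 on both measures.
print(entropy([5, 5]), gini([5, 5]))    # 1.0 0.5
print(entropy([10, 0]), gini([10, 0]))  # 0.0 0.0
```

Both measures peak at a uniform class mix and hit zero on a pure node, which is why either can serve as the splitting criterion.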

  • @vatsalyaa.m · 1 year ago +104

I had to like when you said you don't know BTS... Respect

  • @prasadbhandarkar7757 · 3 years ago +38

Please start a deep learning tutorial series as well. Your explanation makes everything clear. Thank you so much for one of the best tutorial series on machine learning ❤️

  • @Otaku-Chan01 · 8 months ago +7

    Great lecture, sir. I don't think anyone else could teach entropy, information gain, and Gini impurity this easily.
    Hats off to you, sir.

  • @pournimasawant7872 · 2 months ago +1

Lectures provided by you are amazing!!! ❤️ The way you explain concepts in detail, no one else can. 👍

  • @aienthu2071 · 2 years ago +5

    Your Channel is a Gold Mine 💎🔥🔥

  • @laxmimansukhani1825 · 3 months ago +1

    Great Teacher !! Your videos are my saviours !!

  • @ty_25_tanmayijadhav29 · 9 months ago +1

    Explanation is excellent sir.. thank you

  • @BCS_NOORJABEEN · 1 year ago +1

Thank you so much sir, this is the best video I have seen on decision trees.

  • @chessfreak8813 · 2 years ago +13

I think linear regression assumptions and ROC-AUC/MAPE are still remaining, so could you please make videos on those? I noticed you do research before making videos: when I read blogs on Medium or Towards Data Science, I can relate them to your explanations. Thanks!!

  • @krunalgandhare4026 · 3 years ago +7

I guess it should be −(4/4)·log(4/4) − (0/4)·log(0/4) for the middle node at 39:12

  • @shashankarora2945 · 2 months ago

    This was absolutely brilliant education

  • @kislaykrishna8918 · 3 years ago +1

Great teacher you are. Crystal clear understanding 🙏

  • @ShubhadipBera-c2f · 7 months ago

    simply awesome explanation... it was very helpful. Thanks

  • @adityasoni1639 · 4 months ago +2

    On what basis did you first choose the column Outlook as your root node?

  • @oshoinspires_1 · 2 months ago

    Best ever explanation, thanks Sir

  • @DataTalesByMuskan · 1 month ago +1

    Akinator can guess Nitish sir as well !!🥳

  • @kanakkanak1520 · 2 years ago +1

    Very clear explanation as always. Thanks!

  • @mohammadvahidansari8212 · 3 years ago +8

    Great work sir!!!!!
    Your explanation is very good.
    Will you upload similar videos on SVM in the future?

    • @campusx-official · 3 years ago +6

      Yes, all the algorithms will be covered one by one.

  • @shekharbanerjee9738 · 1 year ago +8

    Does not know BTS, best teacher ever

  • @PriyanshuMohanty-k7i · 5 months ago +1

    20:18.... respect

  • @devyaniwaje2758 · 2 years ago +1

    great explanation on every concept

  • @abhishekagarwal785 · 2 years ago +7

    Thanks for the awesome video, really liked it! At 40:31, should it be 0.94 − 0.69?

  • @sonalkudva1839 · 10 months ago

    @CampusX, could you please tell me where I can find the link to the paper that explains the difference between Gini impurity and entropy?

  • @HimanshuSharma-we5li · 2 years ago

    You are a brilliant teacher, sir.

  • @karanparashar6824 · 10 months ago

    Very informative video.

  • @vishnujatav6329 · 2 years ago +1

    Understood decision trees a little bit. Thank you

  • @ANONYMOUS-xj1kd · 16 days ago

    At 5:09, I guess the decision tree is a bit wrong... windy and humidity should be swapped in it.

  • @HARSHYadav-pw8oo · 3 months ago

    At 29:34, I think base 3 should be used instead of base 2, since there are 3 possible answers. Can someone please clarify in this thread? Thanks in advance.

  • @parthamete821 · 2 months ago

    Sir, I can't understand the tree at 5:10. Shouldn't sunny and rainy be swapped?

  • @rockykumarverma980 · 3 months ago

    Thank you so much sir🙏🙏🙏

  • @thatsfantastic313 · 2 years ago

    ♥♥ No words Sir no words

  • @abanticaadhikary6941 · 1 year ago

    Thank you for making it so easy and simple

  • @ParthivShah · 9 months ago

    Thank You Sir.

  • @sumtpathak19 · 10 months ago +1

    Sir, there may be a mistake: when calculating information gain you took E(parent) = 0.97, but I guess E(parent) = 0.94. Please check it once.

  • @pulkitsinghal9781 · 3 months ago

    I think it should be 0.94 for the entropy of the parent node at 40:21.
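Several comments flag the parent entropy as 0.94. For the classic 14-row play-tennis dataset (9 yes / 5 no, split on Outlook), that figure is easy to check; a Python sketch, assuming those standard counts:

```python
import math

def entropy(counts):
    """Shannon entropy (base 2) of a list of class counts."""
    total = sum(counts)
    return sum(-(c / total) * math.log2(c / total) for c in counts if c > 0)

# Parent node: 9 "yes" and 5 "no" out of 14 rows.
parent = entropy([9, 5])

# Outlook splits the rows into Sunny (2 yes / 3 no),
# Overcast (4 yes / 0 no) and Rainy (3 yes / 2 no).
children = [[2, 3], [4, 0], [3, 2]]
weighted = sum(sum(c) / 14 * entropy(c) for c in children)

info_gain = parent - weighted
print(round(parent, 2), round(weighted, 2), round(info_gain, 2))  # 0.94 0.69 0.25
```

So the parent entropy is indeed about 0.94, and the weighted child entropy about 0.69, matching the numbers the commenters cite.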

  • @MWASI-kk8nn · 17 days ago

    thank you sir

  • @battelbots_shots5896 · 10 months ago

    Sir, in example 2 sunny depends on humidity and rain depends on wind, but the model perhaps showed it the other way around.

  • @finestopedia5352 · 2 years ago

    Just to make things clear: lim x→0⁺ of x·log(x) = 0, hence −(0/5)·log(0/5) is taken as 0. We cannot plug in x = 0 directly, since log(0) is not defined.
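That limit can be verified numerically; a small Python sketch:

```python
import math

# x * log2(x) shrinks toward 0 as x approaches 0 from the right,
# which is why the 0/5 term contributes nothing to the entropy sum.
for x in [0.1, 0.01, 0.001, 1e-6]:
    print(x, x * math.log2(x))

# Implementations therefore simply skip zero-probability terms:
def entropy(probs):
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0, 0.0]))  # 0.0 -- a pure node has zero entropy
```

Skipping the p = 0 terms is the standard convention and gives the same result as taking the limit.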

  • @ujefmalek77 · 2 years ago +1

    Thank you ❤️

  • @rashidyaseen6270 · 2 months ago

    Can we get the slides that you are showing? 😊

  • @GauravKumarGupta-fn8pw · 1 year ago

    What to do if we have more than one attribute?

  • @saurabhnitkian · 1 year ago

    At 7:50, if the first decision "PL < 2.0" is false, then PL must be greater than 2.0, which makes the second decision "PL < 1.5" wrong.

  • @gaurikarale7248 · 2 years ago

    Thank you so much 🥰

  • @ambrosiasociety · 1 year ago

    What if we have more than one column? What will we do then?

  • @Vernika19 · 1 year ago +1

    Hi CampusX, although the explanation is great, I would advise using the word "certainty" or "order" in place of "knowledge", because the more certain we are about the data, the less entropy there is.

  • @prathameshchaudhari4937 · 2 years ago +1

    I would like to support this channel with money, but the link you provide has a minimum payment of Rs 500. Can you please provide an alternate method?

  • @statisticsguruji856 · 2 years ago

    thank you sir 🙂

  • @LYRICS.77RR_ · 17 days ago

    This video is very theoretical...

  • @fullthrottlevishal · 7 months ago

    Sir, the entropy of the parent while calculating information gain is 0.94, not 0.97.

  • @monikachivate6287 · 2 years ago

    Thank you sir

  • @MuhammadNaeem-cz6ub · 1 year ago

    Max entropy for an n-class problem is log₂(n).
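That claim (maximum entropy for n classes is log₂ n, attained by the uniform distribution) can be checked directly; a quick sketch:

```python
import math

def entropy(probs):
    """Shannon entropy (base 2) of a probability distribution."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# The uniform distribution over n classes attains the maximum, log2(n).
for n in [2, 3, 4, 8]:
    uniform = [1 / n] * n
    print(n, entropy(uniform), math.log2(n))
```

For n = 2 this gives the familiar maximum of 1 bit, which is why the binary entropy-vs-probability curve peaks at 1.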

  • @mohammadzahid3431 · 1 year ago

    At 28:28, the second table's entropy should be 0.21713, I think.

  • @Julaiarvind · 3 years ago +1

    Sir, please make a video on Linear Regression Assumptions.

  • @Faiz2k3 · 1 year ago

    Sound quality 😮

  • @kiranravish4700 · 11 months ago

    Can I get the video notes?

  • @SKILLCRYSTAL · 2 years ago +8

    20:14 BTS army be like: what an insult 😂

  • @AbdulRahman-zp5bp · 3 years ago

    Sir, please share those slides.
    Thanks :)

  • @Adarshhb767 · 1 month ago

    "I don't know what is BTS"

  • @waghakshay447 · 11 months ago +7

    Sir unknowingly roasted BTS 😂😂 "I don't know BTS"

  • @univer_se1306 · 11 months ago

    Didn't understand the last 8 minutes of the video.

  • @madhusmitamajhi9351 · 1 year ago +1

    BTS is a K-pop group and they are famous worldwide. Just letting you know.

    • @pavanyadavalli6888 · 6 months ago

      I was expecting this kind of reply right after sir said he doesn't know BTS 😂😂

  • @ali75988 · 11 months ago +1

    A better approach would have been a single worked example of how it all fits together. There is an example, but with Notepad it is hard to follow what he did and where it came from. Secondly, too much dry content in one lecture.
    Feel free to disagree. I ain't Hitler.