Decision Trees Geometric Intuition | Entropy | Gini impurity | Information Gain

  • Published: 3 Jul 2024
  • Decision Trees use metrics like Entropy and Gini Impurity to make split decisions. Entropy measures the disorder or randomness in a dataset, while Gini Impurity quantifies the probability of misclassifying a randomly chosen element. Information Gain, derived from these metrics, guides the tree in selecting the most informative features for optimal data splits, contributing to effective decision-making in classification tasks. (A small code sketch of these formulas follows the time stamps below.)
    ============================
    Do you want to learn from me?
    Check my affordable mentorship program at : learnwith.campusx.in/s/store
    ============================
    📱 Grow with us:
    CampusX's LinkedIn: / campusx-official
    CampusX on Instagram for daily tips: / campusx.official
    My LinkedIn: / nitish-singh-03412789
    Discord: / discord
    E-mail us at support@campusx.in
    ⌚Time Stamps⌚
    00:00 - Intro
    00:14 - Example 1
    03:00 - Where is the Tree?
    04:00 - Example 2
    06:09 - What if we have numerical data?
    07:57 - Geometric Intuition
    10:50 - Pseudo Code
    11:54 - Conclusion
    14:00 - Terminology
    14:53 - Unanswered Questions
    16:16 - Advantages and Disadvantages
    18:04 - CART
    18:45 - Game Example
    21:45 - How do decision trees work? / Entropy
    22:15 - What is Entropy
    25:40 - How to calculate Entropy
    29:40 - Observations
    31:35 - Entropy vs Probability
    36:20 - Information Gain
    41:40 - Gini Impurity
    50:30 - Handling Numerical Data
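
    As a quick reference for the metrics described above, here is a minimal Python sketch of entropy, Gini impurity, and information gain. The 9-yes / 5-no "play tennis" counts in the demo are assumed for illustration and are not taken from the video.

      import numpy as np

      def entropy(counts):
          # Shannon entropy (base 2) of a class-count vector; 0*log(0) is treated as 0.
          p = np.asarray(counts, dtype=float)
          p = p / p.sum()
          p = p[p > 0]                     # drop zero-probability classes
          return -np.sum(p * np.log2(p))

      def gini(counts):
          # Gini impurity: probability of misclassifying a randomly drawn element.
          p = np.asarray(counts, dtype=float)
          p = p / p.sum()
          return 1.0 - np.sum(p ** 2)

      def information_gain(parent_counts, children_counts):
          # Parent entropy minus the weighted average entropy of the child nodes.
          n = sum(sum(c) for c in children_counts)
          weighted = sum(sum(c) / n * entropy(c) for c in children_counts)
          return entropy(parent_counts) - weighted

      # Assumed "play tennis" data: 9 yes / 5 no, split three ways on Outlook.
      print(entropy([9, 5]))                                      # ~0.940
      print(gini([9, 5]))                                         # ~0.459
      print(information_gain([9, 5], [[2, 3], [4, 0], [3, 2]]))   # ~0.247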

Comments • 51

  • @vatsalyaa.m
    @vatsalyaa.m 10 months ago +43

    I had to like when you said you don't know BTS... Respect

  • @prasadbhandarkar7757
    @prasadbhandarkar7757 3 years ago +24

    Please start a deep learning tutorial series as well. Your explanation makes everything clear. Thank you so much for one of the best tutorial series on machine learning ❤️

  • @arpitchampuriya9535
    @arpitchampuriya9535 1 year ago +30

    Decision tree on categorical variables - 00:28
    Decision tree on numerical variables - 06:08
    Geometric intuition - 07:57
    How a decision tree works - 10:49
    Terminology - 13:56
    Common doubts regarding decision trees - 14:51
    Advantages and disadvantages of decision trees - 16:12
    Interesting game to understand decision trees - 18:30
    Entropy - 21:42
    Entropy calculation - 25:28
    Entropy vs probability graph - 31:30
    Entropy for continuous variables - 33:16
    Information gain - 36:12
    Gini impurity - 41:40
    Why use Gini over entropy? - 48:36
    Handling numerical data - 50:28

  • @aienthu2071
    @aienthu2071 1 year ago +3

    Your Channel is a Gold Mine 💎🔥🔥

  • @kislaykrishna8918
    @kislaykrishna8918 2 years ago +1

    Great teacher you are. Crystal clear understanding 🙏

  • @kanakkanak1520
    @kanakkanak1520 1 year ago +1

    Very clear explanation as always. Thanks!

  • @HimanshuSharma-we5li
    @HimanshuSharma-we5li 1 year ago

    You are a brilliant teacher, sir.

  • @BCS_NOORJABEEN
    @BCS_NOORJABEEN 8 months ago +1

    Thank you so much sir, this is the best video I have seen on decision trees.

  • @ty_25_tanmayijadhav29
    @ty_25_tanmayijadhav29 3 months ago

    Explanation is excellent, sir. Thank you

  • @user-ll1fc1rd5t
    @user-ll1fc1rd5t 1 month ago

    simply awesome explanation... it was very helpful. Thanks

  • @devyaniwaje2758
    @devyaniwaje2758 1 year ago +1

    great explanation on every concept

  • @prateeksinha08
    @prateeksinha08 2 months ago

    Great lecture sir. Whatever topics you taught (entropy, information gain, Gini impurity), I don't think anyone else could teach them this easily.
    Hats off to you sir

  • @abanticaadhikary6941
    @abanticaadhikary6941 1 year ago

    Thank you for making it so easy and simple

  • @thatsfantastic313
    @thatsfantastic313 1 year ago

    ♥♥ No words Sir no words

  • @ParthivShah
    @ParthivShah 3 months ago

    Thank You Sir.

  • @ujefmalek77
    @ujefmalek77 2 years ago +1

    Thank you ❤️

  • @gaurikarale7248
    @gaurikarale7248 2 years ago

    Thank you so much 🥰

  • @karanparashar6824
    @karanparashar6824 3 months ago

    Very informative video.

  • @krunalgandhare4026
    @krunalgandhare4026 2 years ago +7

    I guess it should be -(4/4)log(4/4) - (0/4)log(0/4) for the middle node at 39:12
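
    For reference, that pure-node entropy indeed works out to zero, using the usual convention that $0 \log 0 = 0$:

    $$H = -\tfrac{4}{4}\log_2\tfrac{4}{4} - \tfrac{0}{4}\log_2\tfrac{0}{4} = -(1)(0) - 0 = 0$$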

  • @vishnujatav6329
    @vishnujatav6329 2 years ago +1

    I understand decision trees a little bit now. Thank you

  • @statisticsguruji856
    @statisticsguruji856 1 year ago

    thank you sir 🙂

  • @chessfreak8813
    @chessfreak8813 2 years ago +10

    I think linear regression assumptions and ROC AUC / MAPE are still remaining, so could you please make videos on those? I have observed that you do research and then make videos, because when I read blogs on Medium or Towards Data Science I can relate them to your explanations. Thanks!!

  • @monikachivate6287
    @monikachivate6287 2 years ago

    Thank you sir

  • @abhishekagarwal785
    @abhishekagarwal785 2 years ago +6

    Thanks for the awesome video, really liked it! At 40:31 should it be 0.94 - 0.69?

  • @shekharbanerjee9738
    @shekharbanerjee9738 1 year ago +4

    Does not know BTS, best teacher ever

  • @finestopedia5352
    @finestopedia5352 1 year ago

    Just to make things clear: lim as x tends to 0 of x·log(x) = 0, hence -(0/5)log(0/5) is 0. We cannot simply put x = 0, since log(0) is not defined.
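
    That limit can be checked with L'Hôpital's rule after rewriting the product as a quotient:

    $$\lim_{x\to 0^+} x\log_2 x = \lim_{x\to 0^+}\frac{\log_2 x}{1/x} = \lim_{x\to 0^+}\frac{1/(x\ln 2)}{-1/x^2} = \lim_{x\to 0^+}\frac{-x}{\ln 2} = 0$$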

  • @mohammadvahidansari8212
    @mohammadvahidansari8212 3 years ago +8

    Great work sir!
    Your explanation is very good.
    Will you upload this type of video on SVM in the future?

    • @campusx-official
      @campusx-official  3 years ago +6

      Yes, all the algorithms will be covered one by one

  • @sonalkudva1839
    @sonalkudva1839 4 months ago

    @CampusX could you please tell me where I can find the link to the paper that explains the difference between Gini impurity and entropy?

  • @GauravKumarGupta-fn8pw
    @GauravKumarGupta-fn8pw 7 months ago

    What do we do if we have more than one attribute?

  • @rajat9302
    @rajat9302 4 months ago

    Sir, there may be a mistake: while calculating information gain you take E(parent) = 0.97, but I guess E(parent) = 0.94. Please check it once.
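
    For what it's worth, 0.94 is the parent entropy of the classic 9-yes / 5-no "play tennis" dataset (assuming that is the example being discussed):

    $$E(\text{parent}) = -\tfrac{9}{14}\log_2\tfrac{9}{14} - \tfrac{5}{14}\log_2\tfrac{5}{14} \approx 0.940$$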

  • @battelbots_shots5896
    @battelbots_shots5896 4 months ago

    Sir, in example 2 it is dependent on Sunny and Humidity, and on Rain and Wind; the model perhaps showed it the other way around.

  • @ambrosiasociety
    @ambrosiasociety 1 year ago

    What do we do if we have more than one column?

  • @AmarSharma60436
    @AmarSharma60436 3 months ago

    I wish Akinator would detect you someday 😢 😁

  • @88oitlgftk
    @88oitlgftk 10 months ago

    Sound quality 😮

  • @mohammadzahid3431
    @mohammadzahid3431 9 months ago

    At 28:28, the entropy in the second table should be 0.21713, I think.

  • @MuhammadNaeem-cz6ub
    @MuhammadNaeem-cz6ub 7 months ago

    Max entropy for an n-class problem is log n to the base 2.
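
    This maximum is attained by the uniform distribution $p_i = 1/n$:

    $$H_{\max} = -\sum_{i=1}^{n} \tfrac{1}{n}\log_2\tfrac{1}{n} = \log_2 n$$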

  • @prathameshchaudhari4937
    @prathameshchaudhari4937 1 year ago +1

    I would like to support this channel with money, but the link you provide has a minimum payment of Rs 500. Can you please provide an alternate method?

  • @saurabhnitkian
    @saurabhnitkian 1 year ago

    At 7:50, if the first decision PL < 2.0 is false, then PL should be greater than 2.0, making the second decision "PL < 1.5" wrong.

  • @fullthrottlevishal
    @fullthrottlevishal 1 month ago

    The entropy while calculating information gain is 0.94, not 0.97, sir.

  • @Julaiarvind
    @Julaiarvind 2 years ago +1

    Sir, please make a video on Linear Regression Assumptions

  • @AbdulRahman-zp5bp
    @AbdulRahman-zp5bp 2 years ago

    Sir, please share those slides.
    Thanks :)

  • @Vernika19
    @Vernika19 9 months ago

    Hi CampusX, although the explanation is great, I would advise using the word "certainty" or "order" in place of "knowledge", because the more certain we are about the data, the less entropy there is.

  • @kiranravish4700
    @kiranravish4700 5 months ago

    Can I get the video notes?

  • @ranirathore4176
    @ranirathore4176 2 years ago +7

    20:14 BTS army be like: what an insult! 😂

  • @univer_se1306
    @univer_se1306 5 months ago

    Did not understand the last 8 minutes of the video

  • @waghakshay447
    @waghakshay447 4 months ago +5

    Sir unknowingly roasted BTS 😂😂 "I don't know BTS"

  • @ali75988
    @ali75988 5 months ago

    A better approach would have been one single example of how it works; there is an example, but not the way he teaches with a notepad, where it is hard to tell what is coming from where. Secondly, too much dry content in one lecture.
    Feel free to disagree. I ain't Hitler.

  • @madhusmitamajhi9351
    @madhusmitamajhi9351 7 months ago +1

    BTS is a K-pop group and they are famous worldwide. Just letting you know.

    • @pavanyadavalli6888
      @pavanyadavalli6888 26 days ago

      I was expecting this kind of reply right after sir said he doesn't know BTS 😂😂