Decision Trees Geometric Intuition | Entropy | Gini impurity | Information Gain
- Published: 3 Jul 2024
- Decision Trees use metrics like Entropy and Gini Impurity to make split decisions. Entropy measures the disorder or randomness in a dataset, while Gini Impurity quantifies the probability of misclassifying a randomly chosen element. Information Gain, derived from these metrics, guides the tree in selecting the most informative features for optimal data splits, contributing to effective decision-making in classification tasks.
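For readers who want to see these formulas in action, here is a minimal Python sketch (my own illustration, not code from the video; all function names are mine) of entropy, Gini impurity, and information gain:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits: -sum(p * log2(p)) over the class proportions."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini_impurity(labels):
    """Gini impurity: 1 - sum(p^2) over the class proportions."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, children):
    """Parent entropy minus the size-weighted entropy of the child nodes."""
    n = len(parent)
    weighted = sum(len(child) / n * entropy(child) for child in children)
    return entropy(parent) - weighted

# Toy example: a 9 "yes" / 5 "no" parent node split into two children
parent = ["yes"] * 9 + ["no"] * 5
left = ["yes"] * 6 + ["no"] * 2
right = ["yes"] * 3 + ["no"] * 3
print(entropy(parent))                          # ~0.940 bits
print(gini_impurity(parent))                    # ~0.459
print(information_gain(parent, [left, right]))  # ~0.048
```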
============================
Do you want to learn from me?
Check my affordable mentorship program at: learnwith.campusx.in/s/store
============================
📱 Grow with us:
CampusX's LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
E-mail us at support@campusx.in
⌚Time Stamps⌚
00:00 - Intro
00:14 - Example 1
03:00 - Where is the Tree?
04:00 - Example 2
06:09 - What if we have numerical data?
07:57 - Geometric Intuition
10:50 - Pseudo Code
11:54 - Conclusion
14:00 - Terminology
14:53 - Unanswered Questions
16:16 - Advantages and Disadvantages
18:04 - CART
18:45 - Game Example
21:45 - How do decision trees work? / Entropy
22:15 - What is Entropy
25:40 - How to calculate Entropy
29:40 - Observations
31:35 - Entropy vs Probability
36:20 - Information Gain
41:40 - Gini Impurity
50:30 - Handling Numerical Data
I had to hit like when you said you don't know BTS... Respect
lol
Please start a deep learning tutorial series as well. Your explanation makes each and every thing clear. Thank you so much for one of the best tutorial series on machine learning ❤️
Decision Tree on categorical variables - 00:28
Decision tree on numerical variables - 06:08
Geometric Intuition - 07:57
How a decision tree works - 10:49
Terminology - 13:56
Common doubts regarding Decision tree - 14:51
Advantages and Disadvantages of decision tree - 16:12
Interesting game to understand decision tree - 18:30
Entropy - 21:42
Entropy calculation - 25:28
Entropy vs Probability graph - 31:30
Entropy for continuous variables - 33:16
Information gain - 36:12
GINI Impurity - 41:40
Why use Gini over Entropy? - 48:36
Handling numerical data - 50:28
Your Channel is a Gold Mine 💎🔥🔥
Great teacher you are . Crystal clear understanding 🙏
Very clear explanation as always. Thanks!
You are a brilliant teacher sir .
Thank you so much sir, this is the best video I have seen on decision trees.
Explanation is excellent sir.. thank you
simply awesome explanation... it was very helpful. Thanks
great explanation on every concept
Great lecture sir. Whatever topics you taught, entropy, Information Gain, Gini impurity, I don't think anyone else could teach them with this much ease.
Hats off to you sir
Thank you for making it so easy and simple
♥♥ No words Sir no words
Thank You Sir.
Thank you ❤️
Thank you so much 🥰
Very informative video.
I guess it should be −(4/4)log(4/4) − (0/4)log(0/4) for the middle node at 39:12
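A quick check of this correction (assuming, as the comment implies, that the middle node holds 4 samples of one class and 0 of the other): with the convention 0·log(0) = 0, a pure node's entropy comes out exactly zero.

```python
import math

def entropy(counts):
    # 0 * log2(0) is treated as 0 (its limit as p -> 0+), so empty classes are skipped
    n = sum(counts)
    terms = sum((c / n) * math.log2(c / n) for c in counts if c > 0)
    return -terms if terms else 0.0

print(entropy([4, 0]))  # 0.0, since -(4/4)*log2(4/4) = 0 and the 0/4 term vanishes
```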
Understood decision trees a little bit now. Thank you
thank you sir 🙂
I think linear regression assumptions, ROC-AUC, and MAPE are still remaining, so could you please make videos on those? I have observed that you do research and then make videos, because when I read blogs on Medium or Towards Data Science I can relate them to your explanations. Thanks!!
Thank you sir
Thanks for the awesome video, really liked it! At 40:31, should it be 0.94 − 0.69?
Does not know BTS, best teacher ever
Just to make things clear: lim(x→0⁺) x·log(x) = 0, hence −(0/5)log(0/5) is 0. We cannot simply plug in x = 0, as log(0) is not defined.
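For completeness, that limit follows from L'Hôpital's rule (a standard derivation, not shown in the video):

$$\lim_{x \to 0^+} x \log_2 x = \lim_{x \to 0^+} \frac{\log_2 x}{1/x} = \lim_{x \to 0^+} \frac{1/(x \ln 2)}{-1/x^2} = \lim_{x \to 0^+} \frac{-x}{\ln 2} = 0$$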
Great work sir!!!!!
your explanation is too good.....
Will you upload this type of video on SVM in the future?
Yes, all the algorithms will be covered one by one.
@CampusX could you please tell me where I can find the link to the paper that explains the difference between Gini impurity and entropy?
What to do if we have more than one attribute?
Sir, there may be a mistake: while calculating information gain you take E(parent) = 0.97, but I guess E(parent) = 0.94. Please check it once.
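For reference, 0.94 is indeed the parent entropy if the node holds 9 "yes" and 5 "no" examples, as in the classic play-tennis dataset that examples like this usually use (an assumption on my part; I have not re-checked the video's counts):

```python
import math

# Assuming a parent node with 9 "yes" and 5 "no" samples (classic play-tennis data)
p_yes, p_no = 9 / 14, 5 / 14
print(-(p_yes * math.log2(p_yes) + p_no * math.log2(p_no)))  # ~0.940
```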
Sir, in example 2, sunny depends on humidity and rain depends on wind, but the model perhaps showed it the other way around.
What if we have more than one column? Then what will we do?
I wish Akinator would guess you someday 😢 😁
Sound quality 😮
At 28:28, the second table's entropy should be 0.21713, I think.
The max entropy for an n-class problem is log₂(n).
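That checks out: entropy peaks at the uniform distribution, where each of the n classes has probability 1/n, so

$$H_{\max} = -\sum_{i=1}^{n} \frac{1}{n} \log_2 \frac{1}{n} = n \cdot \frac{1}{n} \log_2 n = \log_2 n$$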
I would like to support this channel with money, but the link you provided has a minimum payment of 500 rs. Can you please provide an alternate method?
At 7:50, if the first decision "PL < 2.0" is false, then PL should be greater than 2.0, making the second decision "PL < 1.5" wrong.
Sir, the parent entropy while calculating information gain is 0.94, not 0.97.
sir, please make a video on Linear Regression Assumptions
sir please share those slides
thanks :)
Hi CampusX, although the explanation is great, I would advise using the word "certainty" or "order" in place of "knowledge", because the more certain we are about the data, the less entropy there is.
Can I get the video notes?
20:14 BTS army be like: what an insult 😂
Did not understand the last 8 minutes of the video.
Sir unknowingly roasted BTS 😂😂 "I don't know BTS"
A better approach would have been one single end-to-end example of how it works; there is an example, but the way it's taught on the notepad, it's hard to tell what is happening where. Secondly, too much dry content in one lecture.
Feel free to disagree. I ain't Hitler.
BTS is a K-pop group and they are famous worldwide. Just letting you know.
I was expecting this kind of reply right after sir said he doesn't know BTS 😂😂