I wish Akinator would detect you someday 😢 😁
You should build your own Akinator with CampusX data.
Decision tree on categorical variables - 00:28
Decision tree on numerical variables - 06:08
Geometric intuition - 07:57
How does a decision tree work? - 10:49
Terminology - 13:56
Common doubts regarding decision trees - 14:51
Advantages and disadvantages of decision trees - 16:12
Interesting game to understand decision trees - 18:30
Entropy - 21:42
Entropy calculation - 25:28
Entropy vs probability graph - 31:30
Entropy for continuous variables - 33:16
Information gain - 36:12
Gini impurity - 41:40
Why use Gini over entropy? - 48:36
Handling numerical data - 50:28
I had to hit like when you said you don't know BTS... Respect.
lol
Please start a deep learning tutorial series as well. Your explanation makes each and every thing clear. Thank you so much for one of the best tutorial series on machine learning ❤️
Great lecture, sir. The topics you taught (entropy, information gain, Gini impurity), I don't think anyone else could teach them this easily.
Hats off to you, sir.
The lectures you provide are amazing!!! ❤️ The way you explain concepts in detail, no one else can. 👍
Your Channel is a Gold Mine 💎🔥🔥
Great teacher!! Your videos are my saviours!!
The explanation is excellent, sir. Thank you.
Thank you so much, sir. This is the best video I have seen on decision trees.
I think linear regression assumptions and ROC-AUC/MAPE are still remaining, so could you please make videos on those? I've noticed you do research before making videos, because when I read blogs on Medium or Towards Data Science I can relate them to your explanations. Thanks!!
I guess it should be -(4/4)log(4/4) - (0/4)log(0/4) for the middle node at 39:12.
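For anyone verifying this by hand: with the 0·log(0) = 0 convention, a pure node's entropy works out to exactly 0. A minimal sketch in Python (the `entropy` helper and the counts are just for illustration):

```python
import math

def entropy(counts):
    """Shannon entropy (in bits) of a node, given its class counts."""
    total = sum(counts)
    # Zero-count classes contribute 0 (the 0*log(0) = 0 convention),
    # so they are simply skipped. Note log2(total/c) == -log2(c/total).
    return sum((c / total) * math.log2(total / c) for c in counts if c > 0)

print(entropy([4, 0]))  # pure node  -> 0.0
print(entropy([2, 2]))  # 50/50 node -> 1.0
```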
This was absolutely brilliant teaching.
You are a great teacher. Crystal clear understanding 🙏
Simply awesome explanation... it was very helpful. Thanks.
On what basis did you first choose the column Outlook as your root node?
Best ever explanation, thanks sir.
Akinator can guess Nitish sir as well !!🥳
Very clear explanation as always. Thanks!
Great work sir!!!!!
Your explanation is too good...
Will you upload this type of video on SVM later in the future?
Yes, all the algorithms will be covered one by one.
Does not know BTS, best teacher ever
hehehe
20:18.... respect
Great explanation of every concept.
Thanks for the awesome video, really liked it! At 40:31, should it be 0.94 - 0.69?
Yes, correct.
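For anyone who wants to re-check the arithmetic: information gain is the parent's entropy minus the weighted average of the children's entropies. A hedged sketch, assuming the video uses the classic play-tennis table (the 0.94 parent entropy matches its 9-yes / 5-no split on Outlook):

```python
import math

def entropy(counts):
    total = sum(counts)
    return sum((c / total) * math.log2(total / c) for c in counts if c > 0)

def information_gain(parent, children):
    """IG = E(parent) - weighted average of the children's entropies."""
    n = sum(parent)
    weighted = sum(sum(child) / n * entropy(child) for child in children)
    return entropy(parent) - weighted

# Classic play-tennis split on Outlook: parent = 9 yes / 5 no;
# children = sunny [2, 3], overcast [4, 0], rainy [3, 2].
parent = [9, 5]
children = [[2, 3], [4, 0], [3, 2]]
print(round(entropy(parent), 2))                     # 0.94
print(round(information_gain(parent, children), 3))  # 0.940 - 0.694 ≈ 0.247
```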
@CampusX, could you please tell me where I can find the link to the paper that explains the difference between Gini impurity and entropy?
You are a brilliant teacher, sir.
Very informative video.
I understand decision trees a little bit now. Thank you.
At 5:09, I guess the decision tree is a bit wrong... windy and humidity should be swapped in it.
At 29:34, I think base 3 should be there instead of base 2, since we have 3 possible answers. Can someone please clarify in this thread? Thanks in advance.
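On the base question: the base of the log is just a choice of unit (base 2 gives bits), not something tied to the number of classes. With base 3, a uniform 3-class node has entropy exactly 1, while base 2 gives log₂(3) ≈ 1.585; either way, changing the base only rescales every entropy by the same constant, so it never changes which split wins. A quick sketch to see both (the helper is illustrative):

```python
import math

def entropy(probs, base=2):
    # p * log_base(1/p) == -p * log_base(p); zero probabilities are skipped.
    return sum(p * math.log(1 / p, base) for p in probs if p > 0)

uniform3 = [1/3, 1/3, 1/3]
print(entropy(uniform3, base=2))  # ≈ 1.585 (= log2(3)) bits
print(entropy(uniform3, base=3))  # ≈ 1.0 in base-3 units
```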
Sir, I can't understand the tree at 5:10. Shouldn't sunny and rainy be swapped?
Thank you so much sir🙏🙏🙏
♥♥ No words Sir no words
Thank you for making it so easy and simple
Thank You Sir.
Sir, there may be a mistake: while calculating information gain you took E(parent) = 0.97, but I guess E(parent) = 0.94. Please check it once.
I think it should be 0.94 for the entropy of the parent node at 40:21.
thank you sir
Sir, in example 2 the split should depend on humidity under sunny and on wind under rain, but the model perhaps showed it the other way around.
Just to make things clear: lim(x→0) x·log(x) = 0, hence -(0/5)·log(0/5) is taken as 0. We cannot simply put x = 0, as log(0) is not defined.
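For completeness, that limit follows from L'Hôpital's rule in one line:

```latex
\lim_{x \to 0^+} x \log x
  = \lim_{x \to 0^+} \frac{\log x}{1/x}
  \overset{\text{L'Hôpital}}{=} \lim_{x \to 0^+} \frac{1/x}{-1/x^{2}}
  = \lim_{x \to 0^+} (-x) = 0
```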
Thank you ❤️
Can we get the slides that you are showing? 😊
What do we do if we have more than one attribute?
At 7:50, if the first decision PL < 2.0 is false, then PL should be greater than or equal to 2.0, making the second decision "PL < 1.5" wrong.
Thank you so much 🥰
What if we have more than one column? What will we do then?
Hi CampusX, although the explanation is great, I would advise using the word "certainty" or "order" in place of "knowledge", because the more certain we are about the data, the less entropy there is.
I would like to support this channel with money, but the link you provide has a minimum payment of ₹500. Can you please provide an alternate method?
thank you sir 🙂
This video is very theoretical...
The parent entropy used while calculating information gain is 0.94, not 0.97, sir.
Thank you sir
Max entropy for an n-class problem is log₂(n).
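Right, and the maximum is attained at the uniform distribution (a consequence of Jensen's inequality), where the value falls out directly:

```latex
H_{\max} = -\sum_{i=1}^{n} \frac{1}{n} \log_2 \frac{1}{n}
         = n \cdot \frac{1}{n} \log_2 n
         = \log_2 n
```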
At 28:28 (the second table's entropy should be 0.21713, I think).
Sir, please make a video on linear regression assumptions.
Sound quality 😮
Can I get the video notes?
20:14 BTS army be like: what an insult 😂.
Sir, please share those slides.
thanks :)
"I don't know what is BTS"
Sir unknowingly roasted BTS 😂😂 "I don't know BTS"
I did not understand the last 8 minutes of the video.
BTS is a K-pop group and they are famous worldwide. Just letting you know.
I was expecting this kind of reply right after sir said he doesn't know BTS 😂😂
A better approach would have been a single end-to-end example of how it works. There is an example, but with the notepad-style teaching it's hard to follow what is happening where. Secondly, too much dry content in one lecture.
Feel free to disagree, I ain't Hitler.