Why can't all professors explain things like this? My professor: "Here is the idea for a decision tree, now code it."
agreed!
I realized years after graduation that many professors either received no training in teaching or have little interest in teaching, undergrads in particular. I can't say I've learned more on YouTube than I did in college, but I have a whole lot of "OOOOOH, that's what my professor was talking about!" moments when watching videos like this. This stuff would've altered my life 20 years ago.
Same! I really wish they could dig more into the coding part, but they either don't cover it or don't teach coding well.
Hey, can someone give the link for doing pruning?
Whilst I definitely agree, I have to say that, in order to understand algorithms like this one, you'll have to just work through them. No matter how many interesting and well-thought-out videos you watch, it'll always be most effective if you afterwards try to build it yourself. The fact that you're watching this in your free time shows that you are interested in the topic; that's also worth a lot. Sometimes you'll only be able to appreciate what professors taught you after you get out of college/uni and realize how useful it would have been.
In nearly 10 min, he explained the topic extremely well
Amazing job.
right
Because he knows how to write an explanation tree
Thanks a lot, Josh. To a very basic beginner, every sentence you say is a gem. It took me half an hour to get the full meaning of the first 4 minutes of the video, as I was taking notes and repeating it to myself to grasp everything that was being said.
The reason I mention my slow pace is to show how important and understandable every sentence felt.
And it wasn't boring at all.
Great job, and please keep 'em coming.
I'm curious, how did your career pan out? Still in ML?
You're right, he is.
I am crying tears of joy! How can you articulate such complex topics so clearly!
Even though it took me more than 30 minutes to complete and understand the video, I can't tell you how amazing this explanation is!
This is how we calculate the impurity:
G(k) = Σ P(i) * (1 - P(i)), for i in {Apple, Grape, Lemon}
= 2/5 * (1 - 2/5) + 2/5 * (1 - 2/5) + 1/5 * (1 - 1/5)
= 0.4 * 0.6 + 0.4 * 0.6 + 0.2 * 0.8
= 0.24 + 0.24 + 0.16
= 0.64
Or 1 - (2/5)^2 - (2/5)^2 - (1/5)^2, which gives the same 0.64.
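A minimal Python sketch of that same calculation, using the 1 - Σ p² form (illustrative, not copied from the video's companion code):

```python
from collections import Counter

def gini(rows):
    # Gini impurity of rows whose last column is the class label.
    # Uses 1 - sum(p_i^2), which equals sum(p_i * (1 - p_i)).
    counts = Counter(row[-1] for row in rows)
    n = len(rows)
    return 1 - sum((count / n) ** 2 for count in counts.values())

# The five-fruit example: two apples, two grapes, one lemon.
rows = [["Apple"], ["Apple"], ["Grape"], ["Grape"], ["Lemon"]]
print(gini(rows))  # 0.64 (up to float rounding)
```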
Thank you very much for this explanation. I went to the comment section to ask this question, but you answered it very nicely.
I already knew the concept; however, when I have to translate the concept into code, I find it quite difficult, and this video explains that smoothly.
Thank you so much for the explanation!
Same here, man.
Hmm, he is a great teacher.
This is the best single resource on decision trees that I've found, and it's a topic that isn't covered enough considering that random forests are a very powerful and easy tool to implement. If only they released more tutorials!
a year later, finally!
As a beginner, this work has given me hope to pursue a career in ML. I have read and understood the concepts of decision trees, but the code was a mountain, and it has now been levelled. Josh, thank you my brother, and may God continue to increase you 🙏.
I think you might be confusing information gain and the Gini index. Information gain is a reduction of entropy, not a reduction of Gini impurity. I almost made a mistake in my engineering paper because of this video, but I luckily noticed a different definition of information gain in another source. Maybe it's just a naming thing, but it can mislead people who are new to this subject :/
Yes. Information gain and the Gini index are not really related to each other when we generate a decision tree; they are two different approaches. But overall, still a wonderful video.
Thanks for clarifying this!
This single 9-minute video does a way better job than what my ML teacher did for 3 hours.
I know, right? I had the same experience with an instructor... it was a horrible memory. Thanks for the video!
Welcome back, Josh. I thought we would never get another awesome tutorial. Thanks for your good work.
One of the clearest and most accessible presentations I have seen. Well done! (and thanks!)
So much value in just 10 mins, this is Gold
Excuse me, I am still not clear on how the value of 0.64 comes out; can you explain a little more?
Brilliant explanation!
Ty
I've never seen any other channels like this. So deep and perfect.
Thanks, was really looking for this series...nice to see you back
loving this series, glad it's back
so INSTRUCTIVE. thank you so much for your clear & precise explanation
You have no idea how much your videos helped me on my Machine Learning journey. Thanks a lot, Josh, you are awesome.
Please keep this series going.
It's awesome!
Thank you very much for this video! I learned a lot about how the Gini impurity works and how it is used to pick the best questions to split the data!
Thanks for sharing. You made it easy for everybody to understand.
Best video about decision trees thus far.
thank you for such a simple yet comprehensive explanation.
I don't generally comment on videos but this video has so much clarity something had to be said
Was waiting for the next episode! Thank you!
yes
In the most simple and comprehensive way. Great job!
Thank you Josh! This is my first encounter with machine learning and you made it very interesting.
You are creating a question with only one value; what if I want a question like "Is it GREEN OR YELLOW?"? So basically, I would have to test all combinations of values of size 2 to find the best info_gain for a particular attribute. Furthermore, we could test all possible sizes of a question. Would that give a better result, or is it better to use only one value of the attribute to build the question?
On top of that, why do we use binary partitioning? Can't we use the same attribute to ask a new question on the false rows, but excluding attribute values used in the true rows?
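One way to sketch that idea (the helper below is hypothetical, not part of the video's code) is to enumerate value subsets of a feature up to a chosen size and score each resulting "is the value in this set?" question with the same information-gain routine. Note that for a feature with k distinct values, an exhaustive binary split would have to consider 2^(k-1) - 1 subsets, so this grows quickly:

```python
from itertools import combinations

def candidate_value_sets(values, max_size=2):
    # All subsets of the observed feature values up to max_size,
    # each one a candidate question "is the row's value in this set?".
    vals = sorted(set(values))
    for size in range(1, max_size + 1):
        for subset in combinations(vals, size):
            yield subset

colors = ["Green", "Yellow", "Red", "Red", "Yellow"]
print(list(candidate_value_sets(colors)))
# [('Green',), ('Red',), ('Yellow',), ('Green', 'Red'),
#  ('Green', 'Yellow'), ('Red', 'Yellow')]
```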
Thank you, Josh, for preparing and explaining this presentation as well as the software to help with understanding the topics. Great job!
Do you have a link for doing pruning?
The script is tightly edited. Much appreciated.
Finally after a year. Pls continue this course.
This was awesome. Please continue this series.
Long time!
i've been waiting for so long
This video has saved my life 😆
Started to watch the series 2 days ago, you are explaining SO well. Many thanks!
More videos on additional types of problems we can solve with Machine Learning would be very helpful. A few ideas: the traveling salesman problem, generating photos while emulating analog artefacts, or a simple ranking of new dishes I would like to try based on my restaurant order history. Even answering with the relevant links/terminology would be fantastic.
Also, would be great to know what problems are still hard to solve or should not be solved via Machine Learning :)
Question about calculating impurity: if we do it with probabilities, we first draw a data item, which gives us a probability of 0.2, and then we draw a label, which gives us another 0.2. Shouldn't the impurity be 1 - 0.2 * 0.2 = 0.96?
You explained it so well. I had been struggling to get it for 2 days. Great job!!
Sooo dooope !!!!
Helpful 🔥🔥🔥
Please continue your good work! We love you!
Please cover the ID3 algorithm; the explanation of CART was great!
best video on decision trees! super clear explanation
It is a great course. I hope you continue and make videos for all the machine learning algorithms. Thanks a lot.
Awesome video, helped me a lot... I was struggling to understand this exact stuff... Looking forward to the continuing courses.
best and most helpful tutorial ever seen! Thanks!
Awesome tutorial, many thanks!
well explained in such a short time
I thought CART determined splits solely with the Gini index, and that ID3 uses the average impurity to produce information gain.
Would be glad to see English subtitles added to this episode as well.
His English is very clear to me.
amazing video!!! Thank you so much for the great lecture and showing the python code to make us understand the algorithm better!
At first glance this almost looks like Huffman coding. Thanks for the great vid BTW!
great to see you back
I like your video, man. Its real simple and cool.
this is gold
I understood it as: "when the Gini impurity of the parent node is zero, the information gain with the child nodes is also zero, so we don't have to ask more questions to classify." Is that right?
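That reading matches the shape of build_tree in the video: the recursion bottoms out when no question yields positive gain, which is exactly what happens at a pure node. Roughly (a sketch assuming find_best_split, partition, Leaf, and Decision_Node helpers like those in the companion code; names may differ slightly):

```python
def build_tree(rows):
    # Try every candidate question and keep the one with the best gain.
    gain, question = find_best_split(rows)

    # At a pure node (Gini impurity 0) every split has zero information
    # gain, so we stop asking questions and return a leaf.
    if gain == 0:
        return Leaf(rows)

    # Otherwise split the rows and recurse on each branch.
    true_rows, false_rows = partition(rows, question)
    return Decision_Node(question,
                         build_tree(true_rows),
                         build_tree(false_rows))
```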
perfect video on the implementation and the topic
Thanks!
Thanks, it was very informative. It took me hours to understand what was meant. Keep going!
You are awesome, man! But why is the second question about whether the color is yellow? You separated only the apple, when the two grapes are red. Or is it because the grapes are already taken care of on the false branch of the first split?
Very useful. This is the best tutorial out on the web.
Great video and such clear code to accompany it! I learned a lot :)
Clear with good English and Python explanations. So nice to find both together! Thank you!
6:22 Impurity = 0.62? How? What is the formula?
Gosh, I remember when this series first started. I knew nothing about AI or machine learning, and now I'm full-on into neural nets and TensorFlow. Gotta admit, since I don't have a formal education in ML, I don't understand classical models as well as I understand neural nets.
Best explanation ever, thank you sir
That was suuuuper amazing!! Thanks for the video!
Why is the impurity 0.62 after partitioning on "Is the color green?" on the left subtree?
Love the music!
Could you do an example in which the output triggers a method that changes itself based on success or failure? An easier example: iterations increase or decrease based on probability, or left, right, up, down memorizing a maze pattern?
Thanks! That was helpful :)
Impeccable explanation!
I have a follow-up question: how did we come up with the questions? As in, how did we know we would want to ask if the diameter is > 3? Why not ask if the diameter is > 2?
After such a long time!
incredible!
Why is the impurity at the decision node "color = green" equal to 0.62?
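For what it's worth, the 0.62 appears to be the impurity of the four rows on the false branch of "Is the color green?" (one apple, two grapes, one lemon):
1 - (1/4)^2 - (2/4)^2 - (1/4)^2 = 1 - 0.0625 - 0.25 - 0.0625 = 0.625 ≈ 0.62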
Great explanation, thank you so much!
You’re back!!!
The Gini impurity function in the code does not output the same responses listed in the video. It's quite confusing.
Great series!
Excellent explanation keep it up!
This was awesome. Thanks!
Yaaaay! You're back!
That's a really good video. Very enlightening, thanks =)
This is the best tutorial on the net, but it uses CART. I was really hoping to use C5.0, but unfortunately that package is only available in R, so I used rpy2 to call the C50 function from Python. It would be great if there were a tutorial on that.
Thanks! Well done!
Thanks for your lovely lecture. How do we categorize more than 2 prediction classes at the same time?
He is a nice person
Amazing, and thanks for sharing the code.
Oh boi, oh boi, OH BOI
Could you make a similar video on fuzzy decision tree classifiers or share a good source for studying and implementing them?
Thanks for sharing. Respect!
Thank you sir thank you so much.
It is easy to find the best split if the data is categorical. How does the split happen in a time-optimized way if a variable is continuous, unlike color or just 2 values of diameter? Should I just run through values from min to max? Can the median be used here? Please suggest!!
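A common approach, offered here as a sketch rather than something from the video: sort the unique values of the numeric feature and test only the midpoints between adjacent values. A "value >= t" question can only change its answers at those points, so scanning every number from min to max adds nothing, and the median alone is just one of these candidates:

```python
def candidate_thresholds(values):
    # Midpoints between adjacent sorted unique values of a numeric feature.
    # For n unique values this yields n - 1 candidates, which covers every
    # distinct partition a "value >= t" question can produce.
    vals = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(vals, vals[1:])]

diameters = [1, 1, 3, 3, 3]
print(candidate_thresholds(diameters))  # [2.0]
```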
Why is impurity calculated one way at 5:33 and differently in the code? (Σ p_i * (1 - p_i) in the video vs 1 - Σ p_i^2 in the code)
Same question...
The wiki explains this with a one-line derivation:
en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity
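For reference, that one-line derivation (using the fact that the class probabilities sum to 1, Σ p_i = 1):
Σ p_i (1 - p_i) = Σ p_i - Σ p_i^2 = 1 - Σ p_i^2
So the sum-of-products form in the video and the 1 - Σ p^2 form in the code always give the same number.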
At 8:41 he says "now the previous call returns and this node becomes a decision node." What does that mean? How is it possible to return to the root node (the false branch, upper line) after executing the final return of the function? Please share your thoughts; it will help me a lot.
great tutorial!
I don't get one thing here: how do we determine the number for the question? I understand that we try out different features to see which gives us the most info, but how do we choose the number and the condition for it?
Thanks, Google Gods. Please accept my data. The tutorial was brilliant!
Hello sir, is it possible to classify camera-trap images of animals and segregate them into folders using an automatic process? This could be done using machine learning and computer vision techniques. Please make a video. I work in the forest department, and the cameras capture as many as 18 lakh (1.8 million) photographs. Segregating them one by one is a problem. Please help us.