Let’s Write a Decision Tree Classifier from Scratch - Machine Learning Recipes #8

  • Published: 26 Sep 2024

Comments • 231

  • @dabmab2624
    @dabmab2624 4 years ago +212

    Why can't all professors explain things like this? My professor: "Here is the idea for a decision tree, now code it."

    • @exoticme4760
      @exoticme4760 4 years ago +2

      agreed!

    • @pauls60r
      @pauls60r 4 years ago +12

      I realized years after graduation that many professors either have received no training in teaching or have little interest in teaching, undergrads in particular. I can't say I've learned more on YouTube than I did in college, but I have a whole lot of "OOOOOH, that's what my professor was talking about!" moments when watching videos like this. This stuff would've altered my life 20 years ago.

    • @carol8099
      @carol8099 4 years ago

      Same! I really wish they could dig more into the coding part, but they either don't cover it or don't teach coding well.

    • @avijitmandal9124
      @avijitmandal9124 4 years ago

      Hey, can someone give the link for doing pruning?

    • @Skyfox94
      @Skyfox94 3 years ago +1

      Whilst I definitely agree, I have to say that, in order to understand algorithms like this one, you just have to work through them. No matter how many interesting and well-thought-out videos you watch, it'll always be most effective if you afterwards try to build it yourself. The fact that you're watching this in your free time shows that you are interested in the topic. That's also worth a lot. Sometimes you'll only be able to appreciate what professors taught you after you get out of college/uni and realize how useful it would have been.

  • @nbamj88
    @nbamj88 7 years ago +279

    In nearly 10 minutes, he explained the topic extremely well.
    Amazing job.

  • @FacadeMan
    @FacadeMan 6 years ago +90

    Thanks a lot, Josh. To a very basic beginner, every sentence you say is a gem. It took me half an hour to get the full meaning of the first 4 minutes of the video, as I was taking notes and repeating them to myself to grasp everything that was being said.
    The reason I mention my slow pace is to show how important and understandable every sentence felt.
    And it wasn't boring at all.
    Great job, and please, keep 'em coming.

  • @donking6996
    @donking6996 4 years ago +5

    I am crying tears of joy! How can you articulate such complex topics so clearly!

  • @riadhsaid3548
    @riadhsaid3548 5 years ago +14

    Even though it took me more than 30 minutes to complete and understand the video, I cannot tell you how amazing this explanation is!
    This is how we calculate the impurity:
    PS: G(k) = Σ P(i) * (1 - P(i)),
    i ∈ {Apple, Grape, Lemon}
    2/5 * (1 - 2/5) + 2/5 * (1 - 2/5) + 1/5 * (1 - 1/5)
    = 0.4 * 0.6 + 0.4 * 0.6 + 0.2 * 0.8
    = 0.24 + 0.24 + 0.16 = 0.64

    • @senyaisavnina
      @senyaisavnina 4 years ago

      or 1 - (2/5)^2 - (2/5)^2 - (1/5)^2

    • @vardhanshah8843
      @vardhanshah8843 4 years ago

      Thank you very much for this explanation. I went to the comment section to ask this question, but you answered it very nicely.
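
If it helps to see the same computation as code, here is a minimal sketch of a Gini impurity function, using the toy fruit dataset from the video and the same convention as the video's code (start from 1 and subtract squared label probabilities):

    from collections import Counter

    def gini(rows):
        # Gini impurity: 1 minus the sum of squared label probabilities.
        counts = Counter(row[-1] for row in rows)  # the label is the last column
        n = len(rows)
        return 1 - sum((c / n) ** 2 for c in counts.values())

    training_data = [["Green", 3, "Apple"], ["Yellow", 3, "Apple"],
                     ["Red", 1, "Grape"], ["Red", 1, "Grape"],
                     ["Yellow", 3, "Lemon"]]
    print(gini(training_data))  # 0.64 (modulo floating-point rounding)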

  • @hbunyamin
    @hbunyamin 5 years ago +13

    I already knew the concept; however, when I have to translate the concept into code ... I find it quite difficult, and this video explains that smoothly.
    Thank you so much for the explanation!

  • @cbrtdgh4210
    @cbrtdgh4210 6 years ago +5

    This is the best single resource on decision trees that I've found, and it's a topic that isn't covered enough considering that random forests are a very powerful and easy tool to implement. If only they released more tutorials!

  • @shreyanshvalentino
    @shreyanshvalentino 7 years ago +73

    a year later, finally!

  • @sundayagu5755
    @sundayagu5755 4 years ago +1

    As a beginner, this work has given me hope to pursue a career in ML. I have read and understood the concepts of decision trees, but the code was a mountain, and it has now been levelled. Josh, thank you my brother, and may God continue to increase you 🙏.

  • @JulitaOtusek
    @JulitaOtusek 6 years ago +32

    I think you might be confusing information gain and the Gini index. Information gain is the reduction in entropy, not the reduction in Gini impurity. I almost made a mistake in my engineering paper because of this video, but I luckily noticed a different definition of information gain in another source. Maybe it's just a matter of naming, but it can mislead people who are new to this subject :/

    • @liuqinzhe508
      @liuqinzhe508 2 years ago +3

      Yes. Information gain and the Gini index are not really related to each other when we generate a decision tree; they are two different approaches. But overall, still a wonderful video.

    • @leonelp9593
      @leonelp9593 2 years ago

      Thanks for clarifying this!
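
For readers weighing the two conventions discussed in this thread, here is a small sketch (not from the video) that scores the same split both ways: with entropy, which gives information gain in the strict sense, and with Gini impurity, which is what the video's code uses. The example split is the toy dataset's "diameter >= 3" partition:

    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum((labels.count(l) / n) * log2(labels.count(l) / n)
                    for l in set(labels))

    def gini(labels):
        n = len(labels)
        return 1 - sum((labels.count(l) / n) ** 2 for l in set(labels))

    def gain(parent, children, impurity):
        # Parent impurity minus the size-weighted impurity of the children.
        n = len(parent)
        return impurity(parent) - sum(len(c) / n * impurity(c) for c in children)

    parent = ["apple", "apple", "grape", "grape", "lemon"]
    children = [["apple", "apple", "lemon"], ["grape", "grape"]]
    print(gain(parent, children, entropy))  # ~0.97, entropy-based information gain
    print(gain(parent, children, gini))     # ~0.37, Gini-based gain, as in the video

The two criteria often rank splits similarly, but they are distinct measures, which is the naming confusion raised above.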

  • @hyperealisticglass
    @hyperealisticglass 5 years ago +7

    This single 9-minute video does a way better job than what my ML teacher did for 3 hours.

    • @marklybeer9038
      @marklybeer9038 3 years ago

      I know, right? I had the same experience with an instructor... it was a horrible memory. Thanks for the video!

  • @WilloftheWinds
    @WilloftheWinds 7 years ago +8

    Welcome back Josh, thought we would never get another awesome tutorial, thanks for your good work.

  • @TomHarrisonJr
    @TomHarrisonJr 5 years ago +2

    One of the clearest and most accessible presentations I have seen. Well done! (and thanks!)

  • @AyushGupta-kp9xf
    @AyushGupta-kp9xf 3 years ago

    So much value in just 10 mins, this is Gold

  • @fathimadji8570
    @fathimadji8570 3 years ago +2

    Excuse me, I am still not clear on how the value of 0.64 comes out. Can you explain a little more?

  • @andrewbeatty5912
    @andrewbeatty5912 7 years ago +25

    Brilliant explanation!

  • @mindset873
    @mindset873 4 years ago

    I've never seen any other channels like this. So deep and perfect.

  • @anupam1
    @anupam1 7 years ago +3

    Thanks, was really looking for this series...nice to see you back

  • @BlueyMcPhluey
    @BlueyMcPhluey 7 years ago +1

    loving this series, glad it's back

  • @ryanp9441
    @ryanp9441 2 years ago

    so INSTRUCTIVE. thank you so much for your clear & precise explanation

  • @BestPromptHub
    @BestPromptHub 6 years ago

    You have no idea how much your videos helped me out on my journey in Machine Learning. Thanks a lot, Josh, you are awesome.

  • @falmanna
    @falmanna 7 years ago

    Please keep this series going.
    It's awesome!

  • @tymothylim6550
    @tymothylim6550 3 years ago

    Thank you very much for this video! I learned a lot about Gini impurity and how it is used to pick the best questions to split the data!

  • @huuhieupham9059
    @huuhieupham9059 5 years ago +1

    Thanks for sharing. You made it easy for everybody to understand.

  • @leoyuanluo
    @leoyuanluo 4 years ago

    best video about decision trees thus far

  • @AbdulRahman-jl2hv
    @AbdulRahman-jl2hv 4 years ago

    thank you for such a simple yet comprehensive explanation.

  • @aryamanful
    @aryamanful 6 years ago

    I don't generally comment on videos, but this video has so much clarity that something had to be said

  • @gorudonu
    @gorudonu 7 years ago +2

    Was waiting for the next episode! Thank you!

  • @alehandr0s
    @alehandr0s 4 years ago

    In the most simple and comprehensive way. Great job!

  • @gautamgadipudi8213
    @gautamgadipudi8213 4 years ago

    Thank you Josh! This is my first encounter with machine learning and you made it very interesting.

  • @muslimbekabduganiev7483
    @muslimbekabduganiev7483 3 years ago +1

    You are creating a question with only one value. What if I want to have a question like "Is it GREEN OR YELLOW?"? Basically, I would have to test all combinations of values of size 2 to find the best info_gain for a particular attribute. Furthermore, we could test all possible sizes of a question. Would that give a better result, or is it better to use only one value of the attribute to build the question?

    • @muslimbekabduganiev7483
      @muslimbekabduganiev7483 3 years ago

      On top of that, why do we use binary partitioning? Can't we use the same attribute to ask a new question on the false rows, but excluding the attribute values used in the true rows?
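
A sketch of what testing multi-value questions could look like; this is an extension for illustration, not the video's code, and the helper name is hypothetical:

    from itertools import combinations

    def candidate_value_sets(rows, col, max_size=2):
        # Yield every subset of the column's values up to max_size, so a
        # question can read "is color in {'Green', 'Yellow'}?" rather than
        # testing a single value.
        values = sorted({row[col] for row in rows})
        for size in range(1, max_size + 1):
            yield from combinations(values, size)

Each candidate set would then be scored with the usual information gain, partitioning on "row[col] in value_set". Note that single-value binary questions can already express the same decisions, just over more levels of the tree, which is one reason CART keeps questions simple.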

  • @msctube45
    @msctube45 4 years ago

    Thank you Josh for preparing and explaining this presentation as well as the software to help the understanding of the topics. Great job!

  • @rohitgavirni3400
    @rohitgavirni3400 4 years ago

    The script is tightly edited. Much appreciated.

  • @georgevjose
    @georgevjose 7 years ago +55

    Finally after a year. Pls continue this course.

  • @johnstephen399
    @johnstephen399 7 years ago

    This was awesome. Please continue this series.

  • @sajidbinmahamud2414
    @sajidbinmahamud2414 7 years ago +12

    Long time!
    I've been waiting for so long

  • @dunstantough5134
    @dunstantough5134 2 years ago

    This video has saved my life 😆

  • @BreakPhreak
    @BreakPhreak 7 years ago

    Started to watch the series 2 days ago; you are explaining SO well. Many thanks!
    More videos on additional types of problems we can solve with Machine Learning would be very helpful. A few ideas: the traveling salesman problem, generating photos while emulating analog artefacts, or simple ranking of new dishes I would like to try based on my restaurant order history. Even answering with the relevant links/terminology would be fantastic.
    Also, it would be great to know what problems are still hard to solve or should not be solved via Machine Learning :)

  • @mingzhu8093
    @mingzhu8093 5 years ago +1

    Question about calculating impurity: if we go by probability, we first draw a datum, which gives us a probability of 0.2, then we draw a label, which gives us another 0.2. Shouldn't the impurity be 1 - 0.2*0.2 = 0.96?
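
For anyone puzzling over this, the impurity is a sum over all labels, not a single product: draw a row, then independently draw a label from the same distribution, and measure how often the two disagree. A quick Monte Carlo check of that reading (written for this comment, not taken from the video):

    import random

    # Label distribution from the video's toy dataset:
    # 2 apples, 2 grapes, 1 lemon out of 5 rows.
    labels = ["Apple", "Apple", "Grape", "Grape", "Lemon"]

    trials = 100_000
    errors = sum(random.choice(labels) != random.choice(labels)
                 for _ in range(trials))
    print(errors / trials)  # ~0.64 = 1 - (2/5)**2 - (2/5)**2 - (1/5)**2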

  • @lenaara4569
    @lenaara4569 6 years ago

    You explained it so well. I have been struggling to get it for 2 days. Great job!!

  • @Po-YuJuan-g9k
    @Po-YuJuan-g9k 1 year ago +1

    Sooo dooope !!!!
    Helpful 🔥🔥🔥

  • @stefanop.6097
    @stefanop.6097 7 years ago +1

    Please continue your good work! We love you!

  • @Conk-bepis
    @Conk-bepis 5 years ago +2

    Please cover the ID3 algorithm; the explanation of CART was great!

  • @rodrik1
    @rodrik1 6 years ago

    best video on decision trees! super clear explanation

  • @dinasamir2778
    @dinasamir2778 4 years ago

    It is a great course. I hope you continue and make videos on all the machine learning algorithms. Thanks a lot.

  • @debanjandhar6395
    @debanjandhar6395 6 years ago

    Awesome video, helped me a lot... I was struggling to understand exactly this stuff... Looking forward to the rest of the course.

  • @congliulyc
    @congliulyc 6 years ago

    The best and most helpful tutorial I've ever seen! Thanks!

  • @omarsherif88
    @omarsherif88 2 years ago

    Awesome tutorial, many thanks!

  • @ritikvimal4915
    @ritikvimal4915 4 years ago

    well explained in such a short time

  • @jakobmethfessel6226
    @jakobmethfessel6226 5 years ago +1

    I thought CART determined splits solely on the Gini index and that ID3 uses the average impurity to produce information gain.

  • @saimmehmood6936
    @saimmehmood6936 7 years ago +1

    Would be glad to see English subtitles added to this episode as well.

    • @hamza-325
      @hamza-325 7 years ago

      His English is very clear to me

  • @guccilover2009
    @guccilover2009 5 years ago

    amazing video!!! Thank you so much for the great lecture and showing the python code to make us understand the algorithm better!

  • @mrvzhao
    @mrvzhao 7 years ago +3

    At first glance this almost looks like Huffman coding. Thanks for the great vid BTW!

  • @jaydevparmar9876
    @jaydevparmar9876 7 years ago

    great to see you back

  • @IvanSedov-i7f
    @IvanSedov-i7f 4 years ago

    I like your video, man. It's really simple and cool.

  • @codersgarage2279
    @codersgarage2279 4 years ago

    this is gold

  • @slr3123
    @slr3123 2 years ago

    I understood it as: "When the Gini impurity of the parent node is zero, the information gain of any split into child nodes is also zero, so we don't have to ask more questions to classify." Is that right?
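
That matches the stopping rule in the video's companion code. A rough sketch, reusing the helper names from that code (find_best_split, partition, Leaf, Decision_Node), so it is not runnable on its own:

    def build_tree(rows):
        gain, question = find_best_split(rows)
        if gain == 0:
            # No question improves on the current mix of labels. In particular,
            # a pure node (Gini impurity 0) cannot be improved, so it becomes
            # a leaf and no further questions are asked.
            return Leaf(rows)
        true_rows, false_rows = partition(rows, question)
        return Decision_Node(question,
                             build_tree(true_rows),
                             build_tree(false_rows))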

  • @muratcan__22
    @muratcan__22 6 years ago

    perfect video on the implementation and the topic

  • @Xiaoniana
    @Xiaoniana 4 years ago

    Thanks, it was very informative. It took me hours to understand what was meant. Keep going!

  • @senyotsedze3388
    @senyotsedze3388 10 months ago

    You are awesome, man! But why does the second question ask if the color is yellow? You separated only the apple, when the two grapes are red. Or is it because they were already taken care of at the first false split of the node?

  • @moeinhasani8718
    @moeinhasani8718 6 years ago

    Very useful. This is the best tutorial out on the web.

  • @adampaxton5214
    @adampaxton5214 3 years ago

    Great video and such clear code to accompany it! I learned a lot :)

  • @andreachristelle5359
    @andreachristelle5359 5 years ago

    Clear with good English and Python explanations. So nice to find both together! Thank you!

  • @j0kersama669
    @j0kersama669 3 years ago +1

    6:22 Impurity = 0.62? How? What is the formula?
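
A plausible reconstruction, assuming the 0.62 refers to the branch of "Is the color green?" that holds the four remaining rows of the video's toy dataset (1 Apple, 2 Grapes, 1 Lemon):

    impurity = 1 - ((1/4)^2 + (2/4)^2 + (1/4)^2)
             = 1 - (0.0625 + 0.25 + 0.0625)
             = 1 - 0.375
             = 0.625 ≈ 0.62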

  • @guitarheroprince123
    @guitarheroprince123 7 years ago

    Gosh, I remember when this series first started. I knew nothing about AI or machine learning, and now I'm full on neural nets and TensorFlow. Gotta admit, since I don't have a formal education in ML, I don't understand classical models as much as I understand neural nets.

  • @sergior.m.5694
    @sergior.m.5694 6 years ago

    Best explanation ever, thank you sir

  • @sarrakharbach
    @sarrakharbach 6 years ago

    That was suuuuper amazing!! Thanks for the video!

  • @mohammadbayat1635
    @mohammadbayat1635 10 months ago

    Why is the impurity 0.62 after partitioning on "Is the color green?" in the left subtree?

  • @kwarnkham3836
    @kwarnkham3836 4 years ago

    Love the music!

  • @allthingsmmm
    @allthingsmmm 4 years ago

    Could you do an example in which the output triggers a method that changes itself based on success or failure? An easier example: iterations increase or decrease based on probability, or left/right/up/down memorizing a maze pattern?

  • @hiskaya
    @hiskaya 10 months ago

    thanks! that was helpful)

  • @doy2001
    @doy2001 5 years ago

    Impeccable explanation!

  • @aryamanful
    @aryamanful 6 years ago

    I have a follow-up question: how did we come up with the questions? As in, how did we know we would like to ask if the diameter is > 3? Why not ask if the diameter is > 2?
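
In the video's code, candidate questions come only from values that actually appear in the data, so with diameters of 1 and 3 in the toy dataset, "diameter >= 3" gets proposed and "diameter > 2" never does (it would produce the same partition anyway). A minimal sketch of that enumeration, with a hypothetical helper name:

    def candidate_questions(rows):
        # One candidate question per unique value of each feature column;
        # the last column is assumed to be the label.
        n_features = len(rows[0]) - 1
        for col in range(n_features):
            for val in {row[col] for row in rows}:
                # Read as "is row[col] >= val?" for numeric features,
                # or "is row[col] == val?" for categorical ones.
                yield col, val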

  • @سميرشيخ-ب1س
    @سميرشيخ-ب1س 7 years ago +1

    After such a long time!

  • @juliocardenas4485
    @juliocardenas4485 2 years ago

    Incredible!

  • @HarpreetKaur-qq8rx
    @HarpreetKaur-qq8rx 4 years ago

    Why is the impurity at the decision node "color = green" equal to 0.62?

  • @leiverandres
    @leiverandres 6 years ago

    Great explanation, thank you so much!

  • @erikslatterv
    @erikslatterv 7 years ago +1

    You’re back!!!

  • @aseperate
    @aseperate 1 year ago

    The Gini impurity function in the code does not output the same responses listed in the video. It's quite confusing.

  • @Yaxoi
    @Yaxoi 7 years ago

    Great series!

  • @christospantazopoulos8049
    @christospantazopoulos8049 6 years ago

    Excellent explanation keep it up!

  • @elliottgermanovich3081
    @elliottgermanovich3081 5 years ago

    This was awesome. Thanks!

  • @houjunliu5978
    @houjunliu5978 7 years ago

    Yaaaay! You're back!

  • @xavierk99
    @xavierk99 5 years ago

    That's a really good video. Very enlightening, thanks =)

  • @shadowfox87
    @shadowfox87 6 years ago

    This is the best tutorial on the net but this uses CART. I was really hoping to use C5.0 but unfortunately the package is only available in R. I used rpy2 to call the C50 function in Python. It would be great if there'd be a tutorial on that.

  • @qwertybrain
    @qwertybrain 5 years ago

    Thanks! Well done!

  • @adamtalent3559
    @adamtalent3559 4 years ago

    Thanks for your lovely lecture. How do you categorize more than 2 prediction classes at the same time?

  • @rifatmahammod4002
    @rifatmahammod4002 4 years ago +1

    He is a nice person

  • @arminhejazian5306
    @arminhejazian5306 2 years ago

    Amazing, and thanks for sharing the code

  • @kaankesgin4295
    @kaankesgin4295 7 years ago +4

    Oh boi, oh boi, OH BOI

  • @bhuvanagrawal1323
    @bhuvanagrawal1323 5 years ago

    Could you make a similar video on fuzzy decision tree classifiers or share a good source for studying and implementing them?

  • @dragolov
    @dragolov 5 years ago

    Thanks for sharing. Respect!

  • @whatif3753
    @whatif3753 4 years ago

    Thank you sir thank you so much.

  • @uditarpit
    @uditarpit 5 years ago

    It is easy to find the best split if the data is categorical. How does the split happen in a time-optimized way if a variable is continuous, unlike color or just 2 values of diameter? Should I just run through values from min to max? Can the median be used here? Please suggest!!
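
A common answer, though it is not covered in the video: sort the rows once by the feature, then sweep left to right, updating the label counts incrementally and scoring a threshold only between distinct consecutive values. A sketch under the assumption that rows are (features..., label) tuples:

    from collections import Counter

    def gini(counts, total):
        return 1.0 - sum((c / total) ** 2 for c in counts.values())

    def best_numeric_split(rows, col):
        rows = sorted(rows, key=lambda r: r[col])
        n = len(rows)
        left, right = Counter(), Counter(r[-1] for r in rows)
        best_threshold, best_score = None, float("inf")
        for i in range(1, n):
            moved = rows[i - 1]            # move one row to the left side
            left[moved[-1]] += 1
            right[moved[-1]] -= 1
            if moved[col] == rows[i][col]:
                continue                   # only split between distinct values
            threshold = (moved[col] + rows[i][col]) / 2
            score = (i / n) * gini(left, i) + ((n - i) / n) * gini(right, n - i)
            if score < best_score:
                best_threshold, best_score = threshold, score
        return best_threshold, best_score

Sorting dominates, so this costs O(n log n) per feature instead of testing every value between min and max; the median alone is not guaranteed to minimize the weighted impurity.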

  • @Dedsman
    @Dedsman 7 years ago +3

    Why is impurity calculated one way at 5:33 and differently in the code? (Σ p * (1 - p) vs. 1 - Σ p**2)

    • @yizhang8106
      @yizhang8106 7 years ago

      same question..

    • @ThePujjwal
      @ThePujjwal 6 years ago

      The wiki explains this one-line derivation:
      en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity
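
For reference, that one-line derivation: since the label probabilities sum to 1,

    Σ p(i) * (1 - p(i)) = Σ p(i) - Σ p(i)^2 = 1 - Σ p(i)^2

so the two forms in the question are the same number computed two ways.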

  • @browneealex288
    @browneealex288 3 years ago

    At 8:41 he says, "Now the previous call returns and this node becomes a decision node." What does that mean? How is it possible to return to the root node (the false branch, upper line) after executing the final return of the function? Please give your thoughts; it will help me a lot.

  • @aydinahmadli7005
    @aydinahmadli7005 5 years ago

    great tutorial!

  • @panlis6243
    @panlis6243 6 years ago

    I don't get one thing here: how do we determine the number for the question? I understand that we try out different features to see which gives us the most info, but how do we choose the number and the condition for it?

  • @MrAlekoukos
    @MrAlekoukos 5 years ago

    Thanks, Google Gods. Please accept my data. The tutorial was brilliant!

  • @kaziranga_national_park
    @kaziranga_national_park 1 year ago

    Hello sir, is it possible to classify camera-trap images of animals and segregate them into folders using an automatic process? This could be done using machine learning and computer vision techniques. Please make a video. I work in the forest department. We capture many photographs, up to 18 lakh, and segregating them one by one is a problem. Please help us.