Decision Tree Machine Learning: Stop #1 on Your DIY Data Science Roadmap

  • Published: 28 Nov 2024

Comments • 11

  • @DaveOnData
    @DaveOnData  2 months ago

    Save 20% off my machine learning online courses using code YouTube ⬇
    Cluster Analysis with Python online course:
    bit.ly/ClusterAnalysisWithPythonCourse
    My "Intro to Machine Learning" online course:
    bit.ly/TDWIIntroToML

  • @davidghope1
    @davidghope1 1 month ago +1

    Another absolute gem; incredibly well articulated. The explanation and examples are brilliant.

    • @DaveOnData
      @DaveOnData  1 month ago

      Thank you so much for taking the time to leave these kind words. I'm glad you are enjoying my content.

  • @rbjassoc6
    @rbjassoc6 3 months ago +2

    Thanks!

    • @DaveOnData
      @DaveOnData  3 months ago

      My pleasure! I hope you found the video useful.

    • @rbjassoc6
      @rbjassoc6 3 months ago +1

      Just had one question... I looked at the CSV files and was wondering how large the original dataset is. My understanding is that you took data out of a larger dataset to create the training set, using 3,000 rows from the original data for learning. Or did I misunderstand that aspect?

    • @DaveOnData
      @DaveOnData  3 months ago +1

      The CSVs are a cleaned subset of the original dataset. You can get the original raw data here: archive.ics.uci.edu/dataset/2/adult
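
      As a rough illustration (not the course code), here is one way you could pull the raw UCI Adult data into pandas and carve out a 3,000-row training subset. The column names come from the UCI documentation; the file name adult.data, the dropna() cleaning step, and the random_state are assumptions for the sketch, and the course CSVs may have been prepared differently.

      import pandas as pd

      # Column names per the UCI Adult dataset documentation.
      columns = [
          "age", "workclass", "fnlwgt", "education", "education_num",
          "marital_status", "occupation", "relationship", "race", "sex",
          "capital_gain", "capital_loss", "hours_per_week", "native_country",
          "income",
      ]

      # adult.data is the raw file distributed by the UCI repository; values are
      # comma-separated with a space after each comma and "?" marking missing values.
      adult = pd.read_csv(
          "adult.data",
          names=columns,
          na_values="?",
          skipinitialspace=True,
      )

      # One possible cleaning step: drop rows with missing values, then sample
      # 3,000 rows to serve as a training subset.
      adult_clean = adult.dropna()
      train = adult_clean.sample(n=3000, random_state=42)

      print(f"Raw rows: {len(adult)}, cleaned rows: {len(adult_clean)}, training rows: {len(train)}")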

  • @VastCNC
    @VastCNC 3 months ago +1

    Very interesting and helpful. The greedy aspect is something I struggle with. Are there alternatives that combine root nodes, or is it a problem that gets solved with larger datasets?

    • @DaveOnData
      @DaveOnData  3 months ago

      Glad you enjoyed the video! As to your question, what aspect of greedy selection are you struggling with?

    • @VastCNC
      @VastCNC 3 months ago +1

      @@DaveOnData I just thought that selecting the first feature that met the criteria would neglect a second (or later) feature that also met the criteria, so it could leave out key criteria that might inter-relate with future nodes more effectively than the first. I think it's more of a glaring issue with the contrived example, and probably moot in the larger data samples it's intended to work with.

    • @DaveOnData
      @DaveOnData  3 months ago

      If I understand your concern correctly, one way to think about decision trees' greedy nature is computational complexity.
      To keep this simple, let's think only about the tree's root node.
      In an ideal world, the decision tree algorithm would always pick the root node based on the single most optimal combination of dataset and hyperparameter values. However, this is computationally infeasible, as the algorithm would have to search through every possible tree that could be learned before knowing the single best root node.
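
      To make the greedy idea concrete, here is a minimal sketch (my own illustration, not the algorithm from the video) of CART-style split selection at a single node. It scores every candidate feature/threshold pair by the weighted Gini impurity of the two children and keeps only the locally best one; it never looks ahead at the trees each choice would make possible, which is exactly what keeps the search tractable. The tiny dataset and the best_split/gini helper names are made up for the example.

      import numpy as np

      def gini(labels):
          # Gini impurity of a set of class labels: 1 - sum(p_k^2).
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return 1.0 - np.sum(p ** 2)

      def best_split(X, y):
          # Greedy search: score every (feature, threshold) candidate by the
          # weighted impurity of its two children and keep the single best one.
          best = (None, None, np.inf)
          n_samples, n_features = X.shape
          for feature in range(n_features):
              for threshold in np.unique(X[:, feature]):
                  left = y[X[:, feature] <= threshold]
                  right = y[X[:, feature] > threshold]
                  if len(left) == 0 or len(right) == 0:
                      continue
                  score = (len(left) * gini(left) + len(right) * gini(right)) / n_samples
                  if score < best[2]:
                      best = (feature, threshold, score)
          return best

      # Tiny made-up dataset: two numeric features, binary labels.
      X = np.array([[25, 0], [38, 1], [28, 1], [44, 0], [18, 0], [52, 1]])
      y = np.array([0, 1, 0, 1, 0, 1])
      feature, threshold, score = best_split(X, y)
      print(f"Greedy root split: feature {feature} <= {threshold} (weighted Gini {score:.3f})")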