Decision Trees are more powerful than you think

  • Published: 8 Jan 2025

Comments • 27

  • @CodeEmporium
    @CodeEmporium  2 years ago +5

    This is another fun video! I took my time trying to edit this one so it's easy to follow. There are technical details throughout the video, but I'm hoping this one is well paced with adequate pauses to digest the material. I also removed the background music for this one since it might be distracting. Please let me know how this video pans out for you in the comments. I would appreciate some feedback as I'm trying to make challenging content easy to understand. Thanks so much for the support!

    • @scitechtalktv9742
      @scitechtalktv9742 2 years ago +1

      I will watch that video with great interest and will react if need be!

  • @nathanblaxall8703
    @nathanblaxall8703 2 years ago +4

    I can't believe how good this is... very clearly explained, and you covered everything. Thank you!

    • @CodeEmporium
      @CodeEmporium  2 years ago

      Thank you for the compliments. Really appreciate it :)

  • @josephedappully1482
    @josephedappully1482 2 years ago +1

    Great video, and thanks for including the references in the notes too!

    • @CodeEmporium
      @CodeEmporium  2 years ago +1

      Of course! Thanks for watching!

  • @atifadib
    @atifadib 2 years ago +3

    I am a big fan of this channel... Waiting for a video on GANs from you!

    • @CodeEmporium
      @CodeEmporium  2 years ago +1

      Thank you!! Also, I should have a playlist of a few videos on GANs.

  • @upa240
    @upa240 2 years ago +3

    To be precise, I would express divergence in terms of conditional probabilities: (Prob[Cured|Treated] - Prob[Cured|NonTreated])^2 + (Prob[NotCured|Treated] - Prob[NotCured|NonTreated])^2. In the video, the definition of divergence is not the same as stated above; it was defined in terms of joint probability instead of conditional probability. Please correct me if I am wrong... (see the sketch at the end of this thread)

    • @myfelicidade
      @myfelicidade 10 months ago

      Either you are right, or the probabilities in the video are wrong
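
A quick numeric sketch of the two definitions being debated in this thread: the squared-difference divergence computed from conditional probabilities P(outcome | group) versus joint probabilities P(outcome, group). All counts and function names below are illustrative, not from the video.

```python
# Contrast the two divergence definitions from the thread above.
# Counts are made up; with balanced treatment groups the two versions
# agree up to a constant factor, but with imbalanced groups they can
# rank candidate splits differently.

def conditional_divergence(cured_t, n_t, cured_c, n_c):
    """Squared differences of conditional probabilities P(outcome | group)."""
    p_cured_t, p_cured_c = cured_t / n_t, cured_c / n_c
    return ((p_cured_t - p_cured_c) ** 2
            + ((1 - p_cured_t) - (1 - p_cured_c)) ** 2)

def joint_divergence(cured_t, n_t, cured_c, n_c):
    """Same form, but with joint probabilities P(outcome, group)."""
    n = n_t + n_c
    return ((cured_t / n - cured_c / n) ** 2
            + ((n_t - cured_t) / n - (n_c - cured_c) / n) ** 2)

# Imbalanced example: 8 treated (6 cured) vs 2 control (1 cured).
print(conditional_divergence(6, 8, 1, 2))  # 0.125
print(joint_divergence(6, 8, 1, 2))        # 0.26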

  • @123ravitz
    @123ravitz 2 years ago

    Very clear and well-explained video.
    However, you didn't discuss some very important issues with causal trees, such as the overlap assumption: we cannot infer the counterfactual when there is low overlap in some regions of the feature space.

  • @beshosamir8978
    @beshosamir8978 1 year ago

    Very clear and informative video, thank you for sharing it, but I have a small question about the divergence formula.
    What if we have 10 people, 5 treated and not cured, and the other 5 not treated but cured? Do you notice something?
    The split will have a high divergence even though it is a horrible situation, which means we should ignore it. But how, when the divergence is so high in this case? (see the worked example after this comment)
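
Working through the exact scenario in this question (the counts come straight from the comment; the signed-effect check at the end is one common safeguard, not something prescribed by the video):

```python
# 10 people: 5 treated and not cured, 5 untreated and cured.
cured_t, n_t = 0, 5   # treated group: nobody cured
cured_c, n_c = 5, 5   # control group: everybody cured

p_cured_t, p_cured_c = cured_t / n_t, cured_c / n_c

# The squared divergence is maximal here (2.0) because it only measures
# how *different* the groups are, not which direction the difference points.
divergence = ((p_cured_t - p_cured_c) ** 2
              + ((1 - p_cured_t) - (1 - p_cured_c)) ** 2)

# The signed difference-in-means exposes that the treatment looks harmful,
# which the unsigned divergence hides.
effect = p_cured_t - p_cured_c

print(divergence)  # 2.0
print(effect)      # -1.0
```

So the divergence alone cannot distinguish "treatment helps a lot" from "treatment hurts a lot"; the signed effect estimate in each leaf is what tells you which case you are in.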

  • @darnokjarud9975
    @darnokjarud9975 2 years ago +1

    Great vid, thank you for your work!

  • @sinanwang1262
    @sinanwang1262 2 years ago

    Thanks for your great explanation. I would like to ask: how do we obtain the ITE after the split?
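
One common answer, as a minimal sketch, assuming the usual difference-in-means estimator within each leaf (the data below is made up):

```python
import numpy as np

def leaf_ite(outcomes, treated):
    """Estimated treatment effect for a leaf: mean treated outcome
    minus mean control outcome among the units that fell into it.
    Every unit in the leaf is assigned this same estimate."""
    outcomes = np.asarray(outcomes, dtype=float)
    treated = np.asarray(treated, dtype=bool)
    return outcomes[treated].mean() - outcomes[~treated].mean()

# One leaf after a split: outcomes (1 = cured) and treatment flags.
print(leaf_ite([1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 0]))  # 0.667 - 0.333 = 0.333
```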

  • @KenJee_ds
    @KenJee_ds 2 years ago

    Did you do all those visuals in manim?

  • @tannyakumar284
    @tannyakumar284 2 years ago

    Great video! I have a question. While investigating the individual level (cognitive and non-cognitive traits) and the family level factors (parental education, income, marital status, conflict with the child, warmth) that affect adolescent academic performance measured in terms of GPA, I wanted to use causal trees. Now, all my covariates and the outcome are continuous and there is no "treatment" that I am providing. In such a case, how do I use and interpret this method? I just have a set of predictors based on previous literature. Thanks in advance!

  • @davidlawton5824
    @davidlawton5824 2 years ago +1

    Can you do one on XGBoost please?

    • @CodeEmporium
      @CodeEmporium  2 years ago +3

      I have a video titled "Boosting - EXPLAINED" that I made maybe 2 years ago. I explain many forms of boosting and its evolution. I think the last few minutes are dedicated to XGBoost.

  • @scitechtalktv9742
    @scitechtalktv9742 2 years ago

    Very clear and informative video, thank you for making it! After watching it, this question came up for me: in the case of "ordinary", non-causal decision trees, the random forest was developed because a single decision tree is too sensitive to small changes in the data on which it is trained; training a set of decision trees on random subsets of the data and on random subsets of the input variables, as random forests do, proved to be a good way of circumventing that brittleness of the single decision tree!
    But is this also the case for causal decision trees? Do Causal Forests exist to fight the same problem?

    • @CodeEmporium
      @CodeEmporium  2 years ago +1

      Excellent question! Yes. In practice, we would actually implement this as a random forest rather than a single decision tree because of that sensitivity issue you mentioned. In fact, I think all the resources I mentioned in the description of this video imply random forests are used. But to illustrate the concept of causality, I thought decision trees were easier to dissect and discuss (a short sketch of a causal forest follows this thread).
      Overall, good thought!

    • @scitechtalktv9742
      @scitechtalktv9742 2 years ago

      @CodeEmporium I agree that it is best to start off with causal decision trees 🌲🌲. Causal Decision Forests are a relatively simple extension of Causal Decision Trees indeed.
      Keep up the good work making these highly informative videos, I appreciate them very much!
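
For anyone wanting to try the forest version discussed in this thread, a minimal sketch, assuming the econml package's CausalForestDML interface (its exact defaults vary across versions, and the synthetic data here is made up):

```python
import numpy as np
from econml.dml import CausalForestDML  # assumes econml is installed

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # covariates
T = rng.binomial(1, 0.5, size=1000)            # randomized treatment
tau = 0.5 + X[:, 0]                            # true heterogeneous effect
Y = tau * T + X[:, 1] + rng.normal(size=1000)  # observed outcome

# Bagging many honest causal trees smooths out the single-tree brittleness
# discussed above, just as random forests do for ordinary decision trees.
forest = CausalForestDML(n_estimators=200, random_state=0)
forest.fit(Y, T, X=X)
print(forest.effect(X[:5]))  # per-unit treatment-effect estimates
```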

  • @soumen_das
    @soumen_das 1 year ago

    Why would I use a Causal Decision Tree instead of an uplift model?

  • @МаксимКоняев-э3з
    @МаксимКоняев-э3з 2 years ago

    cool!

  • @lXIlIllIl
    @lXIlIllIl 2 months ago

    You're a god