Credit risk analysis using Decision Tree | How to do a Decision Tree in SPSS | CHAID algorithm

  • Published: 27 Oct 2024

Comments • 31

  • @RitikaKatara-v3o
    @RitikaKatara-v3o 11 months ago

    You made a wonderful explanation of decision tree modeling in SPSS. Thank you.

  • @amogharajgkulkarni6045
    @amogharajgkulkarni6045 2 months ago +2

    Thank you, sir. 🙏😀

  • @biswadeepghoshal7943
    @biswadeepghoshal7943 2 months ago +1

    Just a few questions.
    While fitting a decision tree, isn't a node split into only two nodes? Here, specifically for the medium-income group, the node has been split on age into four nodes instead of two.
    Also, the two terminal nodes at the extreme left give the same value of the dependent variable, "Bad" credit risk, following the majority-class rule. But weren't the two nodes supposed to give two different values of the dependent variable? Otherwise these terminal nodes would not have been created, since they do not provide any prediction different from the node they were created from (because the goodness-of-split value is low for the parent node here). The same goes for the terminal nodes at the extreme right.
    Is all this due to the CHAID algorithm being used here?

    • @theoutlier7395
      @theoutlier7395  2 months ago

      The CART model supports only binary splits, whereas CHAID supports multi-way splits.
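
      To make the contrast concrete, here is a minimal sketch on invented data (not the video's dataset): CHAID evaluates a candidate predictor with a chi-square test of independence against the target and can create one child per remaining category, while CART (the algorithm behind scikit-learn's DecisionTreeClassifier) only ever splits a node into two children. This also explains why CHAID siblings can share the same majority class; the split is driven by the significance of the association, not by a change in the predicted class.

      # CHAID-style multi-way split test vs CART-style binary splits.
      # The data below is invented for illustration; it is NOT the video's dataset.
      import pandas as pd
      from scipy.stats import chi2_contingency
      from sklearn.tree import DecisionTreeClassifier

      # Hypothetical medium-income applicants bucketed into four age bands.
      df = pd.DataFrame({
          "age_band": ["<25"] * 30 + ["25-35"] * 40 + ["35-50"] * 40 + ["50+"] * 30,
          "risk":     ["Bad"] * 22 + ["Good"] * 8     # <25
                    + ["Bad"] * 18 + ["Good"] * 22    # 25-35
                    + ["Bad"] * 12 + ["Good"] * 28    # 35-50
                    + ["Bad"] * 6  + ["Good"] * 24,   # 50+
      })

      # CHAID idea: one chi-square test of independence between the candidate
      # predictor and the target; if the (adjusted) p-value clears the threshold,
      # the node gets one child per remaining category, here up to four children.
      table = pd.crosstab(df["age_band"], df["risk"])
      chi2, p, dof, _ = chi2_contingency(table)
      print(f"CHAID-style test: chi2={chi2:.2f}, p={p:.4f}, dof={dof}")

      # CART idea: binary splits only. Even a four-level factor is separated into
      # two groups at a time, so each node has at most two children.
      X = pd.get_dummies(df[["age_band"]])
      y = (df["risk"] == "Bad").astype(int)
      cart = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
      print("CART node count:", cart.tree_.node_count, "(every split is binary)")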

  • @ConnorBloom1898
    @ConnorBloom1898 1 year ago

    This was a very helpful tutorial. Thank you so much!

  • @VKjkd
    @VKjkd 2 years ago +1

    Great breakdown, very good!

  • @Michael0208
    @Michael0208 1 month ago

    Hello, sir; it was highly informative. Do you have a video on how to write up the interpretation of a Decision Tree analysis, and how we have to add the images from SPSS, etc.?

  • @thou_yangba
    @thou_yangba 2 months ago

    Very helpful sir 🔥

  • @bemotivated4827
    @bemotivated4827 1 year ago +1

    great job, thanks man

  • @rezawildan
    @rezawildan 10 months ago

    Thanks for the video, it really helps me. Can I ask a question? When I run the analysis, why is only the dependent variable node presented, while there are about 28 independent variables given as input?

    • @theoutlier7395
      @theoutlier7395  10 months ago

      Please check your sample size; it might be too low. Secondly, the chi-square association might be weak. Thirdly, please reduce the minimum number of cases required in the parent and child nodes so that it can grow a tree.
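
      As a rough illustration of the minimum-node-size point, here is a sketch on synthetic data; the minimum parent and child node sizes in SPSS CHAID have loose analogues in the scikit-learn CART parameters used below, and the same logic applies: if the minimums are too large for the sample, no split is allowed and only the root node appears.

      # Why a tree can end up as a single root node: stopping rules too strict
      # for the sample size. Synthetic data, not the video's dataset.
      from sklearn.datasets import make_classification
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=120, n_features=28, n_informative=4,
                                 random_state=0)

      # Minimum node sizes larger than the sample can support: no split is made,
      # so only the root (the dependent-variable node) is produced.
      strict = DecisionTreeClassifier(min_samples_split=200, min_samples_leaf=80,
                                      random_state=0).fit(X, y)
      print("strict settings ->", strict.tree_.node_count, "node(s)")    # 1

      # Relaxed minimums allow the tree to grow.
      relaxed = DecisionTreeClassifier(min_samples_split=20, min_samples_leaf=10,
                                       random_state=0).fit(X, y)
      print("relaxed settings ->", relaxed.tree_.node_count, "node(s)")  # > 1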

  • @intellectMind2024
    @intellectMind2024 1 year ago

    nice explanation ... Thank You 😊

  • @shepherdchikomana7868
    @shepherdchikomana7868 1 year ago +1

    Hi, would you mind sharing the link to where you got the dataset, please?

    • @theoutlier7395
      @theoutlier7395  1 year ago

      docs.google.com/spreadsheets/d/1fsVX0ZL_-O5SCbBzieT6QSI_QBU-2CHp/edit?usp=sharing&ouid=110167476365142506887&rtpof=true&sd=true

  • @saurabhjoshi2089
    @saurabhjoshi2089 2 years ago

    In the confusion matrix, have you assumed a probability of 0.5 as the threshold for classifying cases as good or bad?
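
    For a two-class target with equal misclassification costs, assigning each case the majority class of its terminal node is equivalent to applying a 0.5 cut-off to the node's predicted probability of "Bad". A minimal sketch with invented probabilities (not the video's model):

    # How a 0.5 cut-off on P(Bad) yields a confusion matrix for a binary
    # good/bad target. The probabilities below are invented for illustration.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array(["Bad", "Bad", "Good", "Good", "Bad", "Good", "Good", "Bad"])
    p_bad  = np.array([0.81, 0.64, 0.35, 0.20, 0.45, 0.55, 0.10, 0.70])  # node P(Bad)

    # Predict "Bad" when P(Bad) > 0.5, i.e. take the node's majority class.
    y_pred = np.where(p_bad > 0.5, "Bad", "Good")
    print(confusion_matrix(y_true, y_pred, labels=["Bad", "Good"]))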

  • @Andreas-ni9it
    @Andreas-ni9it 2 years ago

    What is the "influence variable" at the bottom? What is its implication in a classification tree?

    • @theoutlier7395
      @theoutlier7395  1 year ago

      My apologies for the delayed response. The idea of the "influence variable" is not clearly explained in the SPSS documentation! I will get back to you once I find good information on this.

  • @hassanchhaiba154
    @hassanchhaiba154 2 years ago

    Hi sir, thank you for this video. So if we use a decision tree, can we identify the number of scorecards we need to develop? And after this, do we use logistic regression for every segment? Thank you.

    • @theoutlier7395
      @theoutlier7395  1 year ago

      Running the decision tree is the first part; then we need to create the scorecard. There is no need to run logistic regression.
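
      One generic way to turn the terminal nodes into a simple segment scorecard, sketched below on synthetic data, is to convert each node's bad rate into points with log-odds scaling; this is an illustration of the idea only, not necessarily the exact approach intended here.

      # Hypothetical sketch: terminal nodes -> segment scorecard points via
      # log-odds scaling (base score 600, 50 points to double the good:bad odds).
      # Synthetic data; not the author's method or dataset.
      import numpy as np
      import pandas as pd
      from sklearn.datasets import make_classification
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                                 random_state=1)          # y == 1 means "Bad"

      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30,
                                    random_state=1).fit(X, y)
      leaf = tree.apply(X)                                 # terminal node per case

      seg = (pd.DataFrame({"leaf": leaf, "bad": y})
               .groupby("leaf")["bad"].agg(bad_rate="mean", n="count"))
      eps = 1e-6                                           # guard against 0/1 rates
      odds_good = (1 - seg["bad_rate"] + eps) / (seg["bad_rate"] + eps)
      seg["points"] = np.round(600 + 50 * np.log2(odds_good)).astype(int)
      print(seg)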

  • @nadirghauri1266
    @nadirghauri1266 2 years ago +1

    please sir

  • @adoninews402
    @adoninews402 1 year ago

    Hi