Sir, great respect for you. I would rate your course higher than many top Coursera courses. I have watched all your ML videos so far. I am doing my master's at IIT Kanpur. If I get a job in this domain, much of the credit will go to you and your dedication. Hats off! 👒
He's at IIT Kanpur and he's still using the word "IF"!
Did you get a job in this domain?
Thanks for putting in so much effort. Appreciate your work! Worth watching.
Thanks for the informative video. I'd just like to kindly point out that for the 2nd example of Tree with 15 datapoints and 2 features, there was a slight error in the node importance formula for the 2nd and 3rd node. As per the formula you mentioned earlier, the impurity weight should have been 3/9 instead of 3/15. That explains the discrepancy in the feature importance numbers between your calculations and the package.
@20:32 Based on the formula, it should be 3/9 × 0.44.
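For anyone who wants to check the weighting themselves, here's a small sketch (on made-up data, not the video's 15-point example) of how scikit-learn defines node importance, where each node's impurity weight is its sample count over the *total* sample count, and feature importance is the normalized sum over the nodes that split on that feature:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# toy data standing in for the 15-point, 2-feature example
X, y = make_classification(n_samples=15, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
t = clf.tree_

# node importance: NI(j) = (N_j/N)*imp_j - (N_left/N)*imp_left - (N_right/N)*imp_right
N = t.weighted_n_node_samples[0]
node_imp = np.zeros(t.node_count)
for j in range(t.node_count):
    left, right = t.children_left[j], t.children_right[j]
    if left == -1:  # leaf node, no split, no importance
        continue
    node_imp[j] = (t.weighted_n_node_samples[j] / N) * t.impurity[j] \
                - (t.weighted_n_node_samples[left] / N) * t.impurity[left] \
                - (t.weighted_n_node_samples[right] / N) * t.impurity[right]

# feature importance: sum node importances per split feature, then normalize
fi = np.zeros(clf.n_features_in_)
for j in range(t.node_count):
    if t.children_left[j] != -1:
        fi[t.feature[j]] += node_imp[j]
fi = fi / fi.sum()

print(fi)  # compare against the library's own numbers
```

The `np.allclose(fi, clf.feature_importances_)` check is a quick way to confirm which weighting the package actually uses.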
😊
So awesome, sir! You explained everything in detail. Thanks!
Great Discussion . Thanks
Awesome... really helpful. I was looking for such an easily understandable video.
It's a great video! Thanks for explaining in detail. It would be very helpful if you did a similar video on how permutation importance is calculated.
One more question: does this 'feature importance' help in finding the root cause of a problem?
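In case it helps in the meantime, permutation importance can be sketched with scikit-learn's built-in helper (toy data assumed, not the video's example): shuffle one column at a time on held-out data, and the drop in score is that feature's importance.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# toy dataset with 5 features
X, y = make_classification(n_samples=300, n_features=5, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

rf = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)

# shuffle each column n_repeats times on the held-out set;
# the mean drop in accuracy is that feature's permutation importance
result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=42)
print(result.importances_mean)  # one score per feature
```

Unlike impurity-based importance, this is measured on data the model hasn't memorized, so it's less biased toward high-cardinality features.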
Thank you, Nitish sir.
Man, you're a beast. Awesome.
Hi Nitish, please cover feature selection, XGBoost and s
Will the OOB samples not be 0 when we use the default RandomForestClassifier(), since all rows are fed to each decision tree?
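A small sketch that may clarify this (toy data assumed): with the default `bootstrap=True`, each tree gets n rows drawn *with replacement*, so roughly a third of the rows are out-of-bag for every tree, and the OOB score is well defined.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

# default bootstrap=True: each tree sees n rows sampled WITH replacement,
# so about 37% of rows are out-of-bag for any given tree
rf = RandomForestClassifier(oob_score=True, random_state=0).fit(X, y)
print(rf.oob_score_)  # works because OOB samples do exist

# with bootstrap=False every row really is fed to every tree and there are
# no OOB samples; asking for oob_score=True in that case raises an error
```

So the OOB set is only empty if you explicitly turn bootstrapping off.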
Hello, thanks for the explanation. I have one question: does using the best features help reduce the amount of training data needed? Say I don't have a large dataset, but I can construct an independent variable that is highly correlated with the dependent variable; will that help me reduce my training dataset? Your response will be highly valuable.
Can we do landslide prediction with this?
Awesome ❤❤❤
What if there are more than 2 columns? How will the importance be calculated then, and what will the x/(x+y) formula for the nodes look like?
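A quick sketch (made-up 6-feature data, not the video's example): the formula extends naturally, so x/(x+y) just becomes each feature's summed node importance divided by the total over all split nodes, i.e. x_i / (x_1 + x_2 + ... + x_k).

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# toy data with 6 features instead of 2
X, y = make_classification(n_samples=200, n_features=6, n_informative=4,
                           random_state=1)
clf = DecisionTreeClassifier(random_state=1).fit(X, y)

# one importance per feature; the denominator is the total node importance
# over ALL features, so the scores still sum to 1
print(clf.feature_importances_)
print(clf.feature_importances_.sum())
```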
Can we also plot feature importance for an SVM classifier and a kernel SVM?
Thanks sir, that was fun!
In a random forest, can we find out how many decision trees are used during training, or are they selected randomly depending on the rows and columns chosen during row and feature sampling?
Please explain golden features?
Hi sir, how can we check which features contributed most to each individual prediction?
Suppose we built a model to predict whether a loan should be approved or not. If a person asks why their application was rejected, the important features will differ from person to person. So how do we check the feature importance for each prediction?
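One way to get per-prediction contributions for a single decision tree is to walk the sample's decision path and credit each change in predicted probability to the feature that split there; this is the idea that libraries like treeinterpreter and SHAP build on. A rough sketch on made-up data (not a real loan model):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = clf.tree_

def contributions(x):
    # per-node class probabilities
    probs = t.value[:, 0] / t.value[:, 0].sum(axis=1, keepdims=True)
    contrib = np.zeros((clf.n_features_in_, clf.n_classes_))
    node = 0
    while t.children_left[node] != -1:  # until we reach a leaf
        feat = t.feature[node]
        nxt = (t.children_left[node]
               if x[feat] <= t.threshold[node]
               else t.children_right[node])
        # credit the probability change at this step to the split feature
        contrib[feat] += probs[nxt] - probs[node]
        node = nxt
    return probs[0], contrib  # baseline (root) + per-feature contributions

base, contrib = contributions(X[0])
# baseline + summed contributions reconstructs the tree's prediction exactly
print(base + contrib.sum(axis=0))
print(clf.predict_proba(X[0:1])[0])
```

For a random forest you'd average these contributions over all trees; for production explanations, SHAP's `TreeExplainer` does this properly with a more principled attribution.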
Hello sir, please update this series. Thanks.
How is feature importance calculated in a decision tree for regression?
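A quick sketch that may answer this (toy data assumed): for regression it is the same weighted impurity-decrease sum, just with MSE (variance) as the impurity measure instead of Gini or entropy.

```python
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

# toy regression data with 4 features, 2 of them informative
X, y = make_regression(n_samples=200, n_features=4, n_informative=2,
                       random_state=0)

# criterion defaults to squared error, so each split's "impurity decrease"
# is the weighted reduction in variance it achieves
reg = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(reg.feature_importances_)  # still normalized to sum to 1
```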
Please cover feature selection, XGBoost, KNN, DBSCAN and CatBoost.
Hi, that was very informative. I have a question regarding the above problem:
In a decision tree, if a feature is more important (as we saw, the 1st feature was more important), shouldn't it be the root node? Is there any relation between the order of the nodes and the feature importance?
There are parameters which decide the feature used for the root node; in a random forest we have multiple decision trees, so it comes down to feature sampling.
If the zeroth column has more feature importance, then why can't it be the root node?
20:46 Look behind you!
best
Does high cardinality apply to numerical features as well, or only to categorical data?
Cardinality refers to categorical data, Binod.
How is this different from Mutual Information?
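A sketch of the difference (toy data assumed): mutual information is model-free, estimated directly from the data, while impurity-based feature importance depends on the splits of a fitted model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=4, n_informative=2,
                           random_state=0)

# mutual information: estimated from (X, y) alone, no model required
mi = mutual_info_classif(X, y, random_state=0)

# impurity-based importance: derived from a fitted model's split decisions
fi = RandomForestClassifier(random_state=0).fit(X, y).feature_importances_

print(mi)  # not normalized; each value is an MI estimate per feature
print(fi)  # normalized to sum to 1
```

MI captures any dependence between one feature and the target, while tree importance reflects how useful the model actually found the feature, including interactions with other features.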