Thank you for such a clear explanation. Google should be showing this as the first result when someone looks for Decision Trees - Regression.
Man, this is so clear. Hi from Colombia! I shared these videos with my students; I just hope they watch them.
agree. you explain so clearly what's happening under the hood of the algorithm
Wonderful video! Makes things so clear!
Thanks, very clear and easy to understand
Great explanation. Wonderful. Thank you.
Great explanation
Thank You, this was just too clear that I had to bookmark it ;)
Very good video: the audio is excellent, and the video is indeed awesome. Keep up the good work. Do you have any video similar to this, but focused on classification? I hope you do. God bless.
please check my channel, I have a few videos on classification algorithms like kNN and Classification Trees
Amazing , thank you ❣
Great video... thanks
Thank you very much this is very informative.
Thank You!
Question: If one predictor variable is Continuous, how is it evaluated against the Categorical? Is Categorical chosen over Continuous?
perhaps you should categorize.
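A minimal sketch (with made-up data and column names) of what the question above is getting at: a CART-style regression tree puts continuous and categorical predictors on the same footing by turning each into candidate splits and scoring them all with the same SD-reduction measure, so neither type is preferred by default:

```python
import statistics

# Toy data: (categorical predictor, continuous predictor, target).
# All names and values here are invented for illustration.
rows = [
    ("sunny", 64, 38), ("sunny", 72, 45), ("rain", 65, 52),
    ("rain", 71, 46), ("overcast", 83, 44), ("overcast", 64, 62),
]

def sd(values):
    return statistics.pstdev(values) if len(values) > 1 else 0.0

def weighted_sd(groups, n):
    # Weighted average of the standard deviations of each child node.
    return sum(len(g) / n * sd(g) for g in groups if g)

targets = [t for _, _, t in rows]
parent_sd, n = sd(targets), len(rows)

# Categorical predictor: one child node per category.
cats = {}
for c, _, t in rows:
    cats.setdefault(c, []).append(t)
cat_reduction = parent_sd - weighted_sd(list(cats.values()), n)

# Continuous predictor: try a threshold between each pair of adjacent
# sorted values, producing a binary (<= / >) split, and keep the best.
values = sorted({v for _, v, _ in rows})
best = 0.0
for lo, hi in zip(values, values[1:]):
    thr = (lo + hi) / 2
    left = [t for _, v, t in rows if v <= thr]
    right = [t for _, v, t in rows if v > thr]
    best = max(best, parent_sd - weighted_sd([left, right], n))

# Both predictors are now on the same scale (SD reduction), so the tree
# simply picks whichever reduction is larger -- variable type plays no role.
print(cat_reduction, best)
```

So categorizing the continuous variable by hand isn't required; the algorithm effectively discretizes it itself by scanning thresholds.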
Excellent
The individual weighted sd is the impurity and the difference from the SD(s) is the gini gain?
great video. It would be great if you could show us another split from the child node that you gained. Thanks
You need to proportionally add the sums of squared residuals (or variances), not the standard deviations of the leaves.
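The correction above can be checked numerically: sums of squared residuals (equivalently, weighted variances) decompose additively across leaves, while weighted standard deviations do not. A small sketch with made-up numbers:

```python
import statistics

# A made-up parent node split into two leaves.
left, right = [10.0, 12.0, 14.0], [30.0, 34.0]
parent = left + right
n = len(parent)

def ssr(values):
    # Sum of squared residuals around the group's own mean.
    m = statistics.fmean(values)
    return sum((v - m) ** 2 for v in values)

# Within-leaf squared error adds exactly: dividing the summed SSR by n
# recovers the weighted average of the leaf variances.
within = ssr(left) + ssr(right)
weighted_var = sum(len(g) / n * statistics.pvariance(g) for g in (left, right))
assert abs(within / n - weighted_var) < 1e-9

# Standard deviations do not add this way: the square root of the weighted
# variance differs from the weighted sum of the leaf SDs.
weighted_sd = sum(len(g) / n * statistics.pstdev(g) for g in (left, right))
print(weighted_var ** 0.5, weighted_sd)
```

In practice the two scores often rank splits similarly, which is why SD-reduction tutorials still give sensible trees, but variance/SSR is the quantity that decomposes cleanly.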
How important is normality to your example, and to CART at large?
Can you please share the worksheet?
super
Please, the final part of this algorithm.
Thanks for the explanation. Question: Can we use the P-value to choose the most prominent among all predictors?
In general, P-values are not used for feature selection.
Hi, why Excel? I have been waiting for your new videos on R. Please do machine learning, deep learning, neural network in R. Thank you so much
Your video is very irritating and doesn't clarify the concept at all.