NOTE: Gradient Boost traditionally uses Regression Trees. If you don't already know about Regression Trees, check out the 'Quest: ruclips.net/video/g9c66TUylZ4/видео.html Also NOTE: In Statistics, Machine Learning and almost all programming languages, the default base for the log function, log(), is log base 'e' and that is what I use here.
Support StatQuest by buying my books The StatQuest Illustrated Guide to Machine Learning, The StatQuest Illustrated Guide to Neural Networks and AI, or a Study Guide or Merch!!! statquest.org/statquest-store/
I am a bit confused. The first log that you took, log(4/2): was that to some base other than e? Because e^(log(x)) = x for log to the base e,
and hence the probability would simply be 2/(1+2) = 2/3 = Number of Yes / Total Observations = 4/6 = 2/3.
Please let me know if this is correct.
@@parijatkumar6866 The log is to the base 'e', and yes, e^(log(x)) = x. However, sometimes we don't have x, we just have the log(x), as is illustrated at 9:45. So, rather than use one formula at one point in the video, and another in another part of the video, I believe I can do a better job explaining the concepts if I am consistent.
For Gradient Boost for CLASSIFICATION, because we convert the categorical targets (No or Yes) to probabilities (0-1) and the residuals are calculated from the probabilities, when we build a tree we still use a REGRESSION tree, which uses the sum of squared residuals to choose the splits. Is that correct? Thank you.
@@jonelleyu1895 Yes, even for classification, the target variable is continuous (probabilities instead of Yes/No), and thus, we use regression trees.
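A minimal sketch in R (my own, not StatQuest's code, and the tiny data frame is made up rather than copied from the video): fit a regression tree to the residuals with rpart, where method = "anova" means the splits are chosen by the sum of squared residuals.
library(rpart)
d <- data.frame(
  likes_popcorn  = c(1, 1, 0, 1, 0, 0),
  age            = c(12, 87, 44, 19, 32, 14),
  favorite_color = factor(c("Blue", "Green", "Blue", "Red", "Green", "Blue")),
  loves_troll2   = c(1, 1, 0, 0, 1, 1)      # 4 Yes, 2 No, so the initial log(odds) is log(4/2)
)
p0 <- mean(d$loves_troll2)                   # initial predicted probability from the single leaf
d$residual <- d$loves_troll2 - p0            # pseudo-residuals = observed - predicted probability
tree1 <- rpart(residual ~ likes_popcorn + age + favorite_color,
               data = d, method = "anova",   # "anova" = regression tree, splits minimize squared error
               control = rpart.control(minsplit = 2, cp = 0))
tree1                                        # prints the splits and the mean residual in each leaf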
I cannot imagine the amount of time and effort used to create these videos. Thanks!
Thank you! Yes, I spent a long time working on these videos.
Love these videos! You deserve a Nobel prize for simplifying machine learning explanations!
Wow, thanks!
Thanks, Josh. You have no idea how much you've helped me through your videos. God bless you.
Thank you!
This content shouldn't be free, Josh. So amazing. Thank you 👏🏽
Thank you very much! :)
Thank you so much Josh, I watch 2-3 videos everyday of your machine learning playlist and it just makes my day. Also the fact that you reply to most of the people in the comments section is amazing. Hats off. I only wish the best for you genuinely.
bam!
@@statquest Double Bam!
Bam?
@@sameepshah3835 Triple Bam!
The best explanation I've seen so far. BAM! Catchy style as well ;)
Thank you! :)
@@statquest are the individual trees which are trying to predict the residuals regression trees?
@@arunavsaikia2678 Yes, they are regression trees.
I'm enjoying the thorough and simplified explanations as well as the embellishments, but I've had to set the speed to 125% or 150% so my ADD brain can follow along.
Same enjoyment, but higher bpm (bams per minute)
Awesome! :)
You really explain complicated things in a very easy and catchy way.
I like the way you BAM.
BAM!!! :)
Thanks for all you've done. Your videos are a first-class and precise learning source for me.
Great to hear!
Thank you very much! Your step-by-step explanation is very helpful. It gives people like me, with poor abstract thinking, a chance to understand all the math behind these algorithms.
Glad it was helpful!
I will recommend the channel to everyone studying machine learning :) Thanks a lot, Josh!
Thank you! :)
You have explained the Gradient Boosting Regressor and Classifier very well. Thank you!
Thank you!
I wish I had a teacher like Josh! Josh, you are the best! BAAAM!
Thank you!:)
Finally a video that shows the process of gradent boosting. Thanks a lot.
Thanks!
That's an excellent lesson and a unique sense of humor. Thank you a lot for the effort in producing these videos!
Glad you like them!
I'm new to ML and these contents are gold. Thank you so much for the effort!
Glad you like them!
Thanks for the video! I’ve been going on a statquest marathon for my job and your videos have been really helpful. Also “they’re eating her...and then they’re going eat me!....OH MY GODDDDDDDDDDDDDDD!!!!!!”
AWESOME!!!
This is amazing. This is the nth time I have come back to this video!
BAM! :)
Fantastic video! I was confused about gradient boosting, but after watching all parts of the GB series on this channel, I understand it very well :)
Bam! :)
Yet again. Thank you for making concepts understandable and applicable
Thanks!
Amazing illustration of a complicated concept. This is best explanation. Thank you so much for all your efforts in making us understand the concepts very well !!! Mega BAM !!
Thank you! :)
Respect and many thanks from Russia, Moscow
Thank you!
Absolutely wonderful. You are my guru. A true salute to you.
Thank you!
My life has been changed three times. First, when I met Jesus. Second, when I found my true love. Third, it's you, Josh.
Triple bam! :)
Thank you Josh for another exciting video! It was very helpful, especially with the step-by-step explanations!
Hooray! I'm glad you appreciate my technique.
Love these videos. Starting to understand the concepts. Thank you Josh.
Thank you! :)
Very simple and practical lesson. I created a worked sample based on this with no problems.
It might be obvious, though not explained there, that the initial odds here should be more than 1; the odds of a rarer event would be closer to zero.
Glad to see this video arrived just as I started to take an interest in this topic.
I guess it will become a "bestseller".
I have beeeeennnn waiting for this video..... Awesome job Joshh
Thanks!
Hello Josh! I think there might be a mistake in methodology at 5:11 compared to what you showed in part 4 of the series for computing the residual. In this video, you computed the first set of residuals as (Observed - log(odds) = residual), and in part 4 you calculate it as (Observed - probability = residual). So in this scenario, where we have Observed as 1, log(odds) as 0.7, and p as 0.67, shouldn't the residual be (1 - 0.67 = 0.33) instead of (1 - 0.7 = 0.3)?
Love your videos and I am a huge fan!
I think you are confused, perhaps because the log(odds) = log(4/2) ≈ 0.7 and the probability = 4/6 ≈ 0.7. So, in this specific situation, both the log(odds) and the probability round to the same value. Thus, when we calculate the residuals, we use the probability. The equation is Residual = (observed - probability), as can be seen earlier at 4:49.
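For reference, here is a quick arithmetic check of those numbers in R (same console style as the snippet further down the thread; log is base 'e'):
> log(4/2)
[1] 0.6931472
> 4/6
[1] 0.6666667
> 1 - 4/6     # residual for a "Yes" row = observed - probability
[1] 0.3333333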
I was wrong! All your songs are great!!!
Quadruple BAM!
:)
Another great lecture by Josh Starmer.
Hooray! :)
@@statquest I actually have a draft paper (not submitted yet) and included you in the acknowledgements if that is ok with you. I will be very happy to send it to you when we have a version out.
@@ElderScrolls7 Wow! that's awesome! Yes, please send it to me. You can do that by contacting me first through my website: statquest.org/contact/
@@statquest I will!
Man, your videos are just super good, really.
Thank you!
It is perfectly understood. Thank you so much!
Glad it was helpful!
Super cool to understand and study. Keep it up, master!
Thank you!
Thank you so much for this series, I understand everything thanks to you!
bam! :)
Why, when plugging into the logistic function around 2:42, is the denominator 1 + e^log(4/2) and not 1 + e^-log(4/2)? (Given the sigmoid is 1/[1 + e^-x].) When I try plugging in e^(log(4/2))/[1 + e^log(4/2)] I get 0.574, and when I use e^(log(4/2))/[1 + e^-log(4/2)] I get something closer (0.776). What base is the log in? (I tried base 2 and base e but still got different results.)
In statistics, machine learning, and most programming languages, the default log function is log to the base 'e'. So, in all of my videos, I use log to the base 'e'. In this video, we use e^log(odds) / (1 + e^log(odds)) to convert the log(odds) into a probability. This equation is derived here: ruclips.net/video/BfKanl1aSG0/видео.html
As to why you're not getting 0.7 when you do the math, you need to double check that you are using base 'e'. For example, when I use base 10, I get the same result you got:
> exp(log10(4/2)) / (1 + exp(log10(4/2)))
[1] 0.5746943
However, when I use base 'e', I get the result in the video:
> exp(log(4/2)) / (1 + exp(log(4/2)))
[1] 0.6666667
You save me from the abstractness of machine learning.
Thanks! :)
Already waiting for Part 4...thanks as always Josh!
I'm super excited about Part 4, and it should be out in a week and a half. This week got a little busy with work, but I'm doing the best that I can.
Thanks a lot, your videos helped me so much, please keep going!
Thank you!
All your videos are super amazing!!!!
Thank you! :)
Best original song ever at the start!
Yes! This is a good one. :)
Hi Josh, great video.
Thank you so much for your great effort.
Thank you!
I wish I could give you the money that I pay in tuition to my university. It's ridiculous that people who are paid so much can't make the topic clear and comprehensible like you do. Maybe you should do teaching lessons for these people. Also you should have millions of subscribers!
Thank you very much!
Nice explanation and easy to understand. Thanks, bro!
You are welcome
the best video for GBT
Thanks!
Amazing and Simple as always. Thank You
Thank you very much! :)
God bless you, Josh.
I really appreciate it.
Thank you!
First of all, I would like to thank you, Dr. Josh, for all these great videos. I would like to ask how important it is, in your experience, to understand the mathematics of the algorithms, as you analyze them in parts 2 and 4, especially for people who want to work in the analysis of biological data. Thanks a lot again! You really helped me understand many machine learning topics.
One of the reasons I split these videos into "main ideas" and "mathematical details" was I felt that the "main ideas" were more important for most people. The details are interesting, and helpful if you want to build your own tree based method, but not required.
@statquest Thank you for your reply! Also, I would like to thank you again for all this knowledge that you provide. I have never seen a better teaching methodology than yours ! :)
@@ΒΑΣΙΛΗΣ_ΛΕΒΕΝΤΑΡΟΣ Thank you very much! :)
Thanks for the videos, the best of anything I've seen. Will use this 'pe-pe-po-pi-po' as the message alarm on my phone)
bam!
I'm Thankful to u Joshhh , HURRAY ! GIGA BAMM!
:)
Excellent as always! Thanks Josh!
Thank you! :)
1:13 1:29 2:45 5:11 5:36 6:10 6:42 7:04 9:05 10:42 12:00 13:27 14:28 15:08 15:53
So finallyyyy the MEGAAAA BAMMMMM is included.... Awesomeee
Yes! I was hoping you would spot that! I did it just for you. :)
@@statquest I was in the office when I first wrote the comment earlier, so I couldn't see the full video...
How does the multi-class classification algorithm work in this case? Using the one-vs-rest method?
It's been over 11 months and no reply from josh... bummer
have the same question
@@Nushery12 Well, we could use a one-vs-rest approach.
It uses a softmax objective in the case of multi-class classification, much like multinomial (softmax) logistic regression.
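For anyone curious, a minimal sketch of that idea in R (the raw scores are made up, and this is not code from the video): with K classes, each class gets its own set of trees producing a raw score on a log(odds)-like scale, and softmax turns the K scores into probabilities that sum to 1.
softmax <- function(z) exp(z) / sum(exp(z))              # softmax: exponentiate, then normalize
scores  <- c(classA = 0.5, classB = -1.2, classC = 2.0)  # made-up raw scores, one per class
softmax(scores)                                          # roughly 0.18, 0.03, 0.79; they sum to 1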
Listening to your song makes me think of Phoebe Buffay, haha.
Love it, anyway!
See: ruclips.net/video/D0efHEJsfHo/видео.html
@@statquest Smelly stat, smelly stat, It's not your fault (to be so hard to understand)
@@statquest btw i like your explanation on gradient boost too
THIS IS A BAMTABULOUS VIDEO !!!!!!
BAM! :)
How do you create the classification trees using residual probabilities? Do you stop using some kind of purity index during the optimization in that case? Or do you use regression methods?
We use regression trees, which are explained here: ruclips.net/video/g9c66TUylZ4/видео.html
Great video! Thank you!
Thanks!
Thank you so much! Can you please make a video on Support Vector Machines?
Agreed!
Now I want to watch Troll 2
:)
Somewhere around the 15-minute mark I made up my mind to search for this movie on Google.
@@AdityaSingh-lf7oe bam
You are very helpful, thank you!
Thank you!
really liked this intro
bam! :)
Hi, I have a few questions: 1. How do we know when the GBDT algorithm stops (other than M, the number of trees)? 2. How do I choose a value for M? How do I know it is optimal?
Nice work by the way, the best explanation I have found on the internet.
You can stop when the predictions stop improving much. You can try different values for M, plot the predictions after each tree, and see when they stop improving.
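One practical way to pick M, sketched with the gbm package (assuming a data frame df with a 0/1 outcome column y; the parameter values are placeholders, not recommendations):
library(gbm)
set.seed(42)
fit <- gbm(y ~ ., data = df, distribution = "bernoulli",   # log(odds)-based boosting for 2 classes
           n.trees = 500, shrinkage = 0.1, interaction.depth = 3,
           cv.folds = 5)
# gbm.perf() estimates the number of trees at which the cross-validated error
# stops improving, which is a reasonable choice for M
best_M <- gbm.perf(fit, method = "cv")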
@@statquest thank you!
Gradient Boost: BAM
Gradient Boost: Double BAM
Gradient Boost: Triple BAM
Gradient Boost: Quadruple BAM
Great Gradient Boost franchise)
Thanks so much! XGBoost is next! It's an even bigger and more complicated algorithm, so it will be many, many BAMs! :)
I thought you were gonna say PentaBAM -> Unstoppable -> Godlike (if you play League of Legends).
You are amazing, sir! 😊 Great content
Thanks a ton! :)
Bloody awesome 🔥
Thanks!
@statquest You mentioned at 10:45 that we build a lot of trees. Are you referring to bagging, or to building a different tree at each iteration?
Each time we build a new tree.
6:42 The transformation formula has a numerator that "comes from the log(odds)", while the denominator uses probabilities.
The output of this fraction is something in terms of log(odds), and I don't get why... Maybe because, let's say, log(prob)/prob = log( )???
Agreed - I'm not really sure how this works, but it does.
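For what it's worth, here is a tiny numeric sketch of that output rule with made-up values (this is just the leaf-output formula shown in the video, not a derivation; the derivation is covered in Part 4):
r <- c(0.3, -0.7)            # residuals (observed - previous probability) for the rows in one leaf
p <- c(0.7, 0.7)             # the previous predicted probabilities for those same rows
sum(r) / sum(p * (1 - p))    # the leaf's output value, which ends up on the log(odds) scale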
Hey! Thanks for this awesome video. I have a question: at 12:00, how did you build this new tree? What was the criterion for choosing Age less than 66 as the root?
Gradient Boost uses Regression Trees: ruclips.net/video/g9c66TUylZ4/видео.html
subscribed sir....nice efforts sir
Thank you! :)
Just curious: at 3:00, why don't we just compute the probability directly instead of using the log(odds) and obtaining the probability from it? These two probabilities are essentially the same thing, aren't they? I guess they are the same, and the reason is that we want to perform gradient boosting on the log(odds) instead of directly on the probability, since the latter might overshoot and produce something not in [0,1].
That's correct. By using the log(odds), which go from negative infinity to positive infinity, we can add as many trees together as we want without any fear of going out of range. In contrast, if we used probabilities, we would have to check to make sure we stayed within values between 0 and 1.
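A quick illustration of that point in R (the tree outputs below are made up, and the learning rate is left out to keep it short):
log_odds <- 0.7 + 1.4 + (-0.6) + 2.3     # initial leaf plus three made-up tree outputs, all on the log(odds) scale
exp(log_odds) / (1 + exp(log_odds))      # the logistic function maps any total back into (0, 1)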
Thanks again for a wonderful video. At 6:00 and 12:00, regression trees are built to fit the residuals. How are the conditions for building these trees obtained?
I'm not sure I understand your question. Can you clarify? We just build trees with the residuals as the target variable.
@@statquest At 12:00 it checks "Age < 66". Why do we specifically check for "Age < 66" instead of, say, "Age < 71"? Was the value 66 obtained on some mathematical basis? Thanks again for promptly responding to all the questions.
@@madaramarasinghe829 Gradient Boost uses regression trees. To learn how regression trees are built, see: ruclips.net/video/g9c66TUylZ4/видео.html
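A bare-bones sketch of how a regression tree picks such a threshold (made-up ages and residuals, not the video's table): try the midpoints between adjacent ages and keep the one with the smallest total sum of squared residuals (SSR).
age      <- c(12, 19, 32, 44, 71, 87)
residual <- c(0.3, 0.3, 0.3, -0.7, -0.7, 0.3)
thresholds <- (head(sort(age), -1) + tail(sort(age), -1)) / 2   # midpoints between adjacent ages
ssr <- sapply(thresholds, function(t) {
  left  <- residual[age <  t]
  right <- residual[age >= t]
  sum((left - mean(left))^2) + sum((right - mean(right))^2)     # SSR if we split at this threshold
})
thresholds[which.min(ssr)]   # the threshold with the smallest SSR becomes the root split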
Josh my hero!!!
:)
Hey Josh,
I really enjoy your teaching. Please make some videos on XG Boost as well.
XGBoost Part 1, Regression: ruclips.net/video/OtD8wVaFm6E/видео.html
Part 2 Classification: ruclips.net/video/8b1JEDvenQU/видео.html
Part 3 Details: ruclips.net/video/ZVFeW798-2I/видео.html
Part 4, Crazy Cool Optimizations: ruclips.net/video/oRrKeUCEbq8/видео.html
Thanks for this video! Quick question about 7:45: how can the output of the tree have negative values in its leaves? Even if we use it as a classifier, shouldn't the value be in terms of the ratio of positives to negatives?
The output from each tree (the values in the leaves) are on the log(odds) scale, which we later convert into a probability of being one of the two classifications. For details, see: 14:27
Thanks for this video, but one question: does the tree that you constructed for predicting residuals at 5:30 use the sum of squared errors, as in regression trees, or the Gini index, as in classification trees, since we have only two target values?
In a pinned comment I wrote "Gradient Boost traditionally uses Regression Trees. If you don't already know about Regression Trees, check out the 'Quest: ruclips.net/video/g9c66TUylZ4/видео.html"
Hey Josh, great videos!! But I want to ask about something around 6:40. To add the leaf's and the tree's predictions, we convert the tree's prediction with that formula into log(odds) format, the same scale as the leaf, and we continue the same process for each subsequent tree, right?
My question is: why don't we convert the initial single leaf's output to a probability once and spare all the predictions of the later trees from that conversion formula?
Because the log(odds) goes from negative infinity to positive infinity, allowing us to add as many trees as we please without having to worry that we will go too far. In contrast, if we used probability, then we would have hard limits at 0 and 1, and then we would have to worry about adding too many trees and going over or under etc.
Hi StatQuest, would you please make a video about Naive Bayes? Please, it would be really helpful.
amazing as always !!
Any time! :)
The legendary MEGA BAM!!
Ha! Thank you! :)
Congrats!! Nice video! Ultra bam!!
Thank you very much! :)
I salute your hard work, and mine too.
Thanks
I think there is a mistake in the way the tree classifies after 14:41. Age = 25, and the explanation goes to the right, which it shouldn't have. Typically, "Yes" follows the direction of the arrow and "No" goes to the left. However, this is contrary to that convention. Correct me if I am wrong.
Great explanation.
There are different conventions for drawing trees. The one I follow is that if the statement is "true", you take the left branch. If the statement is "false", you take the right branch. I try to be consistent with this.
Thank you for good videos!
Thanks! :)
Thank you, awesome video
Thank you! :)
God bless you, thank you so so so much.
Thank you! :)
16:25 My first *Mega Bam!!!*
yep! :)
This is absolutely a great video. Will you cover why we can use residual/(p*(1-p)) as the log of odds in your next video? Very excited for the part 4!!
Yes! The derivation is pretty long - lots of little steps, but I'll work it out entirely in the next video. I'm really excited about it as well. It should be out in a little over a week.
How do you create each tree? In your decision tree video you use them for classification, but here they are used to predict the residuals (something like regression trees)
same question
First of all, thank you for such great explanations. Great job!
It would be great if you could make a video about the Seurat package, which is a very powerful tool for single-cell RNA analysis.
This guy is literally coming into my dreams 😂
:)
Thank you very much for your videos!
When will you post the next one?
absolute gold
Thank you! :)
Thank you very much for sharing! :)
Thanks! :)
15:42 What class should be predicted if the probability is exactly 0.5, since 0.5 is equal to the threshold?
Probably the best thing to do is to return an error or a warning.
very detailed and convincing
Thank you! :)
Wow! I haven't seen a Mega BAM before!
:) That was for a special friend.
Can GB for classification be used for multiple classes? If yes, what does the math look like? The video explains the binary case.
Thanks so much for the amazing videos as always! One question: why does the loss function for Gradient Boost classification use residuals instead of cross entropy? Thanks!
Because we only have two different classifications. If we had more, we could use soft max to convert the predictions to probabilities and then use cross entropy for the loss.
@@statquest Thank you!
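A small numeric check related to the exchange above (my own sketch, not StatQuest's code): the residual (observed - probability) used in this video is exactly the negative gradient of the binary log loss (cross entropy) with respect to the log(odds) prediction, which is where the "gradient" in Gradient Boost comes from.
y <- 1                              # observed class, e.g. "Yes"
z <- 0.7                            # current log(odds) prediction
p <- exp(z) / (1 + exp(z))          # predicted probability
# binary log loss (cross entropy) as a function of the log(odds) prediction
loss <- function(z) -(y * log(exp(z) / (1 + exp(z))) + (1 - y) * log(1 / (1 + exp(z))))
grad <- (loss(z + 1e-6) - loss(z - 1e-6)) / (2e-6)    # numerical derivative of the loss
c(residual = y - p, negative_gradient = -grad)        # the two values agree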