XGBoost Part 2 (of 4): Classification

  • Published: 26 Sep 2024

Comments • 400

  • @statquest
    @statquest  4 года назад +30

    Corrections:
    14:24 I meant to say "larger" instead of "lower".
    18:48 In the original XGBoost documents they use the epsilon symbol to refer to the learning rate, but in the actual implementation, this is controlled via the "eta" parameter. So, I guess to be consistent with the original documentation, I made the same mistake! :)
    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

    • @parijatkumar6866
      @parijatkumar6866 3 года назад +1

      Very nice videos. God bless you man!!

    • @rahul-qo3fi
      @rahul-qo3fi Год назад

      15:27 The similarity equations are missing residual**2. (Thanks for the detailed explanations, love your content!)

    • @statquest
      @statquest  Год назад +1

      @@rahul-qo3fi At 15:27 we are calculating the output values for the leaf, not similarity scores, and the equation in the video at this time point is the correct equation for calculating output values.
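
      A rough Python sketch of the output-value calculation referred to above; the residuals, previous probabilities, and lambda value here are made up for illustration:

        # Output value for a leaf in XGBoost classification (not a similarity score):
        #   output = sum(residuals) / (sum(prev_prob * (1 - prev_prob)) + lambda)
        residuals = [0.5, 0.5, -0.5]   # hypothetical residuals in one leaf
        prev_probs = [0.5, 0.5, 0.5]   # previous predicted probabilities for those samples
        lam = 1.0                      # lambda, the regularization parameter

        output_value = sum(residuals) / (sum(p * (1 - p) for p in prev_probs) + lam)
        print(output_value)            # 0.5 / (0.75 + 1.0) ≈ 0.286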

    • @rahul-qo3fi
      @rahul-qo3fi Год назад +1

      @@statquest aah got it, thanks:)

    • @pedromerrydelval7260
      @pedromerrydelval7260 7 месяцев назад

      Hi Josh, I don't understand the mention of the parameter "min_child_weight" at 12:58. Is that a typo or am I missing something? Thanks!

  • @TY-il7tf
    @TY-il7tf 4 года назад +91

    How do I pass any interviews without these videos? I don't know how much I owe you Josh!

    • @statquest
      @statquest  4 года назад +10

      Thanks and good luck with your interview. :)

    • @TheParijatgaur
      @TheParijatgaur 4 года назад

      did you clear ?

    • @guneygpac6505
      @guneygpac6505 4 года назад +15

      I got a few academic papers under review thanks to Josh. I watch his videos first before studying the other sources. Without his videos it would be Xhard to understand those sources. I put his name in the acknowledgements for helpful suggestions (he did actually reply to me several times here). I wish I could cite some of his papers but they are very unrelated to my area (economics). Unfortunately that's all I can do because the exchange rate would make any donations I can make look very stupid...

    • @arda8206
      @arda8206 3 года назад

      @@guneygpac6505 I think you are from Turkey :D

    • @jiangtaoshuai1188
      @jiangtaoshuai1188 2 года назад

      so you also yell BAM!! ?

  • @manuelagranda2932
    @manuelagranda2932 4 года назад +56

    With this video I finished the whole list. I am from Colombia and it is hard to pay to learn about these concepts, so I am very grateful for your videos, and now my mom hates me when I say Double Bam for nothing!! jajaja

    • @statquest
      @statquest  4 года назад +6

      That's awesome! I'm glad the videos are helpful. :)

  •  Год назад +5

    From Vietnam, and hats off to your talent in explaining complicated things in a way that I feel so comfortable to continue watching.

  • @alihaghighat1244
    @alihaghighat1244 Год назад +4

    When we use fit(X_train,y_train) and predict(X_test) without watching Josh's videos or studying the underlying concepts, nothing happens even if we get good results.
    Thank you Josh for simplifying these hard pieces of stuff for us and creating these perfect numerical examples. Please keep up this great work.

  • @ramyasreddy4357
    @ramyasreddy4357 12 дней назад +1

    Thank you Josh! You literally broke everything down into little details... Missed meeting you this time in India!

    • @statquest
      @statquest  11 дней назад

      Thank you! Maybe we can meet up next time.

  • @shaelanderchauhan1963
    @shaelanderchauhan1963 2 года назад +6

    Josh, On a scale of 5 you are a level 5 Teacher. I have learned so much from your videos. I owe so much to Andrew Ng and You. I will contribute to Patreon Once I get a Job. Thank you

  • @prathamsinghal5261
    @prathamsinghal5261 4 года назад +12

    Josh! You made machine learning a beautiful subject and finally I'm in love with these Super BAM videos.

  • @wucaptian1155
    @wucaptian1155 4 года назад +15

    You are a nice guy, absolutely! I can't wait for part 3. Although I have already learned XGBoost from the original paper, I can still get more interesting things from your video. Thank you :D

  • @wongkitlongmarcus9310
    @wongkitlongmarcus9310 3 года назад +3

    as a beginner of data science, I am super grateful for all of your tutorials. Helps a lot!

  • @saptarshisanyal4869
    @saptarshisanyal4869 2 года назад +2

    All the boosting and bagging algorithms are complicated. In universities, I have hardly seen any professor who can make these algorithms understandable like Joshua does. Hats off, man!!

  • @chelseagrinist
    @chelseagrinist 3 года назад +4

    Thank you so much for making Machine Learning this easy for us . Grateful for your content . Love from India

  • @yukeshdatascientist7999
    @yukeshdatascientist7999 3 года назад +1

    I have come across all the videos from gradient boosting till now, you clearly explain each and every step. Thanks for sharing the information with all. It helps a lot of people.

    • @statquest
      @statquest  3 года назад

      Glad it was helpful!

  • @seanmcalevey4566
    @seanmcalevey4566 4 года назад +3

    Yo fr these are the best data science/ML explanatory vids on the web. Great work, Josh!

    • @statquest
      @statquest  4 года назад

      Thank you very much! :)

  • @joshisaiah2054
    @joshisaiah2054 3 года назад +3

    Thanks Josh. You're a life saver and have made my Data Science transition a BAM experience. Thank You!

  • @changning2743
    @changning2743 4 года назад +1

    I must have watched almost every video at least three times during this pandemic. Thank you so much for your effort!

    • @statquest
      @statquest  4 года назад

      Wow!!! Thank you very much! :)

  • @amalsakr1381
    @amalsakr1381 4 года назад

    A million thanks, Josh. I can not wait to watch other videos about XGBoost, LightGBM, CatBoost and deep learning. Your videos are the best.

    • @statquest
      @statquest  4 года назад

      Part 3 on XGBoost should be out on Monday.

  • @madhur089
    @madhur089 3 года назад +1

    Josh you are saviour...thanks a ton for making these fantastic videos...your video lectures are simple and crystal clear! Plus I love the sounds you make in between :)

  • @hassaang
    @hassaang 3 года назад +1

    Bravo! Thanks for making life easy. Thanks and appreciation from Qatar.

    • @statquest
      @statquest  3 года назад

      Hello Qatar!! Thank you very much!

  • @lambdamax
    @lambdamax 4 года назад +2

    Thanks for boosting my confidence in understanding. There was this recent Kaggle tutorial that said the LightGBM model "usually" performs better than xgboost, but it didn't provide any context! I remember that xgboost was used as a gold standard-ish about 2-3 years ago (even CERN uses it if I'm not mistaken). Anyhoo, I hope I can keep up with all of this. I need to turn my boosters on.

    • @statquest
      @statquest  4 года назад +3

      I'm happy to boost your confidence! Part 3 will explain the math if you are interested in those details - they are not required - and Part 4 will describe a lot of optimizations that XGBoost uses to be efficient (making it easier to find good hyper-parameters).

  • @furqonarozikin7157
    @furqonarozikin7157 3 года назад +1

    thanks buddy, it's hard for me to know how xgboost works in classification, but this tutorial has explained it well

  • @parinitagupta6973
    @parinitagupta6973 4 года назад +2

    All the videos are awesome and this is THE BAMMEST way to learn about ML and predictive modelling. Can we also have some videos about time series and the underlying concepts. That would be TRIPLE TRIPLE BAM!!!

    • @statquest
      @statquest  4 года назад +1

      Thank you very much! :)

  • @lakhanfree317
    @lakhanfree317 4 года назад +1

    Finally, yay! Waited for this video for a long time, but it was worth the wait. Thanks for everything.

  • @maruthiprasad8184
    @maruthiprasad8184 11 месяцев назад +1

    hats off all my doubts clarified here, superb cooooooooooooool Big BAAAAAAAAMMMMMMMMMM!

  • @allen8376
    @allen8376 5 месяцев назад +1

    The little calculation noises give me life

    • @statquest
      @statquest  5 месяцев назад +1

      beep, boop, beep!

  • @nurdauletkemel8155
    @nurdauletkemel8155 3 года назад +1

    Wow, I just discovered this channel and will use it to prep for my interview BAM! But the interview is in 2 hours Small BAM :ccccccc

  • @jamemamjame
    @jamemamjame 2 года назад +1

    Ty very much, will buy your song by tomorrow morning from Thailand :)

  • @zzygyx9119
    @zzygyx9119 Год назад +1

    awesome explanation! I bought your book "The StatQuest Illustrated Guide to Machine Learning" even though I have understood all the concepts.

    • @statquest
      @statquest  Год назад

      Thank you so much!!! I really appreciate your support.

  • @lfalfa8460
    @lfalfa8460 Год назад +1

    Classification is not a vacation,
    it is not a sensation,
    but it's cooooool!
    🤣

  • @himanshumangoli6708
    @himanshumangoli6708 3 года назад +1

    I wish you had been my teacher in my college days. Then, instead of just watching your videos, I would be able to create them.

  • @thebearguym
    @thebearguym 11 месяцев назад +1

    Enjoyed it! Cool explanation

  • @andrewwilliam2209
    @andrewwilliam2209 4 года назад +1

    Hey Josh, you might not see this, but I really look up to you and your videos. I got sucked into machine learning last month, and you have made the journey easier thus far. If I get an internship or something in the following months, I'll be sure to donate to you and hit you up on your social media to thank you :). Hopefully one day I will have enough knowledge to share it widely like you.
    Cheers

    • @statquest
      @statquest  4 года назад

      Thank you very much! Good luck with your studies! :)

    • @andrewwilliam2209
      @andrewwilliam2209 4 года назад +1

      @@statquest thanks Josh, will definitely update you in a year or two about the progress I've made😀

    • @statquest
      @statquest  4 года назад

      Bam!

  • @mehdi5753
    @mehdi5753 4 года назад +4

    Thanx for this simplification, can you do the same thing for LGBM and CatBoost?

  • @manojbhardwaj27
    @manojbhardwaj27 4 года назад +1

    @Josh Starmer: I would like to know about the PRUNING concept in XGB.
    Are Gamma and Cover used for Pre-Pruning or Post-Pruning? In sklearn, we generally use Pre-Pruning, which makes more sense to me.
    However, from your tutorial it seems like we are doing Post-Pruning (after the full tree is built).
    Can you please specify with a reason?

    • @statquest
      @statquest  4 года назад +1

      These videos on XGBoost describe how XGBoost was designed from the ground up. Thus, the reason for anything in these videos is "that's the way they designed XGBoost."

  • @Kevin7896bn
    @Kevin7896bn 4 года назад +13

    Hit that like button before watching it.

  • @dc33333
    @dc33333 2 года назад +1

    The music is fantastic.

  • @mrcharm767
    @mrcharm767 Год назад +1

    concepts going straight to my head as if u shot arrows bam!!!!!

  • @abylayamanbayev8403
    @abylayamanbayev8403 2 года назад

    Thank you very much professor! I would love to see your explanations of statistical learning theory covering the following topics: concentration inequalities, Rademacher complexity and so on

    • @statquest
      @statquest  2 года назад

      I'll keep that in mind.

  • @LucasPeitton
    @LucasPeitton 5 месяцев назад

    These videos are being truly helpful. Many thanks for sharing them! I do have a question RE XGBoost usage context. You mentioned that XGB is designed for large, complicated datasets; does this mean that it performs poorly with smaller datasets? Thanks in advance

    • @statquest
      @statquest  5 месяцев назад

      I'm not sure - I just know that it has tons of optimizations for large datasets. To learn more about them, see: ruclips.net/video/oRrKeUCEbq8/видео.html

  • @yusufbalci4935
    @yusufbalci4935 4 года назад +1

    Very well explained!! Awesome..

  • @khaikit1232
    @khaikit1232 Год назад

    Hi Josh,
    At 19:20, it is written that:
    log(odds) Prediction = 0 + (0.3 x -2) = -0.6
    However I was just wondering since the tree is predicting the residuals, isn't the output of the XGBoost tree a probability? So shouldn't we convert the output from probabilities to log(odds) before we add it to the initial guess of 0?

    • @statquest
      @statquest  Год назад +1

      The tree predicts residuals, but the output values from the leaves are not residuals; instead, they are calculated as shown at 14:58. Now, to be honest, I have no idea why that particular formula results in a log(odds), but it must, because that is what both XGBoost and Gradient Boost do, and neither of them do anything else before calculating the final log odds.
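
      A small Python sketch of how a leaf's output value feeds into the new prediction, using the learning rate 0.3 and leaf output -2 from the numbers at 19:20; the conversion back to a probability uses the standard logistic function:

        import math

        initial_log_odds = 0.0     # initial probability 0.5 corresponds to log(odds) = 0
        eta = 0.3                  # learning rate
        leaf_output = -2.0         # output value of the leaf the sample lands in

        new_log_odds = initial_log_odds + eta * leaf_output          # 0 + 0.3 * -2 = -0.6
        new_prob = math.exp(new_log_odds) / (1 + math.exp(new_log_odds))
        print(new_log_odds, round(new_prob, 2))                      # -0.6 0.35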

  • @ankurbhattacharjee3912
    @ankurbhattacharjee3912 3 года назад +1

    I have a question: for the initial predicted output we have taken 0.5, but this is a classification problem, so why did we choose 0.5 as the default value? I mean, why couldn't the initial predicted value have been any other value, say 1 or 0? Probably my question seems stupid, apologies in advance..

    • @statquest
      @statquest  3 года назад +1

      You can set the initial predicted value to be whatever you want, but, by default, it is 0.5. To be honest, this seems fairly reasonable for classification (since the goal is to have probabilities between 0 and 1 and 0.5 is halfway between them). However, it seems totally crazy for regression, but that's the way it is and the guy that made XGBoost is totally fine with it.
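
      For reference, the xgboost Python package exposes this initial prediction as the base_score parameter; a minimal sketch, assuming the scikit-learn-style wrapper and placeholder training data:

        from xgboost import XGBClassifier

        # base_score is the initial prediction for every sample (default 0.5);
        # it can be changed, e.g. to the observed positive-class frequency.
        model = XGBClassifier(base_score=0.5, n_estimators=100, learning_rate=0.3)
        # model.fit(X_train, y_train)   # X_train / y_train are placeholders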

  • @muralik98
    @muralik98 10 месяцев назад +1

    Rule No 1 before watching statquest video.
    Like and then click on play button

  • @globamia12
    @globamia12 4 года назад +1

    Your videos are so funny and smart! Thank you

  • @muralikrishna9499
    @muralikrishna9499 4 года назад +5

    After a long time..... BAMMM!

  • @gabrielpadilha8638
    @gabrielpadilha8638 2 года назад

    Josh, good morning, let me ask you a question. You said that we can set the initial probability to a value different from 0.5 if, for example, the training dataset is unbalanced. Does that mean that xgboost can deal with unbalanced datasets without needing to balance the training dataset before submitting it to the model?

    • @statquest
      @statquest  2 года назад

      I'm not really sure. It probably depends on how imbalanced the data are.

  • @omreekapon2465
    @omreekapon2465 Год назад

    Great explanation like always! Just a small question: at 10:12 you mentioned that the cover is defined as the similarity score minus lambda, but in the equation it looks like it is plus, so what is the right answer? Thanks for such amazing explanations!

    • @statquest
      @statquest  Год назад

      The denominator = [Sum(previous * (1 - previous))] + lambda. Cover = Sum(previous * (1 - previous)). Thus, Cover = denominator - lambda = [Sum(previous * (1 - previous))] + lambda - lambda = Sum(previous * (1 - previous)).
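
      A small numeric sketch of that relationship, with made-up previous probabilities and residuals:

        prev_probs = [0.5, 0.5, 0.5, 0.5]   # hypothetical previous predicted probabilities in one leaf
        residuals = [0.5, -0.5, 0.5, 0.5]   # hypothetical residuals in the same leaf
        lam = 1.0                           # lambda

        cover = sum(p * (1 - p) for p in prev_probs)         # 1.0
        denominator = cover + lam                            # 2.0
        similarity = sum(residuals) ** 2 / denominator       # 1.0**2 / 2.0 = 0.5
        print(cover, denominator, similarity)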

  • @코드벅스
    @코드벅스 4 года назад

    Thank you for the marvelous video!
    I have some questions regarding what's explained:
    1. Can the number of trees we make be controlled by what we call an 'epoch' in ML?
    2. When the model runs through epochs, is there any chance some epochs go the other way from the answer value?
    - I understood that by setting the learning rate too high, the new prediction will bypass the answer, causing the learning procedure to fluctuate a lot.
    3. The ways we can slow down the learning speed, I think, are
    1) larger cover, 2) larger gamma, 3) larger lambda
    Is that right? Or are there more ways to control the speed?
    Always thanks for all the efforts you made on the materials!

    • @statquest
      @statquest  4 года назад

      1) I think you can use that terminology if you want, but I don't know of anyone else who does. In xgboost, the parameter you set for the number of trees is "num_boost_round", and generally speaking, building trees is called "boosting".
      2) I don't know.
      3) Although not mentioned in the original paper, XGBoost contains a few other ways to slow down learning (add regularization). For full details, see the manual: xgboost.readthedocs.io/en/latest/parameter.html
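
      A hedged sketch of where these knobs sit in the xgboost Python package; the parameter names follow the linked manual, the values are arbitrary, and the training data is a placeholder:

        import xgboost as xgb

        params = {
            "objective": "binary:logistic",
            "eta": 0.3,               # learning rate
            "gamma": 3.0,             # minimum gain needed to keep a split
            "lambda": 1.0,            # L2 regularization on leaf output values
            "min_child_weight": 1.0,  # minimum cover allowed in a leaf
            "max_depth": 6,
        }
        # dtrain = xgb.DMatrix(X_train, label=y_train)            # placeholder training data
        # booster = xgb.train(params, dtrain, num_boost_round=100)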

    • @코드벅스
      @코드벅스 4 года назад +1

      @@statquest
      Thanks for kind reply! :)

  • @santoshkumar-bz9mg
    @santoshkumar-bz9mg 4 года назад +2

    U r awesome
    Love from INDIA

  • @vijaykrish64
    @vijaykrish64 4 года назад +1

    Must-watch videos. Just a small question: why do we need both cover and gamma for pruning?

    • @statquest
      @statquest  4 года назад +2

      Although gamma is thoroughly discussed in the original manuscript, cover is never mentioned. So my best guess is that while both cover and gamma do similar things, there are still differences in how they do them and the types of leaves they prune. For example, you could have a leaf with a lot of residuals in it (and thus, a relatively high "cover", so cover would not prune), but if they are not very similar, you will have a low similarity score and a low gain (so gamma would prune).
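
      A toy sketch contrasting those two pruning checks; the gain values, probabilities, min_cover threshold, and helper functions are all hypothetical:

        def leaf_cover(prev_probs):
            # cover for classification: sum of previous_prob * (1 - previous_prob)
            return sum(p * (1 - p) for p in prev_probs)

        def keep_split(gain, gamma, left_probs, right_probs, min_cover=1.0):
            if gain - gamma < 0:                                   # gamma check
                return False
            if min(leaf_cover(left_probs), leaf_cover(right_probs)) < min_cover:
                return False                                       # cover check
            return True

        # plenty of cover but low gain -> gamma prunes; high gain but tiny leaf -> cover prunes
        print(keep_split(gain=1.0, gamma=3.0, left_probs=[0.5] * 4, right_probs=[0.5] * 4))  # False
        print(keep_split(gain=4.0, gamma=3.0, left_probs=[0.5] * 4, right_probs=[0.5]))      # False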

  • @paligonshik6205
    @paligonshik6205 4 года назад +1

    Thanks a lot, keep doing an awesome job

    • @statquest
      @statquest  4 года назад

      Thank you very much! :)

  • @jingo6221
    @jingo6221 4 года назад +1

    life saver, cannot thank more

    • @statquest
      @statquest  4 года назад

      Thanks! Part 3 should be out soon.

  • @shalinirajanna4281
    @shalinirajanna4281 4 года назад +1

    Thank you for such good videos. I see that XGBoost has both alpha and lambda parameters. You've explained lambda; where would alpha fit in?

    • @statquest
      @statquest  4 года назад +2

      Alpha was added after the original publication, so I didn't cover it. Presumably alpha is just like lambda and makes the trees shorter and shrinks the output values. And presumably it can shrink output values all the way to 0, just like lasso regression (and presumably lambda can not, just like ridge regression).
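
      In the scikit-learn-style wrapper these appear as reg_alpha (L1) and reg_lambda (L2); a minimal sketch with arbitrary values:

        from xgboost import XGBClassifier

        # reg_lambda is the lambda from these videos (L2 penalty on leaf outputs);
        # reg_alpha adds an L1 penalty, analogous to lasso vs ridge regression.
        model = XGBClassifier(reg_lambda=1.0, reg_alpha=0.5)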

  • @yuchenzhao6411
    @yuchenzhao6411 4 года назад +1

    8:04 If two thresholds have the same 'Gain', why would we pick "Dosage < 15" rather than "Dosage < 5"? Does it matter for a larger dataset?
    13:23 Since in part 1 we set gamma=130 and in part 2 we set gamma=3, I'm wondering how we choose the value for gamma?

    • @statquest
      @statquest  4 года назад +3

      1) If 2 or more thresholds have the same best Gain, then just pick one, it doesn't matter. Since this is a greedy algorithm it does not look ahead to see if one of those choices is better in the long run.
      2) When we use XGBoost for regression, the residuals can be relatively large, so gamma may need to be relatively large. When we use XGBoost for classification, the residuals are relatively small, so gamma may need to be relatively small. You can always just build a few trees to get a sense of what values for gamma make sense for pruning.

    • @yuchenzhao6411
      @yuchenzhao6411 4 года назад

      @@statquest thank you very much Josh! Really enjoy your video!

  • @dhruvbishnoi8840
    @dhruvbishnoi8840 4 года назад +1

    Hi Josh,
    What happens if, after splitting the node, one leaf has cover lower than the set threshold and the other leaf has cover greater than the set threshold?
    Splitting would not be performed, right?

  • @henkhbit5748
    @henkhbit5748 3 года назад

    Love this series on xgboost. I read your answer about finding the best gamma value using cross validation. According to this video, xgboost does not create new leaves when the gain < 0. When is extra pruning necessary? I suppose pruning can be done using lambda and additionally use gamma to prevent overfitting...?

    • @statquest
      @statquest  3 года назад +1

      Trees, in general, are notorious for over fitting the data. Random chance can easily result in a gain < 0 and adding an extra parameter for pruning will help prevent over fitting. For more details about the need for pruning trees in general, see: ruclips.net/video/D0efHEJsfHo/видео.html

  • @anggipermanaharianja6122
    @anggipermanaharianja6122 3 года назад +1

    Awesome vid!

  • @osmanparlak1756
    @osmanparlak1756 3 года назад

    Thanks a lot Josh for making ML algorithms understandable. I am learning a lot from your videos. Just one question on the order of splitting when creating the trees: I think it doesn't matter whether you start from the last two or the first two, since we check all of them.

  • @francescoperia9768
    @francescoperia9768 Год назад

    Hi Josh, I cannot understand why at minute 08:15, after you created the first split (Dosage < 15) and the consequent similarity gain, you don't update the predicted probabilities of the residuals by using the formula e^log(ODDS) / (1 + e^log(ODDS)). In the video it seems that the "previous predicted probability" remains always the initial 0.5, so I'm asking if it should be changed after the first split instead. Thank you in advance

    • @statquest
      @statquest  Год назад +1

      The predicted probabilities should not be changed until after we have created the entire tree and calculated the output values for the leaves.

    • @francescoperia9768
      @francescoperia9768 Год назад +1

      Oh my mistake, you are totally right.. thank you very much. So basically like a standard Gradient Boosting Classifier I build the whole weak learner tree and once I obtain the output leaf values (which are log(ODDS) values calculated with the same formula as the standard GB Classifier apart from lambda) I compute the new prediction starting from the previous one. Then I convert the new log(ODDS) prediction into probability using the logistic function.

  • @pranavkolapkar645
    @pranavkolapkar645 Месяц назад

    I have a question regarding the XGBoost classifier and random forest / decision tree classifiers.
    After label/ordinal encoding, the categorical variables are changed to 0, 1, 2, etc. and are stored as an integer datatype. Will converting the datatype from integer to category or object change the output? The thought behind asking this is that if we keep the datatype as integer, then the splitting condition can be a decimal value like 1.5 or 2.5, as opposed to 1 or 2 when the datatype is category.

    • @statquest
      @statquest  Месяц назад

      I talk about technical details of using XGBoost in this video: ruclips.net/video/GrJP9FLV3FE/видео.html

  • @ocamlmail
    @ocamlmail 2 года назад

    14:24 -- shouldn't it be higher values for gamma in order to prune? With a lower value for gamma, Gain - gamma tends to be positive, hence no pruning.

    • @statquest
      @statquest  2 года назад

      Oops!! I should have said "larger" instead of "lower".

  • @jamiescotcher1587
    @jamiescotcher1587 3 года назад

    Hi Josh,
    Specifically, the gradient of the training loss is used to predict the target variables for each successive tree, right? Therefore, does a steeper gradient imply it is going to try harder to correctly predict a specific sample that has been mis-classified, or does it mean it will work harder to predict any member of a certain true class?
    Thanks!

    • @statquest
      @statquest  3 года назад

      For details on how XGBoost treats misclassified samples and how, exactly, it tries harder to correctly classify them, see ruclips.net/video/oRrKeUCEbq8/видео.html

  • @raj345to
    @raj345to 3 года назад +1

    which video-making tool do you use ..... it's so cool.

    • @statquest
      @statquest  3 года назад +1

      I answer these questions in this video: ruclips.net/video/crLXJG-EAhk/видео.html

  • @amirsayyed2158
    @amirsayyed2158 4 года назад +1

    Where can I get your Xgboost slides???

    • @statquest
      @statquest  4 года назад +1

      I'll try to make a study guide soon.

  • @EvanZamir
    @EvanZamir 4 года назад +5

    You really should write a book.

  • @junghyunlee781
    @junghyunlee781 2 года назад

    Thanks for the video. 12:58 So you mean 'cover' is equal to the hyperparameter 'min_child_weight'??

  • @LL-hj8yh
    @LL-hj8yh Год назад +1

    Hey Josh, how does the similarity score here relate to the gini/entropy we use for XGBoost's classification?

    • @statquest
      @statquest  Год назад +1

      I'm not sure I understand your question. Are you wanting to compare the similarity score for XGBoost to how classification is done (with GINI or entropy) for a normal decision tree? If so, they are not related. This similarity score is derived from the loss function, whereas GINI and entropy are just used because they work. For details on the XGBoost similarity score, see: ruclips.net/video/ZVFeW798-2I/видео.htmlsi=iv2nJpFE41ijE3zo

    • @LL-hj8yh
      @LL-hj8yh Год назад

      @@statquest thanks Josh! I was earlier under the impression that we need to specify gini or entropy in an xgboost classifier, which seems incorrect, as they are only for decision trees, not the XGBoost classifier. Yet is it true that the similarity score and gini/entropy serve the same purpose, that is, to calculate the similarity/purity and therefore determine the split?
      Thanks again and congrats on 1M subscribers, that says a lot!

    • @statquest
      @statquest  Год назад +1

      @@LL-hj8yh Yes, the similarity score and GINI serve the same purpose, but we can't use them (Gini or entropy) here since we are fitting the tree to continuous values (even for classification). Thanks!

  • @pierrebedu
    @pierrebedu Год назад

    great explanations! and how does this generalize to multiclass classification? Thanks (one-vs-all classification repeated n_classes times?)

    • @statquest
      @statquest  Год назад +1

      That's one way to do it. I believe that you can also swap out the loss function and use cross entropy.

  • @priyabratbishwal5149
    @priyabratbishwal5149 3 года назад

    Hi Josh,
    How do we make a tree with multiple predictors using XGBoost? Here you showed only a single variable called Dosage. How do we do it for multiple variables?
    Thanks

    • @statquest
      @statquest  3 года назад

      For each variable in your dataset, you go through the process shown here. You then select the variable that results in the largest Gain.
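
      A rough sketch of that search over several variables, using the Gain idea from the video; the similarity() helper, the row format, and the data layout are assumptions for illustration:

        def similarity(residuals, prev_probs, lam=1.0):
            # regularized similarity score for classification
            return sum(residuals) ** 2 / (sum(p * (1 - p) for p in prev_probs) + lam)

        def best_split(rows, features, lam=1.0):
            # rows: list of dicts holding feature values plus "residual" and "prev_prob"
            root = similarity([r["residual"] for r in rows],
                              [r["prev_prob"] for r in rows], lam)
            best = None
            for feat in features:
                values = sorted({r[feat] for r in rows})
                for lo, hi in zip(values, values[1:]):
                    thr = (lo + hi) / 2          # candidate threshold between adjacent values
                    left = [r for r in rows if r[feat] < thr]
                    right = [r for r in rows if r[feat] >= thr]
                    gain = (similarity([r["residual"] for r in left],
                                       [r["prev_prob"] for r in left], lam)
                            + similarity([r["residual"] for r in right],
                                         [r["prev_prob"] for r in right], lam)
                            - root)
                    if best is None or gain > best[0]:
                        best = (gain, feat, thr)
            return best                          # (largest Gain, variable, threshold)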

  • @ntnydv
    @ntnydv Год назад +1

    Thanks!

    • @statquest
      @statquest  Год назад

      HOORAY!!! Thank you so much for supporting StatQuest!!! BAM! :)

  • @kamaldeep8257
    @kamaldeep8257 4 года назад

    Hi Josh, thank you for such a great explanation. Just want to clarify one thing, i.e., does this cover concept apply specifically to xgboost trees, or is it a normal method for all tree-based algorithms? Every tree-based algorithm has this min_child_weight parameter in the sklearn library.

    • @statquest
      @statquest  4 года назад

      Every tree-based method has a way of filtering out leaves that do not have enough samples going to them; however, the way XGBoost does it is unique.

  • @karangupta6402
    @karangupta6402 3 года назад +2

    Awesome :)

  • @aneesarom
    @aneesarom Год назад

    5:47 If we consider the root node as dosage < 15, then the similarity will not be 0, right? Since it has 3 elements less than 15.

    • @statquest
      @statquest  Год назад +1

      No, the similarity of the root node stays the same, regardless of the threshold we use, because it still contains all of the residuals. However, the similarities in the leaf nodes will change.

  • @mohammedgodil4166
    @mohammedgodil4166 2 года назад

    When predicting a value with an existing model, why did you convert the initial prediction 0.5 into log of odds? And when you did gradient boost for classification, you did not convert that initial prediction to log of odds. This is confusing me, please help.

    • @statquest
      @statquest  2 года назад

      In both cases (XGBoost and Gradient Boost) we use log odds for the initial prediction (see: ruclips.net/video/jxuNLH5dXCs/видео.html ). The reason we use log(odds) is that its range is from -infinity to +infinity, which means we can add as many trees as we want without having to worry about going above or below some maximum or minimum value. In contrast, probabilities only go from 0 to 1, so we would have to check each time we made a tree to make sure we don't go over 1 or below 0.

  • @Brandy131991
    @Brandy131991 2 года назад

    Hi Josh, thank you for your amazing videos. They are really helping me a lot.
    One thing I still don't get is how xgboost predicts multiple classes (e.g. "most likely drug to use" with drugs 1, 2 and 3)?
    Does this work like in multinomial logistic regression, where each class is checked against a baseline class? Or is it something like a random forest when using xgboost?

    • @statquest
      @statquest  2 года назад +1

      When there are multiple classes, XGBoost uses the softmax objective function. I explain softmax in my series on Neural Networks: ruclips.net/video/CqOfi41LfDw/видео.html
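
      For the multi-class case, the xgboost package provides the 'multi:softprob' / 'multi:softmax' objectives; a minimal sketch with placeholder data:

        import xgboost as xgb

        params = {
            "objective": "multi:softprob",  # softmax over classes, returns per-class probabilities
            "num_class": 3,                 # e.g. drugs 1, 2 and 3
            "eta": 0.3,
        }
        # dtrain = xgb.DMatrix(X_train, label=y_train)             # labels 0, 1, 2
        # booster = xgb.train(params, dtrain, num_boost_round=100)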

  • @yulinliu850
    @yulinliu850 4 года назад +1

    Awesome bang. Happy 2020

  • @hubert1990s
    @hubert1990s 4 года назад +1

    While cover can make a leaf insufficient to stay in the tree, is that also kind of pruning?

    • @statquest
      @statquest  4 года назад +1

      That is correct. Cover is a way to enforce pruning and not over fitting the training data.

  • @siddhantk007
    @siddhantk007 4 года назад

    You have used an example where x (variable/feature) is continuous. How are the unique regression trees made when x is discrete or ranked? Like the candidate selection using gain and similarity scores?

    • @statquest
      @statquest  4 года назад +1

      When the feature is discrete or ranked, we use the exact same method described in this video. This is because we are fitting the tree to the residuals, which will still be continuous, regardless of whether the feature is discrete or continuous.

    • @siddhantk007
      @siddhantk007 4 года назад +1

      @@statquest thanks for the quick response ! your videos are simply amazing...

  • @dikshantgupta5539
    @dikshantgupta5539 3 года назад

    For pruning the tree, is gain - gamma the same as the cover value? As you remove the leaf when you calculate the cover value and also when you calculate gain - gamma.

    • @statquest
      @statquest  3 года назад

      For details on cover (and everything else in XGBoost), see: ruclips.net/video/ZVFeW798-2I/видео.html

  • @gahbor
    @gahbor Год назад

    If my dataset has a binary target variable to predict, and most features are also binary, would it make sense to go with min_child_weight=0 ?

    • @statquest
      @statquest  Год назад

      Probably not since that will result in the trees overfitting your data.

  • @rrrprogram8667
    @rrrprogram8667 4 года назад +1

    Hit and like first... Then later i am gonna watch video... MEGAAAA BAMMMM

  • @yjj.7673
    @yjj.7673 4 года назад +1

    That's great. BTW is there a video that only contains songs? ;)

  • @itisakash
    @itisakash 4 года назад +1

    Hey thanks for the videos. Can't wait for the remaining parts in the XGboost series. When are you gonna release the next part?

    • @statquest
      @statquest  4 года назад +1

      Since you are a member, you'll get early access to part 3 this coming monday (January 27). Part 4 will be available for early access 2 weeks later.

  • @61_shivangbhardwaj46
    @61_shivangbhardwaj46 3 года назад +2

    Thnx sir😊

  • @zahrahsharif8431
    @zahrahsharif8431 4 года назад

    Hi Josh, if there were outliers in the data, say dosage 1000, this wouldn't affect how the tree makes its split, therefore outliers do not affect it? Aren't tree methods robust to outliers?

    • @statquest
      @statquest  4 года назад

      Trees can be less sensitive to outliers than other methods, however, it's always a good idea to remove them first.

  • @장재은-b6i
    @장재은-b6i 4 года назад +2

    your lecture is triple bamm!
    do you have any plan to teach deep learning?

    • @statquest
      @statquest  4 года назад +2

      As soon as I finish with XGBoost.

  • @tudormanoleasa9439
    @tudormanoleasa9439 3 года назад

    What do you do if the cover of a left leaf is less than 1, but the cover of a right leaf is greater than 1? Do you only remove the left leaf or the entire subtree made of root, left leaf, right leaf?

    • @statquest
      @statquest  3 года назад

      If the cover value for one of the leaves is too small, we remove both leaves.

  • @nishidutta3484
    @nishidutta3484 4 года назад

    How is dosage selected as the first split and not any other variable? Is it on the basis of gini impurity?

    • @statquest
      @statquest  4 года назад

      Dosage is selected because it is the only variable. If there were more variables, we would pick the variable (and associated threshold) that gave us the largest value for Gain.

  • @hubert1990s
    @hubert1990s 4 года назад +2

    Can we apply gini instead of gain in XGBoost?

    • @statquest
      @statquest  4 года назад +2

      This is an interesting question. In part 3 (which will be out in a few weeks), you'll see how the similarity scores and regularization are all derived from a single formula, and I'm not sure how it would work if we swapped in GINI. So check back in a few weeks and watch the next video in the series; then the reason GINI is not used may make more sense.

  • @asabhinavrock
    @asabhinavrock 4 года назад

    Hey Josh. Your videos are really informative and easy to understand. I have joined your channel today and look forward to more exciting content coming up. I was also eager to see your third video in the XGBoost Series. When will that be live?

    • @statquest
      @statquest  4 года назад

      If you go to the community page, you may be able to find a link to part 3 since you are a channel member. Here's the link to the community page: ruclips.net/user/joshstarmercommunity

    • @asabhinavrock
      @asabhinavrock 4 года назад +1

      @@statquest Finally. Made my day!!!

    • @statquest
      @statquest  4 года назад +1

      @@asabhinavrock Awesome!!! Thank you very much.

  • @ahmedabuali6768
    @ahmedabuali6768 3 года назад

    Please could you do more videos, I am in love with your lectures. I want a video on how we use the negative binomial in estimating the sample size.

    • @statquest
      @statquest  3 года назад

      I'll keep that in mind.

    • @ahmedabuali6768
      @ahmedabuali6768 3 года назад

      @@statquest Do you have lecture notes for these videos? I started downloading the videos and preparing my slides. If you have lecture notes for each video, it would help me document these as a book for myself.

    • @statquest
      @statquest  3 года назад +1

      @@ahmedabuali6768 I have PDF study guides for some of my videos here: statquest.org/studyguides/ and I am writing a book that I hope will come out next year.

    • @ahmedabuali6768
      @ahmedabuali6768 3 года назад

      @@statquest That is good. Can I pay for all at once? It will take time for me.
      Please, I see you forgot to talk about Multinet by Nir Friedman, it is very important.

    • @statquest
      @statquest  3 года назад +1

      @@ahmedabuali6768 I'm not familiar with Multinet, so I can't say if it is important or not. And you are more than welcome to buy all of the study guides! That would be awesome. Thanks for your support.

  • @FF4546
    @FF4546 2 года назад

    Hello Josh, thank you for your video.
    How would this work with more than one variable? Does each variable end up with only one threshold?
    Thank you!

    • @statquest
      @statquest  2 года назад

      You test every variable to find the optimal thresholds and use the one that does the best. However, XGBoost has some optimizations explained here: ruclips.net/video/oRrKeUCEbq8/видео.html

  • @kcAndyyyyy
    @kcAndyyyyy 5 месяцев назад

    I'm kind of confused about what you mean by saying "If we prune, then we will subtract gamma for the next Gain value" at 23:58

    • @statquest
      @statquest  5 месяцев назад +1

      I'm just saying that when we prune a node, then we'll test the next one up the tree to determine if we should prune that one as well. And we keep doing that until we get to a node that we should not prune.

    • @kcAndyyyyy
      @kcAndyyyyy 5 месяцев назад

      @@statquest Thanks for replying! So we are using the same gamma for every node pruning right?

    • @statquest
      @statquest  5 месяцев назад +1

      @@kcAndyyyyy yep

  • @KUNALVERMAResScholarDeptofMath
    @KUNALVERMAResScholarDeptofMath 3 года назад

    Hi Josh, Why are we taking the last two values at 6:04?

    • @statquest
      @statquest  3 года назад

      I'm not sure I understand your question. At 6:04, we put 3 residuals in the leaf on the left and 1 residual in the leaf on the right.

  • @zeus1082
    @zeus1082 Год назад

    Thank you for the explanation. Why are we using different decision nodes for each new tree?
    Entropy is calculated independently of the residuals, right?

    • @statquest
      @statquest  Год назад

      I'm not sure I understand your question, however, if you want to learn about the underlying details (i.e. see more of the math) of how XGBoost works, see: ruclips.net/video/ZVFeW798-2I/видео.html

    • @zeus1082
      @zeus1082 Год назад

      @@statquest for each new tree, the root node is different in the video, so I'm confused why the root nodes are different since we are using the same gini or entropy to decide the root node.

    • @statquest
      @statquest  Год назад

      @@zeus1082 Every time we build a tree, we update the residuals. Different residuals = different trees.

    • @zeus1082
      @zeus1082 Год назад

      @@statquest Aren't we deciding the split nodes based on gini? Please refer me to a video/timestamp where we decide the split node based on the residuals.

    • @statquest
      @statquest  Год назад

      @@zeus1082 See 3:20. That said, I appreciate your interest in these topics, but it would help me if you could watch the videos, all of them (including the 4 gradient boost videos), in order, and maybe watch them a few times before asking more questions. It is possible that my videos are not the best learning tool for you, so I would also consider seeing how other people teach this topic, or consider reading the original manuscript.

  • @deana00
    @deana00 Год назад

    Hi, thanks for your great video! But I have a question here..
    Why is the way to get the initial prediction in xgboost different from gradient boosting? In gradient boosting, you were using log odds, but in xgboost you set it to 0.5; am I missing something?

    • @statquest
      @statquest  Год назад

      Gradient Boosting and XGBoost start differently. In gradient boosting, we use the training data to make an initial estimate (log(odds) or probability) for the initial prediction. In contrast, with XGBoost, we just set the initial prediction to 0.5.

    • @deana00
      @deana00 Год назад

      @@statquest I'm sorry, but I still don't get it, why is it so?

    • @statquest
      @statquest  Год назад

      @@deana00 Because that is how they define it in the XGBoost manuscript.

    • @deana00
      @deana00 Год назад

      @@statquest ahh, thank you for your answer. do you plan to make a video about lightgbm? or histogram-based algorithm?

    • @statquest
      @statquest  Год назад

      @@deana00 At some point I'd like to make some videos about lightGBM. It's just a matter of finding the time to do them.

  • @karangupta6402
    @karangupta6402 3 года назад

    Hi Josh:
    Would it be possible to make a video on the scale_pos_weight feature of XGBoost and how it can help in solving imbalanced dataset problems?

    • @statquest
      @statquest  3 года назад +1

      I'll keep that in mind.

  • @mehuljain4920
    @mehuljain4920 4 года назад

    Hi,
    Hope it's not too late to get a reply on this video from you.
    I just wanted to know how the tree will grow when there are more variables. Like in a decision tree, it takes 1 variable at the root followed by other variables in other nodes.
    How will xgb build its tree?
    Thanks

    • @statquest
      @statquest  4 года назад

      XGBoost builds its trees just like any other tree algorithm, although it uses a different criteria for selecting the best way to split the data.

  • @dylangaldes7044
    @dylangaldes7044 3 года назад

    I've been researching how to use XGBoost for image classification; unfortunately, I did not find a lot of research papers on this. Is it a good algorithm for this job? The classification has multiple different classes that are either various types of diseases on plant leaves or a healthy leaf. Thank you

    • @statquest
      @statquest  3 года назад

      I've never done that myself, but I've heard of people who have and been successful.

  • @rafsunahmad4855
    @rafsunahmad4855 3 года назад

    Is knowing the math behind the algorithm a must, or is just knowing how the algorithm works enough? Please please please give a reply.

    • @statquest
      @statquest  3 года назад +1

      It depends on what your goal is. If you only want to use the algorithm, then knowing the math is optional, and instead, you only need to know the main ideas of what it does and how it works. If you want to create your own algorithms, then you should learn the math.

    • @rafsunahmad4855
      @rafsunahmad4855 3 года назад

      In data science interviews, do the interviewers ask about the math behind the algorithms?

    • @statquest
      @statquest  3 года назад

      @@rafsunahmad4855 It depends, but I think that they often do.