The Gradient Boosted Algorithm Explained!

  • Published: 20 Sep 2024
  • In the gradient-boosted trees algorithm, we iterate the following (see the sketch after this list):
    - We train a tree on the errors made at the previous iteration.
    - We add the tree to the ensemble, and we predict with the new model.
    - We compute the errors made at this iteration.
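
  Below is a minimal sketch of that loop for squared-error regression. The synthetic data, the hyperparameters (n_rounds, learning_rate, max_depth), and the use of scikit-learn's DecisionTreeRegressor are illustrative assumptions, not the video's code:

  ```python
  # Gradient-boosted trees, bare-bones version for squared error:
  # each round fits a small tree to the current residuals and adds
  # a damped version of its predictions to the ensemble.
  import numpy as np
  from sklearn.tree import DecisionTreeRegressor

  rng = np.random.default_rng(0)
  X = rng.uniform(-3, 3, size=(200, 1))
  y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

  n_rounds = 50
  learning_rate = 0.1
  trees = []

  prediction = np.zeros_like(y)           # start from a constant (zero) model
  for _ in range(n_rounds):
      residuals = y - prediction          # errors made at the previous iteration
      tree = DecisionTreeRegressor(max_depth=2)
      tree.fit(X, residuals)              # train a tree on those errors
      trees.append(tree)                  # add the tree to the ensemble
      prediction += learning_rate * tree.predict(X)  # predict with the new model

  print("final training MSE:", np.mean((y - prediction) ** 2))
  ```

  For squared error, the residual y - F(x) is exactly the negative gradient of the loss, which is why fitting trees to residuals is a special case of gradient boosting.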

Comments • 8

  • @math_in_cantonese · 3 months ago · +1

    Thanks, I forgot some details about the gradient-boosted algorithm and I was too lazy to look them up.

  • @py2992 · 3 months ago · +1

    Thank you for this video!

  • @SoimulPatriei · 3 months ago

    Very good and intuitive explanation of the algorithm. Thank you!

  • @astudent8885 · 3 months ago

    Do you mean that the new tree is predicting the error? In that case, wouldn't you subtract the new prediction from the previous predictions?

    • @TheMLTechLead · 3 months ago

      So we have an ensemble of trees F that predicts y, i.e. F(x) = \hat{y}. The error (residual) is e = y - F(x). We add a tree T trained to predict this error: T(x) = \hat{e} = e + \varepsilon = y - F(x) + \varepsilon, where \varepsilon is the new tree's own approximation error. Therefore F(x) + T(x) = y + \varepsilon, so we add (not subtract) the new tree's prediction.
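
      A tiny numeric sketch of this argument (the numbers are illustrative, not from the video):

      ```python
      # The tree predicts the residual e = y - F(x), so we ADD it to F(x).
      y = 10.0           # true target
      F_x = 7.0          # current ensemble prediction
      e = y - F_x        # residual the new tree is trained to predict: 3.0

      T_x = 2.8          # new tree's (imperfect) prediction of e
      eps = T_x - e      # the new tree's own approximation error: -0.2

      new_prediction = F_x + T_x
      print(new_prediction)   # 9.8 == y + eps, closer to y than 7.0 was
      ```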

  • @antoniopiemontese6078 · 3 months ago

    What's the difference between this algorithm and boosting as explained in Hastie & Tibshirani's book (first edition published in 2013)? It seems the same.

    • @TheMLTechLead · 3 months ago

      Why do you expect them to be different?

    • @TheMLTechLead · 3 months ago · +1

      Maybe you are asking what the difference is between boosting in general and gradient boosting in particular? To be fair, my video does not go deep enough to highlight the differences. In an upcoming video, I am going to go into the details of how XGBoost works, and I believe that should clear up the confusion.