AdaBoost Math with Example clearly explained Step by Step Machine Learning Ensembles

  • Published: 1 Oct 2024

Comments • 12

  • @sibusisomtiyane6913
    @sibusisomtiyane6913 3 years ago +3

    Well explained 👏 👌, though I'm not sure the for loop and recursive formula are correctly defined. If the initial t = 0, doesn't that mean you will have f(0) = f(0 - 1) + lambda*g(0)? What is f(0 - 1)?

    • @machinelearningmastery
      @machinelearningmastery  3 years ago +2

      Thank you for pointing that out. The loop starts at t = 1 and runs until t = T. I shall have this correction updated.
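
      The fix discussed above can be sketched as follows. This is a minimal illustration, not the video's code: with the loop starting at t = 1, the recursion f(t) = f(t-1) + lambda_t * g_t(x) only needs f(0), the empty ensemble, and never f(-1). The stumps and weights below are made-up examples.

      ```python
      def boosted_prediction(x, weak_learners, lambdas):
          """Evaluate f(T)(x) = sum_{t=1}^{T} lambda_t * g_t(x), with f(0) = 0."""
          f = 0.0  # f(0): the initial, empty ensemble
          for g_t, lam_t in zip(weak_learners, lambdas):  # t = 1 .. T
              f += lam_t * g_t(x)  # f(t) = f(t-1) + lambda_t * g_t(x)
          return f

      # Toy usage with two hypothetical decision stumps:
      stumps = [lambda x: 1.0 if x > 0 else -1.0,
                lambda x: 1.0 if x > 2 else -1.0]
      print(boosted_prediction(3.0, stumps, [0.7, 0.3]))  # approx. 1.0
      ```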

  • @hasanfahad5292
    @hasanfahad5292 3 years ago +4

    Best!

  • @meha1233
    @meha1233 8 months ago +1

    You should mention the normalization method. I struggled to figure out how to normalize those numbers.

    • @machinelearningmastery
      @machinelearningmastery  8 months ago

      Which normalization would you like to see? The weight computation in each iteration is normalized. Could you clarify?
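
      The per-iteration normalization mentioned in the reply can be sketched like this. It is an illustrative implementation, not taken from the video: after the usual multiplicative exp(±lambda) update, each weight is divided by the sum of all weights (the normalizer Z_t) so the weights again form a distribution. The numbers in the usage line are made up.

      ```python
      import math

      def update_and_normalize(weights, correct, lam):
          """weights: current sample weights; correct: True/False per sample; lam: lambda_t."""
          # Multiplicative update: shrink correctly classified, grow misclassified.
          raw = [w * math.exp(-lam if ok else lam) for w, ok in zip(weights, correct)]
          z = sum(raw)                 # normalizer Z_t
          return [w / z for w in raw]  # normalized weights sum to 1

      w = update_and_normalize([0.25] * 4, [True, True, True, False], 0.5)
      print(sum(w))   # 1.0 up to floating point
      print(w)        # the misclassified sample now carries the largest weight
      ```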

  • @teachtech2777
    @teachtech2777 2 years ago +2

    Well explained... Thank you!!!

  • @yurigansmith
    @yurigansmith 1 year ago +1

    In this example the new weights for the formerly misclassified examples are increased, while the weights for the correctly classified are decreased (which seems reasonable to me at the moment). But if e_t becomes greater than 0.5, lambda_t becomes negative and the direction of the weight adaptation is swapped, which would lead to undersampling of the misclassified and oversampling of the correctly classified examples in the next round. Is lambda "allowed" to become negative in the first place? Somewhere (in slides on boosting algorithms) I read that lambda is supposed to be non-negative, but I'm not sure whether I understood that statement, or its context, correctly.

    • @machinelearningmastery
      @machinelearningmastery  1 year ago

      Great question.
      A. First, there are two weights in the system: one driving the weight of each data point, and another driving the weight of each classifier. I shall explain both in a second.
      B. Second, it is true that error > 0.5 gives a negative lambda. This is part of the design, as I clarify below.
      So: when error < 0.5, lambda > 0; when error = 0.5, lambda = 0; when error > 0.5, lambda < 0.
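
      The sign behavior described in the reply follows directly from the standard AdaBoost classifier weight lambda_t = 0.5 * ln((1 - e_t) / e_t), sketched below for illustration; the error values used are arbitrary examples.

      ```python
      import math

      def classifier_weight(error):
          """lambda_t = 0.5 * ln((1 - e) / e), for weighted error 0 < e < 1."""
          return 0.5 * math.log((1.0 - error) / error)

      print(classifier_weight(0.25))  # error < 0.5 -> lambda > 0
      print(classifier_weight(0.5))   # error = 0.5 -> lambda = 0
      print(classifier_weight(0.75))  # error > 0.5 -> lambda < 0
      ```

      A weak learner with error above 0.5 is worse than random guessing on the weighted data; a negative lambda effectively flips its vote, which is why the weight adaptation reverses direction.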

  • @JINGCUI-su1se
    @JINGCUI-su1se 1 year ago +1

    Very clear explanation, this is what I want!

  • @firstkaransingh
    @firstkaransingh 1 year ago +1

    Clear and precise explanation 👍