Well explained 👏 👌, though I am not sure if the For loop and recursive formula are correctly defined. If the initial t = 0, doesn't that mean you will have f(0) = f(0 - 1) + lambda*g(0)... what is f(0-1)
Thank you for pointing that out. The loop starts at t = 1 and runs till t = T. I shall have this correction updated.
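The correction above can be sketched in a few lines: with the loop running t = 1 … T and f_0 defined as the zero function, f(1) = f(0) + lambda_1 * g(1) is well defined. The weak learners and weights below are hypothetical placeholders, not from the article.

```python
# Minimal sketch of the corrected recursion f_t = f_{t-1} + lambda_t * g_t,
# with t running from 1 to T and f_0 = 0 (the empty ensemble).

def boosted_predictor(weak_learners, lambdas):
    """Combine T weak learners into f_T(x) = sum_{t=1..T} lambda_t * g_t(x)."""
    def f(x):
        score = 0.0  # f_0(x) = 0, so the first iteration has a base to add to
        for lam, g in zip(lambdas, weak_learners):  # t = 1 .. T
            score += lam * g(x)
        return score
    return f

# Toy usage with two hypothetical decision stumps on a scalar input.
g1 = lambda x: 1 if x > 0 else -1
g2 = lambda x: 1 if x > 2 else -1
f = boosted_predictor([g1, g2], [0.7, 0.3])
```

Here `f(3)` sums 0.7 + 0.3 = 1.0, while `f(1)` sums 0.7 - 0.3 = 0.4, illustrating how the stumps vote with their lambda weights.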
Best!
You should mention the normalization method. I struggled for ages to figure out how to normalize those numbers.
Which normalization would you like to see? The weight computation in each iteration is already normalized. Could you clarify?
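For concreteness, the per-iteration normalization usually means dividing the updated weights by their sum so they form a probability distribution. This is a sketch of an AdaBoost-style update; the names `weights`, `correct`, and `lam` are illustrative, not from the article.

```python
import math

def update_weights(weights, correct, lam):
    """Scale each weight by exp(-lam) if the point was classified correctly
    and exp(+lam) if misclassified, then divide by the sum (the constant Z_t)
    so the new weights sum to 1."""
    raw = [w * math.exp(-lam if c else lam) for w, c in zip(weights, correct)]
    z = sum(raw)                  # normalizing constant Z_t
    return [w / z for w in raw]   # a proper distribution again

# Toy usage: four points with uniform weights; the last one was misclassified.
w = update_weights([0.25, 0.25, 0.25, 0.25], [True, True, True, False], 0.5)
```

After the update the weights still sum to 1, and the misclassified point carries more weight than each correctly classified one.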
Well explained...Thank you !!!
In this example the new weights for the formerly misclassified examples are increased, while the weights for the correctly classified are decreased (which seems reasonable to me at the moment). But if e_t becomes greater than 0.5, lambda_t becomes negative and the direction of the weight adaptation is swapped, which would lead to undersampling of the misclassified and oversampling of the correctly classified examples in the next round. Is lambda "allowed" to become negative in the first place? Somewhere (slides on boosting algorithms) I read that lambda is supposed to be non-negative, but I'm not sure if I understood the statement resp. context of the statement correctly.
Great Question.
A. First, there are two weights in the system - one driving the weight of each data point and another driving the weight of each classifier. I shall explain both in a second.
B. Second, the fact is that an error > 0.5 gives a negative lambda. This is part of the design, as I shall clarify below.
So, when error < 0.5, then lambda > 0; when error = 0.5, then lambda = 0; when error > 0.5, then lambda < 0. In that last case the negative lambda effectively flips the weak learner's vote, so a classifier that is consistently worse than chance still contributes useful information.
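The sign behavior described above follows directly from the standard AdaBoost classifier weight, lambda_t = 0.5 * ln((1 - e_t) / e_t). A small sketch to check the three cases:

```python
import math

def classifier_weight(error):
    """Standard AdaBoost classifier weight: 0.5 * ln((1 - e) / e)."""
    return 0.5 * math.log((1.0 - error) / error)

classifier_weight(0.2)  # > 0: better than chance, positive vote
classifier_weight(0.5)  # = 0: no better than chance, zero vote
classifier_weight(0.8)  # < 0: worse than chance, vote is flipped
```

So a negative lambda is not forbidden; it simply inverts the contribution of a worse-than-chance learner.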
Very clear explanation, this is what I want!
Glad it was helpful!
Clear and precise explanation 👍
Glad you liked it