Disappointed to see such a poor, non-intuitive explanation of such a beautiful method. I can only imagine how IIT students would have studied. Videos from channels like StatQuest are way, way better.
These were my exact thoughts going through this video
Great video. Can you explain why the derivative of g(βᵀx) = (1 − g(βᵀx)) · (derivative of βᵀx)?
Video: 17:29
Explained at 7:40
@@srinivasdivishad8626 There is a mistake there: a factor of g(βᵀx) is missing.
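The identity discussed in this thread is the chain rule applied to the sigmoid: g′(z) = g(z)(1 − g(z)), so the full factor g(βᵀx)(1 − g(βᵀx)) is indeed needed. A minimal sketch that checks this numerically with central differences (function names are my own, not from the video):

```python
import math

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    """Analytic derivative: g'(z) = g(z) * (1 - g(z))."""
    g = sigmoid(z)
    return g * (1.0 - g)

# Check the analytic derivative against a central-difference estimate.
for z in (-2.0, 0.0, 1.5):
    h = 1e-6
    numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
    assert abs(numeric - sigmoid_grad(z)) < 1e-8
```

By the chain rule, d/dβ g(βᵀx) = g(βᵀx)(1 − g(βᵀx)) · x; dropping the leading g(βᵀx) factor is the mistake the reply points out.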
really great explanation madam
I wonder what kind of people they were, whose minds were given the power to understand this; when we tried to learn it, we faced only difficulties *crying emoji*
Hi ma'am,
How do you calculate the intercept value in logistic regression by hand? Is there a formula for the intercept?
Nice video.
I am learning data science and have a doubt in logistic regression: can you explain how to calculate the intercept value?
Use the method of maximum log-likelihood to fit the model; the numerical method to actually calculate β (the coefficients of x) is the Newton–Raphson method (see the book The Elements of Statistical Learning).
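The Newton–Raphson fit mentioned in this reply also answers the intercept question: append a column of ones to X, and the first coefficient is the intercept. A minimal sketch under that convention (the function name and toy data are illustrative, not from the video or ESL):

```python
import numpy as np

def fit_logistic_newton(X, y, n_iter=25):
    """Fit logistic regression by Newton-Raphson steps on the log-likelihood.

    A column of ones is prepended, so beta[0] is the intercept.
    """
    Xb = np.column_stack([np.ones(len(X)), X])    # add intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))      # predicted probabilities
        w = p * (1.0 - p)                         # per-sample Newton weights
        grad = Xb.T @ (y - p)                     # gradient of the log-likelihood
        hess = Xb.T @ (Xb * w[:, None])           # (negated) Hessian, X'WX
        beta = beta + np.linalg.solve(hess, grad) # Newton step
    return beta

# Toy data generated from a known intercept and slope.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 2.0 * X[:, 0])))
y = (rng.random(500) < p_true).astype(float)
beta_hat = fit_logistic_newton(X, y)              # [intercept, slope] estimate
```

Each step is equivalent to an iteratively reweighted least-squares solve, which is how ESL presents the algorithm.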
Why are we learning P(Y|X), and what do we mean by saying β parameterizes X?
Great lecture
Thank you, ma'am. Very useful.
Greetings from Nepal, T.U.
As z tends to 0, the value tends to 0.5 (3:21).
Thank you for this explanation, madam.
For gradient descent, the update has to be β = β − α · (gradient with respect to β), with a minus sign, since we are minimizing.
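The sign convention in this comment can be shown concretely: minimizing the negative log-likelihood uses the minus sign, while maximizing the log-likelihood itself would flip it (gradient ascent). A minimal sketch of one descent step (names and step size are illustrative):

```python
import numpy as np

def gradient_descent_step(beta, X, y, alpha=0.1):
    """One update: beta <- beta - alpha * grad of the average negative log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))   # sigmoid predictions
    grad = X.T @ (p - y) / len(y)         # gradient of the average NLL
    return beta - alpha * grad            # minus sign: we are minimizing

def neg_log_likelihood(beta, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy data: first column of ones is the intercept, second is a feature.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X[:, 1]))).astype(float)

beta = np.zeros(2)
loss_before = neg_log_likelihood(beta, X, y)
for _ in range(200):
    beta = gradient_descent_step(beta, X, y)
loss_after = neg_log_likelihood(beta, X, y)
```

Because the negative log-likelihood of logistic regression is convex, repeated steps with a small enough α decrease the loss monotonically.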
how is this different from convex optimization?
Thank you
Why do we take the log of this expression at 12:52?
It's for handling numerical underflow and reducing the number of multiplications, which are computationally expensive.
Log is a monotonic function, so maximizing the log-likelihood gives the same optimum as maximizing the original likelihood. The log is also computationally convenient when we use SGD on the logistic optimization objective.
It simplifies computation in a function where products and powers of numbers are involved, since the log turns them into sums.
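The underflow point raised in these replies is easy to demonstrate: a likelihood is a product of many small probabilities, which collapses to exactly 0.0 in double precision, while the sum of logs stays perfectly representable. A minimal sketch (the probabilities are made up for illustration):

```python
import math

# 1000 independent observations, each with likelihood 0.01.
probs = [0.01] * 1000

# The direct product underflows to exactly 0.0 in double precision,
# because 0.01**1000 = 1e-2000 is far below the smallest float.
product = 1.0
for p in probs:
    product *= p

# The sum of logs is just 1000 * log(0.01), a perfectly ordinary number.
log_likelihood = sum(math.log(p) for p in probs)
```

This is why the log-likelihood, not the raw likelihood, is what is actually maximized in practice.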