Logistic Regression

  • Published: 29 Sep 2024

Comments • 23

  • @sharatpc2371
    @sharatpc2371 3 years ago +11

    Disappointed to see such a poor, unintuitive explanation of such a beautiful method. I can only imagine how IIT students would have studied from this. Videos from channels like StatQuest are way better.

    • @SuryaBoddu
      @SuryaBoddu 1 year ago +1

      These were my exact thoughts while going through this video.

  • @ravimishra339
    @ravimishra339 6 years ago +2

    Great video. Can you explain why
    the derivative of g(beta^T x) = g(beta^T x) (1 - g(beta^T x)) times the derivative of beta^T x?
    Video: 17:29
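
    A short derivation for anyone with the same question, assuming g is the standard sigmoid g(z) = 1/(1 + e^{-z}), as in the video:

        g'(z) = \frac{e^{-z}}{(1+e^{-z})^2}
              = \frac{1}{1+e^{-z}} \cdot \frac{e^{-z}}{1+e^{-z}}
              = g(z)\,(1 - g(z))

    Then, by the chain rule with z = \beta^T x:

        \frac{\partial}{\partial\beta}\, g(\beta^T x)
              = g(\beta^T x)\,(1 - g(\beta^T x))\,\frac{\partial}{\partial\beta}(\beta^T x)
              = g(\beta^T x)\,(1 - g(\beta^T x))\,x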

  • @srikanthtammina1900
    @srikanthtammina1900 4 years ago

    Really great explanation, madam.

  • @chaotic_singer13
    @chaotic_singer13 4 months ago +1

    Who knows what kind of people they were, whose minds were granted the power to understand this;
    when I tried to learn it, I faced nothing but difficulties *crying emoji*

  • @arundasari773
    @arundasari773 5 years ago +4

    Hi ma'am,
    How do you calculate the intercept value in logistic regression by hand? Is there a formula for the intercept?

  • @arundasari773
    @arundasari773 5 years ago +3

    Nice video.
    I am learning data science and have a doubt about logistic regression: can you explain how to calculate the intercept value?

    • @rishabhghosh155
      @rishabhghosh155 3 years ago

      Use maximum log-likelihood to fit the model; the numerical method actually used to calculate beta (the coefficient vector, with the intercept as one of its entries) is the Newton-Raphson method (see the book The Elements of Statistical Learning).
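
      A minimal sketch of that recipe in Python (assuming 0/1 labels; the names are illustrative, not from the video; a column of ones is prepended so the intercept is just the first coefficient):

          import numpy as np

          def sigmoid(z):
              return 1.0 / (1.0 + np.exp(-z))

          def fit_logistic_newton(X, y, n_iter=25):
              """Fit logistic regression by Newton-Raphson (a sketch)."""
              Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # intercept column
              beta = np.zeros(Xb.shape[1])
              for _ in range(n_iter):
                  p = sigmoid(Xb @ beta)              # predicted probabilities
                  W = p * (1 - p)                     # per-example weights
                  grad = Xb.T @ (y - p)               # gradient of the log-likelihood
                  hess = Xb.T @ (Xb * W[:, None])     # X^T W X
                  beta = beta + np.linalg.solve(hess, grad)  # Newton step
              return beta                             # beta[0] is the intercept

      So there is no closed-form formula for the intercept: it is estimated jointly with the other coefficients.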

  • @manaspeshwe8297
    @manaspeshwe8297 4 years ago +1

    Why are we learning P(Y|X), and what do we mean by "beta parameterizes X"?
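
    In the standard setup (a standard formulation, not a quote from the video), the model learned is the conditional probability

        P(Y = 1 \mid X = x;\, \beta) = g(\beta^T x) = \frac{1}{1 + e^{-\beta^T x}},
        \qquad P(Y = 0 \mid X = x;\, \beta) = 1 - g(\beta^T x)

    Saying beta "parameterizes" the model means that once beta is fixed, the map from x to a probability is fixed; fitting means choosing the beta under which the observed labels are most likely.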

  • @tolifeandlearning3919
    @tolifeandlearning3919 2 years ago +1

    Great lecture

  • @adityarazpokhrel7626
    @adityarazpokhrel7626 4 years ago

    Thank you, ma'am. Very useful.
    Greetings from Nepal, T.U.

  • @thecodingdice3107
    @thecodingdice3107 2 years ago

    At z = 0 the value is 0.5 3:21
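
    A quick numeric check of that value, using the standard sigmoid (a sketch, not code from the video):

        import math

        def sigmoid(z):
            return 1.0 / (1.0 + math.exp(-z))

        print(sigmoid(0.0))                  # 0.5, since e^0 = 1 gives 1 / (1 + 1)
        print(sigmoid(-6.0), sigmoid(6.0))   # ~0.0025 and ~0.9975 in the tails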

  • @rameshlanka171
    @rameshlanka171 4 years ago

    Thank you for this explanation, madam.

  • @rajeshreddy3133
    @rajeshreddy3133 4 years ago +2

    For gradient descent the update has to be beta = beta - alpha * gradient of the loss with respect to beta, i.e. you subtract the gradient.
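
    A minimal sketch of that corrected update for the logistic loss (0/1 labels; alpha and the names are illustrative, not from the video):

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def gradient_descent(X, y, alpha=0.1, n_iter=1000):
            """Minimize the average negative log-likelihood; note the minus sign."""
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                p = sigmoid(X @ beta)           # current predictions
                grad = X.T @ (p - y) / len(y)   # gradient of the loss
                beta = beta - alpha * grad      # descend: subtract the gradient
            return beta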

  • @alluprasad5976
    @alluprasad5976 5 years ago

    How is this different from convex optimization?

  • @arunselvabio
    @arunselvabio 4 years ago

    Thank you

  • @sivanandapanda9793
    @sivanandapanda9793 5 years ago

    Why do we take the log of this expression at 12:52?

    • @AK-lp3ze
      @AK-lp3ze 5 years ago +4

      It's for handling numerical underflow and for replacing long chains of multiplications, which are computationally expensive, with sums.

    • @akashsaha3921
      @akashsaha3921 4 years ago

      Log is a monotonic function, so maximizing the log-likelihood gives the same optimum as maximizing the original likelihood. The log form is also computationally convenient when using SGD on the logistic optimization objective.

    • @sankarse1162
      @sankarse1162 3 years ago +1

      To simplify computation when the function involves products and powers of numbers.
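
      A small demonstration of the underflow point from these replies (illustrative numbers, not from the video):

          import math

          probs = [1e-4] * 100   # 100 small per-example likelihoods

          prod = 1.0
          for p in probs:
              prod *= p
          print(prod)            # 0.0 -- the true value 1e-400 underflows

          log_sum = sum(math.log(p) for p in probs)
          print(log_sum)         # about -921.03, perfectly representable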