2. Maximum Likelihood Estimators

  • Published: 8 Jan 2025

Comments • 8

  • @omphileashley5260 3 years ago +3

    Your videos are about to save my semester :)

    • @chessability. 3 years ago +1

      Good luck! Let me know if you have any questions.

    • @omphileashley5260 3 years ago

      How can I get ahold of you outside YouTube?

  • @nuclearcornflakes3542 3 years ago +1

    you the goat breh

  • @adiratna96 3 years ago +1

    Hey, I had a question about the log-likelihood. What if the derivative of the log-likelihood wrt theta gives me a constant? How do I interpret that in terms of the MLE of theta?

    • @chessability. 3 years ago +1

      Interesting question! It depends on the value of that constant. If the derivative is identically zero, then every real number gives the same log-likelihood, so all values of theta are equally 'good' (or bad) at optimizing it and the MLE is not unique. If the constant is nonzero, the log-likelihood is strictly monotone in theta, so there is no interior maximum and the MLE, if it exists, sits on the boundary of the parameter space.
      In the zero case, the broader intuition is that the PDF of the random variable simply doesn't change with the parameter theta, so it makes sense that we can't build an estimate of theta from the observed data (see the short sketch after this thread).
      Does that help?

    • @adiratna96 3 years ago

      @chessability. Yes, it does, thank you. It was definitely difficult to interpret.
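
A minimal worked sketch of the flat-likelihood case discussed in the thread above, assuming (purely for illustration) an i.i.d. sample x_1, ..., x_n drawn from a density f whose formula does not actually involve theta:

    \ell(\theta) = \sum_{i=1}^{n} \log f(x_i \mid \theta) = \sum_{i=1}^{n} \log f(x_i),
    \qquad
    \frac{d\ell}{d\theta} = 0 \quad \text{for every } \theta.

Here the derivative is the constant zero, so every theta attains the same log-likelihood: any value in the parameter space maximizes \ell, and the observed data carry no information about theta.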