The Cramer-Rao Lower Bound ... MADE EASY!!!

  • Published: 9 Jun 2024
  • What is a Cramer-Rao Lower Bound? How can we prove an estimator is the best possible estimator? What is the efficiency of an estimator?

Comments • 7

  • @RoyalYoutube_PRO · 14 days ago

    Fantastic video... preparing for IIT JAM MS

  • @ligandroyumnam5546 · 26 days ago

    Thanks for uploading all this content. I am about to begin my master's in data science, and I was trying to grasp some of the math theory, which is hard for me coming from a CS background. Your videos make all these topics so simple to digest.

  • @santiagodm3483 · 1 month ago

    Nice videos. I'm preparing for my master's now, and this will be quite useful; the connection between the CRLB and the standard error of the MLE estimates makes this very nice.
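
    For reference, the connection mentioned here, stated under the usual regularity conditions (a sketch, not a quote from the video): the MLE is asymptotically efficient, so its sampling variance approaches the CRLB and its standard error is estimated from the Fisher information:

      \hat\theta_{\mathrm{MLE}} \;\dot\sim\; \mathcal{N}\!\left(\theta,\ \frac{1}{n\,I_1(\theta)}\right),
      \qquad
      \widehat{\mathrm{SE}}\big(\hat\theta_{\mathrm{MLE}}\big) \approx \frac{1}{\sqrt{n\,I_1(\hat\theta_{\mathrm{MLE}})}},

    which is exactly the CRLB variance, attained in the large-sample limit.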

  • @ridwanwase7444 · 24 days ago

    Fisher information is the negative of the expected value of the second derivative of log L, so why do we multiply by 'n' to get it?

    • @briangreco2718 · 24 days ago

      I was assuming the L here is the likelihood of a single data point. In that case, you just multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer will already contain the n and you don't have to multiply at the end. The two methods are equivalent when the data is independent and identically distributed.
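
      A minimal sketch of that additivity step, assuming the n observations are iid with per-observation log-density \log f(x;\theta):

        \log L_n(\theta) = \sum_{i=1}^{n} \log f(X_i;\theta)
        \;\Longrightarrow\;
        I_n(\theta)
        = -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}\log L_n(\theta)\right]
        = \sum_{i=1}^{n}\left(-\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}\log f(X_i;\theta)\right]\right)
        = n\,I_1(\theta).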

    • @ridwanwase7444 · 24 days ago

      @briangreco2718 Thanks for replying so quickly! I have another question: does the MLE of the population mean always guarantee that it will have the CRLB variance?

    • @briangreco2718 · 24 days ago

      Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB for unbiased estimators, because the MLE is sometimes biased. For example, for a uniform distribution on [0, theta], the MLE is biased, and the Fisher information is not even defined, since the support depends on theta.
      It does hold for one-parameter exponential families such as the normal, binomial, and Poisson, where the score is linear in the sample mean, so the sample mean attains the CRLB exactly. For the exponential distribution it depends on the parameterization: the sample mean attains the CRLB for the mean 1/lambda, but the MLE of the rate lambda (one over the sample mean) is biased and does not meet the bound.
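
      A small simulation sketch of the exponential case (an illustration, not from the video; the rate, sample size, and replication count below are arbitrary choices):

      import numpy as np

      # Check CRLB attainment for Exponential(rate=lam) under two targets:
      # the mean mu = 1/lam (MLE = sample mean) and the rate lam (MLE = 1/sample mean).
      rng = np.random.default_rng(0)
      lam, n, reps = 2.0, 50, 200_000          # arbitrary rate, sample size, replications
      x = rng.exponential(scale=1 / lam, size=(reps, n))

      # Target 1: the mean mu. Per-observation information I_1(mu) = 1/mu^2,
      # so the CRLB is mu^2 / n; the sample mean is unbiased and attains it exactly.
      mu = 1 / lam
      mean_mle = x.mean(axis=1)
      print("mean MLE: var =", mean_mle.var(), " CRLB =", mu**2 / n)   # ~equal

      # Target 2: the rate lam. I_1(lam) = 1/lam^2, so the CRLB is lam^2 / n,
      # but the MLE 1/x-bar is biased upward and its variance exceeds the bound.
      rate_mle = 1 / mean_mle
      print("rate MLE: var =", rate_mle.var(), " CRLB =", lam**2 / n)
      print("rate MLE: bias =", rate_mle.mean() - lam)                 # > 0 for finite n

      The first pair of numbers should match up to simulation error; the second shows the variance sitting strictly above the bound at n = 50.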