Score test (Lagrange Multiplier test) - introduction

  • Published: 15 Nov 2024

Comments • 46

  • @Senfbro 2 years ago

    Really great! 6 minutes of this video were sufficient to give me a proper understanding of the test.

  • @Brc240 3 years ago

    I was learning this during my lectures and couldn't understand what my professor was saying (partly because he speaks quite fast); furthermore, he didn't give any intuition. Thank you so much for making this video, I understand this test now.

  • @DPPer5566 6 years ago +1

    You enlightened me! I've been obsessed with this for a long time!
    Thanks so much!

  • @HesterPrynne998 8 years ago +2

    Thank you for this easily understood explanation, it was immensely helpful!

  • @tkzahw 7 years ago +6

    The term in the middle should be the variance itself and not the inverse, correct?
    So we multiply the square of the gradient by the variance, not divide.
    That is:
    LM = S(theta_0)' . Var(theta_0) . S(theta_0)
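
A quick numerical check of the two readings, in Python (the Bernoulli model and the numbers are illustrative, not from the video): dividing the squared score by the variance of the score (the Fisher information) and multiplying it by the variance of the estimator (the information's inverse) give the same statistic.

    import numpy as np

    # Bernoulli(p) sample; score (LM) test of H0: p = p0.
    rng = np.random.default_rng(42)
    x = rng.binomial(1, 0.6, size=200)    # true p = 0.6 (illustrative)
    n, p0 = x.size, 0.5

    # S(p0): derivative of the log-likelihood, evaluated at p0.
    score = x.sum() / p0 - (n - x.sum()) / (1 - p0)
    # I(p0): Fisher information = variance of the score under H0.
    info = n / (p0 * (1 - p0))

    lm_divide = score**2 / info          # divide by Var of the score ...
    lm_multiply = score**2 * (1 / info)  # ... or multiply by Var of the estimator
    print(lm_divide, lm_multiply)        # identical; compare with chi2(1) cutoff 3.84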

  • @ayandakeith 7 years ago +15

    Why am I even attending my lectures?

  • @kangzhou1831 6 years ago +7

    I think the term in the middle of the sandwich should be the inverse of the Fisher information, instead of the inverse of the variance, since you have to take the variance of the whole score function.

    • @lastua8562 4 years ago +1

      Is that not implicit by using the "vector" as the parameter (underlined theta) for the variance?

    • @algorithmo134 8 months ago

      @kangzhou1831 I agree
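
Both readings agree under standard regularity conditions: the variance of the score evaluated at theta_0 is exactly the Fisher information, so "inverse of the variance of the score" and "inverse Fisher information" name the same middle term. In standard notation (not necessarily the video's), the statistic is asymptotically chi-squared:

    \[
    \operatorname{Var}\big(S(\theta_0)\big) = I(\theta_0)
      = -\,\mathrm{E}\!\left[\frac{\partial^2 \ell(\theta)}{\partial \theta \,\partial \theta'}\right]_{\theta=\theta_0},
    \qquad
    LM = S(\theta_0)'\, I(\theta_0)^{-1}\, S(\theta_0) .
    \]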

  • @gabrielwong1991 10 years ago +4

    Basically my lecturer and Greene's book are useless... he gave us the proof and everything in matrix form, with literally no way to understand the intuition behind it lol. Ben could actually write a textbook on this, and it would be very helpful indeed.
    Can someone tell me what on earth the mean value theorem is and how it applies to the Wald hypothesis test under maximum likelihood estimation?

    • @hounamao7140 8 years ago +3

      I feel you. If I ever graduate, they should replace the name of my university with YouTube, since I probably got 90% of my education from it...

  • @YNY-9307 5 years ago +2

    But can you also talk about the Fisher information? Sometimes we use the LM test not for MLE but for other kinds of estimates, where we need to use the Fisher information.

    • @lastua8562 4 years ago

      Can you explain how the Fisher information relates to this, please? I would be interested.

  • @bramhendriks8423 10 years ago

    I wish my lecturer would've explained it like this... Thanks :)

  • @leolei9352 1 year ago

    Concise and clear!

  • @shramansen9670 3 years ago

    Brilliant explanation

  • @sammypan3528 4 years ago +1

    Thank you Ben... But isn't Var(theta_0) just zero, since theta_0 is the null hypothesis parameter value, which is a constant? Am I getting something wrong here?

    • @sherlocksilver9392 3 years ago

      I also don't understand this. I'm thinking it's maybe something to do with the Fisher information?
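
The usual resolution (consistent with Ben's reply further down the thread) is that Var(theta_0) is shorthand for the variance of the score evaluated at theta_0, not the variance of the constant theta_0 itself:

    \[
    \operatorname{Var}\big(S(\theta_0)\big) = I(\theta_0) \neq 0 .
    \]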

  • @liao9134 7 years ago +1

    You are a life saver!

  • @RealMcDudu 4 years ago +2

    What happens when the null is even further out, in the tails, where the slope is close to 0... So this test will fail to reject exactly when it most should? :-/

    • @RealMcDudu 4 years ago +2

      So it turns out that although the likelihood can have tails, the log likelihood is usually very steep. It basically looks like a steep mountain - so it probably won't happen in that case. stats.idre.ucla.edu/wp-content/uploads/2016/02/nested_tests.gif

  • @MeerkatStatistics 4 years ago

    Just to note, in case it's not clear: you calculate the score and the variance/information matrix from the full model, and then replace the coefficient values with the H0 assumptions. So your score test will be different depending on what your full-model assumption is.

    • @nghenry458 2 years ago

      This is something I found confusing when reading about the LM test: it emphasizes that there is no need to estimate the full model, and yet it seems to me that the score is obtained by plugging theta_0 into the partial derivative of the unrestricted model's log-likelihood. I am also confused about how to evaluate the Fisher information at theta_0 (or is that what is supposed to be done?)
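
A minimal sketch of that recipe (the example here, a normal model with unknown variance testing H0: mu = mu0, is illustrative). The score and information come from the full model's log-likelihood, but both are evaluated at the restricted estimates, so the full model is never actually fitted:

    import numpy as np

    # Normal model, sigma^2 unknown (nuisance); score test of H0: mu = mu0.
    rng = np.random.default_rng(0)
    x = rng.normal(1.2, 2.0, size=100)
    n, mu0 = x.size, 1.0

    # Restricted ML estimate: fix mu = mu0 and maximize over sigma^2 only.
    sigma2_0 = np.mean((x - mu0) ** 2)

    # Score for mu from the FULL model's log-likelihood, evaluated at (mu0, sigma2_0).
    score_mu = np.sum(x - mu0) / sigma2_0
    # Fisher information for mu, also evaluated at the restricted estimates.
    info_mu = n / sigma2_0

    lm = score_mu**2 / info_mu   # equals n * (xbar - mu0)^2 / sigma2_0
    print(lm)                    # compare with chi2(1)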

  • @lastua8562 4 years ago

    Does such a likelihood curve look exactly the same as a pdf for the parameter?

  • @LilCommander 6 years ago

    This makes so much sense now. Thanks!

  • @anindadatta164 2 years ago

    Is the function of the parameter (the likelihood function) also normally distributed, to enable the use of the chi-square distribution for calculating the score test?

  • @SomethingSoOriginal 7 years ago +4

    Still don't understand, doesn't seem intuitive to me

  • @eiz8745 6 years ago

    Wish I had watched this before my exam :(

  • @cmfrtblynmb02 3 years ago

    Doesn't this make it susceptible to local minima?
    Also, is var(theta_0) simply var(theta)? Does it depend on the null hypothesis value we picked?

  • @stephen38620 10 years ago +5

    Would an extremely off parameter create a low score, and hence a low LM statistic, making the LM statistic incorrect?

    • @indragesink 9 years ago

      @stephen38620 And then in the steeper part, between the red theta_0 and the yellow theta_0 in the video, the null would actually be more likely to be rejected than at the red theta_0, even though this steeper part is closer to theta_ML. Put another way, though, I think it could make sense, because the slope (score) could automatically take the variance into account (as it appeared in the denominator of the test in the previous video).

    • @anonymousblimp 9 years ago +3

      @stephen38620
      My lecturer defined the score as the derivative of the log-likelihood function. In this case, the graph of the log-likelihood, rather than looking like a normal density, is a parabola opening downward. Thus you do not have this issue where the slope gets flatter in the tails; it only gets steeper.

    • @lastua8562 4 years ago

      @anonymousblimp Thank you for the explanation. Is this actually the case (and hence not a normal distribution)?

    • @lastua8562 4 years ago

      @indragesink I personally think this will depend strictly on the likelihood function/distribution in question, which does not need to be approximately normal. It could take any form: a parabola, as mentioned below, but also less steep shapes. Did you find the answer in the meantime?
      If the score is "taking the variance into account", why would there be any change in var(theta_0), and how do we actually find the variance of theta_0?

  • @drew96 7 years ago +1

    This definition of the score test looks quite different from this one: en.wikipedia.org/wiki/Score_test
    It should be the second derivative of the log-likelihood, not the variance. I guess these converge through the Cramér-Rao bound, but I still find it confusing.
    This test as defined seems more like a Wald test:
    en.wikipedia.org/wiki/Wald_test

    • @SpartacanUsuals 7 years ago +1

      Hi Paul, thanks for your comment. They are the same. The distribution of this statistic is, asymptotically (that's the key thing here), a chi-squared distribution. The variance is an estimator of the information matrix. The score is the numerator: it is the derivative of the log-likelihood with respect to the parameters, evaluated at the null hypothesis values. This is different from the Wald test, where the numerator is the squared deviation of the MLE away from the null hypothesis values. You'll find that the denominator for the Wald test is exactly the same as for the LM test (see page 780 of this: citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.458.4713&rep=rep1&type=pdf); a numerical comparison of the two follows this thread. Hope that clears it up. Best, Ben

    • @francoisallouin1865 7 years ago +1

      Just wanted to point out that the link does not work, but I am happy with Ben's explanation. (It says: No document with DOI "10.1.1.458.4713". The supplied document identifier does not match any document in our repository.)

    • @algorithmo134 8 months ago

      @SpartacanUsuals Hi, the denominators of the score test and the Wald test are not the same. The denominator of the Wald test statistic is the variance of the MLE, which is the inverse of the Fisher information, whereas the denominator of the score test is the Fisher information. You can check Wikipedia.
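
To make the comparison concrete, a small Python sketch (the Bernoulli setup is illustrative, not from the thread) computing both statistics on the same data. The Wald numerator is the squared deviation of the MLE from the null value; the LM numerator is the squared score at the null value; each is scaled by the information, evaluated at the MLE for Wald and at the null for LM:

    import numpy as np

    # Bernoulli sample; compare the Wald and LM (score) statistics for H0: p = p0.
    rng = np.random.default_rng(1)
    x = rng.binomial(1, 0.6, size=200)
    n, p0 = x.size, 0.5
    p_hat = x.mean()                    # unrestricted MLE

    # Wald: squared deviation of the MLE, times information at the MLE.
    wald = (p_hat - p0) ** 2 * n / (p_hat * (1 - p_hat))

    # LM: squared score at p0, divided by information at p0.
    score = x.sum() / p0 - (n - x.sum()) / (1 - p0)
    lm = score**2 * p0 * (1 - p0) / n

    print(wald, lm)   # both asymptotically chi2(1) under H0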

  • @VolcanicDonut 5 years ago

    So what is theta?

  • @meenakshigautam4249 3 years ago

    Can you please help me with an R code example related to this? 😅

  • @Byc845 4 years ago

    Is the denominator Var(\theta_0)? Why isn't it Var(\hat{\theta})?

    • @lastua8562 4 years ago

      Because we are only evaluating the score at the hypothesized value, and we do not even consider an ML estimator, i.e. Var(\hat{\theta}).
      However, I wonder how to get the variance of theta_0. Any ideas?

  • @ayoungchun3806 6 years ago +1

    Brilliant, thanks!

  • @SuperBafta 4 years ago

    Happy birthday to C. R. Rao!

  • @jamesmorelle862 4 years ago

    Much love to you