30: Maximum likelihood estimation

  • Published: 13 Dec 2024

Comments

  • @CharlotteC1990
    @CharlotteC1990 8 years ago +22

    Best MLE video on YouTube! Thank you :)

    • @davorgolik7873
      @davorgolik7873 16 days ago

      Agree! Thank you also from my side!

  • @FloppyDobbys
    @FloppyDobbys 7 years ago +2

    Clearest explanation I have seen on YouTube thus far

  • @hojkoff
    @hojkoff 4 years ago +2

    Great explanation. Clear, structured and explained in simple understandable terms. Thanks for taking the time to put this together.

  • @argentina2152
    @argentina2152 8 years ago +2

    Thank you so much. Now it all makes sense. I had difficulty grasping the idea of MLE, but with your explanation I feel confident going back to the lectures and being able to follow them.

  • @stevegyro1
    @stevegyro1 6 years ago +2

    Matthew you are outstanding as a teacher. Thank you for the many insights and teaching.
    -Steve G.

  • @shane1146
    @shane1146 8 years ago +19

    Matthew you are awesome.
    I wish you did a video on Bayesian too. Bayesian, MCMC one please??

  • @franciscomendoza3778
    @franciscomendoza3778 2 years ago

    Really nice presentation

  • @CC-op3ez
    @CC-op3ez 1 month ago

    Thank you so much for your video, especially for warnings about model selection and AIC. Would you please explain more (or give me some documents or references) about "do not combine model selection with hypothesis testing. The p value significance will be inflated because you are implicitly testing multiple hypotheses with model selection"
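
The inflation the commenter asks about can be illustrated with a small simulation (not from the video; all numbers here are illustrative): under the null hypothesis, selecting the best-looking of ten candidate predictors and then testing only that one at the nominal 5% level rejects far more often than 5%, roughly 1 - 0.95**10, about 40%.

```python
# Illustrative simulation: under the null (no real effect anywhere),
# pick the most significant of 10 candidate predictors and then test it
# at the nominal 5% level. The realized false-positive rate is far
# higher than 5%.
import random

random.seed(0)

def z_stat(n=30):
    # z-statistic for the mean of n standard normal draws under the null
    xs = [random.gauss(0, 1) for _ in range(n)]
    return (sum(xs) / n) * n ** 0.5

trials, candidates, crit = 2000, 10, 1.96  # 1.96 = two-sided 5% cutoff
hits = sum(
    max(abs(z_stat()) for _ in range(candidates)) > crit
    for _ in range(trials)
)
print(hits / trials)  # well above the nominal 0.05
```

This is the sense in which model selection is itself an implicit multiple test: the selected model's p-values no longer have their nominal level.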

  • @Maha_s1999
    @Maha_s1999 6 years ago +1

    Fantastic! Thank you so much for this super clear exposition.

  • @مصطفىعبدالجبارجداح

    May God bless you and reward you with the best of rewards.

  • @tag_of_frank
    @tag_of_frank 6 years ago +4

    Isn't (probability of x given theta) = (probability of theta given x)(probability of x)/(probability of theta)?
    If this is the case, is the "likelihood function" as you defined it really equal to the probability of x given theta?
    If so, why, since it is missing those two extra terms?

    • @cube2fox
      @cube2fox 6 years ago +1

      Fahraynk, I have the same question; please tell me when you find an answer.

    • @MM-du7je
      @MM-du7je 6 years ago +3

      That is true in a Bayesian setting, where the parameters in a given model are treated as random.
      The point of maximum likelihood estimation, and of frequentist inference more broadly, is that we treat parameters as fixed, so that we can estimate them. I wish the author had used the more common density notation for MLE, f(x; theta) instead of f(x given theta); it's confusing when this distinction isn't made explicit.
      By the way, you are correct that Bayes' theorem works for densities too: p(theta) is the density of the parameter, p(theta given x) is the density of the parameter conditioned on the data (the thing we want!), and p(x) is the normalizing constant. Bayesian inference is basically the science of picking a prior by objective or subjective mathematical means.
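
The fixed-parameter view described in this reply can be sketched in code (an illustrative example, not from the video): theta, here the mean mu of a normal with known sigma = 1, is treated as an unknown constant, and the likelihood L(mu) = f(x; mu) is maximized over mu with the data held fixed.

```python
# Sketch of frequentist MLE: maximize the log-likelihood over mu,
# treating the observed data as fixed and mu as a fixed unknown.
import math

def log_likelihood(mu, data):
    # sum of log f(x_i; mu) over i.i.d. observations, sigma fixed at 1
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2
               for x in data)

def mle_mean(data, lo=-10.0, hi=10.0, steps=20001):
    # brute-force grid search; for this model the closed-form MLE
    # is just the sample mean
    best_mu, best_ll = lo, -math.inf
    for i in range(steps):
        mu = lo + (hi - lo) * i / (steps - 1)
        ll = log_likelihood(mu, data)
        if ll > best_ll:
            best_mu, best_ll = mu, ll
    return best_mu

data = [1.2, 0.8, 1.5, 0.9, 1.1]
print(mle_mean(data))  # close to the sample mean, 1.1
```

The grid search stands in for calculus or a numerical optimizer; the point is only that mu varies while the data stay fixed.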

  • @saurabh75prakash
    @saurabh75prakash 6 years ago +1

    Nicely explained, thanks!

  • @RPDBY
    @RPDBY 6 years ago

    AIC - the lower the better, LL - the higher the better, but both measure the same concept, so using both is redundant; one will suffice (as one will always go down when the other goes up, judging by the formula). Did I get it right?
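
Worth noting on this question: AIC = 2k - 2*logL includes a penalty of 2k for the number of fitted parameters, so the two measures move in lockstep only within a single model. Across models of different sizes they can disagree, which a small illustration with made-up numbers shows:

```python
# AIC = 2k - 2*logL. With k fixed, AIC falls exactly as logL rises, but
# across models the 2k penalty matters: a model with a higher
# log-likelihood can still lose on AIC. Numbers are illustrative only.
def aic(k, log_lik):
    return 2 * k - 2 * log_lik

simple = aic(k=2, log_lik=-100.0)    # 204.0
complex_ = aic(k=10, log_lik=-98.0)  # 216.0
print(simple, complex_)  # simpler model wins on AIC despite a lower logL
```

So AIC is not redundant with the log-likelihood when comparing models with different parameter counts.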

  • @elliott8175
    @elliott8175 2 years ago

    Thank you so much!! Such clear explanations!!

  • @roma9026
    @roma9026 8 years ago

    Thank you very much for your introduction!

  • @StephenRoseDuo
    @StephenRoseDuo 7 years ago

    How does restricted maximum likelihood estimation change the description here?

  • @profmo
    @profmo 8 years ago

    Thank you for taking the time to make this video.

  • @PedroRibeiro-zs5go
    @PedroRibeiro-zs5go 7 years ago +1

    Great video man! Helped me a lot, all the best :D

  • @taded7169
    @taded7169 8 years ago

    Really it is very interesting!! Thank You!!

  • @sukursukur3617
    @sukursukur3617 2 years ago

    Yeah, that's a clear explanation

  • @ndiegow1
    @ndiegow1 8 years ago

    Amazing, I finally understood MLE

  • @joelmhaske8185
    @joelmhaske8185 4 years ago

    Seriously good!

  • @joseluisredondogarcia5244
    @joseluisredondogarcia5244 8 years ago

    The last slide is gold

  • @golamwahid8630
    @golamwahid8630 5 years ago

    Thank you very much!

  • @hounamao7140
    @hounamao7140 8 years ago

    You're amazing, sir

  • @clairekunesh4637
    @clairekunesh4637 8 years ago

    Very helpful

  • @azadalmasov5849
    @azadalmasov5849 7 years ago

    It is interesting to me why they just don't divide the AIC equation by 2.

  • @ivarockazi
    @ivarockazi 7 years ago +1

    Nice explanation... but I ended up cleaning my laptop screen after 4:49

  • @RafaelLima-ox9ul
    @RafaelLima-ox9ul 8 years ago

    cool! Good job!

  • @aravindhan9368
    @aravindhan9368 7 years ago +1

    Thank you, sir. It's very helpful. Can you please show the mathematical working of this in figures?

  • @sandrahuhu7429
    @sandrahuhu7429 7 years ago

    this is great!

  • @mnkmnkification
    @mnkmnkification 8 years ago

    AWESOME!

  • @rawiyahalraddadi7064
    @rawiyahalraddadi7064 6 years ago

    Thank you!

  • @usmansaeed678
    @usmansaeed678 7 years ago

    Thanks

  • @looploop6612
    @looploop6612 7 years ago

    Is likelihood the same as probability?
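
The distinction behind this question can be sketched numerically (an illustrative example, not from the video): the same density formula is a probability density as a function of the data with the parameter fixed, but a likelihood as a function of the parameter with the data fixed, and the latter need not integrate to 1.

```python
# The exponential density f(x; lam) = lam * exp(-lam * x), x >= 0.
# As a function of x with lam fixed it is a probability density and
# integrates to 1. As a function of lam with x fixed it is a likelihood
# and integrates to 1/x**2, not 1.
import math

def exp_pdf(x, lam):
    return lam * math.exp(-lam * x)

def integrate(f, lo, hi, n=200000):
    # midpoint-rule numerical integration
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

print(integrate(lambda x: exp_pdf(x, lam=1.0), 0, 50))   # ~1.0, density in x
print(integrate(lambda lam: exp_pdf(2.0, lam), 0, 50))   # ~0.25, likelihood in lam
```

So a likelihood is built from probabilities (or densities) but is not itself a probability distribution over the parameter.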