Maximum likelihood estimation - Poisson, exponential examples

  • Published: 27 Oct 2024

Comments • 32

  • @RB3565
    @RB3565 5 years ago +4

    This is the most straightforward explanation for any struggling student. If you cannot understand this tutorial, forget this subject. Thank you.

  • @fatsquirrel75
    @fatsquirrel75 6 years ago +1

    Beautiful explanation. Finally following what's going on. My professor just throws out random formulas without explaining where they come from. Love your work.

  • @lm5050
    @lm5050 7 years ago +1

    Thank you! Your quality of teaching far exceeds that of the average statistics professor.

  • @I_Am_Midnight-i
    @I_Am_Midnight-i 6 years ago +2

    Best video I've seen regarding the likelihood function.

  • @avadhootv
    @avadhootv 5 years ago

    Hi. You probably saved my thesis. I was finding it difficult to wrap my head around MLE as the steps were confusing. I am trying to model a Lévy walk and this video has been very helpful. Kudos.

  • @HarpreetSingh-ke2zk
    @HarpreetSingh-ke2zk 8 years ago

    Really, really a likeable way to present a concept. Thanks.

  • @dsreddy6969
    @dsreddy6969 9 months ago

    Thank you so much for this, bro. I found the right guide for understanding MLE.

  • @nanjingian
    @nanjingian 8 years ago +1

    Great tutorial. Best on MLE I have ever seen. Thanks Phil.

  • @mehranhosseinzadeh8072
    @mehranhosseinzadeh8072 9 years ago +2

    Indeed understandable. It is 12:30 am here and I was about to give up on finding a good video on YouTube. Thanks

    • @PhilChanstats
      @PhilChanstats  9 years ago

      +Mehran Hosseinzadeh Never give up! Hope you were not disappointed.

    • @mehranhosseinzadeh8072
      @mehranhosseinzadeh8072 9 years ago +1

      Well, I was close to giving up, but thanks to you I now have a much better understanding of MLE.

    • @PhilChanstats
      @PhilChanstats  9 years ago

      MLE is one of those topics that 1st-year students find hard to grasp. It's something they've not seen in school.

  • @Ihatenicknames1
    @Ihatenicknames1 10 years ago +1

    This is seriously the best explanation of maximum likelihood estimation that I have ever seen. THANK YOU SO MUCH. You truly saved my day, if that means anything to you ;)
    There is just one little thing I don't really understand. Why do we get e^(-lambda * sum x_i) and not e^(-lambda * n)? Don't we just multiply the e term n times?
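
The exponent question above comes down to how exponentials multiply. In the exponential-distribution example each likelihood factor is lambda * e^(-lambda * x_i), and a product of exponentials adds the exponents, so the likelihood collapses to lambda^n * e^(-lambda * sum(x_i)): the sum of the observations, not n, ends up in the exponent. A minimal numeric check of that identity, with arbitrary made-up data:

```python
import math

# Arbitrary, made-up sample treated as draws from an exponential distribution
x = [0.5, 1.2, 0.8, 2.0]
lam = 1.5          # a candidate value for lambda
n = len(x)

# Likelihood as a literal product of the densities f(x_i) = lambda * exp(-lambda * x_i)
prod = 1.0
for xi in x:
    prod *= lam * math.exp(-lam * xi)

# Closed form: lambda^n * exp(-lambda * sum(x_i)).
# The exponents add, so the sum of the x_i (not n) sits in the exponent.
closed = lam ** n * math.exp(-lam * sum(x))

print(math.isclose(prod, closed))  # True: the two forms agree
```

The e^(-lambda) factor is indeed multiplied n times, but each copy carries its own x_i in the exponent, which is why the x_i get summed rather than simply counted.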

  • @prosimulate
    @prosimulate 7 years ago

    What a transparent, easy-to-follow video. Great work!

  • @bRyANthen0112
    @bRyANthen0112 10 years ago +1

    Thank you, the best explanation I've found by far.

  • @omarkanaan7515
    @omarkanaan7515 6 years ago

    Great explanation, Phil! Thank you.

  • @galaxymm5363
    @galaxymm5363 3 years ago

    Nice video, thank you

  • @LedCepelin
    @LedCepelin 6 years ago

    Thanks for the refresher! You made me a little nervous when you missed the log for lambda; I was about to go and re-learn maths, haha.

  • @richardgruss8171
    @richardgruss8171 7 years ago +1

    Very clear explanation. Thank you!

  • @januszstolarek9862
    @januszstolarek9862 7 years ago +2

    What great work! Thank you.

  • @brostoch9677
    @brostoch9677 4 years ago

    I like it. Good explanation.

  • @stewartmoore5158
    @stewartmoore5158 5 years ago

    So much better than my useless lecturer.

  • @youhenok
    @youhenok 7 years ago +1

    Very nice... yeah, of course it's a gentle one.

  • @GAWRRELL
    @GAWRRELL 10 years ago +1

    Can you please make an example of this using real world data?

  • @mugerwahamudane7449
    @mugerwahamudane7449 5 years ago

    You're the best.

  • @adityamanimishra5053
    @adityamanimishra5053 4 years ago

    Nice video

  • @mr.luvnagpal7407
    @mr.luvnagpal7407 4 years ago

    You are amazing.

  • @HOOHA333
    @HOOHA333 7 years ago

    I've got two questions:
    1. What is the logic behind maximizing the joint probability? I mean, why do we maximize the joint probability in order to find theta?
    2. If I am not sure about the range of lambda, how will I check where the global maximum is situated?

    • @PhilChanstats
      @PhilChanstats  7 years ago

      1. You may think of it as follows: given a model (chosen by you) for the data, MLE finds the parameter values that are most consistent with the data.
      2. In the models newbies come across (e.g. those based on the exponential family of distributions) the likelihood function is concave, so an MLE will be the global maximizer. If your model has one parameter, and the 2nd derivative of the likelihood (or, easier, of the log-likelihood) is negative for all permissible parameter values, then the function is concave.
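
The concavity check described in the reply above can be sketched numerically for the Poisson example. This is an illustrative sketch with made-up counts, not code from the video: setting the derivative of the log-likelihood to zero gives lambda_hat equal to the sample mean, and the second derivative, -sum(x)/lambda^2, is negative for every lambda > 0, so that stationary point is the global maximizer.

```python
import math

# Made-up Poisson counts (illustrative only)
x = [2, 3, 1, 4, 2, 0, 3]
n = len(x)

def loglik(lam):
    """Poisson log-likelihood: -n*lam + sum(x)*log(lam) - sum(log(x_i!))."""
    return (-n * lam + sum(x) * math.log(lam)
            - sum(math.log(math.factorial(xi)) for xi in x))

# First-order condition: -n + sum(x)/lam = 0  =>  lam_hat = sample mean
lam_hat = sum(x) / n

# Second derivative: -sum(x)/lam^2, negative for all lam > 0,
# so the log-likelihood is concave and lam_hat is the global maximizer
second_deriv = -sum(x) / lam_hat ** 2

print(lam_hat)                      # the sample mean
print(second_deriv < 0)             # True: log-likelihood is concave
print(loglik(lam_hat) > loglik(1.0) and loglik(lam_hat) > loglik(4.0))  # True
```

Evaluating the log-likelihood at a couple of other lambda values, as in the last line, is a quick sanity check that the closed-form estimate really does sit at the maximum.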

  • @samah241
    @samah241 8 years ago

    I want to know the meaning of penalized MLE.

  • @vijaybalu6867
    @vijaybalu6867 7 years ago

    I would like a video on how to do maximum likelihood estimation using SPSS.

    • @PhilChanstats
      @PhilChanstats  7 years ago

      For this, please refer to my SPSS videos.