Maximum Likelihood estimation - an introduction part 1

  • Published: 27 Oct 2024

Comments • 125

  • @adamomer9658
    @adamomer9658 9 years ago +247

    I really appreciate this, as do many other econometricians around the world, because we in the third world suffer so much from an environment that makes it hard to pursue higher education, especially in the field of statistics. God bless you, Mr. Ben

    • @semmicolon
      @semmicolon 6 years ago +14

      I'm in Canada and my master's-level econ lecturer couldn't teach this properly

    • @wanjadouglas3058
      @wanjadouglas3058 4 years ago +1

      Couldn't agree more

  • @Alessio11092
    @Alessio11092 6 years ago +15

    Today I have my Econometrics exam in my master's. May one millionth of Ben's knowledge reside in me. For real, this is truly a lifesaver. Many people, including me, really appreciate your hard work and dedication. Thanks to your explanations, this subject has become much easier and more interesting. You are the Khan Academy of econometrics!

    • @lastua8562
      @lastua8562 4 years ago +1

      His voice is much easier to listen to than Khan Academy's. Do you have any recommendations for Tobit estimation? I could not find it in Ben's work.

  • @sami-samim
    @sami-samim 8 years ago +13

    I thank YOU and the founder of YouTube... and the internet!

  • @SpartacanUsuals
    @SpartacanUsuals  11 years ago +27

    Hi, many thanks. Glad to hear you found it helpful! Thanks, Ben

  • @wut_heart
    @wut_heart 7 years ago +2

    Thanks so much Ben, you are a really gifted teacher. A mere half hour of your videos has really opened up this concept for me!

  • @estelleliu7195
    @estelleliu7195 10 months ago

    Much clearer than what my professor taught us. Thanks for making this video!

  • @nesleyorochena6223
    @nesleyorochena6223 9 years ago +2

    One of the best videos I have seen on MLE.

  • @wanjadouglas3058
    @wanjadouglas3058 4 years ago +2

    Paul, this was very helpful. I'm doing QM2 as part of my Ph.D. coursework in economics and you always help clarify concepts. A real estimation example, especially with a normal PDF, would help elucidate things further.

  • @ayikkathilkarthik4312
    @ayikkathilkarthik4312 4 years ago +1

    Really appreciate that explanation; I was getting confused about this.
    You cleared all my doubts.
    Thanks.

  • @kursworld
    @kursworld 3 years ago

    Best explanation so far of the meaning of the likelihood function!

  • @stolfan1234
    @stolfan1234 2 years ago

    It’s so nice to get some sort of intuitive feeling about this. Thank You!

  • @MrAndrewDAmato
    @MrAndrewDAmato 6 years ago +2

    This was really helpful. I still don't know how to do my homework lol but this was definitely a step in the right direction. Thank you!

  • @blognv1
    @blognv1 7 years ago +1

    Thank you sir. To be honest, I am not sure about the difference between likelihood and probability, but I did understand MLE after watching your videos.

  • @khumomatlala7106
    @khumomatlala7106 7 years ago

    I might be wrong but this is my understanding of this video:
    P is the probability that we pick/choose/observe a male from the population.
    That means that 1 - P is the probability of choosing/picking/observing a female.
    In this video, he is trying to estimate what P (i.e. the probability of choosing a male in the UK) would be if it were not already given to us.
    Note: the distribution used is a Bernoulli distribution.
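
This Bernoulli setup can be sketched numerically. A minimal Python example (the 0/1 data below are made up, not the video's figures) evaluates the likelihood L(p) = ∏ p^xi * (1-p)^(1-xi) over a grid of candidate p values; the maximizer lands on the sample proportion, which is the analytic ML estimate.

```python
from math import prod

# Hypothetical 0/1 sample: 1 = observed a male, 0 = a female (made-up data)
x = [1, 0, 1, 1, 0, 1, 0, 1]

def likelihood(p, xs):
    # Bernoulli likelihood: L(p) = prod_i p^x_i * (1 - p)^(1 - x_i)
    return prod(p ** xi * (1 - p) ** (1 - xi) for xi in xs)

# Crude grid search over candidate values of p
grid = [i / 100 for i in range(1, 100)]
p_hat = max(grid, key=lambda p: likelihood(p, x))

print(p_hat)            # close to 0.625, the sample proportion
print(sum(x) / len(x))  # 0.625: the analytic ML estimate
```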

  • @jaredgreathouse3672
    @jaredgreathouse3672 4 years ago +3

    This is the Khan Academy of econometrics

  • @Daniel-cu9wj
    @Daniel-cu9wj 6 years ago

    As usual, great explanation Ben Lambert. Thank you for the effort you put into making these videos. I come here every day after my Econometrics II class to get a refresher. More often than not, I learn more from your videos than from class. Cheers.

  • @Partho2525
    @Partho2525 10 years ago +19

    you know you are a great teacher...thanks

    • @SpartacanUsuals
      @SpartacanUsuals  10 years ago +9

      Hi, many thanks for your message, and kind words. Best, Ben

    • @enassabed4102
      @enassabed4102 10 years ago +2

      Ben Lambert thank you for your courses, they are very helpful, you are a great teacher Mr. Ben

    • @Paswansonu80
      @Paswansonu80 7 years ago

      great sir

  • @ProgrammingTime
    @ProgrammingTime 10 years ago +4

    Excellent videos. I've been exploring statistics as a personal interest and these videos are extremely helpful. Keep up the good work!

  • @busarahall7554
    @busarahall7554 5 years ago

    You saved my grade on my last midterm! Thanks!!!

  • @yanhaong5309
    @yanhaong5309 9 years ago

    Best explanation of ML I have ever seen... thanks.

  • @MrScotchpie
    @MrScotchpie 8 years ago +6

    For something so simple and intuitive, this makes it sound very complex.

  • @2beokisgr8
    @2beokisgr8 4 months ago

    Great refresher for me, thanks

  • @Mr1Lemos
    @Mr1Lemos 8 years ago +1

    Great video, a good explanation that allows one to clearly understand the concept.

  • @lizrael
    @lizrael 2 years ago

    You are a life-saver.

  • @GAment_11
    @GAment_11 6 years ago

    Extremely clear. Subscribed. Thank you so much for taking the time to do this.

    • @lastua8562
      @lastua8562 4 years ago

      Most of all, he is doing it all completely for free. The best, helping thousands of people; if more people knew about him, probably a few million.

  • @kelpgy
    @kelpgy 1 month ago

    this is ridiculously helpful thank you

  • @connyv.3807
    @connyv.3807 4 years ago

    Thank you for this wonderful explanation.

  • @saargolan
    @saargolan 2 years ago

    Excellent explanation.

  • @lawrencecohen1619
    @lawrencecohen1619 2 years ago

    Excellent video providing great clarity on maximum likelihood estimation.

  • @gello1337
    @gello1337 6 years ago

    Ben you are truly amazing.

  • @Grandremone
    @Grandremone 4 years ago +1

    Great explanation, thank you!!

  • @charlesledesma305
    @charlesledesma305 4 years ago

    Excellent explanation!

  • @MrRynRules
    @MrRynRules 3 years ago

    Thank you! Really appreciate your explanation!

  • @tenzinnamdhak
    @tenzinnamdhak 8 years ago

    Hi there, I really enjoyed your video. It helped me understand the concept. It would have been even better if you had used the two-variable model, with Yi normally and independently distributed with its mean and variance.

  • @luciapage-harley8860
    @luciapage-harley8860 4 years ago +1

    Hi, I have a question! Why are you using the conditional pdf f(xi | p)? In other tutorials I've seen them use this one, the marginal pdf, and the joint pdf, but I can't find an explanation of why :) thank you!

  • @yusifovaze
    @yusifovaze 9 years ago

    Your videos are great, man, thank you very much and I wish you good luck!

  • @zoomnguyen951
    @zoomnguyen951 2 years ago

    Excellent! Thank you very much!

  • @statisticstime4734
    @statisticstime4734 4 years ago

    Excellent!

  • @shgidi
    @shgidi 6 years ago

    Great lectures! I would suggest differentiating capital X from x by writing the small x in a curlier style.

  • @junkfire4554
    @junkfire4554 6 years ago

    I'm lost at 5:49. Are you saying that we're seeing whether the observations we ended up getting align with the probability of getting those observations? So that the higher the 'likelihood', the less biased and more consistent our estimator is?

  • @andrewkivela5668
    @andrewkivela5668 4 years ago

    I actually understood this!

  • @Theirviewmatters
    @Theirviewmatters 10 years ago

    I don't know if this is a stupid question. I'm studying statistics right now and in my book it says P(p|x) = ∏ f(xi|p). In your lecture it's turned around: P(x|p) instead of P(p|x). Can you explain it to me? I don't have a clue what I'm doing here!

  • @michaelleming9123
    @michaelleming9123 7 years ago

    nicely done (and the subtitles are a hoot)

  • @SpencerLupul
    @SpencerLupul 1 year ago

    Thanks for the lesson! Very helpful.
    Though after spending the last few days brushing up on statistics… it amazes me just how many stats teachers use binary gender as an example in their videos… isn't this actually a mistake? I mean… it's no mystery that there exist people outside of the male/female definition. Therefore, it makes an empirical lesson feel like it is making a socio-normative conclusion. I will keep leaving this comment on stats teachers' videos, because I think it's a conversation worth having.
    After all, if what you are teaching is factual…. Then the examples should be without a doubt factual in nature. Or do you disagree?

  • @lastua8562
    @lastua8562 4 years ago

    1) Our original function is only for the binary case, i.e. 1 vs 0?
    2) Is MLE only for binary cases? If not, how would we use p in other functions?
    Thanks.

  • @mastahid
    @mastahid 7 years ago +1

    Which one is given: the parameter or the observed data?

  • @MaksUsanin
    @MaksUsanin 8 years ago

    Hello Ben, can you explain a few points please? In your example you use f(Xi | P) at 1:23 in the video - is this a notation you created yourself? Who created the rules? Could it be written like f(Bj | T), or another style of my own invention? How do you decode these symbols/formulas into useful information? Thank you for the answer

    • @1994RandomUser
      @1994RandomUser 8 years ago

      +Maks Usanin Hi Maks, using Xi is fairly standard because you want to know a certain value of x, given a probability distribution. The P, however, is usually whatever parameter tends to be used. For this specific scenario, P is appropriate as it follows a Bernoulli distribution (x can take the values 0 or 1) and p tends to be the parameter used.
      Try not to get too hung up on the symbols; just think of the second part as the parameter of the distribution function, and the first as what you want to know from that distribution function.

  • @airhead3409
    @airhead3409 7 years ago

    @Ben Lambert how exactly did you derive that f(x_i | p) = ...? Is that some sort of Bernoulli cross-entropy? I just would like to know how to get to that result :)

  • @deepintheslums
    @deepintheslums 8 years ago

    Great explanation

  • @Ha-mb4yy
    @Ha-mb4yy 9 months ago

    Why haven't you included the binomial coefficient in the function?

  • @OsamaComm
    @OsamaComm 11 years ago

    Very Nice, I am so thankful.

  • @fgatzlaff
    @fgatzlaff 7 years ago

    Hi Ben,
    I much appreciate your video and introduction to the likelihood function. It's really straightforward and I like the way you structured the video. However, I can't wrap my head around the function p^xi * (1-p)^(1-xi). Could you maybe explain the logic behind this formula? Like, why is this assumption logically correct and how was it derived?
    Kind regards,
    Florian

    • @praveenkumar-mh2dt
      @praveenkumar-mh2dt 2 years ago

      Since it is a binary outcome, you can treat it as a Bernoulli random variable. That's the function for modelling a Bernoulli RV. You can think of it as a binomial distribution with n = 1.
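
As a quick check of this point, a small sketch (the value of p below is hypothetical) showing that the Bernoulli pmf p^x * (1-p)^(1-x) returns p when x = 1 and 1 - p when x = 0:

```python
def bernoulli_pmf(x, p):
    # f(x | p) = p^x * (1 - p)^(1 - x), defined for x in {0, 1}
    return p ** x * (1 - p) ** (1 - x)

p = 0.7  # hypothetical probability of drawing a 1
print(bernoulli_pmf(1, p))  # p itself: the probability of observing a 1
print(bernoulli_pmf(0, p))  # 1 - p: the probability of observing a 0
```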

  • @atrus3823
    @atrus3823 1 year ago

    Can't we simplify this more by just summing the exponents for p and (1 - p), since p^{x_1} * p^{x_2} = p^{x_1 + x_2}?
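
The simplification does hold: since the xi sit in the exponents, the product collapses to p^(Σxi) * (1-p)^(n - Σxi). A small numerical check (made-up data) confirming the full product equals the collapsed form:

```python
from math import prod, isclose

x = [1, 0, 1, 1, 0]   # made-up 0/1 data
p = 0.4               # arbitrary candidate value of p
n, s = len(x), sum(x)

# Full observation-by-observation product
full = prod(p ** xi * (1 - p) ** (1 - xi) for xi in x)
# Same likelihood with the exponents summed across observations
collapsed = p ** s * (1 - p) ** (n - s)

print(isclose(full, collapsed))  # True
```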

  • @billykovalsky8149
    @billykovalsky8149 4 years ago

    I don't get it. The expression 'dL/dp = 0' is not explained; it seems to come out of nowhere.

  • @jakobtheking1
    @jakobtheking1 5 years ago

    I really like your videos, they help a lot, but to be honest your explanation at the end of this video is very vague. What do you mean by maximizing the likelihood over the choice of p?

  • @leojboby
    @leojboby 7 years ago

    Still at 0% progress wrt MLE. Peaks and valleys exist where derivatives are 0, so we are assuming the shape of L. Moreover, p is in turn expressed in terms of x. How is this dealt with? Even before finding the derivative of this joint probability I'm at a loss...

  • @alirezagyt
    @alirezagyt 7 years ago

    So how, in the last line, do we get to the joint probability from the conditional probability?
    I think the fact that the variables are independent lets us write each conditional probability separately, but I don't think it lets us turn a conditional probability into a joint probability.

  • @saketanand6076
    @saketanand6076 7 years ago

    You are a great teacher. Could you add a series of lectures on time series as a playlist? You have the videos but they are scattered.

  • @kerolesmonsef4179
    @kerolesmonsef4179 4 years ago

    Very helpful, thanks.

  • @coconutking23
    @coconutking23 10 years ago +2

    thank you sir, just made my day :)

  • @musicjunkie8228
    @musicjunkie8228 7 years ago

    Wish you'd enable community contribution so we could fix those subtitles for you! :)

    • @SpartacanUsuals
      @SpartacanUsuals  7 years ago +2

      Hi, thanks for your idea -- I didn't know such a thing existed! I have switched this on now, so anyone who wants to help, can do. All the best, Ben

  • @70ME3E
    @70ME3E 11 months ago

    what's "the idere is"? is that short for 'the idea here is'?

  • @diodin8587
    @diodin8587 4 years ago

    At 1:34, P(X_i|p) should be written as P(X_i; p).

  • @juliangermek4843
    @juliangermek4843 10 years ago +1

    What I still don't understand is the following: if you look at a sample of 100 people to estimate p (the probability that a person is a man) for the whole UK, you use the likelihood approach, which is quite a complicated calculation. Why don't you just count how many men you got in the sample to get the ratio #men/#everyone? E.g. 60 men out of 100 gives p = 0.6.

    • @aBigBadWolf
      @aBigBadWolf 8 years ago

      +Julian Germek If you follow this series you will see that this is actually the case.
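
The two routes do coincide for Bernoulli data; a short sketch (illustrative made-up data) verifies numerically that the simple ratio sits exactly where the slope of the log-likelihood is zero, i.e. at the maximum:

```python
from math import log

def log_lik(p, xs):
    # Bernoulli log-likelihood: s*log(p) + (n - s)*log(1 - p)
    s, n = sum(xs), len(xs)
    return s * log(p) + (n - s) * log(1 - p)

x = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # made-up data: 7 ones out of 10
p_hat = sum(x) / len(x)               # the simple ratio, 0.7

# Numerical slope of the log-likelihood at p_hat: ~0 at the maximizer
h = 1e-6
slope = (log_lik(p_hat + h, x) - log_lik(p_hat - h, x)) / (2 * h)
print(abs(slope) < 1e-3)  # True
```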

  • @nasirminhas1163
    @nasirminhas1163 3 years ago

    Hi, I want to learn the history of MLE. Can you upload a video on its history?

  • @violinonero
    @violinonero 6 years ago

    How does taking the derivative of the function give us the maximum? The derivative can be zero not only at maxima, but also at minima and saddle points. This would only work for unimodal distributions. How do we proceed for distribution functions that have many local maxima and minima?

  • @GAWRRELL
    @GAWRRELL 10 years ago

    Can you make an example using real-world data? I'm a programmer and I want to implement this algorithm.

    • @mixxxxaxxxx
      @mixxxxaxxxx 9 years ago +1

      If you found anything please pass it to me... every prof gives great lectures with some gorgeous mathematical notation (I guess the reason is that they don't communicate in plain English anymore) but no real-world examples at all

  • @rohanvaswani9418
    @rohanvaswani9418 7 years ago

    Shouldn't the probability function be p(xi|p) = ... rather than f(xi|p) = ...?

  • @chh376
    @chh376 7 years ago

    Super clear!! Thanks!!

  • @brianclark4796
    @brianclark4796 10 years ago

    What does the likelihood function look like for a distribution that is not binomial but is still discrete? Say my y is not just male and female but also transgender?

  • @Feyling__1
    @Feyling__1 7 years ago

    you have saved me

  • @lauramiller7418
    @lauramiller7418 2 years ago

    The closed captioning is pretty laughable, so it's a good thing I can actually understand! It might be less useful to someone who is not a native English speaker.

  • @gongyaochen
    @gongyaochen 9 years ago

    Very clear!

  • @shoutash
    @shoutash 9 years ago +3

    @Ben I'm a little confused. Is the pdf that you use supposed to be the actual pdf, or is it something you define arbitrarily?

    • @Prithviization
      @Prithviization 8 years ago

      +Ashish Vinayak my question too

    • @SpartacanUsuals
      @SpartacanUsuals  8 years ago

      +wannawinit Hello both, not sure I fully understand the question? The pdf that we define represents a model of the given circumstance - in most cases it is an abstraction used to try to understand and interpret reality. It is not actually a real thing. Therefore, there is no such 'actual' thing (apart from the trivial cases where we are doing simulations from a given distribution on a computer). It is just a tool used to try to make sense of things. However, it is not 'arbitrary' either. A given likelihood has a raft of assumptions behind it, which, depending on the situation, may make more or less sense. Therefore, we need to be careful when choosing our likelihood to make sure we pick one that is pertinent to the particular circumstances. Not sure if any of this helps, or if I've not understood the question. Best, Ben

    • @Prithviization
      @Prithviization 8 years ago +1

      +Ben Lambert Thanks Ben. But f(x|p) = p*xi + (1-p)*(1-xi) also gives the same result, i.e. when xi = 1, f(x|p) = p, and when xi = 0, f(x|p) = 1 - p. This makes sense too.
      Why have you specifically chosen the Bernoulli distribution as the PDF of the population?

    • @SpartacanUsuals
      @SpartacanUsuals  8 years ago +2

      +wannawinit Good question! Essentially your distribution is the same as that of a Bernoulli r.v. Because it is that of a Bernoulli r.v.! It is the same because xi can only take the values 0 or 1, meaning that the overall likelihood (of all your data) is the same as mine. Therefore all ML estimates will be the same. Hope that helps! Best, Ben

    • @Prithviization
      @Prithviization 8 years ago

      +Ben Lambert Thanks for your reply. But when I try to find Product(p*xi + (1-p)*(1-xi)) for i = 1 to n, take its log and differentiate it wrt p, I don't get the same result. Could you please explain?
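
On 0/1 data the two formulas produce identical factors, hence identical likelihoods and identical ML estimates; any apparent discrepancy can only come from manipulating the linear form symbolically rather than evaluating it on 0/1 data. A quick numerical check (made-up data):

```python
from math import prod, isclose

x = [1, 0, 1, 1, 0, 0, 1]   # made-up 0/1 data
p = 0.55                    # arbitrary candidate value of p

# Power form of the Bernoulli likelihood
power_form = prod(p ** xi * (1 - p) ** (1 - xi) for xi in x)
# Linear form proposed in the thread: each factor is still p or 1 - p
linear_form = prod(p * xi + (1 - p) * (1 - xi) for xi in x)

print(isclose(power_form, linear_form))  # True on any 0/1 data
```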

  • @shiwenzhang4343
    @shiwenzhang4343 2 years ago

    thanks a lot!

  • @engkareemhamed
    @engkareemhamed 4 years ago

    I need help with a MATLAB program on this topic. Please help me if you are able.

  • @gijsbinnenmars2891
    @gijsbinnenmars2891 1 year ago

    Love you xx

  • @gzlc
    @gzlc 5 years ago

    Amazing

  • @saurabhsinha940
    @saurabhsinha940 9 years ago

    Awesome!

  • @arjunjung2007
    @arjunjung2007 8 years ago

    I would love to get your help on some work I'm currently doing!

  • @BobWaist
    @BobWaist 1 year ago

    great video, but please get a better mic!

  • @GuillaumeR666
    @GuillaumeR666 6 years ago

    I still don't get it, guess it's not my cup of tea

  • @andersonbessa9044
    @andersonbessa9044 5 years ago

    It seems that you used p to represent both the population and the probability, hahaha. Just that was a little confusing. Other than that, great explanation!

  • @evanrudibaugh8772
    @evanrudibaugh8772 6 years ago

    The intro looks rather scary to most of the world in the 18th and 19th century. The UK is invading that purple island!

  • @user19107
    @user19107 9 years ago

    Can Xi be any value?

  • @razaws6967
    @razaws6967 6 years ago

    great!

  • @vijoyjyotineog1896
    @vijoyjyotineog1896 5 years ago +2

    The voice is damn low

  • @siddhantvats9088
    @siddhantvats9088 7 years ago

    I didn't get what P is here

    • @khumomatlala7106
      @khumomatlala7106 7 years ago

      P is the probability that we pick/choose/observe a male from the population.
      That means that 1 - P is the probability of choosing/picking/observing a female.
      In this video, he is trying to estimate what P (i.e. the probability of choosing a male in the UK) would be if it were not already given to us.
      Note: the distribution used is a Bernoulli distribution.

  • @adorablecheetah2930
    @adorablecheetah2930 5 years ago

    Voice is too low

  • @seth69
    @seth69 6 years ago

    IS THIS MLE????

    • @SpartacanUsuals
      @SpartacanUsuals  6 years ago

      Yes! Best, Ben

    • @ccuuttww
      @ccuuttww 5 years ago

      @@SpartacanUsuals Is it equal to the grand mean at the same time?

  • @shynggyskassen942
    @shynggyskassen942 3 years ago

    Watch at 2x speed

  • @tinlizzyification
    @tinlizzyification 5 years ago

    Yowsahs the captioning for this is completely whacked.

  • @larry3317
    @larry3317 7 years ago

    You should explain things more thoroughly for dumb people like me; maybe show all the working out

  • @movahediacademy9017
    @movahediacademy9017 7 years ago +7

    Why are women number 0? sexiiisttttt.. Im triggered :p

  • @kykise1395
    @kykise1395 4 years ago

    Just another waste of brain power in school. The chances of you needing this for a job are so low, unless you're pursuing a career as a meteorologist or something related.

    • @sammypan3528
      @sammypan3528 4 years ago

      MLE is in fact used in almost every field that uses statistics; that includes your banks, your financial services, the phone you are using, and any policy making (I can't list everything, but there are actually quite a lot of applications)...

  • @johnwayne2349
    @johnwayne2349 6 years ago

    I am a feminist and this is offensive

  • @souravdhiman385
    @souravdhiman385 4 years ago

    Your audio sucks. You should be a little louder

  • @larry3317
    @larry3317 7 years ago +1

    Is this guy trying to copy Khan Academy?

  • @ryllae8059
    @ryllae8059 5 years ago +1

    Seriously, you can't teach. Just stop.
    After 5:22 he just starts talking gibberish.

  • @coconutking23
    @coconutking23 10 years ago +1

    thank you sir, just made my day :)

    • @SpartacanUsuals
      @SpartacanUsuals  10 years ago

      Hi, thanks very much for your message and kind words. Best, Ben