This is the most straightforward explanation for any struggling student. If you cannot understand this tutorial, forget this subject. Thank you.
Beautiful explanation. Finally following what's going on. My professor just throws out random formulas with no idea where they come from. Love your work.
Thank you! Your quality of teaching far exceeds that of the average statistics professor.
Best video I've seen regarding the likelihood function.
Hi. You probably saved my thesis. I was finding it difficult to wrap my head around MLE as steps were confusing. I am trying to model levy walk and this video has been very helpful. Kudos.
Really, really an expected-likelihood way to present a concept. Thanks
Thank you so much for this, bro. I found the right guide for understanding MLE.
Great tutorial. Best on MLE I have ever seen. Thanks Phil.
Indeed understandable. It is 12:30 am here and I was about to give up on finding a nice thing on RUclips. Thanks
+Mehran Hosseinzadeh Never give up! Hope you were not disappointed.
Well, I was close to it, but thanks to you I now have a much better understanding of MLE
MLE is one of those topics that first-year students find hard to grasp. It's something they've not seen in school.
This is seriously the best explanation of maximum likelihood estimation that I have ever seen. THANK YOU SO MUCH. You truly saved my day, if that means anything to you ;)
There is just one little thing I don't really understand. Why do we get e^(-lambda)(sum e(i)) and not e^(-lambda)(sum e(n))? Do we not sum e n times?
What a transparent easy to follow video. Great work!
Thank you, the best explanation I've found by far.
Great explanation, Phil! Thank you.
Nice video, thank you
Thanks for the refresher! You made me a little nervous when you missed the log for lambda; I was about to go and re-learn maths haha
Very clear explanation. Thank you!
What a great work! thank you
I like it. Good explanation.
So much better than my useless lecturer.
Very nice... yeah, of course it's a gentle one
Can you please make an example of this using real world data?
you're the best
Nice video
you are amazing
I've got two questions:
1. What is the logic behind maximizing the joint probability? I mean, why maximize the joint probability in order to find theta?
2. If I am not sure about the range of lambda, how will I check where the global maximum is situated?
1. You may think of it as follows: given a model (chosen by you) for the data, MLE finds the parameter values that are most consistent with the data.
2. In the models newbies first come across (e.g. those based on the exponential family of distributions), the likelihood function is concave, so an MLE will be the global maximizer. If your model has one parameter, and the second derivative of the likelihood (or, more easily, of the log-likelihood) is negative for all permissible parameter values, then the function is concave and a stationary point is the global maximum.
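To illustrate point 2 with a concrete one-parameter case (my own example, not from the video): for Poisson data the log-likelihood is l(lambda) = -n*lambda + (sum x_i)*log(lambda) - sum log(x_i!), its second derivative -(sum x_i)/lambda^2 is negative for every lambda > 0, so the stationary point lambda-hat = sample mean is the global maximum. A minimal Python sketch, with function names of my own choosing:

```python
import math

def poisson_log_lik(lam, xs):
    # log L(lam) = -n*lam + (sum x_i)*log(lam) - sum log(x_i!)
    # lgamma(x + 1) gives log(x!) without overflow
    n = len(xs)
    return -n * lam + sum(xs) * math.log(lam) - sum(math.lgamma(x + 1) for x in xs)

def log_lik_second_deriv(lam, xs):
    # d^2/dlam^2 of the log-likelihood: -(sum x_i) / lam^2
    # Negative for all lam > 0, so the log-likelihood is concave.
    return -sum(xs) / lam ** 2

xs = [2, 3, 1, 4, 2, 3]          # toy sample
mle = sum(xs) / len(xs)          # closed-form Poisson MLE: the sample mean

# Concavity guarantees the MLE beats nearby parameter values
assert poisson_log_lik(mle, xs) >= poisson_log_lik(mle + 0.1, xs)
assert poisson_log_lik(mle, xs) >= poisson_log_lik(mle - 0.1, xs)
assert log_lik_second_deriv(mle, xs) < 0
```

Since the second derivative is negative everywhere on the permissible range, there is no need to worry about where a local maximum sits: the single stationary point is the global one.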
I want to know the meaning of penalized MLE.
I'd like a video on how to do maximum likelihood estimation using SPSS.
For this, please refer to my SPSS videos.