great content, keep it up
I’ve an exam tomorrow and noticed you uploaded the video 20 minutes ago 😂 That’s a signal
😂
Thank You
really neat video, could you do one on estimators and LME?
If by LME you mean linear mixed-effects models, probably not any time soon, but that's a good idea. I have lots of videos on estimators and their properties though (MLE, MoM, Consistency, Unbiasedness, CRLB etc)
I don't understand the first "not-complete" example. E[X_1 - X_2] = E[X_1] - E[X_2] = mu - mu = 0, no?
Correct - it’s 0. The definition of completeness says that if that expected value is zero for every mu, then the function itself must be 0 almost everywhere. But X1 - X2 is nonzero with positive probability, so the statistic isn’t complete.
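A quick way to see this numerically (my own sketch, not from the video): g(X1, X2) = X1 - X2 has expectation 0 for every mu, yet the function itself is essentially never 0 - which is exactly the failure of completeness.

```python
# Illustration: E[X1 - X2] = 0 for every mu, but X1 - X2 is
# (almost surely) not the zero function, so (X1, X2) is not complete.
import numpy as np

rng = np.random.default_rng(0)
for mu in (-3.0, 0.0, 5.0):
    x1 = rng.normal(mu, 1.0, size=200_000)
    x2 = rng.normal(mu, 1.0, size=200_000)
    g = x1 - x2
    # Sample mean of g is ~0 regardless of mu, yet g itself is never 0
    print(f"mu={mu:+.1f}  mean(g)={g.mean():+.4f}  frac(g == 0)={(g == 0).mean():.4f}")
```

Completeness would require g to be zero almost everywhere whenever its expectation is zero for all mu; here the expectation vanishes but g clearly doesn’t.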