- Videos: 23
- Views: 91,658
Brian Greco - Learn Statistics!
Joined 16 Jun 2021
linktr.ee/briangreco
The Monty Hall Problem without probability
The Monty Hall Problem is a classic probability puzzle with a counter-intuitive result. Instead of using probability trees, let's try to deeply understand the problem!
Views: 730
Videos
Analysis of Variance (ANOVA) and F statistics .... MADE EASY!!!
165 views · 9 hours ago
Learn the intuition behind ANOVA and calculating F statistics!
The Cramer-Rao Lower Bound ... MADE EASY!!!
796 views · 1 month ago
What is a Cramer-Rao Lower Bound? How can we prove an estimator is the best possible estimator? What is the efficiency of an estimator?
Outliers in Data Analysis... and how to deal with them!
800 views · 2 months ago
How do we deal with outliers in data analysis? There's no one-size-fits-all solution!
Link functions for GLMs... MADE EASY!!!
452 views · 3 months ago
What is a link function in a generalized linear model (GLM)? Find out!
The Uniform Distribution MLE is... very UNLIKELY
263 views · 3 months ago
How do we find the maximum likelihood estimate for the uniform distribution? And learn an important lesson that MLEs may not mean exactly what you think!
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
5K views · 3 months ago
What is the difference between Bayesian and Frequentist statistics?
Maximum Likelihood Estimation ... MADE EASY!!!
10K views · 3 months ago
Learn all about Maximum Likelihood Estimation (MLE)! If you don't know what a likelihood function is, check out my video here: ruclips.net/video/bXGjQnpGGIo/видео.html
The simplest non-parametric test... The Sign Test
668 views · 3 months ago
Learn about a cool and very easy non-parametric test called the sign test.
Skewness... MADE EASY!!!
296 views · 3 months ago
Learn about right and left skewed probability distributions, and how to remember which is which!
Unbiased Estimators ... Made Easy!
1.8K views · 3 months ago
What is an unbiased estimator? Learn about a nice property of some estimators!
Can more data be BAD??? (The 10% rule and Finite Population Correction)
231 views · 3 months ago
Learn about why having too much data can sometimes be a bad thing! If we sample too much of the population without replacement, our data ends up being dependent and our calculations are very inaccurate.
Inverse Transform Sampling ... MADE EASY!!!
843 views · 3 months ago
Learn how to generate any random variable using a uniform(0,1) random number generator and the inverse CDF function!
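The recipe in this description can be sketched in a few lines of Python (my own example, not code from the video): if U ~ Uniform(0,1) and F is an invertible CDF, then F⁻¹(U) has CDF F. Here it is applied to the Exponential(lam) distribution, whose inverse CDF is -ln(1 - u)/lam.

```python
import math
import random

def exponential_inverse_transform(lam, n, seed=0):
    """Draw n Exponential(lam) samples via inverse transform sampling.

    If U ~ Uniform(0,1), then X = -ln(1 - U) / lam has CDF
    F(x) = 1 - exp(-lam * x), i.e. X ~ Exponential(lam).
    """
    rng = random.Random(seed)
    return [-math.log(1 - rng.random()) / lam for _ in range(n)]

samples = exponential_inverse_transform(lam=2.0, n=100_000)
print(sum(samples) / len(samples))  # should be close to 1/lam = 0.5
```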
Regularization... Made Easy!!!
184 views · 3 months ago
Learn about the concept of regularization, which helps keep our model from overfitting the training data so it generalizes to new data!
Heteroskedasticity and Homoskedasticity... What are they???
983 views · 3 months ago
Learn about these big words in statistics and the equal variance assumption!
Independent vs Mutually Exclusive Events ... MADE EASY!!!
625 views · 3 months ago
Probability vs. Likelihood ... MADE EASY!!!
26K views · 3 months ago
The Method of Moments ... Made Easy!
9K views · 4 months ago
Sufficient Statistics and the Factorization Theorem
3.7K views · 4 months ago
Chebyshev's Inequality ... Made Easy!
9K views · 5 months ago
Empirical Rule (68-95-99.7 Rule) and Z-scores!
1.2K views · 5 months ago
What is R-Squared (R^2) ... REALLY?
6K views · 5 months ago
Actually, I have a 3 in 3 Chance of winning. A car is nice, but a new furry friend is even nicer.
If revealing goats "transfers" the probability of that door to the remaining doors, why doesn't the probability get transferred to the door we selected? The only difference between them is that we selected one of the doors; how does our selection affect what's behind it? Isn't this literally the gambler's fallacy?
The chance that the car is behind your original door is 1/3. The chance that it is behind the other two doors is 2/3. This second probability isn't "transferred" to the other door, but stays in the "non-selected" part of the doors. So the two other doors still have a 2/3 chance of containing the car, but one of the doors is open and showing a goat. Imagine the whole problem differently: You have to choose one of 3 doors (chance is 1/3). But then Monty Hall allows you to change your choice to the two other doors. You can either choose the one original door or choose the two other doors. It's obvious that you would choose the two doors.
@@zucmaidik1442 Isn't the only difference between the first and the second door that we selected the first one? I'm sorry, ever since I heard of this I really can't wrap my head around it.
The door you selected is different than the other doors because you selected it, so there is no way the host interacts with it or reveals any information about your door. You know the host will never reveal a goat behind the door you selected, so when he does reveal a goat behind a door, the probability is only being transferred to doors you didn't initially select.
@@sujalgarewal2685 Don't look at the other door as a single door, but as the two doors you didn't select. Your second choice is between your original door and the two other doors, one of which is open. The chance that your original door has the car is 1/3. The chance that the other two doors have the car is 2/3. Since one of the two other doors is open, the chance of 2/3 is now applicable to the other closed door.
I like to teach this problem using sets: the set of doors you picked (which contains only 1 door) and the set of doors you didn't pick (the negated set, initially containing 2 doors). Removing 1 of the doors from the 2nd set doesn't change the overall probability that the set you initially picked has only a 1/3 chance of being correct and the negated set still has a 2/3 chance of being correct, but within that set there is now only 1 door. So switching sets (and therefore switching doors) doubles your chances.
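The set argument above is easy to check with a quick simulation (my own sketch, not from the video): switching wins exactly when the initial pick was wrong, which happens 2/3 of the time.

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the Monty Hall game; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial choice
        # Host opens a door that is neither the contestant's pick nor
        # the car (choosing at random when two doors qualify).
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # Switch to the single remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # close to 2/3
print(monty_hall(switch=False))  # close to 1/3
```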
your videos are the best explanations I've found anywhere online, keep it up! (Also, requesting a video on ridge/lasso regression)
Thanks! I think I will probably make a video on ridge or LASSO eventually, but it might not be for a while! In the meantime, I do have a short, not-very-technical video on regularization, the motivation behind LASSO/Ridge.
love your videos❤
You are really good at explaining things simply... there are so few good pure statistics teachers on youtube... thanks a lot for your help!
I think this is the first time I've genuinely grasped this concept. Thank you so much!
Wish you were my prof :/
Hands down the best video on MLE!!!
Thank you!
can you explain why is your intercept -500? the diagram shows that the intercept of the line should be positive. so why is it negative?
The y-intercept is not shown on the graph at all, because the x axis only goes from 60 to 70. X = 0 is way to the left.
@@briangreco2718 But the regression here is drawn with the origin at 0. Also, the regression line cuts the y-axis somewhere between 50 and 100; let's assume 75. So it shows that when x = 0, y = 75, which is basically the intercept. I am a bit confused on this: how is the intercept -500 when the graph shows something else?
The graph doesn’t show the x=0, so you are reading the graph incorrectly. The equation is correct and you understand the equation correctly, but you are reading the graph incorrectly. There is no y axis.
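The point in this thread can be reproduced with a small least-squares fit (the numbers below are made up for illustration, not taken from the video): when x only runs from 60 to 70, every plotted point can sit well above zero even though the intercept, the value at x = 0, is strongly negative.

```python
# Hypothetical data mimicking the plot discussed above: x runs only from
# 60 to 70, so x = 0 is far to the left of the visible window.
xs = [60, 62, 64, 66, 68, 70]
ys = [10, 27, 44, 61, 78, 95]  # exactly on the line y = 8.5*x - 500

# Ordinary least squares by hand: slope = Sxy / Sxx, intercept from means.
n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

print(slope, intercept)  # 8.5 -500.0
```

All fitted y values in the plotted range (x = 60 to 70) are between 10 and 95, yet the intercept is -500, because the y-axis crossing is far outside the plot.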
I like how you are pretty much taking 'Fundamentals of Mathematical Statistics' but verbalizing and visualizing it... it's very handy and I would love to continue watching every video you make
Thank you!
that's a fantastic visual explanation... you are about to become very popular amongst statistics students worldwide
3:04 I love how he describes the independence of these samples by talking about the coins coming from '3 sets of 10 flips'... this ensures that the second sample isn't reliant on the first, the third sample isn't reliant on the second and first, and so on... in other words, the samples are independent. If the samples were taken from a single binomial set, the probability of success of the second flip as well as the first flip would depend on the success or failure of the first sample.
To be clear, we are still assuming all 30 flips are independent and have the same probability of heads - we are just changing how we summarize the data. Whether we are talking about each flip individually, 3 sets of 10, or 1 set of 30, all 30 coin flips are independent.
Fantastic video... preparing for IIT JAM MS
Thank you!
man, u managed to explain something that my prof spent 1 whole month explaining in a singular video, a fantastically made video!!!!!
AMAZING explanation!
Fisher information is the negative of the expected value of the second derivative of log L, so why do we multiply by 'n' to get it?
I was assuming the L here is the likelihood of a single data point. In that case, you just multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer will already contain the n and you don't have to multiply at the end. The two methods are equivalent when the data is independent and identically distributed.
@@briangreco2718 Thanks for replying so quickly! I have another question: does the MLE of the population mean always guarantee that it will have the CRLB variance?
Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB of unbiased estimators, because the MLE is sometimes biased. For example, in a uniform distribution on [0,theta], the MLE is biased, and the Fisher Information is not even defined. My guess is that this applies for some "location families", which the normal, binomial, poisson would all be. For a "scale family" like the exponential distribution, in the parameterization where the mean is 1/lambda, I do not believe the MLE meets the CRLB.
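A concrete version of the "multiply by n" answer above (my own Bernoulli example, not from the video): when the n observations are iid, the log-likelihood is a sum of n identical terms, so the total Fisher information is exactly n times the information in one observation.

```python
# For one Bernoulli(p) observation, log f(x|p) = x*log(p) + (1-x)*log(1-p),
# so -d^2/dp^2 log f = x/p^2 + (1-x)/(1-p)^2.  Taking the expectation at
# the true p gives I1(p) = 1/p + 1/(1-p) = 1/(p*(1-p)).
p = 0.3
n = 10

# E[-d^2/dp^2 log f(X|p)] for a single observation:
info_one = p * (1 / p**2) + (1 - p) * (1 / (1 - p)**2)

print(info_one)      # equals 1/(p*(1-p)), about 4.76
print(n * info_one)  # Fisher information of all n iid observations
```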
Thanks for uploading all this content. I am about to begin my masters in data science soon and I was trying to grasp some math theory which is hard for me coming from a CS Background. Your videos make it so simple to digest all these topics.
The clarity you provide (as in, what the zero or 1 on the x-axis of the normal distribution represent, but more importantly what they don't represent, which has been a source of confusion and a drag for me) is now much clearer and finally validates a hunch/H-sub-A I've held; too many terms in statistics which I've encountered have been near-tautologies and a gigantic obstacle for me. In my humble and quasi-researched opinion about learning, cognitive transfer, linguistics, and abstraction, I postulate that a new subject, especially one often found hardly intuitive (clearly as a function of many factors), requires the most clarity: for me, an exhaustive list of features and areas of overlap, as well as an explicit articulation of the areas or features an idea does not connect with. THANK YOU for the excellent presentation!
Thank you, absolutely!
Great video!
I have questions about statistical inference. Can you help me solve them?
If you have a question related to the video, I may be able to help. If it’s not related to the video, I probably can’t help.
@@briangreco2718 It is about statistical inference, unbiased estimator and sufficient statistic
@@briangreco2718 Yes, related to the video
I love how you put the context of sufficiency in real life chance events. Thank you for this gold video!
this is very helpful, thank you so much ❤
After ages of scrolling through the internet to understand what probability means, I have finally reached my destination. I always felt that the 'probability of flipping a coin' analogy has different interpretations: one where we can confirm our beliefs after something has happened a large number of times (now I can confirm that it is officially called the FREQUENTIST APPROACH), and another where we just know that it is equally likely. Wrapping my head around these concepts and confirming my beliefs was really painful. I finally feel satisfied. Thanks Brian for this video.
I freaking love you
AWESOME VIDEO
I'm glad I found your channel. I have never seen a better explanation of mathematical statistics, nobody else is even close! You are doing an amazing job there
Thanks for the kind words :)
you are a genius
This is a great explanation. I love the visuals showing how they are all related. Thank you.
Nice videos. I'm now preparing for my masters and they will be quite useful; the connection between the CRLB and the standard error of the MLE estimates makes this very nice.
Thank you! You made this make total intuitive sense in less than 2 minutes where other videos were taking 10+ minutes and I still didn't have that intuitive understanding
So glad it helped! I agree, it is usually presented in a way that hides the very simple intuition behind the idea.
Well done! Thanks for the vid, however the video is a bit too complicated for me.
Great explanation
insane explanation
Wow, thank you Brian, very clear explanation
Please upload more of these videos, they're so helpful
Ain't no one teaches statistics like you ❤ Thank you so much for such elaborate explanations. And your illustrations of these inequalities made them very simple to understand
It's giving a hint of the Heisenberg uncertainty principle
Thx, brother. Making this sht feel like 5th grade math. Ez PZ. wazzup then edit: no diddy, cute eyes brother. go get em
truly fantastic video, watched this and immediately popped off on my homework question. shoutout!
7:26 It would have been easier to use the variance formula for the uniform, (b-a)^2/12, and rearrange E(X^2) = Var(X) + E(X)^2
Yeah, that’s what I probably would’ve done myself to save some calculus too - for the video I just wanted to emphasize the idea rather than the most efficient method. Thanks for watching!
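The shortcut in that comment, written with the square that's easy to drop, is E(X^2) = Var(X) + [E(X)]^2; here is my own quick numerical check (not from the video) for a Uniform(a, b) variable.

```python
# For X ~ Uniform(a, b): Var(X) = (b-a)^2/12 and E(X) = (a+b)/2, so the
# shortcut E[X^2] = Var(X) + E[X]^2 should match the direct integral
# E[X^2] = ∫ x^2 / (b-a) dx from a to b = (b^3 - a^3) / (3*(b-a)).
a, b = 2.0, 5.0

shortcut = (b - a) ** 2 / 12 + ((a + b) / 2) ** 2
direct = (b ** 3 - a ** 3) / (3 * (b - a))

print(shortcut, direct)  # 13.0 13.0
```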
these are perfect! dumbs it down really well while retaining all the info
Thanks!
Thank you!
Thanks for the straightforward explanation!! Now I can understand why "sufficient" is sufficient!
Send love to U! Your Mic is getting better
Thanks, I got a new microphone so the only video with the old microphone is the Markov's inequality one :) All other current videos and future ones should have very good audio!
What if you don't know what the data distribution is?
Maximum likelihood basically requires that you assume something about the distribution, otherwise you get those extreme examples that I mention throughout the video.
@@briangreco2718 thank you for the answer
Finally understood what it means when we say intervals are random in frequentist paradigm.
By God what an absolutely amazing video.