Brian Greco - Learn Statistics!
  • 23 videos
  • 91,658 views
The Monty Hall Problem without probability
The Monty Hall Problem is a classic probability puzzle with a counter-intuitive result. Instead of using probability trees, let's try to deeply understand the problem!
730 views

Videos

Analysis of Variance (ANOVA) and F statistics .... MADE EASY!!!
165 views · 9 hours ago
Learn the intuition behind ANOVA and calculating F statistics!
The Cramer-Rao Lower Bound ... MADE EASY!!!
796 views · 1 month ago
What is a Cramer-Rao Lower Bound? How can we prove an estimator is the best possible estimator? What is the efficiency of an estimator?
Outliers in Data Analysis... and how to deal with them!
800 views · 2 months ago
How do we deal with outliers in data analysis? There's no one-size-fits-all solution!
Link functions for GLMs... MADE EASY!!!
452 views · 3 months ago
What is a link function in a generalized linear model (GLM)? Find out!
The Uniform Distribution MLE is... very UNLIKELY
263 views · 3 months ago
How do we find the maximum likelihood estimate for the uniform distribution? And learn an important lesson that MLEs may not mean exactly what you think!
Bayesian vs. Frequentist Statistics ... MADE EASY!!!
5K views · 3 months ago
What is the difference between Bayesian and Frequentist statistics?
Maximum Likelihood Estimation ... MADE EASY!!!
10K views · 3 months ago
Learn all about Maximum Likelihood Estimation (MLE)! If you don't know what a likelihood function is, check out my video here: ruclips.net/video/bXGjQnpGGIo/видео.html
The simplest non-parametric test... The Sign Test
668 views · 3 months ago
Learn about a cool and very easy non-parametric test called the sign test.
Skewness... MADE EASY!!!
296 views · 3 months ago
Learn about right and left skewed probability distributions, and how to remember which is which!
Unbiased Estimators ... Made Easy!
1.8K views · 3 months ago
What is an unbiased estimator? Learn about a nice property of some estimators!
Can more data be BAD??? (The 10% rule and Finite Population Correction)
231 views · 3 months ago
Learn about why having too much data can sometimes be a bad thing! If we sample too much of the population without replacement, our data ends up being dependent and our calculations are very inaccurate.
Inverse Transform Sampling ... MADE EASY!!!
843 views · 3 months ago
Learn how to generate any random variable using a uniform(0,1) random number generator and the inverse CDF function!
Regularization... Made Easy!!!
184 views · 3 months ago
Learn about the concept of regularization, which makes sure our model is not overfit to new data!
Heteroskedasticity and Homoskedasticity... What are they???
983 views · 3 months ago
Learn about these big words in statistics and the equal variance assumption!
Independent vs Mutually Exclusive Events ... MADE EASY!!!
625 views · 3 months ago
Probability vs. Likelihood ... MADE EASY!!!
26K views · 3 months ago
The Method of Moments ... Made Easy!
9K views · 4 months ago
Sufficient Statistics and the Factorization Theorem
3.7K views · 4 months ago
Chebyshev's Inequality ... Made Easy!
9K views · 5 months ago
Empirical Rule (68-95-99.7 Rule) and Z-scores!
1.2K views · 5 months ago
What is R-Squared (R^2) ... REALLY?
6K views · 5 months ago
Markov's Inequality ... Made Easy!
12K views · 1 year ago

Comments

  • @zucmaidik1442
    @zucmaidik1442 1 day ago

    Actually, I have a 3 in 3 Chance of winning. A car is nice, but a new furry friend is even nicer.

  • @sujalgarewal2685
    @sujalgarewal2685 1 day ago

    If revealing goats "transfers" the probability of that door to the remaining doors, why doesn't the probability get transferred to the door we selected? The only difference between them is that we selected one of the doors, so how does our selection affect what's behind it? Isn't this literally the gambler's fallacy?

    • @zucmaidik1442
      @zucmaidik1442 1 day ago

      The chance that the car is behind your original door is 1/3. The chance that it is behind the other two doors is 2/3. This second probability isn't "transferred" to the other door, but stays in the "non-selected" part of the doors. So the two other doors still have a 2/3 chance of containing the car, but one of the doors is open and showing a goat. Imagine the whole problem differently: you have to choose one of 3 doors (chance is 1/3). But then Monty Hall allows you to change your choice to the two other doors. You can either choose the one original door or choose the two other doors. It's obvious that you would choose the two doors.

    • @sujalgarewal2685
      @sujalgarewal2685 1 day ago

      @@zucmaidik1442 Isn't the only difference between the first and the second door that we selected the first one? I'm sorry, ever since I heard of this I really can't wrap my head around it.

    • @briangreco2718
      @briangreco2718 1 day ago

      The door you selected is different than the other doors because you selected it, so there is no way the host interacts with it or reveals any information about your door. You know the host will never reveal a goat behind the door you selected, so when he does reveal a goat behind a door, the probability is only being transferred to doors you didn't initially select.

    • @zucmaidik1442
      @zucmaidik1442 1 day ago

      @@sujalgarewal2685 Don't look at the other door as a single door, but as the two doors you didn't select. Your second choice is between your original door and the two other doors, one of which is open. The chance that your original door has the car is 1/3. The chance that the other two doors have the car is 2/3. Since one of the two other doors is open, the chance of 2/3 is now applicable to the other closed door.

    • @Casey_W
      @Casey_W 1 day ago

      I like to teach this problem using sets: the set of doors you picked (which contains only 1 door) and the set of doors you didn't pick (the negated set, initially containing 2 doors). Removing 1 of the doors from the 2nd set doesn't change the overall probability that the set you initially picked has only a 1/3 chance of being correct and the negated set still has a 2/3 chance of being correct, but within that set there is now only 1 door. So switching sets (and therefore switching doors) doubles your chances.
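
    A quick way to check the switching argument in the thread above is to simulate the game. Below is a minimal Monte Carlo sketch in Python (an editorial illustration, not from the video or the commenters): switching wins about 2/3 of the time, staying only about 1/3.

        import random

        def play(switch, n_doors=3):
            car = random.randrange(n_doors)    # door hiding the car
            pick = random.randrange(n_doors)   # contestant's first pick
            # The host opens a door that is neither the pick nor the car.
            opened = random.choice([d for d in range(n_doors) if d != pick and d != car])
            if switch:
                # Move to the one remaining unopened door.
                pick = next(d for d in range(n_doors) if d != pick and d != opened)
            return pick == car

        trials = 100_000
        print(sum(play(True) for _ in range(trials)) / trials)   # ~0.667 when switching
        print(sum(play(False) for _ in range(trials)) / trials)  # ~0.333 when staying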

  • @andyyang8876
    @andyyang8876 3 days ago

    your videos are the best explanations I've found anywhere online, keep it up! (Also, requesting a video on ridge/lasso regression)

    • @briangreco2718
      @briangreco2718 3 days ago

      Thanks! I think I will probably make a video on ridge or LASSO eventually, but it might not be for a while! In the meantime, I do have a short, not-very-technical video on regularization, the motivation behind LASSO/Ridge.

  • @crystal-pang
    @crystal-pang 3 days ago

    love your videos❤

  • @RoyalYoutube_PRO
    @RoyalYoutube_PRO 4 days ago

    You are really good at explaining things simply... there are so few good pure statistics teachers on youtube... thanks a lot for your help!

  • @drose98
    @drose98 4 days ago

    I think this is the first time I've genuinely grasped this concept. Thank you so much!

  • @ishitaraj7723
    @ishitaraj7723 6 days ago

    Wish you were my prof :/

  • @ishitaraj7723
    @ishitaraj7723 6 days ago

    Hands down the best video on MLE!!!

  • @PriyankaRoy-r1g
    @PriyankaRoy-r1g 11 days ago

    Can you explain why your intercept is -500? The diagram shows that the intercept of the line should be positive, so why is it negative?

    • @briangreco2718
      @briangreco2718 11 days ago

      The y-intercept is not shown on the graph at all, because the x axis only goes from 60 to 70. X = 0 is way to the left.

    • @PriyankaRoy-r1g
      @PriyankaRoy-r1g 11 days ago

      @@briangreco2718 But the regression here is drawn with the origin at 0. Also, the regression line is cutting the Y axis somewhere between 50 and 100, let's assume 75. So it shows that when x = 0, y = 75, which basically is the intercept. I am a bit confused on this: how is the intercept -500 when the graph shows something else?

    • @briangreco2718
      @briangreco2718 11 days ago

      The graph doesn't show x = 0, so you are reading the graph incorrectly. The equation is correct and you understand the equation correctly, but you are reading the graph incorrectly: there is no y axis in view.
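
    A tiny numeric sketch of the point being made here (the -500 intercept is from the thread; the slope of 9.5 is hypothetical): over a plot window running from x = 60 to x = 70, the line sits roughly between 70 and 165, while the true y-intercept at x = 0 is far off-screen at -500.

        # Hypothetical line: only the -500 intercept comes from the thread; the slope is made up.
        intercept, slope = -500.0, 9.5

        def yhat(x):
            return intercept + slope * x

        print(yhat(0))    # -500.0 -> the true y-intercept, far off the visible graph
        print(yhat(60))   #   70.0 -> where the line meets the left edge of a plot starting at x = 60
        print(yhat(70))   #  165.0 -> where it meets the right edge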

  • @RoyalYoutube_PRO
    @RoyalYoutube_PRO 13 days ago

    I like how you are pretty much taking 'Fundamentals of Mathematical Statistics' but verbalizing and visualizing it... it's very handy and I would love to continue watching every video you make

  • @RoyalYoutube_PRO
    @RoyalYoutube_PRO 13 days ago

    that's a fantastic visual explanation... you are about to become very popular amongst statistics students worldwide

  • @RoyalYoutube_PRO
    @RoyalYoutube_PRO 14 days ago

    3:04 I love how he describes the independence of these samples by talking about the coins coming from '3 sets of 10 flips'... this ensures that the second sample isn't reliant on the first, the third sample isn't reliant on the second and first, and so on... in other words, the samples are independent. If the samples were all taken as a single binomial set, the probability of success of the second flip as well as the first flip would be dependent on the success or failure of the first sample.

    • @briangreco2718
      @briangreco2718 14 days ago

      To be clear, we are still assuming all 30 flips are independent and have the same probability of heads - we are just changing how we summarize the data. Whether we are talking about each flip individually, 3 sets of 10, or 1 set of 30, all 30 coin flips are independent.
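
    A small check of this point, with made-up flip counts: describing the same 30 independent flips as 3 binomial sets of 10 or as 1 set of 30 changes the log-likelihood only by a constant that does not involve p, so nothing about the inference for p changes.

        from math import comb, log

        heads_per_set = [7, 4, 6]        # made-up results of 3 sets of 10 flips
        flips_per_set = [10, 10, 10]

        def loglik_grouped(p):
            # Three independent Binomial(10, p) observations.
            return sum(log(comb(m, h)) + h * log(p) + (m - h) * log(1 - p)
                       for m, h in zip(flips_per_set, heads_per_set))

        def loglik_pooled(p):
            # One Binomial(30, p) observation: the total number of heads.
            h, m = sum(heads_per_set), sum(flips_per_set)
            return log(comb(m, h)) + h * log(p) + (m - h) * log(1 - p)

        for p in (0.4, 0.5, 0.6):
            # The gap is the same constant for every p, so the likelihoods peak at the same p.
            print(round(loglik_grouped(p) - loglik_pooled(p), 6))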

  • @RoyalYoutube_PRO
    @RoyalYoutube_PRO 14 days ago

    Fantastic video... preparing for IIT JAM MS

  • @5romir
    @5romir 14 days ago

    Thank you!

  • @Susan_8626
    @Susan_8626 17 days ago

    As they had no wings the strangers could not fly away, and if they jumped down from such a height they would surely be killed.

  • @brucelam115
    @brucelam115 18 days ago

    man, u managed to explain something that my prof spent 1 whole month explaining in a singular video, a fantastically made video!!!!!

  • @siavashk100
    @siavashk100 23 days ago

    AMAZING explanation!

  • @ridwanwase7444
    @ridwanwase7444 24 days ago

    Fisher information is the negative of the expected value of the second derivative of log L, so why do we multiply by 'n' to get it?

    • @briangreco2718
      @briangreco2718 24 days ago

      I was assuming the L here is the likelihood of a single data point. In that case, you just multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer will already contain the n and you don't have to multiply at the end. The two methods are equivalent when the data is independent and identically distributed.

    • @ridwanwase7444
      @ridwanwase7444 24 days ago

      @@briangreco2718 Thanks for replying so quickly! I have another question: does the MLE of the population mean always guarantee that it will have the CRLB variance?

    • @briangreco2718
      @briangreco2718 24 days ago

      Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB of unbiased estimators, because the MLE is sometimes biased. For example, in a uniform distribution on [0, theta], the MLE is biased, and the Fisher information is not even defined. My guess is that this applies to some "location families", which the normal, binomial, and Poisson would all be. For a "scale family" like the exponential distribution, in the parameterization where the mean is 1/lambda, I do not believe the MLE meets the CRLB.
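
    Two quick checks of the claims in this thread, as an editorial sketch (assumes SymPy and NumPy; the Bernoulli example and the numbers are illustrative, not from the video): (1) for iid data, the information from n observations is n times the per-observation information, and (2) the MLE for Uniform(0, theta) is biased.

        import numpy as np
        import sympy as sp

        # (1) Bernoulli(p): per-observation information is I_1(p) = 1/(p(1-p)), and I_n = n * I_1.
        p, n, x, s = sp.symbols('p n x s', positive=True)
        logL1 = x * sp.log(p) + (1 - x) * sp.log(1 - p)   # log-likelihood of one flip
        I1 = -(sp.diff(logL1, p, 2)).subs(x, p)           # -E[d^2 log L / dp^2], using E[x] = p
        logLn = s * sp.log(p) + (n - s) * sp.log(1 - p)   # all n flips via the total s = sum(x_i)
        In = -(sp.diff(logLn, p, 2)).subs(s, n * p)       # using E[s] = n*p
        print(sp.simplify(In - n * I1))                   # 0, i.e. I_n(p) = n * I_1(p)

        # (2) Uniform(0, theta): the MLE is max(x), and E[max] = m/(m+1) * theta < theta.
        rng = np.random.default_rng(1)
        theta, m, reps = 5.0, 10, 200_000
        mles = rng.uniform(0.0, theta, size=(reps, m)).max(axis=1)
        print(mles.mean(), m / (m + 1) * theta)           # both close to 4.545, below theta = 5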

  • @ligandroyumnam5546
    @ligandroyumnam5546 26 days ago

    Thanks for uploading all this content. I am about to begin my masters in data science soon and I was trying to grasp some math theory which is hard for me coming from a CS Background. Your videos make it so simple to digest all these topics.

  • @TheErgunPascu
    @TheErgunPascu 28 days ago

    The clarity you provide (as in, what the 0 or 1 on the x-axis of the normal distribution represent, but more importantly what they don't represent) has cleared up a long-standing source of confusion (and a drag) for me, and finally validates a hunch/H-sub-A I've held: too many of the terms I've encountered in statistics have been near-tautologies and a gigantic obstacle for me. In my humble and quasi-researched opinion about learning, cognitive transfer, linguistics, and abstraction, I postulate that a new subject, especially one that is often found hardly intuitive (clearly as a function of many factors), requires the most clarity, and for me an exhaustive list of features and areas of overlap, as well as an explicit articulation of the areas or features an idea does not connect with. THANK YOU for the excellent presentation!

  • @LayLenChing
    @LayLenChing 28 days ago

    Great video!

  • @ninuuh
    @ninuuh 1 month ago

    I have questions about statistical inference. Can you help me solve them?

    • @briangreco2718
      @briangreco2718 1 month ago

      If you have a question related to the video, I may be able to help. If it’s not related to the video, I probably can’t help.

    • @ninuuh
      @ninuuh 1 month ago

      @@briangreco2718 It is about statistical inference, unbiased estimator and sufficient statistic

    • @ninuuh
      @ninuuh 1 month ago

      @@briangreco2718 Yes, related to the video

  • @charlesSTATS
    @charlesSTATS 1 month ago

    I love how you put the context of sufficiency in real life chance events. Thank you for this gold video!

  • @rafaelhadi6342
    @rafaelhadi6342 1 month ago

    this is very helpful, thank you so much ❤

  • @avadhsavsani1148
    @avadhsavsani1148 1 month ago

    After ages of scrolling through the internet to understand what probability means, I finally reach my destination. I always felt that the analogy of 'probability of flipping a coin' has different interpretations: one where we can confirm our beliefs after something has happened a large number of times (now I can confirm that it is officially called the FREQUENTIST APPROACH), and another where we just know that it is equally likely. Wrapping my head around these concepts and confirming my beliefs was really painful. I finally feel satisfied. Thanks Brian for this video.

  • @MoneerGhanem
    @MoneerGhanem 1 month ago

    I freaking love you

  • @conceptualprogress
    @conceptualprogress 1 month ago

    AWESOME VIDEO

  • @ops428
    @ops428 1 month ago

    I'm glad I found your channel. I have never seen a better explanation of mathematical statistics, nobody else is even close! You are doing an amazing job there

  • @jwbpark
    @jwbpark 1 month ago

    you are a genius

  • @sabinaharding1990
    @sabinaharding1990 1 month ago

    This is a great explanation. I love the visuals showing how they are all related. Thank you.

  • @santiagodm3483
    @santiagodm3483 1 month ago

    Nice videos. I'm now preparing for my masters and this will be quite useful; the connection between the CRLB and the standard error of the estimates from MLE makes this very nice.

  • @kevinbreckman5321
    @kevinbreckman5321 1 month ago

    Thank you! You made this make total intuitive sense in less than 2 minutes where other videos were taking 10+ minutes and I still didn't have that intuitive understanding

    • @briangreco2718
      @briangreco2718 1 month ago

      So glad it helped! I agree, it is usually presented in a way that hides the very simple intuition behind the idea.

  • @olaoluwaodeyemi4059
    @olaoluwaodeyemi4059 1 month ago

    Well done! Thanks for the vid, however the video is a bit too complicated for me.

  • @phillipmunkhuwa5435
    @phillipmunkhuwa5435 1 month ago

    Great explanation

  • @severed_toast
    @severed_toast 1 month ago

    insane explanation

  • @jann4249
    @jann4249 1 month ago

    wow, thank you Brian, very clear explanation

  • @jwbpark
    @jwbpark 2 months ago

    Please upload more of these videos, they are so helpful

  • @TousibAhmedBPSO-kq1nk
    @TousibAhmedBPSO-kq1nk 2 months ago

    Ain't no one teaches statistics like you ❤ Thank you so much for giving such elaborate explanations.. And your illustrations regarding these inequalities made them very simple to understand

  • @TousibAhmedBPSO-kq1nk
    @TousibAhmedBPSO-kq1nk 2 months ago

    It's giving a hint of the Heisenberg uncertainty principle

  • @existentialrap521
    @existentialrap521 2 months ago

    Thx, brother. Making this sht feel like 5th grade math. Ez PZ. wazzup then edit: no diddy, cute eyes brother. go get em

  • @JL-vg5yj
    @JL-vg5yj 2 months ago

    truly fantastic video, watched this and immediately popped off on my homework question. shoutout!

  • @trolltoll440
    @trolltoll440 2 months ago

    7:26 It would have been easier to use the variance formula for the uniform, Var(X) = (b-a)^2/12, and rearrange for E(X^2) = Var(X) + [E(X)]^2

    • @briangreco2718
      @briangreco2718 2 months ago

      Yeah, that’s what I probably would’ve done myself to save some calculus too - for the video I just wanted to emphasize the idea rather than the most efficient method. Thanks for watching!
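
    For reference, a short SymPy check of the shortcut (an editorial sketch, not from the video): for X ~ Uniform(a, b), Var(X) + [E(X)]^2 = (b-a)^2/12 + ((a+b)/2)^2 matches the direct integral for E(X^2).

        import sympy as sp

        a, b, x = sp.symbols('a b x', real=True)
        pdf = 1 / (b - a)                                    # Uniform(a, b) density
        EX2_integral = sp.integrate(x**2 * pdf, (x, a, b))   # E[X^2] computed directly
        EX2_shortcut = (b - a)**2 / 12 + ((a + b) / 2)**2    # Var(X) + (E[X])^2
        print(sp.simplify(EX2_integral - EX2_shortcut))      # 0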

  • @trolltoll440
    @trolltoll440 2 months ago

    these are perfect! dumbs it down really well while retaining all the info

  • @qkdnrnskfirnsvabk
    @qkdnrnskfirnsvabk 2 months ago

    Thanks!

  • @qkdnrnskfirnsvabk
    @qkdnrnskfirnsvabk 2 months ago

    Thank you!

  • @qkdnrnskfirnsvabk
    @qkdnrnskfirnsvabk 2 months ago

    Thanks for the straightforward explanation!! Now I can understand why "sufficient" is sufficient!

  • @LuksYang
    @LuksYang 2 months ago

    Send love to U! Your Mic is getting better

    • @briangreco2718
      @briangreco2718 2 months ago

      Thanks, I got a new microphone so the only video with the old microphone is the Markov's inequality one :) All other current videos and future ones should have very good audio!

  • @dariofabian8819
    @dariofabian8819 2 months ago

    What if you don't know what the data distribution is?

    • @briangreco2718
      @briangreco2718 2 months ago

      Maximum likelihood basically requires that you assume something about the distribution, otherwise you get those extreme examples that I mention throughout the video.

    • @dariofabian8819
      @dariofabian8819 2 months ago

      @@briangreco2718 thank you for the answer

  • @shreyanshchouhan3097
    @shreyanshchouhan3097 2 months ago

    Finally understood what it means when we say intervals are random in the frequentist paradigm.

  • @ifeanyianene6770
    @ifeanyianene6770 2 months ago

    By God what an absolutely amazing video.