Explaining Power

  • Published: 23 Jan 2025

Comments • 30

  • @pipertripp
    @pipertripp 10 months ago +6

    What I like most about your channel is that I can just hit the "like" button straight-away without having to wait until the end to see if I'm actually going to like the video or not. I just know that I am so, so I can just get on with it.

  • @santiagodm3483
    @santiagodm3483 10 months ago +37

    Babe, stop the yapping. Very normal uploaded a new video!

  • @heythere7130
    @heythere7130 9 months ago +3

    thank you SO much for this, I have a stats midterm in less than a week and this is exactly what I needed

    • @very-normal
      @very-normal  9 months ago +3

      Good luck!!

    • @kodirovsshik
      @kodirovsshik 8 months ago

      how did you do?

    • @heythere7130
      @heythere7130 8 months ago +1

      @@kodirovsshik pretty well 😎😎

    • @kodirovsshik
      @kodirovsshik 8 months ago +1

      @@heythere7130 Congrats 🥳

  • @dogcard664
    @dogcard664 10 months ago +2

    FREAKING LOVE YOUR VIDEOS!!!🎉❤❤❤ PLEASE KEEP THEM COMING!!!!

  • @jarosawszyc8287
    @jarosawszyc8287 10 months ago +2

    Dang, your content is gold

  • @Imperial_Squid
    @Imperial_Squid 10 months ago

    Very nice video! And you're right that seeing it in video form definitely helps with intuition about how tweaking various factors creates tradeoffs between different parts of the test being done

  • @stuartoftherabbit
    @stuartoftherabbit 10 months ago +2

    12:00 I felt that. 😭
    Overall this is a fantastic refresher on power! But I'm really not clear why you say it's ambiguous how variance will affect power. You show us a clear visual depicting the critical region under the null moving further and further into the distribution under the alternative as population variance goes up. This reduces the area under the alternative distribution that is to the right of the critical value. Looks pretty unambiguously negative to me.
    Or maybe you are setting up for discussing effect sizes as a proportion of a variance measure, e.g. as in Cohen's d...? (this thought just occurred to me as I wrote the above, don't @ me)
    Anecdotally, when I have done statistical planning for clinical trials in real life, getting good estimates of population variances from the investigators has consistently been a challenge and has sometimes been a significant driver of sample sizes.

    • @trentneilson9783
      @trentneilson9783 10 months ago +1

      I had this thought too. It would also be good to mention the importance of "pilot" studies when attempting to estimate this population variance for adequate sample size estimations for the "main" study.
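
A quick numerical check of the variance point in this thread, as a minimal sketch (not code from the video): assuming a two-sided two-sample comparison of means with equal group sizes, a fixed mean difference of 1, and a normal approximation to the test statistic, power drops steadily as the population standard deviation grows.

```python
import numpy as np
from scipy.stats import norm

def approx_power(delta, sigma, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided two-sample test of means."""
    se = sigma * np.sqrt(2.0 / n_per_group)   # standard error of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)          # critical value set under the null
    shift = delta / se                        # where the alternative distribution is centered
    # probability the statistic lands beyond either critical value under the alternative
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

for sigma in (1, 2, 4, 8):
    print(f"sigma = {sigma}: power = {approx_power(delta=1, sigma=sigma, n_per_group=50):.3f}")
```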

  • @robertlo9336
    @robertlo9336 9 months ago

    Good day, I’ve a question regarding the contents of the video, at 4:30. You are making an assumption that both population variances are known and hence perform a t-test. In my course (I study in Russia), a z-test is usually performed when the variances are assumed known (though it should be noted that either the populations are normally distributed or the sample size is larger than 40). So wouldn’t it be more appropriate to perform a z-test, or am I missing something? Thank you for your videos, I’ve been really enjoying them through my 1st year❤

    • @very-normal
      @very-normal  9 months ago

      Hello! You’re right that if we assume the population variances are known, we could use the z-test. But in practical settings, I usually try to minimize the number of assumptions I make. Assuming known variances can make your results too optimistic (confidence intervals slightly too short), so the t-test lets me avoid making that assumption. At larger sample sizes, the results are essentially the same, so it’s kind of a moot point.
      Thank you for watching!
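
A rough sketch of why the z-versus-t choice becomes a moot point at larger sample sizes (illustrative, not the video's code): the t critical value shrinks toward the z critical value as n grows, so the resulting intervals and decisions essentially coincide.

```python
from scipy.stats import norm, t

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)              # fixed critical value when variances are assumed known
for n in (10, 40, 200, 1000):
    t_crit = t.ppf(1 - alpha / 2, df=n - 1)   # t critical value is wider at small n
    print(f"n = {n:4d}  z = {z_crit:.3f}  t = {t_crit:.3f}  ratio = {t_crit / z_crit:.3f}")
```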

  • @yurisich
    @yurisich 10 months ago

    What is the name of the variable that determines the cost or effort to increase the sample size by one? That would appear to be the major limiting factor for determining suitable hypotheses. Is there a name for a bias that describes the pursuit of low-cost sampling over high-cost sampling, especially if high-cost sampling uncovers a Type II confirmation of the hypothesis?

  • @ControlAlpha
    @ControlAlpha 4 months ago

    6:29 t = mean(xc) - mean(xt) follows a normal distribution? I am confused.

    • @ControlAlpha
      @ControlAlpha 4 months ago

      If it is not a normal distribution, the quantile is no longer z_0.025 🤨

    • @very-normal
      @very-normal  4 months ago

      Yeah, at about 4:44 I mention that we’re using the central limit theorem to make sure that both of the sample means have a normal distribution. The difference of two sample means is still normal. The letter “T” is used because it’s the usual notation for a test statistic.
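
A small simulation sketch of the reply above, using made-up group sizes and a deliberately skewed population rather than anything from the video: even when the raw data are far from normal, the difference of the two sample means is approximately normal, so its 2.5% quantile sits close to the usual z quantile.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps = 50, 10_000

# Skewed (exponential, sd = 1) populations: the raw observations are clearly non-normal
xc = rng.exponential(scale=1.0, size=(reps, n))
xt = rng.exponential(scale=1.0, size=(reps, n))

diffs = xt.mean(axis=1) - xc.mean(axis=1)   # one test statistic T per simulated experiment
z = diffs / np.sqrt(2.0 / n)                # standardize by the CLT standard error

print(np.quantile(z, 0.025))                # close to the normal quantile...
print(norm.ppf(0.025))                      # ...which is about -1.96
```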

  • @koenvandenberg1623
    @koenvandenberg1623 10 months ago

    Nice video! At 6:54 you label the blue area as 1 - beta, but this blue area is the area where you reject, which is the power, so shouldn't it be labeled as beta?

    • @very-normal
      @very-normal  10 months ago

      Thanks! In this case, beta is the probability of a Type II error, shown at around 3:06, so power is 1 - beta.
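
For anyone tripped up by the same labels, a tiny sketch in a made-up one-sided setting (not taken from the video): beta is the chunk of the alternative distribution that fails to reach the critical value, and power is the rest of it.

```python
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha)          # rejection boundary, set under the null N(0, 1)
mu_alt = 2.5                          # hypothetical mean of the test statistic under H1

beta = norm.cdf(z_crit, loc=mu_alt)   # P(fail to reject | H1), the Type II error
power = 1 - beta                      # area under the alternative beyond the boundary
print(f"beta = {beta:.3f}, power = {power:.3f}")
```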

  • @Guy-mh4xq
    @Guy-mh4xq 10 months ago +1

    Finally, a stats lesson worthy of the son of Sparda

  • @Jhyram2727
    @Jhyram2727 10 months ago

    Hey, nice video. It would be interesting to compare the frequentist and Bayesian approaches to hypothesis testing, and to show how the Bayesian approach makes both alpha and beta go to zero as n increases, while the frequentist approach keeps alpha constant.

  • @KuRayZay
    @KuRayZay 7 months ago

    What are your thoughts on post-hoc power analyses? I've seen some people say they are useless, but I've also seen papers get rejected over it. Could you explain if/when it is appropriate?

    • @very-normal
      @very-normal  7 months ago

      To be honest, I don’t have a lot of experience with them, so take my opinion with a grain of salt.
      In my mind, power is a tool for planning an experiment; it gives us a numerical justification for why a particular sample size is good. So, I am not sure what the purpose would be for a post-hoc power analysis. The data’s already been collected, so the decision to reject or fail to reject is just an analysis away.
      So to answer your question, I don’t think they’re useful in general. Would you be able to tell me what kind of field you work in? I’m kind of curious to read into this a bit more now

    • @KuRayZay
      @KuRayZay 7 months ago

      @@very-normal I'm in veterinary medicine. I am definitely fully on board with all the explanations against post-hoc power analyses and have yet to come across a rational argument in favor. It seems to be a settled argument when I do some cursory searches, but I don't have the strongest stats background (which is why I watch your videos!)

  • @dewinmoonl
    @dewinmoonl 3 months ago

    You're so good

  • @kelly4187
    @kelly4187 9 months ago

    Dude I completed my MSc in stats and still NEVER quite got it. Unfortunately my uni leaned very heavily on "applied" statistics and so only dipped deep enough into the theory to understand the method. I'm now having to play catch-up, filling in the gaps...

  • @DonQuiGoddelaManCHAD
    @DonQuiGoddelaManCHAD 10 months ago +2

    This. Is. Power.

    • @Guy-mh4xq
      @Guy-mh4xq 10 months ago

      HE GETS IT

  • @abhisek-chatterjee
    @abhisek-chatterjee 10 months ago

    Hey man! Fellow Statistician here. Great videos! Can you share your LinkedIn? Want to connect.