What I like most about your channel is that I can just hit the "like" button straight-away without having to wait until the end to see if I'm actually going to like the video or not. I just know that I am so, so I can just get on with it.
Babe, stop the yapping. Very normal uploaded a new video!
thank you SO much for this i have a stats midterm in less than a week and this is exactly what i needed
Good luck!!
how did you do?
@@kodirovsshik pretty well 😎😎
@@heythere7130 Congrats 🥳
FREAKING LOVE YOUR VIDEOS!!!🎉❤❤❤ PLEASE KEEP THEM COMING!!!!
Dang, your content is gold
Very nice video! And you're right that seeing it in video form definitely helps with intuition about how tweaking various factors creates tradeoffs between different parts of the test being done
12:00 I felt that. 😭
Overall this is a fantastic refresher on power! But I'm really not clear why you say it's ambiguous how variance will affect power. You show us a clear visual depicting the critical region under the null moving further and further into the distribution under the alternative as population variance goes up. This reduces the area under the alternative distribution that is to the right of the critical value. Looks pretty unambiguously negative to me.
Or maybe you are setting up for discussing effect sizes as a proportion of a variance measure, e.g. as in Cohen's d...? (this thought just occurred to me as I wrote the above, don't @ me)
Anecdotally, when I have done statistical planning for clinical trials in real life, getting good estimates of population variances from the investigators has consistently been a challenge and has sometimes been a significant driver of sample sizes.
I had this thought too. It would also be good to mention the importance of "pilot" studies when attempting to estimate this population variance for adequate sample size estimations for the "main" study.
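The point this thread makes, that a larger population variance pushes power down and drives sample sizes up, can be sketched with a quick stdlib-only calculation; the effect size delta = 1, group size n = 30, and one-sided alpha = 0.05 below are illustrative assumptions, not numbers from the video.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(delta, sigma, n, z_crit=1.6448536269514722):
    # One-sided two-sample z-test with equal group sizes n and known
    # common SD sigma: power = Phi(delta / SE - z_{1 - alpha}),
    # where SE = sigma * sqrt(2 / n)
    se = sigma * math.sqrt(2.0 / n)
    return norm_cdf(delta / se - z_crit)

# Holding delta and n fixed, power falls monotonically as sigma grows
for sigma in (1.0, 2.0, 3.0, 4.0):
    print(f"sigma = {sigma}: power = {power_two_sample(1.0, sigma, 30):.3f}")
```

Running this shows power dropping from near 1 toward the alpha floor as sigma grows, which is why a pilot estimate of the variance matters so much for sizing the main study.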
Good day, I have a question about the contents of the video at 4:30. You assume that both population variances are known, and yet you perform a t-test. In my course (I study in Russia), when the variances are assumed known, a z-test is usually performed (though it should be noted that either the populations are normally distributed or the sample size is larger than 40). So wouldn’t it be more appropriate to perform a z-test, or am I missing something? Thank you for your videos, I’ve been really enjoying them through my 1st year❤
Hello! You’re right that if we assume the population variances are known, we could use the z-test. But in practical settings, I usually try to minimize the number of assumptions I make. Assuming known variances can make your results too optimistic (confidence intervals slightly too short), so the t-test lets me avoid making that assumption. At larger sample sizes, the results are essentially the same, so it’s kind of a moot point.
Thank you for watching!
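The reply's "slightly too short" point can be illustrated with a small stdlib-only comparison: a z-based 95% interval uses 1.960 where a t-based one uses the wider t quantile, and the gap vanishes as n grows. The t quantiles below are standard two-sided 95% table values, used here for illustration.

```python
# Width of a t-based 95% CI relative to the z-based one, by sample size n.
z_crit = 1.960  # z_{0.975}
t_crit = {10: 2.262, 30: 2.045, 100: 1.984}  # t_{0.975, n-1} table values

ratios = {n: t / z_crit for n, t in t_crit.items()}
for n, r in ratios.items():
    print(f"n = {n}: t-interval is {100 * (r - 1):.1f}% wider than z-interval")
```

At n = 10 the z interval is noticeably too short; by n = 100 the two are essentially the same, which is the "moot point" in the reply.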
What is the name of the variable that determines the cost or effort to increase the sample size by one? That would appear to be the major limiting factor for determining suitable hypotheses. Is there a name for a bias that describes the pursuit of low cost sampling versus high ones, especially if high cost sampling uncovers a type two confirmation of the hypothesis?
6:29 t = mean(xc) - mean(xt) follows a normal distribution? I am confused.
If it is not a normal distribution, the quantile is no longer z_0.025🤨
Yeah, at about 4:44 I mention that we’re using the central limit theorem to make sure that each of the sample means has a normal distribution. The difference of two sample means is still normal. The letter “T” is used because it usually denotes a test statistic.
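The reply's CLT point can be checked with a short stdlib-only simulation: even when the raw data are skewed, the difference of two sample means looks normal with the mean and SD the CLT predicts. The exponential data, n = 50, and 4000 replications are illustrative assumptions.

```python
import random
import statistics

random.seed(7)
n, reps = 50, 4000
diffs = []
for _ in range(reps):
    # Two independent samples from a skewed (exponential, rate 1) population
    m1 = statistics.fmean(random.expovariate(1.0) for _ in range(n))
    m2 = statistics.fmean(random.expovariate(1.0) for _ in range(n))
    diffs.append(m1 - m2)

# CLT prediction: diffs ~ Normal(0, 2/n), i.e. mean 0 and sd = sqrt(2/50) = 0.2
print(f"mean of diffs ≈ {statistics.fmean(diffs):.3f}")
print(f"sd of diffs   ≈ {statistics.stdev(diffs):.3f}")
```

The simulated mean and SD land close to 0 and 0.2, so treating the difference of means as normal (and using z quantiles) is reasonable here.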
Nice video! At 6:54 you note the blue area as 1-Beta, but this blue area is the area where you reject, which is the power, so it should be noted as beta right?
Thanks! In this case, beta is the probability of a Type-II error, shown at around 3:06, so the power is 1 - beta.
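The definitions in this exchange can be sketched with a quick stdlib-only Monte Carlo: beta is the chance of failing to reject when the alternative is true, and power is 1 - beta. The effect size mu = 0.5, n = 25, known sigma = 1, and two-sided alpha = 0.05 are illustrative assumptions.

```python
import math
import random
import statistics

random.seed(1)
n, mu_alt, z_crit = 25, 0.5, 1.96  # H0: mu = 0, sigma = 1 known
reps, rejections = 2000, 0
for _ in range(reps):
    # Draw data under the alternative and run a two-sided z-test
    xbar = statistics.fmean(random.gauss(mu_alt, 1.0) for _ in range(n))
    z = xbar / (1.0 / math.sqrt(n))
    if abs(z) > z_crit:
        rejections += 1

power_hat = rejections / reps  # P(reject | alternative true)
beta_hat = 1.0 - power_hat     # P(fail to reject | alternative true)
print(f"estimated power ≈ {power_hat:.2f}, beta ≈ {beta_hat:.2f}")
```

The rejection rate estimates the blue area under the alternative (the power), and beta is just its complement.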
Finally, a stats lesson worthy of the son of Sparda
Hey, nice video. It would be interesting to compare the frequentist and Bayesian approaches to hypothesis testing, and to show how the Bayesian approach drives both alpha and beta to zero as n increases, while the frequentist approach holds alpha constant.
What are your thoughts on post-hoc power analyses? I've seen some people say they are useless, but I've also seen papers get rejected over it. Could you explain if/when it is appropriate?
To be honest, I don’t have a lot of experience with them, so take my opinion with a grain of salt.
In my mind, power is a tool for planning an experiment; it gives us a numerical justification for why a particular sample size is good. So, I am not sure what the purpose would be for a post-hoc power analysis. The data’s already been collected, so the decision to reject or fail to reject is just an analysis away.
So to answer your question, I don’t think they’re useful in general. Would you be able to tell me what kind of field you work in? I’m kind of curious to read into this a bit more now
@@very-normal I'm in veterinary medicine. I am definitely fully on board with all the explanations against post-hoc power analyses and have yet to come across a rational argument in favor. It seems to be a settled argument when I do some cursory searches, but I don't have the strongest stats background (which is why I watch your videos!)
You're so good
Dude, I completed my MSc in stats and still NEVER quite got it. Unfortunately my uni leaned very heavily toward "applied" statistics and so only dipped deep enough into the theory to understand the methods. I'm now having to play catch-up, filling in the gaps...
This. Is. Power.
HE GETS IT
Hey man! Fellow Statistician here. Great videos! Can you share your LinkedIn? Want to connect.