NOTE: This StatQuest was brought to you, in part, by a generous donation from TRIPLE BAM!!! members: M. Scola, N. Thomson, X. Liu, J. Lombana, A. Doss, A. Takeh, J. Butt. Thank you!!!!
Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
What tools do you use to make your videos?
@@ShivamSaini-xt6rg I use Keynote and Final Cut Pro.
I’m watching this and tears are coming to my eyes. So many classes where I felt so dumb. But I just needed it explained a certain way. I can’t thank you enough for your videos, I’m going to share them with everyone I know doing stats
Thank you very much! :)
I get it. So true!
Double BAM!!!
bro i could literally watch these videos for fun they're so good
bam! :)
Honestly, this is the only stats video I was entertained by, haha. I actually really enjoyed the video and understood everything!
Bam!
I've never seen an author answer every question or acknowledge every comment; it's remarkable (and I'm not saying you HAVE TO do it; it's a huge job. I think it takes more time than making the videos themselves! Which, by the way, are excellent.)
BAM! :)
The P value by seeing....
bam! :)
There are clouds outside today, but really, who cares 😄? It's time for StatQuest!!!
bam! :)
You are a truly gifted teacher. Thanks a lot for this video! I feel like statistics is that much more approachable thanks to this channel :)
Wow, thank you!
"Shameless Self Promotion" ... hahahaha ... I can't stop laughing ... so cute ...
:)
Dang I cant reject the awesomeness of these videos
BAM! :)
I have just had a StatQuest Marathon today. You are one of my favourite teachers on RUclips for knowledge. Thank you sir!
Wow, thank you!
Thank you for the great content again! Do you plan on making a video on Cohen's d, Cochrane's Q and all those meta-analysis merriments one day? :D (desperate PhD student asking)
Hi Josh! Please make a video on why calculating post-hoc power is bad! Thank you!
I'll keep that in mind.
You explained the concept in a very simple, explicit, and fun way. Thank you.
Thank you!
You are awesome in every awesome way possible
The way you incorporate details of sample size in statistics is so great
Thank you so much 😀
🤡🤡
Omg! I should have waited and taken the stats inference course this semester instead of last semester. This video is awesome! I finally understand power now! Keep up the hard work, Josh!
Hooray! Thanks!
Love the sound effects
Yet another topic I just figured I'd never get so simply and logically explained. Thank you, Josh.
Hooray!!! :)
Dude, I really want to purchase your book after watching some of your videos!! Great job in explaining
Awesome, thank you!
thank you so much for existing you are literally saving my life
Hooray! :)
Your videos are the best I’ve seen. I’m working through a python for data science course…your explanations are fantastic and the animations make the concepts so easy to understand for applying with python. I can’t thank you enough for sharing!
Wow, thanks!
I guess I'll have to watch the next video to see the difference between adding more samples to get more power and the dreaded P-Hacking... on to the next video!! Thanks for the videos.
bam! :)
Your videos are ULTRA BAM. I reject the null hypothesis "your videos make no difference". P value of 0.00000001 for your channel, man.
I hope I don't screw up the statistics in this comment ;v
bam! :)
When the Null Hypothesis says "subscribe to my channel for more stat videos," my small p-value says "I will continue to watch your videos without subscribing!" MEDIUM BAM
Noted
@statquest man, you should create courses for Data Camp since they pretty much suck in explaining difficult statistical things :)
I'll keep that in mind!
(all data comes from .me)
😀 Man you made my day
:)
I usually watch this when I'm eating. Mice... Yuck. Another example please 😳. Great content though!
noted
A very comprehensive explanation that gives insight even to beginners. At the same time, I also had fun. My heartfelt thanks to you, Sir!
Thanks!
I never knew statistics could be so much fun!!! You are a star... thanks for doing this.. :)
Thanks so much! :)
Laugh upon hearing there's a shameless self promotion LOL
Good job!
Thank you! :)
I'm at a loss for words to express my appreciation. You are simply amazing!!!!! Thank you so much and keep these gems coming, please. Can you please explain what Tukey's HSD analysis is and how we perform it?
Thanks, and I'll keep that topic in mind.
Wow, this short video has explained to me what a 2-hour lecture failed to explain! Thank you so much
Glad it was helpful!
I am taking a stats course in grad school. I am new to this, other than one basic stats class in high school. You explained this in a few minutes better than my textbook. Thank you, sir!
Glad it was helpful!
@statquest Can you please make a video explaining the concepts of stationarity and consistency in time series? And also the difference between the weak and strong law of large numbers? It would be really helpful ☺
3:15, 7:00 for great summary (doesn’t miss anything from the video)
Yep!
Thank you !! For simplifying for the rest of us.
Thanks!
Excellent explanation of Power; this was helpful. Thank you.
Glad it was helpful!
Man you are awesome!
I hope some day I will teach like you
Thank you!
Thank you so much for your amazing explanation. One of the best resources out there
Thank you! :)
Thanks for the strong video! Cheers from Ox.
Thank you!
Hi, your videos are great, and the opening sequences as well. Thanks a lot!
My question is: what do I do if I want the power but at the same time don't know if my null hypothesis is true or not?
You never know about the null in advance. You just calculate power based on the assumption that the two things are different, and by some specific amount. So, given those assumptions, you can calculate power. We then do the test and, if there is a difference, we should be able to detect it and reject the null. If not, then we'll just fail to reject the null.
@statquest I am sorry, I don't understand what you mean by "the two things". For example, at 4:25 you said that the concept of power doesn't apply there, since we know beforehand whether our null hypothesis is correct or not.
However, what if I don't know? In short, is there any situation where I should not calculate power when doing hypothesis testing?
Thanks a lot in advance
@@noname-go2kt Yes, when we know that there is no difference, then Power does not apply. But we never know - otherwise we wouldn't bother doing the test to begin with. So we assume that there is a difference and carry on from there.
In theory you should always do a power analysis if you have reason to believe you need to do a statistical test.
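To make the reply above concrete, here's a minimal simulation sketch of that idea: we assume a specific difference and spread (the 3-gram difference and 5-gram standard deviation below are hypothetical values, not from the video), generate many pretend experiments under those assumptions, and count how often a t-test would correctly reject the null.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical assumptions: the two diets differ by 3 grams,
# and both groups have a standard deviation of 5 grams.
diff, sd, n, alpha, n_sims = 3.0, 5.0, 20, 0.05, 5000

rejections = 0
for _ in range(n_sims):
    a = rng.normal(0.0, sd, n)    # normal diet
    b = rng.normal(diff, sd, n)   # special diet, shifted by the assumed difference
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        rejections += 1

# Power = fraction of simulated experiments where we correctly rejected the null
power = rejections / n_sims
print(f"Estimated power with n={n} per group: {power:.2f}")
```

With these particular assumptions the estimated power comes out well below the usual 80% target, which is exactly the signal that more measurements per group would be needed.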
Hello there, I have been following your tutorials recently. Great job, but far too few subscribers for the work you guys have done. Knowledge spread in an intuitive way is invaluable. Thumbs up.
Thank you very much! :)
I fully expected to hear "AND SUBSTITUTE MY OWN" after you said "I REJECT YOUR HYPOTHESIS"
:)
When you are driving in the rain, you can kind of see where you are going (that's how I felt about statistics before your videos), but the windshield wiper is still necessary because it makes everything clearer (that's how your videos are to me).
Confession: I was rolling my eyes when the cheesy BAM!!!s came out.
bam! :)
I love the way you teach.....
But honestly I love your opening!
"There's clouds outside... But who cares.... It's time for Stat Quest... STATQUEST... "
Ha! I'd forgotten about this tune. It's a good one. :)
@@statquest Thank you so much for your reply.... You are too good to be true ♥️
Finally I understand what power is. Thank you for your wonderful video!
Hooray! :)
From this video I learned that an increase in power will decrease the type 2 error. Am I right, Mr. Josh?
What an intuitive video, man... salute to your knowledge sharing....
Thanks!
The varieties of BAMs got me 😂😂😂😂😂
Thanks for this video . Really helped me understand the concept
Hooray! :)
This is so helpful thank you so much! 🙏🏽✨
Greetings from Belgium 🇧🇪☺️
Thanks, and greetings from Spain! (I'm in Spain for the next week for work.)
This is fun and truly enjoyable, keep doing it!
Thank you! :)
So goood. Muchas gracias desde España !!
De nada! :)
Last week my professor told me that we don't have the power to explain a result. I wondered why the powerful one in the group said we don't have power. That powerlessness took me here. Thanks a lot for explaining power so clearly. She (my professor) is powerful again, because she understood it. Thanks again.
bam!
SMALL BAM
:)
I love the song in beginning of videos:)
Hooray! :)
This guy's tone is so full of energy, I can't get enough of it.
:)
Oh wow, I didn’t expect the example problem to be the exact one I was looking for!!!
bam!
thank you so much for this fun video
Glad you enjoyed it!
♥ what a wonderful video! not even comparable to my dry epidemiology classes...
Wow, thank you!
BAMMMM
YES! :)
Bam!!!!! this video is great
Thank you!
Great video thank you!
Thanks!
amazing...each lecture is a treat
Thank you!
LOVE YOUR VIDEOS, BAM!!!
Glad you like them!
Thank you so much for This
Thanks!
My new favorite video ^^^
bam! :)
Awesome video. No one else has ever explained to me so simply what the "power of a study" is.
BAM! :)
Out of curiosity, at 2:13, even though we are knowingly calculating the p-values from samples from 2 distributions, do we still apply the BH method (FDR) discussed in the previous StatQuest? When do we use FDR?
When you know that you have two distributions to begin with, there's no point in even calculating the p-value to begin with. In other words, this is just an example. That being said, in practice, we never know, and if we do multiple tests, then we should correct for that with FDR or some other method.
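For anyone curious what the BH (FDR) correction mentioned in the reply above actually does, here's a minimal sketch of the Benjamini-Hochberg procedure (the p-values at the bottom are made-up example values):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask marking which p-values are significant at FDR level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # BH rule: find the largest k such that p_(k) <= (k/m) * q
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # index of the largest qualifying p-value
        reject[order[: k + 1]] = True     # reject it and everything smaller
    return reject

# Hypothetical p-values from 8 independent tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.20, 0.51, 0.74]
print(benjamini_hochberg(pvals, q=0.05))
```

Note that with these example values only the two smallest p-values survive the correction, even though several others are below 0.05 on their own; that is the multiple-testing penalty at work.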
I hate that I'm finding out about this channel so late. My test is tomorrow and watching your videos would've helped so much. Nonetheless, I'll be binging these videos all night haha
Good luck on your test! :)
@@statquest thank you!
What does this have to do with type I and II errors?
Type 1 errors are false positives, type 2 errors are false negatives. Power and power analyses are important for reducing the number of false negatives.
@@statquest this helps thank you!!
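The relationship in the reply above can be checked with a quick simulation: when the null is true, false positives (type 1 errors) occur at roughly the rate alpha; when the null is false, the false-negative (type 2) rate is 1 minus the power. The 1-standard-deviation difference used below is a hypothetical assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, sims = 0.05, 30, 4000

# Type 1 error rate: both groups drawn from the SAME distribution (null is true)
false_pos = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n))[1] < alpha
    for _ in range(sims)
) / sims

# Type 2 error rate: groups from DIFFERENT distributions (null is false);
# the assumed 1-standard-deviation difference is hypothetical.
false_neg = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(1, 1, n))[1] >= alpha
    for _ in range(sims)
) / sims

print(f"Type 1 rate: about {false_pos:.3f} (should be near alpha = {alpha})")
print(f"Type 2 rate: about {false_neg:.3f}; power: about {1 - false_neg:.3f}")
```

The type 1 rate stays pinned near alpha no matter what; it is the type 2 rate that shrinks (and power that grows) as the sample size or the true difference increases.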
Thumbs up for the intro
BAM! :)
All data comes from me!!
BAM! :)
Beautiful explanation, thanks a lot!
There's only one thing that I still find confusing: if the two distributions highly overlap with each other, what prevents us from thinking that both come from the same distribution? I mean, we are looking at the weights of the exact same species (mice), so why did we have an a priori assumption that the weights of those on the special diet are different from those on a normal diet?
If we have no reason to think that the mice come from two distributions, then we would not spend the time and money testing the hypothesis to begin with. So, in this case, we must know something about the diet - maybe one is very unhealthy, and the other one is very healthy - and this causes us to suspect that they might be from different distributions, and this then justifies spending the time and money doing the experiment and testing the hypothesis.
@@statquest Got it, Thanks again ❤
Very helpful and with comedic relief. Thank you so much
Bam! :)
How do you calculate the P value of something that isn't due to chance? I watched your video on calculating P values; the example was on coins. Tossing coins is purely by chance if the coin is fair. Since food and drugs are not fair (they are rigged, i.e., they have a purposeful effect), how do you calculate the P value? What is the value for each step? Your "How to calculate p-values" video listed 3 steps. What would be the values for steps 1 and 2? Would you say "What are the chances of the weight being specifically a value" (say, 80.19 grams), or "What are the chances of the weight being above or below a certain value"? Because if that is the case, it would be 50%, since, when there is a clear difference between the 2 groups, one entire group would be more than your set value and the other will be less, as your graph demonstrates. I.e., if 3 out of 6 are above a certain value (normal diet) and 3 out of 6 are under that value, it would be 3/6, which is 50%.
Then, since the value can be set at anything, if I set it super low or super high so it includes all or neither of the two groups, such as "chances of a mouse being less than 0 grams" or "more than 100 kg", then you can say 0 out of 6. How do you calculate the p value then?
As always: I hate statistics and statistics make no sense.
Again, there is almost always variation in the data we collect. Some of that variation is due to things we are interested in, and some of that variation is due to things we are not interested in. P-values help us filter out the variation we are not interested in.
Hi Josh, thank you for posting such great content. Question... how does measurement repeatability error (from a Gage R&R study) impact the power of the test, as opposed to bias or reproducibility error? I.e., does measurement repeatability error reduce the power of the test?
I'm not familiar with measurement repeatability error
Amazing content!
MANY MANY THANKS ❤❤❤❤❤
Most welcome 😊
I'm here again
:)
sMaL bAm... :o
:)
You are an amazing teacher. Here I am preparing for interviews many years after I left school... instead of referring to my notes from school, I am watching your videos. A big thank you to you!!
thank you so so so much
You're welcome!
When you say "power is the probability that we will correctly reject the null hypothesis", is the alternative to correctly rejecting the null hypothesis ONLY incorrectly rejecting the null hypothesis? Or does it include incorrectly NOT rejecting the null hypothesis?
Power assumes that the null is not true and we should reject it, so the only alternative to correctly rejecting it is to not reject it.
I literally love you
:)
I LOVE YOU SO MUCH
:)
YOU ARE THE GREATEST
Thank you! :)
This is amazing! In just 8 minutes! thank you❤️
Glad you liked it!!
And at the time point of the video, 6:13, I think your conclusion was also reversed. If we get a P value less than 0.05, it is the case that we fail to reject the null hypothesis. We would accept it wrongly!
Again, the wording in the video is correct. When the p-value is small, we reject the null hypothesis that there is no difference. For details, see: ruclips.net/video/0oc49DyA3hU/видео.html
At the time point of the video, 5:32, I think your conclusion was reversed. If we get a larger P value, we don't fail to reject the null hypothesis. We correctly reject it!
The wording is correct. We fail to reject the null hypothesis when the p-value is large. For details on why we use this (strange) wording, see: ruclips.net/video/0oc49DyA3hU/видео.html
Thanks so much!
You bet!
By adding measurements, we increase power.
But if we are unsure whether the measurements are from the same or different distributions, won't that be considered p-hacking?
No, we don't have to be sure that they are from a different distribution; we just assume that they are, and we assume that the difference is some value and the variation is some other value. Then we find the sample size that would give us confidence that, if those assumptions are reasonable, we will reject the null hypothesis.
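As a sketch of what that kind of power analysis looks like in practice, statsmodels can solve for the sample size given an assumed effect size, the desired power, and the significance threshold. The 2-gram difference and 4-gram standard deviation below are hypothetical assumptions, not values from the video:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical assumptions: we expect a 2-gram difference between the diets
# and a standard deviation of 4 grams, i.e. a standardized effect size of 0.5.
effect_size = 2.0 / 4.0

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,  # assumed (difference / standard deviation)
    power=0.80,               # desired probability of correctly rejecting the null
    alpha=0.05,               # significance threshold
    alternative="two-sided",
)
print(f"Need about {n_per_group:.0f} mice per group")
```

With the usual rule-of-thumb target of 80% power and a medium effect size of 0.5, this comes out to roughly 64 per group; if the assumed difference were smaller, or the assumed variation larger, the required sample size would grow.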
You're the best !!!!!
Thanks!
love the humor!
Thanks!
You're an angel.
Thanks!
Thank you!
You're welcome!
So helpful!
Thanks!
Thank you for your great videos!
Thank you so much for your support!!! :)
This is the best stats channel on RUclips
Thank you! :)
Sir, I really need your help urgently
Sir, please help me urgently
Sir, I need the name of an algorithm based on statistics which is used in machine learning
Linear Regression
Really clear
Thanks!
Most fun statistics I've had!! BAMMM
BAM! :)
Much power in the lol - so good man
Thanks! :)
so helpful!!
:)