Thanks sir, listening to your video in bed I found it so intuitive. Going to rewatch it tomorrow and take down some notes.
Thanks! I am trying out sample size calculators for an RCT and trying to raise the power with a very specific H1. Results will be in % and I will use a chi-square test to compare the groups. I expect an effect size of 0.5. This was very useful!
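For anyone trying a similar calculation, here is a minimal sketch of how a sample size for a chi-square comparison with Cohen's w = 0.5 could be estimated with statsmodels. The alpha and power values are assumed conventional defaults (0.05 and 0.80), not the commenter's actual study parameters, and treating the two-group comparison as a df = 1 chi-square is also my assumption.

```python
# Sketch: sample size for a chi-square test with Cohen's w = 0.5.
# alpha = 0.05 and power = 0.80 are assumed conventional values, not study-specific.
from statsmodels.stats.power import GofChisquarePower

analysis = GofChisquarePower()
n = analysis.solve_power(effect_size=0.5,   # Cohen's w
                         alpha=0.05,        # two-sided significance level
                         power=0.80,        # desired power (1 - beta)
                         n_bins=2)          # 2 categories -> df = 1
print(f"Required total sample size: {n:.0f}")  # roughly 32 total for these inputs
```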
Well done sir, thank you for sharing this video!
Hi. You're saying that changing the sample size changes the distance between the means? The sample size does not change the means; it changes the variances of the sampling distributions.
Thank you for the comment! I was just thinking the same.
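A quick simulation sketch of that point, assuming a normal population with arbitrary illustrative parameters: increasing n leaves the sample means centred in the same place and only shrinks their spread (the standard error sigma / sqrt(n)).

```python
# Sketch: sample size does not move the mean of the sampling distribution,
# it only shrinks its spread (standard error = sigma / sqrt(n)).
# The population parameters here are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 50.0, 10.0          # assumed population mean and SD

for n in (10, 100, 1000):
    # draw 5000 samples of size n and record each sample's mean
    sample_means = rng.normal(mu, sigma, size=(5000, n)).mean(axis=1)
    print(f"n={n:5d}  mean of sample means={sample_means.mean():6.2f}  "
          f"SD of sample means={sample_means.std():5.2f}  "
          f"theory sigma/sqrt(n)={sigma/np.sqrt(n):5.2f}")
```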
Is it valid to use the mean and standard deviation from a similar, but not exactly the same, study to estimate the sample size for a retrospective study? For example, the reference study was done in patients with diabetes, but the retrospective research will be done in patients who had a stroke.
Quite helpful, sir. I have gone through your research paper as well, "Zero Tb Kids Initiative on Tibetan Kids"; I am working on the same in Sikkim.
Thank you for explaining the concepts so well; the example makes sense if you follow along and take notes on each variable.
One question though: at 12:30 you have P(Z > -0.35190697) = 1 - 0.363 = 0.637. How did you get from calculating 0.352 to 0.363? Where did the 0.363 come from?
It's from the z value table: www.z-table.com/
@@hsinyang1796 There's still a question: how? Please help :)
@@marxianus It's based on the z-table.
And the calculation in the video is a little bit misleading (I'll show why below).
P(Z > -0.352) is the probability of all z VALUES greater than -0.352 (-0.351, ..., 0, 0.1, 0.352, ..., 3.4) - these are values, not probabilities (just in case). "More than", starting from that value. But the z-table itself gives P(Z <= z) - "less than"!
So basically, when we talk about P(Z > -0.352) there are 2 areas of the z-distribution: P(Z > -0.352) AND P(Z <= -0.352), and P(Z > -0.352) = 1 - P(Z <= -0.352).
The table says that "-0.352" = ROW (-0.3) and COLUMN (0.05) = 0.3632 (this is a probability, not a value!) => 1 - 0.3632 = 0.6368.
The confusing thing is that P(Z < 0.352) = 1 - 0.3632, i.e. 1 - P(Z >= 0.352).
Since the table gives P(Z <= z), we can get P(Z >= 0.352) only as 1 - P(Z < 0.352).
The table says that P(Z < 0.352) itself = ROW (0.3) and COLUMN (0.05) = 0.6368 (which makes all the extra calculation unnecessary).
P(Z >= 0.352) = 1 - 0.6368 = 0.3632 ---> P(Z < 0.352) = 1 - 0.3632.
Also, you can see by visualisation that 1 - P(Z <= -0.352) is symmetric to (and equal to) 1 - P(Z >= 0.352).
Hope it helps :)
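If a printed z-table feels error-prone, the same arithmetic can be checked directly in Python; a minimal sketch with scipy, using the z value -0.352 from the thread above:

```python
# Check the z-table arithmetic from the thread with scipy instead of a printed table.
# Note: a printed table rounds z to -0.35 (ROW -0.3, COLUMN 0.05) and reports 0.3632;
# scipy evaluates the exact z = -0.352, so the numbers differ slightly in the 3rd decimal.
from scipy.stats import norm

z = -0.352
p_less = norm.cdf(z)        # P(Z <= -0.352), what the z-table tabulates (~0.362)
p_greater = 1 - p_less      # P(Z >  -0.352) = 1 - P(Z <= -0.352) (~0.638)
p_mirror = norm.cdf(-z)     # P(Z <   0.352), equal to p_greater by symmetry

print(f"P(Z <= {z}) = {p_less:.4f}")
print(f"P(Z >  {z}) = {p_greater:.4f}")
print(f"P(Z <  {-z}) = {p_mirror:.4f}  (same by symmetry)")
```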
In the slide table at 1:40, shouldn't Power = 1 - alpha, since if H_0 is false, H_a should be true? I am confused.
Could you please add non-parametric tests: distributions, effect size, power, ...?
If your sample is large enough (n > 30), we do not bother about whether the population is normal or not; we can use the formula to calculate power (central limit theorem).
Most of the time the sample is large enough.
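For anyone curious what that CLT-based power formula can look like in practice, here is a minimal sketch for a two-sided, two-sample z-test; the effect size and group size below are illustrative assumptions, not values from the video.

```python
# Sketch of a normal-approximation power formula for a two-sided two-sample z-test
# with equal group sizes. Effect size d = (mu1 - mu0) / sigma.
from scipy.stats import norm

def power_two_sample_z(d, n_per_group, alpha=0.05):
    """Approximate power for a two-sided two-sample z-test (CLT-based)."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5        # noncentrality: d * sqrt(n/2)
    # probability of landing beyond either critical value under H1
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

print(power_two_sample_z(d=0.5, n_per_group=64))   # ~0.80 for these assumed inputs
```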
I also think power and effect size come into play when we want to design the experiment, so we do not worry about whether the data distribution is normal or not at this stage.
It is after the experiment, when we decide which hypothesis test to carry out to confirm our results statistically, that we become interested in whether to run a parametric or a non-parametric test, according to the distribution of the values of our metric.
Effect size is not clearly explained here. Is there a corresponding video I should watch first?
Still can't understand shit