Thank you so much for helping with my psychology master's project, clearly explained and to the point 😀
Thank you so much for the video, it was quite helpful.
I have a question about the number of comparisons used to adjust the p-value in the Bonferroni correction. Following your example, if I have a categorical variable with 5 categories (A, B, C, D, E), shouldn't the number by which we divide the 0.05 to get the adjusted p-value be 10 instead of 4? I mean, we are comparing all the categories with each other, so (A-B, A-C, A-D, A-E, B-C, B-D, B-E, C-D, C-E, D-E).
This got me really confused because I have found different sources that use a different number of comparisons, and I am not sure which number I should use to adjust my p-value.
In my case, I have a categorical variable with 9 categories; should I use 8, 9, or 36?
I would be really grateful if someone could help me solve this.
Thank you again! :)
In my master's thesis I had a similar problem, and even though there is some mathematical debate about this, the usual approach is to divide alpha by the total number of combinations (pairs without repetition, ignoring order). So with 9 categories, the number of all possible pairwise tests is 9C2 = 36.
Now you might have the same doubt I do: even accepting the logic behind that 36, why should the comparisons in your example (A-B, A-C, A-D, A-E) be put in the same basket as (B-C, B-D, B-E), etc., instead of being treated independently, or at least something like dividing alpha by 4 for each category instead of by 10 (5C2) in that case, so considering (B-A, B-C, B-D, B-E) and the same for C, D, and E? In other words, why not independently compare EACH category with its own pairs and divide alpha by only 4 (the number of tests for each group) instead of 36? This is the real doubt I would love for someone to explain.
@Design eLearning Tutorials
If you're running ten tests, then divide the alpha by ten. If you're doing nine tests, then divide the alpha by nine. I'm happy to help! ✌️
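For anyone who wants to see the counting concretely, here is a minimal Python sketch (the nine category labels and the use of math.comb/itertools are just illustrations, not taken from the video):

```python
from itertools import combinations
from math import comb

categories = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]  # 9 categories
alpha = 0.05

# Every unordered pair of categories is one test: 9C2 = 36 comparisons
pairs = list(combinations(categories, 2))
n_tests = comb(len(categories), 2)
assert n_tests == len(pairs) == 36

# Bonferroni: divide the family-wise alpha by the number of tests actually run
adjusted_alpha = alpha / n_tests
print(f"{n_tests} pairwise tests, adjusted alpha = {adjusted_alpha:.4f}")  # ~0.0014
```

If you only plan to run a subset of the pairwise comparisons, divide by the number of tests you actually run rather than by all 36.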
In the case of a t-test, exactly what's the difference between the p-value and the alpha value?
Thank you so much. This is really a helpful video.
May I ask you a specific question?
I have 2 groups, A (290 patients) and B (87 patients). I will test for statistical differences between the 2 groups based on some numeric variables (age, WBC, systolic blood pressure, and diastolic blood pressure) and some categorical variables (gender, ethnicity, history of a specific disease, and diabetes).
For the numerical variables I will use the Mann-Whitney test, and for the categorical ones I will use the chi-square test.
My question is: how can we apply the p-value adjustment in this case? Should we divide by 8 (number of categorical variables + number of numerical variables), by 4 for each test type, or is it not necessary to do an adjustment in this case?
Thank you so much.
Great question. Dividing by 4 for each test type will be fine.
Thank you so much
@@DesigneLearningTutorials I have a question about this. Do I understand it correctly that you calculate an individually adjusted threshold (using the Bonferroni method) for each test type (Mann-Whitney U, t-test, chi-square test, ...)? In that case this results in 2 adjusted thresholds, against which you compare your p-values depending on whether the variables are numeric or categorical?
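A rough sketch of what "divide by 4 for each test type" could look like in practice, assuming the data live in a pandas DataFrame; the file name and column names below (patients.csv, group, wbc, etc.) are made up for illustration:

```python
import pandas as pd
from scipy.stats import mannwhitneyu, chi2_contingency

df = pd.read_csv("patients.csv")  # hypothetical file with a 'group' column (A or B)

numeric_vars = ["age", "wbc", "systolic_bp", "diastolic_bp"]
categorical_vars = ["gender", "ethnicity", "disease_history", "diabetes"]

# One Bonferroni family per test type: alpha divided by the 4 tests in each family
alpha_numeric = 0.05 / len(numeric_vars)          # 0.0125
alpha_categorical = 0.05 / len(categorical_vars)  # 0.0125

group_a = df[df["group"] == "A"]
group_b = df[df["group"] == "B"]

# Mann-Whitney U test for each numeric variable
for var in numeric_vars:
    stat, p = mannwhitneyu(group_a[var], group_b[var], alternative="two-sided")
    print(var, round(p, 4), "significant" if p < alpha_numeric else "not significant")

# Chi-square test of independence for each categorical variable
for var in categorical_vars:
    table = pd.crosstab(df["group"], df[var])
    chi2, p, dof, expected = chi2_contingency(table)
    print(var, round(p, 4), "significant" if p < alpha_categorical else "not significant")
```

Whether to treat the numeric and categorical tests as two separate families of 4 (as above) or as one family of 8 is exactly the judgment call being discussed in this thread.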
OK, you should definitely have more subscribers, thanks for the video!
Please could someone explain: when doing ANOVA, how do you know whether to use Bonferroni, Tukey HSD, or linear contrasts? Please help.
What does "doing multiple comparisons" really mean? Is it using the same IVs repeatedly on multiple tests, or using the same DVs repeatedly on multiple tests? Or does it apply to both situations?
Suppose you have two groups (men and women) and want to compare how similar or different they are across five forms of motivation (e.g., social, financial, role, idea, and adventure). In that case, you are making multiple comparisons: five, in fact. You divide the original alpha by 5: 0.05/5 = 0.01.
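As a minimal sketch of that calculation (the motivation.csv file and the column names are made up for illustration; assumes scipy and pandas):

```python
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("motivation.csv")  # hypothetical: a 'sex' column plus one column per motivation scale
motivations = ["social", "financial", "role", "idea", "adventure"]

adjusted_alpha = 0.05 / len(motivations)  # 0.05 / 5 = 0.01

men = df[df["sex"] == "male"]
women = df[df["sex"] == "female"]

# One independent-samples t-test per motivation scale, judged against the adjusted alpha
for m in motivations:
    t, p = ttest_ind(men[m], women[m])
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{m}: t = {t:.2f}, p = {p:.4f} -> {verdict} at alpha = {adjusted_alpha}")
```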
@@DesigneLearningTutorials How can we report this alpha adjustment in the write-up?
@@zikryaiman7208 In your methods section (data-analysis subsection), explain that you ran, for example, five t-tests to explore variables 1-5 and that, to mitigate the inflated risk of type I errors, you used a Bonferroni-adjusted alpha of 0.01.
Excellent explanation... thanks 🙏🏼
Always a pleasure
Hello, I found your explanation very good because it is easy to understand, though I'm not totally convinced by the reason/example you give to explain why the risk of type I error increases. Suppose we have two levels and we find a significant difference very close to alpha. Later, we add more observations to the same dataset, but these additional observations belong to additional levels. We run the same t-test on the first two levels, now with a Bonferroni correction, and find that the previously significant difference is no longer significant. Why would this be, if we did not modify the original data?
My own interpretation is this: when there are many levels, a minor difference (a true difference in the smaller dataset) might become non-significant once we add more levels of the same variable with bigger differences among themselves. The Bonferroni correction tries to capture the notion that a minor difference must be even better supported when there are many levels of observation with large differences. Am I right in this interpretation?
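One way to convince yourself that the family-wise type I error rate really does grow with the number of tests is a quick simulation under the null hypothesis. This is only an illustrative sketch (10 independent two-sample t-tests per "study", all data drawn from the same distribution), not something shown in the video:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha, n_tests, n_sims = 0.05, 10, 2000
studies_with_false_positive = 0

for _ in range(n_sims):
    pvals = []
    for _ in range(n_tests):
        a = rng.normal(0, 1, 30)
        b = rng.normal(0, 1, 30)  # same distribution, so any "significant" result is a type I error
        pvals.append(ttest_ind(a, b).pvalue)
    if min(pvals) < alpha:        # at least one false positive somewhere in this family of tests
        studies_with_false_positive += 1

print("Family-wise error rate without correction:",
      studies_with_false_positive / n_sims)
# Expected to be roughly 1 - 0.95**10 ≈ 0.40; testing each p-value against alpha / n_tests
# (Bonferroni) instead brings the family-wise rate back down to about 0.05.
```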
This was sooo helpful! Thank you! 💛💛
This is such a fantastic explanation! You have an extraordinary talent for explaining things simply and effectively. Please keep making more videos. Thanks!
If I could ask a related question: in your example of the 5 different websites with the SUS score, should one first do an ANOVA or Kruskal-Wallis before proceeding to the t-tests or Dunn's test, respectively? Or is a simple t-test / Mann-Whitney test with Bonferroni correction correct in this case? I have the exact same case and I am totally confused. Thanks!
Hi Maurice. The tests you mentioned are all different. You choose the right test to run once instead of running one test then another. I recommend you read the book The Tao of Statistics. It explains which method to use in the simplest and most joyous way possible.
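For the 5-website SUS example, one common workflow is to run the omnibus Kruskal-Wallis test first and, only if it is significant, follow up with pairwise Mann-Whitney tests under a Bonferroni-adjusted alpha. The sketch below uses made-up SUS scores and assumes scipy; it illustrates that workflow, not necessarily the procedure recommended in the video:

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical SUS scores for the five websites
sus = {
    "A": [72, 68, 80, 75, 70],
    "B": [55, 60, 58, 62, 57],
    "C": [81, 85, 79, 83, 90],
    "D": [65, 70, 66, 68, 72],
    "E": [50, 52, 49, 55, 53],
}

# Omnibus test across all five websites
h, p_omnibus = kruskal(*sus.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_omnibus:.4f}")

# Post-hoc pairwise comparisons only if the omnibus test is significant
if p_omnibus < 0.05:
    pairs = list(combinations(sus, 2))   # 5C2 = 10 pairwise comparisons
    adjusted_alpha = 0.05 / len(pairs)   # 0.005
    for a, b in pairs:
        stat, p = mannwhitneyu(sus[a], sus[b], alternative="two-sided")
        flag = "significant" if p < adjusted_alpha else "not significant"
        print(f"{a} vs {b}: p = {p:.4f} ({flag})")
```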
This was a great explanation. Thanks so much!
My pleasure! Please like and subscribe to help others find this video 👍
That´s so well explained, thank you!
My pleasure
The type I and type II errors are mixed up.
Really great explanation, thanks! :)
Thank you kindly for the explanation, really helpful:)
Great explanation ! Thank You!
You are welcome!
Is it just me? I think you have mixed up the type I and type II errors. Correct me if I am wrong: type I is a false positive, type II is a false negative.
Yeah, I agree, he mixed those up really well.
I was lost for a second there, too.
Yes he made this mistake
you got me at 7:25 haha
My pleasure