These videos are great; they're really helping me with the statistical part of my master's thesis. For me personally, they would be even better if, at the end, you'd add how to report the statistical findings in APA style.
In the top right corner of the Crosstabs: Cell Display window there is a z-test section with the option "Compare column proportions" and then a sub-option "Adjust p-values (Bonferroni method)". Is there any reason not to use this option to make the adjustments?
you helped me more than the video, thanks!
Is this method also applicable to the likelihood ratio, in case we have violated the Pearson chi-square assumptions?
Thanks for the video. It would be even more helpful if you could show how to test post hoc differences between groups.
Thanks for your lesson, I really gained a lot from it. I would like to know more about chi-square, as I am writing my final-year project this year. Thanks!
Great video, but I am wondering if you have two cells that are significantly different, could you say they are different from each other? Or different from the rest? How do you make pairwise comparisons like post hoc analyses in a parametric test?
Hi there. I was wondering why we would use adjusted standardised residuals rather than just standardised? My Andy Field book points towards using standardised. Thanks.
Very informative video, thanks! Just a question: at about 2:54, you say that percentages are 33% under the null hypothesis. Shouldn't that be 25%, since the percentages add up to 100% over the four levels of SES?
Hi Tim,
I am a stats instructor in NY. I think I can help with your question.
Actually, we would need to ask SPSS to provide the expected percentages, which I am surprised he did not do. The expected percentages are based on the breakdown you would expect if your outcomes were randomly assigned. Although they always sum to 100%, they are not always equal (in fact, they often are not).
For example, if at University A there are 75% females and 25% males in the total sample, and there are science majors and arts majors, I might be interested in seeing whether the relative proportion of males to females in the science major is significantly different from what I would expect if major were randomly determined (i.e., not biased by gender). So my null, which I would compare my observed results to, would be the percentage of males (or females) I would expect to find in the science major if it has no gender bias. However, I would not expect (my null would not be) 50% males in the science major. This is because I have only 25% males in my overall university; thus, my null for the science major would be 25% males and 75% females. So if I were to observe that 50% of my science majors are male, this would be very different from what I would expect if major were assigned randomly (my null).
I could also calculate the expected count by taking the cross-product. For example, if I have 100 males (25%) and 300 females (75%) in my sample, and 100 science majors (25%) and 300 arts majors (75%), then the expected number of males among my science majors would be (percentage of students who are male) × (percentage of students who are science majors) × (number of students), or .25 × .25 × 400 = 25 male science students. If I observed many more or many fewer, I would suspect a gender bias.
Note: You could get the same thing by doing cross-products with the actual frequencies (rather than percentages): (total number of males) × (total number of science majors) / (total number of students) = 100 × 100 / 400 = 25.
I hope this helps!
Prof. Osgood
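Prof. Osgood's cross-product calculation above can be sketched in a few lines of Python. The counts are the hypothetical university figures from the comment, not data from the video:

```python
# Hypothetical marginal totals from the example above.
total_males = 100     # 25% of the sample
total_science = 100   # 25% of the sample
grand_total = 400

# Expected count under independence: cross-product of the
# marginal totals divided by the grand total.
expected_male_science = total_males * total_science / grand_total
print(expected_male_science)  # 25.0

# Equivalent calculation using the marginal proportions.
expected_from_props = (total_males / grand_total) * (total_science / grand_total) * grand_total
print(expected_from_props)  # 25.0
```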
+Tim Vanhoomissen
Neither! The expected fraction (under the null hypothesis of independence) is actually given in the bottom row of the table, and it is 29.3% (i.e. if the two factors are independent, the proportions of each SES will be the same for each major, down the columns). This is why 31.0% corresponds to a low POSITIVE z = 0.6: it's marginally more than you would expect.
Thanks very much. What is the difference with a Bonferroni adjustment?
If we have categorical data with four groups and I want to check which group is significant, how do I do that? And how do I find the alpha value?
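For what it's worth, one common convention (a sketch, not something shown in the video): a Bonferroni correction divides alpha by the number of comparisons (for cell-by-cell tests, the number of cells), and the critical z for the adjusted residuals rises accordingly:

```python
from scipy.stats import norm

# Sketch of a Bonferroni correction for cell-by-cell tests.
# Assumes a hypothetical 4 x 2 table, i.e. 8 cells to test.
alpha = 0.05
n_cells = 8

alpha_adj = alpha / n_cells            # per-cell alpha: 0.00625
z_crit = norm.ppf(1 - alpha_adj / 2)   # two-tailed critical z

print(round(z_crit, 2))  # roughly 2.73 instead of the usual 1.96
```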
Nailed it with this video, thanks!
Is there a way to do it in Excel?
Thanks for the video.
Great job, thanks
This was so helpful, thanks! Anybody know an easy way of doing this in Python??? :)
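Since a few people asked about doing this outside SPSS, here is one way to get the chi-square test plus adjusted standardised residuals in Python with numpy and scipy. The table below is made-up data, and the residual formula is Haberman's adjusted standardized residual, not anything taken from the video:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table of counts
# (rows = groups, columns = categories).
observed = np.array([[30, 20, 10],
                     [20, 30, 40]])

# Overall chi-square test of independence; `expected` holds
# the expected counts under the null.
chi2, p, dof, expected = chi2_contingency(observed)

# Adjusted standardized residuals:
#   (O - E) / sqrt(E * (1 - row proportion) * (1 - column proportion))
n = observed.sum()
row_prop = observed.sum(axis=1, keepdims=True) / n
col_prop = observed.sum(axis=0, keepdims=True) / n
adj_resid = (observed - expected) / np.sqrt(
    expected * (1 - row_prop) * (1 - col_prop)
)

# Cells with |adjusted residual| > 1.96 deviate from expectation
# at alpha = .05 (before any Bonferroni-style correction).
print(adj_resid)
```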
these are great.
This guy talks HUMAN!
Thank you, very clear.
1.96 is the cutoff for the adjusted residual.