"Have you ever read a research article only to say to yourself, I don't know how to interpret these findings" (NCCMT-URE, 2016)? Yes lady, everytime I read one. LOL.
There is a serious research design problem in the hypothetical study on teen mental health that serves as an example: it is entirely possible that the two groups of teens were not equal in terms of their mental scores prior to intervention. For example, let's say that the control group started at 6 and also ended at a mean of 6, whereas the intervention group started at a mean of 4 prior to intervention and ended at 6, you'd come to the conclusion that the intervention has no effect even though there was a 50% increase in the intervention group's mental health and no change in the control group.
Hi David. Thank you for your comment. You are correct! If you were to only compare mean scores after an intervention, you might not be able to see the full picture. We have used this fictional example to set up an explanation of comparing mean differences to calculate a standardized mean difference to summarize the results of many studies. The real analysis would take into consideration baseline differences.
For me, the explanation missed out the most important part of the calculation: the study outcomes have to be standardised to "a single common measurement". How? The video shows numbers from 1 to 10 (I assume) for the outcomes, but then when looking at the differences in SMD suddenly the range is -1 to +1? Have the numbers now been converted to some proportion - eg "9" become 0.9 etc?
Thanks for your comment. This video is meant to profile a high-level summary of how and when standardized mean differences are used. For more detail, the ‘single common measurement’ is calculated as standardized mean difference = (mean difference)/(standard deviation). This results in a unit-less number that gives you an approximation of the size of the effect. Since there are no units, you can’t determine the actual size of the effect, but it allows you to determine an approximate size of the effect, e.g., an SMD of 0 to 0.2 is a small effect, an SMD of 0.2 - 0.8 is a moderate effect, and an SMD greater than 0.8 is a large effect. While you don’t know the absolute size of the effect, it allows you to combine results of studies that used different units of measure.
Excellent video, thanks for sharing. One question though, can a SMD be superior to 1? I'm reading a SRMA with a SMD = 2.01. What does that mean ? Thanks !
Yes, a SMD can be superior to 1. A SMD expresses the mean difference in standard deviation units. So a SMD of 1 is equal to the mean for the treatment group being 1 standard deviation greater than the mean for the control group; and thus a SMD of 2 relays that the mean of the treatment group is 2 standard deviations greater than the mean for the control group. This rule of thumb by Cohen is often used to interpret the magnitude of the SMD: small, SMD = 0.2; medium, SMD = 0.5; and large, SMD = 0.8.” So a SMD of 2 is a very large difference. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed). Hillsdale, NJ: Erlbaum.
Hello, i'd like to ask if i have a study reporting mean changes from baseline at follow up and another study that just report mean finding at follow up. Can i group those study in the same forest plot if use SMD as the effect size ? thank you very much
Hi Abritho, thanks for your question. Does the study reporting the mean difference also report the mean finding at follow up? If both studies report the mean findings at follow up for both the intervention and control groups, then yes, you can include them in the same Forest plot. If you have further questions, please contact us directly at nccmt@mcmaster.ca.
@@nccmt Hi, similar to this question, I'm trying to work out the SMD in studies with follow-up data but there's a SD for both baseline and follow up. Which one do I use? For example: mean (SD) Baseline: 7.24 (4.04) Follow-up 5.19 (3.73) I've worked out the mean difference is 2.05 but I don't now whether to divide it by 4.04 or 3.73. I'd appreciate any help you're able to give. Thank you.
@@KoshVader Hello, thank you for your comment. Please connect with us directly over email at nccmt@mcmaster.ca so that we can provide more direction as we are able.
hello, im still confused regarding the SMD. an article states that "An SMD of zero means that the new treatment and the placebo have equivalent effects. If improvement is associated with higher scores on the outcome measure, SMDs greater than zero indicate the degree to which treatment is more efficacious than placebo, and SMDs less than zero indicate the degree to which treatment is less efficacious than placebo. If improvement is associated with lower scores on the outcome measure, SMDs lower than zero indicate the degree to which treatment is more efficacious than placebo and SMDs greater than zero indicate the degree to which treatment is less efficacious than placebo." so, one of my result is -3.01 and another result for different parameter is 0.89. please do help .
Thank you for your question. We will require some additional information in order to provide a response. Can you please email us at nccmt@mcmaster.ca with your question, and if possible, the article that contains the outcomes and data you are referring to?
Generally, dichotomous data are presented as rates of outcomes, or ratios of outcomes comparing the intervention group to the control group (relative risk, odds ratio). As the outcome is measured in the same way across both groups, there is no need to standardize. The exception to this is if you are trying to age standardize, when comparing two different populations, for example, in incidence or mortality.
Today is the day that I conquered SMDs.
"Have you ever read a research article only to say to yourself, I don't know how to interpret these findings" (NCCMT-URE, 2016)? Yes lady, everytime I read one. LOL.
I burst out laughing when she said that, hahah yes in fact that's exactly why I'm here :)
Thank you so much for this video! It helped me analyze my articles so much easier and I was able to write a great analysis. Thank you!!
Thank you!! This simplified it and made it so much easier for me to understand :)
Thank you great simplicity and clarity
There is a serious research design problem in the hypothetical study on teen mental health that serves as an example: it is entirely possible that the two groups of teens were not equal in terms of their mental health scores prior to intervention. For example, suppose the control group started at a mean of 6 and also ended at a mean of 6, whereas the intervention group started at a mean of 4 and ended at 6. You'd conclude that the intervention had no effect, even though there was a 50% increase in the intervention group's mental health score and no change in the control group.
Hi David. Thank you for your comment. You are correct! If you were to compare only the mean scores after an intervention, you might not see the full picture. We used this fictional example to set up an explanation of how mean differences are compared to calculate a standardized mean difference that summarizes the results of many studies. A real analysis would take baseline differences into consideration.
I can proudly say that I have conquered SMD🤣
For me, the explanation missed out the most important part of the calculation: how the study outcomes are standardised to "a single common measurement". The video shows outcomes on a scale of 1 to 10 (I assume), but when the differences are shown as SMDs the range is suddenly -1 to +1. Have the numbers been converted to some proportion, e.g. a "9" becomes 0.9?
Thanks for your comment. This video is meant to give a high-level summary of how and when standardized mean differences are used. In more detail, the 'single common measurement' is calculated as standardized mean difference = (mean difference) / (standard deviation). This yields a unitless number that approximates the size of the effect. Because there are no units, you can't determine the absolute size of the effect, but you can gauge its approximate magnitude, e.g., an SMD of 0 to 0.2 is a small effect, an SMD of 0.2 to 0.8 is a moderate effect, and an SMD greater than 0.8 is a large effect. While you don't know the absolute size of the effect, the SMD allows you to combine the results of studies that used different units of measure.
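The formula in the reply above can be sketched in a few lines. This is a minimal illustration, not the video's own calculation: the group means, SDs, and sample sizes are made up, and the pooled SD is one common choice of denominator (Cohen's d).

```python
# Sketch of SMD = (mean difference) / (standard deviation), using a pooled SD.
# All numbers below are hypothetical, not from the video.
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d with a pooled SD)."""
    return (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

# Two hypothetical studies on different scales give the same unitless SMD:
print(round(smd(7.0, 2.0, 50, 6.0, 2.0, 50), 2))     # 10-point scale -> 0.5
print(round(smd(70.0, 20.0, 50, 60.0, 20.0, 50), 2))  # 100-point scale -> 0.5
```

Because the difference is divided by the spread of the scores, the scale of the original instrument cancels out, which is what lets studies with different measures be pooled.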
Excellent video, thanks for sharing. One question though: can an SMD be greater than 1? I'm reading an SRMA with an SMD = 2.01. What does that mean? Thanks!
Yes, an SMD can be greater than 1. An SMD expresses the mean difference in standard deviation units, so an SMD of 1 means the mean for the treatment group is 1 standard deviation greater than the mean for the control group, and an SMD of 2 means the mean of the treatment group is 2 standard deviations greater than the mean for the control group. Cohen's rule of thumb is often used to interpret the magnitude of an SMD: small, SMD = 0.2; medium, SMD = 0.5; and large, SMD = 0.8. So an SMD of 2 is a very large difference.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
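Cohen's cut-offs in the reply above can be written as a small lookup. This is a hedged sketch: the function name and the "trivial" label for values below 0.2 are my own; the 0.2 / 0.5 / 0.8 thresholds come from Cohen (1988).

```python
# Rule-of-thumb labels for the magnitude of an SMD (thresholds from Cohen, 1988).
# The label for |SMD| < 0.2 is an informal convention, not from the reply above.

def cohen_label(smd_value):
    """Classify the absolute SMD using Cohen's conventional cut-offs."""
    d = abs(smd_value)
    if d < 0.2:
        return "trivial"
    elif d < 0.5:
        return "small"
    elif d < 0.8:
        return "medium"
    return "large"

print(cohen_label(2.01))  # the SRMA value asked about -> "large"
```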
The NCCMT thank you very much for your detailed answer ! It truly helped ! Cheers from Belgium 😉
@@nccmt "A SMD expresses the mean difference in standard deviation units" This should have been included in the video.
What software did you use to make this presentation?
good explanation. Thank you!!
Thanks, your explanation helped a lot
Hello, I'd like to ask: if I have a study reporting mean changes from baseline at follow-up and another study that just reports the mean finding at follow-up, can I group those studies in the same forest plot if I use SMD as the effect size? Thank you very much.
Hi Abritho, thanks for your question. Does the study reporting the mean difference also report the mean finding at follow-up? If both studies report the mean findings at follow-up for both the intervention and control groups, then yes, you can include them in the same forest plot. If you have further questions, please contact us directly at nccmt@mcmaster.ca.
@@nccmt Hi, similar to this question, I'm trying to work out the SMD in studies with follow-up data but there's a SD for both baseline and follow up.
Which one do I use?
For example: mean (SD)
Baseline: 7.24 (4.04)
Follow-up: 5.19 (3.73)
I've worked out that the mean difference is 2.05, but I don't know whether to divide it by 4.04 or 3.73.
I'd appreciate any help you're able to give. Thank you.
@@KoshVader Hello, thank you for your comment. Please connect with us directly over email at nccmt@mcmaster.ca so that we can provide more direction as we are able.
Hello, I'm still confused about the SMD. An article states that "An SMD of zero means that the new treatment and the placebo have equivalent effects. If improvement is associated with higher scores on the outcome measure, SMDs greater than zero indicate the degree to which treatment is more efficacious than placebo, and SMDs less than zero indicate the degree to which treatment is less efficacious than placebo. If improvement is associated with lower scores on the outcome measure, SMDs lower than zero indicate the degree to which treatment is more efficacious than placebo and SMDs greater than zero indicate the degree to which treatment is less efficacious than placebo." So one of my results is -3.01, and another result for a different parameter is 0.89. Please help.
Thank you for your question. We will require some additional information in order to provide a response. Can you please email us at nccmt@mcmaster.ca with your question, and if possible, the article that contains the outcomes and data you are referring to?
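The direction logic quoted in the question above can be sketched as a tiny helper. This only illustrates the quoted rule; the function name is my own, and whether -3.01 or 0.89 favours the treatment still depends on each scale's direction, which the thread does not specify.

```python
# Sketch of the quoted rule: whether a positive SMD favours the treatment
# depends on whether higher outcome scores mean improvement.

def favours_treatment(smd_value, higher_is_better):
    """True if the SMD indicates the treatment outperformed placebo,
    False if it underperformed, None if the effects are equivalent."""
    if smd_value == 0:
        return None  # equivalent effects
    return smd_value > 0 if higher_is_better else smd_value < 0

# Example: IF lower scores are better on this scale (an assumption),
# an SMD of -3.01 would favour the treatment:
print(favours_treatment(-3.01, higher_is_better=False))  # True
```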
What happens when the data are not continuous, e.g., dichotomous or ordinal?
Generally, dichotomous data are presented as rates of outcomes, or as ratios of outcomes comparing the intervention group to the control group (relative risk, odds ratio). Because the outcome is measured in the same way across both groups, there is no need to standardize. The exception is age standardization when comparing two different populations, for example in incidence or mortality rates.