00:00 Presentation
01:45 the Forest Plot
05:30 Overview of the exposition
07:30 The key to statistical significance is transparency and replicability
08:30 Background on Meta-analysis
12:12 the focus of meta-analysis, definition, ways to measure it
17:30 Computing effect sizes
19:27 Standardized mean difference
20:19 How do you make different types of outcome comparable? Standardization - correlation coefficient, normal distribution
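The standardized mean difference mentioned above can be sketched as follows — a minimal Python example with made-up group summaries (Cohen's d, plus the small-sample correction known as Hedges' g):

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference: group difference over the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with the small-sample bias correction (Hedges' g)."""
    d = cohens_d(m1, sd1, n1, m2, sd2, n2)
    correction = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return correction * d

# Made-up data: treatment mean 25, SD 5, n 30 vs control mean 20, SD 5, n 30
print(cohens_d(25, 5, 30, 20, 5, 30))  # 1.0
```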
21:30 What do you do if a study reports dichotomous outcomes?
23:15 Nice effect size calculator
26:15 Basics of Meta-analysis
27:55 Determining the mean effect size; the effect size is not measured equally precisely in all studies
30:20 Variance-weighted meta-analysis as a solution to unequal effect size precision; it also helps us find out how precise our mean is
31:15 Once we have the effect size measures, it's time to analyze them
33:01 We take the log of the RR or OR just to make the measures more symmetrical; then we use inverse variance weights
34:05 All we need to note about the inverse variance weight formulas is that they roughly track sample size
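For the dichotomous case (21:30, 33:01), here's a minimal sketch of the log odds ratio and its approximate variance from a 2x2 table, with made-up counts; the inverse of that variance is the study's weight:

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its approximate variance from a 2x2 table:
       a = treatment events, b = treatment non-events,
       c = control events,   d = control non-events."""
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d  # standard approximation
    return log_or, var

# Made-up trial: 15/100 events in treatment, 25/100 in control
log_or, var = log_odds_ratio(15, 85, 25, 75)
weight = 1 / var  # inverse variance weight for pooling
```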
35:30 Some studies report more than one effect size; to keep statistical independence we need to take one effect size per study, because if we take all of them, some studies will be louder than others.
37:30 Checklist to be ready for meta-analysis: effect sizes, the inverse variance weights of those effect sizes, and both grouped into subsets that answer the same research question.
37:45 Calculate the mean of the effect sizes using the inverse variance weights; this is not a plain average, it's a weighted average
38:40 The variance of that weighted mean is the INVERSE of the sum of the weights (so the standard error is its square root)
39:10 Once we have that we can calculate confidence intervals
40:15 Now we can calculate a z test from it
42:00 With the z test we can then calculate the p value
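The fixed-effect steps from 37:45 to 42:00 (weighted mean, standard error, confidence interval, z test, p value) can be sketched with made-up effect sizes and variances:

```python
import math

# Made-up effect sizes and their sampling variances
effects = [0.8, 0.1, 0.6, -0.2]
variances = [0.04, 0.02, 0.09, 0.03]

weights = [1 / v for v in variances]           # inverse variance weights
mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))               # SE of the weighted mean
ci = (mean - 1.96 * se, mean + 1.96 * se)      # 95% confidence interval
z = mean / se
p = math.erfc(abs(z) / math.sqrt(2))           # two-sided p from the z test
```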
42:50 We need to do a homogeneity test to know whether the values are really explaining something
43:45 If we find heterogeneity, the studies are not all telling the same story, which hints that the pooled values may be an oversimplification of the problem
44:10 If that's the case, you have options: you can model the differences between the studies; a random-effects model is one example
44:40 Homogeneity Analysis
46:40 Homogeneity analysis: the Q statistic is a weighted sum of squares (it looks like the numerator of the variance formula) and is distributed as chi-square with k - 1 degrees of freedom, where k is the number of effect sizes
47:30 If Q is statistically significant, you have heterogeneity
48:00 On the other hand, a non-significant Q can also just mean a small sample size, so keep in mind that a non-significant Q doesn't automatically mean homogeneity
48:18 I-squared is an alternative to Q (though not fully independent, since it is computed from Q); it measures how much heterogeneity there is.
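A minimal sketch of Q and I-squared from the same kind of inputs (made-up numbers); I-squared re-expresses Q as the percentage of variability beyond sampling error:

```python
# Made-up effect sizes and sampling variances
effects = [0.8, 0.1, 0.6, -0.2]
variances = [0.04, 0.02, 0.09, 0.03]
weights = [1 / v for v in variances]
mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Q: weighted sum of squared deviations from the weighted mean
q = sum(w * (e - mean) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1                      # k - 1 degrees of freedom
i_squared = max(0.0, (q - df) / q) * 100   # percent heterogeneity
# Compare q against the chi-square critical value with df degrees of freedom
```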
49:45 This kind of meta-analysis is called fixed-effects meta-analysis, and ironically it's not recommended on its own; it's better to do random-effects meta-analysis
50:20 How does random-effects meta-analysis work? It assumes that, beyond sampling error, there is a distribution of true random effects producing heterogeneity; that heterogeneity is treated as part of the process rather than an imperfection of the investigative procedure itself, and it is the heterogeneity we really want to measure. The problem with this assumption is that random-effects meta-analysis takes these random effects to be normally distributed, which is still debated.
55:00 You should use random effects; if you feel that isn't enough you can add fixed effects, but never fixed effects alone
55:35 How do you do random effects? You need a new set of weights
56:20 This set of weights has to include the sampling error plus the between-study variability (which the fixed-effects model doesn't count)
56:35 This between-study variability is estimated from Q: we calculate tau-squared, the constant added to each study's sampling variance before inverting to get its new weight
58:20 Once we have the new set of weights, we just recompute the weighted mean, its standard error, and the confidence interval (a bigger SE means wider confidence intervals)
59:15 There are different ways to estimate tau-squared
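One common way is the DerSimonian-Laird estimator; a minimal sketch with made-up data showing tau-squared and the resulting random-effects weights and mean:

```python
import math

# Made-up effect sizes and sampling variances
effects = [0.8, 0.1, 0.6, -0.2]
variances = [0.04, 0.02, 0.09, 0.03]
weights = [1 / v for v in variances]
mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
q = sum(w * (e - mean) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# DerSimonian-Laird: excess of Q over its expectation, rescaled
c = sum(weights) - sum(w**2 for w in weights) / sum(weights)
tau2 = max(0.0, (q - df) / c)             # between-study variance

# New weights: sampling variance plus tau^2, then inverted
re_weights = [1 / (v + tau2) for v in variances]
re_mean = sum(w * e for w, e in zip(re_weights, effects)) / sum(re_weights)
re_se = math.sqrt(1 / sum(re_weights))    # larger SE, wider intervals
```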
01:01:05 Once you have computed heterogeneity and created the forest plot, it's always a good idea to ask whether you can tell where this variability comes from.
01:01:30 These explanations can come from your coding procedure, where you have recorded the differences between the studies.
01:02:00 So you use moderator analysis to compare the identified differences; for example, you can identify slight differences between procedures and compare them using categorical (ANOVA-style) models
01:03:15 If your moderator is continuous, e.g. degree of implementation quality, and you want to know whether it affects the effect size, you have to use a meta-analytic form of regression, not an ordinary statistical regression.
01:04:15 Once you have decided to use a moderator analysis, you then run a fixed and/or (ideally) random-effects meta-analysis of each subset, because you then have to explain where the diversity comes from. Random effects plus moderators = mixed effects
01:05:00 Example of a categorical (ANOVA-style) model with random (mixed) effects; this compares the variability between the randomized and non-randomized studies. You should not take statistical significance as an indicator of heterogeneity, the same way you didn't with the study effect sizes themselves, because that significance can simply reflect differing sample sizes; instead use the Q test, the meta-analytic analogue of a t test (DON'T use a t test), to measure the difference between the group means.
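The Q-between idea can be sketched by partitioning the overall Q into within-group and between-group parts; the subgroups and numbers below are made up:

```python
def group_stats(effects, variances):
    """Weighted mean and Q for one group of effect sizes."""
    weights = [1 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - mean) ** 2 for w, e in zip(weights, effects))
    return mean, q

# Made-up subgroups: randomized vs non-randomized studies
_, q_rand = group_stats([0.8, 0.6], [0.04, 0.09])
_, q_nonrand = group_stats([0.1, -0.2], [0.02, 0.03])
_, q_total = group_stats([0.8, 0.6, 0.1, -0.2], [0.04, 0.09, 0.02, 0.03])

q_within = q_rand + q_nonrand
q_between = q_total - q_within   # chi-square with (groups - 1) df
# A large q_between suggests the moderator explains real heterogeneity
```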
01:06:40 This is how you determine whether the variable you chose can account for the variability that led to heterogeneity, i.e. the reason you have noise in your data
01:07:00 For meta-analytic regression you must use specialized software, because it is not ordinary statistical regression; it is a special regression we can call meta-analytic regression.
01:08:20 With meta-analytic regression you get the regular features of an ordinary regression analysis; the Q test replaces the F test of normal statistics and tells us whether the regression explains variability. If it does, you then check whether the coefficients are statistically significant; if they are, the moderator accounts for variability.
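A minimal fixed-effect meta-regression sketch with one made-up moderator, computing the slope and the Q_model statistic that plays the role the F test plays in ordinary regression (weighted least squares with inverse variance weights):

```python
# Made-up data: effect sizes, sampling variances, and a moderator
effects = [0.8, 0.6, 0.1, -0.2]          # effect sizes
variances = [0.04, 0.09, 0.02, 0.03]     # sampling variances
quality = [9.0, 8.0, 4.0, 2.0]           # moderator: implementation quality

weights = [1 / v for v in variances]
wsum = sum(weights)
x_bar = sum(w * x for w, x in zip(weights, quality)) / wsum
y_bar = sum(w * y for w, y in zip(weights, effects)) / wsum

# Weighted least squares slope for a single predictor
sxx = sum(w * (x - x_bar) ** 2 for w, x in zip(weights, quality))
sxy = sum(w * (x - x_bar) * (y - y_bar)
          for w, x, y in zip(weights, quality, effects))
slope = sxy / sxx

# Q_model: variability explained by the regression, chi-square with 1 df here
q_model = slope**2 * sxx
```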
01:09:15 So in summary: look at the average, its confidence interval, then homogeneity; if there is heterogeneity, try to explain where it comes from through a moderator model, for which you again use the average, confidence interval, and Q test to figure out whether the variables you chose to explain the heterogeneity actually explain it.
01:09:45 Forest plot, overview
01:11:00 Question about the graphic presentation of the Forest Plot
01:13:45 Comments on Software
01:16:30 Final Comments
01:17:15 Take note of publication selection bias, because it's a big topic; it's important to get the proper studies in and keep the improper ones out, because that determines the effect sizes you end up with at the end of the day.
01:17:30 Common errors
I hope this gives a reference overview to explore. Thanks for the video, amazing information.
THANKS
Thank you
My left ear is now very well informed. :)
I've been looking for how to calculate the effect sizes for my meta-analysis... thanks very much, you're the answer to my prayers.
Great lecture. It has been very helpful. Thank you.
This was a great lecture. Really vivid way of explaining, thank you.
The best explanation available. Thank you!
Great video and overview of meta-analysis! thanks!
Absolutely, a wonderful lecture.
Great! This lecture is very helpful.
This is very clear and helpful. Thanks!
Thank you so much.
Very helpful and informative. Thank you!
very nice.. more videos if possible
Very nice and useful
you're right
SPSS add-on macros can be downloaded here sites.google.com/site/ahmaddaryanto/meta-analysis-macros
Hey, can anyone help me reference the above formulas? Is there a citation that includes all of them? Any help?
Thank you so much!