Is this the appropriate method when you have multiple explanatory variables and multiple response variables? Or is it better to do each one individually, i.e., one table calculating the effect size for one of the response variables, then another for the effect size of temperature on that variable, etc., repeating for each response variable?
Great questions. If I'm understanding your question correctly, then there are two different main ideas we should discuss: 1) multiple predictors, and 2) multiple response variables.

For 1), I think what you're looking for is multiple meta-regression. This isn't used much in my field, but the process is similar to what I call a moderator analysis. Here are some resources:

Moderator analysis using a single variable as a moderator
Video tutorial for conventional models w/ categorical variables: th-cam.com/video/LIzp2tgdW9c/w-d-xo.html
Video tutorial for three-level models w/ categorical variables: th-cam.com/video/2Qd0NgK0Bkw/w-d-xo.html
Video tutorial for three-level models with correlated and hierarchical effects and robust variance estimation: th-cam.com/video/bL1wSs6heoc/w-d-xo.html

Note: You can do all three of these analyses with categorical or continuous moderators in Simple Meta-Analysis (SMA), and it will give you the R code to replicate the analysis. You can also do multiple meta-regression in SMA. SMA is open source and free, and can be run locally or on the web: learnmeta-analysis.com/pages/simple-meta-analysis-software

Long story short(er): If you are doing multiple meta-regression, you can set it up like a moderator analysis and put a + between the different moderator variables. There are some examples in the documentation below.

Written documentation on meta-regression:
Meta-regression in the metafor documentation, regarding intercepts: www.metafor-project.org/doku.php/tips:models_with_or_without_intercept
How-to example in metafor: cjvanlissa.github.io/Doing-Meta-Analysis-in-R/multiple-meta-regression.html
Statistical explanation and example in a different R package: bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/metareg.html#multiple-metareg

In terms of 2): When you say response variables, do you mean the outcome? For example, I work in the field of education, so a typical outcome would be 'learning'. Within learning, we may have a moderator variable that differentiates between recall and transfer. Meanwhile, a separate meta-analysis might be interested in the outcome of 'motivation', with a moderator that differentiates between, for example, self-efficacy and intrinsic motivation. Generally speaking, in my field I do not prefer analyzing unlike outcomes in the same analysis; by which I mean, for example, I see limited value in examining learning and motivation in the same meta-analysis. The reason is that you can't really draw a conclusion from that - what would the overall effect size mean? To make a broad generalization, it wouldn't mean much in my opinion - it would just mean some positive outcome, and that's generally not helpful for advancing theory and practice in relation to either outcome. Accordingly, in my field I would recommend that only conceptually similar measures be analyzed as outcomes in the same analysis, and the more similar the better. I hope that makes sense - if not, please let me know and I'm happy to try and clarify.
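To make the "put a + between the moderator variables" idea concrete, here is a minimal R sketch of a multiple meta-regression in metafor. It uses the dat.bcg example data set that ships with the package; the two moderators (absolute latitude and publication year) are just stand-ins for whatever predictors your own analysis uses.

```r
# Minimal multiple meta-regression sketch using metafor's built-in
# dat.bcg example data. Moderators here are placeholders.
library(metafor)

# Compute log risk ratios (yi) and sampling variances (vi)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)

# Multiple meta-regression: join moderators with '+' in the formula
res <- rma(yi, vi, mods = ~ ablat + year, data = dat)
summary(res)
```

The same `mods = ~ x1 + x2` formula syntax accepts both categorical and continuous moderators, which is why a multiple meta-regression can be set up just like a moderator analysis.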
I had an issue with my meta-analysis: the studentized residuals test was significant for half of the outcomes I was measuring, but Cook's distance was never significant. I have no clue how to deal with this other than the advice you gave (my team checked the data and checked the papers), and nothing was found, except that a few could be attributed to sample size. Should we report that sample size is the potential reason?
So if I'm understanding correctly, you're saying that about half the studies were outliers, but none were influential. That is quite interesting. I'm curious how many studies are in the data set? I'm assuming all the effect sizes are independent, since you're using a conventional meta-analysis model as opposed to a three-level model. I've not encountered anything like that, but I'm guessing there are not many studies. There are a lot of contextual factors that could influence how to deal with this, so I will refrain from commenting on that, other than suggesting that clear reporting in the paper is very important. Reviewers in your field may have suggestions on how to deal with it if they feel your approach was not the best.
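For readers following along, here is a short R sketch of the two diagnostics being compared in this exchange (studentized residuals for outliers, Cook's distances for influence), assuming a conventional random-effects model fitted with metafor; the data set is metafor's built-in dat.bcg example, not the commenter's data.

```r
# Outlier vs. influence diagnostics for a conventional random-effects
# model in metafor, illustrated on the built-in dat.bcg example data.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)
res <- rma(yi, vi, data = dat)

# Studentized (deleted) residuals: values beyond roughly +/- 1.96
# flag potential outliers
rstudent(res)

# Cook's distances: large values flag influential studies
cooks.distance(res)

# Combined influence diagnostics; flagged studies are marked with '*'
influence(res)
```

A study can be an outlier without being influential (e.g., a small study with a large sampling variance gets little weight), which is one plausible way to see the pattern described above.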
Hello Dr. Noah!
Thank you for this great video.
You are a great teacher!
Thank you for your kind words!
Concise and very helpful. Thank you very much
Glad it was helpful!