Run Meta-Analysis, Create Forest Plot, Examine Influence in R with metafor (How To Series with SMD)

  • Published Dec 19, 2024

Comments • 9

  • @alirezahekmati7632 • 2 months ago

    Hello Dr. Noah!
    Thank you for this great video.

  • @FC-os1mm • 3 months ago +1

    You are a great teacher!

  • @Moonstonn • 4 months ago +1

    Concise and very helpful. Thank you very much

  • @Liam-l7q7i • 21 days ago +2

    Is this the appropriate method when you have multiple explanatory variables and response variables? Or is it better to do each one individually, i.e., one table for calculating the effect size on one of the response variables, then another for the effect size of temperature on that variable, etc., repeating for each response variable?

    • @LearnMetaAnalysis • 20 days ago

      Great questions. If I'm understanding your question correctly, then you have two different main ideas we should discuss: 1) multiple predictors, and 2) multiple response variables. For 1), I think what you're looking for is multiple meta-regression. This isn't used much in my field, but the process is similar to what I call a moderator analysis. Here are some resources:
      Moderator analysis using a single variable as a moderator
      Video tutorial for conventional models w/ categorical variables: th-cam.com/video/LIzp2tgdW9c/w-d-xo.html
      Video tutorial for three-level models w/ categorical variables: th-cam.com/video/2Qd0NgK0Bkw/w-d-xo.html
      Video tutorial for three-level models with correlated and hierarchical effects and robust variance estimation: th-cam.com/video/bL1wSs6heoc/w-d-xo.html
      Note: You can do all three of these analyses with categorical or continuous moderators in Simple Meta-Analysis, and it will give you the R code to replicate the analysis. You can also do multiple meta-regression using SMA. SMA is open source and free, and can be run locally or on the web: learnmeta-analysis.com/pages/simple-meta-analysis-software
      Long story short(er): If you are doing multiple meta-regression, you can set it up like a moderator analysis and put a + between the different moderator variables (see the code sketch at the end of this reply). There are some examples in the documentation below.
      Written documentation on meta-regression:
      Meta-regression in metafor documentation regarding intercepts: www.metafor-project.org/doku.php/tips:models_with_or_without_intercept
      How-to example in metafor: cjvanlissa.github.io/Doing-Meta-Analysis-in-R/multiple-meta-regression.html
      Statistical explanation and example in different R package: bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/metareg.html#multiple-metareg
      In terms of 2): when you say response variables, do you mean the outcome? For example, I work in the field of education, so a typical outcome would be 'learning'. Within learning, we may have a moderator variable that differentiates between recall and transfer. Meanwhile, we may have a separate meta-analysis interested in the outcome of 'motivation', and then we could have a moderator that differentiates between, for example, self-efficacy and intrinsic motivation. Typically speaking, in my field I do not prefer analyzing unlike outcomes in the same analysis; by which I mean, for example, I see limited value in examining learning and motivation in the same meta-analysis. The reason is that you can't really draw a conclusion from that - what would the overall effect size mean? To make a broad generalization, it wouldn't mean much in my opinion - it would just mean some positive outcome, and that's generally not helpful in terms of advancing theory and practice in relation to either outcome. Accordingly, in my field I would recommend that only conceptually alike measures be analyzed as outcomes in the same analysis, and the more similar the better. I hope that makes sense - if not, please let me know and I'm happy to try to clarify.
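
      A minimal sketch of the multiple meta-regression setup described above (not code from the video; the data frame 'dat' and the moderator names 'temperature' and 'grade' are hypothetical placeholders, and the SMD inputs follow metafor's escalc() argument names):

        library(metafor)

        # Compute standardized mean differences (SMD) and their sampling variances
        dat <- escalc(measure = "SMD",
                      m1i = m1i, sd1i = sd1i, n1i = n1i,
                      m2i = m2i, sd2i = sd2i, n2i = n2i,
                      data = dat)

        # Multiple meta-regression: put a + between the moderator variables in 'mods'
        res <- rma(yi, vi, mods = ~ temperature + grade, data = dat)
        summary(res)  # moderator coefficients and the QM omnibus test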

  • @georgebashour4333 • 3 months ago

    I had an issue with my meta-analysis: the studentized residuals were significant for half of the outcomes I was measuring, but Cook's distance was never significant. I have no clue how to deal with this other than the advice you gave (my team checked the data and checked the papers), and nothing was found except that a few cases were attributed to sample size. Should we report that sample size is the potential reason?

    • @LearnMetaAnalysis • 3 months ago

      So if I'm understanding correctly, you're saying that roughly half the studies were outliers, but none were influential. That is quite interesting. I'm curious how many studies are in the data set? I'm assuming all the effect sizes are independent if you're using a conventional meta-analysis model as opposed to a three-level model. I've not encountered anything like that, but I'm guessing there are not many studies. There are a lot of contextual things that could influence how to deal with this, so I will refrain from commenting on that, other than suggesting that clear reporting in the paper is very important. Reviewers in your field may have suggestions on how to deal with it if they feel your approach was not the best.
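
      A minimal sketch of the outlier and influence diagnostics discussed in this thread (not code from the video; it assumes a conventional random-effects model fit to a data frame 'dat' that already has yi and vi columns, e.g. from escalc()):

        library(metafor)

        res <- rma(yi, vi, data = dat)

        rstudent(res)        # studentized (deleted) residuals -> flag potential outliers
        cooks.distance(res)  # Cook's distances -> flag influential studies

        inf <- influence(res)  # leave-one-out diagnostics combined in one object
        print(inf)
        plot(inf)              # plot the diagnostics study by study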