Continuous by Continuous Interaction in Linear Regression (SPSS Tutorial Video #28)

  • Published Jan 23, 2025

Comments • 48

  • @weihuang559 • 3 months ago

    This is the clearest video I have ever seen. Thanks!!

  • @noai3484 • 3 years ago +1

    This saved my bachelor's thesis, thanks so much!

  • @jschulz • 3 years ago +3

    Can't wait for the simple effects video!

  • @hyunkim3672 • 3 years ago +1

    This is so helpful. Thanks! Also very impressed by all the replies you've been posting!

  • @mahmoudelmaghraby1548 • 4 months ago

    Thanks a lot. Why didn't you run the analysis with interaction using the raw scores?

  • @Dalenatton • 3 years ago +1

    Thank you so much for the informative video. I wonder if you could give us a link to your video where you explained how to determine whether the simple slope of each openness line is significant. Thank you so much in advance for your help!

  • @alicerosew • 3 years ago

    Do we also center the covariates?

  • @mary-janewheeler3394 • 3 years ago +1

    Really helpful video! Do you have a video that gives guidance on how to report the interaction effect within the write-up? Thanks!

  • @palmerja17231 • 3 years ago +1

    Great video, thank you! Can you direct me to the future video tutorial to test whether each of the three regression line fits are statistically significant?

    • @DataDemystified • 3 years ago

      Thank you! Unfortunately I have not made those yet, but they are high on my list! In the meantime, I suggest looking up either "spotlight analysis" or "floodlight analysis"...both do what I think you're looking for.

  • @ramikhatib2425 • 2 years ago +1

    Great job at explaining, your regression videos are truly awesome! Can you produce a video on how to probe simple slopes? Thanks!

  • @theresetodd6598 • 1 year ago +1

    Hi Jeff, I understand centering the variables allows for increased interpretation. I'm wondering what's the impact of not centering the variables. I ran my analyses initially without interaction terms and then added them to the model in SPSS (I used blocks to look at added effects of different types of variables on my model). When I originally ran my analysis I did not center my variables. I'm confused about how centering them will change my results. Is it possible to leave the uncentered variables in the model but use a centered interaction term?

    • @lc807 • 1 year ago

      I'm also wondering how the interpretation of the main effect coefficients changes when centering the main effect variables, as opposed to running the regression without centering them...

  • @JANUSKWEEPEER • 3 years ago

    Super useful indeed. Looking forward to the simple slope analysis .... Is this video still coming up?

    • @DataDemystified • 3 years ago

      Glad you found it useful! Yes, soon I hope! Lots of content to produce and this is definitely on the list.

  • @schester1543 • 3 years ago

    Thank you for this video! Very useful and easy to understand. I just wondered if you could provide a link to the future video 'simple slope' analysis you mentioned. Many thanks

  • @federicotedeschi3841 • 1 year ago

    Thank you. Is there a way to get (let's call the two predictors X1 and X2) the range of values of X1 for which the slope of X2 is statistically significant (or the other way round)?

  • @pablopinheiro5133 • 3 years ago +1

    Great video. Thanks. Greetings from Brazil.

  • @NM-ng4dq • 2 years ago

    Thank you for this video. I have a question. How can one run regression analyses per xx increase for a continuous variable?

  • @正经坨子 • 2 months ago

    Thank you very much for your video! I have a question: I need to divide one of my continuous variables, 'a', into three parts - 'a-high', 'a-mid', and 'a-low' - and then examine how their interaction with another continuous variable, 'b', affects the dependent variable in linear regression. Why is it that when I use your method to plot the results, they look different from the method used in the "Multi-level categorical x continuous interaction (linear regression) SPSS (SPSS Tutorial Video #35)" video? Thank you so much for your help!!

    • @正经坨子 • 2 months ago

      th-cam.com/video/_AEL2j4YVOQ/w-d-xo.html

  • @ModernDayDebate • 1 year ago

    That helped a ton, thank you!

  • @CarolineCha118 • 2 years ago

    Hi! How would you present these tables and graphs in APA formatting?

  • @lc807 • 1 year ago

    Thanks so much for the video! I'm a little confused about the interpretation of the main effect coefficients, as I had thought they each individually represent the change in the dependent variable when the other variable is at its mean, rather than when the other variable is 0, as you explained. Or did you mean that since the variables are mean-centered, when one equals 0 this effectively means it is at its mean value? Can you please clarify?

  • @johnmortenmichaelis5635 • 3 years ago +2

    Super useful! However is there a way in SPSS to visualize the interaction effects in a more presentable graph? I mean without the scatter, ideally in an APA format.

    • @DataDemystified • 3 years ago +1

      Thanks! Unfortunately I don't know a way to format the graphs easily for APA styling. When I do it for papers I recreate all my graphs in Excel and format accordingly. Sorry!

  • @qizhang203 • 4 years ago +1

    Thanks for your explanation. Yet I am confused about interpreting a positive interaction effect when the main effect I am concerned with is negative. Can you help me?

    • @DataDemystified • 4 years ago +1

      Hi Qi. Great question. The interpretation is the same regardless of the sign of the main effect. In other words, if you have a positive interaction, that means that the influence of one of your variables on the DV INCREASES as the other variable increases in value. The fact that the main effect of one of those IVs is positive or negative doesn't change that. I hope this helps!

    • @qizhang203 • 4 years ago

      @@DataDemystified Hi Jeff, thanks for your quick reply. Yes, I used to interpret interactions the way you said. Recently, though, I read a paper interpreting a positive interaction as decreasing the influence of the variable on the DV when the sign of the main effect is negative. My friends and I then discussed it, and our conclusion was that the interaction effect plays a strengthening role (vs. a weakening role) when the signs of the interaction and the main effect are the same (vs. different). That's why I asked you the question. After watching the video, I created a set of data and drew the graph the way you did, and it seemed to support our conclusion. I am sharing the data with you. Can you reply again after analyzing it? (IV: 2,4,3,3,6,5,4,3,4,3,7,5,2,8,5,9,9,6; moderator: 1,2,2,2,1,3,2,3,3,6,4,3,4,1,6,1,1,3; DV: 100, 90, 89, 99, 79,80,91,92,99,100,88,91,70,69,100,61,60,82)

    • @DataDemystified • 4 years ago +1

      ​@@qizhang203 Hi Qi, I think the confusion may be that even with the positive interaction, in your data, the net effect of the IV (or the Moderator) on the DV is still negative. However, it is less negative due to the interaction. When I run your data, I get this:
      Y = 115.33 - 7.198 IV - 6.575 Moderator + 1.92 * IV * Moderator
      Consider what happens when the IV = 1. Then the model reduces to:
      Y = 115.33 - 7.198 - 6.575 Moderator + 1.92 * Moderator
      Which means that the effect of the moderator on Y is (-6.575 + 1.92) = -4.655
      Now consider when IV = 2. The model now reduces to:
      Y = 115.33 - 7.198 * 2 - 6.575 Moderator + 1.92 * 2 * Moderator
      Which means that the effect of the moderator on Y is (-6.575 + 1.92 * 2) = -2.735
      In other words, as the IV increases, the influence of the Moderator on the DV becomes less negative, until it crosses 0 and then becomes positive.
      So, a positive interaction effect changes the way one variable influences the DV, depending on the level of that other variable. When the coefficient is positive, it increases the influence of the other variable as the first variable increases.
      I hope that helps
      (BTW, what you call an IV and what you call a Moderator is irrelevant...the model is symmetric, so both are moderating variables.)
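
The simple-slope arithmetic in this reply can be checked numerically. Here is a minimal pure-Python sketch (the least-squares solver and all variable names are mine, not from the video) that fits the interaction model to the data Qi shared and prints the moderator's simple slope at a few IV values; it should reproduce the coefficients quoted in the reply, up to rounding:

```python
# Fit Y = b0 + b1*IV + b2*Mod + b3*IV*Mod to the data shared in the
# thread above, using ordinary least squares via the normal equations.
# Pure standard library; a sketch, not production code.

iv  = [2, 4, 3, 3, 6, 5, 4, 3, 4, 3, 7, 5, 2, 8, 5, 9, 9, 6]
mod = [1, 2, 2, 2, 1, 3, 2, 3, 3, 6, 4, 3, 4, 1, 6, 1, 1, 3]
dv  = [100, 90, 89, 99, 79, 80, 91, 92, 99, 100, 88, 91, 70, 69, 100, 61, 60, 82]

# Design matrix: intercept, IV, moderator, interaction term
X = [[1.0, a, b, a * b] for a, b in zip(iv, mod)]

def ols(X, y):
    """Solve (X'X) beta = X'y by Gauss-Jordan elimination with pivoting."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    aug = [row + [t] for row, t in zip(xtx, xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(k):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [aug[i][k] / aug[i][i] for i in range(k)]

b0, b1, b2, b3 = ols(X, dv)

def moderator_slope(iv_value):
    """Effect of the moderator on Y at a fixed IV value: b2 + b3*IV."""
    return b2 + b3 * iv_value

for v in (1, 2, 3):
    print(f"IV={v}: moderator slope = {moderator_slope(v):.3f}")
```

The same function answers the symmetric question (the simple slope of the IV at a fixed moderator value) by swapping which coefficient is held fixed.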

  • @DeveshBatraonGPlus • 3 years ago +2

    This is a great video, thank you. I was wondering how we would quantify the effect size of the independent variable on the response variable? For instance, I'm using a similar setup with a mixed-effects model using the glmer function in R, and since I have both main and interaction effects for 'age', how can I calculate its effect size? I normally use odds ratios for this purpose. Also, should I even consider the main effects?

    • @DataDemystified • 3 years ago

      Hi Devesh. Effect sizes can mean different things in different disciplines. For instance, in psychology and related fields (e.g. Consumer Behavior), effect sizes are usually calculated to help standardize comparisons (see a good long list here: en.wikipedia.org/wiki/Effect_size). In economics, in contrast, effect size often means "change", as in, what % change did we see when XYZ happened. For regressions, we typically talk about R^2 (or adjusted R^2) for the overall model, and you can compute the CHANGE in R^2 to understand the influence of a single predictor. Alternatively, there is an effect size known as f^2, which I don't often use, but which can be computed at the individual-variable level. As for odds ratios, they make sense for logistic regressions, but not for linear regressions.
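
The ΔR²/f² route mentioned in this reply is simple arithmetic once you have the two R² values. A sketch (the R² values 0.30 and 0.36 are made up purely for illustration):

```python
# Cohen's f^2 for a single predictor, computed from the R^2 of the model
# with and without that predictor. R^2 values below are illustrative only.
r2_without = 0.30  # model excluding the predictor of interest
r2_with = 0.36     # full model including it

delta_r2 = r2_with - r2_without  # change in R^2 attributable to the predictor
f2 = delta_r2 / (1 - r2_with)    # Cohen's f^2 (local effect size)

print(f"delta R^2 = {delta_r2:.3f}, f^2 = {f2:.3f}")
```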

  • @contessaa1671 • 1 year ago

    Thank you, prof., for your great explanation. I do have a question though: I have a positive interaction term and want to run further analysis / understand the relationship by stratifying my moderator. A short explanation of how to proceed would be very helpful. For instance, do I add my interaction term to the analysis with my stratified moderator? Do I keep my other independent variable continuous, or do I group it as well? Please help.

  • @jenniferb163 • 3 years ago

    Hello, thanks so much for your videos. Is there a way to do this with 9 independent variables?

    • @DataDemystified • 3 years ago

      Hi Jennifer. Sure, it would basically be the same process, but you'll have a LOT of potential 2-way interactions: 9 choose 2 = 36 interaction terms. You'd just make an interaction term for every possible pairwise combination (a*b, a*c, a*d, etc.). Though I would STRONGLY caution you to correct for multiple comparisons if you do that (e.g. a Bonferroni correction).
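
The bookkeeping described in this reply can be sketched in a few lines (Python standard library; the predictor names x1..x9 are placeholders):

```python
# Enumerate every pairwise (2-way) interaction term for 9 predictors and
# compute the Bonferroni-corrected alpha. Predictor names are placeholders.
from itertools import combinations
from math import comb

predictors = [f"x{i}" for i in range(1, 10)]  # 9 hypothetical predictors

interactions = [f"{a}*{b}" for a, b in combinations(predictors, 2)]
alpha = 0.05 / len(interactions)  # Bonferroni: divide alpha by number of tests

print(len(interactions))  # 9 choose 2 = 36 pairwise terms
print(f"corrected alpha = {alpha:.5f}")
```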

    • @jenniferb163 • 3 years ago

      @@DataDemystified Thank you so much for replying. Yes, I just watched your Bonferroni correction video, and it changed my results entirely. Could I do this for only the significant variables? I have run 2 multiple regressions - would that mean I need to do 72? My dissertation is due tomorrow and my tutor told me to do an interaction plot. Only 3 variables were significant. Thank you again for your time.

    • @DataDemystified • 3 years ago

      @@jenniferb163 You have to apply the correction to all tests run...however many that might be. And yes, if you ran 72 tests, with no a priori (in advance) predictions, you apply a x72 penalty.

  • @durgadhakal9453 • 3 years ago

    Really great! I would like to ask one question. How do we interpret the whole thing if the interaction effect is not significant (but other values are the same)?

    • @DataDemystified • 3 years ago

      It just means that there is no dependency between the two predictor variables' effects on the dependent variable. So you can interpret the other coefficients on their own.

  • @pradhyumnbansal5188 • 4 years ago

    Hey Jeff, great video. I want to know why you created the third feature (age*openness) in your linear regression. Won't it cause collinearity issues with the other features?

    • @DataDemystified • 4 years ago +1

      Great question. It would not cause collinearity because we centered the two underlying variables. Beyond that, the only way to test the interaction effects of two predictor variables (IVs) on an outcome measure (DV) is to construct an interaction term as done here. The whole point of that term is to test dependencies. I hope that helps!

    • @pradhyumnbansal5188 • 4 years ago +1

      @@DataDemystified So I thought about it, and please let me know if I'm right. X1 = age, X2 = openness, and Y = importance. Since we centered our IVs, E[X1] = E[X2] = 0 and thus Cov(X1, X2) = E[X1*X2]. Therefore, to check this effect of correlation between X1 and X2, we add the interaction term X1*X2.

    • @DataDemystified • 4 years ago +2

      @@pradhyumnbansal5188 I think you're most of the way there. What the interaction (X1*X2) checks is whether the effect of X1 on Y depends on X2 (or, the complement: whether the effect of X2 on Y depends on X1). As in, the slope for X1's influence on Y differs as a function of X2 (if there is a significant interaction). If there is no interaction, then the slope of X1 on Y is invariant with X2. Does that help?
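
The centering point running through this exchange is easy to demonstrate numerically. A minimal sketch with toy data (the extreme case z = x, i.e. a variable interacting with itself to give a squared term, is chosen deliberately so the contrast is stark):

```python
# Why center before forming an interaction term? The raw product x*z can
# be nearly collinear with x itself; after mean-centering both variables,
# the product is far less correlated with them. Toy data for illustration.

def mean(v):
    return sum(v) / len(v)

def corr(u, v):
    """Pearson correlation, pure standard library."""
    mu, mv = mean(u), mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

x = list(range(1, 11))
z = list(x)  # worst case: a variable interacting with itself

raw_product = [a * b for a, b in zip(x, z)]  # uncentered x*z

xc = [a - mean(x) for a in x]                # mean-centered copies
zc = [b - mean(z) for b in z]
centered_product = [a * b for a, b in zip(xc, zc)]

print("corr(x, x*z), raw:       ", round(corr(x, raw_product), 3))
print("corr(xc, xc*zc), centered:", round(corr(xc, centered_product), 3))
```

With real (asymmetric) data the centered correlation will not be exactly zero, but the same qualitative drop is what makes the coefficients interpretable.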