Thank you so much! This video has helped me tremendously in my research work! Excellent interpretation
Thank you for great video. How would you report these significant results and would you calculate an Odds ratio? many thanks for your help
Thank you so so much, you have helped loads! really appreciate it.
Thanks for that! I will use that explanation then.
Very good your videos, thanks sharing.
Hi, I have one doubt. I used the chi-square test of independence on a 2 x 3 table. My Pearson chi-square (Asymp. Sig., 2-sided) shows a significant value of 0.009. Then I used the standardized residuals as a post hoc to compare the differences between the groups. But oddly, none of the values were beyond +/-1.96. One of the categories had a value of -1.8, so I am not 100% sure whether this group was responsible for the variation in the test (I was expecting a value larger than 1.96). Any suggestions?
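For readers who want to reproduce this kind of check outside SPSS, here is a minimal Python sketch of the same workflow: a 2 x 3 chi-square test of independence followed by standardized residuals. The counts are invented purely for illustration, not taken from the commenter's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 3 contingency table (counts invented for illustration)
observed = np.array([[30, 14, 25],
                     [18, 28, 20]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")

# Standardized (Pearson) residuals: (observed - expected) / sqrt(expected).
# Cells beyond +/-1.96 are the usual flags for a significant contribution
# at the .05 level, which is the post hoc the commenter describes.
std_resid = (observed - expected) / np.sqrt(expected)
print(np.round(std_resid, 2))
```

SPSS reports these same standardized residuals in the Crosstabs output when they are requested under Cells.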
Great, thank you. What about Cramér's V... isn't it better when you have a design larger than 2x2?
Also (sorry for so many questions, but I can't find anything in textbooks), if you violate the expected-frequencies rule, can you
1.) do a Fisher's Exact test for a 2x4?
2.) still interpret Cramér's V or an odds ratio from that Fisher's Exact test?
So sorry! Many thanks again! Very much appreciated!
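On the Cramér's V point: it is indeed the usual effect-size choice for tables larger than 2x2, since it rescales chi-square by the sample size and table dimensions. A short sketch, again with invented counts for a hypothetical 2 x 4 table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 4 table (counts invented for illustration)
table = np.array([[12, 9, 15, 8],
                  [7, 14, 6, 13]])

chi2, p, dof, expected = chi2_contingency(table)

# Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1)));
# it ranges from 0 (no association) to 1 (perfect association).
n = table.sum()
k = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * k))
print(f"Cramer's V = {cramers_v:.3f}")
```

For a 2x2 table this formula reduces to the phi coefficient, which is why Cramér's V is often described as the generalization for larger designs.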
very useful! thank you
Thank you so much! This really helped.
If you were to report the standardized residuals in a paper, how would you go about writing them up?
I would report the residuals for each cell in table format, then provide an interpretation of the magnitude of the residuals in your results section. Example: the largest residual was in the cell comprising level 1 of factor A and level 2 of factor B.
Thank you for the help!
Hi, how do you get the value of 1.96?
Ng Mei Khon The value 1.96 is the z critical value associated with a two-sided probability level of 0.05.
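Concretely, 1.96 is the point on the standard normal distribution that leaves 2.5% in each tail, so +/-1.96 brackets the central 95%. A quick check in Python:

```python
from scipy.stats import norm

# z critical value for a two-sided test at alpha = .05:
# leave alpha/2 = 2.5% in each tail of the standard normal.
z_crit = norm.ppf(1 - 0.05 / 2)
print(round(z_crit, 3))  # 1.96

# For alpha = .01 (a 99% level) the cutoff widens to about 2.576.
z_crit_99 = norm.ppf(1 - 0.01 / 2)
print(round(z_crit_99, 3))
```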
What does the 0.02 for the chi-square mean? Does that reject or accept the hypothesis?
thanks a lot
Hi, thank you for the videos, really helpful. Just a question, please: what do we do if the assumptions are violated (more cells have an expected value of less than five)? Can we use the likelihood ratio? Thank you.
+S. Khelifi You could perform Fisher's Exact Test to correct for cells with an expected value less than 5.
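For reference, here is what Fisher's exact test looks like on a small 2 x 2 table in Python. Note that scipy's `fisher_exact` handles only 2 x 2 tables; extensions to larger tables such as the 2 x 4 asked about later in this thread (the Freeman-Halton extension) are what SPSS's Exact Tests module provides. The counts below are invented.

```python
from scipy.stats import fisher_exact

# Hypothetical 2 x 2 table with small counts, where the chi-square
# expected-frequency assumption would likely be violated.
table = [[2, 10],
         [8, 4]]

# Returns the sample odds ratio (a*d)/(b*c) and the exact p value.
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p:.4f}")
```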
+TheRMUoHP Biostatistics Resource Channel Can you perform Fisher's Exact test in SPSS for a 2x4?
Do you mean likelihood ratio won't be OK?
+Saber Khelifi You could use likelihood ratios.
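The likelihood-ratio statistic mentioned here (the G statistic, which SPSS prints as "Likelihood Ratio" in the Crosstabs output) can also be computed in Python; `chi2_contingency` switches to it via the `lambda_` option. Counts are invented for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 4 table (counts invented for illustration)
table = np.array([[5, 9, 3, 8],
                  [7, 2, 6, 4]])

# lambda_="log-likelihood" requests the likelihood-ratio (G) statistic
# instead of the default Pearson chi-square.
g, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
print(f"G({dof}) = {g:.3f}, p = {p:.3f}")
```

The G statistic follows the same chi-square reference distribution with the same degrees of freedom, so it is read the same way as the Pearson statistic.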
Hi, I have a significant chi-square p value, but when I look at the standardized residuals, none of the groups go beyond +/-1.96, testing at the .05 level. Two of my groups have values of 1.7 (one group +1.7 and the other -1.7). Any suggestions on how to interpret this, please?
That could be interpreted as an underpowered study in which no single cell accounts for the significance: there is overall significance, but no strong contribution from any one cell compared to the others.
I have 4 questions in a questionnaire that represent the dependent variable and 3 questions that represent independent variable 1. How can I combine all these questions to represent the dependent variable and independent variable 1? I want to use the dependent variable with the independent variable in a correlation analysis.
anna melissa As I understand your question, you want to combine the results of your dependent variables to create one composite variable and then do the same for your independent variables (all subjects would be in the same group). If I have that correct, then I don't believe you can accurately correlate two categorical variables; I don't know of a correlation coefficient designed to work with two categorical variables.
Hi,
Is it possible to have 3 options for the independent variable and 3 options for the dependent variable?
In my project:
Landfill is the independent variable; the 3 options are yes, no, and not sure.
Recycling is the dependent variable; the options are yes, no, and sometimes.
I gave the value 0 to yes, 1 to no, and 2 to not sure/sometimes.
I'm not sure whether the outcome is correct though - in the Case Processing Summary it says 4.9% are valid and 95.1% are missing. Is this because of the values I have used to represent the categories?
Would appreciate some help from anyone who is good with SPSS! Thanks!
Yes, a 3 x 3 is possible. If the output shows missing data, check whether any cells once contained data that was later deleted; a deleted value leaves a period in the cell, and SPSS interprets that as missing data.
Cheers, got it sorted!
How can I change from 95% to 99% using SPSS?
My test results are χ²(1) = 1.312, p = 0.252. What does this suggest? Please tell me how to interpret it.
+sanaa a. You would interpret that as there is no significant relationship between the independent variable and the outcome.