Super useful, helped me calculate interrater reliability for program assessment of student literature reviews. Thanks!
Thank you so much, this is exactly what I've been looking for.
Super helpful and clear thank you!
Great concise presentation, very useful! Much appreciated!
Thank you for uploading it!
THANK YOU SO MUCH!!!! On a time crunch and SPSS seems like it'd take too much time to even learn to use @_@. This video really helped.
You're welcome - thanks for watching.
Thank you so much for the well-explained video.
It really helped me very much.
You are an excellent teacher.
Great! Really useful! Thank you
Thanks for this very good video. The Excel functions make my life so much easier :-)
What if I have more than two values? E.g., not just 0 and 1, but 2, 3, 4, or even more?
@Lim Yufan - just code them as 0, 1, and 2. Hope this helps.
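For more than two categories, the same observed-vs-chance logic carries over; here is a minimal from-scratch sketch in Python (the ratings are made-up data, and this is an illustration rather than the video's Excel workflow):

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters over any number of categories."""
    n = len(r1)
    # Observed agreement: proportion of items where the two raters match.
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of the raters' marginal proportions,
    # summed over every category either rater used.
    m1, m2 = Counter(r1), Counter(r2)
    pe = sum(m1[c] * m2[c] for c in set(r1) | set(r2)) / n ** 2
    return (po - pe) / (1 - pe)

# Made-up three-category ratings (0, 1, 2).
rater1 = [0, 1, 2, 2, 0, 1, 2, 0, 1, 2]
rater2 = [0, 1, 2, 1, 0, 1, 2, 0, 2, 2]
print(round(cohen_kappa(rater1, rater2), 3))  # ~0.697 for this data
```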
the very important person
Thank you!!
such a wonderful and helpful video! Thanks a lot!
I'm glad you found the video useful. Thanks for watching.
Thanks! You saved me a lot of time.
very clear, thank you
thank you, very helpful!
Thanks Todd! This is great.
@Ben van Buren - Cohen's Kappa is used in many academic articles, but it did not originate in those applied papers. It comes from Jacob Cohen's 1960 article in Educational and Psychological Measurement. I'm using a more recent reference that covers it; the citation is:
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2013). Applied multiple regression/correlation analysis for the behavioral sciences. Routledge.
Thank you for your video. Could you explain how to handle ratings that are missing, where one rater recorded a score and the other did not?
Hi! How do you calculate confidence intervals and standard error for the kappa values using Excel? Thank you for your very helpful video.
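In case it's useful, one common large-sample approximation for the standard error (not covered in the video; more exact formulas exist, e.g., Fleiss, Cohen & Everitt, 1969) can be built from the same quantities the spreadsheet already computes:

$$SE_\kappa \approx \sqrt{\frac{P_o\,(1-P_o)}{N\,(1-P_e)^2}}, \qquad 95\%\ \mathrm{CI}:\ \kappa \pm 1.96\,SE_\kappa$$

In Excel this is roughly =SQRT(Po*(1-Po)/(N*(1-Pe)^2)), where Po, Pe, and N are placeholders for whichever cells hold those values in your sheet.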
I have a very strange Kappa result: I checked for a certain behavior in footage of animals, which I assessed twice. For 28 animals, the two assessments agreed 27 times that the behavior was present and disagreed only once (the behavior was present in the first assessment but not in the second). My data is organized as the following matrix:
                            First assessment: absent   First assessment: present
Second assessment: absent              0                           1
Second assessment: present             0                          27
And that gives me a Kappa value of zero, which I find very strange because I disagree in only 1 of 28 assessments. How come these results are considered pure chance?
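For what it's worth, running these numbers through the kappa formula shows what happens: because every first-assessment call is "present", the expected chance agreement equals the observed agreement exactly:

$$P_o = \frac{27}{28}, \qquad P_e = \frac{28}{28}\cdot\frac{27}{28} + \frac{0}{28}\cdot\frac{1}{28} = \frac{27}{28}, \qquad \kappa = \frac{P_o - P_e}{1 - P_e} = \frac{0}{1/28} = 0$$

When one rating series uses only a single category, kappa cannot separate observed agreement from chance; this prevalence problem is sometimes called the kappa paradox.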
Very informative. However, what do you do when:
a) Pe is 1 (and then the denominator, 1 − Pe, is zero)? Assume the Kappa is 1?
b) you get a very low Kappa when the raters agree on all but one of the ratings? Surely it should be higher? I have 2 raters and 20 subjects. If they agree on 19 and differ on 1, the Kappa is nearly 0.
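On (a): when Pe = 1 the denominator 1 − Pe is zero, so kappa is undefined (0/0), not 1; that only happens when both raters use a single identical category. To make (b) concrete, here is a small sketch with made-up data, assuming scikit-learn's cohen_kappa_score is available: the same 19-of-20 agreement gives kappa = 0 when every agreement falls in one category, but 0.9 when the agreements are spread across both categories.

```python
from sklearn.metrics import cohen_kappa_score

# Case 1: 19/20 agreement, but every agreement is in the same category.
r1 = ["yes"] * 20
r2 = ["yes"] * 19 + ["no"]
print(cohen_kappa_score(r1, r2))   # 0.0 -- chance agreement is also 19/20

# Case 2: the same 19/20 agreement spread across both categories.
r1 = ["yes"] * 10 + ["no"] * 10
r2 = ["yes"] * 10 + ["no"] * 9 + ["yes"]
print(cohen_kappa_score(r1, r2))   # 0.9
```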
Hi Todd. Can you do Fleiss' Kappa in Excel as well?
Dr. Grande: what would you do if the Kappa agreement turns out to be too low? Should both coders recode the material in order to match and increase the value? Or what do you suggest? Thanks in advance.
It depends on the field. In mine, when the value turns out too low, it is necessary to hold a discussion to troubleshoot the disagreements, and then re-code the material.
Suppose you have five categories from low to high. Since it is not dichotomous as it is here, do you still use the same approach?
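For ordered categories like this, weighted kappa is the usual extension, since it gives partial credit for near-misses; a minimal sketch with made-up 1-to-5 ratings, assuming scikit-learn is available:

```python
from sklearn.metrics import cohen_kappa_score

# Made-up ordinal ratings on a 1-5 scale.
r1 = [1, 2, 3, 4, 5, 3, 2, 4]
r2 = [1, 2, 4, 4, 5, 2, 2, 5]

print(cohen_kappa_score(r1, r2))                       # unweighted: all misses equal
print(cohen_kappa_score(r1, r2, weights="linear"))     # penalty grows with distance
print(cohen_kappa_score(r1, r2, weights="quadratic"))  # penalty grows with distance^2
```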
Probably need a little more discussion of sensitivity and specificity, although I expect it's also addressed in some other videos and in the book.
Hi. What about calculating sample size for Kappa? Do you think it is problematic to set the null hypothesis at K = 0.0? I believe this would be the same as what others call setting K1 = 0.0, although many state that K1 should be the minimum K expected. Thanks.
Can this test be used to measure reliability of categorical data?
I wish my lecturer could explain like you do.
Thank you!!!
Dear Dr. Grande,
I have perhaps a simple question. The researcher and the RA are people who give their responses to the survey, right? So this number can then be very high. And I've got 5 criteria, like "satisfactory", etc. But I think I've understood how to do this: I should probably split the people giving responses into groups in order to come up with the coefficient.
I imagine this would be helpful in research pertaining to rating the acquisition of counseling skills in student counselors.
It's really helpful...
Excellent video!!! Thanks!
You're welcome!
excellent
Many thanks!
Should the researcher and the research assistant have the same experience, or not?
then you take Kendall out for a spin
A bit slow-paced but otherwise an excellent video, thanks.