Thank you for great explanations. Found you searching for Fleiss Kappa and now keep watching other videos for the sheer pleasure.
No one can explain things as clearly as you do. Perfect!
Many, many thanks for your nice feedback! Regards Hannah
Thank you for such an excellent yet simple explanation 👍
Excellent, many thanks!
Glad it was helpful and many thanks for your feedback! Regards Hannah
Beautifully explained!
Glad it was helpful!
Many thanks for the clear explanation!
Such a perfect explanation! Thank you ❤
Glad it was helpful!
Thank you! Such an easy and good explanation!
Glad it was helpful!
Great explanation of the kappa, thank you very much!!!!!! :)
Glad you liked it!
Thank you so muchhhhhh!!!!! A great explanation!
Many thanks!
Thanks for your video. It is really helpful.
Glad it was helpful! Regards Hannah
great explanation
Thanks for the great video. Could you perhaps do a video where you use articles to calculate the Kappa?
Great suggestion! I will put it on my to-do list!
Hello. Can you do an example where you use Cohen's Kappa but there are 3 raters? Thank you.
Hi, then you use Fleiss Kappa, here is our video: ruclips.net/video/ga-bamq7Qcs/видео.html Regards, Hannah
@@datatab Can you do an example where you use Cohen's Kappa for 6 raters?
Can Cohen's Kappa be used for 5 raters?
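For the questions above about 3, 5 or 6 raters: Cohen's Kappa is defined for exactly two raters, so with more raters you would switch to Fleiss' Kappa, as in the reply above. A minimal sketch of how that could look in Python, using statsmodels; the ratings below are made up for illustration, not taken from the video:

```python
# Minimal sketch: Fleiss' kappa for more than two raters (statsmodels).
# Rows = subjects, columns = raters, values = assigned category
# (here 0 = "no", 1 = "yes"); the data is made up for illustration.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [1, 1, 1, 0, 1, 1],   # subject 1, rated by 6 raters
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 1],
])

# aggregate_raters converts the raw ratings into a subjects x categories
# count table, which is the input format fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
print(fleiss_kappa(table, method="fleiss"))
```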
Excellent, thank you very much
Glad it was helpful!
Wonderful Explanation!!! Thanks
Hi, excellent lesson. I would like to know how to proceed when one rater's response is only yes or only no for all questions.
Can you use Cohen's Kappa when two different instruments measure the same thing, rather than two individual raters?
Great video! Thanks.
Glad it was helpful! Thanks for your nice feedback! Regards Hannah
well explained
Many thanks : )
What if I have four levels? Thank you.
1 = definitely exclude
2 = not sure, but tendency to exclude
3 = not sure, but tendency to include
4 = definitely include
Hi, then you can also use Cohen's Kappa; since your four levels are ordered, the weighted Cohen's Kappa may be even more suitable.
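In case a code sketch helps: with ordered categories like these, weighted kappa counts a near miss (e.g. 3 vs 4) as less severe than a complete disagreement (1 vs 4). A minimal example with made-up ratings using scikit-learn; whether to use linear or quadratic weights is a choice you have to justify for your study:

```python
# Minimal sketch: unweighted vs weighted Cohen's kappa for an ordered
# 4-level scale (1 = definitely exclude ... 4 = definitely include).
# The ratings are made up for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 2, 4, 3, 4, 1, 2, 3, 4, 2]
rater_b = [1, 3, 4, 3, 3, 1, 2, 4, 4, 2]

print(cohen_kappa_score(rater_a, rater_b))                       # unweighted
print(cohen_kappa_score(rater_a, rater_b, weights="linear"))     # near misses count partially
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))  # large disagreements penalized more
```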
2:15 inter-rater reliability
5:45 calculating
Thank you!
You're welcome! : ) Regards Hannah
Excellent explanation. Thanks a lot!
Glad you liked it!
@@datatab You are my life-saver for my statistics subject. Thanks so much from the bottom of my heart.
Very informative video; however, I have a question. When calculated manually, Kappa was found to be 0.44, while using DATAtab it was 0.23. Why this difference?
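Hard to say without seeing the data, but one way to narrow such a difference down is to recompute kappa by hand from the confusion table and cross-check it against a library implementation. A minimal sketch with made-up counts (not the commenter's data), using Python and scikit-learn:

```python
# Minimal sketch: Cohen's kappa by hand from a 2x2 table, cross-checked
# against scikit-learn. The counts are made up for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# rows = rater A (yes/no), columns = rater B (yes/no)
table = np.array([[20, 5],
                  [10, 15]])
n = table.sum()

p_o = np.trace(table) / n                                    # observed agreement
p_e = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2   # agreement expected by chance
kappa_manual = (p_o - p_e) / (1 - p_e)

# The same table expanded into two rating vectors for the library call
rater_a = np.repeat([1, 1, 0, 0], table.ravel())
rater_b = np.repeat([1, 0, 1, 0], table.ravel())

print(kappa_manual, cohen_kappa_score(rater_a, rater_b))     # both should print the same value
```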
very informative
Superb, ma'am!
👍
Perfect
Thanks!
Can this test be used for more than 2 categories rated by two raters, e.g. depressed, not depressed, unknown?
How is Cohen's Kappa different from Cronbach's Alpha?
Very good video.
Thank you!
How did you get 0.72?
Randolph's K = 0.84, what does it mean?
Why did you make an example where the sums of the different variables are identical, two times 25? Now I have a headache.
Oh sorry!
Thank you so much for your effort. I have 2 questions:
Is 0.081 a good Kappa value?
Can I run Cronbach's alpha test to assess the agreement between two raters?
0.081 is very bad. There are debates about what is considered good, and it also depends on the context of the research. In general, I'd say that in most cases anything above 0.5 is where it starts to get interesting.
The video actually talks about good and bad values from 9:25 onwards.
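For reference, the standard definition behind these values is

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

where p_o is the observed agreement and p_e the agreement expected by chance, so kappa = 0 means agreement no better than chance and kappa = 1 means perfect agreement; that is why a value like 0.081 is considered very low.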