So how do you interpret the results against the problem? Kappa says they have a propensity to randomly agree 1/6 of the time?…
Good question. I should have said in the video, but for interpretation: positive K means the two raters agree more than by random chance, zero K means they agree no more than by random chance, and negative K means they disagree more than by random chance.
@@ritvikmath You should pin this chain. It's an important question and a very helpful answer.
Also, I assume we can calculate K for more than 2 raters, am I right?
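The interpretation in the reply above can be sketched in a few lines. This is a minimal, assumed implementation of the standard two-rater Cohen's kappa (the data is made up for illustration, not taken from the video):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters labeling the same items."""
    n = len(r1)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: for each category, the product of the two
    # raters' marginal probabilities, summed over all categories.
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Perfect agreement -> kappa = 1 (positive: better than chance)
print(cohens_kappa(["a", "b", "a", "b"], ["a", "b", "a", "b"]))  # 1.0
# Systematic disagreement -> kappa = -1 (negative: worse than chance)
print(cohens_kappa(["a", "a", "b", "b"], ["b", "b", "a", "a"]))  # -1.0
```

For more than two raters, this two-rater formula doesn't apply directly; Fleiss' kappa is the usual generalization.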
Great video. Very intuitive and insightful. Thank you!
I can’t believe what I am watching.
Just 30 minutes ago I was reading about Kappa because it's part of a research paper that I need to submit soon.
I LOVE you❤❤❤
Amazing!
That was such a beautiful Cohen's Kappa explanation video!!! Neat and crisp, I understand this way better than before now. Thank you, Sir Ritvik 🫡
Thanks!!!! 🫡
This reminds me of psychometrics and the many reliability coefficients. Cool and intuitive explanation!
Thanks!
Very insightful.
Beautiful work as always! Do you also have plans for doing a video about transformers?
Please explain the weighted Cohen's kappa.
thanks
Awesome explanation, but what are some real-world applications of this?
Good question! One hot application right now is training a Large Language Model (LLM) to generate ratings that mimic a human's at a fraction of the time and cost.
I don't really understand why the probability of their agreeing at random was calculated like that. Can you explain the formula for the expression at the top right again? I am a bit rusty with my probability, so sorry in advance if this is a dumb question.
"This the the Greek letter kappa and not a k": BECAUSE I TELL YOU IT IS.