Determining Inter-Rater Reliability with the Intraclass Correlation Coefficient in SPSS

  • Published: 28 Jan 2025

Comments • 71

  • @SierraKyliuk
    @SierraKyliuk 5 years ago +28

    My thesis is due in 2 hours--you just saved me so much stress, kind sir

    • @jubilent07
      @jubilent07 4 years ago

      Ahhh I'm in the same boat, my friend ahhaha

    • @batmanarkham5120
      @batmanarkham5120 4 years ago

      Wow two hours :-o

    • @SierraKyliuk
      @SierraKyliuk 4 years ago +1

      @@batmanarkham5120 my computer crashed and I lost half of my thesis before it was due; it was a rough time

    • @batmanarkham5120
      @batmanarkham5120 4 years ago

      @@SierraKyliuk oh I hope that you got your thesis cleared

  • @amelamel4335
    @amelamel4335 7 years ago +2

    I have been checking a couple of videos, but this is by far the most explanatory one. Thank you so much.

    • @DrGrande
      @DrGrande  7 years ago

      You're welcome - thanks for watching

  • @halealkan7940
    @halealkan7940 8 years ago +1

    Thank you so much. I learned the things that, unfortunately, I couldn't learn from my statistics teacher. You saved my life while I was writing my dissertation.

    • @DrGrande
      @DrGrande  8 years ago

      You're welcome, thanks for watching.

  • @bradleyfairchild1208
    @bradleyfairchild1208 8 years ago

    As an inexperienced instructor I am always interested in how "strict" or "easy" I am with my grading, so this video on seeing how similarly the three teachers graded was particularly interesting to me. Thanks, Dr. Grande.

  • @kemowens6356
    @kemowens6356 4 years ago +1

    This was extremely helpful. And you used almost the same data I had, so it really helped!

  • @normallgirl
    @normallgirl 4 years ago

    Thank you SO much, Dr. Grande! This is more than helpful. Bless you!

  • @ryuguji6504
    @ryuguji6504 3 years ago

    I'm so grateful for this video!!! Thank you so much, you are one of the reasons I may pass the defense (if I pass T.T) thank youuuu

  • @thanhnhanphanthi1344
    @thanhnhanphanthi1344 6 months ago

    Thank you so much! Very useful information!❤

  • @bmcvinke
    @bmcvinke 4 years ago +2

    Thanks for this video! What is the correct APA way to report this analysis? Thanks!

  • @umangternate
    @umangternate 5 months ago

    Thank you, Doc... You saved me

  • @marianna9371
    @marianna9371 5 years ago +3

    This was really useful, thank you!

  • @Dalenatton
    @Dalenatton 5 months ago

    Dr. Grande, I wonder whether it is possible to use the Intraclass Correlation Coefficient (ICC) with a two-way mixed model to calculate inter-rater reliability when the raters have rated only a subset of the total subjects. For example, Instructor 1 rates all subjects (n = 30), while Instructor 2 rates the first half (n = 15) and Instructor 3 rates the remaining half (n = 15). Thank you so much for your help!

  • @Alinka5s
    @Alinka5s 8 years ago +3

    Thank you for the video. It is very helpful!!

    • @DrGrande
      @DrGrande  8 years ago

      I'm glad you found the video useful. Thanks for watching.

  • @ShafinaIVohra
    @ShafinaIVohra 4 years ago +1

    So if we have multiple raters here and each rater is rating each participant on a scale of 1-5 for each item, how would that work?

  • @VijayKumar-yd2qv
    @VijayKumar-yd2qv 1 year ago

    Excellent, nicely described.

  • @SPORTSCIENCEps
    @SPORTSCIENCEps 3 years ago

    Thank you for the video!

  • @Radiology_Specific
    @Radiology_Specific 1 year ago

    Thanks for your video. I have a question:
    What does this error indicate? "Kappa statistic cannot be computed. It requires a two-way table in which the variables are of the same type."
    Even though I used a two-way table in which the variables are of the same type (both of them nominal), I still get that message from SPSS.
    What should I do about that?
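
A note on the kappa error above: SPSS computes kappa from a crosstab, and this message is often triggered when the two rating variables are stored as different types (one numeric, one string) or when the two raters did not use the exact same set of categories, so the table is not square. One way to inspect the data, and to get Cohen's kappa independently, is a minimal Python sketch like the following (the file and column names are hypothetical):

```python
# Minimal sketch: check the two nominal rating columns and compute
# Cohen's kappa outside SPSS. File and column names are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("ratings.csv")                # columns: rater1, rater2
print(sorted(df["rater1"].dropna().unique()))  # compare the two category sets;
print(sorted(df["rater2"].dropna().unique()))  # a mismatch often explains the error
print(cohen_kappa_score(df["rater1"], df["rater2"]))
```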

  • @sofiaskroder5584
    @sofiaskroder5584 11 months ago

    Hi Todd! Thank you for a great video. I was wondering whether you could use the ICC for determining reliability for both 1) one rater who does the same measurement twice, and 2) two raters who each do the same measurement once. If so, which numbers in the output show the answer to each of my two questions? Many thanks!

    • @sofiaskroder5584
      @sofiaskroder5584 11 months ago

      Can I treat rater 1 and rater 2 as the same person doing the same measurement twice, and rater 3 as the other person, and produce two separate sets of output?
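
On the two questions above: the ICC handles both cases. For inter-rater reliability the two raters are the columns; for intra-rater (test-retest) reliability, the two measurement occasions take the place of the raters. In the output, the single-measures row is the reliability of one measurement and the average-measures row is the reliability of the mean of both. A minimal sketch in Python with the pingouin package (toy data; pingouin expects long format):

```python
# Sketch: ICC for two raters (or two occasions) measuring the same subjects.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater":   ["A", "B"] * 5,
    "score":   [7, 8, 5, 5, 9, 8, 6, 7, 8, 8],
})
icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="rater", ratings="score")
# ICC2 is the single-measures value (one rater/occasion);
# ICC2k is the average-measures value (the mean of both).
print(icc[["Type", "Description", "ICC", "CI95%"]])
```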

  • @ahtade
    @ahtade 1 month ago

    If one observer measures at different times, do we again select the average one or the single one for the ICC score? ChatGPT tells me that if there is one observer you should select the single one. Thank you

  • @manarahmed2985
    @manarahmed2985 1 year ago

    Thank you. This was helpful, but I have a question. What should I do if the ICC coefficient is less than 0.7? Should I delete part of the data, or what?

  • @Chocotreacle
    @Chocotreacle 3 years ago

    What do you do if all the instructors (raters) give the same score? How do you calculate that? When I tried to do this in SPSS it gave me nothing.

  • @nicolecatubigan
    @nicolecatubigan 9 months ago

    Hello, I just want to ask: I have 4 raters and they have rated the rubric consisting of 1-4, which are (1=beginning, 2=developing, 3=competent, and 4=accomplished).
    Do I need to put labels on their ratings?

  • @AtheerAl
    @AtheerAl 5 years ago +1

    amazing work..

  • @fpires7
    @fpires7 9 years ago

    Hello Todd, I have an analysis with more than two raters and 100 items. Is there a way to look at agreement for each of the items to understand where raters are disagreeing?

  • @chrisgamble3132
    @chrisgamble3132 8 years ago

    Hello,
    In a study I am participating in, there are 10 patients who are measured by 2 raters across 3 separate measurement occasions. We are measuring a continuous variable with 2 different instruments and I intend to use ICC (2,1) to calculate inter-rater reliability. As I understand it, in SPSS I would have to create columns (for the raters) of 10 rows (the patients) for each instrument and variable. However, the 3 measurement occasions are spread out in time. Therefore, for each measurement occasion I need to calculate the inter-rater reliability separately. This leaves me with 3 values, and I want to be able to present 1 value in my report/thesis. How do I go about determining this value?
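
On the three-occasions design above: since inter-rater reliability can differ across occasions, one defensible presentation is to report the three ICC(2,1) values side by side rather than forcing them into a single number. A sketch of the per-occasion computation in Python with pingouin (file and column names are hypothetical):

```python
# Sketch: ICC(2,1) computed separately for each measurement occasion.
import pandas as pd
import pingouin as pg

long = pd.read_csv("measurements.csv")  # columns: patient, rater, occasion, value
for occ, grp in long.groupby("occasion"):
    icc = pg.intraclass_corr(data=grp, targets="patient",
                             raters="rater", ratings="value")
    icc21 = icc.loc[icc["Type"] == "ICC2", "ICC"].item()
    print(f"occasion {occ}: ICC(2,1) = {icc21:.3f}")
```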

  • @seekewl5418
    @seekewl5418 4 years ago +1

    That was extremely helpful, thank you! I have a question, though. If we are gathering ratings of concepts from three raters, for example, could we just take the average of their ratings for one overall rating that represents each concept's score? I guess that would be the same situation as your example, in terms of knowing which score to take. Thanks!

  • @connormacmillan5427
    @connormacmillan5427 2 years ago

    Can you also do this with ordinal data? Let's say I have a rubric that has been made as follows: bad (1), good (2), very good (3), excellent (4)?

  • @hamidD2222
    @hamidD2222 8 years ago

    If the agreement was found to be low because one rater gave very different ratings compared to the others (≥3 raters in total), how can this "different" rater be identified?
    Also, would it be possible to add the references? It helps us in case we need further details not explained here.
    Thanks
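
On identifying the deviant rater above: one simple diagnostic (not covered in the video) is to recompute the ICC with each rater left out in turn; if the ICC rises sharply when a particular rater is excluded, that rater is the likely source of the disagreement. A sketch in Python with pingouin, assuming wide data with one hypothetical column per rater:

```python
# Sketch: leave-one-rater-out ICC to spot a deviant rater.
import pandas as pd
import pingouin as pg

wide = pd.read_csv("ratings.csv")        # columns: rater1, rater2, rater3, ...
raters = list(wide.columns)
for left_out in raters:
    kept = [r for r in raters if r != left_out]
    long = (wide[kept]
            .reset_index()
            .melt(id_vars="index", var_name="rater", value_name="score"))
    icc = pg.intraclass_corr(data=long, targets="index",
                             raters="rater", ratings="score")
    icc2 = icc.loc[icc["Type"] == "ICC2", "ICC"].item()
    print(f"without {left_out}: ICC = {icc2:.3f}")
```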

  • @karwanmustafa6633
    @karwanmustafa6633 8 years ago

    Hi Dr. Todd,
    I would be most grateful if you could briefly address my concern. I have an English speaking exam, scored out of 100, and 48 respondents. I have two raters scoring the respondents' English speaking performance. Now I would like to determine how valid the raters' scores are. Could you please explain whether I need to use a correlation coefficient or Cohen's Kappa? As far as I know, I need to use a correlation coefficient, but I just wanted to be sure about it. I would be thankful if you could explain that in short, please.
    Kind regards,
    Karwan

  • @sitimunirah1298
    @sitimunirah1298 2 years ago

    What if the sig. is more than 0.05 but the ICC is above 0.7? Does that still mean excellent agreement?

  • @alihyaa_me
    @alihyaa_me 3 years ago

    How do they rate the variables? Please explain; also, how did they come up with 1 or 2?

  • @hakanbayezit5908
    @hakanbayezit5908 4 years ago

    Sir, two raters are assessing 80 essays and giving a total score between 0-20 (content: 6, organization: 5, language use: 6, punctuation: 3). In order to find inter-rater reliability, can we use the kappa technique or the technique you teach above? Please advise.

  • @drabhijeetghosh
    @drabhijeetghosh 9 years ago

    Hello, I would like to know how to calculate the ICC for the clustering effect, as in producing robust 95% confidence intervals (CI) while accounting for the cluster effect. For example, I have data for multiple patients, clustered by the doctors and hospitals within which the doctors and patients are, and I want to produce means and other stats for medications etc., but calculate the ICC for the groups of hospitals and doctors and adjust for the cluster effects of source hospital and source doctor.

  • @utaaprepschool428
    @utaaprepschool428 5 years ago

    Can we do the same analysis as yours with 12 instructors?
    Nothing would change except there would be 12 instructors instead of 3.

  • @juliethasagun1634
    @juliethasagun1634 4 years ago

    Is this the adjusted or unadjusted ICC?

  • @esrakutlu4318
    @esrakutlu4318 5 years ago

    Hi, I have a question about my study.
    We asked 5 different observers to evaluate the degree of a bone dysplasia from 0 to 3. They evaluated the same specimens at two different time points. We want to assess the intra- and inter-rater reliability. The video you provide seems to evaluate 3 different raters, with student grades between 0 and 10. Should I apply the same principles for inter-rater reliability in my test? And what should I do for "intra-rater" reliability? Thank you!

  • @littlefur
    @littlefur 4 years ago

    Thanks so much! This video exactly solved my problem. May I ask whether SPSS can calculate Fleiss' Kappa as well?
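
On the Fleiss' kappa question above: recent versions of SPSS Statistics reportedly include a multiple-rater (Fleiss) kappa option under Reliability Analysis; outside SPSS, the statsmodels package computes it directly. A minimal sketch with a hypothetical ratings matrix:

```python
# Sketch: Fleiss' kappa for 3 raters assigning nominal categories.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = subjects, columns = raters, values = category codes
ratings = np.array([
    [1, 1, 2],
    [2, 2, 2],
    [3, 3, 1],
    [1, 1, 1],
    [2, 3, 2],
])
table, _ = aggregate_raters(ratings)   # counts per subject x category
print(fleiss_kappa(table, method="fleiss"))
```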

  • @frohnzy04
    @frohnzy04 4 years ago

    Can you do a video on how to interpret the ANOVA table along with the ICC? I haven't found a video that includes that. Thank you

  • @putrinurshahiraabdulrapar3175
    @putrinurshahiraabdulrapar3175 7 years ago

    Hi, I'm doing research about inter-rater reliability and using the ICC for my data analysis. However, my data are ordinal and non-normally distributed. Are my ICC results valid?

  • @chollanotk
    @chollanotk 5 years ago

    What is the exact meaning of Sig (0.000)? Is it a p-value from a certain statistical test?

  • @LoriBanosco
    @LoriBanosco 7 years ago +1

    Hi,
    Thanks for the video, it helped a lot.
    Is it possible to do this with more than one variable within just one command? I have more than 400 variables, each with 3 raters, and I don't want to write this command 400 times.
    Thanks from Germany
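
On the 400-variables question above: one option is to generate the 400 RELIABILITY commands with a script (or an SPSS macro) instead of typing them; another is to loop outside SPSS. A sketch in Python with pingouin, assuming the columns follow a hypothetical var1_r1, var1_r2, var1_r3 naming scheme:

```python
# Sketch: ICC for many variables, each rated by 3 raters.
import pandas as pd
import pingouin as pg

wide = pd.read_csv("all_ratings.csv")
results = {}
for i in range(1, 401):                  # var1 ... var400
    cols = [f"var{i}_r{r}" for r in (1, 2, 3)]
    long = (wide[cols]
            .set_axis(["r1", "r2", "r3"], axis=1)
            .reset_index()
            .melt(id_vars="index", var_name="rater", value_name="score"))
    icc = pg.intraclass_corr(data=long, targets="index",
                             raters="rater", ratings="score")
    results[f"var{i}"] = icc.loc[icc["Type"] == "ICC2", "ICC"].item()
print(pd.Series(results).sort_values())
```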

  • @gmcorpuz09
    @gmcorpuz09 4 years ago

    Can the ICC be used for 9 raters also?

  • @ddelmoni
    @ddelmoni 5 years ago

    Nice job...thanks!

  • @nicholaslim9078
    @nicholaslim9078 7 years ago

    What if it's 0.1 apart, would you still consider them equal? Example: rater 1 = 3.4 vs rater 2 = 3.5?? Help!!

  • @iqrakhalid1561
    @iqrakhalid1561 6 years ago

    Please tell me which method of reliability I can use when I have 6 teachers and 6 psychologists, and each member rates each item of my scale

  • @nicolecarmona4164
    @nicolecarmona4164 8 years ago +2

    Hi Todd, I'm just wondering where you are getting these interpretation ranges from. Your tutorial was extremely useful and I would like to have a citation for my interpretation values! Thank you in advance.

  • @mynnzero
    @mynnzero 9 years ago +1

    excellent, thank you!

  • @alzalan2001
    @alzalan2001 6 years ago +1

    Thank you for the video, it is really very informative. How can I conduct a Bland-Altman analysis from this?
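
On the Bland-Altman question above: the analysis only needs the pairwise differences and means; the limits of agreement are the mean difference ± 1.96 standard deviations of the differences. A short sketch in Python (file and column names hypothetical); pingouin also offers a ready-made plot via pg.plot_blandaltman:

```python
# Sketch: Bland-Altman statistics for two raters.
import pandas as pd

df = pd.read_csv("ratings.csv")          # columns: rater1, rater2
diff = df["rater1"] - df["rater2"]
bias = diff.mean()                       # systematic difference between raters
loa = 1.96 * diff.std(ddof=1)            # half-width of the limits of agreement
print(f"bias = {bias:.2f}")
print(f"limits of agreement: {bias - loa:.2f} to {bias + loa:.2f}")
```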

  • @bayushiep
    @bayushiep 3 years ago

    What's the difference from Cronbach's alpha?
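
On the Cronbach's alpha question above: under the consistency definition, Cronbach's alpha is numerically identical to the average-measures ICC(3,k); the ICC framework additionally offers single-measure and absolute-agreement versions. A quick numerical check in Python with pingouin (toy data):

```python
# Sketch: Cronbach's alpha equals ICC(3,k) on the same data.
import pandas as pd
import pingouin as pg

wide = pd.DataFrame({"r1": [7, 5, 9, 6, 8],
                     "r2": [8, 5, 8, 7, 8],
                     "r3": [7, 6, 9, 6, 9]})
alpha, _ = pg.cronbach_alpha(data=wide)

long = (wide.reset_index()
            .melt(id_vars="index", var_name="rater", value_name="score"))
icc = pg.intraclass_corr(data=long, targets="index",
                         raters="rater", ratings="score")
icc3k = icc.loc[icc["Type"] == "ICC3k", "ICC"].item()
print(alpha, icc3k)                      # the two values match
```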

  • @saraantonini3661
    @saraantonini3661 8 years ago

    Very helpful! thank youuu

    • @DrGrande
      @DrGrande  8 years ago

      I'm glad you found the video useful. Thanks for watching.

  • @bahadroktay4396
    @bahadroktay4396 6 years ago

    First of all, thank you for this helpful video... I have two questions; if you could answer them I would be very appreciative:
    First, could you please give me a book reference for the cutoff of .70?
    Second, if we have 4 raters and 12 questions to evaluate (raters can give scores of 0-10 for all answers), what do I have to do?
    a) 12 different reliability analyses, or
    b) just one analysis including all 12 questions?
    If the answer is a, do I need a correction? And if some of the intraclass correlations are below 0.70 but most of them are above 0.70, how should I interpret these results?
    Thank you for your kind help...

    • @leeken86
      @leeken86 4 years ago

      I have the same question. Have you solved your problem? Thanks

  • @brambonnaerens7133
    @brambonnaerens7133 8 years ago

    big help, thank you!

    • @DrGrande
      @DrGrande  8 years ago

      You're welcome - thanks for watching.

  • @batmanarkham5120
    @batmanarkham5120 4 years ago

    But isn't ICC for continuous data?

  • @nghiachedinh
    @nghiachedinh 9 years ago

    thank you so much!

  • @mioulin
    @mioulin 6 years ago

    Thank you!

  • @suzannahstone
    @suzannahstone 4 years ago

    THANK YOUUUUUUU

  • @raz2936
    @raz2936 1 year ago

    Don't you eat? You talked in such a low voice that I cannot hear you clearly