Inter-Rater Reliability Measures in R

Inter-rater reliability refers to a set of statistical measures for assessing the extent of agreement among two or more raters (i.e., “judges” or “observers”). Synonyms include inter-rater agreement, inter-observer agreement and inter-rater concordance.

In this course, you will learn the basics of inter-rater reliability and how to compute its main statistical measures. These include the following (a short R sketch follows the list):

  • Cohen’s Kappa: can be used for two nominal or two ordinal variables. It credits only strict (exact) agreement between the two raters and is most appropriate for nominal variables.
  • Weighted Kappa: should be considered for two ordinal variables only. It gives partial credit for near agreement.
  • Light’s Kappa: the average of all possible two-rater Cohen’s Kappa values when there are more than two raters (categorical variables).
  • Fleiss Kappa: for two or more raters giving categorical ratings (nominal or ordinal).
  • Intraclass correlation coefficient (ICC): for continuous or ordinal data.
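
As a preview, all of these kappa measures can be computed with a few lines of R code. The sketch below is only illustrative: it assumes the irr package and its built-in example data sets (diagnoses and anxiety), which are not part of this page.

    # Illustrative sketch using the 'irr' package (install.packages("irr") if needed)
    library(irr)

    data("diagnoses", package = "irr")   # 30 subjects, 6 raters, nominal diagnoses
    data("anxiety",   package = "irr")   # 20 subjects, 3 raters, ordinal scores 1-6

    # Cohen's Kappa: two raters, strict agreement only
    kappa2(diagnoses[, 1:2], weight = "unweighted")

    # Weighted Kappa: two raters, ordinal ratings, partial agreement credited
    kappa2(anxiety[, 1:2], weight = "squared")

    # Light's Kappa: average of all pairwise Cohen's Kappas (more than two raters)
    kappam.light(diagnoses)

    # Fleiss' Kappa: two or more raters, categorical ratings
    kappam.fleiss(diagnoses)

The ICC is illustrated separately in the Lessons section below.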

You will also learn how to visualize the agreement between raters. The course presents the basic principles of these analyses and provides examples in R.
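
For two raters, one common option is an agreement chart built on their cross-classification table. The following is a minimal sketch, assuming the vcd package (for agreementplot()) and the irr example data; it is not necessarily the visualization used later in the course.

    # Illustrative sketch: Bangdiwala agreement chart for two raters,
    # assuming the 'vcd' and 'irr' packages
    library(irr)
    library(vcd)

    data("diagnoses", package = "irr")

    # Cross-classify rater 1 against rater 2, then plot the agreement chart
    agreementplot(table(diagnoses[, 1], diagnoses[, 2]))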

Related Book

Inter-Rater Reliability Essentials: Practical Guide in R

Lessons

  1. This chapter explains the basics of the intra-class correlation coefficient (ICC), which can be used to measure the agreement between multiple raters rating on ordinal or continuous scales. It also shows how to compute and interpret ICC values in R (see the sketch below).
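
As a preview of that chapter, here is a minimal sketch of an ICC computation, again assuming the irr package and its anxiety example data; the model, type and unit arguments shown are illustrative choices, not the only valid ones.

    # Illustrative ICC sketch with the 'irr' package
    library(irr)

    data("anxiety", package = "irr")   # 20 subjects rated by 3 raters on a 1-6 scale

    # Two-way model, absolute agreement, single-rater unit (one possible specification)
    icc(anxiety, model = "twoway", type = "agreement", unit = "single")

In irr::icc(), model can be "oneway" or "twoway", type can be "consistency" or "agreement", and unit can be "single" or "average"; the appropriate combination depends on the study design.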
