**Inter-rater reliability** refers to a set of statistical measures for assessing the extent of agreement among two or more raters (i.e., “judges” or “observers”). Synonyms include **inter-rater agreement**, **inter-observer agreement**, and **inter-rater concordance**.

In this course, you will learn the basics and how to compute the different statistical measures for analyzing the inter-rater reliability. These include:

- *Cohen’s Kappa*: It can be used for either two nominal or two ordinal variables. It accounts for strict agreements between observers. It is most appropriate for two nominal variables.
- *Weighted Kappa*: It should be considered for two ordinal variables only. It allows partial agreement.
- *Light’s Kappa*, which is the average of Cohen’s Kappa if using more than two categorical variables.
- *Fleiss Kappa*: for two or more categorical variables (nominal or ordinal).
- *Intraclass correlation coefficient* (ICC): for continuous or ordinal data.
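To make the first measure concrete, here is a minimal sketch of computing Cohen’s Kappa from scratch. It is written in Python for a self-contained illustration (the course itself uses R); the function name `cohens_kappa` and the example ratings are invented for this sketch. Kappa compares the observed proportion of agreement `po` with the agreement `pe` expected by chance from each rater’s marginal frequencies: kappa = (po − pe) / (1 − pe).

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's Kappa for two raters labelling the same items (nominal categories)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items given identical labels
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: from each rater's marginal category proportions
    counts1, counts2 = Counter(rater1), Counter(rater2)
    pe = sum(counts1[cat] * counts2[cat] for cat in counts1) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical ratings of 8 items by two raters
r1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
r2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(r1, r2))  # 6/8 observed agreement, 0.5 expected -> kappa = 0.5
```

In R, the same statistic is typically obtained with `irr::kappa2()`, as covered later in the course.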

You will also learn how to **visualize the agreement** between raters. The course presents the basic principles of these tasks and provides examples in R.

#### Related Book

Inter-Rater Reliability Essentials: Practical Guide in R

Version: French
