This book provides a solid, step-by-step practical guide to inter-rater reliability analyses using R software. Inter-rater reliability refers to statistical measures that quantify the extent of agreement among two or more raters (i.e., "judges" or "observers"). Synonyms include inter-rater agreement, inter-observer agreement, and inter-rater concordance.
This book is designed to get you doing the analyses as quickly as possible. It focuses on the implementation and understanding of the methods, without making you struggle through pages of mathematical proofs.
You will be guided through the essential steps: a brief explanation of each measure's formula and assumptions, performing the analysis in R, and interpreting and reporting the results.
- Covers the most common statistical measures for inter-rater reliability analyses, including Cohen's kappa, weighted kappa, Light's kappa, Fleiss' kappa, the intraclass correlation coefficient, and the agreement chart (see the sketch after this list).
- Presents the key assumptions of each method.
- Short, self-contained chapters with practical examples.
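To give a taste of what these analyses look like in practice, here is a minimal sketch assuming the R package `irr` and its bundled `anxiety` example data (the package choice, the dataset, and all function calls are illustrative assumptions; this blurb does not name the tools the book uses):

```r
# Illustrative sketch only; assumes the 'irr' package is installed:
# install.packages("irr")
library(irr)

# Example data assumed here: ratings of 20 subjects by 3 raters
data(anxiety)

# Cohen's kappa for two raters (first two columns)
kappa2(anxiety[, 1:2], weight = "unweighted")

# Weighted kappa for ordinal ratings (squared weights)
kappa2(anxiety[, 1:2], weight = "squared")

# Light's kappa and Fleiss' kappa for three or more raters
kappam.light(anxiety)
kappam.fleiss(anxiety)

# Intraclass correlation coefficient (two-way agreement, single rater)
icc(anxiety, model = "twoway", type = "agreement", unit = "single")
```

Each call prints the estimated coefficient together with a test statistic and p-value, which is the kind of output the book walks you through interpreting and reporting.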