{"id":10305,"date":"2019-11-07T03:03:02","date_gmt":"2019-11-07T01:03:02","guid":{"rendered":"https:\/\/www.datanovia.com\/en\/?post_type=dt_lessons&#038;p=10305"},"modified":"2019-11-07T03:03:02","modified_gmt":"2019-11-07T01:03:02","slug":"weighted-kappa-in-r-for-two-ordinal-variables","status":"publish","type":"dt_lessons","link":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/","title":{"rendered":"Weighted Kappa in R: For Two Ordinal Variables"},"content":{"rendered":"<div id=\"rdoc\">\n<p>In biomedical, behavioral research and many other fields, it is frequently required that a group of participants is rated or classified into categories by two observers (or raters, methods, etc). An example is two clinicians that classify the extent of disease in patients. The analysis of the agreement between the two observers can be used to measure the reliability of the rating system. High agreement would indicate consensus in the diagnosis and interchangeability of the observers <span class=\"citation\">(Warrens 2013)<\/span>.<\/p>\n<p>In a previous chapter (Chapter @ref(cohen-s-kappa)), we described the classical <em>Cohen\u2019s Kappa<\/em>, which is a popular measure of <em>inter-rater reliability<\/em> or <em>inter-rater agreement<\/em>. The Classical Cohen\u2019s Kappa only counts strict agreement, where the same category is assigned by both raters <span class=\"citation\">(Friendly, Meyer, and Zeileis 2015)<\/span>. It takes no account of the degree of disagreement, all disagreements are treated equally. This is most appropriate when you have nominal variables. For <strong>ordinal rating scale<\/strong> it may preferable to give different weights to the disagreements depending on the magnitude.<\/p>\n<p>This chapter describes the <strong>weighted kappa<\/strong>, a variant of the Cohen\u2019s Kappa, that allows partial agreement <span class=\"citation\">(J. Cohen 1968)<\/span>. 
In other words, the weighted kappa allows the use of weighting schemes that take into account the closeness of agreement between categories. It is suitable only when you have <strong>ordinal or ranked variables<\/strong>.<\/p>\n<div class=\"block\">\n<p>Recall that kappa coefficients remove chance agreement, which is the proportion of agreement that you would expect two raters to reach simply by chance.<\/p>\n<\/div>\n<p>Here, you will learn:<\/p>\n<ul>\n<li><strong>Basics and the formula of the weighted kappa<\/strong><\/li>\n<li><strong>Assumptions and requirements for computing the weighted kappa<\/strong><\/li>\n<li><strong>Examples of R code for computing the weighted kappa<\/strong><\/li>\n<\/ul>\n<p>Contents:<\/p>\n<div id=\"TOC\">\n<ul>\n<li><a href=\"#prerequisites\">Prerequisites<\/a><\/li>\n<li><a href=\"#basics\">Basics<\/a>\n<ul>\n<li><a href=\"#formula\">Formula<\/a><\/li>\n<li><a href=\"#types-of-weights-linear-and-quadratic\">Types of weights: Linear and quadratic<\/a><\/li>\n<li><a href=\"#how-to-choose-kappa-weighting-systems\">How to choose kappa weighting systems<\/a><\/li>\n<\/ul>\n<\/li>\n<li><a href=\"#interpretation-magnitude-of-the-agreement\">Interpretation: Magnitude of the agreement<\/a><\/li>\n<li><a href=\"#assumptions\">Assumptions<\/a><\/li>\n<li><a href=\"#statistical-hypotheses\">Statistical hypotheses<\/a><\/li>\n<li><a href=\"#example-of-data\">Example of data<\/a><\/li>\n<li><a href=\"#computing-weighted-kappa\">Computing Weighted kappa<\/a><\/li>\n<li><a href=\"#report\">Report<\/a><\/li>\n<li><a href=\"#summary\">Summary<\/a><\/li>\n<li><a href=\"#references\">References<\/a><\/li>\n<\/ul>\n<\/div>\n<div class='dt-sc-hr-invisible-medium  '><\/div>\n<div class='dt-sc-ico-content type1'><div class='custom-icon' ><a href='https:\/\/www.datanovia.com\/en\/product\/inter-rater-reliability-essentials-practical-guide-in-r\/' target='_blank'><span class='fa fa-book'><\/span><\/a><\/div><h4><a href='https:\/\/www.datanovia.com\/en\/product\/inter-rater-reliability-essentials-practical-guide-in-r\/' target='_blank'> Related Book <\/a><\/h4>Inter-Rater Reliability Essentials: Practical Guide in R<\/div>\n<div class='dt-sc-hr-invisible-medium  '><\/div>\n<div id=\"prerequisites\" class=\"section level2\">\n<h2>Prerequisites<\/h2>\n<p>Read the chapter on Cohen\u2019s Kappa (Chapter @ref(cohen-s-kappa)).<\/p>\n<\/div>\n<div id=\"basics\" class=\"section level2\">\n<h2>Basics<\/h2>\n<p>To explain the basic concept of the weighted kappa, let the rated categories be ordered as follows: \u2018strongly disagree\u2019, \u2018disagree\u2019, \u2018neutral\u2019, \u2018agree\u2019, and \u2018strongly agree\u2019.<\/p>\n<p>The weighted kappa coefficient takes into consideration the different levels of disagreement between categories. For example, if one rater \u2018strongly disagrees\u2019 and another \u2018strongly agrees\u2019, this must be considered a greater level of disagreement than when one rater \u2018agrees\u2019 and another \u2018strongly agrees\u2019 <span class=\"citation\">(Tang et al. 2015)<\/span>.<\/p>\n<div id=\"formula\" class=\"section level3\">\n<h3>Formula<\/h3>\n<p><strong>kxk contingency table<\/strong>. Let\u2019s consider the following k\u00d7k contingency table summarizing the rating scores from two raters, where k is the number of categories. The table cells contain the counts of cross-classified categories. These counts are denoted <code>n11, n12, ..., n1k<\/code> for row 1, <code>n21, n22, ..., n2k<\/code> for row 2, and so on.<\/p>\n<pre><code>##           rater2\r\n## rater1     Level.1 Level.2 Level... Level.k Total\r\n##   Level.1  n11     n12     ...      n1k     n1+  \r\n##   Level.2  n21     n22     ...      n2k     n2+  \r\n##   Level... ...     ...     ...      ...     ...  \r\n##   Level.k  nk1     nk2     ...      nkk     nk+  \r\n##   Total    n+1     n+2     ...      n+k     N<\/code><\/pre>\n<p><strong>Terminology<\/strong>:<\/p>\n<ul>\n<li>The column \u201cTotal\u201d (<code>n1+, n2+, ..., nk+<\/code>) indicates the sum of each row, known as the <strong>row margins<\/strong> or marginal counts. The total sum of a given row <code>i<\/code> is denoted <code>ni+<\/code>.<\/li>\n<li>The row \u201cTotal\u201d (<code>n+1, n+2, ..., n+k<\/code>) indicates the sum of each column, known as the <strong>column margins<\/strong>. The total sum of a given column <code>i<\/code> is denoted <code>n+i<\/code>.<\/li>\n<li>N is the total sum of all table cells.<\/li>\n<li>For a given row\/column, the <strong>marginal proportion<\/strong> is the row\/column margin divided by N. This is also known as the marginal frequency or probability. For a row <code>i<\/code>, the marginal proportion is <code>Pi+ = ni+\/N<\/code>. Similarly, for a given column <code>i<\/code>, the marginal proportion is <code>P+i = n+i\/N<\/code>.<\/li>\n<li>For each table cell, the proportion is the cell count divided by N.<\/li>\n<\/ul>\n<p><strong>Joint proportions<\/strong>. The proportion in each cell is obtained by dividing the cell count by N (the sum of all the table counts).<\/p>\n<pre><code>##           rater2\r\n## rater1     Level.1 Level.2 Level... Level.k Total\r\n##   Level.1  p11     p12     ...      p1k     p1+  \r\n##   Level.2  p21     p22     ...      p2k     p2+  \r\n##   Level... ...     ...     ...      ...     ...  \r\n##   Level.k  pk1     pk2     ...      pkk     pk+  \r\n##   Total    p+1     p+2     ...      p+k     1<\/code><\/pre>\n<p><strong>Weights<\/strong>. To compute a weighted kappa, weights are assigned to each cell in the contingency table. The weights range from 0 to 1, with weight = 1 assigned to all diagonal cells (where both raters agree) <span class=\"citation\">(Friendly, Meyer, and Zeileis 2015)<\/span>. 
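<\/p>\n<p>To make the role of the weights concrete, here is an illustrative base-R sketch (the helper name <code>weighted_kappa<\/code> is ours, not from any package): given a contingency table <code>tab<\/code> and a weight matrix <code>w<\/code>, it computes the weighted Po, the weighted Pe, and the resulting kappa, following the formulas presented below.<\/p>\n<pre class=\"r\"><code>weighted_kappa &lt;- function(tab, w) {\r\n  p &lt;- tab \/ sum(tab)  # joint proportions\r\n  po &lt;- sum(w * p)     # weighted proportion of observed agreement\r\n  # weighted chance agreement from the row\/column marginal proportions\r\n  pe &lt;- sum(w * outer(rowSums(p), colSums(p)))\r\n  (po - pe) \/ (1 - pe)\r\n}<\/code><\/pre>\n<p>With the identity weight matrix (<code>diag(k)<\/code>), this reduces to the unweighted Cohen\u2019s Kappa.<\/p>\n<p>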
The commonly used weighting schemes are explained in the next sections.<\/p>\n<p>The <strong>proportion of observed agreement<\/strong> (Po) is the sum of the weighted cell proportions.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/dn-tutorials\/inter-rater-reliability\/images\/weighted-proportion-of-observed-agreement-formula.png\" alt=\"Weighted proportion of observed agreement formula\" \/><\/p>\n<p>The <strong>proportion of expected chance agreement<\/strong> (Pe) is the sum of the weighted products of the row and column marginal proportions.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/dn-tutorials\/inter-rater-reliability\/images\/weighted-proportion-of-expected-agreement-formula.png\" alt=\"Weighted proportion of expected (chance) agreement formula\" \/><\/p>\n<p>The <strong>weighted Kappa<\/strong> can then be calculated by plugging the weighted Po and Pe into the following formula:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/dn-tutorials\/inter-rater-reliability\/images\/cohen-s-kappa-formula.png\" alt=\"Cohen\u2019s Kappa formula\" \/><\/p>\n<div class=\"warning\">\n<p>Kappa can range from -1 (no agreement) to +1 (perfect agreement):<\/p>\n<ul>\n<li>when k = 0, the agreement is no better than what would be obtained by chance;<\/li>\n<li>when k is negative, the agreement is less than the agreement expected by chance;<\/li>\n<li>when k is positive, the rater agreement exceeds chance agreement.<\/li>\n<\/ul>\n<\/div>\n<div class=\"block\">\n<p>Note that for a 2x2 table (binary rating scale), there is no weighted version of kappa, since kappa remains the same regardless of the weights used.<\/p>\n<\/div>\n<\/div>\n<div id=\"types-of-weights-linear-and-quadratic\" class=\"section level3\">\n<h3>Types of weights: Linear and quadratic<\/h3>\n<p>There are two commonly used weighting systems in the literature:<\/p>\n<ol style=\"list-style-type: decimal;\">\n<li>The <strong>Cicchetti-Allison weights<\/strong> <span class=\"citation\">(Cicchetti and Allison 1971)<\/span>, based on equal-spacing weights for near-matches. These are also known as <strong>linear weights<\/strong> because the weight decreases in proportion to the distance between the individual ratings.<\/li>\n<li>The <strong>Fleiss-Cohen weights<\/strong> <span class=\"citation\">(Fleiss and Cohen 1973)<\/span>, based on inverse-square spacing. These are also known as <strong>quadratic weights<\/strong> because the weight decreases in proportion to the square of the distance between the individual ratings.<\/li>\n<\/ol>\n<p>For an RxR contingency table,<\/p>\n<ul>\n<li>the <strong>linear weight<\/strong> for a given cell is: <code>W_ij = 1-(|i-j|)\/(R-1)<\/code><\/li>\n<li>the <strong>quadratic weight<\/strong> for a given cell is: <code>W_ij = 1-(|i-j|)^2\/(R-1)^2<\/code><\/li>\n<\/ul>\n<p>where <code>|i-j|<\/code> is the distance between categories and <code>R<\/code> is the number of categories.<\/p>\n<p><strong>Example of linear weights<\/strong> for a 4x4 table, where two clinical specialists classify patients into 4 groups:<\/p>\n<pre><code>##            Doctor2\r\n## Doctor1     Stage I Stage II Stage III Stage IV\r\n##   Stage I   1       2\/3      1\/3       0       \r\n##   Stage II  2\/3     1        2\/3       1\/3     \r\n##   Stage III 1\/3     2\/3      1         2\/3     \r\n##   Stage IV  0       1\/3      2\/3       1<\/code><\/pre>\n<p><strong>Example of quadratic weights<\/strong>:<\/p>\n<pre><code>##            Doctor2\r\n## Doctor1     Stage I Stage II Stage III Stage IV\r\n##   Stage I   1       8\/9      5\/9       0       \r\n##   Stage II  8\/9     1        8\/9       5\/9     \r\n##   Stage III 5\/9     8\/9      1         8\/9     \r\n##   Stage IV  0       5\/9      8\/9       1<\/code><\/pre>\n<div class=\"block\">\n<p>Note that the quadratic weights attach greater importance to near disagreements. 
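<\/p>\n<p>As a sketch, both weight matrices can be generated in base R directly from the formulas above (the variable names <code>w_linear<\/code> and <code>w_quadratic<\/code> are ours):<\/p>\n<pre class=\"r\"><code>k &lt;- 4                               # number of categories\r\nd &lt;- abs(outer(1:k, 1:k, \"-\"))       # category distance |i - j|\r\nw_linear &lt;- 1 - d \/ (k - 1)          # Cicchetti-Allison (linear) weights\r\nw_quadratic &lt;- 1 - d^2 \/ (k - 1)^2   # Fleiss-Cohen (quadratic) weights<\/code><\/pre>\n<p>Each matrix has 1 on the diagonal; for a one-category difference, <code>w_linear<\/code> gives 2\/3 and <code>w_quadratic<\/code> gives 8\/9, matching the tables above.<\/p>\n<p>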
For example, when there is a one-category difference between the two doctors\u2019 diagnoses, the linear weight is 2\/3 (0.67). This can be read as the doctors being in two-thirds agreement (or, alternatively, one-third disagreement).<\/p>\n<p>The corresponding quadratic weight, however, is 8\/9 (0.89), which is considerably higher and gives almost full credit when there is only a one-category disagreement between the two doctors in evaluating the disease stage.<\/p>\n<p>Notice, however, that the quadratic weight drops quickly when there are two or more categories of difference.<\/p>\n<\/div>\n<p>The table below compares the two weighting systems side by side for a 4x4 table:<\/p>\n<table>\n<thead>\n<tr class=\"header\">\n<th>Difference<\/th>\n<th>Linear<\/th>\n<th>Quadratic<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr class=\"odd\">\n<td>0<\/td>\n<td>1<\/td>\n<td>1<\/td>\n<\/tr>\n<tr class=\"even\">\n<td>1<\/td>\n<td>0.67<\/td>\n<td>0.89<\/td>\n<\/tr>\n<tr class=\"odd\">\n<td>2<\/td>\n<td>0.33<\/td>\n<td>0.56<\/td>\n<\/tr>\n<tr class=\"even\">\n<td>3<\/td>\n<td>0<\/td>\n<td>0<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<div id=\"how-to-choose-kappa-weighting-systems\" class=\"section level3\">\n<h3>How to choose kappa weighting systems<\/h3>\n<p>If you consider each category difference as equally important, you should choose linear weights (i.e., equal-spacing weights).<\/p>\n<p>In other words:<\/p>\n<ul>\n<li>Use linear weights when a difference between the first and second category has the same importance as a difference between the second and third category, etc.<\/li>\n<li>Use quadratic weights if a difference between the first and second category is less important than a difference between the second and third category, etc.<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<div id=\"interpretation-magnitude-of-the-agreement\" class=\"section level2\">\n<h2>Interpretation: Magnitude of the agreement<\/h2>\n<p>The interpretation of the magnitude of the weighted kappa is like that of the unweighted kappa <span class=\"citation\">(Fleiss, Levin, and Paik 2003)<\/span>. For most purposes:<\/p>\n<ul>\n<li>values greater than 0.75 or so may be taken to represent excellent agreement beyond chance;<\/li>\n<li>values below 0.40 or so may be taken to represent poor agreement beyond chance; and<\/li>\n<li>values between 0.40 and 0.75 may be taken to represent fair to good agreement beyond chance.<\/li>\n<\/ul>\n<p>Read more on kappa interpretation in the chapter on Cohen\u2019s Kappa (Chapter @ref(cohen-s-kappa)).<\/p>\n<\/div>\n<div id=\"assumptions\" class=\"section level2\">\n<h2>Assumptions<\/h2>\n<p>Your data should meet the following assumptions for computing the weighted kappa.<\/p>\n<ol style=\"list-style-type: decimal;\">\n<li>You have <strong>two outcome categorical variables<\/strong>, which should be <strong>ordinal<\/strong>.<\/li>\n<li>The two outcome variables should have exactly the <strong>same categories<\/strong>.<\/li>\n<li>You have <strong>paired observations<\/strong>; each subject is categorized twice by <strong>two independent raters or methods<\/strong>.<\/li>\n<li>The <strong>same two raters<\/strong> are used for all participants.<\/li>\n<\/ol>\n<\/div>\n<div id=\"statistical-hypotheses\" class=\"section level2\">\n<h2>Statistical hypotheses<\/h2>\n<ul>\n<li><strong>Null hypothesis<\/strong> (H0): <code>kappa = 0<\/code>. The agreement is the same as chance agreement.<\/li>\n<li><strong>Alternative hypothesis<\/strong> (Ha): <code>kappa \u2260 0<\/code>. 
The agreement is different from chance agreement.<\/li>\n<\/ul>\n<\/div>\n<div id=\"example-of-data\" class=\"section level2\">\n<h2>Example of data<\/h2>\n<p>We\u2019ll use the <code>anxiety<\/code> demo dataset, where two clinical doctors classify 50 individuals into 4 ordered anxiety levels: \u201cnormal\u201d (no anxiety), \u201cmoderate\u201d, \u201chigh\u201d and \u201cvery high\u201d.<\/p>\n<p>The data is organized in the following 4x4 contingency table:<\/p>\n<pre class=\"r\"><code>anxiety &lt;- as.table(\r\n  rbind(\r\n    c(11, 3, 1, 0), c(1, 9, 0, 1),\r\n    c(0, 1, 10, 0), c(1, 2, 0, 10)\r\n  )\r\n)\r\ndimnames(anxiety) &lt;- list(\r\n  Doctor1 = c(\"Normal\", \"Moderate\", \"High\", \"Very high\"),\r\n  Doctor2 = c(\"Normal\", \"Moderate\", \"High\", \"Very high\")\r\n)\r\nanxiety<\/code><\/pre>\n<pre><code>##            Doctor2\r\n## Doctor1     Normal Moderate High Very high\r\n##   Normal        11        3    1         0\r\n##   Moderate       1        9    0         1\r\n##   High           0        1   10         0\r\n##   Very high      1        2    0        10<\/code><\/pre>\n<div class=\"warning\">\n<p>Note that the factor levels must be in the correct order; otherwise, the results will be wrong.<\/p>\n<\/div>\n<\/div>\n<div id=\"computing-weighted-kappa\" class=\"section level2\">\n<h2>Computing Weighted kappa<\/h2>\n<p>The R function <code>Kappa()<\/code> [vcd package] can be used to compute both unweighted and weighted Kappa. To specify the type of weighting, use the option <code>weights<\/code>, which can be either \u201cEqual-Spacing\u201d or \u201cFleiss-Cohen\u201d.<\/p>\n<div class=\"warning\">\n<p>Note that the unweighted Kappa is the standard Cohen\u2019s Kappa, which should be considered only for nominal variables. 
You can read more in the dedicated chapter (Chapter @ref(cohen-s-kappa)).<\/p>\n<\/div>\n<pre class=\"r\"><code>library(\"vcd\")\r\n# Compute kappa\r\nres.k &lt;- Kappa(anxiety)\r\nres.k<\/code><\/pre>\n<pre><code>##            value    ASE    z Pr(&gt;|z|)\r\n## Unweighted 0.733 0.0752 9.75 1.87e-22\r\n## Weighted   0.747 0.0791 9.45 3.41e-21<\/code><\/pre>\n<pre class=\"r\"><code># Confidence intervals\r\nconfint(res.k)<\/code><\/pre>\n<pre><code>##             \r\n## Kappa          lwr   upr\r\n##   Unweighted 0.586 0.881\r\n##   Weighted   0.592 0.903<\/code><\/pre>\n<pre class=\"r\"><code># Summary showing the weights assigned to each cell\r\nsummary(res.k)<\/code><\/pre>\n<pre><code>##            value    ASE    z Pr(&gt;|z|)\r\n## Unweighted 0.733 0.0752 9.75 1.87e-22\r\n## Weighted   0.747 0.0791 9.45 3.41e-21\r\n## \r\n## Weights:\r\n##       [,1]  [,2]  [,3]  [,4]\r\n## [1,] 1.000 0.667 0.333 0.000\r\n## [2,] 0.667 1.000 0.667 0.333\r\n## [3,] 0.333 0.667 1.000 0.667\r\n## [4,] 0.000 0.333 0.667 1.000<\/code><\/pre>\n<div class=\"notice\">\n<p>Note that, in the above results, <code>ASE<\/code> is the asymptotic standard error of the kappa value.<\/p>\n<\/div>\n<div class=\"success\">\n<p>In our example, the weighted kappa (kw) = 0.75, which represents a good strength of agreement (p &lt; 0.0001). In conclusion, there was a statistically significant agreement between the two doctors.<\/p>\n<\/div>\n<\/div>\n<div id=\"report\" class=\"section level2\">\n<h2>Report<\/h2>\n<p>Weighted kappa (kw) with linear weights <span class=\"citation\">(Cicchetti and Allison 1971)<\/span> was computed to assess whether there was agreement between two clinical doctors in diagnosing the severity of anxiety. 
Fifty participants were enrolled and were classified by each of the two doctors into 4 ordered anxiety levels: \u201cnormal\u201d, \u201cmoderate\u201d, \u201chigh\u201d and \u201cvery high\u201d.<\/p>\n<p>There was a statistically significant agreement between the two doctors, kw = 0.75 (95% CI, 0.59 to 0.90), p &lt; 0.0001. The strength of agreement was classified as good according to Fleiss et al. (2003).<\/p>\n<\/div>\n<div id=\"summary\" class=\"section level2\">\n<h2>Summary<\/h2>\n<p>This chapter explains the basics and the formula of the weighted kappa, which is appropriate for measuring the agreement between two raters rating on ordinal scales. We also show how to compute and interpret the kappa values using the R software. Other variants of inter-rater agreement measures are the <em>Cohen\u2019s Kappa<\/em> (unweighted) (Chapter @ref(cohen-s-kappa)), which counts only strict agreement, and the <em>Fleiss kappa<\/em> for situations where you have two or more raters (Chapter @ref(fleiss-kappa)).<\/p>\n<\/div>\n<div id=\"references\" class=\"section level2 unnumbered\">\n<h2>References<\/h2>\n<div id=\"refs\" class=\"references\">\n<div id=\"ref-Cicchetti1971\">\n<p>Cicchetti, Domenic V., and Truett Allison. 1971. \u201cA New Procedure for Assessing Reliability of Scoring EEG Sleep Recordings.\u201d <em>American Journal of EEG Technology<\/em> 11 (3). Taylor &amp; Francis: 101\u201310. doi:<a href=\"https:\/\/doi.org\/10.1080\/00029238.1971.11080840\">10.1080\/00029238.1971.11080840<\/a>.<\/p>\n<\/div>\n<div id=\"ref-Cohen1968\">\n<p>Cohen, J. 1968. \u201cWeighted Kappa: Nominal Scale Agreement with Provision for Scaled Disagreement or Partial Credit.\u201d <em>Psychological Bulletin<\/em> 70 (4): 213\u201320. doi:<a href=\"https:\/\/doi.org\/10.1037\/h0026256\">10.1037\/h0026256<\/a>.<\/p>\n<\/div>\n<div id=\"ref-Fleiss1973\">\n<p>Fleiss, Joseph L., and Jacob Cohen. 1973. \u201cThe Equivalence of Weighted Kappa and the Intraclass Correlation Coefficient as Measures of Reliability.\u201d <em>Educational and Psychological Measurement<\/em> 33 (3): 613\u201319. doi:<a href=\"https:\/\/doi.org\/10.1177\/001316447303300309\">10.1177\/001316447303300309<\/a>.<\/p>\n<\/div>\n<div id=\"ref-Friendly2015\">\n<p>Friendly, Michael, D. Meyer, and A. Zeileis. 2015. <em>Discrete Data Analysis with R: Visualization and Modeling Techniques for Categorical and Count Data<\/em>. 1st ed. Chapman &amp; Hall\/CRC.<\/p>\n<\/div>\n<div id=\"ref-Fleiss2003\">\n<p>Fleiss, Joseph L., Bruce Levin, and Myunghee Cho Paik. 2003. <em>Statistical Methods for Rates and Proportions<\/em>. 3rd ed. John Wiley &amp; Sons, Inc.<\/p>\n<\/div>\n<div id=\"ref-Tang2015\">\n<p>Tang, Wan, Jun Hu, Hui Zhang, Pan Wu, and Hua He. 2015. \u201cKappa Coefficient: A Popular Measure of Rater Agreement.\u201d <em>Shanghai Archives of Psychiatry<\/em> 27 (February): 62\u201367. doi:<a href=\"https:\/\/doi.org\/10.11919\/j.issn.1002-0829.215010\">10.11919\/j.issn.1002-0829.215010<\/a>.<\/p>\n<\/div>\n<div id=\"ref-Warrens2013\">\n<p>Warrens, Matthijs J. 2013. \u201cWeighted Kappas for 3x3 Tables.\u201d <em>Journal of Probability and Statistics<\/em>. doi:<a href=\"https:\/\/doi.org\/10.1155\/2013\/325831\">10.1155\/2013\/325831<\/a>.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><!--end rdoc--><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This chapter explains the basics and the formula of the weighted kappa, which is appropriate for measuring the agreement between two raters rating on ordinal scales. We also show how to compute and interpret the kappa values using the R software. 
<\/p>\n","protected":false},"author":1,"featured_media":9093,"parent":0,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","class_list":["post-10305","dt_lessons","type-dt_lessons","status-publish","has-post-thumbnail","hentry"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Weighted Kappa in R: Best Reference - Datanovia<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Weighted Kappa in R: Best Reference - Datanovia\" \/>\n<meta property=\"og:description\" content=\"This chapter explains the basics and the formula of the weighted kappa, which is appropriate to measure the agreement between two raters rating in ordinal scales. We also show how to compute and interpret the kappa values using the R software.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/\" \/>\n<meta property=\"og:site_name\" content=\"Datanovia\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26731206_567196586953623_460691146121416825_n.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/\",\"url\":\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/\",\"name\":\"Weighted Kappa in R: Best Reference - Datanovia\",\"isPartOf\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26731206_567196586953623_460691146121416825_n.jpg\",\"datePublished\":\"2019-11-07T01:03:02+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#primaryimage\",\"url\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26731206_567196586953623_460691146121416825_n.jpg\",\"contentUrl\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26731206_567196586953623_460691146121416825_n.jpg\",\"width\":1024,\"height\":512},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"i
tem\":\"https:\/\/www.datanovia.com\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Lessons\",\"item\":\"https:\/\/www.datanovia.com\/en\/lessons\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Weighted Kappa in R: For Two Ordinal Variables\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.datanovia.com\/en\/#website\",\"url\":\"https:\/\/www.datanovia.com\/en\/\",\"name\":\"Datanovia\",\"description\":\"Data Mining and Statistics for Decision Support\",\"publisher\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.datanovia.com\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.datanovia.com\/en\/#organization\",\"name\":\"Datanovia\",\"url\":\"https:\/\/www.datanovia.com\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.datanovia.com\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2018\/09\/datanovia-logo.png\",\"contentUrl\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2018\/09\/datanovia-logo.png\",\"width\":98,\"height\":99,\"caption\":\"Datanovia\"},\"image\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/#\/schema\/logo\/image\/\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Weighted Kappa in R: Best Reference - Datanovia","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/","og_locale":"en_US","og_type":"article","og_title":"Weighted Kappa in R: Best Reference - Datanovia","og_description":"This chapter explains the basics and the formula of the weighted kappa, which is appropriate to measure the agreement between two raters rating in ordinal scales. We also show how to compute and interpret the kappa values using the R software.","og_url":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/","og_site_name":"Datanovia","og_image":[{"width":1024,"height":512,"url":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26731206_567196586953623_460691146121416825_n.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/","url":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/","name":"Weighted Kappa in R: Best Reference - Datanovia","isPartOf":{"@id":"https:\/\/www.datanovia.com\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#primaryimage"},"image":{"@id":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#primaryimage"},"thumbnailUrl":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26731206_567196586953623_460691146121416825_n.jpg","datePublished":"2019-11-07T01:03:02+00:00","breadcrumb":{"@id":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#primaryimage","url":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26731206_567196586953623_460691146121416825_n.jpg","contentUrl":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26731206_567196586953623_460691146121416825_n.jpg","width":1024,"height":512},{"@type":"BreadcrumbList","@id":"https:\/\/www.datanovia.com\/en\/lessons\/weighted-kappa-in-r-for-two-ordinal-variables\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.datanovia.com\/en\/"},{"@type":"ListItem","position":2,"name":"Lessons","item":"https:\/\/www.datanovia.com\/en\/lessons\/"},{"@type":"ListItem","position":3,"name":"Weighted Kappa in R: For Two 
Ordinal Variables"}]},{"@type":"WebSite","@id":"https:\/\/www.datanovia.com\/en\/#website","url":"https:\/\/www.datanovia.com\/en\/","name":"Datanovia","description":"Data Mining and Statistics for Decision Support","publisher":{"@id":"https:\/\/www.datanovia.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.datanovia.com\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.datanovia.com\/en\/#organization","name":"Datanovia","url":"https:\/\/www.datanovia.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.datanovia.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2018\/09\/datanovia-logo.png","contentUrl":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2018\/09\/datanovia-logo.png","width":98,"height":99,"caption":"Datanovia"},"image":{"@id":"https:\/\/www.datanovia.com\/en\/#\/schema\/logo\/image\/"}}]}},"multi-rating":{"mr_rating_results":[]},"_links":{"self":[{"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/dt_lessons\/10305","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/dt_lessons"}],"about":[{"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/types\/dt_lessons"}],"author":[{"embeddable":true,"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/comments?post=10305"}],"version-history":[{"count":0,"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/dt_lessons\/10305\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/media\/9093"}],"wp:attachment":[{"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/
media?parent=10305"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}