{"id":10876,"date":"2019-11-29T08:51:19","date_gmt":"2019-11-29T06:51:19","guid":{"rendered":"https:\/\/www.datanovia.com\/en\/?post_type=dt_lessons&#038;p=10876"},"modified":"2019-11-29T08:51:19","modified_gmt":"2019-11-29T06:51:19","slug":"ancova-in-r","status":"publish","type":"dt_lessons","link":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/","title":{"rendered":"ANCOVA in R"},"content":{"rendered":"<div id=\"rdoc\">\n<p>The <strong>Analysis of Covariance<\/strong> (<strong>ANCOVA<\/strong>) is used to compare means of an outcome variable between two or more groups taking into account (or to correct for) variability of other variables, called covariates. In other words, ANCOVA allows you to compare the adjusted means of two or more independent groups.<\/p>\n<p>For example, you might want to compare \u201ctest score\u201d by \u201clevel of education\u201d taking into account the \u201cnumber of hours spent studying\u201d. In this example: 1) <code>test score<\/code> is our outcome (dependent) variable; 2) <code>level of education<\/code> (high school, college degree or graduate degree) is our grouping variable; 3) <code>studying time<\/code> is our covariate.<\/p>\n<p>The <strong>one-way ANCOVA<\/strong> can be seen as an extension of the one-way ANOVA that incorporates a covariate variable. 
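<\/p>\n<p>For instance, the \u201ctest score\u201d example above could be sketched in R model notation, with the covariate entered before the grouping variable. The snippet below is an illustrative sketch only, using simulated data and hypothetical variable names (not a dataset used in this article):<\/p>\n<pre class=\"r\"><code># Illustrative sketch with simulated data (hypothetical variable names)\r\nset.seed(123)\r\ndf &lt;- data.frame(\r\n  education_level = rep(c(\"high_school\", \"college\", \"graduate\"), each = 20),\r\n  studying_time = runif(60, 1, 10)\r\n)\r\ndf$test_score &lt;- 50 + 3 * df$studying_time + rnorm(60)\r\n# The covariate (studying_time) goes before the grouping variable\r\nmodel &lt;- lm(test_score ~ studying_time + education_level, data = df)<\/code><\/pre>\n<p>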
The <strong>two-way ANCOVA<\/strong> is used to evaluate simultaneously the effect of two independent grouping variables (A and B) on an outcome variable, after adjusting for one or more continuous variables, called covariates.<\/p>\n<p>In this article, you will learn how to:<\/p>\n<ul>\n<li><strong>Compute and interpret<\/strong> the one-way and the two-way ANCOVA in R<\/li>\n<li><strong>Check ANCOVA assumptions<\/strong><\/li>\n<li><strong>Perform post-hoc tests<\/strong>, multiple pairwise comparisons between groups to identify which groups are different<\/li>\n<li><strong>Visualize the data<\/strong> using box plots, add ANCOVA and pairwise comparisons p-values to the plot<\/li>\n<\/ul>\n<p>Contents:<\/p>\n<div id=\"TOC\">\n<ul>\n<li><a href=\"#assumptions\">Assumptions<\/a><\/li>\n<li><a href=\"#prerequisites\">Prerequisites<\/a><\/li>\n<li><a href=\"#one-way-ancova\">One-way ANCOVA<\/a>\n<ul>\n<li><a href=\"#data-preparation\">Data preparation<\/a><\/li>\n<li><a href=\"#check-assumptions\">Check assumptions<\/a><\/li>\n<li><a href=\"#normality-of-residuals\">Normality of residuals<\/a><\/li>\n<li><a href=\"#homogeneity-of-variances\">Homogeneity of variances<\/a><\/li>\n<li><a href=\"#outliers\">Outliers<\/a><\/li>\n<li><a href=\"#computation\">Computation<\/a><\/li>\n<li><a href=\"#post-hoc-test\">Post-hoc test<\/a><\/li>\n<li><a href=\"#report\">Report<\/a><\/li>\n<\/ul>\n<\/li>\n<li><a href=\"#two-way-ancova\">Two-way ANCOVA<\/a>\n<ul>\n<li><a href=\"#data-preparation-1\">Data preparation<\/a><\/li>\n<li><a href=\"#check-assumptions-1\">Check assumptions<\/a><\/li>\n<li><a href=\"#computation-1\">Computation<\/a><\/li>\n<li><a href=\"#post-hoc-test-1\">Post-hoc test<\/a><\/li>\n<li><a href=\"#report-1\">Report<\/a><\/li>\n<\/ul>\n<\/li>\n<li><a href=\"#summary\">Summary<\/a><\/li>\n<\/ul>\n<\/div>\n<div class='dt-sc-hr-invisible-medium  '><\/div>\n<div class='dt-sc-ico-content type1'><div class='custom-icon' ><a 
href='https:\/\/www.datanovia.com\/en\/product\/practical-statistics-in-r-for-comparing-groups-numerical-variables\/' target='_blank'><span class='fa fa-book'><\/span><\/a><\/div><h4><a href='https:\/\/www.datanovia.com\/en\/product\/practical-statistics-in-r-for-comparing-groups-numerical-variables\/' target='_blank'> Related Book <\/a><\/h4>Practical Statistics in R II - Comparing Groups: Numerical Variables<\/div>\n<div class='dt-sc-hr-invisible-medium  '><\/div>\n<div id=\"assumptions\" class=\"section level2\">\n<h2>Assumptions<\/h2>\n<p>ANCOVA makes several assumptions about the data, such as:<\/p>\n<ul>\n<li><strong>Linearity between the covariate and the outcome variable<\/strong> at each level of the grouping variable. This can be checked by creating a grouped scatter plot of the covariate and the outcome variable.<\/li>\n<li><strong>Homogeneity of regression slopes<\/strong>. The slopes of the regression lines, formed by the covariate and the outcome variable, should be the same for each group. This assumption evaluates that there is no interaction between the covariate and the grouping variable. The plotted regression lines by groups should be parallel.<\/li>\n<li><strong>The outcome variable should be approximately normally distributed<\/strong>. This can be checked using the Shapiro-Wilk test of normality on the model residuals.<\/li>\n<li><strong>Homoscedasticity<\/strong> or homogeneity of residuals variance for all groups. The residuals are assumed to have a constant variance (homoscedasticity).<\/li>\n<li><strong>No significant outliers<\/strong> in the groups.<\/li>\n<\/ul>\n<div class=\"warning\">\n<p>Many of these assumptions and potential problems can be checked by analyzing the residual errors. 
When the ANCOVA assumptions are not met, you can perform a <strong>robust ANCOVA<\/strong> test using the WRS2 package.<\/p>\n<\/div>\n<\/div>\n<div id=\"prerequisites\" class=\"section level2\">\n<h2>Prerequisites<\/h2>\n<p>Make sure you have installed the following R packages:<\/p>\n<ul>\n<li><code>tidyverse<\/code> for data manipulation and visualization<\/li>\n<li><code>ggpubr<\/code> for easily creating publication-ready plots<\/li>\n<li><code>rstatix<\/code> for easy pipe-friendly statistical analyses<\/li>\n<li><code>broom<\/code> for printing a nice summary of statistical tests as data frames<\/li>\n<li><code>datarium<\/code>: contains required data sets for this chapter<\/li>\n<\/ul>\n<p>Start by loading the following required packages:<\/p>\n<pre class=\"r\"><code># If needed, install them first:\r\n# install.packages(c(\"tidyverse\", \"ggpubr\", \"rstatix\", \"broom\", \"datarium\"))\r\nlibrary(tidyverse)\r\nlibrary(ggpubr)\r\nlibrary(rstatix)\r\nlibrary(broom)<\/code><\/pre>\n<\/div>\n<div id=\"one-way-ancova\" class=\"section level2\">\n<h2>One-way ANCOVA<\/h2>\n<div id=\"data-preparation\" class=\"section level3\">\n<h3>Data preparation<\/h3>\n<p>We\u2019ll prepare our demo data from the <code>anxiety<\/code> dataset available in the datarium package.<\/p>\n<p>Researchers investigated the effect of exercise in reducing the level of anxiety. Therefore, they conducted an experiment in which they measured the anxiety score of three groups of individuals practicing physical exercise at different levels (grp1: low, grp2: moderate and grp3: high).<\/p>\n<p>The anxiety score was measured pre- and 6 months post-exercise training programs. 
It is expected that any reduction in anxiety produced by the exercise programs would also depend on the participant\u2019s basal anxiety score.<\/p>\n<p>In this analysis, we use the pretest anxiety score as the covariate and are interested in possible differences between groups with respect to the post-test anxiety scores.<\/p>\n<pre class=\"r\"><code># Load and prepare the data\r\ndata(\"anxiety\", package = \"datarium\")\r\nanxiety &lt;- anxiety %&gt;%\r\n  select(id, group, t1, t3) %&gt;%\r\n  rename(pretest = t1, posttest = t3)\r\nanxiety[14, \"posttest\"] &lt;- 19\r\n# Inspect the data by showing one random row by groups\r\nset.seed(123)\r\nanxiety %&gt;% sample_n_by(group, size = 1)<\/code><\/pre>\n<pre><code>## # A tibble: 3 x 4\r\n##   id    group pretest posttest\r\n##   &lt;fct&gt; &lt;fct&gt;   &lt;dbl&gt;    &lt;dbl&gt;\r\n## 1 5     grp1     16.5     15.7\r\n## 2 27    grp2     17.8     16.9\r\n## 3 37    grp3     17.1     14.3<\/code><\/pre>\n<\/div>\n<div id=\"check-assumptions\" class=\"section level3\">\n<h3>Check assumptions<\/h3>\n<div id=\"linearity-assumption\" class=\"section level4\">\n<h4>Linearity assumption<\/h4>\n<ul>\n<li>Create a scatter plot between the covariate (i.e., <code>pretest<\/code>) and the outcome variable (i.e., <code>posttest<\/code>)<\/li>\n<li>Add regression lines, show the corresponding equations and the R2 by groups<\/li>\n<\/ul>\n<pre class=\"r\"><code>ggscatter(\r\n  anxiety, x = \"pretest\", y = \"posttest\",\r\n  color = \"group\", add = \"reg.line\"\r\n  )+\r\n  stat_regline_equation(\r\n    aes(label =  paste(..eq.label.., ..rr.label.., sep = \"~~~~\"), color = group)\r\n    )<\/code><\/pre>\n<p><img decoding=\"async\" src=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/dn-tutorials\/r-statistics-2-comparing-groups-means\/figures\/048-ancova-analysis-of-covariance-one-way-linearity-assumption-1.png\" width=\"384\" \/><\/p>\n<div class=\"success\">\n<p>There was a linear relationship between pre-test 
and post-test anxiety score for each training group, as assessed by visual inspection of a scatter plot.<\/p>\n<\/div>\n<\/div>\n<div id=\"homogeneity-of-regression-slopes\" class=\"section level4\">\n<h4>Homogeneity of regression slopes<\/h4>\n<p>This assumption checks that there is no significant interaction between the covariate and the grouping variable. This can be evaluated as follows:<\/p>\n<pre class=\"r\"><code>anxiety %&gt;% anova_test(posttest ~ group*pretest)<\/code><\/pre>\n<pre><code>## ANOVA Table (type II tests)\r\n## \r\n##          Effect DFn DFd       F        p p&lt;.05   ges\r\n## 1         group   2  39 209.314 1.40e-21     * 0.915\r\n## 2       pretest   1  39 572.828 6.36e-25     * 0.936\r\n## 3 group:pretest   2  39   0.127 8.81e-01       0.006<\/code><\/pre>\n<div class=\"success\">\n<p>There was homogeneity of regression slopes as the interaction term was not statistically significant, F(2, 39) = 0.13, p = 0.88.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"normality-of-residuals\" class=\"section level3\">\n<h3>Normality of residuals<\/h3>\n<p>You first need to compute the model using <code>lm()<\/code>. In R, you can easily augment your data to add fitted values and residuals by using the function <code>augment(model)<\/code> [broom package]. 
Let\u2019s call the output <code>model.metrics<\/code> because it contains several metrics useful for regression diagnostics.<\/p>\n<pre class=\"r\"><code># Fit the model; the covariate goes first\r\nmodel &lt;- lm(posttest ~ pretest + group, data = anxiety)\r\n# Inspect the model diagnostic metrics\r\nmodel.metrics &lt;- augment(model) %&gt;%\r\n  select(-.hat, -.sigma, -.fitted, -.se.fit) # Remove details\r\nhead(model.metrics, 3)<\/code><\/pre>\n<pre><code>## # A tibble: 3 x 6\r\n##   posttest pretest group .resid .cooksd .std.resid\r\n##      &lt;dbl&gt;   &lt;dbl&gt; &lt;fct&gt;  &lt;dbl&gt;   &lt;dbl&gt;      &lt;dbl&gt;\r\n## 1     14.1    14.1 grp1   0.550  0.101       1.46 \r\n## 2     14.3    14.5 grp1   0.338  0.0310      0.885\r\n## 3     14.9    15.7 grp1  -0.295  0.0133     -0.750<\/code><\/pre>\n<pre class=\"r\"><code># Assess normality of residuals using the Shapiro-Wilk test\r\nshapiro_test(model.metrics$.resid)<\/code><\/pre>\n<pre><code>## # A tibble: 1 x 3\r\n##   variable             statistic p.value\r\n##   &lt;chr&gt;                    &lt;dbl&gt;   &lt;dbl&gt;\r\n## 1 model.metrics$.resid     0.975   0.444<\/code><\/pre>\n<div class=\"success\">\n<p>The Shapiro-Wilk test was not significant (p &gt; 0.05), so we can assume normality of residuals.<\/p>\n<\/div>\n<\/div>\n<div id=\"homogeneity-of-variances\" class=\"section level3\">\n<h3>Homogeneity of variances<\/h3>\n<p>ANCOVA assumes that the variance of the residuals is equal for all groups. 
This can be checked using Levene\u2019s test:<\/p>\n<pre class=\"r\"><code>model.metrics %&gt;% levene_test(.resid ~ group)<\/code><\/pre>\n<pre><code>## # A tibble: 1 x 4\r\n##     df1   df2 statistic     p\r\n##   &lt;int&gt; &lt;int&gt;     &lt;dbl&gt; &lt;dbl&gt;\r\n## 1     2    42      2.27 0.116<\/code><\/pre>\n<div class=\"success\">\n<p>Levene\u2019s test was not significant (p &gt; 0.05), so we can assume homogeneity of the residual variances for all groups.<\/p>\n<\/div>\n<\/div>\n<div id=\"outliers\" class=\"section level3\">\n<h3>Outliers<\/h3>\n<p>An outlier is a point that has an extreme outcome variable value. The presence of outliers may affect the interpretation of the model.<\/p>\n<p>Outliers can be identified by examining the standardized residual (or studentized residual), which is the residual divided by its estimated standard error. Standardized residuals can be interpreted as the number of standard errors away from the regression line.<\/p>\n<div class=\"warning\">\n<p>Observations whose standardized residuals are greater than 3 in absolute value are possible outliers.<\/p>\n<\/div>\n<pre class=\"r\"><code>model.metrics %&gt;% \r\n  filter(abs(.std.resid) &gt; 3) %&gt;%\r\n  as.data.frame()<\/code><\/pre>\n<pre><code>## [1] posttest   pretest    group      .resid     .cooksd    .std.resid\r\n## &lt;0 rows&gt; (or 0-length row.names)<\/code><\/pre>\n<div class=\"success\">\n<p>There were no outliers in the data, as assessed by no cases with standardized residuals greater than 3 in absolute value.<\/p>\n<\/div>\n<\/div>\n<div id=\"computation\" class=\"section level3\">\n<h3>Computation<\/h3>\n<p>The order of variables matters when computing ANCOVA. You want to remove the effect of the covariate first - that is, you want to control for it - prior to entering your main variable of interest.<\/p>\n<div class=\"warning\">\n<p>The covariate goes first (and there is no interaction)! 
If you do not do this in order, you will get different results.<\/p>\n<\/div>\n<pre class=\"r\"><code>res.aov &lt;- anxiety %&gt;% anova_test(posttest ~ pretest + group)\r\nget_anova_table(res.aov)<\/code><\/pre>\n<pre><code>## ANOVA Table (type II tests)\r\n## \r\n##    Effect DFn DFd   F        p p&lt;.05   ges\r\n## 1 pretest   1  41 598 4.48e-26     * 0.936\r\n## 2   group   2  41 219 1.35e-22     * 0.914<\/code><\/pre>\n<div class=\"success\">\n<p>After adjustment for pre-test anxiety score, there was a statistically significant difference in post-test anxiety score between the groups, F(2, 41) = 218.63, p &lt; 0.0001.<\/p>\n<\/div>\n<\/div>\n<div id=\"post-hoc-test\" class=\"section level3\">\n<h3>Post-hoc test<\/h3>\n<p>Pairwise comparisons can be performed to identify which groups are different. The Bonferroni multiple testing correction is applied. This can be easily done using the function <code>emmeans_test()<\/code> [rstatix package], a wrapper around the <code>emmeans<\/code> package, which needs to be installed. Emmeans stands for <strong>estimated marginal means<\/strong> (aka least square means or adjusted means).<\/p>\n<pre class=\"r\"><code># Pairwise comparisons\r\nlibrary(emmeans)\r\npwc &lt;- anxiety %&gt;% \r\n  emmeans_test(\r\n    posttest ~ group, covariate = pretest,\r\n    p.adjust.method = \"bonferroni\"\r\n    )\r\npwc<\/code><\/pre>\n<pre><code>## # A tibble: 3 x 8\r\n##   .y.      
group1 group2    df statistic        p    p.adj p.adj.signif\r\n## * &lt;chr&gt;    &lt;chr&gt;  &lt;chr&gt;  &lt;dbl&gt;     &lt;dbl&gt;    &lt;dbl&gt;    &lt;dbl&gt; &lt;chr&gt;       \r\n## 1 posttest grp1   grp2      41      4.24 1.26e- 4 3.77e- 4 ***         \r\n## 2 posttest grp1   grp3      41     19.9  1.19e-22 3.58e-22 ****        \r\n## 3 posttest grp2   grp3      41     15.5  9.21e-19 2.76e-18 ****<\/code><\/pre>\n<pre class=\"r\"><code># Display the adjusted means of each group\r\n# Also called the estimated marginal means (emmeans)\r\nget_emmeans(pwc)<\/code><\/pre>\n<pre><code>## # A tibble: 3 x 8\r\n##   pretest group emmean    se    df conf.low conf.high method      \r\n##     &lt;dbl&gt; &lt;fct&gt;  &lt;dbl&gt; &lt;dbl&gt; &lt;dbl&gt;    &lt;dbl&gt;     &lt;dbl&gt; &lt;chr&gt;       \r\n## 1    16.9 grp1    16.4 0.106    41     16.2      16.7 Emmeans test\r\n## 2    16.9 grp2    15.8 0.107    41     15.6      16.0 Emmeans test\r\n## 3    16.9 grp3    13.5 0.106    41     13.2      13.7 Emmeans test<\/code><\/pre>\n<div class=\"success\">\n<p>Data are adjusted mean +\/- standard error. The mean anxiety score was statistically significantly greater in grp1 (16.4 +\/- 0.11) compared to grp2 (15.8 +\/- 0.11) and grp3 (13.5 +\/- 0.11), p &lt; 0.001.<\/p>\n<\/div>\n<\/div>\n<div id=\"report\" class=\"section level3\">\n<h3>Report<\/h3>\n<p>An ANCOVA was run to determine the effect of exercise on the anxiety score after controlling for participants\u2019 basal anxiety score.<\/p>\n<p>After adjustment for pre-test anxiety score, there was a statistically significant difference in post-test anxiety score between the groups, F(2, 41) = 218.63, p &lt; 0.0001.<\/p>\n<p>Post hoc analysis was performed with a Bonferroni adjustment. 
The mean anxiety score was statistically significantly greater in grp1 (16.4 +\/- 0.11) compared to grp2 (15.8 +\/- 0.11) and grp3 (13.5 +\/- 0.11), p &lt; 0.001.<\/p>\n<pre class=\"r\"><code># Visualization: line plots with p-values\r\npwc &lt;- pwc %&gt;% add_xy_position(x = \"group\", fun = \"mean_se\")\r\nggline(get_emmeans(pwc), x = \"group\", y = \"emmean\") +\r\n  geom_errorbar(aes(ymin = conf.low, ymax = conf.high), width = 0.2) + \r\n  stat_pvalue_manual(pwc, hide.ns = TRUE, tip.length = 0) +\r\n  labs(\r\n    subtitle = get_test_label(res.aov, detailed = TRUE),\r\n    caption = get_pwc_label(pwc)\r\n  )<\/code><\/pre>\n<p><img decoding=\"async\" src=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/dn-tutorials\/r-statistics-2-comparing-groups-means\/figures\/048-ancova-analysis-of-covariance-one-way-line-plot-with-p-values-1.png\" width=\"576\" \/><\/p>\n<\/div>\n<\/div>\n<div id=\"two-way-ancova\" class=\"section level2\">\n<h2>Two-way ANCOVA<\/h2>\n<div id=\"data-preparation-1\" class=\"section level3\">\n<h3>Data preparation<\/h3>\n<p>We\u2019ll use the <code>stress<\/code> dataset available in the datarium package. 
In this study, a researcher wants to evaluate the effect of <code>treatment<\/code> and <code>exercise<\/code> on stress reduction <code>score<\/code> after adjusting for <code>age<\/code>.<\/p>\n<p>In this example: 1) <code>stress score<\/code> is our outcome (dependent) variable; 2) <code>treatment<\/code> (levels: no and yes) and <code>exercise<\/code> (levels: low, moderate and high intensity training) are our grouping variables; 3) <code>age<\/code> is our covariate.<\/p>\n<p>Load the data and show some random rows by groups:<\/p>\n<pre class=\"r\"><code>data(\"stress\", package = \"datarium\")\r\nstress %&gt;% sample_n_by(treatment, exercise)<\/code><\/pre>\n<pre><code>## # A tibble: 6 x 5\r\n##      id score treatment exercise   age\r\n##   &lt;int&gt; &lt;dbl&gt; &lt;fct&gt;     &lt;fct&gt;    &lt;dbl&gt;\r\n## 1     8  83.8 yes       low         61\r\n## 2    15  86.9 yes       moderate    55\r\n## 3    29  71.5 yes       high        55\r\n## 4    40  92.4 no        low         67\r\n## 5    41 100   no        moderate    75\r\n## 6    56  82.4 no        high        53<\/code><\/pre>\n<\/div>\n<div id=\"check-assumptions-1\" class=\"section level3\">\n<h3>Check assumptions<\/h3>\n<div id=\"linearity-assumption-1\" class=\"section level4\">\n<h4>Linearity assumption<\/h4>\n<ul>\n<li>Create a scatter plot between the covariate (i.e., <code>age<\/code>) and the outcome variable (i.e., <code>score<\/code>) for each combination of the groups of the two grouping variables<\/li>\n<li>Add smoothed loess lines, which help to decide if the relationship is linear or not<\/li>\n<\/ul>\n<pre class=\"r\"><code>ggscatter(\r\n  stress, x = \"age\", y = \"score\",\r\n  facet.by  = c(\"exercise\", \"treatment\"), \r\n  short.panel.labs = FALSE\r\n  )+\r\n  stat_smooth(method = \"loess\", span = 0.9)<\/code><\/pre>\n<p><img decoding=\"async\" 
src=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/dn-tutorials\/r-statistics-2-comparing-groups-means\/figures\/048-ancova-analysis-of-covariance-linearity-assumption-1.png\" width=\"480\" \/><\/p>\n<div class=\"success\">\n<p>There was a linear relationship between the covariate (age variable) and the outcome variable (score) for each group, as assessed by visual inspection of a scatter plot.<\/p>\n<\/div>\n<\/div>\n<div id=\"homogeneity-of-regression-slopes-1\" class=\"section level4\">\n<h4>Homogeneity of regression slopes<\/h4>\n<p>This assumption checks that there is no significant interaction between the covariate and the grouping variables. This can be evaluated as follows:<\/p>\n<pre class=\"r\"><code>stress %&gt;%\r\n  anova_test(\r\n    score ~ age + treatment + exercise + \r\n     treatment*exercise + age*treatment +\r\n     age*exercise + age*exercise*treatment\r\n  )<\/code><\/pre>\n<pre><code>## ANOVA Table (type II tests)\r\n## \r\n##                   Effect DFn DFd      F        p p&lt;.05      ges\r\n## 1                    age   1  48  8.359 6.00e-03     * 0.148000\r\n## 2              treatment   1  48  9.907 3.00e-03     * 0.171000\r\n## 3               exercise   2  48 18.197 1.31e-06     * 0.431000\r\n## 4     treatment:exercise   2  48  3.303 4.50e-02     * 0.121000\r\n## 5          age:treatment   1  48  0.009 9.25e-01       0.000189\r\n## 6           age:exercise   2  48  0.235 7.91e-01       0.010000\r\n## 7 age:treatment:exercise   2  48  0.073 9.30e-01       0.003000<\/code><\/pre>\n<p>Another simple alternative is to create a new grouping variable, say <code>group<\/code>, based on the combinations of the existing variables, and then compute the ANOVA model:<\/p>\n<pre class=\"r\"><code>stress %&gt;%\r\n  unite(col = \"group\", treatment, exercise) %&gt;%\r\n  anova_test(score ~ group*age)<\/code><\/pre>\n<pre><code>## ANOVA Table (type II tests)\r\n## \r\n##      Effect DFn DFd      F        p p&lt;.05   ges\r\n## 1     group   5 
 48 10.912 4.76e-07     * 0.532\r\n## 2       age   1  48  8.359 6.00e-03     * 0.148\r\n## 3 group:age   5  48  0.126 9.86e-01       0.013<\/code><\/pre>\n<div class=\"success\">\n<p>There was homogeneity of regression slopes as the interaction terms between the covariate (<code>age<\/code>) and the grouping variables (<code>treatment<\/code> and <code>exercise<\/code>) were not statistically significant, p &gt; 0.05.<\/p>\n<\/div>\n<\/div>\n<div id=\"normality-of-residuals-1\" class=\"section level4\">\n<h4>Normality of residuals<\/h4>\n<pre class=\"r\"><code># Fit the model; the covariate goes first\r\nmodel &lt;- lm(score ~ age + treatment*exercise, data = stress)\r\n# Inspect the model diagnostic metrics\r\nmodel.metrics &lt;- augment(model) %&gt;%\r\n  select(-.hat, -.sigma, -.fitted, -.se.fit) # Remove details\r\nhead(model.metrics, 3)<\/code><\/pre>\n<pre><code>## # A tibble: 3 x 7\r\n##   score   age treatment exercise .resid .cooksd .std.resid\r\n##   &lt;dbl&gt; &lt;dbl&gt; &lt;fct&gt;     &lt;fct&gt;     &lt;dbl&gt;   &lt;dbl&gt;      &lt;dbl&gt;\r\n## 1  95.6    59 yes       low        9.10  0.0647       1.93\r\n## 2  82.2    65 yes       low       -7.32  0.0439      -1.56\r\n## 3  97.2    70 yes       low        5.16  0.0401       1.14<\/code><\/pre>\n<pre class=\"r\"><code># Assess normality of residuals using the Shapiro-Wilk test\r\nshapiro_test(model.metrics$.resid)<\/code><\/pre>\n<pre><code>## # A tibble: 1 x 3\r\n##   variable             statistic p.value\r\n##   &lt;chr&gt;                    &lt;dbl&gt;   &lt;dbl&gt;\r\n## 1 model.metrics$.resid     0.982   0.531<\/code><\/pre>\n<div class=\"success\">\n<p>The Shapiro-Wilk test was not significant (p &gt; 0.05), so we can assume normality of residuals.<\/p>\n<\/div>\n<\/div>\n<div id=\"homogeneity-of-variances-1\" class=\"section level4\">\n<h4>Homogeneity of variances<\/h4>\n<p>ANCOVA assumes that the variance of the residuals is equal for all groups. 
This can be checked using Levene\u2019s test:<\/p>\n<pre class=\"r\"><code>levene_test(.resid ~ treatment*exercise, data = model.metrics)<\/code><\/pre>\n<div class=\"success\">\n<p>Levene\u2019s test was not significant (p &gt; 0.05), so we can assume homogeneity of the residual variances for all groups.<\/p>\n<\/div>\n<\/div>\n<div id=\"outliers-1\" class=\"section level4\">\n<h4>Outliers<\/h4>\n<p>Observations whose standardized residuals are greater than 3 in absolute value are possible outliers.<\/p>\n<pre class=\"r\"><code>model.metrics %&gt;% \r\n  filter(abs(.std.resid) &gt; 3) %&gt;%\r\n  as.data.frame()<\/code><\/pre>\n<pre><code>## [1] score      age        treatment  exercise   .resid     .cooksd    .std.resid\r\n## &lt;0 rows&gt; (or 0-length row.names)<\/code><\/pre>\n<div class=\"success\">\n<p>There were no outliers in the data, as assessed by no cases with standardized residuals greater than 3 in absolute value.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"computation-1\" class=\"section level3\">\n<h3>Computation<\/h3>\n<pre class=\"r\"><code>res.aov &lt;- stress %&gt;% \r\n  anova_test(score ~ age + treatment*exercise)\r\nget_anova_table(res.aov)<\/code><\/pre>\n<pre><code>## ANOVA Table (type II tests)\r\n## \r\n##               Effect DFn DFd     F        p p&lt;.05   ges\r\n## 1                age   1  53  9.11 4.00e-03     * 0.147\r\n## 2          treatment   1  53 11.10 2.00e-03     * 0.173\r\n## 3           exercise   2  53 20.82 2.13e-07     * 0.440\r\n## 4 treatment:exercise   2  53  4.45 1.60e-02     * 0.144<\/code><\/pre>\n<div class=\"success\">\n<p>After adjustment for age, there was a statistically significant interaction between treatment and exercise on the stress score, F(2, 53) = 4.45, p = 0.016. 
This indicates that the effect of exercise on the stress score depends on the treatment, and vice versa.<\/p>\n<\/div>\n<\/div>\n<div id=\"post-hoc-test-1\" class=\"section level3\">\n<h3>Post-hoc test<\/h3>\n<p>A statistically significant two-way interaction can be followed up by <strong>simple main effect analyses<\/strong>, that is, evaluating the effect of one variable at each level of the second variable, and vice versa.<\/p>\n<p>When the interaction is not significant, you can report the main effect of each grouping variable.<\/p>\n<p>A <strong>significant two-way interaction<\/strong> indicates that the impact that one factor has on the outcome variable depends on the level of the other factor (and vice versa). So, you can decompose a significant two-way interaction into:<\/p>\n<ul>\n<li><strong>Simple main effect<\/strong>: run a one-way model of the first variable (factor A) at each level of the second variable (factor B),<\/li>\n<li><strong>Simple pairwise comparisons<\/strong>: if the simple main effect is significant, run multiple pairwise comparisons to determine which groups are different.<\/li>\n<\/ul>\n<p>For a <strong>non-significant two-way interaction<\/strong>, you need to determine whether you have any statistically significant <strong>main effects<\/strong> from the ANCOVA output.<\/p>\n<p>In this section, we\u2019ll describe the procedure for a significant two-way interaction.<\/p>\n<div id=\"simple-main-effect-analyses-for-treatment\" class=\"section level4\">\n<h4>Simple main effect analyses for treatment<\/h4>\n<p><strong>Analyze the simple main effect<\/strong> of <code>treatment<\/code> at each level of <code>exercise<\/code>. 
Group the data by <code>exercise<\/code> and perform a one-way ANCOVA for <code>treatment<\/code> controlling for <code>age<\/code>:<\/p>\n<pre class=\"r\"><code># Effect of treatment at each level of exercise\r\nstress %&gt;%\r\n  group_by(exercise) %&gt;%\r\n  anova_test(score ~ age + treatment)<\/code><\/pre>\n<pre><code>## # A tibble: 6 x 8\r\n##   exercise Effect      DFn   DFd      F        p `p&lt;.05`   ges\r\n##   &lt;fct&gt;    &lt;chr&gt;     &lt;dbl&gt; &lt;dbl&gt;  &lt;dbl&gt;    &lt;dbl&gt; &lt;chr&gt;   &lt;dbl&gt;\r\n## 1 low      age           1    17  2.25  0.152    \"\"      0.117\r\n## 2 low      treatment     1    17  0.437 0.517    \"\"      0.025\r\n## 3 moderate age           1    17  6.65  0.02     *       0.281\r\n## 4 moderate treatment     1    17  0.419 0.526    \"\"      0.024\r\n## 5 high     age           1    17  0.794 0.385    \"\"      0.045\r\n## 6 high     treatment     1    17 18.7   0.000455 *       0.524<\/code><\/pre>\n<div class=\"warning\">\n<p>Note that we need to apply a Bonferroni adjustment for multiple testing correction. One common approach is lowering the level at which you declare significance by dividing the alpha value (0.05) by the number of tests performed. In our example, that is 0.05\/3 = 0.016667.<\/p>\n<\/div>\n<div class=\"success\">\n<p>Statistical significance was accepted at the Bonferroni-adjusted alpha level of 0.01667, that is 0.05\/3. The effect of treatment was statistically significant in the high-intensity exercise group (p = 0.000455), but not in the low-intensity exercise group (p = 0.517) or the moderate-intensity exercise group (p = 0.526).<\/p>\n<\/div>\n<p><strong>Compute pairwise comparisons between treatment groups<\/strong> at each level of exercise. 
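<\/p>\n<p>As a quick sketch, the Bonferroni-adjusted alpha threshold described in the note above is simply the alpha level divided by the number of tests:<\/p>\n<pre class=\"r\"><code># Bonferroni-adjusted significance threshold: alpha \/ number of tests\r\nalpha &lt;- 0.05\r\nn.tests &lt;- 3 # one simple main effect test per exercise level\r\nalpha \/ n.tests<\/code><\/pre>\n<pre><code>## [1] 0.01666667<\/code><\/pre>\n<p>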
The Bonferroni multiple testing correction is applied.<\/p>\n<pre class=\"r\"><code># Pairwise comparisons\r\npwc &lt;- stress %&gt;% \r\n  group_by(exercise) %&gt;%\r\n  emmeans_test(\r\n    score ~ treatment, covariate = age,\r\n    p.adjust.method = \"bonferroni\"\r\n    )\r\npwc %&gt;% filter(exercise == \"high\")<\/code><\/pre>\n<pre><code>## # A tibble: 1 x 9\r\n##   exercise .y.   group1 group2    df statistic         p     p.adj p.adj.signif\r\n##   &lt;fct&gt;    &lt;chr&gt; &lt;chr&gt;  &lt;chr&gt;  &lt;dbl&gt;     &lt;dbl&gt;     &lt;dbl&gt;     &lt;dbl&gt; &lt;chr&gt;       \r\n## 1 high     score yes    no        53     -4.36 0.0000597 0.0000597 ****<\/code><\/pre>\n<div class=\"warning\">\n<p>In the pairwise comparison table, you will only need the result for the \u201cexercise:high\u201d group, as this was the only condition where the simple main effect of treatment was statistically significant.<\/p>\n<\/div>\n<div class=\"success\">\n<p>The pairwise comparison between the treatment:no and treatment:yes groups was statistically significant in participants undertaking high-intensity exercise (p &lt; 0.0001).<\/p>\n<\/div>\n<\/div>\n<div id=\"simple-main-effect-for-exercise\" class=\"section level4\">\n<h4>Simple main effect for exercise<\/h4>\n<p>You can do the same post-hoc analyses for the <code>exercise<\/code> variable at each level of the <code>treatment<\/code> variable.<\/p>\n<pre class=\"r\"><code># Effect of exercise at each level of treatment\r\nstress %&gt;%\r\n  group_by(treatment) %&gt;%\r\n  anova_test(score ~ age + exercise)<\/code><\/pre>\n<pre><code>## # A tibble: 4 x 8\r\n##   treatment Effect     DFn   DFd     F         p `p&lt;.05`   ges\r\n##   &lt;fct&gt;     &lt;chr&gt;    &lt;dbl&gt; &lt;dbl&gt; &lt;dbl&gt;     &lt;dbl&gt; &lt;chr&gt;   &lt;dbl&gt;\r\n## 1 yes       age          1    26  2.37 0.136     \"\"      0.083\r\n## 2 yes       exercise     2    26 17.3  0.0000164 *       0.572\r\n## 3 no        age          1    26  7.26 0.012    
 *       0.218\r\n## 4 no        exercise     2    26  3.99 0.031     *       0.235<\/code><\/pre>\n<div class=\"success\">\n<p>Statistical significance was accepted at the Bonferroni-adjusted alpha level of 0.025, that is 0.05\/2 (the number of tests). The effect of exercise was statistically significant in the treatment=yes group (p &lt; 0.0001), but not in the treatment=no group (p = 0.031).<\/p>\n<\/div>\n<p>Perform multiple pairwise comparisons between <code>exercise<\/code> groups at each level of <code>treatment<\/code>. You don\u2019t need to interpret the results for the \u201cno treatment\u201d group, because the effect of <code>exercise<\/code> was not significant for this group.<\/p>\n<pre class=\"r\"><code>pwc2 &lt;- stress %&gt;% \r\n  group_by(treatment) %&gt;%\r\n  emmeans_test(\r\n    score ~ exercise, covariate = age,\r\n    p.adjust.method = \"bonferroni\"\r\n    ) %&gt;%\r\n  select(-df, -statistic, -p) # Remove details\r\npwc2 %&gt;% filter(treatment == \"yes\")<\/code><\/pre>\n<pre><code>## # A tibble: 3 x 6\r\n##   treatment .y.   group1   group2         p.adj p.adj.signif\r\n##   &lt;fct&gt;     &lt;chr&gt; &lt;chr&gt;    &lt;chr&gt;          &lt;dbl&gt; &lt;chr&gt;       \r\n## 1 yes       score low      moderate 1           ns          \r\n## 2 yes       score low      high     0.00000113  ****        \r\n## 3 yes       score moderate high     0.000000466 ****<\/code><\/pre>\n<div class=\"success\">\n<p>There was a statistically significant difference between the adjusted means of the low and high exercise groups (p &lt; 0.0001) and between the moderate and high groups (p &lt; 0.0001). 
The difference between the adjusted means of the low and moderate exercise groups was not significant.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"report-1\" class=\"section level3\">\n<h3>Report<\/h3>\n<p>A two-way ANCOVA was performed to examine the effects of treatment and exercise on stress reduction, after controlling for age.<\/p>\n<p>There was a statistically significant two-way interaction between treatment and exercise on stress score, whilst controlling for age, F(2, 53) = 4.45, p = 0.016.<\/p>\n<p>Therefore, an analysis of simple main effects for exercise and treatment was performed, with statistical significance receiving a Bonferroni adjustment and being accepted at the p &lt; 0.025 level for exercise and the p &lt; 0.0167 level for treatment.<\/p>\n<p>The simple main effect of treatment was statistically significant in the high-intensity exercise group (p = 0.00046), but not in the low-intensity exercise group (p = 0.52) or the moderate-intensity exercise group (p = 0.53).<\/p>\n<p>The effect of exercise was statistically significant in the <code>treatment=yes<\/code> group (p &lt; 0.0001), but not in the <code>treatment=no<\/code> group (p = 0.031).<\/p>\n<p>All pairwise comparisons were computed for statistically significant simple main effects, with reported p-values Bonferroni-adjusted. For the <code>treatment=yes<\/code> group, there was a statistically significant difference between the adjusted means of the low and high exercise groups (p &lt; 0.0001) and between the moderate and high groups (p &lt; 0.0001).
The difference between the adjusted means of the low and moderate exercise groups was not significant.<\/p>\n<ul>\n<li>Create a line plot:<\/li>\n<\/ul>\n<pre class=\"r\"><code># Line plot\r\nlp &lt;- ggline(\r\n  get_emmeans(pwc), x = \"exercise\", y = \"emmean\", \r\n  color = \"treatment\", palette = \"jco\"\r\n  ) +\r\n  geom_errorbar(\r\n    aes(ymin = conf.low, ymax = conf.high, color = treatment), \r\n    width = 0.1\r\n    )<\/code><\/pre>\n<ul>\n<li>Add p-values:<\/li>\n<\/ul>\n<pre class=\"r\"><code># Comparisons between treatment groups at each exercise level\r\npwc &lt;- pwc %&gt;% add_xy_position(x = \"exercise\", fun = \"mean_se\", step.increase = 0.2)\r\npwc.filtered &lt;- pwc %&gt;% filter(exercise == \"high\")\r\nlp + \r\nstat_pvalue_manual(\r\n  pwc.filtered, hide.ns = TRUE, tip.length = 0,\r\n  bracket.size = 0\r\n  ) +\r\nlabs(\r\n  subtitle = get_test_label(res.aov, detailed = TRUE),\r\n  caption = get_pwc_label(pwc)\r\n)<\/code><\/pre>\n<p><img decoding=\"async\" src=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/dn-tutorials\/r-statistics-2-comparing-groups-means\/figures\/048-ancova-analysis-of-covariance-two-way-ancova-line-plots-with-p-values-1.png\" width=\"576\" \/><\/p>\n<pre class=\"r\"><code># Comparisons between exercise groups at each treatment level\r\npwc2 &lt;- pwc2 %&gt;% add_xy_position(x = \"exercise\", fun = \"mean_se\")\r\npwc2.filtered &lt;- pwc2 %&gt;% filter(treatment == \"yes\")\r\nlp + \r\nstat_pvalue_manual(\r\n  pwc2.filtered, hide.ns = TRUE, tip.length = 0,\r\n  step.group.by = \"treatment\", color = \"treatment\"\r\n  ) +\r\nlabs(\r\n  subtitle = get_test_label(res.aov, detailed = TRUE),\r\n  caption = get_pwc_label(pwc2)\r\n)<\/code><\/pre>\n<p><img decoding=\"async\" src=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/dn-tutorials\/r-statistics-2-comparing-groups-means\/figures\/048-ancova-analysis-of-covariance-two-way-ancova-line-plots-with-p-values-2.png\" width=\"576\"
\/><\/p>\n<\/div>\n<\/div>\n<div id=\"summary\" class=\"section level2\">\n<h2>Summary<\/h2>\n<p>This article describes how to compute and interpret the one-way and two-way ANCOVA in R. We also explain the assumptions made by ANCOVA tests and provide practical examples of R code to check whether the test assumptions are met.<\/p>\n<\/div>\n<\/div>\n<p><!--end rdoc--><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Analysis of Covariance (ANCOVA) is used to compare means of an outcome variable between two or more groups taking into account (or to correct for) variability of other variables, called covariates.  In this chapter, you will learn how to compute and interpret the one-way and the two-way ANCOVA in R.<\/p>\n","protected":false},"author":1,"featured_media":9089,"parent":0,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","class_list":["post-10876","dt_lessons","type-dt_lessons","status-publish","has-post-thumbnail","hentry"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>ANCOVA in R: The Ultimate Practical Guide - Datanovia<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"ANCOVA in R: The Ultimate Practical Guide - Datanovia\" \/>\n<meta property=\"og:description\" content=\"The Analysis of Covariance (ANCOVA) is used to compare means of an outcome variable between two or more groups taking into account (or to correct for) variability of other variables, called covariates.
In this chapter, you will learn how to compute and interpret the one-way and the two-way ANCOVA in R.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/\" \/>\n<meta property=\"og:site_name\" content=\"Datanovia\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26814448_567195243620424_5875663421650887010_n.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"18 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/\",\"url\":\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/\",\"name\":\"ANCOVA in R: The Ultimate Practical Guide - 
Datanovia\",\"isPartOf\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26814448_567195243620424_5875663421650887010_n.jpg\",\"datePublished\":\"2019-11-29T06:51:19+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#primaryimage\",\"url\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26814448_567195243620424_5875663421650887010_n.jpg\",\"contentUrl\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26814448_567195243620424_5875663421650887010_n.jpg\",\"width\":1024,\"height\":512},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.datanovia.com\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Lessons\",\"item\":\"https:\/\/www.datanovia.com\/en\/lessons\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"ANCOVA in R\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.datanovia.com\/en\/#website\",\"url\":\"https:\/\/www.datanovia.com\/en\/\",\"name\":\"Datanovia\",\"description\":\"Data Mining and Statistics for Decision 
Support\",\"publisher\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.datanovia.com\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.datanovia.com\/en\/#organization\",\"name\":\"Datanovia\",\"url\":\"https:\/\/www.datanovia.com\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.datanovia.com\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2018\/09\/datanovia-logo.png\",\"contentUrl\":\"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2018\/09\/datanovia-logo.png\",\"width\":98,\"height\":99,\"caption\":\"Datanovia\"},\"image\":{\"@id\":\"https:\/\/www.datanovia.com\/en\/#\/schema\/logo\/image\/\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"ANCOVA in R: The Ultimate Practical Guide - Datanovia","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/","og_locale":"en_US","og_type":"article","og_title":"ANCOVA in R: The Ultimate Practical Guide - Datanovia","og_description":"The Analysis of Covariance (ANCOVA) is used to compare means of an outcome variable between two or more groups taking into account (or to correct for) variability of other variables, called covariates. 
In this chapter, you will learn how to compute and interpret the one-way and the two-way ANCOVA in R.","og_url":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/","og_site_name":"Datanovia","og_image":[{"width":1024,"height":512,"url":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26814448_567195243620424_5875663421650887010_n.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. reading time":"18 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/","url":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/","name":"ANCOVA in R: The Ultimate Practical Guide - Datanovia","isPartOf":{"@id":"https:\/\/www.datanovia.com\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#primaryimage"},"image":{"@id":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#primaryimage"},"thumbnailUrl":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26814448_567195243620424_5875663421650887010_n.jpg","datePublished":"2019-11-29T06:51:19+00:00","breadcrumb":{"@id":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#primaryimage","url":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26814448_567195243620424_5875663421650887010_n.jpg","contentUrl":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2019\/05\/X26814448_567195243620424_5875663421650887010_n.jpg","width":1024,"height":512},{"@type":"BreadcrumbList","@id":"https:\/\/www.datanovia.com\/en\/lessons\/ancova-in-r\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.datanovia.com\/en\/"},{
"@type":"ListItem","position":2,"name":"Lessons","item":"https:\/\/www.datanovia.com\/en\/lessons\/"},{"@type":"ListItem","position":3,"name":"ANCOVA in R"}]},{"@type":"WebSite","@id":"https:\/\/www.datanovia.com\/en\/#website","url":"https:\/\/www.datanovia.com\/en\/","name":"Datanovia","description":"Data Mining and Statistics for Decision Support","publisher":{"@id":"https:\/\/www.datanovia.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.datanovia.com\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.datanovia.com\/en\/#organization","name":"Datanovia","url":"https:\/\/www.datanovia.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.datanovia.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2018\/09\/datanovia-logo.png","contentUrl":"https:\/\/www.datanovia.com\/en\/wp-content\/uploads\/2018\/09\/datanovia-logo.png","width":98,"height":99,"caption":"Datanovia"},"image":{"@id":"https:\/\/www.datanovia.com\/en\/#\/schema\/logo\/image\/"}}]}},"multi-rating":{"mr_rating_results":[]},"_links":{"self":[{"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/dt_lessons\/10876","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/dt_lessons"}],"about":[{"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/types\/dt_lessons"}],"author":[{"embeddable":true,"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/comments?post=10876"}],"version-history":[{"count":0,"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/dt_lessons\/10876\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http
s:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/media\/9089"}],"wp:attachment":[{"href":"https:\/\/www.datanovia.com\/en\/wp-json\/wp\/v2\/media?parent=10876"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}