Statistical Tests and Assumptions

Course description

In this chapter, we’ll introduce some common research questions, the corresponding statistical tests, and the assumptions of those tests.

Related Book

Practical Statistics in R II - Comparing Groups: Numerical Variables

Research questions and statistics

The most common research questions include:

  1. whether two variables (n = 2) are correlated (i.e., associated)
  2. whether multiple variables (n > 2) are correlated
  3. whether two groups (n = 2) of samples differ from each other
  4. whether multiple groups (n > 2) of samples differ from each other
  5. whether the variability of two or more samples differ

Each of these questions can be answered using the following statistical tests (a minimal R sketch follows the list):

  1. Correlation test between two variables
  2. Correlation matrix between multiple variables
  3. Comparing the means of two groups:
    • Student’s t-test (parametric)
    • Wilcoxon rank-sum test (non-parametric)
  4. Comparing the means of more than two groups:
    • ANOVA test (analysis of variance, parametric): extension of the t-test to compare more than two groups
    • Kruskal-Wallis rank sum test (non-parametric): extension of the Wilcoxon rank-sum test to compare more than two groups
  5. Comparing the variances:
    • Comparing the variances of two groups: F-test (parametric)
    • Comparing the variances of more than two groups: Bartlett’s test (parametric), Levene’s test (parametric), and Fligner-Killeen test (non-parametric)
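As a quick reference, the sketch below maps each of these tests to a base R function. The data frame `my_data`, with numeric variables `x` and `y` and a three-level grouping factor `group`, is a hypothetical example, not data from the course.

```r
# Hypothetical example data: two numeric variables and a 3-level grouping factor
set.seed(123)
my_data <- data.frame(
  x     = rnorm(90),
  y     = rnorm(90),
  group = gl(3, 30, labels = c("A", "B", "C"))
)

# 1. Correlation test between two variables
cor.test(my_data$x, my_data$y, method = "pearson")

# 2. Correlation matrix between multiple (here, two) variables
cor(my_data[, c("x", "y")])

# 3. Comparing the means of two groups
two_groups <- droplevels(subset(my_data, group %in% c("A", "B")))
t.test(x ~ group, data = two_groups, var.equal = TRUE)  # Student's t-test (parametric)
wilcox.test(x ~ group, data = two_groups)               # Wilcoxon rank-sum test (non-parametric)

# 4. Comparing the means of more than two groups
summary(aov(x ~ group, data = my_data))    # ANOVA (parametric)
kruskal.test(x ~ group, data = my_data)    # Kruskal-Wallis (non-parametric)

# 5. Comparing variances
var.test(x ~ group, data = two_groups)     # F-test, two groups (parametric)
bartlett.test(x ~ group, data = my_data)   # Bartlett's test (parametric)
fligner.test(x ~ group, data = my_data)    # Fligner-Killeen test (non-parametric)
```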

Assumptions of statistical tests

Many statistical methods, including correlation, regression, the t-test, and analysis of variance, make assumptions about the data. Generally, they assume that:

  • the data are normally distributed
  • the variances of the groups to be compared are homogeneous (equal)

These assumptions should be taken seriously in order to draw reliable interpretations and conclusions from the research.

These tests - correlation, t-test and ANOVA - are called parametric tests because their validity depends on the distribution of the data.

Before using a parametric test, some preliminary tests should be performed to make sure that the test assumptions are met. In situations where the assumptions are violated, non-parametric tests are recommended.

Assessing normality

  1. With large enough sample sizes (n > 30), violation of the normality assumption should not cause major problems (central limit theorem). This implies that we can ignore the distribution of the data and use parametric tests.
  2. However, to be consistent, we can use the Shapiro-Wilk significance test, which compares the sample distribution to a normal one, to ascertain whether or not the data show a serious deviation from normality (Ghasemi and Zahediasl 2012). A minimal sketch is shown below.
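The sketch assumes a hypothetical numeric vector `x`; it is not data from the course.

```r
# Hypothetical sample of 50 observations
set.seed(123)
x <- rnorm(50, mean = 10, sd = 2)

# Shapiro-Wilk normality test:
# a p-value > 0.05 suggests no significant deviation from normality
shapiro.test(x)
```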

Assessing equality of variances

The standard Student’s t-test (comparing two independent samples) and the ANOVA test (comparing multiple samples) also assume that the samples being compared have equal variances.

If the samples being compared follow a normal distribution, then it’s possible to use:

  • F-test to compare the variances of two samples
  • Bartlett’s test or Levene’s test to compare the variances of multiple samples (see the sketch below).
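A minimal sketch, assuming a hypothetical data frame `df` with a numeric column `value` and a grouping factor `group`; Levene’s test comes from the `car` package, which is not part of base R.

```r
# Hypothetical data: a numeric outcome and a 3-level grouping factor
set.seed(123)
df <- data.frame(
  value = rnorm(60),
  group = gl(3, 20, labels = c("A", "B", "C"))
)

# F-test: compare the variances of exactly two samples
two_groups <- droplevels(subset(df, group %in% c("A", "B")))
var.test(value ~ group, data = two_groups)

# Bartlett's test: compare the variances of multiple samples
bartlett.test(value ~ group, data = df)

# Levene's test (car package): install.packages("car") if needed
car::leveneTest(value ~ group, data = df)
```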

Summary

This chapter introduces the most commonly used statistical tests and their assumptions.

References

Ghasemi, Asghar, and Saleh Zahediasl. 2012. “Normality Tests for Statistical Analysis: A Guide for Non-Statisticians.” Int J Endocrinol Metab 10 (2): 486–89. doi:10.5812/ijem.3505.




Lessons

  1. Many statistical methods, including correlation, regression, t-tests, and analysis of variance, assume that the data follow a normal (Gaussian) distribution. In this chapter, you will learn how to check the normality of the data in R by visual inspection (QQ plots and density distributions) and by significance tests (Shapiro-Wilk test); a minimal sketch follows this list.
  2. Some statistical tests, such as the two independent samples t-test and the ANOVA test, assume that variances are equal across groups. This chapter describes methods for checking the homogeneity of variances in R across two or more groups. These tests include: F-test, Bartlett’s test, Levene’s test and Fligner-Killeen’s test.
  3. Repeated measures ANOVA makes the assumption that the variances of the differences between all combinations of related conditions (or group levels) are equal. This is known as the assumption of sphericity. Mauchly’s test of sphericity is used to assess whether or not this assumption is met. In this article, you will learn how to: 1) calculate sphericity; 2) compute Mauchly’s test of sphericity in R; 3) interpret repeated measures ANOVA results when the assumption of sphericity is met or violated; and 4) extract the ANOVA table automatically corrected for deviation from sphericity.
  4. Parametric methods, such as the t-test and ANOVA tests, assume that the dependent (outcome) variable is approximately normally distributed for every group to be compared. This chapter describes how to transform data to a normal distribution in R.
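Below is a minimal sketch of the visual normality checks mentioned in the first lesson, assuming a hypothetical numeric vector `x`; the ggpubr calls are optional alternatives to the base R plots.

```r
# Hypothetical sample for visual normality inspection
set.seed(123)
x <- rnorm(100)

# Base R: QQ plot with reference line, and a density plot
qqnorm(x)
qqline(x)
plot(density(x))

# ggpubr equivalents (install.packages("ggpubr") if needed)
ggpubr::ggqqplot(x)
ggpubr::ggdensity(x)
```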
