Partitional Clustering in R: The Essentials

Partitioning clustering methods

Partitional clustering (or partitioning clustering) refers to clustering methods used to classify observations, within a data set, into multiple groups based on their similarity. These algorithms require the analyst to specify the number of clusters to be generated.

This course describes the commonly used partitional clustering methods, including:

  • K-means clustering (MacQueen 1967), in which each cluster is represented by the center, or mean, of the data points belonging to the cluster. The k-means method is sensitive to anomalous data points and outliers.
  • K-medoids clustering or PAM (Partitioning Around Medoids; Kaufman and Rousseeuw 1990), in which each cluster is represented by one of the objects in the cluster. PAM is less sensitive to outliers than k-means (see the short sketch after this list).
  • CLARA algorithm (Clustering Large Applications), which is an extension to PAM adapted for large data sets.
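
To make the distinction concrete, here is a minimal sketch (using the USArrests data set that also appears in the quick start below) contrasting k-means centers, which are coordinate-wise means, with PAM medoids, which are actual observations from the data set:

# Contrast k-means centers (means) with PAM medoids (actual observations)
library("cluster")
df <- scale(USArrests)
set.seed(123)
km <- kmeans(df, centers = 2, nstart = 25)
pm <- pam(df, k = 2)
km$centers   # coordinate-wise means; generally not data points
pm$medoids   # rows of the data set itself (named by state)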

For each of these methods, we provide:

  • the basic idea and the key mathematical concepts
  • the clustering algorithm and implementation in R software
  • R lab sections with many examples for cluster analysis and visualization

The following R packages will be used to compute and visualize partitioning clustering:

  • stats package for computing K-means
  • cluster package for computing PAM and CLARA algorithms
  • factoextra for beautiful visualization of clusters
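
If needed, the cluster and factoextra packages can be installed from CRAN (stats ships with base R):

# Install the required packages from CRAN
install.packages(c("cluster", "factoextra"))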

Related Book

Practical Guide to Cluster Analysis in R

Quick start

Data preparation:

# Load data
data("USArrests")
my_data <- USArrests
# Remove any missing values (i.e., NA values for "not available")
my_data <- na.omit(my_data)
# Scale variables
my_data <- scale(my_data)
# View the first 3 rows
head(my_data, n = 3)
##         Murder Assault UrbanPop     Rape
## Alabama 1.2426   0.783   -0.521 -0.00342
## Alaska  0.5079   1.107   -1.212  2.48420
## Arizona 0.0716   1.479    0.999  1.04288
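
Scaling puts the variables on a comparable scale (mean 0, standard deviation 1), so that variables measured on larger scales, such as Assault, do not dominate the distance computations. This can be checked quickly:

# Each variable should now have mean ~0 and standard deviation 1
round(colMeans(my_data), 2)
apply(my_data, 2, sd)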

Determine the optimal number of clusters for k-means clustering:

library("factoextra")
fviz_nbclust(my_data, kmeans,
             method = "gap_stat")
## Clustering k = 1,2,..., K.max (= 10): .. done
## Bootstrapping, b = 1,2,..., B (= 100)  [one "." per sample]:
## .................................................. 50 
## .................................................. 100
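
The resulting plot marks the suggested number of clusters with a dashed vertical line. The gap statistic is only one of the criteria supported by fviz_nbclust(); to cross-check the suggestion, the average silhouette and elbow (within-cluster sum of squares) methods can be run in the same way:

# Cross-check with the silhouette and elbow (wss) methods
fviz_nbclust(my_data, kmeans, method = "silhouette")
fviz_nbclust(my_data, kmeans, method = "wss")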

Compute and visualize k-means clustering:

set.seed(123)
km.res <- kmeans(my_data, 3, nstart = 25)

# Visualize
library("factoextra")
fviz_cluster(km.res, data = my_data, 
             ellipse.type = "convex",
             palette = "jco",
             repel = TRUE,
             ggtheme = theme_minimal())
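
Beyond the plot, the kmeans object can be inspected directly, for example to get the cluster sizes and to summarize the original (unscaled) variables by cluster:

# Cluster sizes and cluster assignments
km.res$size
head(km.res$cluster)
# Mean of each (unscaled) variable by cluster
aggregate(USArrests, by = list(cluster = km.res$cluster), mean)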

Similarly, you can compute and visualize PAM clustering as follows:

# Compute PAM
library("cluster")
pam.res <- pam(my_data, 4)
# Visualize
fviz_cluster(pam.res)
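
CLARA, mentioned above for large data sets, follows the same pattern. As an illustrative sketch (USArrests is far too small to need CLARA), the sampling options samples and pamLike can be set as follows:

# Compute CLARA (illustrative only; CLARA is intended for large data sets)
clara.res <- clara(my_data, 4, samples = 50, pamLike = TRUE)
# Visualize
fviz_cluster(clara.res)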

References

Kaufman, Leonard, and Peter J. Rousseeuw. 1990. Finding Groups in Data: An Introduction to Cluster Analysis. New York: Wiley.

MacQueen, J. 1967. “Some Methods for Classification and Analysis of Multivariate Observations.” In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics, 281–97. Berkeley, Calif.: University of California Press. http://projecteuclid.org:443/euclid.bsmsp/1200512992.

Lessons

  1. K-means clustering is one of the most commonly used unsupervised machine learning algorithms for partitioning a given data set into a set of k groups. In this tutorial, you will learn: 1) the basic steps of the k-means algorithm; 2) how to compute k-means in R using practical examples; and 3) the advantages and disadvantages of k-means clustering.
  2. CLARA is a clustering technique that extends the k-medoids (PAM) methods to handle data containing a large number of objects, reducing computing time and RAM usage. In this article, you will learn: 1) the basic steps of the CLARA algorithm; and 2) how to compute CLARA in R using practical examples.
