# Data Clustering Basics

## Cluster Analysis Example: Quick Start R Code

This chapter describes a cluster analysis example using R software. We provide quick-start R code to compute and visualize K-means and hierarchical clustering.

#### Related Book

Practical Guide to Cluster Analysis in R

The required R packages are:

• cluster for cluster analysis
• factoextra for cluster visualization

library(cluster)
library(factoextra)

## Data preparation

We’ll use the demo data set USArrests. We start by standardizing the data:

mydata <- scale(USArrests) 
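As an optional sanity check (not part of the original workflow), the standardized columns should each have mean 0 and standard deviation 1:

```r
# Standardize the demo data (same step as above)
mydata <- scale(USArrests)

# Each column should now have mean ~0 and standard deviation 1
round(colMeans(mydata), 10)
apply(mydata, 2, sd)
```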

## K-means clustering

K-means is a clustering technique that subdivides the data set into k groups, where k is the number of groups pre-specified by the analyst.

The following R code shows how to determine the optimal number of clusters and how to compute k-means clustering in R.

1. Determining the optimal number of clusters: use factoextra::fviz_nbclust()

fviz_nbclust(mydata, kmeans, method = "gap_stat")

Suggested number of clusters: 3

2. Compute and visualize k-means clustering:
set.seed(123) # for reproducibility
km.res <- kmeans(mydata, 3, nstart = 25)
# Visualize
fviz_cluster(km.res, data = mydata, palette = "jco",
             ggtheme = theme_minimal())

## Hierarchical clustering

Hierarchical clustering is an alternative to partitioning clustering for identifying groups in a data set. It does not require pre-specifying the number of clusters to be generated.

The result of hierarchical clustering is a tree-based representation of the objects, also known as a dendrogram. Observations can be subdivided into groups by cutting the dendrogram at a desired similarity level.
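Cutting the tree into a given number of groups can be done with the base R function cutree(); a minimal sketch, assuming the standardized mydata from above and four groups (the hclust() computation itself is covered just below):

```r
# Build the tree (same steps as in the main example)
mydata <- scale(USArrests)
res.hc <- hclust(dist(mydata), method = "ward.D2")

# Cut the dendrogram into 4 groups and count observations per group
grp <- cutree(res.hc, k = 4)
table(grp)
```

cutree() returns a named vector mapping each observation (here, each state) to its cluster.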

• Computation: the R function hclust(). It takes a dissimilarity matrix as input, computed using the function dist().
• Visualization: fviz_dend() [in factoextra]

R code to compute and visualize hierarchical clustering:

res.hc <- hclust(dist(mydata), method = "ward.D2")
fviz_dend(res.hc, cex = 0.5, k = 4, palette = "jco")

A heatmap is another way to visualize hierarchical clustering. It’s also called a false colored image, where data values are transformed to a color scale. Heatmaps allow us to simultaneously visualize groups of samples and features. You can easily create a pretty heatmap using the R package pheatmap.

In a heatmap, columns are generally samples and rows are variables. Therefore, we start by transposing the data before creating the heatmap.

library(pheatmap)
pheatmap(t(mydata), cutree_cols = 4)

## Summary

This chapter presents examples of R code to compute and visualize k-means and hierarchical clustering.

• Serkan Korkmaz

Hi,

For some reason my code produces two optimal clusters.

data("USArrests")
df = USArrests; df = scale(df)

# k-means clustering; ####

# optimal number of clusters;
fviz_nbclust(
df,
kmeans,
method = "gap_stat"
)

I can't seem to find a difference in the code.

• Same here! I suspect that the dataset got updated after the page was published.

• Gab

I wish I could do the same heatmap but for agglomerative clustering. Is there a way to get this?
