Data Mining:
Concepts and Techniques
Cluster Analysis
Li Xiong
Slide credits: Jiawei Han and Micheline Kamber
Tan, Steinbach, Kumar
Chapter 7. Cluster Analysis
Overview
Partitioning methods
Hierarchical methods
Density-based methods
Other Methods
Outlier analysis
Summary
What is Cluster Analysis?
Finding groups of objects (clusters)
Objects similar to one another in the same group
Objects different from the objects in other groups
Unsupervised learning
[Figure: intra-cluster distances are minimized; inter-cluster distances are maximized]
Clustering Applications
Marketing research
Social network analysis
Clustering Applications
WWW: Documents and search results clustering
Clustering Applications
Earthquake studies
Clustering Applications
Biology: plants and animals
Bioinformatics: microarray data, genes and sequences
Example gene-expression matrix:

Gene      Time X    Time Y    Time Z
Gene 1    10        8         10
Gene 2    10        0         9
Gene 3    4         8.6       3
Gene 4    7         8         3
Gene 5    1         2         3
Requirements of Clustering
Scalability
Ability to deal with different types of attributes
Ability to handle dynamic data
Ability to deal with noise and outliers
Ability to deal with high dimensionality
Minimal requirements for domain knowledge to
determine input parameters
Incorporation of user-specified constraints
Interpretability and usability
Quality: What Is Good Clustering?
Agreement with “ground truth”
A good clustering will produce high quality clusters with
Homogeneity - high intra-class similarity
Separation - low inter-class similarity
Bad Clustering vs. Good Clustering
Similarity or Dissimilarity between Data Objects
Data matrix of n objects with p attributes:
$$X = \begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$
Euclidean distance
$$d(i,j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$$
Manhattan distance
$$d(i,j) = |x_{i1}-x_{j1}| + |x_{i2}-x_{j2}| + \cdots + |x_{ip}-x_{jp}|$$
Minkowski distance
$$d(i,j) = \left(|x_{i1}-x_{j1}|^q + |x_{i2}-x_{j2}|^q + \cdots + |x_{ip}-x_{jp}|^q\right)^{1/q}$$
Weighted versions can also be used
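As an aside (not part of the original slides), a minimal NumPy sketch of these distance functions, including a weighted variant:

```python
import numpy as np

def minkowski(x, y, q=2):
    """Minkowski distance; q=1 gives Manhattan, q=2 gives Euclidean."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(np.abs(x - y) ** q) ** (1.0 / q))

def weighted_minkowski(x, y, w, q=2):
    """Weighted variant: attribute f contributes with weight w_f."""
    x, y, w = (np.asarray(a, dtype=float) for a in (x, y, w))
    return float(np.sum(w * np.abs(x - y) ** q) ** (1.0 / q))

xi, xj = [1.0, 3.0, 5.0], [2.0, 1.0, 5.0]
print(minkowski(xi, xj, q=1))  # Manhattan: 3.0
print(minkowski(xi, xj, q=2))  # Euclidean: ~2.236
```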
Other Similarity or Dissimilarity Metrics
For the same data matrix of n objects with p attributes:
Pearson correlation
$$r(X_i, X_j) = \frac{\sum_f (x_{if} - \bar{X}_i)(x_{jf} - \bar{X}_j)}{(p-1)\,\sigma_{X_i}\,\sigma_{X_j}}$$
Cosine measure
$$\cos(X_i, X_j) = \frac{X_i \cdot X_j}{\|X_i\|\,\|X_j\|}$$
KL divergence, Bregman divergence, …
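A small illustrative sketch (added here, not in the original deck) computing the cosine measure and Pearson correlation for two of the gene-expression rows shown earlier:

```python
import numpy as np

def cosine_sim(x, y):
    """Cosine measure: x.y / (||x|| ||y||)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def pearson_corr(x, y):
    """Pearson correlation: cosine of the mean-centered vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))

gene1, gene2 = [10, 8, 10], [10, 0, 9]
print(cosine_sim(gene1, gene2), pearson_corr(gene1, gene2))
```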
Different Attribute Types
To compute $|x_{if} - x_{jf}|$:
f is continuous
  Normalization if necessary
  Logarithmic transformation for ratio-scaled values: if $x_{if} = Ae^{Bt}$, use $y_{if} = \log(x_{if})$
f is ordinal
  Mapping by rank: $z_{if} = \dfrac{r_{if} - 1}{M_f - 1}$
f is categorical
  Mapping function: $|x_{if} - x_{jf}| = 0$ if $x_{if} = x_{jf}$, or 1 otherwise
  Hamming distance (edit distance) for strings
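A tiny added sketch of the per-attribute transformations listed above; the example values are arbitrary:

```python
import math

def ratio_scaled(x):
    """Ratio-scaled attribute (x = A*e^(Bt)): use y = log(x)."""
    return math.log(x)

def ordinal_to_unit(rank, M):
    """Ordinal attribute with M states: z = (r - 1) / (M - 1), in [0, 1]."""
    return (rank - 1) / (M - 1)

def categorical_diff(a, b):
    """Categorical attribute: 0 if equal, 1 otherwise."""
    return 0 if a == b else 1

print(ratio_scaled(1000.0), ordinal_to_unit(2, 5), categorical_diff("red", "blue"))
```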
Clustering Approaches
Partitioning approach:
Construct various partitions and then evaluate them by some criterion,
e.g., minimizing the sum of square errors
Typical methods: k-means, k-medoids, CLARANS
Hierarchical approach:
Create a hierarchical decomposition of the set of data (or objects) using
some criterion
Typical methods: Diana, Agnes, BIRCH, ROCK, CAMELEON
Density-based approach:
Based on connectivity and density functions
Typical methods: DBSCAN, OPTICS, DenClue
Others
Chapter 7. Cluster Analysis
Overview
Partitioning methods
Hierarchical methods
Density-based methods
Other Methods
Outlier analysis
Summary
Partitioning Algorithms: Basic Concept
Partitioning method: Construct a partition of a database D of n objects
into a set of k clusters, s.t., the sum of squared distance is minimized
$$E = \sum_{i=1}^{k} \sum_{p \in C_i} (p - m_i)^2$$
Given a k, find a partition of k clusters that optimizes the chosen
partitioning criterion
Global optimal: exhaustively enumerate all partitions
Heuristic methods: k-means and k-medoids algorithms
k-means (MacQueen’67): Each cluster is represented by the center
of the cluster
k-medoids or PAM (Partition around medoids) (Kaufman &
Rousseeuw’87): Each cluster is represented by one of the objects
in the cluster
K-Means Clustering: Lloyd Algorithm
1. Given k, randomly choose k initial cluster centers
2. Partition objects into k nonempty subsets by assigning each object to the cluster with the nearest centroid
3. Update the centroids, i.e., the mean point of each cluster
4. Go back to Step 2; stop when there are no new assignments
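An illustrative NumPy sketch of these Lloyd iterations (added here; a plain implementation, not the book's reference code):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Plain Lloyd iterations: assign each object to the nearest centroid, then update means."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()  # step 1
    labels = None
    for _ in range(max_iter):
        # step 2: assign objects to the cluster with the nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # step 4: stop when there are no new assignments
        labels = new_labels
        # step 3: update each centroid to the mean point of its cluster
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centers = kmeans(X, k=2)
```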
The K-Means Clustering Method
Example
[Figure: k-means example with K = 2 — arbitrarily choose K objects as initial cluster centers; assign each object to the most similar center; update the cluster means; reassign objects; update the cluster means again, repeating until assignments no longer change]
K-means Clustering – Details
Initial centroids are often chosen randomly.
The centroid is (typically) the mean of the points in the
cluster.
‘Closeness’ is measured by Euclidean distance, cosine
similarity, correlation, etc.
Most of the convergence happens in the first few
iterations.
Often the stopping condition is changed to ‘Until relatively few
points change clusters’
Complexity is O(tkn)
n is # objects, k is # clusters, and t is # iterations.
Comments on the K-Means Method
Strength
Simple and works well for “regular” disjoint clusters
Relatively efficient and scalable (normally, k, t << n)
Weakness
Need to specify k, the number of clusters, in advance
Depending on initial centroids, may terminate at a local optimum; a potential remedy is rerunning with different initial centroids
Unable to handle noisy data and outliers
Not suitable for clusters of
Different sizes
Non-convex shapes
Importance of Choosing Initial Centroids – Case 1
[Figure: Iterations 1-6 of k-means for one choice of initial centroids]
Importance of Choosing Initial Centroids – Case 2
[Figure: Iterations 1-5 of k-means for a different choice of initial centroids]
Limitations of K-means: Differing Sizes
Original Points
K-means (3 Clusters)
Limitations of K-means: Non-convex Shapes
Original Points
K-means (2 Clusters)
Overcoming K-means Limitations
Original Points
K-means Clusters
Overcoming K-means Limitations
Original Points
K-means Clusters
Variations of the K-Means Method
A few variants of the k-means which differ in
Selection of the initial k means
Dissimilarity calculations
Strategies to calculate cluster means
Handling categorical data: k-modes (Huang’98)
Replacing means of clusters with modes
Using new dissimilarity measures to deal with categorical objects
Using a frequency-based method to update modes of clusters
A mixture of categorical and numerical data: k-prototype method
What Is the Problem of the K-Means Method?
The k-means algorithm is sensitive to outliers !
An object with an extremely large value may substantially distort the distribution of the data.
K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, i.e., the most centrally located object in the cluster.
The K-Medoids Clustering Method
PAM (Kaufman and Rousseeuw, 1987)
Arbitrarily select k objects as medoid
Assign each data object in the given data set to most similar
medoid.
Randomly select nonmedoid object O’
Compute total cost, S, of swapping a medoid object to O’ (cost as
total sum of absolute error)
If S<0, then swap initial medoid with the new one
Repeat until there is no change in the medoid.
k-medoids and (n-k) instances pair-wise comparison
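A rough Python sketch of the PAM idea on a precomputed distance matrix (added illustration): for simplicity it evaluates all medoid/non-medoid swaps per iteration rather than a single randomly selected O', and the helper names are made up:

```python
import numpy as np

def pam(D, k, max_iter=100, seed=0):
    """Greedy PAM sketch: swap a medoid with a non-medoid whenever the total
    sum of distances to the nearest medoid (absolute error) decreases."""
    rng = np.random.default_rng(seed)
    n = len(D)
    medoids = list(rng.choice(n, size=k, replace=False))

    def total_cost(meds):
        return D[:, meds].min(axis=1).sum()

    for _ in range(max_iter):
        best_cost, best_swap = total_cost(medoids), None
        for i in range(k):                      # each current medoid
            for o in range(n):                  # each non-medoid candidate
                if o in medoids:
                    continue
                cand = medoids[:i] + [o] + medoids[i + 1:]
                c = total_cost(cand)
                if c < best_cost:
                    best_cost, best_swap = c, cand
        if best_swap is None:
            break                               # no swap improves the cost
        medoids = best_swap
    labels = D[:, medoids].argmin(axis=1)       # index of the nearest medoid
    return medoids, labels
```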
A Typical K-Medoids Algorithm (PAM)
[Figure: PAM example with K = 2 — arbitrarily choose k objects as initial medoids (Total Cost = 20); assign each remaining object to the nearest medoid; randomly select a nonmedoid object O_random; compute the total cost of swapping (Total Cost = 26); swap O and O_random if the quality is improved; repeat the loop until no change]
What Is the Problem with PAM?
Pam is more robust than k-means in the presence of
noise and outliers
Pam works efficiently for small data sets but does not
scale well for large data sets.
Complexity: $O(k(n-k)^2 t)$
n is # of data points, k is # of clusters, t is # of iterations
Sampling based method,
CLARA(Clustering LARge Applications)
CLARA (Clustering Large Applications) (1990)
CLARA (Kaufmann and Rousseeuw in 1990)
It draws multiple samples of the data set, applies PAM on
each sample, and gives the best clustering as the output
Strength: deals with larger data sets than PAM
Weakness:
Efficiency depends on the sample size
A good clustering based on samples will not
necessarily represent a good clustering of the whole
data set if the sample is biased
CLARANS (“Randomized” CLARA) (1994)
CLARANS (A Clustering Algorithm based on Randomized
Search) (Ng and Han’94)
The clustering process can be presented as searching a
graph where every node is a potential solution, that is, a
set of k medoids
PAM examines neighbors for local minimum
CLARA works on subgraphs of samples
CLARANS examines neighbors dynamically
If local optimum is found, starts with new randomly selected
node in search for a new local optimum
Chapter 7. Cluster Analysis
Overview
Partitioning methods
Hierarchical methods and graph-based methods
Density-based methods
Other Methods
Outlier analysis
Summary
Hierarchical Clustering
Produces a set of nested clusters organized as a
hierarchical tree
Can be visualized as a dendrogram
A tree like diagram representing a hierarchy of nested
clusters
Clustering obtained by cutting at desired level
[Figure: six points with their nested clusters and the corresponding dendrogram]
Strengths of Hierarchical Clustering
Do not have to assume any particular number of
clusters
May correspond to meaningful taxonomies
Hierarchical Clustering
Two main types of hierarchical clustering
Agglomerative:
Start with the points as individual clusters
At each step, merge the closest pair of clusters until only one
cluster (or k clusters) left
Divisive:
Start with one, all-inclusive cluster
At each step, split a cluster until each cluster contains a point
(or there are k clusters)
Agglomerative Clustering Algorithm
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains
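Assuming SciPy is available, the whole agglomerative procedure above can be sketched in a few lines (added example, not from the slides):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

X = np.random.rand(20, 2)
prox = pdist(X)                        # step 1: proximity (distance) matrix
Z = linkage(prox, method='single')     # steps 2-6: repeatedly merge the two closest clusters
labels = fcluster(Z, t=3, criterion='maxclust')  # cut the dendrogram into 3 clusters
# scipy.cluster.hierarchy.dendrogram(Z) can visualize the merge tree with matplotlib
```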
Starting Situation
Start with clusters of individual points and a proximity matrix
[Figure: individual points p1…p12 and their proximity matrix]
Intermediate Situation
[Figure: intermediate situation — clusters C1…C5 and the corresponding proximity matrix]
How to Define Inter-Cluster Similarity
[Figure: proximity matrix over p1…p5 — how should the similarity between two clusters be defined?]
Distance Between Clusters
Single link: smallest distance between points
Complete link: largest distance between points
Average link: average distance between points
Centroid: distance between centroids
Hierarchical Clustering: MIN
[Figure: single-link (MIN) clustering of six points — the nested clusters and the corresponding dendrogram]
MST (Minimum Spanning Tree)
Start with a tree that consists of any point
In successive steps, look for the closest pair of points (p,
q) such that one point (p) is in the current tree but the
other (q) is not
Add q to the tree and put an edge between p and q
Min vs. Max vs. Group Average
[Figure: the same six points clustered with MIN, MAX, and Group Average, with the resulting nested clusters]
Strength of MIN
Original Points
• Can handle non-elliptical shapes
Two Clusters
Limitations of MIN
Original Points
• Sensitive to noise and outliers
Two Clusters
Strength of MAX
Original Points
• Less susceptible to noise and outliers
Two Clusters
Limitations of MAX
Original Points
•Tends to break large clusters
•Biased towards globular clusters
Two Clusters
Hierarchical Clustering: Group Average
Compromise between Single and Complete
Link
Strengths
Less susceptible to noise and outliers
Limitations
Biased towards globular clusters
Hierarchical Clustering: Major Weaknesses
Do not scale well (N: number of points)
Space complexity: O(N2)
Time complexity: O(N³) in general, O(N² log N) for some cases/approaches
Cannot undo what was done previously
Quality varies in terms of distance measures
MIN (single link): susceptible to noise/outliers
MAX/GROUP AVERAGE: may not work well with nonglobular clusters
Recent Hierarchical Clustering Methods
BIRCH (1996): uses CF-tree and incrementally adjusts the
quality of sub-clusters
CURE(1998): uses representative points for inter-cluster
distance
ROCK (1999): clustering categorical data by neighbor and
link analysis
CHAMELEON (1999): hierarchical clustering using dynamic
modeling
Birch
Birch: Balanced Iterative Reducing and Clustering using
Hierarchies (Zhang, Ramakrishnan & Livny, SIGMOD’96)
Main ideas:
Use in-memory clustering feature to summarize
data/cluster
Use hierarchical clustering for microclustering and
other clustering methods (e.g. partitioning) for
macroclustering
Minimize database scans and I/O cost
Fix the problems of hierarchical clustering
Features:
Scales linearly: single scan and improves the quality
with a few additional scans
Handles only numeric data and is sensitive to the order of the data records
Cluster Statistics
Given a cluster of instances
Centroid:
Radius: average distance from member points to
centroid
Diameter: average pair-wise distance within a cluster
Intra-Cluster Distance
Given two clusters
Centroid Euclidean distance:
Centroid Manhattan distance:
Average distance:
Clustering Feature (CF)
CF = (N, LS, SS) = (5, (16,30), (54,190))
Data points: (3,4), (2,6), (4,5), (4,7), (3,8)
[Figure: the five points plotted on a 10 × 10 grid]
Properties of Clustering Feature
CF entry is more compact
Stores significantly less than all of the data points in the sub-cluster
A CF entry has sufficient information to
calculate statistics about the cluster and
intra-cluster distances
Additivity theorem allows us to merge sub-clusters incrementally and consistently
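An added sketch of a CF entry, its additivity, and the statistics it supports (the radius here is the root-mean-square distance to the centroid):

```python
import numpy as np

def clustering_feature(points):
    """CF = (N, LS, SS): count, linear sum, and sum of squares of the points."""
    P = np.asarray(points, dtype=float)
    return len(P), P.sum(axis=0), (P ** 2).sum(axis=0)

def merge_cf(cf1, cf2):
    """Additivity: CFs of two disjoint sub-clusters add component-wise."""
    return cf1[0] + cf2[0], cf1[1] + cf2[1], cf1[2] + cf2[2]

def centroid_and_radius(cf):
    n, ls, ss = cf
    centroid = ls / n
    radius = np.sqrt(max(ss.sum() / n - (centroid ** 2).sum(), 0.0))
    return centroid, radius

cf = clustering_feature([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)])
print(cf)  # (5, array([16., 30.]), array([ 54., 190.]))
```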
Hierarchical CF-Tree
A CF tree is a height-balanced tree that stores the
clustering features for a hierarchical clustering
A nonleaf node in a tree has descendants or “children”
The nonleaf nodes store sums of the CFs of their
children
A CF tree has two parameters
Branching factor: specifies the maximum number of children
Threshold: maximum diameter of the sub-clusters stored at the leaf nodes
The CF Tree Structure
[Figure: CF-tree structure — a root node with entries CF1, CF2, CF3, …, CF6 pointing to children; non-leaf nodes with entries CF1…CF5 pointing to their children; leaf nodes holding CF entries and linked by prev/next pointers]
CF-Tree Insertion
Traverse down from root, find the appropriate
leaf
Follow the "closest"-CF path, w.r.t. intra-cluster distance measures
Modify the leaf
If the closest-CF leaf cannot absorb the point, make a new CF entry.
If there is no room for a new leaf entry, split the parent node
Traverse back & up
Updating CFs on the path or splitting nodes
BIRCH Overview
The Algorithm: BIRCH
Phase 1: Scan database to build an initial in-memory CF-tree
Subsequent phases become fast, accurate, less order
sensitive
Phase 2: Condense data (optional)
Rebuild the CF-tree with a larger T
Phase 3: Global clustering
Use existing clustering algorithm on CF entries
Helps fix problem where natural clusters span nodes
Phase 4: Cluster refining (optional)
Do additional passes over the dataset & reassign data
points to the closest centroid from phase 3
CURE
CURE: An Efficient Clustering Algorithm for Large
Databases (1998), Sudipto Guha, Rajeev Rastogi, Kyuseok Shim
Main ideas:
Use representative points for inter-cluster distance
Random sampling and partitioning
Features:
Handles non-spherical shapes and arbitrary sizes
better
CURE: Cluster Points
Uses a number of points to represent a cluster
Representative points are found by selecting a constant
number of points from a cluster and then “shrinking”
them toward the center of the cluster
How to shrink?
Cluster similarity is the similarity of the closest pair of
representative points from different clusters
Experimental Results: CURE
Picture from CURE, Guha, Rastogi, Shim.
Experimental Results: CURE
(centroid)
(single link)
Picture from CURE, Guha, Rastogi, Shim.
CURE Cannot Handle Differing Densities
Original Points
CURE
Clustering Categorical Data: The ROCK Algorithm
ROCK: RObust Clustering using linKs
S. Guha, R. Rastogi & K. Shim, ICDE’99
Major ideas
Use links to measure similarity/proximity
Sampling-based clustering
Features:
More meaningful clusters
Emphasizes interconnectivity but ignores proximity
Similarity Measure in ROCK
Market basket data clustering
Jaccard coefficient-based similarity function:
$$Sim(T_1, T_2) = \frac{|T_1 \cap T_2|}{|T_1 \cup T_2|}$$
Example: two groups (clusters) of transactions
C1 over items <a, b, c, d, e>:
{a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e}, {a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}
C2 over items <a, b, f, g>:
{a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
Let T1 = {a, b, c}, T2 = {c, d, e}, T3 = {a, b, f}
The Jaccard coefficient may lead to a wrong clustering result:
$$Sim(T_1, T_2) = \frac{|\{c\}|}{|\{a, b, c, d, e\}|} = \frac{1}{5} = 0.2$$
$$Sim(T_1, T_3) = \frac{|\{a, b\}|}{|\{a, b, c, f\}|} = \frac{2}{4} = 0.5$$
Link Measure in ROCK
Neighbors: transactions whose similarity exceeds a given threshold, e.g. $Sim(T_i, T_j) \ge 0.5$
Links: # of common neighbors
Example:
link(T1, T2) = 4, since they have 4 common neighbors: {a, c, d}, {a, c, e}, {b, c, d}, {b, c, e}
link(T1, T3) = 3, since they have 3 common neighbors: {a, b, d}, {a, b, e}, {a, b, g}
Rock Algorithm
1. Obtain a sample of points from the data set
2. Compute the link value for each set of points, from the original similarities (computed by the Jaccard coefficient)
3. Perform an agglomerative hierarchical clustering on the data using the "number of shared neighbors" as the similarity measure
4. Assign the remaining points to the clusters that have been found
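An added sketch of the neighbor/link computation on the transaction example above; the neighbor threshold of 0.5 is an assumption for illustration:

```python
from itertools import combinations

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def rock_links(transactions, theta=0.5):
    """Neighbors: pairs with Jaccard similarity >= theta.
    link(Ti, Tj) = number of common neighbors."""
    n = len(transactions)
    neighbors = [set() for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if jaccard(transactions[i], transactions[j]) >= theta:
            neighbors[i].add(j)
            neighbors[j].add(i)
    return {(i, j): len(neighbors[i] & neighbors[j])
            for i, j in combinations(range(n), 2)}

data = [{'a','b','c'}, {'a','b','d'}, {'a','b','e'}, {'a','c','d'}, {'a','c','e'},
        {'a','d','e'}, {'b','c','d'}, {'b','c','e'}, {'b','d','e'}, {'c','d','e'},
        {'a','b','f'}, {'a','b','g'}, {'a','f','g'}, {'b','f','g'}]
links = rock_links(data, theta=0.5)
print(links[(0, 9)], links[(0, 10)])  # link(T1,T2)=4, link(T1,T3)=3
```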
CHAMELEON: Hierarchical Clustering Using
Dynamic Modeling (1999)
CHAMELEON: by G. Karypis, E.H. Han, and V. Kumar’99
Basic ideas:
A graph-based clustering approach
A two-phase algorithm:
Partitioning: cluster objects into a large number of relatively small sub-clusters
Agglomerative hierarchical clustering: repeatedly combine these sub-clusters
Measures the similarity based on a dynamic model: interconnectivity and closeness (proximity)
Features:
Handles clusters of arbitrary shapes, sizes, and density
Scales well
Graph-Based Clustering
Uses the proximity graph
Start with the proximity matrix
Consider each point as a node in a graph
Each edge between two nodes has a weight
which is the proximity between the two points
Fully connected proximity graph
MIN (single-link) and MAX (complete-link)
Sparsification
Clusters are connected components in the
graph
CHAMELEON
Overall Framework of CHAMELEON
[Figure: Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters]
Chameleon: Steps
Preprocessing Step:
Represent the Data by a Graph
Given a set of points, construct the k-nearest-neighbor
(k-NN) graph to capture the relationship between a
point and its k nearest neighbors
Concept of neighborhood is captured dynamically
(even if region is sparse)
Phase 1: Use a multilevel graph partitioning algorithm on the graph to find a large number of clusters of well-connected vertices
Each cluster should contain mostly points from one
“true” cluster, i.e., is a sub-cluster of a “real” cluster
Chameleon: Steps …
Phase 2: Use Hierarchical Agglomerative Clustering to
merge sub-clusters
Two clusters are combined if the resulting cluster
shares certain properties with the constituent clusters
Two key properties used to model cluster similarity:
Relative Interconnectivity: Absolute interconnectivity of two
clusters normalized by the internal connectivity of the clusters
Relative Closeness: Absolute closeness of two clusters
normalized by the internal closeness of the clusters
Cluster Merging: Limitations of Current Schemes
Existing schemes are static in nature
MIN or CURE:
merge two clusters based on their closeness (or
minimum distance)
GROUP-AVERAGE or ROCK:
merge two clusters based on their average
connectivity
Limitations of Current Merging Schemes
[Figure: four example point sets (a)-(d) — closeness schemes will merge (a) and (b); average-connectivity schemes will merge (c) and (d)]
Chameleon: Clustering Using Dynamic Modeling
Adapt to the characteristics of the data set to find the
natural clusters
Use a dynamic model to measure the similarity between
clusters
Main property is the relative closeness and relative interconnectivity of the cluster
Two clusters are combined if the resulting cluster shares certain
properties with the constituent clusters
The merging scheme preserves self-similarity
CHAMELEON (Clustering Complex Objects)
Chapter 7. Cluster Analysis
Overview
Partitioning methods
Hierarchical methods
Density-based methods
Other methods
Cluster evaluation
Outlier analysis
Summary
Density-Based Clustering Methods
Clustering based on density
Major features:
Clusters of arbitrary shape
Handle noise
Need density parameters as termination condition
Several interesting studies:
DBSCAN: Ester, et al. (KDD’96)
OPTICS: Ankerst, et al (SIGMOD’99).
DENCLUE: Hinneburg & D. Keim (KDD’98)
CLIQUE: Agrawal, et al. (SIGMOD’98) (more grid-based)
DBSCAN: Basic Concepts
Density = number of points within a specified radius
core point: has high density
border point: has less density, but in the
neighborhood of a core point
noise point: not a core point or a border point.
Core point
border point
noise point
DBScan: Definitions
Two parameters:
Eps: radius of the neighbourhood
MinPts: minimum number of points in an Eps-neighbourhood of that point
$$N_{Eps}(p) = \{q \in D \mid dist(p, q) \le Eps\}$$
Core point: $|N_{Eps}(q)| \ge MinPts$
[Figure: a core point q and a point p in its neighbourhood, with MinPts = 5, Eps = 1 cm]
DBScan: Definitions
Directly density-reachable: p is directly density-reachable from q if p belongs to $N_{Eps}(q)$ and q is a core point
Density-reachable: p is density-reachable from q if there is a chain of points p1, …, pn with p1 = q, pn = p, such that each p_{i+1} is directly density-reachable from p_i
Density-connected: p and q are density-connected if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts
[Figure: illustrations of the three notions, with MinPts = 5, Eps = 1 cm]
DBSCAN: Cluster Definition
A cluster is defined as a maximal set of density-connected
points
[Figure: core, border, and outlier points of a cluster for Eps = 1 cm, MinPts = 5]
DBSCAN: The Algorithm
Arbitrarily select a point p
Retrieve all points density-reachable from p w.r.t. Eps
and MinPts.
If p is a core point, a cluster is formed.
If p is a border point, no points are density-reachable
from p and DBSCAN visits the next point of the database.
Continue the process until all of the points have been
processed.
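A compact, illustrative DBSCAN sketch (added; not optimized — it builds the full distance matrix):

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Label each point with a cluster id; noise points keep the label -1."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.where(dist[i] <= eps)[0] for i in range(n)]
    labels = np.full(n, -1)
    cluster_id = 0
    for p in range(n):
        if labels[p] != -1 or len(neighbors[p]) < min_pts:
            continue                      # already assigned, or not a core point
        labels[p] = cluster_id            # grow a cluster from core point p
        queue = list(neighbors[p])
        while queue:
            q = queue.pop()
            if labels[q] == -1:
                labels[q] = cluster_id
                if len(neighbors[q]) >= min_pts:   # q is also a core point
                    queue.extend(neighbors[q])     # expand via density-reachability
        cluster_id += 1
    return labels
```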
DBSCAN: Determining EPS and MinPts
Basic idea:
For points in a cluster, their kth nearest neighbors
are at roughly the same distance
Noise points have the kth nearest neighbor at
farther distance
Plot sorted distance of every point to its kth nearest
neighbor
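An added sketch of this k-distance heuristic using NumPy and matplotlib; k = 4 is just an illustrative default:

```python
import numpy as np
import matplotlib.pyplot as plt

def k_distance_plot(X, k=4):
    """Plot the sorted distance of every point to its k-th nearest neighbor;
    a 'knee' in the curve is a common heuristic choice for Eps (with MinPts = k + 1)."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    kth = np.sort(dist, axis=1)[:, k]     # column 0 is the zero self-distance
    plt.plot(np.sort(kth)[::-1])
    plt.xlabel("points sorted by k-distance")
    plt.ylabel(f"distance to {k}-th nearest neighbor")
    plt.show()
```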
DBSCAN: Sensitive to Parameters
Chapter 7. Cluster Analysis
Overview
Partitioning methods
Hierarchical methods
Density-based methods
Other methods
Clustering by mixture models: mixed Gaussian model
Conceptual clustering: COBWEB
Neural network approach: SOM
Cluster evaluation
Outlier analysis
Summary
Model-Based Clustering
Attempt to optimize the fit between the given data and
some mathematical model
Typical methods
Statistical approach: EM (Expectation Maximization)
Machine learning approach: COBWEB
Neural network approach: SOM (Self-Organizing Feature Map)
Clustering by Mixture Model
Assume data are generated by a mixture of probabilistic models
Each cluster can be represented by a probabilistic
model, like a Gaussian (continuous) or a Poisson
(discrete) distribution.
Expectation Maximization (EM)
Starts with an initial estimate of the parameters of the
mixture model
Iteratively refine the parameters using EM method
Expectation step: computes expectation of the likelihood
of each data point Xi belonging to cluster Ci
Maximization step: computes maximum likelihood
estimates of the parameters
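As an added illustration, scikit-learn's GaussianMixture implements this EM procedure for a mixture of Gaussians:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two Gaussian clusters; EM alternates the E-step (posterior membership
# probabilities of each point in each cluster) and the M-step (maximum
# likelihood updates of the means, covariances, and mixing weights).
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 4])
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0).fit(X)
labels = gmm.predict(X)            # hard assignment
posteriors = gmm.predict_proba(X)  # soft (probabilistic) membership
```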
Conceptual Clustering
Conceptual clustering
Generates a concept description for each concept (class)
Produces a hierarchical category or classification scheme
Related to decision tree learning and mixture model
learning
COBWEB (Fisher’87)
A popular and simple method of incremental conceptual
learning
Creates a hierarchical clustering in the form of a
classification tree
Each node refers to a concept and contains a
probabilistic description of that concept
COBWEB Classification Tree
COBWEB: Learning the Classification Tree
Incrementally builds the classification tree
Given a new object
Search for the best node at which to
incorporate the object or add a new node for
the object
Update the probabilistic description at each
node
Merging and splitting
Use a heuristic measure - Category Utility – to
guide construction of the tree
COBWEB: Comments
Limitations
The assumption that the attributes are independent of
each other is often too strong because correlation may
exist
Not suitable for clustering large databases – skewed tree and expensive probability distributions
Neural Network Approach
Neural network approach for unsupervised learning
Involves a hierarchical architecture of several units (neurons)
Two modes
Training: builds the network using input data
Mapping: automatically classifies a new input vector.
Typical methods
SOM (Self-Organizing Feature Map)
Competitive learning
Self-Organizing Feature Map (SOM)
SOMs, also called topological ordered maps, or Kohonen Self-Organizing
Feature Map (KSOMs)
Produce a low-dimensional (typically two-dimensional) representation of the high-dimensional input data, called a map
The distance and proximity relationship (i.e., topology) are
preserved as much as possible
Visualization tool for high-dimensional data
Clustering method for grouping similar objects together
Competitive learning
believed to resemble processing that can occur in the brain
Learning SOM
Network structure – a set of units associated with a weight vector
Training – competitive learning
The unit whose weight vector is closest to the current object
becomes the winning unit
The winner and its neighbors learn by having their weights
adjusted
Demo: http://www.sis.pitt.edu/~ssyn/som/demo.html
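An added toy sketch of this competitive-learning loop; the grid size, learning-rate schedule, and neighborhood schedule are arbitrary choices:

```python
import numpy as np

def train_som(X, rows=5, cols=5, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Toy SOM training: the winning unit and its grid neighbors move toward
    each presented input; learning rate and neighborhood shrink over time."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    W = rng.random((rows, cols, X.shape[1]))               # one weight vector per unit
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij'), axis=-1)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        # competition: the unit whose weight vector is closest to x wins
        d = np.linalg.norm(W - x, axis=2)
        winner = np.unravel_index(d.argmin(), d.shape)
        # adaptation: the winner and its grid neighbors adjust their weights
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 0.5
        g = np.exp(-np.sum((grid - np.array(winner)) ** 2, axis=2) / (2 * sigma ** 2))
        W += lr * g[..., None] * (x - W)
    return W
```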
Web Document Clustering Using SOM
The result of
SOM clustering
of 12088 Web
articles
The picture on
the right: drilling
down on the
keyword
“mining”
Chapter 7. Cluster Analysis
Overview
Partitioning methods
Hierarchical methods
Density-based methods
Other methods
Cluster evaluation
Outlier analysis
Cluster Evaluation
Determine clustering tendency of data, i.e.
distinguish whether non-random structure exists
Determine correct number of clusters
Evaluate how well the cluster results fit the data
without external information
Evaluate how well the cluster results are
compared to externally known results
Compare different clustering algorithms/results
[Figure: clusters found in random data — the original random points and the partitions produced by DBSCAN, k-means, and complete link]
Measures of Cluster Validity
Unsupervised (internal indices): Used to measure the
goodness of a clustering structure without respect to
external information.
Supervised (external indices): Used to measure the extent
to which cluster labels match externally supplied class
labels.
Sum of Squared Error (SSE)
Entropy
Relative: Used to compare two different clustering results
Often an external or internal index is used for this function, e.g., SSE
or entropy
Internal Measures: Cohesion and Separation
Cluster Cohesion: how closely related are objects in a
cluster
Cluster Separation: how distinct or well-separated a
cluster is from other clusters
Example: Squared Error
Cohesion: within-cluster sum of squares (SSE)
$$WSS = \sum_i \sum_{x \in C_i} (x - m_i)^2$$
Separation: between-cluster sum of squares
$$BSS = \sum_i \sum_j (m_i - m_j)^2$$
[Figure: cohesion vs. separation]
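An added sketch computing WSS and BSS as defined above (BSS here follows the pairwise-centroid form on this slide):

```python
import numpy as np

def cohesion_separation(X, labels):
    """WSS (cohesion) and BSS (separation) for a labeled data set."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    ids = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in ids])
    wss = sum(((X[labels == c] - centroids[i]) ** 2).sum()
              for i, c in enumerate(ids))
    bss = sum(((centroids[i] - centroids[j]) ** 2).sum()
              for i in range(len(ids)) for j in range(len(ids)))
    return wss, bss
```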
Internal Measures: SSE
SSE is good for comparing two clusterings
Can also be used to estimate the number of clusters
[Figure: an example data set and the curve of SSE versus the number of clusters K]
Internal Measures: SSE
Another example of a more complicated data set
[Figure: a more complicated data set with sub-regions labeled 1-7]
SSE of clusters found using K-means
Statistical Framework for SSE
Statistics framework for cluster validity
More “atypical” -> likely valid structure in the data
Use values resulting from random data as baseline
Example
Clustering: SSE = 0.005
SSE of three clusters in 500 sets of random data points
[Figure: the clustered data (SSE = 0.005) and a histogram of the SSE of three clusters found in 500 sets of random data points, with SSE roughly between 0.016 and 0.034]
External Measures
Compare cluster results with "ground truth" or a manual clustering
Classification-oriented measures: entropy, purity,
precision, recall, F-measures
Similarity-oriented measures: Jaccard scores
External Measures: Classification-Oriented Measures
Entropy: the degree to which each cluster consists of
objects of a single class
Precision: the fraction of a cluster that consists of objects
of a specified class
Recall: the extent to which a cluster contains all objects
of a specified class
External Measure: Similarity-Oriented Measures
Given a reference clustering T and clustering S
f00: number of pairs of points belonging to different clusters in both T and S
f01: number of pairs of points belonging to different clusters in T but the same cluster in S
f10: number of pairs of points belonging to the same cluster in T but different clusters in S
f11: number of pairs of points belonging to the same cluster in both T and S
$$Rand = \frac{f_{00} + f_{11}}{f_{00} + f_{01} + f_{10} + f_{11}} \qquad\qquad Jaccard = \frac{f_{11}}{f_{01} + f_{10} + f_{11}}$$
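An added sketch of these pair-counting measures for two label vectors T and S:

```python
from itertools import combinations

def pair_counts(T, S):
    """f00, f01, f10, f11 over all pairs of points, given label lists T and S."""
    f00 = f01 = f10 = f11 = 0
    for i, j in combinations(range(len(T)), 2):
        same_T, same_S = T[i] == T[j], S[i] == S[j]
        if same_T and same_S:       f11 += 1
        elif same_T and not same_S: f10 += 1
        elif not same_T and same_S: f01 += 1
        else:                       f00 += 1
    return f00, f01, f10, f11

def rand_index(T, S):
    f00, f01, f10, f11 = pair_counts(T, S)
    return (f00 + f11) / (f00 + f01 + f10 + f11)

def jaccard_index(T, S):
    f00, f01, f10, f11 = pair_counts(T, S)
    return f11 / (f01 + f10 + f11)

print(rand_index([0, 0, 1, 1], [0, 0, 1, 2]))  # 5/6
```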
Chapter 7. Cluster Analysis
Overview
Partitioning methods
Hierarchical methods
Density-based methods
Other methods
Cluster evaluation
Outlier analysis
What Is Outlier Discovery?
What are outliers?
A set of objects that are considerably dissimilar from the remainder of the data
Problem: Define and find outliers in large data sets
Applications:
Credit card fraud detection
Telecom fraud detection
Customer segmentation
Medical analysis
Outlier Discovery:
Statistical Approaches
Assume a model underlying distribution that generates
data set (e.g. normal distribution)
Use discordancy tests depending on
data distribution
distribution parameter (e.g., mean, variance)
number of expected outliers
Drawbacks
most tests are for single attribute
In many cases, data distribution may not be known
Outlier Discovery: Distance-Based Approach
Introduced to counter the main limitations imposed by
statistical methods
We need multi-dimensional analysis without knowing
data distribution
Distance-based outlier: A DB(p, D)-outlier is an object O
in a dataset T such that at least a fraction p of the objects
in T lies at a distance greater than D from O
Algorithms for mining distance-based outliers
Index-based algorithm
Nested-loop algorithm
Cell-based algorithm
Density-Based Local
Outlier Detection
Distance-based outlier detection
is based on global distance
distribution
It encounters difficulties to
identify outliers if data is not
uniformly distributed
Ex. C1 contains 400 loosely
distributed points, C2 has 100
tightly condensed points, 2
outlier points o1, o2
Distance-based method cannot
identify o2 as an outlier
Need the concept of a local outlier
Local outlier factor (LOF)
Assumes "outlier" is not crisp: each point has a LOF value
Outlier Discovery: Deviation-Based Approach
Identifies outliers by examining the main characteristics
of objects in a group
Objects that “deviate” from this description are
considered outliers
Sequential exception technique
simulates the way in which humans can distinguish
unusual objects from among a series of supposedly
like objects
OLAP data cube technique
uses data cubes to identify regions of anomalies in
large multidimensional data
Chapter 7. Cluster Analysis
1. What is Cluster Analysis?
2. Types of Data in Cluster Analysis
3. A Categorization of Major Clustering Methods
4. Partitioning Methods
5. Hierarchical Methods
6. Density-Based Methods
7. Grid-Based Methods
8. Model-Based Methods
9. Clustering High-Dimensional Data
10. Constraint-Based Clustering
11. Outlier Analysis
12. Summary
Summary
Cluster analysis groups objects based on their similarity
and has wide applications
Measure of similarity can be computed for various types
of data
Clustering algorithms can be categorized into partitioning
methods, hierarchical methods, density-based methods,
grid-based methods, and model-based methods
Outlier detection and analysis are very useful for fraud
detection, etc. and can be performed by statistical,
distance-based or deviation-based approaches
There are still lots of research issues on cluster analysis
Problems and Challenges
Considerable progress has been made in scalable
clustering methods
Partitioning: k-means, k-medoids, CLARANS
Hierarchical: BIRCH, ROCK, CHAMELEON
Density-based: DBSCAN, OPTICS, DenClue
Grid-based: STING, WaveCluster, CLIQUE
Model-based: EM, Cobweb, SOM
Frequent pattern-based: pCluster
Constraint-based: COD, constrained-clustering
Current clustering techniques do not address all the
requirements adequately, still an active area of research
References (1)
R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high
dimensional data for data mining applications. SIGMOD'98
M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. Optics: Ordering points to identify the
clustering structure, SIGMOD’99.
P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996
Beil F., Ester M., Xu X.: "Frequent Term-Based Text Clustering", KDD'02
M. M. Breunig, H.-P. Kriegel, R. Ng, J. Sander. LOF: Identifying Density-Based Local Outliers.
SIGMOD 2000.
M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in
large spatial databases. KDD'96.
M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing
techniques for efficient class identification. SSD'95.
D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.
D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on
dynamic systems. VLDB’98.
References (2)
V. Ganti, J. Gehrke, R. Ramakrishan. CACTUS Clustering Categorical Data Using Summaries. KDD'99.
D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on
dynamic systems. In Proc. VLDB’98.
S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for large databases.
SIGMOD'98.
S. Guha, R. Rastogi, and K. Shim. ROCK: A robust clustering algorithm for categorical attributes. In
ICDE'99, pp. 512-521, Sydney, Australia, March 1999.
A. Hinneburg, D.l A. Keim: An Efficient Approach to Clustering in Large Multimedia Databases with
Noise. KDD’98.
A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Printice Hall, 1988.
G. Karypis, E.-H. Han, and V. Kumar. CHAMELEON: A Hierarchical Clustering Algorithm Using
Dynamic Modeling. COMPUTER, 32(8): 68-75, 1999.
L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. John
Wiley & Sons, 1990.
E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB’98.
G. J. McLachlan and K.E. Bkasford. Mixture Models: Inference and Applications to Clustering. John
Wiley and Sons, 1988.
P. Michaud. Clustering techniques. Future Generation Computer systems, 13, 1997.
R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.
References (3)
L. Parsons, E. Haque and H. Liu, Subspace Clustering for High Dimensional Data: A Review ,
SIGKDD Explorations, 6(1), June 2004
E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets.
Proc. 1996 Int. Conf. on Pattern Recognition,.
G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering
approach for very large spatial databases. VLDB’98.
A. K. H. Tung, J. Han, L. V. S. Lakshmanan, and R. T. Ng. Constraint-Based Clustering in Large
Databases, ICDT'01.
A. K. H. Tung, J. Hou, and J. Han. Spatial Clustering in the Presence of Obstacles , ICDE'01
H. Wang, W. Wang, J. Yang, and P.S. Yu. Clustering by pattern similarity in large data
sets, SIGMOD’ 02.
W. Wang, Yang, R. Muntz, STING: A Statistical Information grid Approach to Spatial Data Mining,
VLDB’97.
T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH : an efficient data clustering method for very
large databases. SIGMOD'96.
www.cs.uiuc.edu/~hanj
Thank you !!!
Clustering: Rich Applications and
Multidisciplinary Efforts
Pattern Recognition
Spatial Data Analysis
Create thematic maps in GIS by clustering feature
spaces
Detect spatial clusters or for other spatial mining tasks
Image Processing
Economic Science (especially market research)
WWW
Document clustering
Cluster Weblog data to discover groups of similar access
patterns
Major Clustering Approaches (II)
Grid-based approach:
based on a multiple-level granularity structure
Typical methods: STING, WaveCluster, CLIQUE
Model-based:
A model is hypothesized for each of the clusters, and the method tries to find the best fit of the data to the given model
Typical methods: EM, SOM, COBWEB
Frequent pattern-based:
Based on the analysis of frequent patterns
Typical methods: pCluster
User-guided or constraint-based:
Clustering by considering user-specified or application-specific constraints
Typical methods: COD (obstacles), constrained clustering
Measure the Quality of Clustering
Dissimilarity/Similarity metric: Similarity is expressed in
terms of a distance function, typically metric: d(i, j)
There is a separate “quality” function that measures the
“goodness” of a cluster.
The definitions of distance functions are usually very
different for interval-scaled, boolean, categorical, ordinal,
ratio, and vector variables.
Weights should be associated with different variables
based on applications and data semantics.
It is hard to define “similar enough” or “good enough”
the answer is typically highly subjective.
Interval-valued variables
Standardize data
Calculate the mean absolute deviation:
$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$$
where
$$m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$$
Calculate the standardized measurement (z-score):
$$z_{if} = \frac{x_{if} - m_f}{s_f}$$
Using mean absolute deviation is more robust than using
standard deviation
Data Structures
Data matrix (two modes):
$$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$
Dissimilarity matrix (one mode):
$$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$
Type of data in clustering analysis
Interval-scaled variables
Binary variables
Nominal, ordinal, and ratio variables
Variables of mixed types
Similarity or Dissimilarity Metrics
Distances are normally used to measure the similarity or
dissimilarity between two data objects
Some popular ones include the Minkowski distance:
$$d(i,j) = \left(|x_{i1}-x_{j1}|^q + |x_{i2}-x_{j2}|^q + \cdots + |x_{ip}-x_{jp}|^q\right)^{1/q}$$
where $i = (x_{i1}, x_{i2}, \dots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \dots, x_{jp})$ are two p-dimensional data objects, and q is a positive integer
If q = 1, d is Manhattan distance:
$$d(i,j) = |x_{i1}-x_{j1}| + |x_{i2}-x_{j2}| + \cdots + |x_{ip}-x_{jp}|$$
Similarity and Dissimilarity Between
Objects (Cont.)
If q = 2, d is Euclidean distance:
$$d(i,j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$$
Properties
$d(i,j) \ge 0$
$d(i,i) = 0$
$d(i,j) = d(j,i)$
$d(i,j) \le d(i,k) + d(k,j)$
Also, one can use weighted distance, parametric Pearson product moment correlation, or other dissimilarity measures
Binary Variables
A contingency table for binary data (object i vs. object j):

              j = 1     j = 0     sum
    i = 1     a         b         a + b
    i = 0     c         d         c + d
    sum       a + c     b + d     p

Distance measure for symmetric binary variables:
$$d(i,j) = \frac{b + c}{a + b + c + d}$$
Distance measure for asymmetric binary variables:
$$d(i,j) = \frac{b + c}{a + b + c}$$
Jaccard coefficient (similarity measure for asymmetric binary variables):
$$sim_{Jaccard}(i,j) = \frac{a}{a + b + c}$$
Dissimilarity between Binary Variables
Example
Name    Gender    Fever    Cough    Test-1    Test-2    Test-3    Test-4
Jack    M         Y        N        P         N         N         N
Mary    F         Y        N        P         N         P         N
Jim     M         Y        P        N         N         N         N

gender is a symmetric attribute
the remaining attributes are asymmetric binary
let the values Y and P be set to 1, and the value N be set to 0
$$d(jack, mary) = \frac{0 + 1}{2 + 0 + 1} = 0.33$$
$$d(jack, jim) = \frac{1 + 1}{1 + 1 + 1} = 0.67$$
$$d(jim, mary) = \frac{1 + 2}{1 + 1 + 2} = 0.75$$
Nominal Variables
A generalization of the binary variable in that it can take
more than 2 states, e.g., red, yellow, blue, green
Method 1: Simple matching
m: # of matches, p: total # of variables
$$d(i,j) = \frac{p - m}{p}$$
Method 2: use a large number of binary variables
creating a new binary variable for each of the M
nominal states
Ordinal Variables
An ordinal variable can be discrete or continuous
Order is important, e.g., rank
Can be treated like interval-scaled
replace $x_{if}$ by its rank $r_{if} \in \{1, \dots, M_f\}$
map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by
$$z_{if} = \frac{r_{if} - 1}{M_f - 1}$$
compute the dissimilarity using methods for interval-scaled variables
Ratio-Scaled Variables
Ratio-scaled variable: a positive measurement on a
nonlinear scale, approximately at exponential scale,
such as AeBt or Ae-Bt
Methods:
treat them like interval-scaled variables—not a good
choice! (why?—the scale can be distorted)
apply logarithmic transformation
yif = log(xif)
treat them as continuous ordinal data and treat their ranks as interval-scaled
Distance between Attributes
A database may contain all the six types of variables
symmetric binary, asymmetric binary, nominal,
ordinal, interval and ratio
One may use a weighted formula to combine their
effects
$$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)}\, d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$$
f is binary or nominal:
dij(f) = 0 if xif = xjf , or dij(f) = 1 otherwise
f is interval-based: use the normalized distance
f is ordinal or ratio-scaled: compute ranks $r_{if}$ and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$, and treat $z_{if}$ as interval-scaled
Vector Objects
Vector objects: keywords in documents, gene
features in micro-arrays, etc.
Broad applications: information retrieval, biologic
taxonomy, etc.
Cosine measure
A variant: Tanimoto coefficient
Typical Alternatives to Calculate the Distance
between Clusters
Single link: smallest distance between an element in one cluster
and an element in the other, i.e., dis(Ki, Kj) = min(tip, tjq)
Complete link: largest distance between an element in one cluster
and an element in the other, i.e., dis(Ki, Kj) = max(tip, tjq)
Average: avg distance between an element in one cluster and an
element in the other, i.e., dis(Ki, Kj) = avg(tip, tjq)
Centroid: distance between the centroids of two clusters, i.e.,
dis(Ki, Kj) = dis(Ci, Cj)
Medoid: distance between the medoids of two clusters, i.e., dis(Ki,
Kj) = dis(Mi, Mj)
Medoid: one chosen, centrally located object in the cluster
Centroid, Radius and Diameter of a
Cluster (for numerical data sets)
Centroid: the "middle" of a cluster
$$C_m = \frac{\sum_{i=1}^{N} t_{ip}}{N}$$
Radius: square root of the average distance from any point of the cluster to its centroid
$$R_m = \sqrt{\frac{\sum_{i=1}^{N} (t_{ip} - c_m)^2}{N}}$$
Diameter: square root of the average mean squared distance between all pairs of points in the cluster
$$D_m = \sqrt{\frac{\sum_{i=1}^{N}\sum_{q=1}^{N} (t_{ip} - t_{iq})^2}{N(N-1)}}$$
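An added NumPy sketch of these three statistics:

```python
import numpy as np

def cluster_statistics(points):
    """Centroid, radius (RMS distance to the centroid), and diameter
    (RMS pairwise distance), following the formulas above."""
    P = np.asarray(points, dtype=float)
    N = len(P)
    centroid = P.mean(axis=0)
    radius = np.sqrt(((P - centroid) ** 2).sum() / N)
    pair_sq = ((P[:, None, :] - P[None, :, :]) ** 2).sum()
    diameter = np.sqrt(pair_sq / (N * (N - 1)))
    return centroid, radius, diameter
```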
PAM (Partitioning Around Medoids) (1987)
PAM (Kaufman and Rousseeuw, 1987), built in Splus
Use real object to represent the cluster
Select k representative objects arbitrarily
For each pair of non-selected object h and selected
object i, calculate the total swapping cost TCih
For each pair of i and h,
If TCih < 0, i is replaced by h
Then assign each non-selected object to the most
similar representative object
repeat steps 2-3 until there is no change
PAM Clustering: total swapping cost $TC_{ih} = \sum_j C_{jih}$
[Figure: the four cases for the contribution of a point j when medoid i is swapped with candidate h (t denotes another medoid): $C_{jih} = d(j,h) - d(j,i)$;  $C_{jih} = 0$;  $C_{jih} = d(j,t) - d(j,i)$;  $C_{jih} = d(j,h) - d(j,t)$]
Hierarchical Clustering
Use distance matrix as clustering criteria. This method
does not require the number of clusters k as an input,
but needs a termination condition
[Figure: steps 0-4 of agglomerative clustering (AGNES) on objects a, b, c, d, e — merging them into ab, de, cde, and finally abcde; divisive clustering (DIANA) runs the same steps in reverse order]
AGNES (Agglomerative Nesting)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., Splus
Use the Single-Link method and the dissimilarity matrix.
Merge nodes that have the least dissimilarity
Go on in a non-descending fashion
Eventually all nodes belong to the same cluster
[Figure: three snapshots of AGNES successively merging groups of points]
Dendrogram: Shows How the Clusters are Merged
Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram.
A clustering of the data objects is obtained by cutting the
dendrogram at the desired level, then each connected
component forms a cluster.
DIANA (Divisive Analysis)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., Splus
Inverse order of AGNES
Eventually each node forms a cluster on its own
[Figure: three snapshots of DIANA successively splitting the data set]
Sparsification in the Clustering Process
Keep the connections to the most similar
(nearest) neighbors of a point
Reduces the impact of noise and outliers and
sharpens the distinction between clusters.
Facilitates the use of graph partitioning
algorithms
Characteristics of Spatial Data Sets
• Clusters are defined as densely populated
regions of the space
• Clusters have arbitrary shapes, orientation,
and non-uniform sizes
• Difference in densities across clusters and
variation in density within clusters
• Existence of special artifacts (streaks) and
noise
The clustering algorithm must address
the above characteristics and also
require minimal supervision.
DBSCAN: Core, Border, and Noise Points
OPTICS: A Cluster-Ordering Method (1999)
OPTICS: Ordering Points To Identify the Clustering
Structure
Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)
Produces a special order of the database wrt its
density-based clustering structure
This cluster-ordering contains info equiv to the densitybased clusterings corresponding to a broad range of
parameter settings
Good for both automatic and interactive cluster analysis,
including finding intrinsic clustering structure
Can be represented graphically or using visualization
techniques
OPTICS: Some Extension from
DBSCAN
Index-based: k = number of dimensions, N = 20, p = 75%, M = N(1 − p) = 5
Complexity: O(kN²)
Core distance
Reachability distance: max(core-distance(o), d(o, p))
[Figure: example with MinPts = 5, ε = 3 cm — r(p1, o) = 2.8 cm, r(p2, o) = 4 cm]
[Figure: reachability plot — reachability-distance (undefined for the first object) plotted against the cluster-order of the objects]
Density-Based Clustering: OPTICS & Its Applications
Denclue: Technical Essence
Uses grid cells but only keeps information about grid
cells that do actually contain data points and manages
these cells in a tree-based access structure
Influence function: describes the impact of a data point
within its neighborhood
Overall density of the data space can be calculated as
the sum of the influence function of all data points
Clusters can be determined mathematically by
identifying density attractors
Density attractors are local maxima of the overall density function
Density Attractor
Center-Defined and Arbitrary
Different Aspects of Cluster Validation
1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information (use only the data).
4. Comparing the results of two different sets of cluster analyses to determine which is better.
5. Determining the ‘correct’ number of clusters.
For 2, 3, and 4, we can further distinguish whether we want to
evaluate the entire clustering or just individual clusters.
Using Similarity Matrix for Cluster Validation
[Figure: the similarity matrix of the points, sorted by the cluster labels found by DBSCAN]
Framework for Cluster Validity
Need a framework to interpret any measure.
For example, if our measure of evaluation has the value, 10, is that
good, fair, or poor?
Statistics provide a framework for cluster validity
The more “atypical” a clustering result is, the more likely it represents
valid structure in the data
Can compare the values of an index that result from random data or
clusterings to those of a clustering result.
If the value of the index is unlikely, then the cluster results are valid
These approaches are more complicated and harder to understand.
For comparing the results of two different sets of cluster
analyses, a framework is less necessary.
However, there is the question of whether the difference between two
index values is significant
DENCLUE: Using Statistical Density
Functions
DENsity-based CLUstEring by Hinneburg & Keim (KDD’98)
Using statistical density functions:
Influence function: $f_{Gaussian}(x, y) = e^{-\frac{d(x,y)^2}{2\sigma^2}}$
Overall density: $f^D_{Gaussian}(x) = \sum_{i=1}^{N} e^{-\frac{d(x,x_i)^2}{2\sigma^2}}$
Gradient: $\nabla f^D_{Gaussian}(x, x_i) = \sum_{i=1}^{N} (x_i - x)\, e^{-\frac{d(x,x_i)^2}{2\sigma^2}}$
Major features
Solid mathematical foundation
Good for data sets with large amounts of noise
Allows a compact mathematical description of arbitrarily shaped
clusters in high-dimensional data sets
Significantly faster than existing algorithms (e.g., DBSCAN)
But needs a large number of parameters
Clustering High-Dimensional Data
Clustering high-dimensional data
Many applications: text documents, DNA micro-array data
Major challenges:
Many irrelevant dimensions may mask clusters
Distance measure becomes meaningless—due to equi-distance
Clusters may exist only in some subspaces
Methods
Feature transformation: only effective if most dimensions are relevant; PCA & SVD useful only when features are highly correlated/redundant
Feature selection: wrapper or filter approaches; useful to find a subspace where the data have nice clusters
Subspace-clustering: find clusters in all the possible subspaces; CLIQUE, ProClus, and frequent pattern-based clustering
Data Mining: Concepts and Techniques
164
The Curse of Dimensionality
(graphs adapted from Parsons et al. KDD Explorations 2004)
Data in only one dimension is relatively
packed
Adding a dimension “stretches” the
points across that dimension, making
them farther apart
Adding more dimensions will make the
points further apart—high dimensional
data is extremely sparse
Distance measure becomes
meaningless—due to equi-distance
April 29, 2017
Data Mining: Concepts and Techniques
165
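A small sketch of the "equi-distance" effect described above: as dimensionality grows, the relative gap between the smallest and largest pairwise distance shrinks. The sample size and dimensions are illustrative choices.

# Sketch: relative contrast (max - min) / min of pairwise distances shrinks as dimensionality grows.
import numpy as np
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    X = rng.random((500, dim))
    d = pairwise_distances(X)
    d = d[np.triu_indices_from(d, k=1)]        # distinct pairs only
    print(dim, (d.max() - d.min()) / d.min())  # contrast drops toward 0 in high dimensions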
Why Subspace Clustering?
(adapted from Parsons et al. SIGKDD Explorations 2004)
Clusters may exist only in some subspaces
Subspace-clustering: find clusters in all the subspaces
April 29, 2017
Data Mining: Concepts and Techniques
166
CLIQUE (Clustering In QUEst)
Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98)
Automatically identifying subspaces of a high dimensional data space
that allow better clustering than original space
CLIQUE can be considered as both density-based and grid-based
It partitions each dimension into the same number of equal-length
intervals
It partitions an m-dimensional data space into non-overlapping
rectangular units
A unit is dense if the fraction of total data points contained in the
unit exceeds the input model parameter
A cluster is a maximal set of connected dense units within a
subspace
April 29, 2017
Data Mining: Concepts and Techniques
167
CLIQUE: The Major Steps
Partition the data space and find the number of points that
lie inside each cell of the partition.
Identify the subspaces that contain clusters using the
Apriori principle
Identify clusters
Determine dense units in all subspaces of interest
Determine connected dense units in all subspaces of
interest
Generate minimal description for the clusters
Determine maximal regions that cover a cluster of
connected dense units for each cluster
Determination of minimal cover for each cluster
April 29, 2017
Data Mining: Concepts and Techniques
168
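A hedged sketch of the first steps above: count points in equal-width 1-D units, keep the dense ones, and join dense 1-D units Apriori-style into 2-D candidates. The parameter names xi (intervals per dimension) and tau (density threshold) follow the CLIQUE paper's notation but the code itself is only an illustration; the full algorithm continues to higher dimensions and connects dense units into clusters.

# Sketch of CLIQUE's first steps: dense 1-D units, then Apriori-style 2-D candidates.
import numpy as np
from itertools import combinations

def dense_units_1d(X, xi=10, tau=0.05):
    n, d = X.shape
    bins, dense = {}, set()
    for f in range(d):
        lo, hi = X[:, f].min(), X[:, f].max()
        b = np.clip(((X[:, f] - lo) / (hi - lo + 1e-12) * xi).astype(int), 0, xi - 1)
        bins[f] = b
        counts = np.bincount(b, minlength=xi)
        dense |= {(f, u) for u in range(xi) if counts[u] / n > tau}   # dense 1-D units
    return bins, dense

def dense_units_2d(X, bins, dense1, tau=0.05):
    n = X.shape[0]
    dense2 = set()
    # Candidate 2-D units are built only from dense 1-D units (Apriori principle).
    for (f1, u1), (f2, u2) in combinations(sorted(dense1), 2):
        if f1 == f2:
            continue
        count = np.sum((bins[f1] == u1) & (bins[f2] == u2))
        if count / n > tau:
            dense2.add(((f1, u1), (f2, u2)))
    return dense2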
[Figure: CLIQUE example over the dimensions Salary (10,000), Vacation (week), and age; dense units found in the 2-D subspaces (age, Salary) and (age, Vacation) (e.g., Vacation = 3, age roughly 30-50) are combined to form candidate clusters.]
April 29, 2017
Data Mining: Concepts and Techniques
169
Strength and Weakness of CLIQUE
Strength
automatically finds subspaces of the highest
dimensionality such that high density clusters exist in
those subspaces
insensitive to the order of records in input and does not
presume some canonical data distribution
scales linearly with the size of input and has good
scalability as the number of dimensions in the data
increases
Weakness
The accuracy of the clustering result may be degraded
in exchange for the simplicity of the method
April 29, 2017
Data Mining: Concepts and Techniques
170
Frequent Pattern-Based Approach
Clustering high-dimensional space (e.g., clustering text documents,
microarray data)
Projected subspace-clustering: which dimensions should be projected
on?
CLIQUE, ProClus
Feature extraction: costly and may not be effective?
Using frequent patterns as “features”
Frequent patterns are inherent features
Mining frequent patterns may not be so expensive
Typical methods
Frequent-term-based document clustering
Clustering by pattern similarity in micro-array data (pClustering)
April 29, 2017
Data Mining: Concepts and Techniques
171
Clustering by Pattern Similarity (p-Clustering)
The micro-array “raw” data shows 3 genes and their
values in a multi-dimensional space
Difficult to find their patterns
Some subsets of dimensions, however, form nice shift
and scaling patterns
April 29, 2017
Data Mining: Concepts and Techniques
172
Why p-Clustering?
Microarray data analysis may require
Clustering on thousands of dimensions (attributes)
Discovery of both shift and scaling patterns
Clustering with Euclidean distance measure? — cannot find shift patterns
Clustering on derived attribute Aij = ai – aj? — introduces N(N-1) dimensions
Bi-cluster using transformed mean-squared residue score matrix (I, J)
$d_{iJ} = \frac{1}{|J|}\sum_{j \in J} d_{ij}$, $d_{Ij} = \frac{1}{|I|}\sum_{i \in I} d_{ij}$, $d_{IJ} = \frac{1}{|I||J|}\sum_{i \in I, j \in J} d_{ij}$
$H(I, J) = \frac{1}{|I||J|}\sum_{i \in I, j \in J} (d_{ij} - d_{iJ} - d_{Ij} + d_{IJ})^2$
A submatrix is a δ-cluster if H(I, J) ≤ δ for some δ > 0
Problems with bi-cluster
No downward closure property
Due to averaging, it may contain outliers but still be within the δ-threshold
April 29, 2017
Data Mining: Concepts and Techniques
173
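A minimal sketch of the mean squared residue H(I, J) defined above (row means, column means, and the overall mean of the submatrix); a submatrix qualifies as a δ-bi-cluster when H(I, J) ≤ δ. The example matrix is illustrative.

# Sketch: mean squared residue H(I, J) of a submatrix D (rows I, columns J).
import numpy as np

def mean_squared_residue(D):
    row_means = D.mean(axis=1, keepdims=True)   # d_iJ
    col_means = D.mean(axis=0, keepdims=True)   # d_Ij
    overall = D.mean()                          # d_IJ
    residue = D - row_means - col_means + overall
    return np.mean(residue ** 2)

D = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0]])                 # a perfect shift pattern
print(mean_squared_residue(D))                  # 0.0, so it is a delta-bi-cluster for any delta > 0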
p-Clustering: Clustering
by Pattern Similarity
Given objects x, y in O and features a, b in T, the pScore of a 2 × 2
matrix is
$pScore\left(\begin{bmatrix} d_{xa} & d_{xb} \\ d_{ya} & d_{yb} \end{bmatrix}\right) = |(d_{xa} - d_{xb}) - (d_{ya} - d_{yb})|$
A pair (O, T) is in δ-pCluster if for any 2 by 2 matrix X in (O, T),
pScore(X) ≤ δ for some δ > 0
Properties of δ-pCluster
Downward closure
Clusters are more homogeneous than bi-cluster (thus the name:
pair-wise Cluster)
Pattern-growth algorithm has been developed for efficient mining
For scaling patterns, one can observe that taking the logarithm of
$\frac{d_{xa}/d_{ya}}{d_{xb}/d_{yb}}$
will lead to the pScore form
April 29, 2017
Data Mining: Concepts and Techniques
174
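A small sketch of the pScore above and a brute-force check of the δ-pCluster condition over all object and attribute pairs. This is only for illustration; the actual mining uses a pattern-growth algorithm, and the data matrix below is made up.

# Sketch: pScore of a 2x2 matrix and a brute-force delta-pCluster check (illustration only).
import numpy as np
from itertools import combinations

def pscore(m):                                   # m = [[d_xa, d_xb], [d_ya, d_yb]]
    return abs((m[0][0] - m[0][1]) - (m[1][0] - m[1][1]))

def is_delta_pcluster(D, objects, attrs, delta):
    for x, y in combinations(objects, 2):
        for a, b in combinations(attrs, 2):
            if pscore([[D[x, a], D[x, b]], [D[y, a], D[y, b]]]) > delta:
                return False
    return True

D = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [9.0, 1.0, 2.0]])
print(is_delta_pcluster(D, objects=[0, 1], attrs=[0, 1, 2], delta=0.1))   # True: rows 0 and 1 form a shift pattern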
Chapter 7. Cluster Analysis
1. What is Cluster Analysis?
2. Types of Data in Cluster Analysis
3. A Categorization of Major Clustering Methods
4. Partitioning Methods
5. Hierarchical Methods
6. Density-Based Methods
7. Grid-Based Methods
8. Model-Based Methods
9. Clustering High-Dimensional Data
10. Constraint-Based Clustering
11. Outlier Analysis
12. Summary
April 29, 2017
Data Mining: Concepts and Techniques
175
Grid-Based Clustering Method
Using multi-resolution grid data structure
Several interesting methods
STING (a STatistical INformation Grid approach) by Wang,
Yang and Muntz (1997)
WaveCluster by Sheikholeslami, Chatterjee, and Zhang
(VLDB’98)
A multi-resolution clustering approach using wavelet
method
CLIQUE: Agrawal, et al. (SIGMOD’98)
On high-dimensional data (thus covered in the section on clustering
high-dimensional data)
April 29, 2017
Data Mining: Concepts and Techniques
176
STING: A Statistical Information Grid Approach
Wang, Yang and Muntz (VLDB’97)
The spatial area is divided into rectangular cells
There are several levels of cells corresponding to different
levels of resolution
April 29, 2017
Data Mining: Concepts and Techniques
177
The STING Clustering Method
Each cell at a high level is partitioned into a number of
smaller cells in the next lower level
Statistical info of each cell is calculated and stored
beforehand and is used to answer queries
Parameters of higher level cells can be easily calculated from
parameters of lower level cells
count, mean, standard deviation, min, max
type of distribution—normal, uniform, etc.
Use a top-down approach to answer spatial data queries
Start from a pre-selected layer—typically with a small
number of cells
For each cell in the current level compute the confidence
interval
April 29, 2017
Data Mining: Concepts and Techniques
178
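A hedged sketch of the bottom-up part of this scheme: the lowest layer stores count/mean/min/max per cell, and each higher-level cell aggregates the statistics of its children. A 2-D grid with a 2x2 fan-out is an assumption made for the example; it is not the paper's exact data structure.

# Sketch: STING-style bottom-up aggregation of per-cell statistics (2-D grid, 2x2 fan-out assumed).
import numpy as np

def bottom_layer(points, cells=8):
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = np.clip(((points - lo) / (hi - lo + 1e-12) * cells).astype(int), 0, cells - 1)
    stats = {}
    for (i, j) in {tuple(ij) for ij in idx}:
        cell_pts = points[(idx[:, 0] == i) & (idx[:, 1] == j)]
        stats[(i, j)] = {"count": len(cell_pts), "mean": cell_pts.mean(axis=0),
                         "min": cell_pts.min(axis=0), "max": cell_pts.max(axis=0)}
    return stats

def next_layer(child_stats):
    parents = {}
    for (i, j), s in child_stats.items():
        key = (i // 2, j // 2)                     # 2x2 children share a parent cell
        p = parents.setdefault(key, {"count": 0, "sum": 0.0, "min": s["min"], "max": s["max"]})
        p["count"] += s["count"]
        p["sum"] = p["sum"] + s["mean"] * s["count"]
        p["min"] = np.minimum(p["min"], s["min"])
        p["max"] = np.maximum(p["max"], s["max"])
    return {k: {"count": v["count"], "mean": v["sum"] / v["count"],
                "min": v["min"], "max": v["max"]} for k, v in parents.items()}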
Comments on STING
Remove the irrelevant cells from further consideration
When finished examining the current layer, proceed to the
next lower level
Repeat this process until the bottom layer is reached
Advantages:
Query-independent, easy to parallelize, incremental
update
O(K), where K is the number of grid cells at the lowest
level
Disadvantages:
All the cluster boundaries are either horizontal or
vertical, and no diagonal boundary is detected
April 29, 2017
Data Mining: Concepts and Techniques
179
WaveCluster: Clustering by Wavelet Analysis (1998)
Sheikholeslami, Chatterjee, and Zhang (VLDB’98)
A multi-resolution clustering approach which applies wavelet
transform to the feature space
How to apply wavelet transform to find clusters
Summarizes the data by imposing a multidimensional grid
structure onto data space
These multidimensional spatial data objects are represented in an
n-dimensional feature space
Apply wavelet transform on feature space to find the dense
regions in the feature space
Apply wavelet transform multiple times, which results in clusters at
different scales from fine to coarse
April 29, 2017
Data Mining: Concepts and Techniques
180
Wavelet Transform
Wavelet transform: A signal processing technique that
decomposes a signal into different frequency sub-bands
(can be applied to n-dimensional signals)
Data are transformed to preserve relative distance
between objects at different levels of resolution
Allows natural clusters to become more distinguishable
April 29, 2017
Data Mining: Concepts and Techniques
181
The WaveCluster Algorithm
Input parameters
# of grid cells for each dimension
the wavelet, and the # of applications of wavelet transform
Why is wavelet transformation useful for clustering?
Use hat-shaped filters to emphasize regions where points cluster,
but simultaneously suppress weaker information on their
boundaries
Effective removal of outliers, multi-resolution, cost effective
Major features:
Complexity O(N)
Detects arbitrarily shaped clusters at different scales
Not sensitive to noise, not sensitive to input order
Only applicable to low dimensional data
Both grid-based and density-based
April 29, 2017
Data Mining: Concepts and Techniques
182
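A rough sketch of the quantize-then-transform idea, under stated assumptions: the PyWavelets and SciPy packages stand in for the paper's own filters (a Haar wavelet is used instead of the hat-shaped filter), and the grid size and threshold are illustrative. Points are quantized into a grid of counts, a 2-D wavelet transform is applied, and connected dense cells in the low-frequency sub-band form coarse clusters.

# Sketch: WaveCluster-style quantization + wavelet transform (PyWavelets assumed; parameters illustrative).
import numpy as np
import pywt
from scipy import ndimage

def wavecluster_sketch(points, cells=64, wavelet="haar", threshold=2.0):
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = np.clip(((points - lo) / (hi - lo + 1e-12) * cells).astype(int), 0, cells - 1)
    grid = np.zeros((cells, cells))
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)        # quantize: counts per grid cell

    cA, (cH, cV, cD) = pywt.dwt2(grid, wavelet)       # low-frequency sub-band keeps the dense regions
    dense = cA > threshold
    labels, n_clusters = ndimage.label(dense)         # connected dense cells = clusters at a coarse scale
    return labels, n_clusters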
Quantization
& Transformation
First, quantize data into m-D grid
structure, then wavelet transform
a) scale 1: high resolution
b) scale 2: medium resolution
c) scale 3: low resolution
April 29, 2017
Data Mining: Concepts and Techniques
183
Chapter 7. Cluster Analysis
1. What is Cluster Analysis?
2. Types of Data in Cluster Analysis
3. A Categorization of Major Clustering Methods
4. Partitioning Methods
5. Hierarchical Methods
6. Density-Based Methods
7. Grid-Based Methods
8. Model-Based Methods
9. Clustering High-Dimensional Data
10. Constraint-Based Clustering
11. Outlier Analysis
12. Summary
April 29, 2017
Data Mining: Concepts and Techniques
184
Why Constraint-Based Cluster Analysis?
Need user feedback: Users know their applications the best
Fewer parameters but more user-desired constraints, e.g., an
ATM allocation problem: obstacles & desired clusters
April 29, 2017
Data Mining: Concepts and Techniques
185
A Classification of Constraints in Cluster Analysis
Clustering in applications: desirable to have user-guided
(i.e., constrained) cluster analysis
Different constraints in cluster analysis:
Constraints on individual objects (do selection first)
Cluster on houses worth over $300K
Constraints on distance or similarity functions
Weighted functions, obstacles (e.g., rivers, lakes)
Constraints on the selection of clustering parameters
# of clusters, MinPts, etc.
Semi-supervised: giving small training sets as
“constraints” or hints
Pair-wise constraints
April 29, 2017
Data Mining: Concepts and Techniques
186
Clustering with User-Specified Constraints
Example: Locating k delivery centers, each serving at least
m valued customers and n ordinary ones
Proposed approach
Find an initial “solution” by partitioning the data set into
k groups and satisfying user-constraints
Iteratively refine the solution by micro-clustering
relocation (e.g., moving δ μ-clusters from cluster Ci to
Cj) and “deadlock” handling (break the microclusters
when necessary)
Efficiency is improved by micro-clustering
How to handle more complicated constraints?
E.g., having approximately the same number of valued
customers in each cluster?! — Can you solve it?
April 29, 2017
Data Mining: Concepts and Techniques
187
Clustering With Obstacle Objects
K-medoids is preferable since
k-means may locate the ATM center
in the middle of a lake
Visibility graph and shortest path
Triangulation and micro-clustering
Two kinds of join indices (shortest paths) worth pre-computation
VV index: indices for any pair of
obstacle vertices
MV index: indices for any pair of
micro-cluster and obstacle
vertices
April 29, 2017
Data Mining: Concepts and Techniques
188
An Example: Clustering With Obstacle Objects
Not taking obstacles into account vs. taking obstacles into account
April 29, 2017
Data Mining: Concepts and Techniques
189
Internal Measures Based on Proximity Graph
A proximity graph based approach can also be used for
cohesion and separation.
Cluster cohesion is the sum of the weight of all links within a cluster.
Cluster separation is the sum of the weights between nodes in the cluster
and nodes outside the cluster.
[Figure: graph links within a cluster (cohesion) and links crossing the cluster boundary (separation)]
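A minimal sketch of these two quantities, assuming the proximity graph is given as a similarity matrix W and cluster membership as a label array; cohesion sums link weights inside the chosen cluster, separation sums weights of links leaving it. The small matrix below is illustrative.

# Sketch: graph-based cohesion and separation from a similarity (proximity) matrix W and labels.
import numpy as np

def cohesion_and_separation(W, labels, cluster):
    inside = labels == cluster
    cohesion = W[np.ix_(inside, inside)].sum() / 2.0          # each within-cluster link counted once
    separation = W[np.ix_(inside, ~inside)].sum()             # links crossing the cluster boundary
    return cohesion, separation

W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
labels = np.array([0, 0, 1])
print(cohesion_and_separation(W, labels, cluster=0))          # cohesion 0.9, separation 0.3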