Data Mining
Cluster Analysis: Basic Concepts and Algorithms

Slides from Lecture Notes for Chapter 8,
Introduction to Data Mining
by Tan, Steinbach, Kumar

© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004
What is Cluster Analysis?
• Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups

[Figure: two groups of points; intra-cluster distances are minimized, inter-cluster distances are maximized]
Notion of a Cluster can be Ambiguous

[Figure: the same set of points interpreted as two, four, or six clusters, illustrating the question "How many clusters?"]
Partitional Clustering
• A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset

[Figure: original points alongside a partitional clustering of them]
Hierarchical Clustering
• A set of nested clusters organized as a hierarchical tree

[Figure: a traditional hierarchical clustering of points p1-p4 with its dendrogram, and a non-traditional (overlapping) hierarchical clustering of the same points with its dendrogram]
Other Distinctions Between Sets of Clusters
• Exclusive versus non-exclusive
  – In non-exclusive clusterings, points may belong to multiple clusters.
  – Can represent multiple classes or ‘border’ points
• Fuzzy versus non-fuzzy
  – In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
  – Weights must sum to 1
  – Probabilistic clustering has similar characteristics
• Partial versus complete
  – In some cases, we only want to cluster some of the data
• Heterogeneous versus homogeneous
  – Clusters of widely different sizes, shapes, and densities
Hierarchical Clustering
• Two main types of hierarchical clustering
  – Agglomerative:
    · Start with the points as individual clusters
    · At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
  – Divisive:
    · Start with one, all-inclusive cluster
    · At each step, split a cluster until each cluster contains a point (or there are k clusters)
• Traditional hierarchical algorithms use a similarity or distance matrix
  – Merge or split one cluster at a time
Agglomerative Clustering Algorithm
• More popular hierarchical clustering technique
• Basic algorithm is straightforward:
  1. Compute the proximity matrix
  2. Let each data point be a cluster
  3. Repeat
  4.    Merge the two closest clusters
  5.    Update the proximity matrix
  6. Until only a single cluster remains
• Key operation is the computation of the proximity of two clusters
  – Different approaches to defining the distance between clusters distinguish the different algorithms
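A minimal Python sketch of this loop (not from the slides; the function name, the NumPy dependency, the single-link update, and the toy points are all illustrative assumptions):

```python
import numpy as np

def agglomerative(points, k=1):
    """Naive agglomerative clustering sketch: merge the closest pair
    of clusters until only k clusters remain (single-link update)."""
    # Steps 1-2: compute the proximity matrix; each point is a cluster.
    pts = np.asarray(points, dtype=float)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)           # ignore self-distances
    clusters = [[i] for i in range(len(pts))]

    # Steps 3-6: repeatedly merge the two closest clusters and
    # update the proximity matrix.
    while len(clusters) > k:
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        i, j = min(i, j), max(i, j)
        clusters[i].extend(clusters[j])      # merge cluster j into i
        del clusters[j]
        # Single-link update: distance to the merged cluster is the
        # minimum of the distances to its two parts.
        dist[i, :] = np.minimum(dist[i, :], dist[j, :])
        dist[:, i] = dist[i, :]
        dist[i, i] = np.inf
        dist = np.delete(np.delete(dist, j, axis=0), j, axis=1)
    return clusters

print(agglomerative([[0, 0], [0, 1], [5, 5], [5, 6]], k=2))
# -> [[0, 1], [2, 3]]
```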
Starting Situation
• Start with clusters of individual points and a proximity matrix

[Figure: individual points, and the initial proximity matrix with one row and column per point p1, p2, p3, p4, p5, ...]
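A small sketch of how such a starting proximity matrix might be built, assuming Euclidean distance and made-up points (the SciPy helpers pdist/squareform do the pairwise work):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Six 2-D points; each starts out as its own cluster.
points = np.array([[0.0, 0.0], [0.4, 0.5], [2.0, 2.2],
                   [2.1, 1.9], [4.0, 0.3], [3.8, 0.1]])

# pdist returns the condensed pairwise distances; squareform turns
# them into the full symmetric proximity matrix of the slide.
proximity = squareform(pdist(points, metric='euclidean'))
print(np.round(proximity, 2))
```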
Intermediate Situation
• After some merging steps, we have some clusters

[Figure: clusters C1-C5, and the proximity matrix now indexed by C1...C5]
Intermediate Situation
• We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.

[Figure: clusters C1-C5 with C2 and C5 highlighted, and the C1...C5 proximity matrix]
After Merging
• The question is "How do we update the proximity matrix?"

[Figure: clusters C1, C3, C4, and the merged cluster C2 U C5; in the proximity matrix, the row and column for C2 U C5 are marked with "?"]
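The answer depends on the linkage criterion (defined on the following slides). As a sketch, writing d(·,·) for cluster distance, the new row for C2 ∪ C5 can be computed from the old rows of C2 and C5 under the three most common criteria:

$$d(C_2 \cup C_5,\, C_k) = \begin{cases} \min\{d(C_2, C_k),\ d(C_5, C_k)\} & \text{MIN (single link)} \\[4pt] \max\{d(C_2, C_k),\ d(C_5, C_k)\} & \text{MAX (complete link)} \\[4pt] \dfrac{|C_2|\, d(C_2, C_k) + |C_5|\, d(C_5, C_k)}{|C_2| + |C_5|} & \text{group average} \end{cases}$$

(The group-average line is exact when d is the average pairwise distance between clusters.)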
How to Define Inter-Cluster Similarity
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an objective function
  – Ward's Method uses squared error

[Figure: two clusters of points p1...p5 with the question "Similarity?", next to the proximity matrix]
Cluster Similarity: MIN or Single Link
• Similarity of two clusters is based on the two most similar (closest) points in the different clusters
  – Determined by one pair of points, i.e., by one link in the proximity graph.

       I1    I2    I3    I4    I5
  I1  1.00  0.90  0.10  0.65  0.20
  I2  0.90  1.00  0.70  0.60  0.50
  I3  0.10  0.70  1.00  0.40  0.30
  I4  0.65  0.60  0.40  1.00  0.80
  I5  0.20  0.50  0.30  0.80  1.00

[Figure: single-link dendrogram over points 1-5]
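Single-link clustering on this matrix can be reproduced with SciPy (a sketch: SciPy works with distances, so the conversion dist = 1 − sim is one common but arbitrary choice, not something the slides prescribe):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Similarity matrix from the slide for items I1..I5.
sim = np.array([[1.00, 0.90, 0.10, 0.65, 0.20],
                [0.90, 1.00, 0.70, 0.60, 0.50],
                [0.10, 0.70, 1.00, 0.40, 0.30],
                [0.65, 0.60, 0.40, 1.00, 0.80],
                [0.20, 0.50, 0.30, 0.80, 1.00]])

dist = 1.0 - sim                    # convert similarities to distances
np.fill_diagonal(dist, 0.0)         # squareform needs a zero diagonal
Z = linkage(squareform(dist), method='single')   # MIN / single link
print(Z)                                         # merge order + heights
print(fcluster(Z, t=2, criterion='maxclust'))    # cut into 2 clusters
```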
Cluster Similarity: MAX or Complete Linkage
• Similarity of two clusters is based on the two least similar (most distant) points in the different clusters
  – Determined by all pairs of points in the two clusters

(Same I1-I5 similarity matrix as on the previous slide.)

[Figure: complete-link dendrogram over points 1-5]
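To see how MIN and MAX differ on the same data, here is a small self-contained sketch (the helper name cluster_sim and the chosen clusters {I1, I2} vs. {I3, I4, I5} are illustrative assumptions). Note that with similarities rather than distances, single link takes the maximum entry and complete link the minimum:

```python
import numpy as np

# Similarity matrix from the slides for items I1..I5.
sim = np.array([[1.00, 0.90, 0.10, 0.65, 0.20],
                [0.90, 1.00, 0.70, 0.60, 0.50],
                [0.10, 0.70, 1.00, 0.40, 0.30],
                [0.65, 0.60, 0.40, 1.00, 0.80],
                [0.20, 0.50, 0.30, 0.80, 1.00]])

def cluster_sim(sim, A, B, kind):
    """Similarity between clusters A and B (lists of 0-based indices):
    the closest pair for single link, the farthest for complete link."""
    block = sim[np.ix_(A, B)]
    return block.max() if kind == 'single' else block.min()

print(cluster_sim(sim, [0, 1], [2, 3, 4], 'single'))    # 0.70
print(cluster_sim(sim, [0, 1], [2, 3, 4], 'complete'))  # 0.10
```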
Cluster Similarity: Group Average
• Proximity of two clusters is the average of pairwise proximity between points in the two clusters.

$$\mathrm{proximity}(\mathrm{Cluster}_i, \mathrm{Cluster}_j) = \frac{\sum_{p_i \in \mathrm{Cluster}_i,\; p_j \in \mathrm{Cluster}_j} \mathrm{proximity}(p_i, p_j)}{|\mathrm{Cluster}_i| \times |\mathrm{Cluster}_j|}$$

• Need to use average connectivity for scalability, since total proximity favors large clusters

(Same I1-I5 similarity matrix as on the MIN slide.)

[Figure: group-average dendrogram over points 1-5]
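The formula above translates directly into code; a minimal sketch (the function name and the example clusters are assumptions, the matrix is the one from the slides):

```python
import numpy as np

sim = np.array([[1.00, 0.90, 0.10, 0.65, 0.20],
                [0.90, 1.00, 0.70, 0.60, 0.50],
                [0.10, 0.70, 1.00, 0.40, 0.30],
                [0.65, 0.60, 0.40, 1.00, 0.80],
                [0.20, 0.50, 0.30, 0.80, 1.00]])

def group_average(prox, A, B):
    """Average of all |A| * |B| pairwise proximities between the two
    clusters, i.e. the group-average formula on this slide."""
    return prox[np.ix_(A, B)].sum() / (len(A) * len(B))

print(group_average(sim, [0, 1], [2, 3, 4]))  # 2.75 / 6 = 0.458...
```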
Hierarchical Clustering: Time and Space Requirements
• O(N²) space, since it uses the proximity matrix
  – N is the number of points
• O(N³) time in many cases
  – There are N steps, and at each step the proximity matrix, which has size N², must be updated and searched
  – Complexity can be reduced to O(N² log(N)) time for some approaches
Hierarchical Clustering: Problems and Limitations
• Once a decision is made to combine two clusters, it cannot be undone
• No objective function is directly minimized
• Different schemes have problems with one or more of the following:
  – Sensitivity to noise and outliers
  – Difficulty handling different sized clusters and convex shapes
  – Breaking large clusters
Internal Measures: Cohesion and Separation
• Cluster Cohesion: measures how closely related the objects in a cluster are
  – Example: sum of squared errors (SSE)
• Cluster Separation: measures how distinct or well-separated a cluster is from other clusters
• Example: Squared Error
  – Cohesion is measured by the within-cluster SSE:
    $$WSS = \sum_i \sum_{x \in C_i} (x - m_i)^2$$
  – Separation is measured by the between-cluster SSE:
    $$BSS = \sum_i |C_i| \, (m - m_i)^2$$
  – where |C_i| is the size of cluster C_i, m_i is the centroid of C_i, and m is the overall centroid
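A hedged Python sketch of these two measures (the function names and NumPy representation are assumptions; m_i is computed as the cluster mean, m as the overall mean). The example on the next slide exercises exactly these definitions:

```python
import numpy as np

def wss(points, labels):
    """Within-cluster SSE (cohesion): squared distances from each
    point to its own cluster centroid, summed over all clusters."""
    points, labels = np.asarray(points, dtype=float), np.asarray(labels)
    return sum(((points[labels == k]
                 - points[labels == k].mean(axis=0)) ** 2).sum()
               for k in np.unique(labels))

def bss(points, labels):
    """Between-cluster SSE (separation): cluster size times squared
    distance from each cluster centroid to the overall centroid."""
    points, labels = np.asarray(points, dtype=float), np.asarray(labels)
    m = points.mean(axis=0)
    return sum((labels == k).sum()
               * ((points[labels == k].mean(axis=0) - m) ** 2).sum()
               for k in np.unique(labels))
```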
Internal Measures: Cohesion and Separation
• Example: SSE
  – BSS + WSS = constant

[Figure: points 1, 2, 4, 5 on a number line, with centroids m1 = 1.5, m = 3, and m2 = 4.5 marked]

K=1 cluster:
$$WSS = (1-3)^2 + (2-3)^2 + (4-3)^2 + (5-3)^2 = 10$$
$$BSS = 4 \times (3-3)^2 = 0$$
$$\text{Total} = 10 + 0 = 10$$

K=2 clusters ({1, 2} and {4, 5}):
$$WSS = (1-1.5)^2 + (2-1.5)^2 + (4-4.5)^2 + (5-4.5)^2 = 1$$
$$BSS = 2 \times (3-1.5)^2 + 2 \times (4.5-3)^2 = 9$$
$$\text{Total} = 1 + 9 = 10$$
Internal Measures: Cohesion and Separation
• A proximity graph based approach can also be used for cohesion and separation.
  – Cluster cohesion is the sum of the weights of all links within a cluster.
  – Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster.

[Figure: a graph whose within-cluster edges illustrate cohesion and whose between-cluster edges illustrate separation]
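A small sketch of this graph-based view (the weight matrix and function name are made up for illustration): given symmetric edge weights W and the members of one cluster, cohesion sums the within-cluster weights and separation sums the weights crossing the cluster boundary:

```python
import numpy as np

def graph_cohesion_separation(W, members):
    """Given a symmetric edge-weight matrix W and the node indices of
    one cluster, return (cohesion, separation): total weight of links
    inside the cluster vs. total weight of links leaving it."""
    W = np.asarray(W, dtype=float)
    inside = np.zeros(len(W), dtype=bool)
    inside[list(members)] = True
    cohesion = W[np.ix_(inside, inside)].sum() / 2  # each link counted twice
    separation = W[np.ix_(inside, ~inside)].sum()
    return cohesion, separation

# Toy 4-node graph: nodes 0-1 tightly linked, weakly tied to nodes 2-3.
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])
print(graph_cohesion_separation(W, [0, 1]))  # (0.9, 0.4)
```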