Cluster Analysis
CS240B Lecture notes
Based on those by © Tan, Steinbach, Kumar
Introduction to Data Mining, 4/18/2004
What is Cluster Analysis?
 Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
– Intra-cluster distances are minimized; inter-cluster distances are maximized
Cluster Analysis: many & diverse applications
 Understanding
– Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations
  Discovered clusters and their industry group:
  1. Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN  →  Technology1-DOWN
  2. Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN  →  Technology2-DOWN
  3. Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN  →  Financial-DOWN
  4. Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP  →  Oil-UP
 Summarization
– Reduce the size of large data sets
  [Figure: clustering precipitation in Australia]
Types of Clusterings
 A clustering is a set of clusters
 Important distinction between hierarchical and partitional sets of clusters
 Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
 Hierarchical Clustering
– A set of nested clusters organized as a hierarchical tree
Partitional Clustering
[Figures: the original points and a partitional clustering of them]
Hierarchical Clustering
[Figures: a traditional hierarchical clustering of points p1–p4 with its dendrogram, and a non-traditional hierarchical clustering of the same points with its dendrogram]
Other Distinctions Between Sets of Clusters
 Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters.
– Can represent multiple classes or ‘border’ points
 Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
 Partial versus complete
– In some cases, we only want to cluster some of the data
 Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities
Clustering Algorithms
 K-means and its variants
 Hierarchical clustering
 Density-based clustering
K-means Clustering
 Partitional clustering approach
 Each cluster is associated with a centroid (center point)
 Each point is assigned to the cluster with the closest centroid
 Number of clusters, K, must be specified
 The basic algorithm is very simple
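The basic algorithm is: pick K initial centroids, then repeat an assignment step (each point goes to its closest centroid) and an update step (each centroid becomes the mean of its points) until the centroids stop changing. The following is a minimal NumPy sketch of that loop, not code from the lecture; the function and argument names are illustrative.

import numpy as np

def kmeans(points, k, max_iter=100, seed=0):
    # points: (n, d) array; k: the number of clusters (must be specified, as noted above)
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]  # random initial centroids
    for _ in range(max_iter):
        # Assignment step: distance from every point to every centroid, take the closest.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of the points assigned to it.
        new_centroids = np.array([
            points[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
            for i in range(k)
        ])
        if np.allclose(new_centroids, centroids):   # stop when the centroids no longer move
            break
        centroids = new_centroids
    return centroids, labels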
K-means Clustering – Details
 Initial centroids are often chosen randomly.
– Clusters produced vary from one run to another.
 The centroid is (typically) the mean of the points in the cluster.
 ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.
 K-means will converge for the common similarity measures mentioned above.
 Most of the convergence happens in the first few iterations.
– Often the stopping condition is changed to ‘until relatively few points change clusters’
 Complexity is O(n * K * I * d)
– n = number of points, K = number of clusters, I = number of iterations, d = number of attributes
Two different K-means Clusterings
[Figures: the original points, an optimal clustering, and a sub-optimal clustering produced by two different runs of K-means on the same data]
K-means Clustering: Example
Initial centroids: Case 1
[Figure: the original points with the Case 1 initial centroids]
Importance of Choosing Initial Centroids
[Animation: K-means iterations 1–5 from the initial centroids just shown]
Importance of Choosing Initial Centroids
[Figures: K-means iterations 1–6, one panel per iteration]
K-means Clusterings: Initial centers
Initial centers: Case 1
[Figure: the original points with these initial centers]
Importance of Choosing Initial Centroids …
[Animation: K-means iterations 1–5 from these initial centers]
Importance of Choosing Initial Centroids …
[Figures: K-means iterations 1–5, one panel per iteration]
Problems with Selecting Initial Points
 If there are 2 ‘real’ clusters, then any two initial centroids produce an optimal clustering, i.e., each centroid ends up at the center of one of the clusters.
 But if we know that there are K > 2 ‘real’ clusters, the chance of selecting one initial centroid from each cluster becomes small as K grows.
– For example, if K = 10 (and the clusters are equal-sized), the probability is 10!/10^10 ≈ 0.00036
– Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t
– An obvious limitation of random initial assignments (also, we normally do not know K)
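As a quick check of that figure: assuming K equal-sized clusters and K initial centroids drawn uniformly at random, the probability of hitting each cluster exactly once is K!/K^K, which the short illustrative snippet below evaluates.

from math import factorial

def prob_one_centroid_per_cluster(k):
    # K! favourable placements out of K**K equally likely ones,
    # assuming equal-sized clusters and uniformly random centroids.
    return factorial(k) / k**k

print(prob_one_centroid_per_cluster(10))   # ~0.00036288, matching the 0.00036 above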
Evaluating K-means Clusters
 Most common measure is the Sum of Squared Error (SSE)
– For each point, the error is the distance to the nearest cluster centroid
– To get SSE, we square these errors and sum them:

  SSE = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)

– x is a data point in cluster Ci and mi is the representative point (centroid) of cluster Ci
– One easy way to reduce SSE is to increase K, the number of clusters
– But in general, the fewer the clusters the better: a good clustering with smaller K can have a lower SSE than a poor clustering with higher K
 How do we minimize both K and SSE? For a given K, we can try different sets of initial random points.
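The SSE formula above translates directly into code. A minimal sketch, assuming points, centroids, and labels in the form produced by the earlier K-means sketch:

import numpy as np

def sse(points, centroids, labels):
    # Squared Euclidean distance from each point to the centroid of its own cluster, summed.
    diffs = points - centroids[labels]
    return float(np.sum(diffs ** 2))

Running K-means several times with different random initial points for the same K and keeping the run with the lowest SSE is the simple remedy mentioned in the last bullet.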
10 Clusters Example (5 pairs)
[Animation: K-means iterations 1–4 on 10 clusters arranged in 5 pairs]
Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example
[Figures: K-means iterations 1–4, one panel per iteration]
Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example
[Animation: K-means iterations 1–4 on the 10-cluster data]
Starting with some pairs of clusters having three initial centroids, while others have only one.
10 Clusters Example
[Figures: K-means iterations 1–4, one panel per iteration]
Starting with some pairs of clusters having three initial centroids, while others have only one.
Solutions to Initial Centroids Problem
 Multiple runs
– Helps, but probability is not on your side
 Start with more than k initial centroids and then select k centroids from the most widely separated resulting clusters
 Use hierarchical clustering to determine initial centroids on a small sample of the data
 Bisecting K-means
– Not as susceptible to initialization issues
 Postprocessing
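For illustration (this is not part of the lecture notes), scikit-learn's KMeans already wraps the 'multiple runs' remedy: it reruns the algorithm n_init times and keeps the result with the lowest SSE, and its default 'k-means++' initialization spreads the initial centroids apart.

import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(0).normal(size=(500, 2))   # toy data, for illustration only
km = KMeans(n_clusters=5, init="k-means++", n_init=10, random_state=0).fit(X)
print(km.inertia_)                                    # SSE of the best of the 10 runs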
Improvements
 Eliminate outliers (preprocessing) and very small clusters that may represent outliers
 Outliers are defined as points that have large distances from every centroid (they are bad because they have an excessive, quadratic, effect on the error)
 On the other hand, many outliers in the same vicinity could indicate the need to generate a new cluster
 Split ‘loose’ clusters, i.e., clusters with relatively high SSE
 Merge clusters that are ‘close’ and that have relatively low SSE
– Some clusters could turn out empty.
 These steps can also be used during the clustering process
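A minimal sketch of how the split and drop candidates above might be detected, assuming points, labels, and centroids in the form produced by the earlier K-means sketch; the loose_factor threshold is an arbitrary choice for illustration.

import numpy as np

def per_cluster_sse(points, labels, centroids):
    # SSE of each cluster around its own centroid.
    return np.array([np.sum((points[labels == i] - centroids[i]) ** 2)
                     for i in range(len(centroids))])

def flag_clusters(points, labels, centroids, loose_factor=2.0):
    # 'Loose' clusters (high SSE, candidates to split) and empty clusters (candidates to drop).
    sse = per_cluster_sse(points, labels, centroids)
    sizes = np.array([np.sum(labels == i) for i in range(len(centroids))])
    loose = np.where(sse > loose_factor * sse.mean())[0]
    empty = np.where(sizes == 0)[0]
    return loose, empty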
Bisecting K-means
 Bisecting K-means algorithm
– A variant of K-means that can produce a partitional or a hierarchical clustering
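A minimal sketch of the usual bisecting K-means loop, reusing a basic 2-means routine such as the kmeans function sketched earlier (passed in here as kmeans_fn); choosing the cluster with the largest SSE to bisect is one common option, not the only one.

import numpy as np

def bisecting_kmeans(points, k, kmeans_fn):
    # Start with a single all-inclusive cluster and repeatedly bisect the 'worst' one.
    clusters = [points]
    while len(clusters) < k:
        sses = [np.sum((c - c.mean(axis=0)) ** 2) for c in clusters]   # SSE of each cluster
        worst = clusters.pop(int(np.argmax(sses)))                     # pick the loosest cluster
        _, labels = kmeans_fn(worst, 2)                                # bisect it with 2-means
        clusters.append(worst[labels == 0])
        clusters.append(worst[labels == 1])
    return clusters

Keeping the whole sequence of bisections, rather than only the final k clusters, is what yields the hierarchical clustering mentioned above.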
Bisecting K-means Example
[Figure: bisecting K-means applied step by step to a sample data set]
Limitations of K-means
 K-means has problems when clusters are of differing
– Sizes
– Densities
– Non-globular shapes
 K-means has problems when the data contains outliers.
Limitations of K-means: Differing Sizes
[Figures: original points vs. the K-means result with 3 clusters]
Limitations of K-means: Differing Density
[Figures: original points vs. the K-means result with 3 clusters]
Limitations of K-means: Non-globular Shapes
[Figures: original points vs. the K-means result with 2 clusters]
Overcoming K-means Limitations
[Figures: original points vs. K-means with many clusters]
One solution is to use many clusters: K-means then finds parts of the natural clusters, which need to be put back together.
Overcoming K-means Limitations
[Figures: original points vs. the K-means clusters found]
Overcoming K-means Limitations
[Figures: original points vs. the K-means clusters found]
Hierarchical Clustering
 Produces a set of nested clusters organized as a hierarchical tree
 Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of merges or splits
[Figure: a hierarchical clustering of six points and the corresponding dendrogram]
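As an illustration (not from the slides), SciPy can build and draw such a dendrogram for a handful of 2-D points; the coordinates below are made up.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

pts = np.array([[0.40, 0.53], [0.22, 0.38], [0.35, 0.32],
                [0.26, 0.19], [0.08, 0.41], [0.45, 0.30]])   # six toy points
Z = linkage(pts, method="average")     # records the sequence of merges and their distances
dendrogram(Z, labels=["1", "2", "3", "4", "5", "6"])
plt.show()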
Strengths of Hierarchical Clustering
 Do not have to assume any particular number of clusters
– Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
 They may correspond to meaningful taxonomies
– Examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)
Hierarchical Clustering
 Two main types of hierarchical clustering
– Agglomerative:
   Start with the points as individual clusters
   At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
– Divisive:
   Start with one, all-inclusive cluster
   At each step, split a cluster until each cluster contains a single point (or there are k clusters)
 Traditional hierarchical algorithms use a similarity or distance matrix
– Merge or split one cluster at a time
Agglomerative Clustering Algorithm
 More popular hierarchical clustering technique
 Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains
 Key operation is the computation of the proximity of two clusters
– Different approaches to defining the distance between clusters distinguish the different algorithms (a minimal sketch follows below)
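The sketch below implements steps 1–6 for the MIN (single-link) choice of inter-cluster distance; swapping min for max or a mean in the inner computation gives MAX or group average, which is exactly the design choice discussed on the next slides. This is a plain illustrative implementation, not an efficient one.

import numpy as np

def agglomerative_min(points):
    # 1. Proximity matrix of pairwise Euclidean distances.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    # 2. Each data point starts as its own cluster (stored as lists of point indices).
    clusters = [[i] for i in range(len(points))]
    merges = []
    # 3./6. Repeat until only a single cluster remains.
    while len(clusters) > 1:
        # 4. Find the two closest clusters under MIN (smallest point-to-point distance).
        best = (np.inf, 0, 1)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((list(clusters[a]), list(clusters[b]), d))
        # 5. 'Update the proximity matrix': merging the index lists is enough here,
        #    since MIN distances are recomputed from the original pairwise distances.
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges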
How to Define Inter-Cluster Similarity
[Figure: two candidate clusters over points p1–p5 and their proximity matrix; which entries define the similarity between the two clusters?]
 MIN
 MAX
 Group Average
 Distance Between Centroids
 Objective function
– Ward’s Method
Cluster Similarity: Ward’s Method
 Similarity of two clusters is based on the increase in squared error when the two clusters are merged
– Similar to group average if the distance between points is the squared distance
 Less susceptible to noise and outliers
 Biased towards globular clusters
 Hierarchical analogue of K-means
– Can be used to initialize K-means
Recent Hierarchical Clustering Methods
 Major weaknesses of agglomerative clustering methods
– Do not scale well: time complexity of at least O(n^2), where n is the total number of objects
– Can never undo what was done previously
 Integration of hierarchical with distance-based clustering
– BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
– ROCK (1999): clustering categorical data by neighbor and link analysis
– CHAMELEON (1999): hierarchical clustering using dynamic modeling
Cluster Validity
 For supervised classification we have a variety of measures to evaluate how good our model is
– Accuracy, precision, recall
 For cluster analysis, the analogous question is: how do we evaluate the “goodness” of the resulting clusters?
 But “clusters are in the eye of the beholder”!
Using Similarity Matrix for Cluster Validation
 Order the similarity matrix with respect to cluster labels and inspect visually.
[Figures: well-separated clusters in the (x, y) plane and the corresponding point-by-point similarity matrix, ordered by cluster label, showing sharp blocks along the diagonal]
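A small sketch of the reordering step, assuming a point-by-point similarity matrix sim and an array of cluster labels are already available:

import numpy as np
import matplotlib.pyplot as plt

def plot_sorted_similarity(sim, labels):
    # Group rows/columns of the same cluster together; well-separated clusters
    # then appear as bright square blocks along the diagonal.
    order = np.argsort(labels)
    sorted_sim = sim[np.ix_(order, order)]
    plt.imshow(sorted_sim, vmin=0.0, vmax=1.0)
    plt.colorbar(label="similarity")
    plt.show()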
Using Similarity Matrix for Cluster Validation
 Clusters in random data are not so crisp
[Figures: K-means clusters found in random (x, y) data and the corresponding similarity matrix, ordered by cluster label; the block structure is much less distinct]
K-means on data streams
 The stream is partitioned into windows
 Initially: for each new point P, a new centroid is created with probability d/D, where D is proportional to the size of the domain and d is the actual distance of P to the closest centroid [Callaghan ’02]
 Later windows can use the centroids of the previous window, improved by
– Eliminating empty clusters
– Splitting large sparse ones
– Merging large clusters
 Large changes in cluster position/population denote concept changes.
 Maintenance on sliding windows, or window panes, is harder.
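A minimal sketch of the d/D centroid-creation rule for one window; the window handling, the choice of D, and the names are assumptions for illustration, not the exact algorithm of [Callaghan ’02].

import random
import numpy as np

def process_window(window, centroids, D):
    # For each point P: with probability d/D (d = distance to the closest centroid)
    # create a new centroid at P, otherwise assign P to that closest centroid.
    assignments = []
    for p in window:
        p = np.asarray(p, dtype=float)
        if not centroids:
            centroids.append(p)                    # the first point seeds the first centroid
            assignments.append(0)
            continue
        dists = [np.linalg.norm(p - c) for c in centroids]
        d = min(dists)
        if random.random() < min(1.0, d / D):      # far-away points tend to open new clusters
            centroids.append(p)
            assignments.append(len(centroids) - 1)
        else:
            assignments.append(int(np.argmin(dists)))
    return centroids, assignments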