CS590D: Data Mining
Prof. Chris Clifton
February 21, 2006
Clustering
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
2
What is Cluster Analysis?
• Finding groups of objects such that the objects in a
group will be similar (or related) to one another and
different from (or unrelated to) the objects in other
groups
– Intra-cluster distances are minimized; inter-cluster distances are maximized
CS590D
3
What is Cluster Analysis?
• Cluster: a collection of data objects
– Similar to one another within the same cluster
– Dissimilar to the objects in other clusters
• Cluster analysis
– Grouping a set of data objects into clusters
• Clustering is unsupervised classification: no
predefined classes
• Typical applications
– As a stand-alone tool to get insight into data
distribution
– As a preprocessing step for other algorithms
CS590D
4
General Applications of
Clustering
• Pattern Recognition
• Spatial Data Analysis
– create thematic maps in GIS by clustering feature
spaces
– detect spatial clusters and explain them in spatial
data mining
• Image Processing
• Economic Science (especially market research)
• WWW
– Document classification
– Cluster Weblog data to discover groups of similar
access patterns
CS590D
5
Examples of Clustering
Applications
• Marketing: Help marketers discover distinct groups in their customer
bases, and then use this knowledge to develop targeted marketing
programs
• Land use: Identification of areas of similar land use in an earth
observation database
• Insurance: Identifying groups of motor insurance policy holders with
a high average claim cost
• City-planning: Identifying groups of houses according to their house
type, value, and geographical location
• Earthquake studies: observed earthquake epicenters should be
clustered along continent faults
CS590D
6
Notion of a Cluster can be
Ambiguous
(Figure: How many clusters? The same set of points can be grouped into two, four, or six clusters.)
CS590D
7
What Is Good Clustering?
• A good clustering method will produce high quality
clusters with
– high intra-class similarity
– low inter-class similarity
• The quality of a clustering result depends on both the
similarity measure used by the method and its
implementation.
• The quality of a clustering method is also measured by
its ability to discover some or all of the hidden patterns.
CS590D
8
Types of Clusterings
• A clustering is a set of clusters
• Important distinction between hierarchical and
partitional sets of clusters
• Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters)
such that each data object is in exactly one subset
• Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree
CS590D
9
Partitional Clustering
(Figure: the original points and one partitional clustering of them.)
CS590D
10
Hierarchical Clustering
(Figure: a traditional hierarchical clustering of points p1, p2, p3, p4 with its dendrogram, and a non-traditional hierarchical clustering of the same points with its dendrogram.)
CS590D
11
Other Distinctions Between
Sets of Clusters
• Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple
clusters.
– Can represent multiple classes or ‘border’ points
• Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some
weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
• Partial versus complete
– In some cases, we only want to cluster some of the data
• Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities
CS590D
12
Types of Clusters
• Well-separated clusters
• Center-based clusters
• Contiguous clusters
• Density-based clusters
• Property or conceptual
• Described by an objective function
CS590D
13
Types of Clusters: Well-Separated
• Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than to
any point not in the cluster.
3 well-separated clusters
CS590D
14
Types of Clusters: Center-Based
• Center-based
– A cluster is a set of objects such that an object in a cluster is
closer (more similar) to the “center” of a cluster, than to the
center of any other cluster
– The center of a cluster is often a centroid, the average of all the
points in the cluster, or a medoid, the most “representative” point
of a cluster
4 center-based clusters
CS590D
15
Types of Clusters: Contiguity-Based
• Contiguous Cluster (Nearest neighbor or
Transitive)
– A cluster is a set of points such that a point in a cluster is closer
(or more similar) to one or more other points in the cluster than
to any point not in the cluster.
8 contiguous clusters
CS590D
16
Types of Clusters: Density-Based
• Density-based
– A cluster is a dense region of points, separated from other regions of high density by low-density regions.
– Used when the clusters are irregular or intertwined, and when
noise and outliers are present.
6 density-based clusters
CS590D
17
Types of Clusters: Conceptual
Clusters
• Shared Property or Conceptual Clusters
– Finds clusters that share some common property or represent a
particular concept.
2 Overlapping Circles
CS590D
18
Types of Clusters: Objective
Function
• Clusters Defined by an Objective Function
– Finds clusters that minimize or maximize an objective function.
– Enumerate all possible ways of dividing the points into clusters
and evaluate the `goodness' of each potential set of clusters by
using the given objective function. (NP Hard)
– Can have global or local objectives.
• Hierarchical clustering algorithms typically have local objectives
• Partitional algorithms typically have global objectives
– A variation of the global objective function approach is to fit the
data to a parameterized model.
• Parameters for the model are determined from the data.
• Mixture models assume that the data is a ‘mixture' of a number of
statistical distributions.
CS590D
19
Types of Clusters: Objective
Function …
• Map the clustering problem to a different domain
and solve a related problem in that domain
– Proximity matrix defines a weighted graph, where the
nodes are the points being clustered, and the
weighted edges represent the proximities between
points
– Clustering is equivalent to breaking the graph into
connected components, one for each cluster.
– Want to minimize the edge weight between clusters
and maximize the edge weight within clusters
CS590D
20
Requirements of Clustering in
Data Mining
• Scalability
• Ability to deal with different types of attributes
• Discovery of clusters with arbitrary shape
• Minimal requirements for domain knowledge to determine input
parameters
• Able to deal with noise and outliers
• Insensitive to order of input records
• High dimensionality
• Incorporation of user-specified constraints
• Interpretability and usability
CS590D
21
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
22
Data Structures
• Data matrix
– (two modes): n objects described by p variables
$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$
• Dissimilarity matrix
– (one mode): n-by-n, lower triangular
$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$
CS590D
23
Measure the Quality of
Clustering
• Dissimilarity/Similarity metric: Similarity is expressed in
terms of a distance function, which is typically metric:
d(i, j)
• There is a separate “quality” function that measures the
“goodness” of a cluster.
• The definitions of distance functions are usually very
different for interval-scaled, boolean, categorical, ordinal
and ratio variables.
• Weights should be associated with different variables
based on applications and data semantics.
• It is hard to define “similar enough” or “good enough”
– the answer is typically highly subjective.
CS590D
24
Type of data in clustering
analysis
• Interval-scaled variables:
• Binary variables:
• Nominal, ordinal, and ratio variables:
• Variables of mixed types:
CS590D
25
Interval-valued variables
• Standardize data
– Calculate the mean absolute deviation:
$s_f = \frac{1}{n}(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|)$
where $m_f = \frac{1}{n}(x_{1f} + x_{2f} + \cdots + x_{nf})$
– Calculate the standardized measurement (z-score):
$z_{if} = \frac{x_{if} - m_f}{s_f}$
• Using mean absolute deviation is more robust than using
standard deviation
CS590D
26
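A minimal sketch of this standardization in Python (NumPy-based; the helper name mad_standardize is mine, not from the course):

import numpy as np

def mad_standardize(X):
    """Standardize each variable f: z_if = (x_if - m_f) / s_f,
    where s_f is the mean absolute deviation of variable f."""
    X = np.asarray(X, dtype=float)
    m = X.mean(axis=0)                   # m_f: per-variable mean
    s = np.abs(X - m).mean(axis=0)       # s_f: mean absolute deviation
    return (X - m) / s                   # standardized measurements (z-scores)

# Example: three objects measured on two interval-scaled variables
print(mad_standardize([[1.0, 10.0], [2.0, 20.0], [3.0, 60.0]]))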
Similarity and Dissimilarity
Between Objects
• Distances are normally used to measure the similarity or
dissimilarity between two data objects
• Some popular ones include: Minkowski distance:
$d(i,j) = \sqrt[q]{|x_{i1}-x_{j1}|^q + |x_{i2}-x_{j2}|^q + \cdots + |x_{ip}-x_{jp}|^q}$
where $i = (x_{i1}, x_{i2}, \ldots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jp})$ are two p-dimensional data objects, and q is a positive integer
• If q = 1, d is Manhattan distance:
$d(i,j) = |x_{i1}-x_{j1}| + |x_{i2}-x_{j2}| + \cdots + |x_{ip}-x_{jp}|$
CS590D
27
Similarity and Dissimilarity
Between Objects (Cont.)
• If q = 2, d is Euclidean distance:
$d(i,j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$
– Properties
• $d(i,j) \ge 0$
• $d(i,i) = 0$
• $d(i,j) = d(j,i)$
• $d(i,j) \le d(i,k) + d(k,j)$
• Also, one can use weighted distance, parametric
Pearson product moment correlation, or other dissimilarity
measures
CS590D
28
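A small plain-Python illustration of the Minkowski family (not course code): q = 1 gives the Manhattan distance and q = 2 the Euclidean distance.

def minkowski(x, y, q):
    """Minkowski distance between two p-dimensional objects x and y."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

x, y = (1, 2, 3), (4, 6, 3)
print(minkowski(x, y, 1))   # Manhattan: |1-4| + |2-6| + |3-3| = 7
print(minkowski(x, y, 2))   # Euclidean: sqrt(9 + 16 + 0) = 5.0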
Binary Variables
• A contingency table for binary data (object i vs. object j):

              Object j
                1        0       sum
  Object i  1   a        b       a+b
            0   c        d       c+d
          sum   a+c      b+d     p

• Simple matching coefficient (invariant, if the binary variable is symmetric):
$d(i,j) = \frac{b+c}{a+b+c+d}$
• Jaccard coefficient (noninvariant if the binary variable is asymmetric):
$d(i,j) = \frac{b+c}{a+b+c}$
CS590D
30
Dissimilarity between Binary
Variables
• Example:

  Name   Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
  Jack   M       Y      N      P       N       N       N
  Mary   F       Y      N      P       N       P       N
  Jim    M       Y      P      N       N       N       N

– gender is a symmetric attribute
– the remaining attributes are asymmetric binary
– let the values Y and P be set to 1, and the value N be set to 0

$d(\text{jack}, \text{mary}) = \frac{0+1}{2+0+1} = 0.33$
$d(\text{jack}, \text{jim}) = \frac{1+1}{1+1+1} = 0.67$
$d(\text{jim}, \text{mary}) = \frac{1+2}{1+1+2} = 0.75$
CS590D
31
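A short sketch that reproduces the example above; the helper name asym_binary_dissim is mine, and the asymmetric attributes use the Jaccard-style dissimilarity (b + c) / (a + b + c).

def asym_binary_dissim(u, v):
    """Dissimilarity for asymmetric binary vectors (Y/P coded as 1, N as 0)."""
    a = sum(1 for x, y in zip(u, v) if x == 1 and y == 1)
    b = sum(1 for x, y in zip(u, v) if x == 1 and y == 0)
    c = sum(1 for x, y in zip(u, v) if x == 0 and y == 1)
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1 .. Test-4, with Y/P -> 1 and N -> 0
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(round(asym_binary_dissim(jack, mary), 2))  # 0.33
print(round(asym_binary_dissim(jack, jim), 2))   # 0.67
print(round(asym_binary_dissim(jim, mary), 2))   # 0.75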
Nominal Variables
• A generalization of the binary variable in that it can take
more than 2 states, e.g., red, yellow, blue, green
• Method 1: Simple matching
– m: # of matches, p: total # of variables
$d(i,j) = \frac{p - m}{p}$
• Method 2: use a large number of binary variables
– creating a new binary variable for each of the M nominal states
CS590D
32
Ordinal Variables
• An ordinal variable can be discrete or continuous
• Order is important, e.g., rank
• Can be treated like interval-scaled
– replace $x_{if}$ by its rank $r_{if} \in \{1, \ldots, M_f\}$
– map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by
$z_{if} = \frac{r_{if} - 1}{M_f - 1}$
– compute the dissimilarity using methods for interval-scaled
variables
CS590D
33
Ratio-Scaled Variables
• Ratio-scaled variable: a positive measurement on a
nonlinear scale, approximately at exponential scale,
such as $Ae^{Bt}$ or $Ae^{-Bt}$
• Methods:
– treat them like interval-scaled variables—not a good choice!
(why?—the scale can be distorted)
– apply logarithmic transformation
$y_{if} = \log(x_{if})$
– treat them as continuous ordinal data and treat their rank as interval-scaled
CS590D
34
Variables of Mixed Types
• A database may contain all the six types of variables
– symmetric binary, asymmetric binary, nominal, ordinal, interval
and ratio
• One may use a weighted formula to combine their effects:
$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$
– f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, otherwise $d_{ij}^{(f)} = 1$
– f is interval-based: use the normalized distance
– f is ordinal or ratio-scaled:
• compute ranks $r_{if}$ and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
• and treat $z_{if}$ as interval-scaled
CS590D
35
CS590D: Data Mining
Prof. Chris Clifton
February 23, 2006
Clustering
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
40
Partitioning Algorithms: Basic
Concept
• Partitioning method: Construct a partition of a database
D of n objects into a set of k clusters
• Given a k, find a partition of k clusters that optimizes the
chosen partitioning criterion
– Global optimal: exhaustively enumerate all partitions
– Heuristic methods: k-means and k-medoids algorithms
– k-means (MacQueen’67): Each cluster is represented by the
center of the cluster
– k-medoids or PAM (Partition around medoids) (Kaufman &
Rousseeuw’87): Each cluster is represented by one of the
objects in the cluster
CS590D
41
The K-Means Clustering
Method
• Given k, the k-means algorithm is implemented in four
steps:
– Partition objects into k nonempty subsets
– Compute seed points as the centroids of the clusters of the
current partition (the centroid is the center, i.e., mean point, of
the cluster)
– Assign each object to the cluster with the nearest seed point
– Go back to Step 2; stop when there are no more new assignments
CS590D
42
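A minimal NumPy sketch of these four steps (illustrative only; it simply uses the first k objects as the arbitrary initial centers):

import numpy as np

def kmeans(X, k, max_iter=100):
    """Basic k-means: assign points to the nearest center, recompute means, repeat."""
    X = np.asarray(X, dtype=float)
    centers = X[:k].copy()                       # arbitrary initial cluster centers
    for _ in range(max_iter):
        # Assign each object to the cluster with the nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):    # no more change: stop
            break
        centers = new_centers
    return labels, centers

labels, centers = kmeans([[1, 1], [1.5, 2], [8, 8], [9, 8.5]], k=2)
print(labels, centers)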
The K-Means Clustering
Method
(Figure: K-means with K = 2 on a 2-D point set: arbitrarily choose K objects as initial cluster centers, assign each object to the most similar center, update the cluster means, reassign, and repeat until assignments stop changing.)
CS590D
43
Comments on the K-Means
Method
• Strength: Relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations. Normally, k, t << n.
– Comparing: PAM: O(k(n-k)^2), CLARA: O(ks^2 + k(n-k))
• Comment: Often terminates at a local optimum. The global optimum may be found using techniques such as deterministic annealing and genetic algorithms
• Weakness
– Applicable only when a mean is defined; what about categorical data?
– Need to specify k, the number of clusters, in advance
– Unable to handle noisy data and outliers
– Not suitable to discover clusters with non-convex shapes
CS590D
44
Variations of the K-Means
Method
• A few variants of the k-means which differ in
– Selection of the initial k means
– Dissimilarity calculations
– Strategies to calculate cluster means
• Handling categorical data: k-modes (Huang’98)
– Replacing means of clusters with modes
– Using new dissimilarity measures to deal with categorical objects
– Using a frequency-based method to update modes of clusters
– A mixture of categorical and numerical data: k-prototype method
CS590D
45
What is the problem of the k-Means Method?
• The k-means algorithm is sensitive to outliers !
– Since an object with an extremely large value may substantially distort
the distribution of the data.
• K-Medoids: Instead of taking the mean value of the object in a
cluster as a reference point, medoids can be used, which is the
most centrally located object in a cluster.
(Figure: two 2-D point sets contrasting the mean and the medoid as the reference point of a cluster.)
CS590D
46
Importance of Choosing Initial
Centroids
(Figure: Iterations 1 through 6 of K-means on the same 2-D data set, showing how the clusters evolve from the chosen initial centroids.)
CS590D
47
Solutions to Initial Centroids
Problem
• Multiple runs
– Helps, but probability is not on your side
• Sample and use hierarchical clustering to
determine initial centroids
• Select more than k initial centroids and then
select among these initial centroids
– Select most widely separated
• Postprocessing
• Bisecting K-means
– Not as susceptible to initialization issues
CS590D
48
Limitations of K-means:
Differing Density
(Figure: original points whose clusters differ in density, and the K-means result with 3 clusters.)
CS590D
49
Limitations of K-means: Non-globular Shapes
(Figure: original points forming non-globular clusters, and the K-means result with 2 clusters.)
CS590D
50
The K-Medoids Clustering
Method
• Find representative objects, called medoids, in clusters
• PAM (Partitioning Around Medoids, 1987)
– starts from an initial set of medoids and iteratively replaces one of the
medoids by one of the non-medoids if it improves the total distance of
the resulting clustering
– PAM works effectively for small data sets, but does not scale well for
large data sets
• CLARA (Kaufmann & Rousseeuw, 1990)
• CLARANS (Ng & Han, 1994): Randomized sampling
• Focusing + spatial data structure (Ester et al., 1995)
CS590D
51
Typical k-medoids algorithm
(PAM)
(Figure: one pass of PAM with K = 2. Arbitrarily choose k objects as initial medoids (total cost = 20); assign each remaining object to the nearest medoid; randomly select a non-medoid object O_random; compute the total cost of swapping a medoid with O_random (total cost = 26); swap them if quality is improved; repeat the loop until no change.)
CS590D
52
PAM (Partitioning Around
Medoids) (1987)
• PAM (Kaufman and Rousseeuw, 1987), built in Splus
• Use real object to represent the cluster
– Select k representative objects arbitrarily
– For each pair of non-selected object h and selected object i,
calculate the total swapping cost TCih
– For each pair of i and h,
• If TCih < 0, i is replaced by h
• Then assign each non-selected object to the most similar
representative object
– repeat steps 2-3 until there is no change
CS590D
53
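An illustrative brute-force sketch of the PAM idea (not the course's implementation): try swapping each medoid with each non-medoid, and keep the swap whenever the total distance of the clustering improves.

import numpy as np

def total_cost(D, medoids):
    """Sum over all objects of the distance to their nearest medoid."""
    return D[:, medoids].min(axis=1).sum()

def pam(D, k):
    """D is an n x n dissimilarity matrix; returns the indices of k medoids."""
    n = len(D)
    medoids = list(range(k))                    # arbitrary initial medoids
    improved = True
    while improved:
        improved = False
        for mi in range(k):
            for h in range(n):
                if h in medoids:
                    continue
                candidate = medoids.copy()
                candidate[mi] = h               # swap medoid mi with non-medoid h
                if total_cost(D, candidate) < total_cost(D, medoids):
                    medoids, improved = candidate, True
    return medoids

# Toy example: pairwise distances of the 1-D points 0, 1, 9, 10
pts = np.array([0.0, 1.0, 9.0, 10.0])
D = np.abs(pts[:, None] - pts[None, :])
print(pam(D, k=2))   # one medoid from {0, 1} and one from {9, 10}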
PAM Clustering: Total swapping
cost $TC_{ih} = \sum_j C_{jih}$
(Figure: the four cases for the cost change $C_{jih}$ of a point j, whose second-closest medoid is t, when medoid i is swapped with non-medoid h:
• j is reassigned from i to h: $C_{jih} = d(j, h) - d(j, i)$
• j stays with its current medoid t: $C_{jih} = 0$
• j is reassigned from i to t: $C_{jih} = d(j, t) - d(j, i)$
• j is reassigned from t to h: $C_{jih} = d(j, h) - d(j, t)$ )
What is the problem with
PAM?
• PAM is more robust than k-means in the presence of noise and outliers because a medoid is less influenced by outliers or other extreme values than a mean
• PAM works efficiently for small data sets but does not scale well for large data sets.
– O(k(n-k)^2) for each iteration, where n is # of data points and k is # of clusters
=> Sampling-based method: CLARA (Clustering LARge Applications)
CS590D
55
CLARA (Clustering Large
Applications) (1990)
• CLARA (Kaufmann and Rousseeuw in 1990)
– Built in statistical analysis packages, such as S+
• It draws multiple samples of the data set, applies PAM on each
sample, and gives the best clustering as the output
• Strength: deals with larger data sets than PAM
• Weakness:
– Efficiency depends on the sample size
– A good clustering based on samples will not necessarily represent a
good clustering of the whole data set if the sample is biased
CS590D
56
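A sketch of the CLARA idea, reusing the pam() and total-cost logic from the PAM sketch above (the number of samples and the sample size are arbitrary choices here): draw several random samples, run PAM on each sample, and keep the medoids that give the lowest cost over the whole data set.

import numpy as np

def clara(X, k, n_samples=5, sample_size=40, seed=0):
    """Run PAM on several random samples; return the best medoids (rows of X)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    best_medoids, best_cost = None, np.inf
    for _ in range(n_samples):
        idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
        S = X[idx]
        D_sample = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)
        med_local = pam(D_sample, k)             # medoids found within the sample
        medoids = X[idx[med_local]]
        # Evaluate these medoids on the whole data set, not just the sample
        cost = np.linalg.norm(X[:, None, :] - medoids[None, :, :], axis=2).min(axis=1).sum()
        if cost < best_cost:
            best_medoids, best_cost = medoids, cost
    return best_medoids, best_cost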
CLARANS (“Randomized”
CLARA) (1994)
• CLARANS (A Clustering Algorithm based on Randomized Search)
(Ng and Han’94)
• CLARANS draws sample of neighbors dynamically
• The clustering process can be presented as searching a graph
where every node is a potential solution, that is, a set of k medoids
• If the local optimum is found, CLARANS starts with new randomly
selected node in search for a new local optimum
• It is more efficient and scalable than both PAM and CLARA
• Focusing techniques and spatial access structures may further
improve its performance (Ester et al.’95)
CS590D
57
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
58
Hierarchical Clustering
• Use distance matrix as clustering criteria. This
method does not require the number of clusters
k as an input, but needs a termination condition
(Figure: agglomerative clustering (AGNES) proceeds from Step 0 to Step 4, merging a and b into ab, d and e into de, then cde, and finally abcde; divisive clustering (DIANA) performs the same splits in reverse, from Step 4 back to Step 0.)
CS590D
59
AGNES (Agglomerative
Nesting)
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., Splus
• Use the Single-Link method and the dissimilarity matrix
• Merge nodes that have the least dissimilarity
• Go on in a non-descending fashion
• Eventually all nodes belong to the same cluster
(Figure: three snapshots of AGNES merging 2-D points into progressively larger clusters.)
CS590D
60
Agglomerative Clustering
Algorithm
• More popular hierarchical clustering technique
• Basic algorithm is straightforward
  1. Compute the proximity matrix
  2. Let each data point be a cluster
  3. Repeat
  4.   Merge the two closest clusters
  5.   Update the proximity matrix
  6. Until only a single cluster remains
• Key operation is the computation of the proximity of two clusters
– Different approaches to defining the distance between clusters distinguish the different algorithms
CS590D
61
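A naive sketch of this basic algorithm using single-link (MIN) proximity between clusters; it records the merge order much as a dendrogram would (illustrative code, not from the slides):

import numpy as np

def single_link_agglomerative(X):
    """Repeatedly merge the two closest clusters until one cluster remains."""
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # proximity matrix
    clusters = [[i] for i in range(len(X))]                     # each point is a cluster
    merges = []
    while len(clusters) > 1:
        # Find the pair of clusters with the smallest single-link distance
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, d = best
        merges.append((clusters[a], clusters[b], d))
        clusters[a] = clusters[a] + clusters[b]                 # merge the two clusters
        del clusters[b]
    return merges

for step in single_link_agglomerative([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0]]):
    print(step)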
Starting Situation
• Start with clusters of individual points and a proximity matrix
(Figure: points p1, p2, p3, ... each start as their own cluster; the proximity matrix has one row and one column per point.)
CS590D
62
Intermediate Situation
• After some merging steps, we have some clusters
(Figure: clusters C1 through C5, with the proximity matrix now holding one row and one column per cluster.)
CS590D
63
Intermediate Situation
• We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
(Figure: clusters C1 through C5 and their proximity matrix, with C2 and C5 about to be merged.)
CS590D
64
After Merging
• The question is "How do we update the proximity matrix?"
(Figure: after merging C2 and C5, the proximity-matrix entries between C2 ∪ C5 and C1, C3, C4 are marked "?" and must be recomputed.)
CS590D
65
How to Define Inter-Cluster
Similarity
(Figure: two groups of points p1 ... p5, the question "Similarity?", and the proximity matrix.)
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an objective function
– Ward's Method uses squared error
CS590D
66
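These proximity definitions translate directly into code; a small sketch, assuming a precomputed pairwise distance matrix D and clusters given as lists of point indices:

import numpy as np

def cluster_distance(D, A, B, method="min"):
    """Inter-cluster proximity between clusters A and B (lists of point indices)."""
    pair = D[np.ix_(A, B)]              # all pairwise distances between A and B
    if method == "min":                 # MIN (single link)
        return pair.min()
    if method == "max":                 # MAX (complete link)
        return pair.max()
    if method == "avg":                 # group average
        return pair.mean()
    raise ValueError("unknown method")

def centroid_distance(X, A, B):
    """Distance between the centroids of clusters A and B."""
    X = np.asarray(X, dtype=float)
    return np.linalg.norm(X[A].mean(axis=0) - X[B].mean(axis=0))

X = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
print(cluster_distance(D, [0, 1], [2, 3], "min"), centroid_distance(X, [0, 1], [2, 3]))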
Hierarchical Clustering: Time and
Space requirements
• O(N^2) space since it uses the proximity matrix.
– N is the number of points.
• O(N^3) time in many cases
– There are N steps, and at each step the proximity matrix, of size N^2, must be updated and searched
– Complexity can be reduced to O(N^2 log N) time for some approaches
CS590D
84
Hierarchical Clustering: Problems
and Limitations
• Once a decision is made to combine two
clusters, it cannot be undone
• No objective function is directly minimized
• Different schemes have problems with one
or more of the following:
– Sensitivity to noise and outliers
– Difficulty handling different sized clusters and
convex shapes
– Breaking large clusters
CS590D
85
A Dendrogram Shows How the
Clusters are Merged Hierarchically
• Decompose data objects into several levels of
nested partitioning (tree of clusters), called a
dendrogram.
• A clustering of the data objects is obtained by
cutting the dendrogram at the desired level, then
each connected component forms a cluster.
CS590D
86
DIANA (Divisive Analysis)
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., Splus
• Inverse order of AGNES
• Eventually each node forms a cluster on its own
(Figure: three snapshots of DIANA splitting one cluster of 2-D points into progressively smaller clusters.)
CS590D
87
More on Hierarchical
Clustering Methods
• Major weakness of agglomerative clustering methods
– do not scale well: time complexity of at least O(n^2), where n is
the number of total objects
– can never undo what was done previously
• Integration of hierarchical with distance-based clustering
– BIRCH (1996): uses CF-tree and incrementally adjusts the
quality of sub-clusters
– CURE (1998): selects well-scattered points from the cluster and
then shrinks them towards the center of the cluster by a specified
fraction
– CHAMELEON (1999): hierarchical clustering using dynamic
modeling
CS590D
90
BIRCH (1996)
• Birch: Balanced Iterative Reducing and Clustering using Hierarchies,
by Zhang, Ramakrishnan, Livny (SIGMOD’96)
• Incrementally construct a CF (Clustering Feature) tree, a
hierarchical data structure for multiphase clustering
– Phase 1: scan DB to build an initial in-memory CF tree (a multi-level
compression of the data that tries to preserve the inherent clustering
structure of the data)
– Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes
of the CF-tree
• Scales linearly: finds a good clustering with a single scan and
improves the quality with a few additional scans
• Weakness: handles only numeric data, and sensitive to the order of
the data record.
CS590D
91
Clustering Feature Vector
Clustering Feature: CF = (N, LS, SS)
N: Number of data points
LS: Ni=1=Xi
SS: Ni=1=Xi2
CF = (5, (16,30),(54,190))
10
9
8
7
6
5
4
3
2
1
0
0
1
2
3
4
5
6
7
CS590D
8
9
10
(3,4)
(2,6)
(4,5)
(4,7)
(3,8)
92
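A quick sketch that computes a CF triple and reproduces the example above (the helper name cf is mine; SS is kept per dimension here, matching the (54, 190) in the example):

import numpy as np

def cf(points):
    """Clustering Feature of a set of d-dimensional points: (N, LS, SS)."""
    P = np.asarray(points, dtype=float)
    N = len(P)
    LS = P.sum(axis=0)              # linear sum, per dimension
    SS = (P ** 2).sum(axis=0)       # square sum, per dimension
    return N, tuple(LS), tuple(SS)

print(cf([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]))
# -> (5, (16.0, 30.0), (54.0, 190.0)), i.e. CF = (5, (16,30), (54,190))

Because N, LS, and SS are all sums, two CFs can be merged by component-wise addition, which is what the non-leaf nodes of the CF tree store for their children.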
CF-Tree in BIRCH
• Clustering feature:
– summary of the statistics for a given subcluster: the 0-th, 1st and
2nd moments of the subcluster from the statistical point of view.
– registers crucial measurements for computing clusters and utilizes storage efficiently
A CF tree is a height-balanced tree that stores the
clustering features for a hierarchical clustering
– A nonleaf node in a tree has descendants or “children”
– The nonleaf nodes store sums of the CFs of their children
• A CF tree has two parameters
– Branching factor: specify the maximum number of children.
– threshold: max diameter of sub-clusters stored at the leaf nodes
CS590D
93
CF Tree
(Figure: a CF tree with branching factor B = 7 and leaf capacity L = 6. The root and non-leaf nodes hold entries CF1, CF2, ..., each pointing to a child node; leaf nodes hold CF entries plus prev/next pointers linking neighboring leaves.)
CURE (Clustering Using
REpresentatives )
• CURE: proposed by Guha, Rastogi & Shim, 1998
– Stops the creation of a cluster hierarchy if a level consists of k
clusters
– Uses multiple representative points to evaluate the distance
between clusters, adjusts well to arbitrary shaped clusters and
avoids single-link effect
CS590D
95
Drawbacks of Distance-Based Method
• Drawbacks of square-error based clustering method
– Consider only one point as representative of a cluster
– Good only for clusters of convex shape, similar size and density, and only if k can be reasonably estimated
CS590D
96
Cure: The Algorithm
• Draw a random sample s.
• Partition the sample into p partitions, each of size s/p
• Partially cluster each partition into s/(pq) clusters
• Eliminate outliers
– By random sampling
– If a cluster grows too slowly, eliminate it.
• Cluster the partial clusters.
• Label the data on disk
CS590D
97
Data Partitioning and
Clustering
– s = 50, p = 2, s/p = 25, s/pq = 5
(Figure: the sample is split into two partitions, each partially clustered before the partial clusters are merged.)
CS590D
98
Cure: Shrinking
Representative Points
(Figure: the representative points of a cluster before and after shrinking.)
• Shrink the multiple representative points towards the gravity center by a fraction of α.
• Multiple representatives capture the shape of the cluster
CS590D
99
Clustering Categorical Data:
ROCK
• ROCK: Robust Clustering using linKs,
by S. Guha, R. Rastogi, K. Shim (ICDE’99).
– Use links to measure similarity/proximity
– Not distance based
– Computational complexity: $O(n^2 + n \cdot m_m \cdot m_a + n^2 \log n)$
• Basic ideas:
– Similarity function and neighbors:
$Sim(T_1, T_2) = \frac{|T_1 \cap T_2|}{|T_1 \cup T_2|}$
Let T1 = {1,2,3}, T2 = {3,4,5}:
$Sim(T_1, T_2) = \frac{|\{3\}|}{|\{1,2,3,4,5\}|} = \frac{1}{5} = 0.2$
CS590D
100
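The similarity function and the derived link counts are easy to sketch; here link(i, j) counts common neighbors, with two transactions treated as neighbors when their similarity is at least a threshold theta (chosen arbitrarily for illustration):

def sim(t1, t2):
    """Jaccard-style similarity between two transactions (sets of items)."""
    t1, t2 = set(t1), set(t2)
    return len(t1 & t2) / len(t1 | t2)

def links(data, theta):
    """link(i, j) = number of points that are neighbors of both i and j."""
    n = len(data)
    neighbor = [[sim(data[i], data[j]) >= theta for j in range(n)] for i in range(n)]
    return [[sum(neighbor[i][m] and neighbor[j][m] for m in range(n))
             for j in range(n)] for i in range(n)]

print(sim({1, 2, 3}, {3, 4, 5}))                 # 0.2, as in the example above
print(links([{1, 2, 3}, {1, 2, 4}, {3, 4, 5}, {4, 5, 6}], theta=0.4))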
CHAMELEON (Hierarchical
clustering using dynamic modeling)
• CHAMELEON: by G. Karypis, E.H. Han, and V. Kumar'99
• Measures the similarity based on a dynamic model
– Two clusters are merged only if the interconnectivity and closeness (proximity) between two clusters are high relative to the internal interconnectivity of the clusters and closeness of items within the clusters
– Cure ignores information about interconnectivity of the objects, Rock ignores information about the closeness of two clusters
• A two-phase algorithm
  1. Use a graph partitioning algorithm: cluster objects into a large number of relatively small sub-clusters
  2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters
CS590D
102
Overall Framework of
CHAMELEON
(Figure: from the data set, construct a sparse graph, partition the graph into sub-clusters, then merge the partitions to obtain the final clusters.)
CS590D
103
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
105
Density-Based Clustering
Methods
• Clustering based on density (local cluster criterion), such
as density-connected points
• Major features:
– Discover clusters of arbitrary shape
– Handle noise
– One scan
– Need density parameters as termination condition
• Several interesting studies:
– DBSCAN: Ester, et al. (KDD'96)
– OPTICS: Ankerst, et al. (SIGMOD'99)
– DENCLUE: Hinneburg & D. Keim (KDD'98)
– CLIQUE: Agrawal, et al. (SIGMOD'98)
CS590D
106
Density Concepts
• Core object (CO)–object with at least ‘M’ objects within a
radius ‘E-neighborhood’
• Directly density reachable (DDR)–x is CO, y is in x’s ‘E-
neighborhood’
• Density reachable–there exists a chain of DDR objects
from x to y
• Density based cluster–density connected objects
maximum w.r.t. reachability
CS590D
107
Density-Based Clustering:
Background
• Two parameters:
– Eps: Maximum radius of the neighbourhood
– MinPts: Minimum number of points in an Eps-neighbourhood of
that point
• NEps(p):
{q belongs to D | dist(p,q) <= Eps}
• Directly density-reachable: A point p is directly density-reachable from a point q wrt. Eps, MinPts if
– 1) p belongs to NEps(q)
– 2) core point condition: |NEps(q)| >= MinPts
(Figure: p lies in the Eps-neighborhood of core point q, with MinPts = 5 and Eps = 1 cm.)
CS590D
108
Density-Based Clustering:
Background (II)
• Density-reachable:
– A point p is density-reachable from a point q wrt. Eps, MinPts if there is a chain of points p_1, ..., p_n with p_1 = q and p_n = p such that p_{i+1} is directly density-reachable from p_i
(Figure: a chain q = p_1 -> ... -> p of directly density-reachable points.)
• Density-connected
– A point p is density-connected to a point q wrt. Eps, MinPts if there is a point o such that both p and q are density-reachable from o wrt. Eps and MinPts.
(Figure: p and q are both density-reachable from o.)
CS590D
109
DBSCAN: Density Based Spatial
Clustering of Applications with Noise
• Relies on a density-based notion of cluster: A cluster is
defined as a maximal set of density-connected points
• Discovers clusters of arbitrary shape in spatial
databases with noise
(Figure: core, border, and outlier points for Eps = 1 cm and MinPts = 5.)
CS590D
110
DBSCAN
• DBSCAN is a density-based algorithm.
–
Density = number of points within a specified radius
(Eps)
– A point is a core point if it has more than a specified
number of points (MinPts) within Eps
• These are points that are in the interior of a cluster
– A border point has fewer than MinPts within Eps,
but is in the neighborhood of a core point
– A noise point is any point that is not a core point or
a border point.
CS590D
112
DBSCAN: Core, Border, and
Noise Points
CS590D
113
DBSCAN Algorithm
• Eliminate noise points
• Perform clustering on the remaining points
CS590D
114
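A simplified sketch in the spirit of this description: classify points as core, border, or noise, then grow clusters from core points (it omits several details of the published DBSCAN algorithm):

import numpy as np

def dbscan_like(X, eps, min_pts):
    """Label each point: -1 for noise, otherwise a cluster id (0, 1, ...)."""
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.where(D[i] <= eps)[0] for i in range(len(X))]
    core = np.array([len(nb) >= min_pts for nb in neighbors])
    labels = np.full(len(X), -1)
    cluster = 0
    for i in np.where(core)[0]:
        if labels[i] != -1:
            continue
        stack = [i]                      # grow a cluster from this unlabeled core point
        labels[i] = cluster
        while stack:
            p = stack.pop()
            for q in neighbors[p]:
                if labels[q] == -1:
                    labels[q] = cluster  # core or border point joins the cluster
                    if core[q]:
                        stack.append(q)  # only core points keep expanding it
        cluster += 1
    return labels

X = [[0, 0], [0, 0.5], [0.5, 0], [5, 5], [5, 5.5], [5.5, 5], [10, 10]]
print(dbscan_like(X, eps=1.0, min_pts=3))   # two clusters plus one noise point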
DBSCAN: Core, Border and
Noise Points
Original Points
Point types: core,
border and noise
Eps = 10, MinPts = 4
CS590D
115
When DBSCAN Works Well
Original Points
Clusters
• Resistant to Noise
• Can handle clusters of different shapes and sizes
CS590D
116
When DBSCAN Does NOT
Work Well
(Figure: the original points and DBSCAN results for (MinPts=4, Eps=9.75) and (MinPts=4, Eps=9.92).)
• Varying densities
• High-dimensional data
CS590D
117
DBSCAN: Determining EPS
and MinPts
• Idea is that for points in a cluster, their kth nearest neighbors are at roughly the same distance
• Noise points have the kth nearest neighbor at a farther distance
• So, plot the sorted distance of every point to its kth nearest neighbor
CS590D
118
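A sketch of the sorted k-dist computation behind this heuristic (plotting is omitted; the knee of the sorted curve is a reasonable Eps):

import numpy as np

def sorted_kdist(X, k):
    """Distance from every point to its k-th nearest neighbor, sorted descending."""
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    D.sort(axis=1)            # row i: distances from point i in ascending order
    kdist = D[:, k]           # index 0 is the point itself, so index k is the k-th neighbor
    return np.sort(kdist)[::-1]

rng = np.random.default_rng(0)
cluster = rng.normal(0, 0.1, size=(50, 2))
noise = rng.uniform(-3, 3, size=(5, 2))
print(sorted_kdist(np.vstack([cluster, noise]), k=4)[:8])  # isolated points dominate the top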
OPTICS: A Cluster-Ordering
Method (1999)
• OPTICS: Ordering Points To Identify the
Clustering Structure
– Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)
– Produces a special order of the database wrt its
density-based clustering structure
– This cluster-ordering contains info equiv to the
density-based clusterings corresponding to a broad
range of parameter settings
– Good for both automatic and interactive cluster
analysis, including finding intrinsic clustering structure
– Can be represented graphically or using visualization
techniques
CS590D
119
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
129
Cluster Validity
• For supervised classification we have a variety of
measures to evaluate how good our model is
– Accuracy, precision, recall
• For cluster analysis, the analogous question is how to
evaluate the “goodness” of the resulting clusters?
• But “clusters are in the eye of the beholder”!
• Then why do we want to evaluate them?
– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters
CS590D
130
Clusters found in Random Data
(Figure: a set of random points, and the clusters reported for the same points by DBSCAN, K-means, and complete link.)
CS590D
131
Different Aspects of Cluster
Validation
1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.
   - Use only the data
4. Comparing the results of two different sets of cluster analyses to determine which is better.
5. Determining the 'correct' number of clusters.
For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.
CS590D
132
Measures of Cluster Validity
• Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types.
– External Index: Used to measure the extent to which cluster
labels match externally supplied class labels.
• Entropy
– Internal Index: Used to measure the goodness of a clustering
structure without respect to external information.
• Sum of Squared Error (SSE)
– Relative Index: Used to compare two different clusterings or
clusters.
• Often an external or internal index is used for this function, e.g., SSE or
entropy
• Sometimes these are referred to as criteria instead of indices
– However, sometimes criterion is the general strategy and index is the
numerical measure that implements the criterion.
CS590D
133
Measuring Cluster Validity Via
Correlation
• Two matrices
– Proximity Matrix
– "Incidence" Matrix
• One row and one column for each data point
• An entry is 1 if the associated pair of points belongs to the same cluster
• An entry is 0 if the associated pair of points belongs to different clusters
• Compute the correlation between the two matrices
– Since the matrices are symmetric, only the correlation between n(n-1)/2 entries needs to be calculated.
• High correlation indicates that points that belong to the same cluster are close to each other.
• Not a good measure for some density or contiguity based clusters.
CS590D
134
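A sketch of this check, correlating the upper-triangular entries of the two matrices (the function name is mine):

import numpy as np

def incidence_proximity_corr(X, labels):
    """Correlation between the proximity matrix and the cluster incidence matrix."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    prox = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    incidence = (labels[:, None] == labels[None, :]).astype(float)
    iu = np.triu_indices(len(X), k=1)     # only the n(n-1)/2 distinct pairs
    return np.corrcoef(prox[iu], incidence[iu])[0, 1]

X = [[0, 0], [0, 1], [10, 10], [10, 11]]
print(incidence_proximity_corr(X, [0, 0, 1, 1]))  # strongly negative for tight, separated clusters

With distances as the proximity measure, the correlation comes out negative for good clusterings (small distance goes with incidence 1), matching the negative Corr values on the following slides.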
Measuring Cluster Validity Via
Correlation
• Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets.
(Figure: the two data sets, with Corr = -0.9235 and Corr = -0.5810.)
CS590D
135
Using Similarity Matrix for
Cluster Validation
• Order the similarity matrix with respect to cluster
labels and inspect visually.
(Figure: points colored by cluster and the corresponding similarity matrix ordered by cluster label.)
CS590D
136
Using Similarity Matrix for
Cluster Validation
• Clusters in random data are not so crisp
(Figure: random points clustered by DBSCAN and the corresponding similarity matrix ordered by cluster label.)
CS590D
137
Using Similarity Matrix for
Cluster Validation
• Clusters in random data are not so crisp
(Figure: random points clustered by K-means and the corresponding similarity matrix ordered by cluster label.)
CS590D
138
Using Similarity Matrix for
Cluster Validation
(Figure: DBSCAN clusters 1 through 7 on a more complicated data set, and the corresponding similarity matrix ordered by cluster label.)
CS590D
140
Internal Measures: SSE
• Clusters in more complicated figures aren’t well separated
• Internal Index: Used to measure the goodness of a clustering
structure without respect to external information
– SSE
• SSE is good for comparing two clusterings or two clusters (average
SSE).
• Can also be used to estimate the number of clusters
(Figure: an example data set and its SSE plotted against the number of clusters K.)
CS590D
141
Internal Measures: SSE
• SSE curve for a more complicated data set
(Figure: a more complicated data set with sub-clusters labeled 1 through 7, and the SSE of the clusters found using K-means.)
CS590D
142
Framework for Cluster Validity
• Need a framework to interpret any measure.
– For example, if our measure of evaluation has the value 10, is that good, fair, or poor?
• Statistics provide a framework for cluster validity
– The more "atypical" a clustering result is, the more likely it represents valid structure in the data
– Can compare the values of an index that result from random data or clusterings to those of a clustering result.
• If the value of the index is unlikely, then the cluster results are valid
– These approaches are more complicated and harder to understand.
• For comparing the results of two different sets of cluster analyses, a framework is less necessary.
– However, there is the question of whether the difference between two index values is significant
CS590D
143
Statistical Framework for SSE
• Example
– Compare SSE of 0.005 against three clusters in random data
– Histogram shows SSE of three clusters in 500 sets of random data points of size 100 distributed over the range 0.2 – 0.8 for x and y values
(Figure: one such random data set and the histogram of its SSE values, which range roughly from 0.016 to 0.034.)
CS590D
144
Statistical Framework for
Correlation
• Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets.
(Figure: the two data sets, with Corr = -0.9235 and Corr = -0.5810.)
CS590D
145
Internal Measures: Cohesion and
Separation
• Cluster Cohesion: Measures how closely related
are objects in a cluster
– Example: SSE
• Cluster Separation: Measure how distinct or
well-separated a cluster is from other clusters
• Example: Squared Error
– Cohesion is measured by the within cluster sum of squares (SSE):
$WSS = \sum_i \sum_{x \in C_i} (x - m_i)^2$
– Separation is measured by the between cluster sum of squares:
$BSS = \sum_i |C_i| (m - m_i)^2$
– where |C_i| is the size of cluster i, m_i is its centroid, and m is the overall mean
CS590D
146
Internal Measures: Cohesion
and Separation
• Example: SSE
– BSS + WSS = constant
(Figure: the points 1, 2, 4, 5 on a number line, with overall mean m = 3 and cluster means m1 = 1.5 and m2 = 4.5.)
K=1 cluster:
$WSS = (1-3)^2 + (2-3)^2 + (4-3)^2 + (5-3)^2 = 10$
$BSS = 4 \times (3-3)^2 = 0$
$Total = 10 + 0 = 10$
K=2 clusters ({1, 2} and {4, 5}):
$WSS = (1-1.5)^2 + (2-1.5)^2 + (4-4.5)^2 + (5-4.5)^2 = 1$
$BSS = 2 \times (3-1.5)^2 + 2 \times (4.5-3)^2 = 9$
$Total = 1 + 9 = 10$
CS590D
147
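A tiny sketch that reproduces the example above for the 1-D points 1, 2, 4, 5:

import numpy as np

def wss_bss(X, labels):
    """Within-cluster (WSS) and between-cluster (BSS) sums of squares."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    m = X.mean(axis=0)                          # overall mean
    wss = bss = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)                    # cluster centroid
        wss += ((Xc - mc) ** 2).sum()
        bss += len(Xc) * ((mc - m) ** 2).sum()
    return wss, bss

X = np.array([[1.0], [2.0], [4.0], [5.0]])
print(wss_bss(X, [0, 0, 0, 0]))   # K=1: (10.0, 0.0)
print(wss_bss(X, [0, 0, 1, 1]))   # K=2: (1.0, 9.0); WSS + BSS = 10 in both cases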
Internal Measures: Cohesion and
Separation
• A proximity graph based approach can also be used for
cohesion and separation.
– Cluster cohesion is the sum of the weight of all links within a cluster.
– Cluster separation is the sum of the weights between nodes in the
cluster and nodes outside the cluster.
cohesion
separation
CS590D
148
Internal Measures: Silhouette
Coefficient
• Silhouette Coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings
• For an individual point i:
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min (average distance of i to points in another cluster)
– The silhouette coefficient for the point is then given by
s = 1 - a/b if a < b
(or s = b/a - 1 if a >= b, not the usual case)
– Typically between 0 and 1.
– The closer to 1 the better.
• Can calculate the Average Silhouette width for a cluster or a clustering
CS590D
149
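A sketch of the per-point silhouette coefficient exactly as defined above (using the a < b branch and its a >= b counterpart):

import numpy as np

def silhouette(X, labels):
    """Silhouette coefficient s(i) for every point i."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    s = np.zeros(len(X))
    for i in range(len(X)):
        same = (labels == labels[i])
        same[i] = False                                    # exclude the point itself
        a = D[i, same].mean() if same.any() else 0.0       # avg distance within own cluster
        b = min(D[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])  # closest other cluster
        s[i] = 1 - a / b if a < b else b / a - 1
    return s

X = [[0, 0], [0, 1], [5, 5], [5, 6]]
print(silhouette(X, [0, 0, 1, 1]).round(2))   # values near 1 for well-separated clusters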
External Measures of Cluster Validity:
Entropy and Purity
CS590D
150
Final Comment on Cluster
Validity
“The validation of clustering structures is the
most difficult and frustrating part of cluster
analysis.
Without a strong effort in this direction, cluster
analysis will remain a black art accessible only
to those true believers who have experience and
great courage.”
Algorithms for Clustering Data, Jain and Dubes
CS590D
151
CS590D: Data Mining
Prof. Chris Clifton
March 2, 2006
Clustering
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
153
Grid-Based Clustering
Method
• Using multi-resolution grid data structure
• Several interesting methods
– STING (a STatistical INformation Grid approach) by Wang, Yang
and Muntz (1997)
– WaveCluster by Sheikholeslami, Chatterjee, and Zhang
(VLDB’98)
• A multi-resolution clustering approach using wavelet method
– CLIQUE: Agrawal, et al. (SIGMOD’98)
CS590D
154
STING: A Statistical
Information Grid Approach
• Wang, Yang and Muntz (VLDB’97)
• The spatial area is divided into rectangular cells
• There are several levels of cells corresponding to
different levels of resolution
CS590D
155
STING: A Statistical
Information Grid Approach (2)
– Each cell at a high level is partitioned into a number of smaller
cells in the next lower level
– Statistical info of each cell is calculated and stored beforehand
and is used to answer queries
– Parameters of higher level cells can be easily calculated from
parameters of lower level cell
• count, mean, s, min, max
• type of distribution—normal, uniform, etc.
– Use a top-down approach to answer spatial data queries
– Start from a pre-selected layer—typically with a small number of
cells
– For each cell in the current level compute the confidence interval
CS590D
156
STING: A Statistical
Information Grid Approach (3)
– Remove the irrelevant cells from further consideration
– When finished examining the current layer, proceed to the next lower level
– Repeat this process until the bottom layer is reached
– Advantages:
• Query-independent, easy to parallelize, incremental update
• O(K), where K is the number of grid cells at the lowest level
– Disadvantages:
• All the cluster boundaries are either horizontal or vertical, and
no diagonal boundary is detected
CS590D
157
WaveCluster (1998)
• Sheikholeslami, Chatterjee, and Zhang (VLDB’98)
• A multi-resolution clustering approach which applies
wavelet transform to the feature space
– A wavelet transform is a signal processing technique that
decomposes a signal into different frequency sub-bands.
• Both grid-based and density-based
• Input parameters:
– # of grid cells for each dimension
– the wavelet, and the # of applications of wavelet transform.
CS590D
158
What is Wavelet (1)?
CS590D
159
WaveCluster (1998)
• How to apply wavelet transform to find clusters
– Summarize the data by imposing a multidimensional grid structure onto the data space
– These multidimensional spatial data objects are
represented in a n-dimensional feature space
– Apply wavelet transform on feature space to find the
dense regions in the feature space
– Apply wavelet transform multiple times which result in
clusters at different scales from fine to coarse
CS590D
160
Wavelet Transform
• Decomposes a signal into different
frequency subbands (can be applied to n-dimensional signals).
• Data are transformed to preserve relative
distance between objects at different
levels of resolution.
• Allows natural clusters to become more
distinguishable
CS590D
161
What Is Wavelet (2)?
CS590D
162
Quantization
CS590D
163
Transformation
CS590D
164
WaveCluster (1998)
• Why is wavelet transformation useful for clustering
– Unsupervised clustering
It uses hat-shaped filters to emphasize regions where points cluster, while simultaneously suppressing weaker information at their boundaries
– Effective removal of outliers
– Multi-resolution
– Cost efficiency
• Major features:
– Complexity O(N)
– Detect arbitrary shaped clusters at different scales
– Not sensitive to noise, not sensitive to input order
– Only applicable to low dimensional data
CS590D
165
CLIQUE (Clustering In QUEst)
• Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98).
• Automatically identifying subspaces of a high dimensional data
space that allow better clustering than original space
• CLIQUE can be considered as both density-based and grid-based
– It partitions each dimension into the same number of equal-length intervals
– It partitions an m-dimensional data space into non-overlapping
rectangular units
– A unit is dense if the fraction of total data points contained in the unit
exceeds the input model parameter
– A cluster is a maximal set of connected dense units within a subspace
CS590D
166
CLIQUE: The Major Steps
• Partition the data space and find the number of points
that lie inside each cell of the partition.
• Identify the subspaces that contain clusters using the
Apriori principle
• Identify clusters:
– Determine dense units in all subspaces of interest
– Determine connected dense units in all subspaces of interest.
• Generate minimal description for the clusters
– Determine maximal regions that cover a cluster of connected
dense units for each cluster
– Determination of minimal cover for each cluster
CS590D
167
(Figure: CLIQUE example with density threshold τ = 3: grids of Salary (10,000) vs. age and Vacation (week) vs. age over the age range 20-60, whose dense units are intersected to locate dense regions in the combined (age, Salary, Vacation) subspace.)
Strength and Weakness of
CLIQUE
• Strength
– It automatically finds subspaces of the highest
dimensionality such that high density clusters exist in
those subspaces
– It is insensitive to the order of records in input and
does not presume some canonical data distribution
– It scales linearly with the size of input and has good
scalability as the number of dimensions in the data
increases
• Weakness
– The accuracy of the clustering result may be
degraded at the expense of simplicity of the method
CS590D
169
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
170
Model-Based Clustering
Methods
• Attempt to optimize the fit between the data and some
mathematical model
• Statistical and AI approach
– Conceptual clustering
• A form of clustering in machine learning
• Produces a classification scheme for a set of unlabeled objects
• Finds characteristic description for each concept (class)
– COBWEB (Fisher’87)
• A popular and simple method of incremental conceptual learning
• Creates a hierarchical clustering in the form of a classification tree
• Each node refers to a concept and contains a probabilistic
description of that concept
CS590D
171
COBWEB Clustering
Method
A classification tree
CS590D
172
More on Statistical-Based
Clustering
• Limitations of COBWEB
– The assumption that the attributes are independent of each
other is often too strong because correlation may exist
– Not suitable for clustering large database data – skewed tree
and expensive probability distributions
• CLASSIT
– an extension of COBWEB for incremental clustering of
continuous data
– suffers similar problems as COBWEB
• AutoClass (Cheeseman and Stutz, 1996)
– Uses Bayesian statistical analysis to estimate the number of
clusters
– Popular in industry
CS590D
173
Other Model-Based
Clustering Methods
• Neural network approaches
– Represent each cluster as an exemplar, acting as a
“prototype” of the cluster
– New objects are distributed to the cluster whose
exemplar is the most similar according to some
distance measure
• Competitive learning
– Involves a hierarchical architecture of several units
(neurons)
– Neurons compete in a “winner-takes-all” fashion for
the object currently being presented
CS590D
174
Model-Based Clustering
Methods
CS590D
175
Self-organizing feature
maps (SOMs)
• Clustering is also performed by having several
units competing for the current object
• The unit whose weight vector is closest to the
current object wins
• The winner and its neighbors learn by having
their weights adjusted
• SOMs are believed to resemble processing that
can occur in the brain
• Useful for visualizing high-dimensional data in 2- or 3-D space
CS590D
176
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
177
What Is Outlier Discovery?
• What are outliers?
– A set of objects that are considerably dissimilar from the remainder of the data
– Example: Sports: Michael Jordan, Wayne Gretzky, ...
• Problem
– Find the top n outlier points
• Applications:
– Credit card fraud detection
– Telecom fraud detection
– Customer segmentation
– Medical analysis
CS590D
178
Outlier Discovery:
Statistical
Approaches
• Assume a model of the underlying distribution that generates the data set (e.g., normal distribution)
• Use discordancy tests depending on
– data distribution
– distribution parameter (e.g., mean, variance)
– number of expected outliers
• Drawbacks
– most tests are for single attribute
– In many cases, data distribution may not be known
CS590D: Data Mining
Prof. Chris Clifton
March 4, 2006
Clustering
Outlier Discovery: Distance-Based Approach
• Introduced to counter the main limitations
imposed by statistical methods
– We need multi-dimensional analysis without knowing
data distribution.
• Distance-based outlier: A DB(p, D)-outlier is an
object O in a dataset T such that at least a
fraction p of the objects in T lies at a distance
greater than D from O
• Algorithms for mining distance-based outliers
– Index-based algorithm
– Nested-loop algorithm
– Cell-based algorithm
CS590D
181
Outlier Discovery:
Deviation-Based Approach
• Identifies outliers by examining the main characteristics
of objects in a group
• Objects that “deviate” from this description are
considered outliers
• sequential exception technique
– simulates the way in which humans can distinguish unusual
objects from among a series of supposedly like objects
• OLAP data cube technique
– uses data cubes to identify regions of anomalies in large
multidimensional data
CS590D
182
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
CS590D
183
Problems and Challenges
• Considerable progress has been made in scalable clustering
methods
– Partitioning: k-means, k-medoids, CLARANS
– Hierarchical: BIRCH, CURE
– Density-based: DBSCAN, CLIQUE, OPTICS
– Grid-based: STING, WaveCluster
– Model-based: Autoclass, Denclue, Cobweb
• Current clustering techniques do not address all the requirements
adequately
• Constraint-based clustering analysis: Constraints exist in data space
(bridges and highways) or in user queries
CS590D
184
Constraint-Based Clustering
Analysis
•
Clustering analysis: less parameters but more userdesired constraints, e.g., an ATM allocation problem
CS590D
185
Clustering With Obstacle
Objects
Not Taking obstacles into account
CS590D
Taking obstacles into account
186
Summary
• Cluster analysis groups objects based on their similarity
and has wide applications
• Measure of similarity can be computed for various types
of data
• Clustering algorithms can be categorized into partitioning
methods, hierarchical methods, density-based methods,
grid-based methods, and model-based methods
• Outlier detection and analysis are very useful for fraud
detection, etc. and can be performed by statistical,
distance-based or deviation-based approaches
• There are still lots of research issues on cluster analysis,
such as constraint-based clustering
CS590D
187
References (1)
• R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98.
• M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
• M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99.
• P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
• M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.
• M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.
• D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.
• D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. VLDB'98.
• S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases. SIGMOD'98.
• A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
CS590D
188
References (2)
• L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, 1990.
• E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.
• G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley and Sons, 1988.
• P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.
• R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.
• E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
• G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB'98.
• W. Wang, J. Yang, and R. Muntz. STING: A Statistical Information Grid approach to spatial data mining. VLDB'97.
• T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method for very large databases. SIGMOD'96.
CS590D
189