Cluster Analysis: Advanced Concepts and Algorithms
Dr. Hui Xiong
Rutgers University
Introduction to Data Mining
08/06/2006
Outline
Prototype-based
– Fuzzy c-means
– Mixture Model Clustering
– Self-Organizing Maps
Density-based
– Grid-based clustering
– Subspace clustering
Graph-based
– Chameleon
– Jarvis-Patrick
– Shared Nearest Neighbor (SNN)
Characteristics of Clustering Algorithms
Hard (Crisp) vs Soft (Fuzzy) Clustering
Hard (Crisp) vs. Soft (Fuzzy) clustering
– Generalize the K-means objective function:
  SSE = \sum_{j=1}^{k} \sum_{i=1}^{N} w_{ij} (x_i - c_j)^2,   subject to   \sum_{j=1}^{k} w_{ij} = 1
  where w_{ij} is the weight with which object x_i belongs to cluster C_j
– To minimize SSE, repeat the following steps:
  ‹ Fix c_j and determine w_ij (cluster assignment)
  ‹ Fix w_ij and recompute c_j
– Hard clustering: w_ij ∈ {0, 1}
(a short code sketch of these two alternating steps is given below)
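A minimal sketch of this alternating minimization for the hard case (w_ij ∈ {0, 1}), using NumPy; the function name, data X, and parameter values are illustrative, not from the slides:

```python
import numpy as np

def hard_kmeans(X, k, n_iter=100, seed=0):
    """Alternate the two steps on the slide: fix the centroids and set the
    weights (assign each point to its closest centroid), then fix the
    weights and recompute the centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Step 1: fix c_j, determine w_ij (hard assignment, w_ij in {0, 1})
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: fix w_ij, recompute c_j as the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Example usage on two well-separated blobs
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centroids = hard_kmeans(X, k=2)
```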
Hard (Crisp) vs Soft (Fuzzy) Clustering
Example: centroid c1 is at 1, point x is at 2, and centroid c2 is at 5 on a number line.

SSE(x) = w_{x1} (2 - 1)^2 + w_{x2} (5 - 2)^2 = w_{x1} + 9 w_{x2}

SSE(x) is minimized when w_{x1} = 1, w_{x2} = 0
(since the weights must sum to 1 and the coefficient of w_{x1} is smaller, all weight goes to the closer centroid)
Fuzzy C-means
Objective function (p: fuzzifier, p > 1):
  SSE = \sum_{j=1}^{k} \sum_{i=1}^{N} w_{ij}^{p} (x_i - c_j)^2,   subject to   \sum_{j=1}^{k} w_{ij} = 1
  where w_{ij} is the weight with which object x_i belongs to cluster C_j
– To minimize the objective function, repeat the following:
  ‹ Fix c_j and determine w_ij
  ‹ Fix w_ij and recompute c_j
– Fuzzy clustering: w_ij ∈ [0, 1]
Fuzzy C-means
Example (p = 2): centroid c1 is at 1, point x is at 2, and centroid c2 is at 5 on a number line.

SSE(x) = w_{x1}^2 (2 - 1)^2 + w_{x2}^2 (5 - 2)^2 = w_{x1}^2 + 9 w_{x2}^2

SSE(x) is minimized when w_{x1} = 0.9, w_{x2} = 0.1
(with w_{x2} = 1 - w_{x1}, setting the derivative of w_{x1}^2 + 9(1 - w_{x1})^2 to zero gives w_{x1} = 9/10)
Fuzzy C-means
Objective function (p: fuzzifier, p > 1):
  SSE = \sum_{j=1}^{k} \sum_{i=1}^{N} w_{ij}^{p} (x_i - c_j)^2,   subject to   \sum_{j=1}^{k} w_{ij} = 1
Initialization: choose the weights w_ij randomly
Repeat:
– Update centroids
– Update weights
(the update formulas are given below)
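The update formulas on the original slide are images; the standard fuzzy c-means updates that minimize this objective are:

c_j = \sum_{i=1}^{N} w_{ij}^{p} x_i \Big/ \sum_{i=1}^{N} w_{ij}^{p}

w_{ij} = \left( 1 / (x_i - c_j)^2 \right)^{1/(p-1)} \Big/ \sum_{q=1}^{k} \left( 1 / (x_i - c_q)^2 \right)^{1/(p-1)}

For p = 2 and the one-dimensional example above (squared distances 1 and 9), the weight update gives w_{x1} = 0.9 and w_{x2} = 0.1.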
Fuzzy K-means Applied to Sample Data
[Figure: fuzzy k-means clustering of sample data; points colored by maximum membership, scale 0.5–0.95]
Hard (Crisp) vs Soft (Probabilistic) Clustering
The idea is to model the set of data points as arising from a mixture of distributions
– Typically, the normal (Gaussian) distribution is used
– But other distributions have been used very profitably
Clusters are found by estimating the parameters of the statistical distributions
– Can use a k-means-like algorithm, called the EM algorithm, to estimate these parameters
  ‹ Actually, k-means is a special case of this approach
– Provides a compact representation of clusters
– The probabilities with which a point belongs to each cluster provide functionality similar to fuzzy clustering
Probabilistic Clustering: Example
Informal example: consider modeling the points that generate the following histogram
It looks like a combination of two normal distributions
Suppose we can estimate the mean and standard deviation of each normal distribution
– This completely describes the two clusters
– We can compute the probability with which each point belongs to each cluster
– We can assign each point to the cluster (distribution) in which it is most probable
Probabilistic Clustering: EM Algorithm
Initialize the parameters
Repeat
  For each point, compute its probability under each distribution
  Using these probabilities, update the parameters of each distribution
Until there is no change

Very similar to K-means
Consists of assignment and update steps
Can use random initialization
– Problem of local minima
For normal distributions, typically use K-means to initialize
If using normal distributions, can find elliptical as well as spherical shapes
(a code sketch for a mixture of two 1-D Gaussians is given below)
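A minimal sketch of these steps for a two-component, one-dimensional Gaussian mixture, using NumPy only; the initialization, data, and iteration count are illustrative assumptions:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def em_two_gaussians(x, n_iter=100):
    """EM for a mixture of two 1-D Gaussians: the assignment step computes each
    point's probability under each distribution; the update step re-estimates
    the means, standard deviations, and mixture weights."""
    # Crude initialization (in practice, k-means is often used, as the slide notes)
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()], dtype=float)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # Assignment step: responsibility of each component for each point (Bayes rule)
        dens = np.stack([pi[j] * normal_pdf(x, mu[j], sigma[j]) for j in range(2)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # Update step: weighted re-estimation of the parameters
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi, resp

# Example: points drawn from two normal distributions, as in the histogram example
x = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(6, 1, 200)])
mu, sigma, pi, resp = em_two_gaussians(x)
```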
Probabilistic Clustering: Updating Centroids
Update formula for the centroids:
  c_j = \sum_{i=1}^{m} x_i \, p(j \mid x_i) \Big/ \sum_{i=1}^{m} p(j \mid x_i)
Very similar to the fuzzy k-means formula
– Weights are not raised to a power
– Weights are probabilities
– These probabilities are calculated using Bayes rule (see the formula below)
Need to assign a weight to each cluster
– Weights may not be equal
– Similar to prior probabilities
– Can be estimated, if desired
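For reference, the Bayes-rule calculation referred to here, written with cluster (mixture) weights \pi_j and component densities p(x \mid \theta_j) — standard mixture-model notation not shown on the slide:

p(j \mid x_i) = \pi_j \, p(x_i \mid \theta_j) \Big/ \sum_{q=1}^{k} \pi_q \, p(x_i \mid \theta_q)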
Probabilistic Clustering Applied to Sample Data
[Figure: probabilistic (EM) clustering of sample data; points colored by maximum probability, scale 0.5–0.95]
Probabilistic Clustering: Dense and Sparse Clusters
[Figure: scatter plot (x vs. y) showing a dense cluster and a sparse cluster, with one point marked “?”]
SOM: Self-Organizing Maps
Self-organizing maps (SOM)
– Centroid-based clustering scheme
– Like K-means, a fixed number of clusters is specified
– However, the spatial relationship of clusters is also specified, typically as a grid
– Points are considered one by one
– Each point is assigned to the closest centroid
– Other centroids are updated based on their nearness to the closest centroid
SOM: Self-Organizing Maps
Updates are weighted by distance
– Centroids farther away are affected less
The impact of the updates decreases over time
– At some point the centroids will not change much
(a code sketch of this update is given below)
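A minimal sketch of the SOM update, assuming a rectangular grid of centroids, a Gaussian neighborhood whose width shrinks over time, and a decaying learning rate; all parameter values here are illustrative:

```python
import numpy as np

def som_train(X, grid_shape=(5, 5), n_steps=5000, lr0=0.5, sigma0=2.0, seed=0):
    """Points are considered one by one; the closest centroid and its grid
    neighbors are pulled toward the point, with centroids farther away on
    the grid affected less, and the update strength decaying over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    centroids = X[rng.choice(len(X), size=rows * cols, replace=False)].astype(float)
    for t in range(n_steps):
        frac = t / n_steps
        lr = lr0 * (1 - frac)               # learning rate decays with time
        sigma = sigma0 * (1 - frac) + 0.5   # neighborhood width shrinks with time
        x = X[rng.integers(len(X))]
        # Find the closest centroid (best matching unit)
        bmu = np.argmin(((centroids - x) ** 2).sum(axis=1))
        # Update all centroids, weighted by their grid distance to the BMU
        grid_dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))
        centroids += lr * h[:, None] * (x - centroids)
    return centroids.reshape(rows, cols, -1)
```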
SOM: Self-Organizing Maps
SOM can be viewed as a type of dimensionality reduction
If a two-dimensional grid is used, the results can be visualized
SOM Clusters of LA Times Document Data
Grid-based Clustering
A type of density-based clustering
Subspace Clustering
Until now, we found clusters by considering all of the attributes
Some clusters may involve only a subset of attributes, i.e., subspaces of the data
– Example:
  ‹ When k-means is used to find document clusters, the resulting clusters can typically be characterized by 10 or so terms
Example
[Four figure-only slides of subspace clustering examples]
Clique Algorithm - Overview
A grid-based clustering algorithm that methodically finds subspace clusters
– Partitions the data space into rectangular units of equal volume
– Measures the density of each unit by the fraction of points it contains
– A unit is dense if the fraction of overall points it contains is above a user-specified threshold, τ
– A cluster is a collection of contiguous (touching) dense units
Clique Algorithm
It is impractical to check each volume unit to see if it is dense, since there is an exponential number of such units
Monotone property of density-based clusters:
– If a set of points forms a density-based cluster in k dimensions, then the same set of points is also part of a density-based cluster in all possible subsets of those dimensions
Very similar to the Apriori algorithm
Can find overlapping clusters
(a code sketch of the Apriori-style search is given below)
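A rough sketch of the Apriori-style search over grid units, showing only the one- and two-dimensional stages; the grid resolution and density threshold τ are illustrative parameters, and the final step of joining contiguous dense units into clusters is omitted:

```python
import numpy as np
from collections import Counter
from itertools import combinations

def dense_units(X, n_bins=10, tau=0.05):
    """Find dense 1-D units per dimension, then use the monotone property:
    a 2-D candidate unit is checked only if both of its 1-D projections are dense."""
    n, d = X.shape
    # Discretize each dimension into equal-width intervals (rectangular units)
    span = X.max(axis=0) - X.min(axis=0)
    bins = np.floor((X - X.min(axis=0)) / (span + 1e-12) * n_bins).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)

    # 1-D dense units: (dimension, interval) pairs whose fraction of points exceeds tau
    dense1 = set()
    for dim in range(d):
        counts = Counter(bins[:, dim])
        dense1 |= {(dim, b) for b, c in counts.items() if c / n > tau}

    # 2-D candidates built from pairs of dense 1-D units in different dimensions
    dense2 = set()
    for (d1, b1), (d2, b2) in combinations(sorted(dense1), 2):
        if d1 == d2:
            continue
        count = np.sum((bins[:, d1] == b1) & (bins[:, d2] == b2))
        if count / n > tau:
            dense2.add(((d1, b1), (d2, b2)))
    return dense1, dense2
```

The same join-and-check pattern extends to higher dimensions, which is where the Apriori-like pruning pays off.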
Limitations of Clique
Time complexity is exponential in the number of dimensions
– Especially if “too many” dense units are generated at lower stages
May fail if clusters are of widely differing densities, since the threshold is fixed
– Determining an appropriate threshold and unit interval length can be challenging
Denclue (DENsity CLUstering)
Based on the notion of kernel-density estimation
– The contribution of each point to the density is given by an influence or kernel function
  [Figure: formula and plot of the Gaussian kernel; the formula is given below]
– The overall density is the sum of the contributions of all points
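The kernel shown as an image on the slide is the standard Gaussian influence function; with it, the overall density at a point x is the sum over all data points:

K(x, y) = \exp\!\left( -\frac{\lVert x - y \rVert^2}{2\sigma^2} \right), \qquad \mathrm{density}(x) = \sum_{i=1}^{N} K(x, x_i)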
Example of Density from Gaussian Kernel
DENCLUE Algorithm
Find the density function
Identify local maxima (density attractors)
Assign each point to a density attractor
– Follow the direction of maximum increase in density
Define clusters as groups of points associated with the same density attractor
Discard clusters whose density attractor has a density less than a user-specified minimum, ξ
Combine clusters that are connected by a path of points all having density above ξ
(a simplified code sketch is given below)
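A simplified sketch of the attractor-finding step, using the Gaussian kernel above and plain gradient ascent; the step size, bandwidth σ, and grouping tolerance are illustrative, and the ξ-based discarding and path-based merging steps are omitted:

```python
import numpy as np

def density(p, X, sigma=1.0):
    # Kernel density at p; can be used afterwards to discard attractors below ξ
    return np.exp(-((X - p) ** 2).sum(axis=1) / (2 * sigma ** 2)).sum()

def gradient(p, X, sigma=1.0):
    # Gradient of the Gaussian kernel density with respect to p
    w = np.exp(-((X - p) ** 2).sum(axis=1) / (2 * sigma ** 2))
    return (w[:, None] * (X - p)).sum(axis=0) / sigma ** 2

def find_attractors(X, sigma=1.0, step=0.1, n_steps=100):
    """Climb the kernel density surface from every point; points whose climbs
    end near the same local maximum (density attractor) share a cluster."""
    attractors = []
    for x in X:
        p = x.astype(float)
        for _ in range(n_steps):
            g = gradient(p, X, sigma)
            p = p + step * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent step
        attractors.append(p)
    attractors = np.array(attractors)
    # Group points whose attractors land close together (tolerance-based grouping)
    labels = -np.ones(len(X), dtype=int)
    centers = []
    for i, a in enumerate(attractors):
        for c_idx, c in enumerate(centers):
            if np.linalg.norm(a - c) < sigma / 2:
                labels[i] = c_idx
                break
        else:
            centers.append(a)
            labels[i] = len(centers) - 1
    return labels, attractors
```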
Graph-Based Clustering: General Concepts
Graph-based clustering uses the proximity graph
– Start with the proximity matrix
– Consider each point as a node in a graph
– Each edge between two nodes has a weight, which is the proximity between the two points
– Initially the proximity graph is fully connected
– MIN (single-link) and MAX (complete-link) can be viewed as starting with this graph
In the simplest case, clusters are connected components in the graph
(a code sketch of this simplest case is given below)
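A small sketch of that simplest case: connect points whose distance is below a threshold and report the connected components as clusters; the threshold eps is an illustrative parameter:

```python
import numpy as np
from collections import deque

def threshold_graph_clusters(X, eps=1.0):
    """Nodes are points, edges connect pairs within distance eps;
    clusters are the connected components of this graph."""
    n = len(X)
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    adj = (dist < eps) & ~np.eye(n, dtype=bool)
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        # Breadth-first search over the proximity graph from an unlabeled node
        labels[start] = cluster
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in np.nonzero(adj[u])[0]:
                if labels[v] == -1:
                    labels[v] = cluster
                    queue.append(v)
        cluster += 1
    return labels
```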
CURE Algorithm: Graph-Based Clustering
Agglomerative hierarchical clustering algorithms vary in terms of how the proximity of two clusters is computed
  ‹ MIN (single link): susceptible to noise/outliers
  ‹ MAX (complete link)/GROUP AVERAGE/Centroid/Ward’s: may not work well with non-globular clusters
The CURE algorithm tries to handle both problems
– Starts with a proximity matrix/proximity graph
CURE Algorithm
Represents a cluster using multiple representative points
– Representative points are found by selecting a constant number of points from a cluster
  ‹ The first representative point is chosen to be the point farthest from the center of the cluster
  ‹ The remaining representative points are chosen so that they are farthest from all previously chosen points
CURE Algorithm
“Shrink” representative points toward the center of the cluster by a factor, α
[Figure: representative points (×) shrunk toward the cluster center]
Shrinking representative points toward the center helps avoid problems with noise and outliers
Cluster similarity is the similarity of the closest pair of representative points from different clusters
(a code sketch of representative-point selection and shrinking is given below)
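A sketch of how a cluster's representative points could be chosen and shrunk, following the description above; the number of representatives and α are illustrative, and α here is the fraction of the distance toward the center that each point moves (the convention of the original CURE paper, which may differ from the slide's):

```python
import numpy as np

def cure_representatives(cluster_points, n_rep=5, alpha=0.3):
    """Pick representatives by farthest-point selection (first the point farthest
    from the center, then points farthest from all previously chosen ones),
    then shrink them toward the cluster center by a factor alpha."""
    center = cluster_points.mean(axis=0)
    reps = [cluster_points[np.argmax(((cluster_points - center) ** 2).sum(axis=1))]]
    while len(reps) < min(n_rep, len(cluster_points)):
        d = np.min([((cluster_points - r) ** 2).sum(axis=1) for r in reps], axis=0)
        reps.append(cluster_points[np.argmax(d)])
    reps = np.array(reps, dtype=float)
    return reps + alpha * (center - reps)   # shrink toward the center

def cluster_similarity(reps_a, reps_b):
    """Similarity of two clusters: (negative) distance of the closest pair of
    representative points from the different clusters."""
    d = np.sqrt(((reps_a[:, None, :] - reps_b[None, :, :]) ** 2).sum(axis=2))
    return -d.min()
```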
CURE Algorithm
Uses an agglomerative hierarchical scheme to perform clustering
– α = 0: similar to centroid-based
– α = 1: somewhat similar to single-link
CURE is better able to handle clusters of arbitrary shapes and sizes
Experimental Results: CURE
Picture from CURE, Guha, Rastogi, Shim.
Experimental Results: CURE
(centroid)
(single link)
Picture from CURE, Guha, Rastogi, Shim.
CURE Cannot Handle Differing Densities
[Figure: Original Points and the CURE clustering result]
Graph-Based Clustering: Chameleon
Based on several key ideas
– Sparsification of the proximity graph
– Partitioning the data into clusters that are relatively pure subclusters of the “true” clusters
– Merging based on preserving characteristics of clusters
Graph-Based Clustering: Sparsification
The amount of data that needs to be processed is drastically reduced
– Sparsification can eliminate more than 99% of the entries in a proximity matrix
– The amount of time required to cluster the data is drastically reduced
– The size of the problems that can be handled is increased
Graph-Based Clustering: Sparsification …
Clustering may work better
– Sparsification techniques keep the connections to the most similar (nearest) neighbors of a point while breaking the connections to less similar points
– The nearest neighbors of a point tend to belong to the same class as the point itself
– This reduces the impact of noise and outliers and sharpens the distinction between clusters
Sparsification facilitates the use of graph partitioning algorithms (or algorithms based on graph partitioning algorithms)
– Chameleon and Hypergraph-based Clustering
Sparsification in the Clustering Process
Limitations of Current Merging Schemes
Existing merging schemes in hierarchical clustering algorithms are static in nature
– MIN or CURE:
  ‹ Merge two clusters based on their closeness (or minimum distance)
– GROUP-AVERAGE:
  ‹ Merge two clusters based on their average connectivity
Limitations of Current Merging Schemes
[Figure: four example point sets (a)–(d)]
Closeness schemes will merge (a) and (b)
Average connectivity schemes will merge (c) and (d)
Chameleon: Clustering Using Dynamic Modeling
Adapt to the characteristics of the data set to find the natural clusters
Use a dynamic model to measure the similarity between clusters
– The main properties are the relative closeness and relative interconnectivity of the clusters
– Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters
– The merging scheme preserves self-similarity
Relative Interconnectivity
Relative Closeness
Chameleon: Steps
Preprocessing step: represent the data by a graph
– Given a set of points, construct the k-nearest-neighbor (k-NN) graph to capture the relationship between a point and its k nearest neighbors
– The concept of neighborhood is captured dynamically (even if the region is sparse)
Phase 1: Use a multilevel graph partitioning algorithm on the graph to find a large number of clusters of well-connected vertices
– Each cluster should contain mostly points from one “true” cluster, i.e., it is a sub-cluster of a “real” cluster
Chameleon: Steps …
Phase 2: Use hierarchical agglomerative clustering to merge sub-clusters
– Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters
– Two key properties used to model cluster similarity (see the definitions below):
  ‹ Relative Interconnectivity: absolute interconnectivity of two clusters normalized by the internal connectivity of the clusters
  ‹ Relative Closeness: absolute closeness of two clusters normalized by the internal closeness of the clusters
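One common formalization of these two measures, following the original Chameleon paper (Karypis, Han, and Kumar); here EC(C_i, C_j) is the sum of the weights of the edges connecting the two clusters, EC(C_i) is the weighted edge cut that bisects C_i, and \bar{S}_{EC} denotes the corresponding average edge weight:

RI(C_i, C_j) = \frac{|EC(C_i, C_j)|}{\tfrac{1}{2}\left(|EC(C_i)| + |EC(C_j)|\right)}

RC(C_i, C_j) = \frac{\bar{S}_{EC}(C_i, C_j)}{\frac{|C_i|}{|C_i|+|C_j|}\,\bar{S}_{EC}(C_i) + \frac{|C_j|}{|C_i|+|C_j|}\,\bar{S}_{EC}(C_j)}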
Experimental Results: CHAMELEON
Experimental Results: CHAMELEON
Experimental Results: CURE (10 clusters)
Experimental Results: CURE (15 clusters)
Experimental Results: CHAMELEON
Experimental Results: CURE (9 clusters)
Experimental Results: CURE (15 clusters)
Graph-Based Clustering: SNN Approach
Shared Nearest Neighbor (SNN) graph: the weight of an edge is the number of shared neighbors between vertices, given that the vertices are connected
[Figure: two connected vertices i and j whose SNN edge weight is 4]
Creating the SNN Graph
Sparse graph: link weights are similarities between neighboring points
Shared nearest neighbor graph: link weights are the number of shared nearest neighbors
Jarvis-Patrick Clustering
First, the k-nearest neighbors of all points are found
– In graph terms, this can be regarded as breaking all but the k strongest links from a point to other points in the proximity graph
A pair of points is put in the same cluster if
– the two points share more than T neighbors, and
– the two points are in each other's k-nearest-neighbor lists
For instance, we might choose a nearest-neighbor list of size 20 and put points in the same cluster if they share more than 10 near neighbors
Jarvis-Patrick clustering is too brittle
(a code sketch is given below)
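A compact sketch of the rule just described, with k and the shared-neighbor threshold T as illustrative parameters:

```python
import numpy as np
from collections import deque

def jarvis_patrick(X, k=20, T=10):
    """Two points go in the same cluster if they are in each other's
    k-nearest-neighbor lists and share more than T of those neighbors;
    clusters are the connected components of the resulting graph."""
    n = len(X)
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    np.fill_diagonal(dist, np.inf)
    knn = [set(np.argsort(dist[i])[:k]) for i in range(n)]  # k strongest links per point
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if j in knn[i] and i in knn[j] and len(knn[i] & knn[j]) > T:
                adj[i, j] = adj[j, i] = True
    # Connected components of the shared-neighbor graph
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        labels[s] = cluster
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.nonzero(adj[u])[0]:
                if labels[v] == -1:
                    labels[v] = cluster
                    queue.append(v)
        cluster += 1
    return labels
```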
When Jarvis-Patrick Works Reasonably Well
[Figure: Original Points and the Jarvis-Patrick clustering, using 6 shared neighbors out of 20]
When Jarvis-Patrick Does NOT Work Well
[Figure: results with the smallest threshold T that does not merge clusters, and with a threshold of T − 1]
SNN Density-Based Clustering
Combines:
– Graph-based clustering (similarity defined by the number of shared nearest neighbors)
– Density-based clustering (DBSCAN-like approach)
SNN density measures whether a point is surrounded by similar points (with respect to its nearest neighbors)
SNN Clustering Algorithm
1. Compute the similarity matrix
   This corresponds to a similarity graph with data points for nodes and edges whose weights are the similarities between data points
2. Sparsify the similarity matrix by keeping only the k most similar neighbors
   This corresponds to keeping only the k strongest links of the similarity graph
3. Construct the shared nearest neighbor graph from the sparsified similarity matrix
   At this point, we could apply a similarity threshold and find the connected components to obtain the clusters (Jarvis-Patrick algorithm)
4. Find the SNN density of each point
   Using a user-specified parameter, Eps, find the number of points that have an SNN similarity of Eps or greater to each point. This is the SNN density of the point
SNN Clustering Algorithm …
5. Find the core points
   Using a user-specified parameter, MinPts, find the core points, i.e., all points that have an SNN density greater than MinPts
6. Form clusters from the core points
   If two core points are within a “radius”, Eps, of each other, they are placed in the same cluster
7. Discard all noise points
   All non-core points that are not within a “radius” Eps of a core point are discarded
8. Assign all non-noise, non-core points to clusters
   This can be done by assigning such points to the nearest core point
(Note that steps 4–8 are DBSCAN; a code sketch of the full procedure is given below)
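A sketch of steps 1–8, keeping everything in dense NumPy arrays for clarity; k, Eps (here a shared-neighbor count), and MinPts are illustrative parameters:

```python
import numpy as np

def snn_clustering(X, k=20, eps=7, min_pts=5):
    """Steps 1-3: kNN lists and SNN similarity (shared-neighbor counts for
    points in each other's kNN lists). Steps 4-8: DBSCAN on SNN similarity."""
    n = len(X)
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    np.fill_diagonal(dist, np.inf)
    knn = [set(np.argsort(dist[i])[:k]) for i in range(n)]

    # SNN similarity: number of shared neighbors, for mutual kNN pairs only
    snn = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if j in knn[i] and i in knn[j]:
                snn[i, j] = snn[j, i] = len(knn[i] & knn[j])

    # Step 4: SNN density = number of points with SNN similarity >= eps
    snn_density = (snn >= eps).sum(axis=1)
    # Step 5: core points have SNN density greater than MinPts
    core = snn_density > min_pts
    # Step 6: core points within an SNN "radius" eps of each other share a cluster
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in np.nonzero(core)[0]:
        if labels[i] != -1:
            continue
        labels[i] = cluster
        stack = [i]
        while stack:
            u = stack.pop()
            for v in np.nonzero(core & (snn[u] >= eps))[0]:
                if labels[v] == -1:
                    labels[v] = cluster
                    stack.append(v)
        cluster += 1
    # Steps 7-8: assign each non-core point to the cluster of its most similar
    # core point if it is similar enough, otherwise leave it as noise (-1)
    for i in np.nonzero(~core)[0]:
        j = np.argmax(np.where(core, snn[i], -1))
        if core[j] and snn[i, j] >= eps:
            labels[i] = labels[j]
    return labels
```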
SNN Density
[Figure panels: a) All Points, b) High SNN Density, c) Medium SNN Density, d) Low SNN Density]
SNN Clustering Can Handle Differing Densities
[Figure: Original Points and the SNN Clustering result]
SNN Clustering Can Handle Other Difficult Situations
Finding Clusters of Time Series In Spatio-Temporal Data
[Figure: 26 SLP Clusters via Shared Nearest Neighbor Clustering (100 NN, 1982–1994), plotted by latitude and longitude — “SNN Clusters of SLP”]
[Figure: SNN Density of SLP Time Series Data — “SNN Density of Points on the Globe”]
Limitations of SNN Clustering
Does not cluster all the points
Complexity of SNN clustering is high
– O(n × time to find the number of neighbors within Eps)
– In the worst case, this is O(n²)
– For lower dimensions, there are more efficient ways to find the nearest neighbors
  ‹ R* Tree
  ‹ k-d Trees
Characteristics of Data, Clusters, and Clustering Algorithms

A cluster analysis is affected by the characteristics of
– Data
– Clusters
– Clustering algorithms
Looking at these characteristics gives us a number of dimensions that can be used to describe clustering algorithms and the results they produce
Characteristics of Data
High dimensionality
Size of data set
Sparsity of attribute values
Noise and outliers
Types of attributes and type of data sets
Differences in attribute scale
Properties of the data space
– Can you define a meaningful centroid?
Characteristics of Clusters
Data distribution
Shape
Differing sizes
Differing densities
Poor separation
Relationship of clusters
Subspace clusters
Characteristics of Clustering Algorithms
Order dependence
Non-determinism
Parameter selection
Scalability
Underlying model
Optimization-based approach
Comparison of MIN and EM-Clustering
We assume EM clustering using the Gaussian (normal) distribution.
MIN is hierarchical; EM clustering is partitional.
Both MIN and EM clustering are complete.
MIN has a graph-based (contiguity-based) notion of a cluster, while EM clustering has a prototype-based (or model-based) notion of a cluster.
MIN will not be able to distinguish poorly separated clusters, but EM can manage this in many situations.
MIN can find clusters of different shapes and sizes; EM clustering prefers globular clusters and can have trouble with clusters of different sizes.
MIN has trouble with clusters of different densities, while EM can often handle this.
Neither MIN nor EM clustering finds subspace clusters.
Comparison of MIN and EM-Clustering
MIN can handle outliers, but noise can join clusters; EM clustering can tolerate noise, but can be strongly affected by outliers.
EM can only be applied to data for which a centroid is meaningful; MIN only requires a meaningful definition of proximity.
EM will have trouble as dimensionality increases, since the number of its parameters (the number of entries in the covariance matrix) increases as the square of the number of dimensions; MIN can work well with a suitable definition of proximity.
EM is designed for Euclidean data, although versions of EM clustering have been developed for other types of data. MIN is shielded from the data type by the fact that it uses a similarity matrix.
MIN makes no distribution assumptions; the version of EM we are considering assumes Gaussian distributions.
Comparison of MIN and EM-Clustering
EM has an O(n) time complexity; MIN is O(n² log(n)).
Because of random initialization, the clusters found by EM can vary from one run to another; MIN produces the same clusters unless there are ties in the similarity matrix.
Neither MIN nor EM automatically determines the number of clusters.
MIN does not have any user-specified parameters; EM has the number of clusters and possibly the weights of the clusters.
EM clustering can be viewed as an optimization problem; MIN uses a graph model of the data.
Neither EM nor MIN is order dependent.
Comparison of DBSCAN and K-means
Both are partitional.
K-means is complete; DBSCAN is not.
K-means has a prototype-based notion of a cluster; DBSCAN uses a density-based notion.
K-means can find clusters that are not well separated; DBSCAN will merge clusters that touch.
DBSCAN handles clusters of different shapes and sizes; K-means prefers globular clusters.
Comparison of DBSCAN and K-means
DBSCAN can handle noise and outliers; K-means performs poorly in the presence of outliers.
K-means can only be applied to data for which a centroid is meaningful; DBSCAN requires a meaningful definition of density.
DBSCAN works poorly on high-dimensional data; K-means works well for some types of high-dimensional data.
Both techniques were designed for Euclidean data, but have been extended to other types of data.
DBSCAN makes no distribution assumptions; K-means really assumes spherical Gaussian distributions.
Comparison of DBSCAN and K-means
K-means has an O(n) time complexity; DBSCAN is O(n²).
Because of random initialization, the clusters found by K-means can vary from one run to another; DBSCAN always produces the same clusters.
DBSCAN automatically determines the number of clusters; K-means does not.
K-means has only one parameter; DBSCAN has two.
K-means clustering can be viewed as an optimization problem and as a special case of EM clustering; DBSCAN is not based on a formal model.