Cluster Analysis: Basic Concepts and Algorithms
Dr. Hui Xiong, Rutgers University
Introduction to Data Mining, 8/30/2006

What is Cluster Analysis?
• Finding groups of objects such that the objects in a group are similar (or related) to one another and different from (or unrelated to) the objects in other groups
– Inter-cluster distances are maximized
– Intra-cluster distances are minimized

Notion of a Cluster Can Be Ambiguous
• How many clusters? The same points can reasonably be grouped into two, four, or six clusters.
[Figure: one set of points partitioned three ways, into two, four, and six clusters]

Types of Clusterings
• A clustering is a set of clusters
• Important distinction between hierarchical and partitional sets of clusters
• Partitional clustering
– A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
• Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree

Partitional Clustering
[Figure: original points and a partitional clustering of them]

Hierarchical Clustering
[Figure: traditional and non-traditional hierarchical clusterings of points p1–p4, with the corresponding dendrograms]

Other Distinctions Between Sets of Clusters
• Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters
– Can represent multiple classes or 'border' points
• Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
• Partial versus complete
– In some cases, we only want to cluster some of the data
• Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities

Characteristics of the Input Data Are Important
• Type of proximity or density measure
– This is a derived measure, but central to clustering
• Sparseness
– Dictates type of similarity
– Adds to efficiency
• Attribute type
– Dictates type of similarity
• Type of data
– Dictates type of similarity
– Other characteristics, e.g., autocorrelation
• Dimensionality
• Noise and outliers
• Type of distribution

Clustering Algorithms
• K-means and its variants
• Hierarchical clustering
• Density-based clustering

K-means Clustering
• Partitional clustering approach
• Number of clusters, K, must be specified
• Each cluster is associated with a centroid (center point)
• Each point is assigned to the cluster with the closest centroid
• The basic algorithm is very simple; a sketch follows below

Example of K-means Clustering
[Figure: six iterations of K-means on a 2-D data set; the centroids shift at each iteration until three stable clusters emerge]
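The basic algorithm alternates between assigning points to the nearest centroid and recomputing the centroids. A minimal sketch in NumPy (illustrative only: the function and variable names are ours, and it assumes Euclidean distance and that no cluster empties out):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Basic K-means on an (n, d) array X with k clusters."""
    rng = np.random.default_rng(seed)
    # Select k points as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Assign each point to the cluster with the closest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute the centroid of each cluster (assumes no cluster is empty).
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Stop when the centroids no longer change.
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Because the result depends on the initial centroids, K-means is typically run several times and the clustering with the lowest SSE is kept.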
Limitations of K-means
• K-means has problems when clusters are of differing
– Sizes
– Densities
– Non-globular shapes
• K-means has problems when the data contains outliers

Limitations of K-means: Differing Sizes
[Figure: original points versus K-means (3 clusters)]

Limitations of K-means: Differing Density
[Figure: original points versus K-means (3 clusters)]

Limitations of K-means: Non-globular Shapes
[Figure: original points versus K-means (2 clusters)]

Hierarchical Clustering
• Produces a set of nested clusters organized as a hierarchical tree
• Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of merges or splits
[Figure: a nested clustering of six points and the corresponding dendrogram]

Strengths of Hierarchical Clustering
• Do not have to assume any particular number of clusters
– Any desired number of clusters can be obtained by 'cutting' the dendrogram at the proper level
• They may correspond to meaningful taxonomies
– Example in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)

Hierarchical Clustering
• Two main types of hierarchical clustering
– Agglomerative:
  Start with the points as individual clusters
  At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
– Divisive:
  Start with one, all-inclusive cluster
  At each step, split a cluster until each cluster contains a single point (or there are k clusters)
• Traditional hierarchical algorithms use a similarity or distance matrix
– Merge or split one cluster at a time

How to Define Inter-Cluster Similarity
[Figure: proximity matrix over points p1–p5, with each candidate definition illustrated]
• MIN
• MAX
• Group average
• Distance between centroids
• Other methods driven by an objective function
– Ward's Method uses squared error
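These linkage definitions map directly onto SciPy's agglomerative clustering routines. A sketch, assuming Euclidean distance (in SciPy's naming, MIN is "single" and MAX is "complete"):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

X = np.random.default_rng(0).random((20, 2))   # toy 2-D points
D = pdist(X)                                   # condensed proximity matrix

# MIN, MAX, group average, and Ward's Method, respectively.
for method in ("single", "complete", "average", "ward"):
    Z = linkage(D, method=method)  # merge the closest pair of clusters at each step
    labels = fcluster(Z, t=3, criterion="maxclust")  # 'cut' the dendrogram into 3 clusters
    print(method, labels)
```

Calling scipy.cluster.hierarchy.dendrogram(Z) draws the corresponding dendrogram.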
Cluster Similarity: Ward's Method
• Similarity of two clusters is based on the increase in squared error when the two clusters are merged
– Similar to group average if the distance between points is the squared distance
• Less susceptible to noise and outliers
• Biased towards globular clusters
• Hierarchical analogue of K-means
– Can be used to initialize K-means

Hierarchical Clustering: Comparison
[Figure: the same six points clustered by MIN, MAX, group average, and Ward's Method, showing how the resulting clusters differ]

Hierarchical Clustering: Time and Space Requirements
• O(N²) space, since it uses the proximity matrix
– N is the number of points
• O(N³) time in many cases
– There are N steps, and at each step the N² proximity matrix must be updated and searched
– Complexity can be reduced to O(N² log N) time with some cleverness

Hierarchical Clustering: Problems and Limitations
• Once a decision is made to combine two clusters, it cannot be undone
• No objective function is directly minimized
• Different schemes have problems with one or more of the following:
– Sensitivity to noise and outliers
– Difficulty handling clusters of different sizes and convex shapes
– Breaking large clusters

DBSCAN
• DBSCAN is a density-based algorithm
– Density = number of points within a specified radius (Eps)
– A point is a core point if it has more than a specified number of points (MinPts) within Eps
  These are points in the interior of a cluster
– A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point
– A noise point is any point that is not a core point or a border point

DBSCAN: Core, Border, and Noise Points
[Figure: core, border, and noise points illustrated for a sample Eps and MinPts]

DBSCAN Algorithm
1. Label all points as core, border, or noise points
2. Eliminate noise points
3. Put an edge between all core points that are within Eps of each other
4. Make each group of connected core points into a separate cluster
5. Assign each border point to one of the clusters of its associated core points

DBSCAN: Core, Border, and Noise Points
[Figure: original points and their point types (core, border, noise) with Eps = 10, MinPts = 4]

When DBSCAN Works Well
[Figure: original points and the clusters DBSCAN finds]
• Resistant to noise
• Can handle clusters of different shapes and sizes

When DBSCAN Does NOT Work Well
[Figure: original points, and the clusterings found with (MinPts = 4, Eps = 9.75) and (MinPts = 4, Eps = 9.92)]
• Varying densities
• High-dimensional data

DBSCAN: Determining Eps and MinPts
• The idea is that for points in a cluster, their kth nearest neighbors are at roughly the same distance
• Noise points have their kth nearest neighbor at a farther distance
• So, plot the sorted distance of every point to its kth nearest neighbor, as in the sketch below
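A sketch of this heuristic using scikit-learn; the data set, the choice of k, and the percentile used to stand in for eyeballing the knee are all illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

X = np.random.default_rng(0).random((500, 2))  # stand-in data set

k = 4  # a common choice is k = MinPts
# Distance from every point to its kth nearest neighbor
# (column 0 of the result is each point's distance to itself).
dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
k_dist = np.sort(dist[:, k])

# Cluster points lie on the flat part of this curve; noise points lie on the
# sharply rising tail. The 'knee' between the two is a reasonable Eps.
plt.plot(k_dist)
plt.xlabel("points sorted by k-dist")
plt.ylabel(f"distance to {k}th nearest neighbor")
plt.show()

eps = k_dist[int(0.95 * len(k_dist))]  # crude stand-in for reading off the knee
# Note: scikit-learn's min_samples counts the point itself; label -1 marks noise.
labels = DBSCAN(eps=eps, min_samples=k).fit_predict(X)
```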
Cluster Validity
• For supervised classification we have a variety of measures to evaluate how good our model is
– Accuracy, precision, recall
• For cluster analysis, the analogous question is how to evaluate the "goodness" of the resulting clusters
• But "clusters are in the eye of the beholder"!
• Then why do we want to evaluate them?
– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters

Clusters Found in Random Data
[Figure: uniformly random points and the "clusters" found in them by DBSCAN, K-means, and complete link]

Different Aspects of Cluster Validation
1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.
– Use only the data
4. Comparing the results of two different sets of cluster analyses to determine which is better.
5. Determining the 'correct' number of clusters.
For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.

Measures of Cluster Validity
• Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types
– External index: used to measure the extent to which cluster labels match externally supplied class labels
  Example: entropy
– Internal index: used to measure the goodness of a clustering structure without respect to external information
  Example: sum of squared error (SSE)
– Relative index: used to compare two different clusterings or clusters
  Often an external or internal index is used for this function, e.g., SSE or entropy
• Sometimes these are referred to as criteria instead of indices
– However, sometimes criterion is the general strategy and index is the numerical measure that implements the criterion

Measuring Cluster Validity Via Correlation
• Two matrices
– Proximity matrix
– Ideal similarity matrix
  One row and one column for each data point
  An entry is 1 if the associated pair of points belongs to the same cluster
  An entry is 0 if the associated pair of points belongs to different clusters
• Compute the correlation between the two matrices
– Since the matrices are symmetric, only the correlation between n(n−1)/2 entries needs to be calculated
• High correlation indicates that points that belong to the same cluster are close to each other
• Not a good measure for some density- or contiguity-based clusters

Measuring Cluster Validity Via Correlation
• Correlation of the ideal similarity and proximity matrices for the K-means clusterings of the following two data sets
[Figure: a data set with three well-separated clusters (Corr = −0.9235) and a random data set (Corr = −0.5810)]
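A sketch of this computation (the function name is ours; note that when the proximity matrix holds distances rather than similarities, a good clustering gives a strongly negative correlation, as in the figures here):

```python
import numpy as np
from scipy.spatial.distance import pdist

def cluster_validity_correlation(X, labels):
    """Correlation between the proximity matrix and the ideal similarity matrix."""
    labels = np.asarray(labels)
    prox = pdist(X)  # the n(n-1)/2 pairwise distances (upper triangle, row-major)
    # Ideal similarity over the same pairs: 1 if both points share a cluster, else 0.
    i, j = np.triu_indices(len(labels), k=1)
    ideal = (labels[i] == labels[j]).astype(float)
    return np.corrcoef(prox, ideal)[0, 1]
```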
Using Similarity Matrix for Cluster Validation
• Order the similarity matrix with respect to cluster labels and inspect visually
[Figure: three well-separated clusters and their reordered similarity matrix, which shows three bright blocks along the diagonal]

Using Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp
[Figure: reordered similarity matrix for the DBSCAN clustering of random points]

Using Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp
[Figure: reordered similarity matrix for the K-means clustering of random points]

Using Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp
[Figure: reordered similarity matrix for the complete-link clustering of random points]

Using Similarity Matrix for Cluster Validation
[Figure: reordered similarity matrix for a DBSCAN clustering of a more complicated data set with seven clusters]

Internal Measures: SSE
• Clusters in more complicated figures aren't well separated
• Internal index: used to measure the goodness of a clustering structure without respect to external information
– SSE
• SSE is good for comparing two clusterings or two clusters (average SSE)
• Can also be used to estimate the number of clusters
[Figure: a data set with ten well-separated clusters and its SSE-versus-K curve; the 'knee' in the curve suggests the number of clusters]

Internal Measures: SSE
• SSE curve for a more complicated data set
[Figure: a data set with seven irregular clusters and the SSE of the clusters found using K-means]

Framework for Cluster Validity
• Need a framework to interpret any measure
– For example, if our measure of evaluation has the value 10, is that good, fair, or poor?
• Statistics provide a framework for cluster validity
– The more "atypical" a clustering result is, the more likely it represents valid structure in the data
– Can compare the values of an index that result from random data or random clusterings to those of a clustering result
– If the value of the index is unlikely, then the cluster results are valid
– These approaches are more complicated and harder to understand
• For comparing the results of two different sets of cluster analyses, a framework is less necessary
– However, there is the question of whether the difference between two index values is significant

Statistical Framework for SSE
• Example
– Compare an SSE of 0.005 against three clusters in random data
– Histogram shows the SSE of three clusters in 500 sets of 100 random data points distributed over the range 0.2–0.8 for x and y values
[Figure: one such random data set, and the histogram of its 500 SSE values, which range from roughly 0.016 to 0.034; an SSE of 0.005 is therefore highly atypical]

Statistical Framework for Correlation
• Correlation of the ideal similarity and proximity matrices for the K-means clusterings of the following two data sets
[Figure: the well-separated data set (Corr = −0.9235) and the random data set (Corr = −0.5810)]

Internal Measures: Cohesion and Separation
• Cluster cohesion: measures how closely related the objects in a cluster are
– Example: SSE
• Cluster separation: measures how distinct or well-separated a cluster is from other clusters
• Example: squared error
– Cohesion is measured by the within-cluster sum of squares (SSE):
  WSS = Σᵢ Σ_{x∈Cᵢ} (x − mᵢ)²
– Separation is measured by the between-cluster sum of squares:
  BSS = Σᵢ |Cᵢ| (m − mᵢ)²
– where |Cᵢ| is the size of cluster i, mᵢ is the centroid of cluster i, and m is the overall mean

Internal Measures: Cohesion and Separation
• Example: SSE on the points 1, 2, 4, 5 (overall mean m = 3)
– BSS + WSS = constant
• K = 1 cluster (centroid 3):
  WSS = (1−3)² + (2−3)² + (4−3)² + (5−3)² = 10
  BSS = 4 × (3−3)² = 0
  Total = 10 + 0 = 10
• K = 2 clusters (centroids 1.5 and 4.5):
  WSS = (1−1.5)² + (2−1.5)² + (4−4.5)² + (5−4.5)² = 1
  BSS = 2 × (3−1.5)² + 2 × (4.5−3)² = 9
  Total = 1 + 9 = 10

Internal Measures: Cohesion and Separation
• A proximity-graph-based approach can also be used for cohesion and separation
– Cluster cohesion is the sum of the weights of all links within a cluster
– Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster
[Figure: a proximity graph; within-cluster edges illustrate cohesion, between-cluster edges illustrate separation]

Internal Measures: Silhouette Coefficient
• The silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings
• For an individual point i
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min (average distance of i to the points in another cluster)
– The silhouette coefficient for the point is then s = (b − a) / max(a, b)
– Typically between 0 and 1
– The closer to 1, the better
• Can calculate the average silhouette coefficient for a cluster or a clustering; a sketch follows below
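A direct transcription of these formulas; a sketch that assumes every cluster has at least two points and that there are at least two clusters (scikit-learn's silhouette_samples is the library equivalent):

```python
import numpy as np
from scipy.spatial.distance import cdist

def silhouette(X, labels):
    """Per-point silhouette coefficients s = (b - a) / max(a, b)."""
    labels = np.asarray(labels)
    D = cdist(X, X)  # full pairwise distance matrix
    n = len(X)
    s = np.zeros(n)
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, own].mean()  # average distance to the rest of i's cluster
        b = min(D[i, labels == c].mean()  # closest other cluster, on average
                for c in np.unique(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return s
```

Averaging s over one cluster or over all points gives the cluster-level and clustering-level coefficients, respectively.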
External Measures of Cluster Validity: Entropy and Purity
[Table: entropy and purity values for sample clusterings]

Final Comment on Cluster Validity
"The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage."
Algorithms for Clustering Data, Jain and Dubes