Clustering
Connectionist and Statistical Language Processing
Frank Keller
[email protected]
Computerlinguistik
Universität des Saarlandes
Literature: Witten and Frank (2000: ch. 6), Manning and Schütze (1999: ch. 14)

Overview
• clustering vs. classification
• supervised vs. unsupervised learning
• types of clustering algorithms
• the k-means algorithm
• applying clustering to word similarity
• evaluating clustering models
Clustering vs. Classification
Classification: the task is to learn to assign instances to predefined classes.
Clustering: no predefined classification is required. The task is to learn a classification from the data.
Clustering algorithms divide a data set into natural groups (clusters). Instances in the same cluster are similar to each other; they share certain properties.

Supervised vs. Unsupervised Learning
Supervised learning: classification requires supervised
learning, i.e., the training data has to specify what we are trying
to learn (the classes).
Unsupervised learning: clustering is an unsupervised task,
i.e., the training data doesn’t specify what we are trying to learn
(the clusters).
Example: to learn the parts of speech in a supervised fashion
we need a predefined inventory of POS (a tagset) and an
annotated corpus.
To learn POS in an unsupervised fashion, we only need an
unannotated corpus. The task is to discover a set of POS.
Clustering Algorithms
Clustering algorithms can have different properties:
• Hierarchical or flat: hierarchical algorithms induce a hierarchy of clusters of decreasing generality; for flat algorithms, all clusters are the same.
• Iterative: the algorithm starts with an initial set of clusters and improves them by reassigning instances to clusters.
• Hard and soft: hard clustering assigns each instance to exactly one cluster. Soft clustering assigns each instance a probability of belonging to a cluster.
• Disjunctive: instances can be part of more than one cluster.

Examples
[Figure: example clustering of instances a–k, hard, non-hierarchical, non-disjunctive]
[Figure: example clustering of instances a–k, hard, non-hierarchical, disjunctive]
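The hard/soft and disjunctive distinctions can be made concrete as data structures. The following is a minimal sketch; the hard and disjunctive groupings are invented for illustration, and only the soft probabilities for a–c are taken from the table on the next slide.

```python
# Hard clustering: each instance belongs to exactly one cluster
# (the cluster memberships below are invented for illustration).
hard_assignment = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 2}

# Soft clustering: each instance gets a probability of belonging to
# each cluster (rows sum to 1); the values for a-c are from the next slide.
soft_assignment = {
    "a": [0.4, 0.1, 0.5],
    "b": [0.1, 0.8, 0.1],
    "c": [0.3, 0.3, 0.4],
}

# Disjunctive clustering: an instance may appear in more than one cluster.
disjunctive_clusters = [{"a", "b", "c"}, {"c", "d", "e"}]  # "c" is in both
```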
Examples
[Figure: dendrogram over instances a–k, hard, hierarchical, non-disjunctive]

Soft, non-hierarchical, disjunctive:

       1     2     3
a     0.4   0.1   0.5
b     0.1   0.8   0.1
c     0.3   0.3   0.4
d     0.1   0.1   0.8
e     0.4   0.2   0.4
f     0.1   0.4   0.5
g     0.7   0.2   0.1
h     0.5   0.4   0.1

Using Clustering
Clustering can be used for:
• Exploratory data analysis: visualize the data at hand, get a feeling for what the data look like, what its properties are. First step in building a model.
• Generalization: discover instances that are similar to each other, and hence can be handled in the same way.
Example for generalization: Thursday and Friday co-occur with the preposition on in our data, but Monday doesn't.
If we know that Thursday, Friday, and Monday are syntactically similar, then we can infer that they all co-occur with on.
Properties of Clustering Algorithms
Hierarchical clustering:
• Preferable for detailed data analysis.
• Provides more information than flat clustering.
• No single best algorithm exists (different algorithms are optimal for different applications).
• Less efficient than flat clustering.

Properties of Clustering Algorithms
Flat clustering:
• Preferable if efficiency is important or for large data sets.
• k-means is conceptually the simplest method and should be used first on new data; the results are often sufficient.
• k-means assumes a Euclidean representation space; inappropriate for nominal data.
• Alternative: the EM algorithm, which uses a definition of clusters based on probabilistic models.
The k-means Algorithm
Iterative, hard, flat clustering algorithm based on Euclidean distance. Intuitive formulation:
• Specify k, the number of clusters to be generated.
• Choose k points at random as cluster centers.
• Assign each instance to its closest cluster center using Euclidean distance.
• Calculate the centroid (mean) for each cluster, and use it as the new cluster center.
• Reassign all instances to the closest cluster center.
• Iterate until the cluster centers don't change any more.

The k-means Algorithm
Each instance $\vec{x}$ in the training set can be represented as a vector of $n$ values, one for each attribute:

(1) $\vec{x} = (x_1, x_2, \ldots, x_n)$

The Euclidean distance of two vectors $\vec{x}$ and $\vec{y}$ is defined as:

(2) $|\vec{x} - \vec{y}| = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$

The mean $\vec{\mu}$ of a set of vectors $c_j$ is defined as:

(3) $\vec{\mu} = \frac{1}{|c_j|} \sum_{\vec{x} \in c_j} \vec{x}$
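A minimal Python sketch of this procedure, using the Euclidean distance (2) and the cluster mean (3). The function names, the random initialization, and the iteration cap are implementation choices made for illustration; they are not prescribed by the slides.

```python
import math
import random

def euclidean(x, y):
    """Euclidean distance between two vectors, equation (2)."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def mean(vectors):
    """Mean (centroid) of a non-empty set of vectors, equation (3)."""
    n = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(n)]

def kmeans(instances, k, max_iter=100):
    """Hard, flat clustering: repeatedly assign instances to the closest
    cluster center and recompute the centers as cluster means."""
    centers = random.sample(instances, k)            # k random cluster centers
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for x in instances:                          # assignment step
            j = min(range(k), key=lambda j: euclidean(x, centers[j]))
            clusters[j].append(x)
        new_centers = [mean(c) if c else centers[j]  # recompute centroids;
                       for j, c in enumerate(clusters)]  # keep centers of empty clusters
        if new_centers == centers:                   # centers stopped changing
            break
        centers = new_centers
    return centers, clusters

# Toy usage: two well-separated groups in the plane
points = [[1.0, 1.0], [1.2, 1.8], [1.5, 2.0], [7.0, 8.0], [7.5, 8.2], [8.0, 7.5]]
centers, clusters = kmeans(points, k=2)
```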
The k-means Algorithm
Example:
[Figure: assign instances (crosses) to the closest cluster centers (circles) according to Euclidean distance]
[Figure: recompute the cluster centers as the means of the instances in each cluster]

The k-means Algorithm
Properties of the algorithm:
• It only finds a local optimum, not a global one.
• The clusters it comes up with depend a lot on which random cluster centers are chosen initially.
• Can be used for hierarchical clustering: first apply k-means with k = 2, yielding two clusters. Then apply it again on each of the two clusters, etc. (see the sketch below).
• Distance metrics other than Euclidean distance can be used, e.g., the cosine.
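The last two bullets can be sketched on top of the kmeans function from the previous example; bisecting_kmeans and cosine_distance are illustrative names introduced here, not part of the slides.

```python
import math

def bisecting_kmeans(instances, depth):
    """Build a binary cluster hierarchy by repeatedly applying k-means with
    k = 2 to each resulting cluster (assumes kmeans from the sketch above)."""
    if depth == 0 or len(instances) < 2:
        return instances                           # leaf: a flat cluster
    _, (left, right) = kmeans(instances, k=2)      # split into two clusters
    return [bisecting_kmeans(left, depth - 1),
            bisecting_kmeans(right, depth - 1)]

def cosine_distance(x, y):
    """An alternative distance metric: one minus the cosine of the angle."""
    dot = sum(xi * yi for xi, yi in zip(x, y))
    norm = math.sqrt(sum(xi * xi for xi in x)) * math.sqrt(sum(yi * yi for yi in y))
    return 1.0 - dot / norm if norm else 1.0
```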
Applications of Clustering
Task: discover contextually similar words
First determine a set of context words, e.g., the 500 most frequent words in the corpus.
Then determine a window size, e.g., 10 words (5 to the left and 5 to the right).
For each word in the corpus, compute a co-occurrence vector that specifies how often the word co-occurs with the context words in the given window.
Use clustering to find words with similar context vectors. This can find words that are syntactically or semantically similar, depending on parameters (context words, window size).

Applications of Clustering
Example (Manning and Schütze 1999: ch. 8):
Use adjectives (rows) as context words to cluster nouns (columns) according to semantic similarity.

              cosmonaut  astronaut  moon  car  truck
Soviet            1          0        0    1     1
American          0          1        0    1     1
spacewalking      1          1        0    0     0
red               0          0        0    1     1
full              0          0        1    0     0
old               0          0        0    1     1

Cosmonaut and astronaut are similar, as are car and truck.
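A rough sketch of the recipe on the first of these two slides: build co-occurrence vectors over a window and cluster them with the kmeans function from the earlier sketch. The toy corpus, the context-word list, and the cooccurrence_vectors helper are placeholders chosen for illustration.

```python
from collections import Counter

def cooccurrence_vectors(tokens, context_words, window=5):
    """For each word, count how often each context word occurs within
    `window` tokens to its left or right; return fixed-length vectors."""
    counts = {}                                    # word -> Counter over context words
    for i, w in enumerate(tokens):
        ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        counts.setdefault(w, Counter()).update(c for c in ctx if c in context_words)
    return {w: [float(cnt[c]) for c in context_words] for w, cnt in counts.items()}

# Toy corpus and context words (placeholders, not the slides' data)
tokens = "the soviet cosmonaut and the american astronaut drove the old truck".split()
context_words = ["soviet", "american", "old"]
vectors = cooccurrence_vectors(tokens, context_words, window=5)

# Cluster the context vectors; mapping clusters back to words is omitted here
words = sorted(vectors)
centers, clusters = kmeans([vectors[w] for w in words], k=2)
```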
Evaluating Clustering Models
As with all machine learning algorithms, set the model parameters on the training set. Then test its performance on the test set, keeping the parameters constant.
Training set: the set of instances used to generate the clusters (compute the cluster centers in the case of k-means).
Test set: a set of unseen instances classified according to the clusters generated during training.
Problem: How do we determine k, i.e., the number of clusters? (Many algorithms require a fixed k, not only k-means.)
Solution: treat k as a parameter, i.e., vary k and find the best performance on a given data set.

Evaluating Clustering Models
Problem: How do we evaluate the performance on the test set? How do we know if the clusters are correct? Possible solutions:
• Test the resulting clusters intuitively, i.e., inspect them and see if they make sense. Not advisable.
• Have an expert generate clusters manually, and test the automatically generated ones against them.
• Test the clusters against a predefined classification if there is one (see the sketch below).
• Perform task-based evaluation, i.e., test if the performance of some other algorithm can be improved by using the output of the clusterer.
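One common way to score clusters against a predefined classification is purity: the fraction of instances that fall into the majority gold class of their cluster. The measure and the toy data below are illustrative additions, not taken from the slides.

```python
from collections import Counter

def purity(clusters, gold_labels):
    """Fraction of instances whose cluster's majority gold class matches their
    own. `clusters` is a list of lists of instance ids; `gold_labels` maps each
    instance id to its predefined class."""
    correct = sum(Counter(gold_labels[x] for x in c).most_common(1)[0][1]
                  for c in clusters if c)
    total = sum(len(c) for c in clusters)
    return correct / total

# Toy example: word clusters scored against a dictionary-style POS classification
clusters = [["dog", "cat", "run"], ["walk", "eat"]]
gold = {"dog": "N", "cat": "N", "run": "V", "walk": "V", "eat": "V"}
print(purity(clusters, gold))   # 4 of 5 instances match their cluster's majority class -> 0.8
```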
Evaluating Clustering Models
Example: Use a clustering algorithm to discover parts of speech in a set of words. The algorithm should group together words with the same syntactic category.
Intuitive: check if the words in the same cluster seem to have the same part of speech.
Expert: ask a linguist to group the words in the data set according to POS. Compare the result to the automatically generated clusters.
Classification: look up the words in a dictionary. Test if words in the same cluster have the same POS in the dictionary.
Task: use the clusters for parsing (for example) and test if the performance of the parser improves.

Summary
Clustering algorithms discover groups of similar instances,
instead of requiring a predefined classification. They are
unsupervised.
Depending on the application, hierarchical or flat, and hard or
soft clustering is appropriate.
The k-means algorithm assigns instances to clusters according to Euclidean distance to the cluster centers. Then it recomputes cluster centers as the means of the instances in the cluster.
Clusters can be evaluated against an external classification (expert-generated or predefined) or by task-based evaluation.
References
Manning, Christopher D., and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.
Witten, Ian H., and Eibe Frank. 2000. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. San Diego, CA: Morgan Kaufmann.