Questions Review 1, October 4, 2011
1. In the news clustering problem we computed the distance between two news entities based on their (key-)word lists A and B as follows: distance(A,B) = 1 - |A ∩ B| / |A ∪ B|, with '| |' denoting set cardinality; e.g. |{a,b}| = 2. Why do we divide by |A ∪ B| in the formula?
2. What is the main difference between ordinal and nominal attributes?
3. What role does exploratory data analysis play in a data mining project?
5. Assume we have a dataset in which the median of the first attribute is twice as large as the mean of the first attribute. What does this tell you about the distribution of the first attribute?
5. What is (are) the characteristic(s) of a good histogram (for an attribute)?
6. Assume you find out that two attributes have a correlation of 0.02; what does this tell you about the relationship of the two
attributes? Answer the same question assuming the correlation is -0.98!
7. Which of the following cluster shapes is K-means capable of discovering? a) triangles b) clusters inside clusters c) the letter 'T' d) any polygon of 5 points e) the letter 'I'
8. Assume we apply K-medoids for k=3 to a dataset consisting of 5 objects numbered 1,...,5 with the following distance matrix (upper triangle of the symmetric matrix):

         object1  object2  object3  object4  object5
object1     0        2        4        5        1
object2              0        2        3        3
object3                       0        1        5
object4                                0        2
object5                                         0

The current set of representatives is {1,3,4}; indicate all computations k-medoids (PAM) performs in its next iteration.
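A minimal Python sketch of these computations, assuming the symmetric completion of the distance matrix above and the standard PAM swap step (every medoid/non-medoid exchange is scored by the total distance of all objects to their closest representative):

# Symmetric distances for objects 1..5, read off the matrix above.
D = {(1, 2): 2, (1, 3): 4, (1, 4): 5, (1, 5): 1,
     (2, 3): 2, (2, 4): 3, (2, 5): 3,
     (3, 4): 1, (3, 5): 5, (4, 5): 2}

def dist(a, b):
    return 0 if a == b else D[(min(a, b), max(a, b))]

def cost(medoids):
    # Total cost: each object is assigned to its closest medoid.
    return sum(min(dist(o, m) for m in medoids) for o in range(1, 6))

current = {1, 3, 4}
print("current:", sorted(current), cost(current))   # cost 3
for m in current:                                   # PAM evaluates every
    for o in {1, 2, 3, 4, 5} - current:             # medoid/non-medoid swap
        cand = (current - {m}) | {o}
        print("swap", m, "<->", o, sorted(cand), cost(cand))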
9. What are the characteristics of a border point in DBSCAN?
10. If you increase the MinPts parameter of DBSCAN, how will this affect the clustering results?
11. DBSCAN supports the notion of outliers. Why is this desirable?
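To illustrate the outlier notion in question 11, a minimal sketch using scikit-learn's DBSCAN implementation (assumed available); points that are neither core points nor border points receive the noise label -1:

import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one far-away point; eps and min_samples (MinPts)
# are chosen by hand for this toy data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)),
               rng.normal(5, 0.1, (20, 2)),
               [[10.0, 10.0]]])
labels = DBSCAN(eps=0.5, min_samples=4).fit(X).labels_
print(set(labels))  # {0, 1, -1}: -1 marks the isolated point as noise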
12. What is the APRIORI property?
13. Assume the APRIORI algorithm identified the following six 4-itemsets that satisfy a user-given support threshold: abcd, acde, acdf, adfg, bcde, and bcdf; what initial candidate 5-itemsets are created by the APRIORI algorithm, and which of those survive
subset pruning?
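A small sketch of the two APRIORI steps on these itemsets, assuming the usual candidate generation (join two sorted 4-itemsets that agree on their first three items, then prune any candidate with an infrequent 4-subset):

from itertools import combinations

frequent4 = sorted(tuple(s) for s in ["abcd", "acde", "acdf", "adfg", "bcde", "bcdf"])

# Join step: merge two sorted 4-itemsets that share their first 3 items.
candidates = {a + b[-1:] for a, b in combinations(frequent4, 2) if a[:3] == b[:3]}

# Prune step (APRIORI property): every 4-subset of a candidate must be frequent.
freq = set(frequent4)
survivors = [c for c in candidates if all(s in freq for s in combinations(c, 4))]

print(candidates)  # {('a','c','d','e','f'), ('b','c','d','e','f')}
print(survivors)   # [] -- e.g. acef and bcef are not frequent, so both are pruned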
A Few Answers (Review, September 23, 2010)
1. In the news clustering problem we computed the distance between two news entities based on their (key-)word lists A and B as follows: distance(A,B) = 1 - |A ∩ B| / |A ∪ B|, with '| |' denoting set cardinality; e.g. |{a,b}| = 2. Why do we divide by |A ∪ B| in the formula?
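A small sketch of this distance (the Jaccard distance, assuming the standard definition); dividing by |A ∪ B| normalizes the overlap by the total number of distinct keywords, so the distance always lies in [0, 1] regardless of how long the word lists are:

def news_distance(A, B):
    # Jaccard distance between two keyword sets: 1 - |A ∩ B| / |A ∪ B|.
    A, B = set(A), set(B)
    return 1 - len(A & B) / len(A | B)

print(news_distance({"oil", "opec", "price"}, {"oil", "price", "war"}))  # 0.5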
2. What is the main difference between ordinal and nominal attributes?
The values of ordinal attributes are ordered, whereas the values of nominal attributes are not; this fact has to be considered when assessing similarity between two attribute values.
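A minimal sketch of the difference (the similarity functions and the rating scale below are illustrative assumptions, not from the original): nominal values can only match or mismatch, while ordinal values can be compared by their position on the scale:

def sim_nominal(a, b):
    # Nominal values are only equal or different.
    return 1.0 if a == b else 0.0

def sim_ordinal(a, b, scale):
    # Ordinal values have an order: closeness on the scale matters.
    rank = {v: i for i, v in enumerate(scale)}
    return 1.0 - abs(rank[a] - rank[b]) / (len(scale) - 1)

print(sim_nominal("red", "blue"))  # 0.0
print(sim_ordinal("good", "excellent",
                  ["poor", "fair", "good", "excellent"]))  # 0.667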
3. Name two descriptive data mining methods!
4. What are the reasons for the current popularity of knowledge discovery in commercial and scientific applications?
5. Most prediction techniques employ supervised learning approaches. Explain!
6. What role does exploratory data analysis play in a data mining project?
7. Assume we have a dataset in which the median of the first attribute is twice as large as the mean of the first attribute. What does this tell you about the distribution of the first attribute?
8. What is (are) the characteristic(s) of a good histogram (for an attribute)?
It captures the most important characteristics of the underlying density function.
9. Assume you find out that two attributes have a correlation of 0.02; what does this tell you about the relationship of the two attributes? Answer the same question assuming the correlation is -0.98!
0.02: essentially no linear relationship. -0.98: a strong negative linear relationship exists; as the value of one attribute goes up, the value of the other goes down.
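A small sketch of the caveat behind the first answer (using NumPy, assumed available): a correlation near 0 only rules out a linear relation; the attributes may still be perfectly dependent:

import numpy as np

# y is a deterministic function of x, yet the Pearson correlation is ~0
# because the relationship is nonlinear and symmetric around x = 0.
x = np.linspace(-1, 1, 101)
y = x ** 2
print(np.corrcoef(x, y)[0, 1])  # ~0.0 despite perfect dependence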
More Answers
1. The decision tree induction algorithm, discussed in class, is a greedy algorithm.
Explain!
It does not backtrack or change previously made decisions; in general, greedy algorithms focus on extending the path from the current state toward the goal state and do not revise the path already taken from the initial state to the current state.
2. Compute the Gini-gain for a 3-way split for a 3-class classification problem; the class distribution before the split is (10, 5, 5) and after the split the class distributions are (0,0,5), (9,2,0) and (1,3,0).
Gini-gain = G(1/2, 1/4, 1/4) - (5/20*G(0,0,1) + 11/20*G(9/11, 2/11, 0) + 4/20*G(1/4, 3/4, 0))
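A quick computation of this expression, as a sketch in Python:

def gini(*p):
    # Gini index of a class distribution given as probabilities.
    return 1 - sum(q * q for q in p)

before = gini(10/20, 5/20, 5/20)                    # 0.625
after = (5/20) * gini(0, 0, 1) \
      + (11/20) * gini(9/11, 2/11, 0) \
      + (4/20) * gini(1/4, 3/4, 0)
print(before - after)  # ~0.386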
3. What is overfitting? What is underfitting? What can be done to address overfitting/underfitting in decision tree induction?
Overfitting: the model is too complex, the training error is very low but the testing error
is not minimal.
Underfitting: the model is too simple, both training error and testing error are high.
4. Most decision tree learning tools use gain-ratio and not information gain; why?
5. Are decision trees suitable for classification problems involving continuous attributes
when classes have multi-modal (http://en.wikipedia.org/wiki/Multimodal)
distributions? Give reasons for your answer.
Yes, because a decision tree can split on the same continuous attribute several times, so each mode of a multi-modal class distribution can be captured by its own interval/branch.
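A small sketch (using scikit-learn's DecisionTreeClassifier, assumed available) illustrating the point: the tree thresholds the same continuous attribute more than once to carve out a bimodal class:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Class 1 occupies two separate intervals of x (a bimodal distribution);
# the tree separates it by splitting on the same attribute repeatedly.
x = np.concatenate([np.linspace(0, 1, 20), np.linspace(4, 5, 20),
                    np.linspace(2, 3, 20)]).reshape(-1, 1)
y = np.array([1] * 40 + [0] * 20)
tree = DecisionTreeClassifier().fit(x, y)
print(tree.predict([[0.5], [2.5], [4.5]]))  # [1 0 1]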