Komate AMPHAWAN
CLASSIFICATION: BASIC CONCEPT
1
Outline
• Classification: Basic Concepts
• Decision Tree Induction
• Bayes Classification Methods
• Rule-Based Classification
• Model Evaluation and Selection
• Techniques to Improve Classification Accuracy:
Ensemble Methods
• Summary
2
WHAT IS CLASSIFICATION?
3
What is Classification?
• The goal of data classification is to organize and
categorize data into distinct classes.
A model is first built from the distribution of the training data.
The model is then used to classify new data:
given the model, a class can be predicted for each new object.
4
Examples of Classification Task
• Classifying credit card transactions as
legitimate or fraudulent
• Classifying secondary structures of protein as
alpha-helix, beta-sheet, or random coil
• Categorizing news stories as finance, weather,
entertainment, sports, etc.
5
Classification vs. Prediction (1)
• Classification
predicts categorical class labels (discrete or
nominal)
constructs a model from the training set and the
values (class labels) of a classifying attribute, and
uses it to classify new data
• Prediction
models continuous-valued functions, i.e., predicts
unknown or missing values
6
Classification vs. Prediction (2)
• Typical applications
Credit approval
Target marketing
Medical diagnosis
Fraud detection
Web page categorization
7
CLASSIFICATION = LEARNING A MODEL
8
Classification = Learning a Model
9
Supervised vs. Unsupervised Learning
• Supervised learning (classification)
Supervision: The training data (observations,
measurements, etc.) are accompanied by labels
indicating the class of the observations
New data is classified based on the training set
• Unsupervised learning (clustering)
The class labels of the training data are unknown
Given a set of measurements, observations, etc., the
aim is to establish the existence of classes or
clusters in the data
10
Classification is a three-step process
1. Model construction (Learning):
Each tuple is assumed to belong to a predefined
class, as determined by one of the attributes, called
the class label.
The set of all tuples used for construction of the
model is called the training set.
The model is represented, for example, as
classification rules, a decision tree, or mathematical
formulae.
11
1. Classification Process (Learning)
12
Classification is a three-step process
2. Model Evaluation (Accuracy):
• Estimate accuracy rate of the model based on a
test set.
The known label of each test sample is compared with
the model's classification result.
The accuracy rate is the percentage of test-set samples
that are correctly classified by the model.
The test set must be independent of the training set;
otherwise over-fitting will occur.
13
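To make the evaluation step concrete, here is a minimal sketch of hold-out evaluation, assuming scikit-learn and its bundled iris data as stand-ins; nothing below comes from the slides themselves.

```python
# Minimal sketch of hold-out evaluation, assuming scikit-learn is available.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)            # any labeled dataset works here

# Keep the test set independent of the training set so the accuracy
# estimate is not inflated by over-fitting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier().fit(X_train, y_train)   # 1. model construction
y_pred = model.predict(X_test)                           # 3. model use

# 2. model evaluation: percentage of test samples whose predicted label
# matches the known label.
print("Accuracy:", accuracy_score(y_test, y_pred))
```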
2. Classification Process
(Accuracy Evaluation)
14
Classification is a three-step process
3. Model Use (Classification):
• The model is used to classify unseen objects.
Give a class label to a new tuple
Predict the value of an actual attribute
15
3. Classification Process
(Classification)
16
Framework (Supervised Learning)
17
ISSUES REGARDING CLASSIFICATION
AND PREDICTION
18
Issues: Data Preparation
• Data cleaning
Preprocess data in order to reduce noise and
handle missing values
• Relevance analysis (feature selection)
Remove the irrelevant or redundant attributes
• Data transformation
Generalize and/or normalize data
19
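As a brief illustration of these preparation steps, the sketch below uses pandas and scikit-learn as assumed tools; the file name and column names are illustrative only.

```python
# Hedged sketch of the three data-preparation steps; "customers.csv" and the
# column names are hypothetical.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("customers.csv")          # hypothetical input file

# Data cleaning: fill missing values (here, with each column's mode).
df = df.fillna(df.mode().iloc[0])

# Relevance analysis: drop an attribute judged irrelevant or redundant.
df = df.drop(columns=["customer_id"])      # assumed redundant identifier

# Data transformation: normalize numeric attributes to [0, 1].
num_cols = df.select_dtypes("number").columns
df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])
```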
Issues: Evaluating Classification
Methods (1)
• Accuracy
classifier accuracy: how well class labels are predicted
predictor accuracy: how well values of the predicted
attribute are estimated
• Speed
time to construct the model (training time)
time to use the model (classification/prediction
time)
20
Issues: Evaluating Classification
Methods (2)
• Robustness
handling noise and missing values
• Scalability
efficiency on large, disk-resident databases
• Interpretability
understanding and insight provided by the model
• Other measures
e.g., goodness of rules, such as decision tree size or
compactness of classification rules
21
CLASSIFICATION
METHODS
22
• Decision Tree Induction
• Neural Networks
• Bayesian Classification
• Associative Classifiers
• K-Nearest Neighbour
• Support Vector Machines
• Case-Based Reasoning
• Genetic Algorithms
• Rough Set Theory
• Fuzzy Sets
23
DECISION TREE
24
What is a Decision Tree?
• A decision tree is a flow-chart-like tree
structure.
• An internal node denotes a test on an attribute
• A branch represents an outcome of the test
All tuples in a branch have the same value for the
tested attribute.
• A leaf node represents a class label or a class label
distribution.
25
Training Dataset
26
A Sample Decision Tree
27
Decision-Tree Classification Methods
• The basic top-down decision tree generation
approach usually consists of two phases:
1. Tree construction
At the start, all the training examples are at the root.
Examples are then partitioned recursively based on
selected attributes.
2. Tree pruning
Aims to remove branches that may reflect noise in the
training data and cause errors when classifying test
data, thereby improving classification accuracy.
28
Decision Tree Construction
Recursive process:
• The tree starts as a single node representing all the data.
• If the samples are all of the same class, the node
becomes a leaf labeled with that class label.
• Otherwise, select the attribute that best separates the
samples into individual classes.
• Recursion stops when:
the samples in a node belong to the same class (or a majority does);
there are no remaining attributes on which to split;
there are no samples left for a branch's attribute value.
29
Pseudo code of decision tree
generation (1)
30
Pseudo code of decision tree
generation (2)
31
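The pseudocode figures are not reproduced in this transcript, so the following Python sketch of top-down induction (in the spirit of ID3/Hunt's algorithm) is an assumption about what the missing slides show, not the exact algorithm from them.

```python
# Illustrative sketch of top-down decision tree induction (ID3-style).
# Each row is a dict mapping attribute name -> value; labels is the class list.
from collections import Counter

def majority_class(labels):
    return Counter(labels).most_common(1)[0][0]

def build_tree(rows, labels, attributes, select_best):
    # 1. All samples in the node belong to the same class -> leaf with that class.
    if len(set(labels)) == 1:
        return labels[0]
    # 2. No remaining attributes to split on -> leaf with the majority class
    #    (the same rule would label a branch that receives no samples).
    if not attributes:
        return majority_class(labels)

    best = select_best(rows, labels, attributes)   # e.g. highest information gain
    tree = {best: {}}
    for value in set(row[best] for row in rows):
        idx = [i for i, row in enumerate(rows) if row[best] == value]
        sub_rows = [rows[i] for i in idx]
        sub_labels = [labels[i] for i in idx]
        remaining = [a for a in attributes if a != best]
        tree[best][value] = build_tree(sub_rows, sub_labels, remaining, select_best)
    return tree
```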
Example of decision tree
32
Choosing the Attribute to Split Data Set
• The splitting measure is also called a goodness function
• Different algorithms may use different goodness
functions:
information gain (ID3/C4.5)
• assumes all attributes are categorical.
• can be modified for continuous-valued attributes.
gini index
• assumes all attributes are continuous-valued.
• assumes there exist several possible split values for each
attribute.
• may need other tools, such as clustering, to get the possible
split values.
• can be modified for categorical attributes.
33
Information Gain (1)
• Select the attribute with the highest
information gain
• Let pi be the probability that an arbitrary tuple
in D belongs to class Ci, estimated by
|Ci, D|/|D|
• Expected information (entropy) needed to
classify a tuple in D:
Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)
34
Information Gain (2)
• Information needed (after using A to split D
into v partitions) to classify D:
Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)
• Information gained by branching on attribute A:
Gain(A) = Info(D) - Info_A(D)
35
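A minimal Python sketch of these formulas follows; the function names are illustrative, not from the slides.

```python
# Minimal sketch of entropy and information gain for categorical attributes.
import math
from collections import Counter

def info(labels):
    """Info(D): expected information (entropy) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_attribute(values, labels):
    """Info_A(D): weighted entropy after splitting on the attribute's values."""
    n = len(labels)
    return sum(
        (values.count(v) / n) * info([l for x, l in zip(values, labels) if x == v])
        for v in set(values))

def gain(values, labels):
    """Gain(A) = Info(D) - Info_A(D)."""
    return info(labels) - info_attribute(values, labels)
```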
Attribute Selection: Information Gain
Class P: buys_computer = "yes" (9 tuples)
Class N: buys_computer = "no" (5 tuples)

Info(D) = I(9,5) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940

Training data:

age      income   student  credit_rating  buys_computer
<=30     high     no       fair           no
<=30     high     no       excellent      no
31…40    high     no       fair           yes
>40      medium   no       fair           yes
>40      low      yes      fair           yes
>40      low      yes      excellent      no
31…40    low      yes      excellent      yes
<=30     medium   no       fair           no
<=30     low      yes      fair           yes
>40      medium   yes      fair           yes
<=30     medium   yes      excellent      yes
31…40    medium   no       excellent      yes
31…40    high     yes      fair           yes
>40      medium   no       excellent      no

Splitting on age:

age      p_i   n_i   I(p_i, n_i)
<=30     2     3     0.971
31…40    4     0     0
>40      3     2     0.971

Info_age(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694

(5/14) I(2,3) means "age <=30" has 5 out of 14 samples,
with 2 yes's and 3 no's. Hence

Gain(age) = Info(D) - Info_age(D) = 0.246

Similarly,
Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048
37
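The worked example can be reproduced with a short, self-contained script; the printed gains for age and student differ from the slide in the third decimal place only because the slide rounds intermediate values.

```python
# Recomputing the information gains on the 14-tuple buys_computer data.
import math
from collections import Counter

rows = [  # (age, income, student, credit_rating, buys_computer)
    ("<=30", "high", "no", "fair", "no"),
    ("<=30", "high", "no", "excellent", "no"),
    ("31…40", "high", "no", "fair", "yes"),
    (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"),
    (">40", "low", "yes", "excellent", "no"),
    ("31…40", "low", "yes", "excellent", "yes"),
    ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"),
    (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"),
    ("31…40", "medium", "no", "excellent", "yes"),
    ("31…40", "high", "yes", "fair", "yes"),
    (">40", "medium", "no", "excellent", "no"),
]
labels = [r[-1] for r in rows]

def info(ls):
    n = len(ls)
    return -sum((c / n) * math.log2(c / n) for c in Counter(ls).values())

def gain(col):
    vals = [r[col] for r in rows]
    info_a = sum(
        (vals.count(v) / len(rows)) * info([l for x, l in zip(vals, labels) if x == v])
        for v in set(vals))
    return info(labels) - info_a

for name, col in [("age", 0), ("income", 1), ("student", 2), ("credit_rating", 3)]:
    print(name, round(gain(col), 3))
# prints approximately: age 0.247, income 0.029, student 0.152, credit_rating 0.048
```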
More examples: a decision tree for playing tennis
(figure: the Thai labels on the tree correspond to "humidity" and "overcast")
41
Gain Ratio for Attribute Selection
• Information gain measure is biased towards
attributes with a large number of values
• C4.5 (a successor of ID3) uses gain ratio to
overcome the problem (normalization to
information gain)
SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \times \log_2\left(\frac{|D_j|}{|D|}\right)
• GainRatio(A) = Gain(A)/SplitInfo(A)
• Note that Gain(A) is the information gain of
attribute A
46
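A minimal sketch of gain ratio follows, reusing the list-of-values representation from the earlier entropy sketch; note that SplitInfo_A(D) is simply the entropy of the attribute's own value distribution.

```python
# Minimal sketch of gain ratio (C4.5-style); names are illustrative.
import math
from collections import Counter

def entropy(xs):
    """Entropy of any list of discrete values (class labels or attribute values)."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def gain_ratio(values, labels):
    n = len(labels)
    info_a = sum(
        (values.count(v) / n) * entropy([l for x, l in zip(values, labels) if x == v])
        for v in set(values))
    gain = entropy(labels) - info_a          # Gain(A)
    return gain / entropy(values)            # SplitInfo_A(D) = entropy of the split sizes
```

On the running buys_computer data this gives gain_ratio(income) ≈ 0.029 / 1.557 ≈ 0.019, matching the example on the next slide.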
(Training data: the same 14-tuple buys_computer table shown earlier.)
Example:
SplitInfo_income(D) = -(4/14) log2(4/14) - (6/14) log2(6/14) - (4/14) log2(4/14) = 1.557
gain_ratio(income) = 0.029 / 1.557 = 0.019
• The attribute with the maximum gain ratio is
selected as the splitting attribute
47
Gini Index (CART, IBM IntelligentMiner)
• If a data set D contains examples from n classes, gini
index, gini(D) is defined as
gini(D) = 1 - \sum_{j=1}^{n} p_j^2
where pj is the relative frequency of class j in D
• If a data set D is split on A into two subsets D1 and
D2, the gini index gini_A(D) is defined as
gini_A(D) = \frac{|D_1|}{|D|} gini(D_1) + \frac{|D_2|}{|D|} gini(D_2)
• Reduction in Impurity:
∆gini(A) = gini(D) - gini_A(D)
52
Gini Index
• The attribute that provides the smallest gini_A(D)
(or, equivalently, the largest reduction in impurity) is
chosen to split the node (this requires enumerating
all possible splitting points for each attribute)
53
Computation of Gini Index
(Training data: the same 14-tuple buys_computer table shown earlier.)
• D has 9 tuples with buys_computer = "yes" and 5
with "no":
gini(D) = 1 - (9/14)^2 - (5/14)^2 = 0.459
54
Computation of Gini Index
(Training data: the same 14-tuple buys_computer table shown earlier.)
• Suppose the attribute income partitions D into 10
tuples in D1: {low, medium} and 4 tuples in D2: {high}:
gini_income∈{low,medium}(D) = (10/14) gini(D1) + (4/14) gini(D2) = 0.443
• Gini_{low,high} is 0.458 and Gini_{medium,high} is 0.450.
Thus, split on {low, medium} (and {high}), since it
gives the lowest Gini index.
55
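The Gini figures above can be checked with a short sketch; the 14 tuples are re-encoded by hand and only the income attribute is considered.

```python
# Verifying the Gini index values on the buys_computer data (income splits only).
from collections import Counter

incomes = ["high", "high", "high", "medium", "low", "low", "low",
           "medium", "low", "medium", "medium", "medium", "high", "medium"]
labels  = ["no", "no", "yes", "yes", "yes", "no", "yes",
           "no", "yes", "yes", "yes", "yes", "yes", "no"]

def gini(ls):
    n = len(ls)
    return 1 - sum((c / n) ** 2 for c in Counter(ls).values())

def gini_split(subset):
    """gini_income(D) for the binary split D1 = tuples with income in `subset`."""
    d1 = [l for v, l in zip(incomes, labels) if v in subset]
    d2 = [l for v, l in zip(incomes, labels) if v not in subset]
    n = len(labels)
    return (len(d1) / n) * gini(d1) + (len(d2) / n) * gini(d2)

print(round(gini(labels), 3))                    # 0.459
print(round(gini_split({"low", "medium"}), 3))   # 0.443
print(round(gini_split({"low", "high"}), 3))     # 0.458
print(round(gini_split({"medium", "high"}), 3))  # 0.45
```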
Computation of Gini Index
• All attributes are assumed continuous-valued
• May need other tools, e.g., clustering, to get
the possible split values
• Can be modified for categorical attributes
56
Comparing Attribute Selection Measures
The three measures, in general, return good results, but:
• Information gain:
biased towards multivalued attributes
• Gain ratio:
tends to prefer unbalanced splits in which one partition
is much smaller than the others
• Gini index:
biased towards multivalued attributes
has difficulty when the number of classes is large
tends to favor tests that result in equal-sized partitions
and purity in both partitions
61
Other Attribute Selection Measures (1)
• CHAID: a popular decision tree algorithm,
measure based on χ2 test for independence
• C-SEP: performs better than info. gain and gini
index in certain cases
• G-statistic: has a close approximation to the χ2
distribution
• MDL (Minimal Description Length) principle (i.e.,
the simplest solution is preferred):
The best tree is the one that requires the fewest
bits to both (1) encode the tree and (2) encode the
exceptions to the tree
62
Other Attribute Selection Measures (2)
• Multivariate splits (partition based on multiple
variable combinations)
CART: finds multivariate splits based on a linear
combination of attributes
• Which attribute selection
measure is the best?
Most give good results; none is significantly superior
to the others
63
PRACTICAL ISSUES OF
CLASSIFICATION
• Underfitting and Overfitting
• Missing Values
• Costs of Classification
64
Underfitting and Overfitting
(Example)
65
Overfitting due to Noise
66
Problem of decision tree
• Overfitting: An induced tree may overfit the
training data
Too many branches, some may reflect anomalies
due to noise or outliers
Poor accuracy for unseen samples
67
Overfitting and Tree Pruning
• Two approaches to avoid overfitting
Prepruning: Halt tree construction early—do not split a
node if this would result in the goodness measure falling
below a threshold
• Difficult to choose an appropriate threshold
Postpruning: Remove branches from a “fully grown”
tree—get a sequence of progressively pruned trees
• Use a set of data different from the training data to decide
which is the “best pruned tree”
68
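As one concrete (assumed) realization of these two approaches, scikit-learn's decision tree exposes prepruning thresholds as constructor parameters and supports postpruning through cost-complexity pruning; the sketch below picks the pruned tree that scores best on data held out from training.

```python
# Hedged sketch: prepruning and postpruning with scikit-learn's decision tree;
# the slides do not prescribe a particular tool or dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Prepruning: halt growth early via thresholds (choosing them is the hard part).
pre = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10,
                             min_impurity_decrease=0.01).fit(X_tr, y_tr)

# Postpruning: compute the cost-complexity pruning path of a fully grown tree,
# then use held-out data to pick the "best pruned tree".
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)
alphas = [max(a, 0.0) for a in path.ccp_alphas]   # clip floating-point noise
best = max((DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_tr, y_tr)
            for a in alphas),
           key=lambda tree: tree.score(X_val, y_val))

print("prepruned:", pre.score(X_val, y_val), "postpruned:", best.score(X_val, y_val))
```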
Enhancements to Basic Decision Tree
Induction (1)
• Allow for continuous-valued attributes
Dynamically define new discrete-valued attributes
that partition the continuous attribute value into a
discrete set of intervals
• Handle missing attribute values
Assign the most common value of the attribute
Assign probability to each of the possible values
69
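A small sketch of the two missing-value strategies listed above, using pandas as an assumed tool and an illustrative attribute name; C4.5 itself passes fractional cases down each branch, so the probabilistic strategy here is only a simple approximation.

```python
# Hedged sketch of missing-value handling; "income" is an illustrative attribute.
import pandas as pd

df = pd.DataFrame({"income": ["high", "low", None, "medium", "low", None]})

# Strategy 1: assign the most common value of the attribute.
most_common = df["income"].mode()[0]
filled_mode = df["income"].fillna(most_common)

# Strategy 2: assign each possible value with probability proportional to its
# observed frequency (a sampling approximation).
probs = df["income"].value_counts(normalize=True)
n_missing = df["income"].isna().sum()
sampled = probs.sample(n=n_missing, weights=probs, replace=True, random_state=0).index
filled_prob = df["income"].copy()
filled_prob[filled_prob.isna()] = list(sampled)
```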
Enhancements to Basic Decision Tree
Induction (2)
• Attribute construction
Create new attributes based on existing ones that
are sparsely represented
This reduces fragmentation, repetition, and
replication
70
Classification in Large Databases (1)
• Classification—a classical problem extensively
studied by statisticians and machine learning
researchers
• Scalability: Classifying data sets with millions
of examples and hundreds of attributes with
reasonable speed
71
Classification in Large Databases (2)
• Why decision tree induction in data mining?
relatively faster learning speed (than other
classification methods)
convertible to simple and easy to understand
classification rules
can use SQL queries for accessing databases
comparable classification accuracy to other
methods
72
Scalable Decision Tree Induction
Methods
• SLIQ (EDBT’96 — Mehta et al.)
Builds an index for each attribute; only the class
list and the current attribute list reside in memory
• SPRINT (VLDB’96 — J. Shafer et al.)
Constructs an attribute list data structure
• PUBLIC (VLDB’98 — Rastogi & Shim)
Integrates tree splitting and tree pruning: stop
growing the tree earlier
73
Scalable Decision Tree Induction
Methods
• RainForest (VLDB’98 — Gehrke,
Ramakrishnan & Ganti)
Builds an AVC-list (attribute, value, class label)
• BOAT (PODS’99 — Gehrke, Ganti,
Ramakrishnan & Loh)
Uses bootstrapping to create several small
samples
74
Presentation of Classification Results
75
Visualization of a Decision Tree in SGI/MineSet 3.0
76
Interactive Visual Mining by Perception-Based Classification
(PBC)
77
SUMMARY OF DECISION TREE
78
Decision Tree Classification Task
79
Apply Model to Test Data
80
Decision Tree Induction
• Many Algorithms:
Hunt’s Algorithm (one of the earliest)
CART
ID3, C4.5
SLIQ, SPRINT
85
Q&A
86