CS 277: Data Mining
Notes on Classification
Padhraic Smyth
Department of Computer Science
University of California, Irvine
Review
• Models that are linear in parameters b, e.g.,
  y = b0 + b1 x1 + b2 x2 + b12 x1 x2
  With a least-squares objective function -> solving a set of linear equations
• Models that are non-linear in parameters, e.g., logistic
  y = 1 / (1 + exp[ -(b0 + b1 x1 + b2 x2 + b12 x1 x2) ])
  Solution requires non-linear optimization methods, e.g., iterative search
Optimization: Gradient Ascent (or Descent)
– Select an initial b heuristically or randomly
– Update:
  • Compute the local gradient of the objective function S(b):
    ∇S(b) = [ ∂S(b)/∂b1 , … , ∂S(b)/∂bp ]
    – Gives the direction of maximum increase of S (at this point in b space)
  • Move b a small distance in this direction, i.e., uphill:
    b -> b + λ ∇S(b)
    λ = learning rate (e.g., 0.1)
• Repeat until convergence
  – e.g., b is no longer changing, ∇S(b) ≈ 0, i.e., we are at a (local) maximum
  – Repeat from different initial conditions if local maxima exist in S(b)
  – Many different versions (e.g., batch, sequential/stochastic, etc.)
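To make the update rule concrete, here is a minimal gradient-ascent sketch in Python; the quadratic toy objective, the fixed learning rate, and the convergence test are illustrative assumptions rather than anything prescribed in the slides.

import numpy as np

def gradient_ascent(grad_S, b0, lr=0.1, tol=1e-6, max_iter=1000):
    """Generic gradient ascent: repeatedly move b uphill along the gradient of S."""
    b = np.asarray(b0, dtype=float)
    for _ in range(max_iter):
        step = lr * grad_S(b)
        b = b + step
        if np.linalg.norm(step) < tol:   # convergence: b is no longer changing
            break
    return b

# Toy objective S(b) = -(b1 - 1)^2 - (b2 + 2)^2, maximized at b = (1, -2)
grad_S = lambda b: np.array([-2 * (b[0] - 1.0), -2 * (b[1] + 2.0)])
print(gradient_ascent(grad_S, b0=[0.0, 0.0]))   # approx [ 1. -2.]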
Optimization: Newton’s Method
• Use 2nd-order derivative information in the update rule:
  b -> b + H⁻¹(b) ∇S(b)
  where H(b) is the Hessian matrix, a p x p matrix of 2nd derivatives of S(b) evaluated at b
• Requires O(Np²) computations to compute the Hessian, O(p³) to invert
  – Can approximate with the diagonal, which yields O(Np)
• Gives the optimal convergence rate if S(b) is quadratic
  – May be particularly helpful near a minimum (or maximum) (think Taylor series expansion)
• For more discussion see Section 8.3 in the text
Model Evaluation
• Let MSEtest be the mean-squared error of our learned predictor function, evaluated on test data
• Useful to report MSEtest / MSEbaseline
  – e.g., where MSEbaseline = Σi [ y(i) – μy ]²   (on test data points)
    where μy = mean of the y values on the training data
  – Ideally we would like MSEtest / MSEbaseline to be much less than 1
• Can also plot histograms of individual errors: MSE might be dominated by outliers
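A small sketch (assuming numpy arrays y_train, y_test, y_pred) of the MSEtest / MSEbaseline ratio described above:

import numpy as np

def mse_ratio(y_train, y_test, y_pred):
    """Ratio of test MSE to the baseline MSE of always predicting the training mean."""
    mse_test = np.mean((y_test - y_pred) ** 2)
    mu_y = np.mean(y_train)                       # mean of y on the training data
    mse_baseline = np.mean((y_test - mu_y) ** 2)  # baseline evaluated on the test points
    return mse_test / mse_baseline

# e.g., mse_ratio(y_train, y_test, model_predictions) -> ideally much less than 1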
Classification
• Predictive modeling: predict Y given X
  – Y is real-valued => regression
  – Y is categorical => classification
  • Often use C rather than Y to indicate the “class variable”
• Classification
  – Many applications: speech recognition, document classification, OCR, loan approval, face recognition, etc.
Classification v. Regression
•
Similar in many ways…
– both learn a mapping from X to C or Y
– Both sensitive to dimensionality of X
– Generalization to new data is important in both
• Test error versus model complexity
– Many models can be used for either classification or regression, e.g.,
• trees, neural networks
•
Most important differences
– Categorical Y versus real-valued Y
– Different score functions
• E.g., classification error versus squared error
Decision Region Terminology
[Figure: two-class data in a two-dimensional feature space (Feature 1 vs. Feature 2), showing Decision Region 1, Decision Region 2, and the Decision Boundary between them]
Probabilistic View of Classification
• Notation: K classes c1, …, cK
• Class probabilities: p(ck) = probability of class k
• Class-conditional probabilities:
  p( x | ck ) = probability of x given ck , k = 1, …, K
• Posterior class probabilities (by Bayes rule):
  p( ck | x ) = p( x | ck ) p(ck) / p(x) , k = 1, …, K
  where p(x) = Σj p( x | cj ) p(cj)
In theory this is all we need….in practice this may not be best approach.
Bayes Rule for Classification
Consider 2 class case c1, c2
Goal of classification: given x, predict c1 or c2
Optimal decision rule: choose c1 if p(c1 | x) > 0.5, otherwise choose c2
=> we would like to know p(c1 | x),
By Bayes rule,
  p( c1 | x ) = p( x | c1 ) p(c1) / p(x)
              = p( x | c1 ) p(c1) / ( p( x | c1 ) p(c1) + p( x | c2 ) p(c2) )
              = p( x , c1 ) / ( p( x , c1 ) + p( x , c2 ) )
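A minimal numeric sketch of this two-class computation; the Gaussian class-conditional densities and the equal priors below are illustrative assumptions, not from the slides.

import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_c1(x, prior_c1=0.5, prior_c2=0.5):
    """p(c1 | x) = p(x, c1) / (p(x, c1) + p(x, c2)) with assumed Gaussian class models."""
    p_x_c1 = gaussian_pdf(x, mu=0.0, sigma=1.0)   # assumed p(x | c1)
    p_x_c2 = gaussian_pdf(x, mu=2.0, sigma=1.0)   # assumed p(x | c2)
    joint1 = p_x_c1 * prior_c1                    # p(x, c1)
    joint2 = p_x_c2 * prior_c2                    # p(x, c2)
    return joint1 / (joint1 + joint2)

print(posterior_c1(0.5))   # > 0.5, so the optimal rule predicts c1
print(posterior_c1(1.5))   # < 0.5, so predict c2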
Probabilistic Classification for 1-dimensional x
[Figure: the joint densities p( x , c1 ) and p( x , c2 ) for a 1-dimensional x]
Note that p( x , c ) = p( x | c ) p(c)
Probabilistic Classification for 1-dimensional x
[Figure: the joint densities p( x , c1 ) and p( x , c2 ) together with the posterior p( c1 | x ), which ranges from 0 to 1 and crosses 0.5 where the two joint densities are equal]
Decision Regions and Bayes Error Rate
[Figure: the joint densities p( x , c1 ) and p( x , c2 ) for 1-dimensional x, with the x-axis partitioned into regions labelled Class c1 and Class c2]
Optimal decision regions = regions where one class is more likely
Optimal decision regions → optimal decision boundaries
Bayes error rate = fraction of examples misclassified by the optimal classifier
                 = shaded area above (see equation 10.3 in the text)
Procedure for optimal Bayes classifier
•
For each class learn a model p( x | ck )
– E.g., each class is multivariate Gaussian with its own mean and covariance
•
Use Bayes rule to obtain p( ck | x )
=> this yields the optimal decision regions/boundaries
=> use these decision regions/boundaries for classification
•
Correct in theory…. but practical problems include:
– How do we model p( x | ck ) ?
– Even if we know the model for p( x | ck ), modeling a distribution or
density will be very difficult in high dimensions (e.g., p = 100)
•
Alternative approach: model the decision boundaries directly
3 Types of Classifiers
•
Generative (or class-conditional) classifiers:
– Learn models for p( x | ck ), use Bayes rule to find decision boundaries
– Examples: naïve Bayes models, Gaussian classifiers
•
Regression-based classifiers:
– Learn a model for p( ck | x ) directly
– Example: logistic regression, neural networks
•
Discriminative classifiers
  – No probabilities
  – Learn the decision boundaries directly
  – Examples:
    • Linear boundaries: perceptrons, linear SVMs
    • Piecewise linear boundaries: decision trees, nearest-neighbor classifiers
    • Non-linear boundaries: non-linear SVMs
  – Note: one can usually “post-fit” class probability estimates p( ck | x ) to a discriminative classifier, e.g., often done with SVMs
Generative Classifier
[Figure: a generative classifier models the joint densities p( x , c1 ) and p( x , c2 ) and derives the decision boundary from them]
Regression-based Classifier
[Figure: a regression-based classifier models the posterior p( c1 | x ) directly, a curve ranging from 0 to 1]
Discriminative Classifier
[Figure: a discriminative classifier models only the location of the decision boundary, not the densities or the posterior]
What type of cost function is appropriate?
• Let's look at the score functions:
  – c(i) = true class, c( x(i) ; θ ) = class predicted by the classifier
Class-mismatch loss functions:
  S(θ) = 1/n Σi Cost[ c(i), c( x(i) ; θ ) ]
  where cost(i, j) = cost of misclassifying true class i as predicted class j
  e.g., cost(i, j) = 0 if i = j, = 1 otherwise (misclassification error or 0-1 loss)
  and more generally cost(i, j) is a K x K matrix of losses (e.g., surgery, spam email, etc.)
Class-probability loss functions, c = 0 or 1:
  S(θ) = 1/n Σi log p( c(i) | x(i) ; θ )   (log probability score)
  or S(θ) = 1/n Σi [ c(i) – p( c(i) | x(i) ; θ ) ]²   (Brier score)
Example: cost functions for classifying spam email
• 0-1 loss function
– Appropriate if we just want to maximize accuracy
• Asymmetric cost matrix
– Appropriate if missing non-spam emails is more “costly” than failing to
detect spam emails
•
Probability loss
– Appropriate if we wanted to rank all emails by p(spam | email features),
e.g., to allow the user to look at emails via a ranked list.
•
In general: don’t solve a harder problem than you need to, or don’t model
aspects of the problem you don’t need to
Examples of Classifiers
•
Generative/class-conditional/probabilistic, based on
p( x | ck ),
– Naïve Bayes (simple, but often effective in high dimensions)
– Parametric generative models, e.g., Gaussian (can be effective in low-dimensional problems: leads to quadratic boundaries in general)
•
Regression-based, model p( ck | x ) directly
– Logistic regression: simple, linear in “odds” space, widely used in industry
– Neural network: non-linear extension of logistic, can be difficult to work with
•
Discriminative models, focus on locating optimal decision boundaries
– Linear discriminants, perceptrons: simple, sometimes effective
– Support vector machines: generalization of linear discriminants, can be quite
effective, computational complexity can be an issue
– Nearest neighbor: simple, can scale poorly in high dimensions
– Decision trees: often effective in high dimensions, but biased
Generative Classifiers
(classifiers that estimate p(x | c) and then use Bayes rule to compute p(c | x))
A Generative Classifier: Naïve Bayes
• Generative probabilistic model with conditional independence
assumption on p( x | ck ), i.e.,
  p( x | ck ) = Πj p( xj | ck )
or,
  log p( x | ck ) = Σj log p( xj | ck )
•
Useful in high-dimensional problems
•
Typically used with nominal or ordinal variables
– Real-valued variables discretized to create ordinal versions
• e.g., Supervised and unsupervised discretization of continuous features,
Dougherty, Kohavi, and Sahami, ICML 1995
– alternative for real-valued x is to model each p( xj | ck ) with a parametric
density model, e.g., Gaussian. Less widely used.
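A minimal naïve Bayes sketch in Python for nominal features; the Laplace smoothing constant alpha is an added assumption, since the slides do not specify an estimator.

import numpy as np
from collections import defaultdict

def train_naive_bayes(X, y, alpha=1.0):
    """Estimate p(c) and p(x_j = v | c) from nominal data; alpha = Laplace smoothing."""
    classes, counts = np.unique(y, return_counts=True)
    priors = {c: n / len(y) for c, n in zip(classes, counts)}
    cond = defaultdict(dict)   # cond[(j, c)][v] = p(x_j = v | c)
    for j in range(X.shape[1]):
        values = np.unique(X[:, j])
        for c in classes:
            Xc = X[y == c, j]
            for v in values:
                cond[(j, c)][v] = (np.sum(Xc == v) + alpha) / (len(Xc) + alpha * len(values))
    return priors, cond

def predict_naive_bayes(x, priors, cond):
    """Pick the class maximizing log p(c) + sum_j log p(x_j | c)."""
    def score(c):
        return np.log(priors[c]) + sum(np.log(cond[(j, c)].get(v, 1e-9))
                                       for j, v in enumerate(x))
    return max(priors, key=score)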
A Generative Classifier: Naïve Bayes
• Comments:
– Simple to train (just estimate conditional probabilities for each feature-class pair)
– Often works surprisingly well in practice
• e.g., good baseline for text classification, basis of many widely used spam filters
– Feature selection can be helpful,
• e.g., select the K best individual features
– Note that even if independence assumptions are not met, it may still be able to
approximate the optimal decision boundaries (seems to happen in practice)
• See On the optimality of the simple Bayesian classifier under zero-one loss, Domingos and Pazzani, Machine Learning, 1997
– However…. on most problems can usually be beaten with a more complex model
(plus more work)
Regression-Based Classifiers
(classifiers that estimate p(c | x) directly)
Regression-based Classification
•
Consider regression once again, but where y now takes values 0 or 1
•
Regression will try to learn a function f to approximate E[y | x] at each x
•
For binary y we have
  E[y | x] = Σy y p( y | x )
           = p(y=1|x) · 1 + p(y=0|x) · 0
           = p(y=1|x)
=> For binary classification problems a regression model will try to
approximate p(y=1|x) (posterior class probabilities)
– e.g., this is what logistic regression and neural networks do
Predicting an output between 0 and 1
•
We often have a problem where y lies between 0 and 1
– probability that a patient with attributes X will survive 10 years
– proportion of people in Zip code X who will buy a product
•
We could use linear regression, but…..
•
Instead we can use the logistic function
log [ p(y=1|x) / p(y=0|x) ] = b0 + Σj bj xj
Equivalently,
p(y=1|x) = 1 / [ 1 + exp( - b0 - Σj bj xj ) ]
We model the log-odds as a linear function of the input variables. This is
known as logistic regression.
(Note: neural networks can be thought of as multi-layer logistic models)
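A small sketch of the logistic model above; the weights b0 and b are assumed given here (fitting is covered on the following slides).

import numpy as np

def p_y1_given_x(x, b0, b):
    """Logistic regression: p(y=1 | x) = 1 / (1 + exp(-(b0 + b.x)))."""
    z = b0 + np.dot(b, x)            # the log-odds, a linear function of the inputs
    return 1.0 / (1.0 + np.exp(-z))

# Example with assumed weights: log-odds = -1 + 2*x1 - 0.5*x2
print(p_y1_given_x(np.array([1.0, 2.0]), b0=-1.0, b=np.array([2.0, -0.5])))  # sigmoid(0) = 0.5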
1-dimensional case
p(y=1| x ) = 1/[1 + exp(- b0 - b x ) ]
For simplicity assume b’s are both >0
As x -> + infinity, p(y=1 | x) -> 1
As x -> - infinity, p(y=1 | x) -> 0
P(y=1|x) = 0.5 when - b0 - b x = 0, i.e., x = - b0 / b
- location of the logistic curve is controlled by - b0 / b
- steepness of the curve is controlled by b
Likelihood-based Objective Function
• Conditional Log-Likelihood
– likelihood = probability of observed data
– Select parameters to maximize the (log) likelihood of the y’s given the
x’s (“conditional maximum likelihood”)
S(b) = Σi log p( y(i) | x(i) ; b )
     = Σi y(i) log p( y(i)=1 | x(i) ; b ) + [ 1 - y(i) ] log( 1 - p( y(i)=1 | x(i) ; b ) )
Fitting a Logistic Regression Model
• Iterative Reweighted Least Squares (IRLS)
– Can compute the 2nd derivative directly as a weighted matrix
• Forms the basis for an iterative 2nd order Newton scheme
• Each iteration is equivalent to a weighted regression problem, O(p3)
– see (e.g.) Komarek and Moore (2005) for speedups for sparse data
• Known as iteratively reweighted least-squares
• Log-likelihood here is convex: so it is quite stable (only one global
maximum!).
• Stochastic gradient descent
– Often faster for large data sets (large N, large p)
– See notes by Charles Elkan for reference
• http://cseweb.ucsd.edu/~elkan/250B/logreg.pdf
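A hedged sketch of stochastic gradient ascent on the conditional log-likelihood above; it follows the general recipe rather than any specific implementation from the references.

import numpy as np

def fit_logistic_sgd(X, y, lr=0.1, epochs=50, seed=0):
    """Maximize sum_i log p(y_i | x_i; b) one example at a time."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    b0, b = 0.0, np.zeros(p)
    for _ in range(epochs):
        for i in rng.permutation(n):                    # visit examples in random order
            p1 = 1.0 / (1.0 + np.exp(-(b0 + X[i] @ b)))
            err = y[i] - p1                             # gradient of this log-likelihood term
            b0 += lr * err
            b += lr * err * X[i]
    return b0, b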
Link between Logistic Regression and Naïve Bayes
Logistic Regression:
  log [ P(C | d) / P(¬C | d) ] = Σ_{w ∈ d} βw

Naïve Bayes:
  log [ P(C | d) / P(¬C | d) ] = log [ P(C) / P(¬C) ] + Σ_{w ∈ d} log [ P(w | C) / P(w | ¬C) ]
Evaluating Classifiers
•
Evaluate on independent test data (as with regression)
•
Measures of performance on test data:
– Classification accuracy (or error)
• or cost function if “costs” of errors are not symmetric
• Confusion matrices:
– K x K matrix where entry(i,j) contains number of test examples that were predicted
to be class i, and truly belonged to class j
– Diagonal elements = examples classified correctly
– Off-diagonal elements = misclassified examples
– Useful with more than 2 classes for figuring out which classes are most “confused”
– Log-probability score on test data
• Useful if we want to measure how good (well-calibrated) the p(c|x) estimates are
– Ranking performance
• How well does a classifier rank new examples?
– Receiver-operating characteristics
– Lift curves
Imbalanced Class Distributions
•
Common in data mining to have one class be much less likely than the
others
– e.g., 0.1% of examples are fraudulent or have a disease
•
If we train a standard classifier on a random sample of data it is very
difficult to beat the “majority classifier” in terms of accuracy
•
Approaches:
– Stratified sampling: artificially create training data with 50% of each class being
present, and then “correct” for this in prediction
• E.g., learn p(x|c) on stratified data and use true p( c ) when predicting with a
probabilistic model
– Use a different score function:
• We are often interested in scoring/screening/ranking cases when using the model
• Thus, scores such as “how many of the class of interest are ranked in the top 1% of
predictions” may be more relevant than overall accuracy (e.g., in document retrieval)
Ranking and Lift Curves
• Many problems where we are interested in ranking examples in terms of how likely they are to belong to the “positive” class
– E.g., credit scoring, fraud detection, medical screening, document
retrieval
– E.g., use classifier to rank N test examples according to p(c|x) and then
pick the top K, where K is much smaller than N
• Lift curve
– n = number of true positives that appear in top K% of ranked list
– r = number of true positives that would appear if we ranked randomly
– n/r is the “lift” provided by the classifier for top K%
• e.g., K = 10%, r = 200, n = 300, lift = 1.5, or 50% increase in lift
• Random ranking gives lift = 1, or 0% increase in lift
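A small sketch of the lift computation just described, assuming y_true is a 0/1 array and scores are the classifier's ranking scores:

import numpy as np

def lift_at_k(y_true, scores, k_frac=0.10):
    """Lift = true positives in the top k% of the ranking / expected count under random ranking."""
    order = np.argsort(-scores)                        # rank examples, highest score first
    top = order[: max(1, int(k_frac * len(scores)))]
    n = np.sum(y_true[top])                            # true positives in the top k%
    r = np.sum(y_true) * k_frac                        # expected if ranked randomly
    return n / r

# e.g., lift_at_k(y_test, model_scores, 0.10) -> 1.5 means a 50% improvement over random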
• Target variable = response/no-response from mailing campaign
• Training and test sets each of size 250k
• Standard model had 80 variables: variable selection reduced this to 7
• Note non-monotonicity in lower curve (undesirable)
Receiver Operating Characteristic (ROC) plots
• Rank the N test examples by p(c|x)
– or whatever real-number our classifier produces that indicates likelihood
of belonging to class 1
• Let k = number of true class 1 examples, and m = number of true
class 0 examples, and k+m = N
• For all possible thresholds t for this ranked list
– count number of true positives kt
• true positive rate = kt /k
– count number of “false alarms”, mt
• false positive rate = mt /m
– ROC plot = plot of true positive rate kt/k vs. false positive rate mt/m
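A minimal sketch of building the ROC curve by sweeping a threshold down a ranked list; the toy ranking below matches the example on the following slide.

import numpy as np

def roc_points(y_true, scores):
    """Return (false positive rate, true positive rate) pairs for every threshold."""
    order = np.argsort(-scores)            # sort examples by decreasing score
    y = np.asarray(y_true)[order]
    k, m = y.sum(), len(y) - y.sum()       # total class-1 and class-0 examples
    tp = np.cumsum(y)                      # k_t: true positives above each threshold
    fp = np.cumsum(1 - y)                  # m_t: false alarms above each threshold
    return fp / m, tp / k

fpr, tpr = roc_points([1, 1, 1, 1, 0, 1, 0, 1, 0, 0], np.arange(10, 0, -1))
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoidal area under the curve
print(auc)                                               # 0.875 for this ranking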
ROC Example
N = 10 examples,
k = 6 true class 1’s,
m = 4 class 0’s
The first column is a
possible ranking
from a classifier
Rank   True Class   True Positives   False Positives
 1         1              1                 0
 2         1              2                 0
 3         1              3                 0
 4         1              4                 0
 5         0              4                 1
 6         1              5                 1
 7         0              5                 2
 8         1              6                 2
 9         0              6                 3
10         0              6                 4
ROC Plot
[Figure: ROC plot for the example above; the diagonal line corresponds to random ranking]
• Area under the curve (AUC) is often used as a metric to summarize an ROC plot
• Online example at http://www.anaesthetist.com/mnm/stats/roc/
Calibration
• In addition to ranking we may be interested in how accurate our
estimates of p(c|x) are,
– i.e., if the model says p(c|x) = 0.9, how accurate is this number?
• Calibration:
– a model is well-calibrated if its probabilistic predictions match real-world empirical frequencies
– i.e., if a classifier predicts p(c|x) = 0.9 for 100 examples, then on average we would expect about 90 of these examples to belong to class c, and 10 not to
– We can estimate calibration curves by binning a classifier’s probabilistic predictions, and measuring how many examples in each bin actually belong to the predicted class
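A minimal sketch of the binning procedure just described; the use of 10 equal-width bins is an assumption (p_pred are predicted p(c|x) values, y_true are 0/1 labels).

import numpy as np

def calibration_curve(p_pred, y_true, n_bins=10):
    """For each probability bin, compare the mean prediction to the empirical frequency."""
    p_pred, y_true = np.asarray(p_pred), np.asarray(y_true)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    which = np.clip(np.digitize(p_pred, bins) - 1, 0, n_bins - 1)
    curve = []
    for b in range(n_bins):
        mask = which == b
        if mask.any():
            curve.append((p_pred[mask].mean(), y_true[mask].mean()))  # (predicted, observed)
    return curve   # a well-calibrated model has observed ≈ predicted in each bin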
Calibration in Probabilistic Prediction
Discriminative Classifiers
Nearest Neighbor Classifiers
•
kNN: select the k nearest neighbors to x from the training data and select
the majority class from these neighbors
•
k is a parameter:
•
Comments
– Small k: “noisier” estimates, Large k: “smoother” estimates
– Best value of k often chosen by cross-validation
– Virtually assumption free
– Gives piecewise linear boundaries (i.e., non-linear overall)
– Interesting theoretical properties:
Bayes error < error(kNN) < 2 x Bayes error (asymptotically)
•
Disadvantages
– Can scale poorly with dimensionality: sensitive to distance metric
– Requires fast lookup at run-time to do classification with large n
– Does not provide any interpretable “model”
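A minimal kNN sketch with Euclidean distance and majority vote, as described above; the brute-force neighbor search is a simplifying assumption (real implementations use faster lookup structures).

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))   # Euclidean distances to all points
    nearest = np.argsort(dists)[:k]                      # indices of the k closest neighbors
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# e.g., knn_predict(X_train, y_train, x_new, k=5); k is usually chosen by cross-validation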
Local Decision Boundaries
Boundary? Points that are equidistant
between points of class 1 and 2
Note: locally the boundary is
(1) linear (because of Euclidean distance)
(2) halfway between the 2 class points
(3) at right angles to connector
[Figure: training points from classes 1 and 2 plotted in the (Feature 1, Feature 2) plane, with a query point marked “?”]
Finding the Decision Boundaries
[Figure: local boundary segments drawn as perpendicular bisectors between neighboring points of class 1 and class 2]
Overall Boundary = Piecewise Linear
[Figure: the overall nearest-neighbor boundary is piecewise linear, separating the Decision Region for Class 1 from the Decision Region for Class 2]
Example: Choosing k in kNN
(example from G. Ridgeway, 2003)
Decision Tree Classifiers
– Widely used in practice
• Can handle both real-valued and nominal inputs (unusual)
• Good with high-dimensional data
– similar algorithms as used in constructing regression trees
– historically, developed both in statistics and computer science
• Statistics:
– Breiman, Friedman, Olshen and Stone, CART, 1984
• Computer science:
– Quinlan, ID3, C4.5 (1980’s-1990’s)
Decision Tree Example
[Figure (progressive build): loan applicants plotted by Debt vs. Income; the space is split first by Income > t1, then by Debt > t2, then by Income > t3, with each split drawn as an axis-parallel line at the corresponding threshold]
Note: tree boundaries are piecewise linear and axis-parallel
Decision Tree Pseudocode
node = tree-design(Data = {X,C})
   if Data is pure enough to be a leaf
      return(node)
   For i = 1 to d
      quality_variable(i) = quality_score(Xi, C)
   end
   node = {X_split, Threshold} for max{quality_variable}
   {Data_left, Data_right} = split(Data, X_split, Threshold)
   node.left  = tree-design(Data_left)
   node.right = tree-design(Data_right)
   return(node)
Binary split selection criteria
• Q(t) = N1 Q1(t) + N2 Q2(t), where t is the threshold and N1, N2 are the numbers of points sent to each branch
• Let p1k be the proportion of class k points in region 1
• Error criterion for a branch:
  Q1(t) = 1 - p1k*   (where k* is the most common class in region 1)
• Gini index:
  Q1(t) = Σk p1k (1 - p1k)
• Cross-entropy:
  Q1(t) = - Σk p1k log p1k
• Cross-entropy and Gini work better in practice than direct minimization of classification error at each node
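A small sketch of the three per-branch quality scores above, computed from the class proportions in one branch:

import numpy as np

def branch_quality(labels):
    """Return (misclassification error, Gini index, cross-entropy) for one branch."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()                  # class proportions p_1k in this region
    error = 1.0 - p.max()                      # 1 - proportion of the most common class
    gini = np.sum(p * (1.0 - p))
    entropy = -np.sum(p * np.log(p))
    return error, gini, entropy

print(branch_quality([0, 0, 0, 1]))   # a fairly pure branch: all three scores are low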
Computational Complexity for a Binary Tree
• At the root node, for each of p variables
– Sort all values, compute quality for each split
– O(pN log N) time for real-valued or ordinal variables
• Subsequent internal node operations each take O(N’ log N’)
• This assumes data are in main memory
– If data are on disk then repeated access of subsets at different nodes
may be very slow (impossible to pre-index)
– Note: time difference between retrieving data in RAM and data on disk
may be O(103) or more.
Splitting on a nominal attribute
• Nominal attribute with m values
– e.g., the name of a state or a city in marketing data
• 2^(m-1) possible subsets => exhaustive search is O(2^(m-1))
  – For small m, a simple approach is to branch on specific values
  – But for large m this may not work well
• Neat trick for the 2-class problem:
  – For each predictor value calculate the proportion of class 1’s
  – Order the m values according to these proportions
  – Now treat as an ordinal variable and select the best split (linear in m)
  – This gives the optimal split for the Gini index, among all possible 2^(m-1) splits (Breiman et al., 1984)
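A sketch of the 2-class ordering trick above; the function and variable names are illustrative.

import numpy as np

def order_nominal_by_class1_rate(values, y):
    """Order the m nominal values by their proportion of class 1's, then treat as ordinal."""
    values, y = np.asarray(values), np.asarray(y)
    cats = np.unique(values)
    rates = [np.mean(y[values == c]) for c in cats]    # proportion of class 1 per value
    order = [c for _, c in sorted(zip(rates, cats))]
    return order   # candidate splits are the m-1 cut points along this ordering

print(order_nominal_by_class1_rate(['CA', 'NV', 'CA', 'OR', 'NV'], [1, 0, 1, 0, 1]))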
How to Choose the Right-Sized Tree?
[Figure: Predictive Error vs. Size of Decision Tree, showing Error on Training Data decreasing with tree size, Error on Test Data reaching a minimum, and the Ideal Range for Tree Size around that minimum]
Choosing a Good Tree for Prediction
• General idea
– grow a large tree
– prune it back to create a family of subtrees
• “weakest link” pruning
– score the subtrees and pick the best one
• Massive data sizes (e.g., n ~ 100k data points)
– use training data set to fit a set of trees
– use a validation data set to score the subtrees
• Smaller data sizes (e.g., n ~1k or less)
– use cross-validation
– use explicit penalty terms (e.g., Bayesian methods)
Example: Spam Email Classification
• Data Set: (from the UCI Machine Learning Archive)
– 4601 email messages from 1999
– Manually labelled as spam (60%), non-spam (40%)
– 54 features: percentage of words matching a specific word/character
• Business, address, internet, free, george, !, $, etc
– Average/longest/sum lengths of uninterrupted sequences of CAPS
• Error Rates (Hastie, Tibshirani, Friedman, 2001)
– Training: 3056 emails, Testing: 1536 emails
– Decision tree = 8.7%
– Logistic regression: error = 7.6%
– Naïve Bayes = 10% (typically)
Treating Missing Data in Trees
• Missing values are common in practice
• Approaches to handling missing values
– During training
• Ignore rows with missing values (inefficient)
– During testing
• Send the example being classified down both branches and average
predictions
– Replace missing values with an “imputed value” (can be suboptimal)
• Other approaches
– Treat “missing” as a unique value (useful if missing values are
correlated with the class)
– Surrogate splits method
• Search for and store “surrogate” variables/splits during training
Other Issues with Classification Trees
•
Why use binary splits?
– Multiway splits can be used, but cause fragmentation
•
Linear combination splits?
– can produce small improvements
– optimization is much more difficult (need weights and split point)
– Trees are much less interpretable
•
Model instability
– A small change in the data can lead to a completely different tree
– Model averaging techniques (like bagging) can be useful
•
Tree “bias”
– Poor at approximating non-axis-parallel boundaries
•
Producing rule sets from tree models (e.g., c5.0)
Why Trees are useful in Practice
• Can handle high dimensional data
– builds a model using 1 dimension at a time
• Can handle any type of input variables
  – categorical, real-valued, etc.
  – most other methods require data of a single type (e.g., only real-valued)
• Trees are (somewhat) interpretable
– domain expert can “read off” the tree’s logic
Limitations of Trees
•
Representational Bias
– classification: piecewise linear boundaries, parallel to axes
– regression: piecewise constant surfaces
•
Trees do not scale well to massive data sets (e.g., N in millions)
– repeated (unpredictable) access of subsets of the data
•
High Variance
– trees can be “unstable” as a function of the sample
• e.g., small change in the data -> completely different tree
– causes two problems
• 1. High variance contributes to prediction error
• 2. High variance reduces interpretability
– Trees are good candidates for model combining
• Often used with boosting and bagging
Decision Trees are not stable
Moving just one
example slightly
may lead to quite
different trees and
space partition
Lack of stability
against small
perturbation of data.
Figure from
Duda, Hart & Stork,
Chap. 8
Example of Tree Instability
2 trees fit to 2 splits of data, from G. Ridgeway, 2003
Linear Classifiers
Some slides adapted from Introduction to Information Retrieval, by
Manning, Raghavan, and Schutze, Cambridge University Press, 2009,
with additional slides from Ray Mooney.
Linear Classifiers
•
Linear classifier (two-class case)
wT x + b > 0
– w is a p-dimensional vector of weights (learned from the data)
– b (also written w0) is a threshold (also learned from the data)
– Equation of linear hyperplane (decision boundary)
wT x + b = 0
Note that in p-dimensions this defines a p-1 dimensional hyperplane
Linear Discriminant Analysis (LDA)
Earliest known classification algorithm (1936, R.A. Fisher)
Find a linear projection onto a vector such that means for each class (2
classes) are separated as much as possible (with variances taken into
account appropriately)
See section 10.4 in PDM for more mathematical details
Reduces to a special case of generative Gaussian classifier in certain situations
Many subsequent variations on this basic theme (e.g., regularized LDA)
Examples of Linear Classifiers
• Perceptrons:
– f(x) = wT x + b > 0 , classify x as 1 if f(x) > 0
• Logistic Regression
– See definition in earlier slides
– Boundary occurs at log[ p/(1-p) ] = 0, i.e., where bT x + b0 = 0
• Naïve Bayes
– Linear (additive) form when we look at log P(c=1 | x)
• Linear support vector machines
– Same functional form as perceptrons, but trained differently
2-dimensional Example
• wT = [1 -1], b = 0
• Equation for decision boundary is
wT x + b = 0
or x1 – x2 = 0,
x1 = x2
length of weight vector |w| = sqrt(wT w) = sqrt(2)
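A minimal sketch of this example: classify with wT x + b > 0 and compute the (signed) distance to the boundary.

import numpy as np

w = np.array([1.0, -1.0])   # weight vector from the example
b = 0.0                     # threshold

def classify(x):
    """Predict class 1 if w.x + b > 0, i.e., if x1 > x2 for this w."""
    return int(np.dot(w, x) + b > 0)

def signed_distance(x):
    """Signed distance of x from the decision boundary wT x + b = 0."""
    return (np.dot(w, x) + b) / np.linalg.norm(w)

print(classify(np.array([2.0, 1.0])), signed_distance(np.array([2.0, 1.0])))  # 1, 1/sqrt(2)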
Geometry of Linear Classifiers
wT = [1 -1], b = 0
[Figure: the decision boundary wT x + b = 0 in the (x1, x2) plane, with the direction of the w vector perpendicular to it; the distance of a point x from the boundary is (wT x + b) / |w|]
Linear Separability in High Dimensions
Consider n sample points in p dimensions
– Binary labels => 2^n possible labelings (or dichotomies)
– A labeling is linearly separable if we can separate the labels with a hyperplane
– Let f(n, p) = fraction of the 2^n possible labelings that are linearly separable
  f(n, p) = 1                                      if n <= p + 1
          = (2 / 2^n) Σ_{i=0}^{p} (n-1 choose i)   if n > p + 1
[Figure: f(n, p) plotted against n — if n < p+1, the points will be linearly separable (for large p)]
Ch. 15
Linear classifiers: Which Hyperplane?
•
Lots of possible solutions for weights.
•
Some methods find a separating hyperplane, but
not an optimal one [according to some criterion of expected
goodness]
– E.g., perceptron
•
Support Vector Machine (SVM) finds a “maximum
margin” solution
– Maximizes the distance between the hyperplane and
the “difficult points” close to decision boundary
– One intuition: if there are no points near the decision
surface, then there are no very uncertain
classification decisions
Sec. 15.1
Support Vector Machine (SVM)
• SVMs maximize the margin around the separating hyperplane
• A.k.a. large margin classifiers
• The decision function is fully specified by a subset of training samples, the support vectors
• Solving SVMs is a quadratic programming problem
• Seen by many as the most successful current text classification method*
  *but other discriminative methods often perform very similarly
[Figure: separating hyperplane with its support vectors; the maximized margin compared with a narrower margin]
Sec. 15.1
Distance of a Point to the Hyperplane
wT x  b
• Distance from example to the separator is r = y
w
(where y is +1 or -1)
ρ
x
Derivation of finding r:
Dotted line x’−x is perpendicular to
decision boundary so parallel to w.
r
x′
Unit vector is w/|w|, so line is rw/|w|.
x’ = x – y r w/|w|.
x’ satisfies wTx’+b = 0.
So wT(x –y r w/|w|) + b = 0
Recall that |w| = sqrt(wTw).
w
Data Mining Lectures
So, solving for r gives:
r = y(wTx + b)/|w|
Notes on Classification
83
© Padhraic Smyth, UC Irvine
Optimal Separating Hyperplane
• Solution to a constrained optimization problem:
  max_{w, w0} M   subject to   (1/||w||) yi ( wT xi + w0 ) >= M ,  i = 1, …, n
  (Here yi ∈ {-1, 1} is the binary class label for example i)
• Without loss of generality, let ||w|| = 1/M:
  min_{w, w0} ||w||   subject to   yi ( wT xi + w0 ) >= 1 ,  i = 1, …, n
• Unique solution for a linearly separable data set
• Margin M of the classifier
  – the distance between the separating hyperplane and the closest training samples
  – optimal separating hyperplane ⇔ maximum margin
Optimal Hyperplane, Support Vectors, and Margin
Circles = support vectors
= points on convex hull
that are closest to
hyperplane
M = margin = distance of
support vectors from hyperplane
Goal is to find weight vector
that maximizes M
Theory tells us that max-margin
hyperplane leads to good
generalization (see work
by Vapnik in 1990’s)
Sketch of Optimization Problem
• Define the Lagrangian as a function of the w vector and the α's:
  L(w, α) = (1/2) ||w||²  −  Σ_{i=1}^{n} αi [ yi ( wT xi + w0 ) − 1 ]
• The solution must satisfy
  αi [ yi ( wT xi + w0 ) − 1 ] = 0
• Points with αi > 0 are called support vectors, and their distance from the hyperplane = M
Sketch of Optimization Problem
•
This results in a quadratic programming optimization problem
•
Good news:
– convex function of unknowns, unique optimum
– Variety of well-known algorithms for finding this optimum
•
Bad news:
  – Quadratic programming in general scales as O(n³)
  – In practice takes O(n^a), where a ≈ 1.6 to 2
    (e.g., Mining the Web: Discovering Knowledge from Hypertext Data, S. Chakrabarti, Chapter 5, p. 166)
– Faster methods also available, specialized for SVMs
• E.g., cutting plane method of Joachims, 2006
[Figure: timing results on text classification, from Chakrabarti, Chapter 5, 2002]
Sec. 15.2.1
Soft Margin Classification
•
If the training data is not linearly
separable, slack variables ξi can be
added to allow misclassification of
difficult or noisy examples.
•
Misclassified points incur a cost in training, proportional to their distance from the hyperplane
•
Try to minimize training set errors, and to place the hyperplane “far” from each class (large margin)
[Figure: soft-margin SVM with slack variables ξi and ξj for points on the wrong side of their margin]
Sec. 15.2.3
Non-linear SVMs
•
Datasets that are linearly separable (with some noise) work out great:
[Figure: 1-d examples on the x axis, separable by a single threshold at 0]
• But what are we going to do if the dataset is just too hard?
[Figure: 1-d examples on the x axis that no single threshold can separate]
• How about … mapping data to a higher-dimensional space:
[Figure: the same data mapped to (x, x²), where a linear boundary now separates the classes]
Sec. 15.2.3
Non-linear SVMs: Feature spaces
• General idea: the original feature space can always be mapped to
some higher-dimensional feature space where the training set is
separable:
Φ: x → φ(x)
Support Vector Machines
• If i > 0 then the distance of xi from the separating hyperplane is M
– Support vectors - points with associated i > 0
• The decision function f(x) is computed from support vectors as
n
f ( x) =  yi i xT xi
i =1
=> prediction can be fast if i are sparse (i.e., most are zero)
•
Non-linearly-separable case: can generalize to allow “slack” constraints
•
Non-linear SVMs: replace original x vector with non-linear functions of x
– “kernel trick” : can solve high-d problem without working directly in high d
•
Computational speedups: can reduce training time to near- linear
– e.g Platt’s SMO algorithm, Joachim’s SVMLight
Experiments by Komarek and Moore, 2005
Accuracies and Training Time
Komarek and Moore, 2005
Accuracies and Training Time
Komarek and Moore, 2005
Summary on Classifiers
• Simple models (but can be effective)
– Logistic regression
– Naïve Bayes
– K nearest-neighbors
• Decision trees
– Good for high-dimensional problems with different data types
• More sophisticated:
– Support vector machines
– Boosting (e.g., boosting with naïve Bayes or with decision stumps)
• Many tradeoffs in interpretability, score functions, etc
Software Tools
• MATLAB
  – Many free “toolboxes” on the Web for regression and prediction
  – e.g., see http://lib.stat.cmu.edu/matlab/ and in particular the CompStats toolbox
• R
  – General purpose statistical computing environment (successor to S)
  – Free (!)
  – Widely used by statisticians, has a huge library of functions and visualization tools
• Weka
  – Widely used JAVA library for machine learning
  – Many classification algorithms, support for cross-validation, evaluation, etc.
• Commercial tools
  – SAS, other statistical packages
  – Data mining packages
  – Often are not programmable: offer a fixed menu of items
Reading
• From the text: Chapter 10
  – Covers both general concepts in classification and a broad range of classifiers
• See also pointers on the class Web site under Background Reading
• Additional reading for further information:
  – The Elements of Statistical Learning, T. Hastie, R. Tibshirani, and J. Friedman, Springer Verlag, 2001
  – Classification and Regression Trees, Breiman, Friedman, Olshen, and Stone, Wadsworth Press, 1984
  – SVMs: T. Joachims, Learning to Classify Text using Support Vector Machines, Kluwer, 2002
Additional Slides
Model Averaging Techniques
Model Averaging
• Can average over parameters and models
– E.g., a weighted linear combination of predictions from multiple models
  y = Σk wk yk
– Why? Predictions from a point estimate of parameters or from a single model have only a small chance of being the best
– Averaging makes our predictions more stable and less sensitive to random variations in a particular data set (good for less stable models like trees)
Model Averaging
• Model averaging flavors
– Fully Bayesian: average over uncertainty in parameters and models
– “empirical Bayesian”: learn weights over multiple models
• E.g., stacking and bagging
– Build multiple simple models in a systematic way and combine them,
e.g.,
• Boosting: will say more about this in later lectures
• Random forests: stochastically perturb the data, learn multiple trees, and
then combine for prediction
Bagging for Combining Classifiers
• Training data set of size N
• Generate B “bootstrap” sampled data sets of size N
  – Bootstrap sample = sample with replacement
  – e.g., B = 100
• Build B models (e.g., trees), one for each bootstrap sample
  – Intuition is that the bootstrapping “perturbs” the data enough to make the models more resistant to true variability
• For prediction, combine the predictions from the B models
  – E.g., for classification, p(c | x) = fraction of the B models that predict c
  – Plus: generally improves accuracy on models such as trees
  – Negative: lose interpretability
  – Related techniques: random forests, boosting
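A minimal bagging sketch; the base_learner fit/predict interface and the numpy-array inputs are assumptions (any classifier object with those methods would do).

import numpy as np

def bagging_predict_proba(X_train, y_train, x_new, base_learner, B=100, seed=0):
    """Train B models on bootstrap samples; p(c | x) = fraction of models predicting c."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    votes = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)            # bootstrap sample: draw n with replacement
        model = base_learner()
        model.fit(X_train[idx], y_train[idx])
        votes.append(model.predict([x_new])[0])
    classes, counts = np.unique(votes, return_counts=True)
    return dict(zip(classes, counts / B))           # estimated class probabilities

# e.g., with scikit-learn trees: bagging_predict_proba(X, y, x, base_learner=DecisionTreeClassifier)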
green = majority vote
purple = averaging
the probabilities
From Hastie, Tibshirani,
and Friedman, 2001
Illustration of Boosting:
Color of points = class label
Diameter of points = weight at each iteration
Dashed line: single stage classifier. Green line: combined, boosted classifier
Dotted blue in last two: bagging
(from G. Rätsch, Phd thesis, 2001)
Classification Case Study
Link Prediction in Coauthor Graphs
O'Madadhain, Hutchins, Smyth, SIGKDD, 2005
• Binary classification problem
– Training data:
• graph of coauthor links, 100k authors, 300k links
• data over several years
– Test data: coauthor graph for same authors in a future year
– Classification problem:
• predict if pair(A,B) will coauthor
• Training and test pairs selected in various ways
• Compared a variety of different classifiers and evaluation metrics
– Skewed class distribution
• No link present (class 0) in 93.8 % of test examples
• Link present (class 1) in 6.2 % of test examples
Evaluation Metrics
• Classification error
  – If p(link[A,B]) > 0.5, predict a link
• Brier Score
  – Σ [ p(link[A,B]) – I(A,B) ]²
• ROC Area
  – area under the ROC plot (between 0 and 1)
Link Prediction Evaluation

Model            Classification Error (%)   ROC Area   Brier Score
Baseline         6.2                        0.50       100.0
Single feature   6.2                        0.54       98.6
Naïve Bayes      15.5                       0.78       211.7
Logistic         6.1                        0.80       83.1
Boosting         6.4                        0.79       83.4
Averaged         6.2                        0.80       82.2
Lift Curves for Different Models
Base Rate of links = 6.2%
Interpretation of Ranking at Top of Ranked List
• Top 50 ranked candidates
– Averaged: contains 44 true links
– Logistic: contains 40 true links
– Baseline: contains 3 true links
• Top 500 ranked candidates
– Averaged: contains 300 true links
– Logistic: contains 298 true links
– Baseline: contains 31 true links
Lift Curves for Different Models
Base Rate of links = 0.2%
Decision Tree Classifiers

Task:                  Classification
Representation:        Decision boundaries = hierarchy of axis-parallel boundaries
Score Function:        Cross-validated error
Search/Optimization:   Greedy search in tree space
Data Management:       None specified
Models, Parameters:    Tree
Naïve Bayes Classifier

Task:                  Classification
Representation:        Conditional independence probability model
Score Function:        Likelihood
Search/Optimization:   Closed-form probability estimates
Data Management:       None specified
Models, Parameters:    Conditional probability tables
Logistic Regression

Task:                  Classification
Representation:        Log-odds(C) = linear function of X’s
Score Function:        Log-likelihood
Search/Optimization:   Iterative (Newton) method
Data Management:       None specified
Models, Parameters:    Logistic weights
Nearest Neighbor Classifier

Task:                  Classification
Representation:        Memory-based
Score Function:        Cross-validated error (for selecting k)
Search/Optimization:   None
Data Management:       None specified
Models, Parameters:    None
Support Vector Machines

Task:                  Classification
Representation:        Hyperplanes
Score Function:        “Margin”
Search/Optimization:   Convex optimization (quadratic programming)
Data Management:       None specified
Models, Parameters:    None
Neural Networks

Task:                  Regression
Representation:        Y = nonlinear function of X’s
Score Function:        Least-squares
Search/Optimization:   Gradient descent
Data Management:       None specified
Models, Parameters:    Network weights
Multivariate Linear Regression

Task:                  Regression
Representation:        Y = weighted linear sum of X’s
Score Function:        Least-squares
Search/Optimization:   Linear algebra
Data Management:       None specified
Models, Parameters:    Regression coefficients
Autoregressive Time Series Models

Task:                  Time series regression
Representation:        X_t = weighted linear sum of earlier X’s
Score Function:        Least-squares
Search/Optimization:   Linear algebra
Data Management:       None specified
Models, Parameters:    Regression coefficients