Data Mining Classification: Alternative Techniques
Lecture Notes for Chapter 5, Introduction to Data Mining, by Tan, Steinbach, Kumar

Instance-Based Classifiers
- Store the training records as-is (a set of stored cases with attributes Atr1, …, AtrN and a class label)
- Use the stored records directly to predict the class label of an unseen case

Instance-Based Classifiers: Examples
- Rote-learner: memorizes the entire training data and classifies a record only if its attributes exactly match one of the training examples
- Nearest neighbor: uses the k "closest" points (nearest neighbors) to perform classification

Nearest Neighbor Classifiers
- Basic idea: if it walks like a duck and quacks like a duck, then it's probably a duck
- Compute the distance from the test record to the training records, then choose the k "nearest" records

Nearest-Neighbor Classifiers
- Require three things:
  – The set of stored records
  – A distance metric to compute the distance between records
  – The value of k, the number of nearest neighbors to retrieve
- To classify an unknown record:
  – Compute its distance to the training records
  – Identify the k nearest neighbors
  – Use the class labels of the nearest neighbors to determine the class label of the unknown record (e.g., by majority vote)

Definition of Nearest Neighbor
- The k-nearest neighbors of a record x are the data points that have the k smallest distances to x (illustrated in the slides for 1-, 2-, and 3-nearest neighbors)

1-Nearest Neighbor
- Partitions the space into a Voronoi diagram: each training record claims the region of points closer to it than to any other record

Nearest Neighbor Classification
- Compute the distance between two points, e.g., Euclidean distance:
  d(r, s) = √( Σ_i (r_i − s_i)² )
- Determine the class from the nearest-neighbor list:
  – Take the majority vote of the class labels among the k nearest neighbors
  – Optionally weigh each vote according to distance, e.g., with weight factor w = 1/d²

Nearest Neighbor Classification: choosing the value of k
- If k is too small, the classifier is sensitive to noise points
- If k is too large, the neighborhood may include points from other classes

Nearest Neighbor Classification: scaling issues
- Attributes may have to be scaled to prevent the distance measure from being dominated by one of the attributes
- Example:
  – height of a person may vary from 1.5 m to 1.8 m
  – weight of a person may vary from 90 lb to 300 lb
  – income of a person may vary from $10K to $1M

Nearest Neighbor Classification: problem with the Euclidean measure
- High-dimensional data suffer from the curse of dimensionality
- One remedy: normalize the vectors to unit length

Nearest Neighbor Classification: lazy learning
- k-NN classifiers are lazy learners: they do not build a model explicitly
- This is unlike eager learners such as decision tree induction and rule-based systems
- As a result, classifying unknown records is relatively expensive (see the sketch below)
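To make the lazy-learning procedure concrete, here is a minimal Python sketch of a k-NN classifier following the steps above: compute distances, take the k nearest, vote, optionally weighting each vote by w = 1/d². The function name and the epsilon guard against zero distance are illustrative choices, not part of the original slides.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3, weighted=False):
    """Classify x by (optionally distance-weighted) vote of its k nearest neighbors."""
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # Euclidean distance to every stored record
    nearest = np.argsort(dists)[:k]                    # indices of the k closest records
    if not weighted:
        return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
    votes = {}
    for i in nearest:                                  # weigh each vote by w = 1/d^2
        w = 1.0 / (dists[i] ** 2 + 1e-12)              # epsilon guards against d = 0
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)

# Tiny illustrative usage with made-up data
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.9]])
y = np.array(["A", "A", "B", "B"])
print(knn_predict(X, y, np.array([1.1, 0.9]), k=3))   # -> "A"
```

Note that all the work happens at prediction time: there is no training step, which is exactly why classification is relatively expensive for lazy learners.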
Example: PEBLS
- PEBLS: Parallel Exemplar-Based Learning System (Cost & Salzberg)
  – Works with both continuous and nominal features: for nominal features, the distance between two nominal values is computed using the modified value difference metric (MVDM)
  – Each record is assigned a weight factor
  – Number of nearest neighbors: k = 1

Example: PEBLS — distance between nominal attribute values

Training data:

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Class counts per attribute value:

Class  Single  Married  Divorced        Class  Refund=Yes  Refund=No
Yes    2       0        1               Yes    0           3
No     2       4        1               No     3           4

MVDM: d(V1, V2) = Σ_i | n1i/n1 − n2i/n2 |, where n1i is the number of records with value V1 in class i and n1 is the total number of records with value V1 (similarly for V2).

- d(Single, Married)  = |2/4 − 0/4| + |2/4 − 4/4| = 1
- d(Single, Divorced) = |2/4 − 1/2| + |2/4 − 1/2| = 0
- d(Married, Divorced) = |0/4 − 1/2| + |4/4 − 1/2| = 1
- d(Refund=Yes, Refund=No) = |0/3 − 3/7| + |3/3 − 4/7| = 6/7

Example: PEBLS — distance between records

Tid  Refund  Marital Status  Taxable Income  Cheat
X    Yes     Single          125K            No
Y    No      Married         100K            No

Distance between record X and record Y:

d(X, Y) = w_X · w_Y · Σ_{i=1..d} d(X_i, Y_i)²

where w_X = (number of times X is used for prediction) / (number of times X predicts correctly)
- w_X ≈ 1 if X makes accurate predictions most of the time
- w_X > 1 if X is not reliable for making predictions
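A minimal sketch of the MVDM computation, reproducing the distances above from the class-count tables. The dictionary representation of the count tables is an illustrative choice.

```python
def mvdm(counts_v1, counts_v2):
    """Modified value difference metric between two nominal values.

    counts_v* map each class label to the number of training records
    having that attribute value and that class (the tables above)."""
    n1 = sum(counts_v1.values())
    n2 = sum(counts_v2.values())
    classes = set(counts_v1) | set(counts_v2)
    return sum(abs(counts_v1.get(c, 0) / n1 - counts_v2.get(c, 0) / n2)
               for c in classes)

# Counts read off the Marital Status table above
single   = {"Yes": 2, "No": 2}   # 4 records have Marital Status = Single
married  = {"Yes": 0, "No": 4}   # 4 records have Marital Status = Married
divorced = {"Yes": 1, "No": 1}   # 2 records have Marital Status = Divorced

print(mvdm(single, married))     # 1.0
print(mvdm(single, divorced))    # 0.0
print(mvdm(married, divorced))   # 1.0
```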
Bayes Classifier
- A probabilistic framework for solving classification problems
- Conditional probability:
  P(C | A) = P(A, C) / P(A)
  P(A | C) = P(A, C) / P(C)
- Bayes theorem:
  P(C | A) = P(A | C) P(C) / P(A)

Example of Bayes Theorem
- Given:
  – A doctor knows that meningitis causes a stiff neck 50% of the time
  – The prior probability of any patient having meningitis is 1/50,000
  – The prior probability of any patient having a stiff neck is 1/20
- If a patient has a stiff neck, what is the probability that he/she has meningitis?
  P(M | S) = P(S | M) P(M) / P(S) = (0.5 × 1/50000) / (1/20) = 0.0002

Bayesian Classifiers
- Consider each attribute and the class label as random variables
- Given a record with attributes (A1, A2, …, An) = a:
  – The goal is to predict the class C
  – Specifically, we want to find the value of C that maximizes P(C = cj | A = a)
  – This maximum a posteriori classifier is optimal: it minimizes the error probability
- Can we estimate P(C = cj | A = a) directly from the data?

Bayesian Classifiers: approach
- Compute the posterior probability P(C = cj | A = a) for all values cj of C using Bayes theorem:
  P(C = cj | A = a) = P(A = a | C = cj) P(C = cj) / P(A = a)
- Choose the value of C that maximizes P(C = cj | A = a)
- Since P(A = a) is the same for every class, this is equivalent to choosing the value of C that maximizes P(A = a | C = cj) P(C = cj)
- How do we estimate the likelihood P(A = a | C = cj)?

Naïve Bayes Classifier
- Assume independence among the attributes Ai when the class is given:
  P(A = a | C = cj) = P(A1 = a1 | C = cj) · P(A2 = a2 | C = cj) · … · P(An = an | C = cj)
- We can then estimate P(Ai = ai | C = cj) for all Ai and cj
- A new point is classified as cj if P(C = cj) · Π_i P(Ai = ai | C = cj) is maximal

How to Estimate Probabilities from Data?

Training data (Refund and Marital Status categorical, Taxable Income continuous, Evade the class):

Tid  Refund  Marital Status  Taxable Income  Evade
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

- Class prior: P(C) = Nc / N
  – e.g., P(No) = 7/10, P(Yes) = 3/10
- For discrete attributes: P(Ai | Ck) = |Aik| / Nc,k
  – where |Aik| is the number of instances having attribute value Ai that belong to class Ck
  – Examples: P(Status=Married | No) = 4/7, P(Refund=Yes | Yes) = 0

Naïve Bayes Classifier: zero probabilities
- If one of the conditional probabilities is zero, the entire product becomes zero
- Probability estimation alternatives:
  – Original: P̂(Ai = ai | C = cj) = Nij / Nj
  – Laplace: P̂(Ai = ai | C = cj) = (Nij + 1) / (Nj + si)
  – m-estimate: P̂(Ai = ai | C = cj) = (Nij + m·p(ai)) / (Nj + m)
  where si is the number of values of Ai, m is a parameter, and p(ai) is the prior probability of ai

How to Estimate Probabilities from Data? (continuous attributes)
- Discretize the range into bins:
  – one ordinal attribute per bin
  – violates the independence assumption
- Two-way split: (A < v) or (A > v)
  – choose only one of the two splits as the new attribute
- Probability density estimation:
  – Assume the attribute follows a normal distribution
  – Use the data to estimate the parameters of the distribution (e.g., mean and standard deviation)

How to Estimate Probabilities from Data? (normal distribution)
- One distribution per (Ai, cj) pair:
  f(ai | cj) = (1 / √(2π·σij²)) · exp( −(ai − μij)² / (2σij²) )
- For (Income, Class=No): sample mean = 110, sample variance = 2975
  f(Income = 120 | No) = (1 / (√(2π) · 54.54)) · exp( −(120 − 110)² / (2 · 2975) ) = 0.0072

Example of Naïve Bayes Classifier (original estimates)

Given a test record: X = (Refund = No, Married, Income = 120K)

Conditional probabilities from the training data:
- P(Refund=Yes | No) = 3/7, P(Refund=No | No) = 4/7
- P(Refund=Yes | Yes) = 0, P(Refund=No | Yes) = 1
- P(Marital Status=Single | No) = 2/7, P(Marital Status=Divorced | No) = 1/7, P(Marital Status=Married | No) = 4/7
- P(Marital Status=Single | Yes) = 2/7, P(Marital Status=Divorced | Yes) = 1/7, P(Marital Status=Married | Yes) = 0
- Taxable income: if class = No, sample mean = 110 and sample variance = 2975; if class = Yes, sample mean = 90 and sample variance = 25

P(X | Class=No) = P(Refund=No | No) × P(Married | No) × f(Income=120K | No) = 4/7 × 4/7 × 0.0072 = 0.0024
P(X | Class=Yes) = P(Refund=No | Yes) × P(Married | Yes) × f(Income=120K | Yes) = 1 × 0 × 1.2×10⁻⁹ = 0

Since P(X | No) P(No) > P(X | Yes) P(Yes), we have P(No | X) > P(Yes | X), so Class = No.
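A minimal sketch reproducing the computation above: Gaussian densities for the continuous attribute, conditional probabilities read off the training table, and comparison of P(X | C) P(C) for the two classes. The numeric values are taken directly from the example.

```python
import math

def gaussian(x, mean, var):
    """Normal density used to model a continuous attribute per class."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Priors and class-conditional estimates from the training table above
p_no, p_yes = 7 / 10, 3 / 10
# Test record X = (Refund=No, Married, Income=120K)
likelihood_no  = (4 / 7) * (4 / 7) * gaussian(120, mean=110, var=2975)  # ≈ 0.0024
likelihood_yes = 1.0 * 0.0 * gaussian(120, mean=90, var=25)             # P(Married|Yes)=0 kills the product

print(likelihood_no * p_no, likelihood_yes * p_yes)  # ≈ 0.0017 vs 0.0 -> predict Class = No
```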
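The zero in P(Married | Yes) is exactly the problem the Laplace and m-estimate corrections address. A one-line sketch of the Laplace estimate, which the animal example below uses:

```python
def laplace_estimate(n_ij, n_j, s_i):
    """P(A_i = a_i | C = c_j) with Laplace smoothing: (N_ij + 1) / (N_j + s_i)."""
    return (n_ij + 1) / (n_j + s_i)

# From the animal example below: P(Give Birth = yes | Mammals).
# 6 of the 7 mammals give birth; the attribute has s_i = 2 possible values.
print(laplace_estimate(6, 7, 2))   # 7/9 ≈ 0.778, never exactly zero
```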
Example of Naïve Bayes Classifier (with Laplace estimates)

Name           Give Birth  Can Fly  Live in Water  Have Legs  Class
human          yes         no       no             yes        mammals
python         no          no       no             no         non-mammals
salmon         no          no       yes            no         non-mammals
whale          yes         no       yes            no         mammals
frog           no          no       sometimes      yes        non-mammals
komodo         no          no       no             yes        non-mammals
bat            yes         yes      no             yes        mammals
pigeon         no          yes      no             yes        non-mammals
cat            yes         no       no             yes        mammals
leopard shark  yes         no       yes            no         non-mammals
turtle         no          no       sometimes      yes        non-mammals
penguin        no          no       sometimes      yes        non-mammals
porcupine      yes         no       no             yes        mammals
eel            no          no       yes            no         non-mammals
salamander     no          no       sometimes      yes        non-mammals
gila monster   no          no       no             yes        non-mammals
platypus       no          no       no             yes        mammals
owl            no          yes      no             yes        non-mammals
dolphin        yes         no       yes            no         mammals
eagle          no          yes      no             yes        non-mammals

Test record: Give Birth = yes, Can Fly = no, Live in Water = yes, Have Legs = no, Class = ?

With A the test record's attributes, M = mammals, N = non-mammals, and Laplace estimates:

P(A | M) = 7/9 × 7/9 × 3/10 × 3/9 = 0.0604
P(A | N) = 2/15 × 11/15 × 4/16 × 5/15 = 0.0081
P(A | M) P(M) = 0.0604 × 7/20 = 0.0211
P(A | N) P(N) = 0.0081 × 13/20 = 0.0053

Since P(A | M) P(M) > P(A | N) P(N) => Mammals

Naïve Bayes (Summary)
- Robust to isolated noise points
- Handles missing values by ignoring the instance during probability estimation
- Robust to irrelevant attributes
- The independence assumption may not hold for some attributes
  – In that case, use other techniques such as Bayesian Belief Networks (BBN)

Artificial Neural Networks (ANN)
- Think of the model as a black box mapping inputs X1, X2, X3 to output Y:

X1  X2  X3  Y
1   0   0   0
1   0   1   1
1   1   0   1
1   1   1   1
0   0   1   0
0   1   0   0
0   1   1   1
0   0   0   0

- Output Y is 1 if at least two of the three inputs are equal to 1

Artificial Neural Networks (ANN)
- The black box above can be realized by a single output node with input weights 0.3 and threshold t = 0.4:
  Y = I(0.3·X1 + 0.3·X2 + 0.3·X3 − 0.4 > 0)
  where I(z) = 1 if z is true and 0 otherwise

Artificial Neural Networks (ANN)
- The model is an assembly of inter-connected nodes and weighted links
- The output node sums up its input values according to the weights of its links, then compares the result against some threshold t
- Perceptron model:
  ŷ = I( Σ_i wi·xi − t > 0 )   or   ŷ = sgn( Σ_i wi·xi − t ) = sgn(w · x)

Perceptron algorithm
- Extend every attribute vector x with a (d+1)-th component that is always 1
- Let w = (0, 0, …, 0)
- while the training set contains misclassified examples:
    for all x:
      if x is misclassified:
        if x belongs to the first class then w = w + x, else w = w − x
- For linearly separable classes, perceptron learning halts after a finite number of iterations.
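A minimal runnable version of the pseudocode above, assuming labels are encoded as +1 (the "first class") and −1 so that both update branches collapse into w = w + y·x. The epoch cap is an illustrative safeguard for non-separable data, not part of the original algorithm.

```python
import numpy as np

def perceptron_train(X, y, max_epochs=1000):
    """Perceptron learning as in the pseudocode above.

    X: (n, d) array of attribute vectors; y: labels in {+1, -1}.
    Returns the weight vector w (last component acts as the bias)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append the constant (d+1)-th component
    w = np.zeros(Xb.shape[1])                   # start from w = (0, 0, ..., 0)
    for _ in range(max_epochs):                 # cap epochs in case data are not separable
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # xi is misclassified
                w += yi * xi                    # w = w + x for class +1, w = w - x otherwise
                errors += 1
        if errors == 0:                         # every example classified correctly: stop
            break
    return w

# Illustrative usage on a linearly separable toy set
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # recovers y
```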
Linearly separable / Not linearly separable
- (figures: the perceptron converges only when the two classes are linearly separable)

General Structure of ANN
- Layers: input layer (x1, …, x5), hidden layer, output layer (y)
- Each neuron i computes a weighted sum Si = Σ_k wik·Ik of its inputs I1, I2, I3, then applies an activation function with threshold t to produce its output: Oi = g(Si)
- Training an ANN means learning the weights of the neurons

Activation function
- (figure: common activation functions)

Algorithm for learning ANN
- Initialize the weights (w0, w1, …, wk)
- Adjust the weights so that the output of the ANN is consistent with the class labels of the training examples
  – Objective function: E = Σ_i [ Yi − f(w, Xi) ]²
  – Find the weights wi that minimize this objective function, e.g., with the backpropagation algorithm (see lecture notes)

Support Vector Machines
- Find a linear hyperplane (decision boundary) that separates the data
- Many separating hyperplanes are possible (B1, B2, …): which one is better, B1 or B2, and how do you define "better"?
- Find the hyperplane that maximizes the margin (the gap between the parallel boundaries b11/b12 and b21/b22) => B1 is better than B2

Support Vector Machines
- The decision boundary and margin hyperplanes:
  w · x + b = 0,   w · x + b = 1,   w · x + b = −1
  f(x) = 1 if w · x + b ≥ 1;  f(x) = −1 if w · x + b ≤ −1
  Margin = 2 / ‖w‖²

Support Vector Machines
- We want to maximize Margin = 2 / ‖w‖²
  – which is equivalent to minimizing L(w) = ‖w‖² / 2
  – subject to the constraints:
    f(xi) = 1 if w · xi + b ≥ 1;  f(xi) = −1 if w · xi + b ≤ −1
- This is a constrained optimization problem
  – solved with numerical approaches (e.g., quadratic programming)

Support Vector Machines
- What if the problem is not linearly separable?
  – Introduce slack variables ξi
  – Minimize: L(w) = ‖w‖²/2 + C · Σ_{i=1..N} ξi^k
  – Subject to: f(xi) = 1 if w · xi + b ≥ 1 − ξi;  f(xi) = −1 if w · xi + b ≤ −1 + ξi

Nonlinear Support Vector Machines
- What if the decision boundary itself is not linear?
  – Transform the data into a higher-dimensional space where a linear boundary works
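The slides solve the soft-margin problem with quadratic programming; as a lighter-weight sketch, the same objective (hinge loss plus ‖w‖²/2 regularizer) can be minimized by stochastic subgradient descent. This is an illustrative substitute for a QP solver, with made-up learning-rate and epoch defaults.

```python
import numpy as np

def svm_train(X, y, C=1.0, lr=0.001, epochs=1000):
    """Minimize ||w||^2/2 + C * sum(hinge slack) by stochastic subgradient descent.

    X: (n, d) array; y: labels in {+1, -1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # inside the margin: hinge term is active
                w += lr * (C * yi * xi - w)
                b += lr * C * yi
            else:                            # outside the margin: only the regularizer acts
                w -= lr * w
    return w, b

def svm_predict(X, w, b):
    return np.sign(X @ w + b)                # f(x) = sign(w . x + b)
```

The C parameter trades margin width against slack, exactly as in the constrained formulation above: large C penalizes margin violations heavily, small C tolerates them for a wider margin.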
Ensemble Methods
- Construct a set of classifiers from the training data
- Predict the class label of previously unseen records by aggregating the predictions made by multiple classifiers

General Idea
- Step 1: create multiple data sets D1, D2, …, Dt from the original training data D
- Step 2: build a classifier C1, C2, …, Ct on each data set
- Step 3: combine the classifiers into a single classifier C*

Why does it work?
- Suppose there are 25 base classifiers
  – Each classifier has error rate ε = 0.35
  – Assume the classifiers are independent
  – The ensemble makes a wrong prediction only if a majority (13 or more) of the base classifiers are wrong:
    Σ_{i=13..25} C(25, i) · ε^i · (1 − ε)^(25−i) = 0.06

Examples of Ensemble Methods
- How to generate an ensemble of classifiers?
  – Bagging
  – Boosting

Bagging
- Sampling with replacement:

Original Data      1  2  3   4   5  6  7   8   9  10
Bagging (Round 1)  7  8  10  8   2  5  10  10  5  9
Bagging (Round 2)  1  4  9   1   2  3  2   7   3  2
Bagging (Round 3)  1  8  5   10  5  5  9   6   3  7

- Build a classifier on each bootstrap sample
- Each record has probability 1 − (1 − 1/n)^n of being included in a given bootstrap sample (≈ 0.632 for large n)

Boosting
- An iterative procedure that adaptively changes the distribution of the training data to focus more on previously misclassified records
  – Initially, all N records are assigned equal weights
  – Unlike bagging, the weights may change at the end of each boosting round
- Records that are wrongly classified have their weights increased; records that are classified correctly have their weights decreased

Original Data       1  2  3  4   5  6  7  8  9  10
Boosting (Round 1)  7  3  2  8   7  9  4  10 6  3
Boosting (Round 2)  5  4  9  4   2  5  1  7  4  2
Boosting (Round 3)  4  4  8  10  4  5  4  6  3  4

- Example 4 is hard to classify: its weight is increased, so it is more likely to be chosen again in subsequent rounds

Example: AdaBoost
- Base classifiers: C1, C2, …, CT
- Error rate of classifier Ci:
  εi = (1/N) · Σ_{j=1..N} wj · δ( Ci(xj) ≠ yj )
- Importance of a classifier:
  αi = (1/2) · ln( (1 − εi) / εi )

Example: AdaBoost
- Weight update after round j:
  wi^(j+1) = (wi^(j) / Zj) × exp(−αj) if Cj(xi) = yi,  or  × exp(αj) if Cj(xi) ≠ yi
  where Zj is the normalization factor
- If any intermediate round produces an error rate higher than 50%, the weights are reverted to 1/N and the resampling procedure is repeated
- Classification:
  C*(x) = argmax_y Σ_{j=1..T} αj · δ( Cj(x) = y )

Illustrating AdaBoost
- Initial weights for each data point: 0.1
- Round 1 (B1): point weights become 0.0094 / 0.0094 / 0.4623, α = 1.9459
- Round 2 (B2): point weights become 0.0009 / 0.3037 / 0.0422, α = 2.9323
- Round 3 (B3): point weights become 0.0276 / 0.1819 / 0.0038, α = 3.8744
- The overall classifier combines the three weighted rounds
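The 0.06 ensemble error quoted above is a direct binomial computation, reproduced here:

```python
from math import comb

eps, n = 0.35, 25
# The ensemble errs only when a majority (13 or more) of the 25 independent
# base classifiers are wrong at the same time.
ensemble_error = sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
                     for i in range(13, n + 1))
print(round(ensemble_error, 3))   # ~0.06, far below the 0.35 base error rate
```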
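Finally, a sketch of the AdaBoost loop implementing the formulas above, assuming ±1 labels. The base_learner argument is a placeholder (e.g., a decision stump trained on weighted data); the fit/predict interface is an illustrative assumption, not prescribed by the slides.

```python
import numpy as np

def adaboost(X, y, base_learner, T=10):
    """AdaBoost with the update rule above; y in {+1, -1}.

    base_learner(X, y, w) must return a classifier (with .predict)
    fit to the weighted sample."""
    N = len(y)
    w = np.full(N, 1 / N)                        # initially all records weighted equally
    models, alphas = [], []
    for _ in range(T):
        clf = base_learner(X, y, w)
        pred = clf.predict(X)
        eps = np.sum(w * (pred != y)) / np.sum(w)    # weighted error rate
        if eps >= 0.5:                           # error above 50%: revert weights, redo round
            w = np.full(N, 1 / N)
            continue
        alpha = 0.5 * np.log((1 - eps) / eps)    # importance of this classifier
        w *= np.exp(-alpha * y * pred)           # shrink correct, grow incorrect weights
        w /= w.sum()                             # Z_j normalization
        models.append(clf)
        alphas.append(alpha)
    return models, alphas

def adaboost_predict(X, models, alphas):
    scores = sum(a * m.predict(X) for a, m in zip(alphas, models))
    return np.sign(scores)   # for +/-1 labels, the alpha-weighted argmax reduces to a sign
```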