Data Mining Classification: Alternative Techniques
Lecture Notes for Chapter 5, Introduction to Data Mining, by Tan, Steinbach, Kumar
© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004

Instance-Based Classifiers
• Store the training records (each stored case has attributes Atr1, ..., AtrN and a class label)
• Use the stored training records directly to predict the class label of unseen cases

Instance-Based Classifiers: Examples
– Rote-learner: memorizes the entire training data and classifies a record only if its attributes exactly match one of the training examples
– Nearest neighbor: uses the k "closest" points (nearest neighbors) to perform classification

Nearest Neighbor Classifiers
Basic idea:
– If it walks like a duck and quacks like a duck, then it's probably a duck
– Compute the distance from the test record to the training records and choose the k "nearest" ones

Nearest-Neighbor Classifiers
Requires three things:
– The set of stored records
– A distance metric to compute the distance between records
– The value of k, the number of nearest neighbors to retrieve
To classify an unknown record:
– Compute its distance to the training records
– Identify the k nearest neighbors
– Use the class labels of the nearest neighbors to determine the class label of the unknown record (e.g., by taking a majority vote)

Definition of Nearest Neighbor
The k-nearest neighbors of a record x are the data points that have the k smallest distances to x (the figure illustrates the 1-, 2-, and 3-nearest neighborhoods of a point).

1-Nearest Neighbor
A 1-nearest-neighbor classifier partitions the attribute space into a Voronoi diagram: each training record owns the region of points that are closer to it than to any other record.

Nearest Neighbor Classification
Compute the distance between two points, e.g., the Euclidean distance:
    d(p, q) = sqrt( Σ_i (p_i − q_i)² )
Determine the class from the nearest-neighbor list:
– Take the majority vote of the class labels among the k nearest neighbors
– Optionally weigh each vote according to distance, e.g., with weight factor w = 1/d²

Nearest Neighbor Classification: Choosing the Value of k
– If k is too small, the classifier is sensitive to noise points
– If k is too large, the neighborhood may include points from other classes

Nearest Neighbor Classification: Scaling Issues
– Attributes may have to be scaled to prevent the distance measure from being dominated by one of the attributes
– Example:
  height of a person may vary from 1.5 m to 1.8 m
  weight of a person may vary from 90 lb to 300 lb
  income of a person may vary from $10K to $1M

Nearest Neighbor Classification: Problem with the Euclidean Measure
– High-dimensional data suffers from the curse of dimensionality
– It can produce counter-intuitive results, e.g., for binary vectors:
    111111111110 vs 011111111111    d = 1.4142
    100000000000 vs 000000000001    d = 1.4142
  The first pair shares ten 1s while the second pair has no 1s in common, yet both pairs are the same distance apart.
– Solution: normalize the vectors to unit length

Nearest Neighbor Classification: Lazy Learners
k-NN classifiers are lazy learners:
– They do not build a model explicitly
– This is unlike eager learners such as decision tree induction and rule-based systems
– Classifying unknown records is relatively expensive
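To make the voting procedure concrete, here is a minimal distance-weighted k-NN sketch in Python (NumPy only). It is an illustration of the scheme above, not code from the book; the training arrays and the query point are made-up toy data, and the w = 1/d² weighting follows the slide.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, weighted=True):
    """Classify x by (optionally distance-weighted) majority vote of its k nearest neighbors."""
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # Euclidean distance to every stored record
    nearest = np.argsort(dists)[:k]                    # indices of the k closest records
    votes = {}
    for i in nearest:
        w = 1.0 / (dists[i] ** 2 + 1e-12) if weighted else 1.0   # weight factor w = 1/d^2
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)                   # label with the largest total vote

# Toy example (hypothetical data): two scaled attributes, two classes
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [3.0, 3.2], [3.1, 2.9]])
y_train = np.array(["A", "A", "B", "B"])
print(knn_predict(X_train, y_train, np.array([2.9, 3.0]), k=3))   # -> "B"
```

Because no model is built, all the work happens at prediction time, which is exactly the "lazy learner" trade-off noted above; in practice the attributes would also be rescaled first, as the scaling slide suggests.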
Example: PEBLS
PEBLS: Parallel Exemplar-Based Learning System (Cost & Salzberg)
– Works with both continuous and nominal features
  For nominal features, the distance between two nominal values is computed using the modified value difference metric (MVDM)
– Each record is assigned a weight factor
– Number of nearest neighbors: k = 1

Example: PEBLS — Distance Between Nominal Attribute Values

Training data:

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Class counts by attribute value:

Marital Status                          Refund
Class  Single  Married  Divorced        Class  Yes  No
Yes    2       0        1               Yes    0    3
No     2       4        1               No     3    4

MVDM distance between two values V1 and V2 of a nominal attribute:
    d(V1, V2) = Σ_i | n1i/n1 − n2i/n2 |
where n1i is the number of records with value V1 in class i, n1 is the total number of records with value V1 (and similarly for n2i, n2).

– d(Single, Married)       = |2/4 − 0/4| + |2/4 − 4/4| = 1
– d(Single, Divorced)      = |2/4 − 1/2| + |2/4 − 1/2| = 0
– d(Married, Divorced)     = |0/4 − 1/2| + |4/4 − 1/2| = 1
– d(Refund=Yes, Refund=No) = |0/3 − 3/7| + |3/3 − 4/7| = 6/7

Example: PEBLS — Distance Between Records
For two records X and Y with d attributes:
    d(X, Y) = w_X · w_Y · Σ_{i=1..d} d(X_i, Y_i)²
where
    w_X = (number of times X is used for prediction) / (number of times X predicts correctly)
– w_X ≈ 1 if X makes accurate predictions most of the time
– w_X > 1 if X is not reliable for making predictions
Example records:
Tid  Refund  Marital Status  Taxable Income  Cheat
X    Yes     Single          125K            No
Y    No      Married         100K            No

Bayes Classifier
A probabilistic framework for solving classification problems
Conditional probability:
    P(C | A) = P(A, C) / P(A)
    P(A | C) = P(A, C) / P(C)
Bayes theorem:
    P(C | A) = P(A | C) P(C) / P(A)

Example of Bayes Theorem
Given:
– A doctor knows that meningitis causes a stiff neck 50% of the time
– The prior probability of any patient having meningitis is 1/50,000
– The prior probability of any patient having a stiff neck is 1/20
If a patient has a stiff neck, what is the probability that he/she has meningitis?
    P(M | S) = P(S | M) P(M) / P(S) = (0.5 × 1/50,000) / (1/20) = 0.0002

Bayesian Classifiers
Consider each attribute and the class label as random variables
Given a record with attributes (A1, A2, ..., An):
– The goal is to predict class C
– Specifically, we want to find the value of C that maximizes the posterior probability P(C | A1, A2, ..., An)
Can we estimate P(C | A1, A2, ..., An) directly from data?
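The MVDM distances and the Bayes-theorem arithmetic above are easy to verify directly. Below is a small Python check; the count tables and probabilities are transcribed from the slides, and the helper name mvdm is just for illustration.

```python
def mvdm(counts_v1, counts_v2):
    """Modified value difference metric between two nominal values.
    counts_vX maps class label -> number of records with that value in that class."""
    n1, n2 = sum(counts_v1.values()), sum(counts_v2.values())
    classes = set(counts_v1) | set(counts_v2)
    return sum(abs(counts_v1.get(c, 0) / n1 - counts_v2.get(c, 0) / n2) for c in classes)

single   = {"Yes": 2, "No": 2}
married  = {"Yes": 0, "No": 4}
divorced = {"Yes": 1, "No": 1}
print(mvdm(single, married))    # 1.0
print(mvdm(single, divorced))   # 0.0
print(mvdm(married, divorced))  # 1.0

# Bayes theorem for the meningitis example: P(M|S) = P(S|M) P(M) / P(S)
p_s_given_m, p_m, p_s = 0.5, 1 / 50_000, 1 / 20
print(p_s_given_m * p_m / p_s)  # 0.0002
```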
Bayesian Classifiers
Using the training data above, to classify the record (Refund=Yes, Marital Status=Single, Taxable Income=120K) we would estimate the two posterior probabilities
    P(No  | Refund=Yes, Status=Single, Income=120K)
    P(Yes | Refund=Yes, Status=Single, Income=120K)
and compare them.

Bayesian Classifiers: Approach
– Compute the posterior probability P(C | A1, A2, ..., An) for all values of C using Bayes theorem:
    P(C | A1, A2, ..., An) = P(A1, A2, ..., An | C) P(C) / P(A1, A2, ..., An)
– Choose the value of C that maximizes P(C | A1, A2, ..., An)
– Since the denominator does not depend on C, this is equivalent to choosing the value of C that maximizes P(A1, A2, ..., An | C) P(C)
How do we estimate P(A1, A2, ..., An | C)?

Naïve Bayes Classifier
Assume independence among the attributes Ai when the class is given:
– P(A1, A2, ..., An | Cj) = P(A1 | Cj) P(A2 | Cj) ... P(An | Cj)
– We can then estimate P(Ai | Cj) for all Ai and Cj
– A new point is classified as Cj if P(Cj) Π_i P(Ai | Cj) is maximal
Example (classes C1 = Yes, C2 = No): to predict the class label of (Refund=Yes, Status=Single, Income=120K), compare
    P(Yes) P(Refund=Yes | Yes) P(Status=Single | Yes) P(Income=120 | Yes)
    P(No)  P(Refund=Yes | No)  P(Status=Single | No)  P(Income=120 | No)

How to Estimate Probabilities from Data?
Class priors: P(C) = Nc / N
– e.g., P(No) = 7/10, P(Yes) = 3/10
For discrete attributes: P(Ai | Ck) = |Aik| / Nc
– where |Aik| is the number of instances having attribute value Ai that belong to class Ck
– Examples (from the training data above):
    P(Status=Married | No) = 4/7
    P(Refund=Yes | Yes) = 0
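As a sanity check on the discrete estimates above, the sketch below counts them from the same ten-record table (a minimal illustration, not a library implementation; the record tuples are transcribed from the slide).

```python
from collections import Counter

# (Refund, Marital Status, Cheat) for the ten training records on the slide
records = [("Yes", "Single", "No"), ("No", "Married", "No"), ("No", "Single", "No"),
           ("Yes", "Married", "No"), ("No", "Divorced", "Yes"), ("No", "Married", "No"),
           ("Yes", "Divorced", "No"), ("No", "Single", "Yes"), ("No", "Married", "No"),
           ("No", "Single", "Yes")]

class_counts = Counter(r[2] for r in records)
n = len(records)
print({c: cnt / n for c, cnt in class_counts.items()})   # P(No)=0.7, P(Yes)=0.3

def p_attr_given_class(attr_index, value, cls):
    """P(Ai = value | Class = cls) estimated by simple counting, |Aik| / Nc."""
    n_c = class_counts[cls]
    n_ic = sum(1 for r in records if r[attr_index] == value and r[2] == cls)
    return n_ic / n_c

print(p_attr_given_class(1, "Married", "No"))   # 4/7 ≈ 0.571
print(p_attr_given_class(0, "Yes", "Yes"))      # 0.0
```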
How to Estimate Probabilities from Data? (Continuous Attributes)
For continuous attributes:
– Discretize the range into bins
  one ordinal attribute per bin
  a poor choice of bins can violate the independence assumption
– Two-way split: (A < v) or (A > v)
  choose only one of the two splits as the new attribute
– Probability density estimation:
  assume the attribute follows a normal distribution
  use the data to estimate the parameters of the distribution (e.g., mean and standard deviation)
  once the probability distribution is known, use it to estimate the conditional probability P(Ai | c)

How to Estimate Probabilities from Data? (Normal Distribution)
    P(Ai | cj) = 1 / (√(2π) σij) · exp( −(Ai − μij)² / (2 σij²) )
– One pair of parameters (μij, σij) for each (attribute, class) combination
For (Income, Class=No):
– sample mean = 110, sample variance = 2975
    P(Income=120 | No) = 1 / (√(2π) · 54.54) · exp( −(120 − 110)² / (2 · 2975) ) = 0.0072

Example of Naïve Bayes Classifier
Given the test record X = (Refund=No, Marital Status=Married, Income=120K), compare
    P(X | No) P(No)   vs   P(X | Yes) P(Yes)

Naïve Bayes conditional probabilities estimated from the training data:
    P(Refund=Yes | No) = 3/7                  P(Refund=Yes | Yes) = 0
    P(Refund=No | No) = 4/7                   P(Refund=No | Yes) = 1
    P(Marital Status=Single | No) = 2/7       P(Marital Status=Single | Yes) = 2/7
    P(Marital Status=Divorced | No) = 1/7     P(Marital Status=Divorced | Yes) = 1/7
    P(Marital Status=Married | No) = 4/7      P(Marital Status=Married | Yes) = 0
For Taxable Income:
– If Class=No: sample mean = 110, sample variance = 2975
– If Class=Yes: sample mean = 90, sample variance = 25

    P(X | Class=No)  = P(Refund=No | No) × P(Married | No) × P(Income=120K | No)
                     = 4/7 × 4/7 × 0.0072 = 0.0024
    P(X | Class=Yes) = P(Refund=No | Yes) × P(Married | Yes) × P(Income=120K | Yes)
                     = 1 × 0 × 1.2×10⁻⁹ = 0
Since P(X | No) P(No) > P(X | Yes) P(Yes), we have P(No | X) > P(Yes | X), so the predicted class is No.

Naïve Bayes Classifier: Zero Probabilities
If one of the conditional probabilities is zero, the entire product becomes zero.
Probability estimation alternatives:
    Original:    P(Ai | C) = Nic / Nc
    Laplace:     P(Ai | C) = (Nic + 1) / (Nc + c)
    m-estimate:  P(Ai | C) = (Nic + m·p) / (Nc + m)
where c is the number of classes, p is a prior estimate of the probability, and m is a parameter.
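The worked example above mixes discrete counts with a Gaussian density for Income. A minimal sketch of that computation follows (plain Python; the means, variances, and conditional probabilities are the slide's estimates, and the gaussian helper is written out rather than taken from a library):

```python
import math

def gaussian(x, mean, var):
    """Normal density used for continuous attributes in naive Bayes."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Estimates taken from the slides
priors = {"No": 7 / 10, "Yes": 3 / 10}
p_refund_no = {"No": 4 / 7, "Yes": 1.0}
p_married   = {"No": 4 / 7, "Yes": 0.0}
income_params = {"No": (110, 2975), "Yes": (90, 25)}   # (mean, variance)

# Test record X = (Refund=No, Married, Income=120K)
for c in ("No", "Yes"):
    mean, var = income_params[c]
    likelihood = p_refund_no[c] * p_married[c] * gaussian(120, mean, var)
    print(c, likelihood * priors[c])
# P(X|No)P(No) ≈ 0.0024 * 0.7 > P(X|Yes)P(Yes) = 0  =>  predict Class = No
```

With the Laplace or m-estimate corrections above, P(Married | Yes) would no longer be exactly zero and the Yes-class product would not collapse to 0.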
Example of Naïve Bayes Classifier (Mammals vs. Non-mammals)

Name           Give Birth  Can Fly  Live in Water  Have Legs  Class
human          yes         no       no             yes        mammals
python         no          no       no             no         non-mammals
salmon         no          no       yes            no         non-mammals
whale          yes         no       yes            no         mammals
frog           no          no       sometimes      yes        non-mammals
komodo         no          no       no             yes        non-mammals
bat            yes         yes      no             yes        mammals
pigeon         no          yes      no             yes        non-mammals
cat            yes         no       no             yes        mammals
leopard shark  yes         no       yes            no         non-mammals
turtle         no          no       sometimes      yes        non-mammals
penguin        no          no       sometimes      yes        non-mammals
porcupine      yes         no       no             yes        mammals
eel            no          no       yes            no         non-mammals
salamander     no          no       sometimes      yes        non-mammals
gila monster   no          no       no             yes        non-mammals
platypus       no          no       no             yes        mammals
owl            no          yes      no             yes        non-mammals
dolphin        yes         no       yes            no         mammals
eagle          no          yes      no             yes        non-mammals

Test record A: Give Birth = yes, Can Fly = no, Live in Water = yes, Have Legs = no, Class = ?
With M = mammals and N = non-mammals:
    P(A | M) = 6/7 × 6/7 × 2/7 × 2/7 = 0.06
    P(A | N) = 1/13 × 10/13 × 3/13 × 4/13 = 0.0042
    P(A | M) P(M) = 0.06 × 7/20 = 0.021
    P(A | N) P(N) = 0.0042 × 13/20 = 0.0027
Since P(A | M) P(M) > P(A | N) P(N), the record is classified as a mammal.

Naïve Bayes (Summary)
– Robust to isolated noise points
– Handles missing values by ignoring the instance during probability estimation
– Robust to irrelevant attributes
– The independence assumption may not hold for some attributes
  use other techniques, such as Bayesian Belief Networks (BBN), in that case

Support Vector Machines
Find a linear hyperplane (decision boundary) that separates the data.
– Many hyperplanes may separate the training data (e.g., candidates B1 and B2); which one is better, and how do we define "better"?
– Find the hyperplane that maximizes the margin, i.e., the distance between the two parallel boundaries (b11, b12 for B1; b21, b22 for B2) that just touch the closest points of each class => B1 is better than B2

Support Vector Machines: Linear Decision Boundary
The decision boundary is  w · x + b = 0,  with margin hyperplanes  w · x + b = 1  and  w · x + b = −1.
    f(x) = 1   if w · x + b ≥ 1
    f(x) = −1  if w · x + b ≤ −1
    Margin = 2 / ||w||

Support Vector Machines: Optimization
We want to maximize the margin 2 / ||w||
– This is equivalent to minimizing  L(w) = ||w||² / 2
– Subject to the constraints
    f(xi) = 1   if w · xi + b ≥ 1
    f(xi) = −1  if w · xi + b ≤ −1
This is a constrained optimization problem; numerical approaches (e.g., quadratic programming) are used to solve it.

Support Vector Machines: Non-separable Case
What if the problem is not linearly separable?
– Introduce slack variables ξi
Minimize:
    L(w) = ||w||² / 2 + C Σ_{i=1..N} ξi^k
Subject to:
    f(xi) = 1   if w · xi + b ≥ 1 − ξi
    f(xi) = −1  if w · xi + b ≤ −1 + ξi
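A small sketch of fitting a linear maximum-margin classifier on toy data, assuming scikit-learn and NumPy are available (the six points are made up; SVC with a very large C approximates the hard-margin formulation above):

```python
import numpy as np
from sklearn.svm import SVC

# Toy, linearly separable 2-D data (hypothetical)
X = np.array([[1.0, 1.0], [1.5, 0.5], [2.0, 1.5],     # class -1
              [4.0, 4.0], [4.5, 3.5], [5.0, 4.5]])    # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

# Large C approximates the hard-margin SVM; a smaller C allows slack (soft margin)
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]             # weight vector of the boundary w·x + b = 0
b = clf.intercept_[0]
print("w =", w, "b =", b)
print("margin = 2/||w|| =", 2 / np.linalg.norm(w))
print(clf.predict([[2.0, 1.0], [4.5, 4.0]]))   # -> [-1, 1]
```

For the non-separable case, a smaller C activates the slack-variable trade-off above, and swapping kernel="linear" for, e.g., kernel="rbf" corresponds to the implicit higher-dimensional transformation discussed next.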
Nonlinear Support Vector Machines
What if the decision boundary is not linear?
– Transform the data into a higher-dimensional space where a linear boundary can separate the classes

How to Construct an ROC Curve
• Use a classifier that produces a posterior probability P(+|A) for each test instance A
• Sort the instances by P(+|A) in decreasing order
• Apply a threshold at each unique value of P(+|A)
• Count the number of TP, FP, TN, FN at each threshold
• TP rate: TPR = TP / (TP + FN)
• FP rate: FPR = FP / (FP + TN)

Instance  P(+|A)  True Class
1         0.95    +
2         0.93    +
3         0.87    -
4         0.85    -
5         0.85    -
6         0.85    +
7         0.76    -
8         0.53    +
9         0.43    -
10        0.25    +

Counts at each threshold (an instance is classified as + when P(+|A) >= threshold; the Class row shows the true class of the instance whose score equals that threshold):

Class          +     -     +     -     -     -     +     -     +     +
Threshold >=   0.25  0.43  0.53  0.76  0.85  0.85  0.85  0.87  0.93  0.95  1.00
TP             5     4     4     3     3     3     3     2     2     1     0
FP             5     5     4     4     3     2     1     1     0     0     0
TN             0     0     1     1     2     3     4     4     5     5     5
FN             0     1     1     2     2     2     2     3     3     4     5
TPR            1     0.8   0.8   0.6   0.6   0.6   0.6   0.4   0.4   0.2   0
FPR            1     1     0.8   0.8   0.6   0.4   0.2   0.2   0     0     0

Plotting (FPR, TPR) for each threshold traces the ROC curve.

Precision, Recall, and F-measure
Suppose the cutoff threshold is chosen to be 0.8; that is, any instance with posterior probability greater than 0.8 is classified as positive. Compute the precision, recall, and F-measure of the model at this threshold (instances 1–6 are predicted positive, instances 7–10 negative).

                      PREDICTED CLASS
                      Class=Yes   Class=No
ACTUAL   Class=Yes    (TP) 3      (FN) 2
CLASS    Class=No     (FP) 3      (TN) 2

    precision p = 3 / (3 + 3) = 1/2
    recall    r = 3 / (3 + 2) = 3/5
    F-measure   = 2pr / (p + r) = 6/11
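The ROC bookkeeping and the threshold-0.8 exercise above can be reproduced with a few lines of Python. The scores and labels are the ten instances from the table; unlike the slide, ties at 0.85 are handled in a single step here, so only the distinct-threshold points are printed.

```python
scores = [0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25]
labels = ["+", "+", "-", "-", "-", "+", "-", "+", "-", "+"]

def confusion_at(threshold):
    """TP, FP, TN, FN when instances with score >= threshold are called positive."""
    tp = sum(1 for s, l in zip(scores, labels) if s >= threshold and l == "+")
    fp = sum(1 for s, l in zip(scores, labels) if s >= threshold and l == "-")
    fn = labels.count("+") - tp
    tn = labels.count("-") - fp
    return tp, fp, tn, fn

# ROC points: one threshold per distinct score, plus one above the maximum
for t in sorted(set(scores)) + [1.0]:
    tp, fp, tn, fn = confusion_at(t)
    print(f"t={t:.2f}  TPR={tp / (tp + fn):.1f}  FPR={fp / (fp + tn):.1f}")

# Precision, recall, F-measure at cutoff 0.8
tp, fp, tn, fn = confusion_at(0.8)
p, r = tp / (tp + fp), tp / (tp + fn)
print(p, r, 2 * p * r / (p + r))   # 0.5, 0.6, 0.5454... (= 6/11)
```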