Data Mining Classification: Basic Concepts, Decision Trees, and Model Evaluation
Lecture Notes for Chapter 4, Introduction to Data Mining, by Tan, Steinbach, Kumar (© Tan, Steinbach, Kumar, 4/18/2004)

Classification: Definition
- Given a collection of records (the training set), where each record contains a set of attributes and one of the attributes is the class:
- Find a model for the class attribute as a function of the values of the other attributes.
- Goal: previously unseen records should be assigned a class as accurately as possible.
  - A test set is used to determine the accuracy of the model. Usually the given data set is divided into training and test sets: the training set is used to build the model, and the test set is used to validate it.

Examples of Classification Task
- Predicting tumor cells as benign or malignant
- Classifying credit card transactions as legitimate or fraudulent
- Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil
- Categorizing news stories as finance, weather, entertainment, sports, etc.

Classification Techniques
- Decision tree based methods
- Rule-based methods
- Memory based reasoning
- Neural networks
- Naïve Bayes and Bayesian belief networks
- Support vector machines

Illustrating Classification Task
A training set is fed to an induction algorithm ("Learn Model"), which produces a model; the model is then applied to a test set ("Apply Model").

Training set:
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test set (class unknown):
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Example of a Decision Tree
Training data:
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model, a decision tree with splitting attributes Refund, MarSt, TaxInc:
Refund?
  Yes: NO
  No: MarSt?
    Married: NO
    Single, Divorced: TaxInc?
      < 80K: NO
      > 80K: YES

Another Example of a Decision Tree
The same data admit a different tree that tests MarSt first:
MarSt?
  Married: NO
  Single, Divorced: Refund?
    Yes: NO
    No: TaxInc?
      < 80K: NO
      > 80K: YES
There could be more than one tree that fits the same data!

Decision Tree Classification Task
The flow is the same as before: the training set is used to learn a model (here, a decision tree), and the tree is then applied to the test records with unknown class.
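Read as a program, the first tree above is just nested conditionals. A minimal Python sketch (attribute names and the 80K threshold follow the slides; routing the boundary value 80K to NO is an assumption, since the tree only labels the regions < 80K and > 80K):

```python
def classify(refund, marital_status, taxable_income_k):
    """The example decision tree above, written as nested tests.
    Returns the predicted value of the Cheat attribute."""
    if refund == "Yes":
        return "NO"
    if marital_status == "Married":
        return "NO"
    # Single or Divorced: test Taxable Income (in thousands)
    return "YES" if taxable_income_k > 80 else "NO"

print(classify("No", "Married", 80))   # -> "NO" (the test record used below)
```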
Apply Model to Test Data
Test record: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?
- Start from the root of the tree.
- Refund = No, so follow the No branch to the MarSt node.
- MarSt = Married, so follow the Married branch, which leads to a leaf labeled NO.
- Assign Cheat = "No" to the test record.

Decision Tree Induction
Many algorithms:
- Hunt's Algorithm (one of the earliest)
- CART
- ID3, C4.5
- SLIQ, SPRINT

General Structure of Hunt's Algorithm
Let D_t be the set of training records that reach a node t.
General procedure:
- If D_t contains records that all belong to the same class y_t, then t is a leaf node labeled as y_t.
- If D_t is an empty set, then t is a leaf node labeled by the default class y_d.
- If D_t contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.

Hunt's Algorithm on the tax data
1. Start with a single leaf labeled with the majority class, Don't Cheat.
2. The leaf is impure, so split on Refund: Yes leads to Don't Cheat; No still mixes both classes.
3. Under Refund = No, split on Marital Status: Married leads to Don't Cheat; Single, Divorced still mixes both classes.
4. Under Single/Divorced, split on Taxable Income: < 80K leads to Don't Cheat; >= 80K leads to Cheat.
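The general procedure maps directly onto a short recursive function. A sketch (the dict-based node layout and the left-to-right attribute choice are illustrative; a real learner would pick the attribute with the best split, which the following slides discuss):

```python
from collections import Counter

def hunt(records, labels, attributes, default):
    """Recursive sketch of Hunt's algorithm on dict-shaped records.
    records: list of dicts; labels: parallel list of class values;
    attributes: names of attributes still available for test conditions."""
    if not records:                        # empty D_t: leaf with the default class
        return default
    if len(set(labels)) == 1:              # pure D_t: leaf labeled with that class
        return labels[0]
    majority = Counter(labels).most_common(1)[0][0]
    if not attributes:                     # no tests left: fall back to majority
        return majority
    attr, rest = attributes[0], attributes[1:]   # a full implementation would
                                                 # choose the best split instead
    node = {"test": attr, "children": {}}
    for value in sorted(set(r[attr] for r in records)):
        subset = [(r, y) for r, y in zip(records, labels) if r[attr] == value]
        node["children"][value] = hunt([r for r, _ in subset],
                                       [y for _, y in subset], rest, majority)
    return node

records = [{"Refund": "No", "MarSt": "Married"},
           {"Refund": "No", "MarSt": "Single"}]
print(hunt(records, ["No", "Yes"], ["Refund", "MarSt"], "No"))
```

Splitting every attribute on all of its distinct values sidesteps the question of choosing the best test condition; the next slides introduce the criteria a real induction algorithm optimizes at this step.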
Tree Induction
- Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.
- Issues:
  - Determine how to split the records: how to specify the attribute test condition, and how to determine the best split.
  - Determine when to stop splitting.

How to Specify the Test Condition?
- Depends on the attribute type: nominal, ordinal, or continuous.
- Depends on the number of ways to split: 2-way split or multi-way split.

Splitting Based on Nominal Attributes
- Multi-way split: use as many partitions as there are distinct values, e.g. CarType into {Family}, {Sports}, {Luxury}.
- Binary split: divides the values into two subsets; need to find the optimal partitioning, e.g. {Sports, Luxury} vs. {Family}, or {Family, Luxury} vs. {Sports}.

Splitting Based on Ordinal Attributes
- Multi-way split: use as many partitions as distinct values, e.g. Size into {Small}, {Medium}, {Large}.
- Binary split: divides the values into two subsets; need to find the optimal partitioning, e.g. {Small, Medium} vs. {Large}, or {Medium, Large} vs. {Small}.
- What about the split {Small, Large} vs. {Medium}? (It does not preserve the order of the attribute values.)

Splitting Based on Continuous Attributes
- Different ways of handling:
  - Discretization to form an ordinal categorical attribute: static (discretize once at the beginning) or dynamic (ranges found by equal-interval bucketing, equal-frequency bucketing/percentiles, or clustering).
  - Binary decision: (A < v) or (A >= v); consider all possible splits and find the best cut. This can be more compute intensive.
- [Figure: a binary split (e.g. Taxable Income > 80K?) vs. a multi-way split into discretized ranges.]

How to Determine the Best Split
- Greedy approach: nodes with a homogeneous class distribution are preferred.
- Need a measure of node impurity: a non-homogeneous node has a high degree of impurity; a homogeneous node has a low degree of impurity.

Measures of Node Impurity
- Gini index
- Entropy
- Misclassification error

How to Find the Best Split
Before splitting there are 10 records of class 0 and 10 records of class 1; the impurity of this parent node is M0. Candidate test A? splits the node into children N1 and N2 (impurities M1 and M2), and candidate test B? splits it into N3 and N4 (impurities M3 and M4). Let M12 and M34 be the weighted impurities of the two sets of children. Which test condition is the best? Compare Gain = M0 − M12 vs. M0 − M34: the test with the larger gain wins.
Measure of Impurity: GINI
- Gini index for a given node t:
  GINI(t) = 1 − Σ_j [p(j|t)]^2
  where p(j|t) is the relative frequency of class j at node t.
- Maximum (1 − 1/n_c, with n_c the number of classes) when records are equally distributed among all classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.
- For 6 records in two classes: (C1=0, C2=6) gives Gini = 0.000; (1, 5) gives 0.278; (2, 4) gives 0.444; (3, 3) gives 0.500.

Examples for Computing GINI
- C1=0, C2=6: P(C1) = 0, P(C2) = 1; Gini = 1 − 0 − 1 = 0
- C1=1, C2=5: Gini = 1 − (1/6)^2 − (5/6)^2 = 0.278
- C1=2, C2=4: Gini = 1 − (2/6)^2 − (4/6)^2 = 0.444

Splitting Based on GINI
- Used in CART, SLIQ, SPRINT.
- When a node p is split into k partitions (children), the quality of the split is computed as
  GINI_split = Σ_{i=1}^{k} (n_i / n) GINI(i)
  where n_i is the number of records at child i and n is the number of records at node p.

Binary Attributes: Computing the GINI Index
The split creates two partitions; the effect of weighing the partitions is that larger and purer partitions are sought.
Parent: C1 = 6, C2 = 6, Gini = 0.500. A binary test B? sends C1=5, C2=2 to node N1 and C1=1, C2=4 to node N2:
- Gini(N1) = 1 − (5/7)^2 − (2/7)^2 = 0.408
- Gini(N2) = 1 − (1/5)^2 − (4/5)^2 = 0.320
- Gini(children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371

Categorical Attributes: Computing the Gini Index
For each distinct value, gather the counts for each class in the dataset, and use the resulting count matrix to make decisions.

Multi-way split on CarType:
      Family  Sports  Luxury
C1    1       2       1
C2    4       1       1
Gini = 0.393

Two-way splits (find the best partition of values):
- {Sports, Luxury} vs. {Family}: C1 = 3 / 1, C2 = 2 / 4; Gini = 0.400
- {Sports} vs. {Family, Luxury}: C1 = 2 / 2, C2 = 1 / 5; Gini = 0.419

Continuous Attributes: Computing the Gini Index
- Use binary decisions based on one value; the number of possible splitting values equals the number of distinct values.
- Each splitting value v has a count matrix associated with it: the class counts in each of the partitions A < v and A >= v.
- Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. This is computationally inefficient, since it repeats work.

Continuous Attributes: Computing the Gini Index (efficiently)
For efficient computation, for each attribute:
- Sort the attribute on its values.
- Linearly scan these values, each time updating the count matrix and computing the Gini index.
- Choose the split position that has the least Gini index.

For the Taxable Income attribute of the tax data (sorted values 60, 70, 75, 85, 90, 95, 100, 120, 125, 220; Cheat labels No, No, No, Yes, Yes, Yes, No, No, No, No), the candidate split positions and their Gini values are:

Position  55     65     72     80     87     92     97     110    122    172    230
Gini      0.420  0.400  0.375  0.343  0.417  0.400  0.300  0.343  0.375  0.400  0.420

The best split is Taxable Income <= 97, with Gini = 0.300.
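The formulas and the sort-and-scan method above are easy to check in code. A sketch that reproduces the slides' numbers (function names are ours; the scan only cuts between distinct values, so the boundary positions 55 and 230, which never improve on the unsplit node, are skipped, and midpoints are not rounded down as they are on the slides):

```python
def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)^2, from raw class counts at a node."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def gini_split(partitions):
    """GINI_split = sum_i (n_i / n) * GINI(i) over the children's count lists."""
    n = sum(sum(p) for p in partitions)
    return sum(sum(p) / n * gini(p) for p in partitions)

print(gini([5, 2]), gini([1, 4]))     # 0.408..., 0.32  (nodes N1 and N2)
print(gini_split([[5, 2], [1, 4]]))   # 0.371...        (the B? split above)

def best_numeric_split(values, labels):
    """Sort once, then linearly scan the candidate thresholds while
    maintaining running class counts (the efficient method above)."""
    data = sorted(zip(values, labels))
    classes = sorted(set(labels))
    left = {c: 0 for c in classes}
    right = {c: labels.count(c) for c in classes}
    best = (None, float("inf"))
    for i in range(len(data) - 1):
        v, y = data[i]
        left[y] += 1
        right[y] -= 1
        if data[i + 1][0] == v:          # only cut between distinct values
            continue
        cut = (v + data[i + 1][0]) / 2
        g = gini_split([list(left.values()), list(right.values())])
        if g < best[1]:
            best = (cut, g)
    return best

income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]   # Taxable Income (K)
cheat  = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
print(best_numeric_split(income, cheat))   # (97.5, 0.3): position 97, Gini 0.300
```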
Alternative Splitting Criteria Based on INFO
- Entropy at a given node t:
  Entropy(t) = − Σ_j p(j|t) log p(j|t)
  where p(j|t) is the relative frequency of class j at node t (logs base 2 in the examples below).
- Measures the homogeneity of a node: maximum (log n_c) when records are equally distributed among all classes, implying the least information; minimum (0.0) when all records belong to one class, implying the most information.
- Entropy-based computations are similar to the GINI index computations.

Examples for Computing Entropy
- C1=0, C2=6: P(C1) = 0, P(C2) = 1; Entropy = −0 log 0 − 1 log 1 = 0
- C1=1, C2=5: Entropy = −(1/6) log2(1/6) − (5/6) log2(5/6) = 0.65
- C1=2, C2=4: Entropy = −(2/6) log2(2/6) − (4/6) log2(4/6) = 0.92

Splitting Based on INFO: Information Gain
- Information gain:
  GAIN_split = Entropy(p) − Σ_{i=1}^{k} (n_i / n) Entropy(i)
  where the parent node p is split into k partitions and n_i is the number of records in partition i.
- Measures the reduction in entropy achieved because of the split; choose the split that achieves the most reduction (maximizes GAIN).
- Used in ID3 and C4.5.
- Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

Splitting Based on INFO: Gain Ratio
- Gain ratio:
  GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = − Σ_{i=1}^{k} (n_i / n) log(n_i / n)
  with the parent node p split into k partitions and n_i the number of records in partition i.
- Adjusts information gain by the entropy of the partitioning (SplitINFO): a higher-entropy partitioning (a large number of small partitions) is penalized.
- Used in C4.5; designed to overcome the disadvantage of information gain.
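The entropy-based criteria can be checked the same way. A sketch using base-2 logs, as in the examples above (the CarType gain printed at the end is our own worked illustration, not a number from the slides):

```python
import math

def entropy(counts):
    """Entropy(t) = -sum_j p(j|t) log2 p(j|t); the 0 log 0 terms are dropped."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def info_gain(parent_counts, partitions):
    """GAIN_split = Entropy(parent) - sum_i (n_i/n) Entropy(i)."""
    n = sum(parent_counts)
    children = sum(sum(p) / n * entropy(p) for p in partitions)
    return entropy(parent_counts) - children

def gain_ratio(parent_counts, partitions):
    """GainRATIO = GAIN_split / SplitINFO, SplitINFO = -sum_i (n_i/n) log2(n_i/n)."""
    n = sum(parent_counts)
    split_info = -sum(sum(p) / n * math.log2(sum(p) / n) for p in partitions)
    return info_gain(parent_counts, partitions) / split_info

print(entropy([0, 6]), entropy([1, 5]), entropy([2, 4]))  # 0.0, 0.65..., 0.918...

# CarType multi-way split from the Gini slides: parent C1=4, C2=6
print(info_gain([4, 6], [[1, 4], [2, 1], [1, 1]]))   # about 0.135
print(gain_ratio([4, 6], [[1, 4], [2, 1], [1, 1]]))  # about 0.091
```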
Splitting Criteria Based on Classification Error
- Classification error at a node t:
  Error(t) = 1 − max_i P(i|t)
- Measures the misclassification error made by a node.
- Maximum (1 − 1/n_c) when records are equally distributed among all classes, implying the least interesting information; minimum (0.0) when all records belong to one class, implying the most interesting information.

Examples for Computing Error
- C1=0, C2=6: P(C1) = 0, P(C2) = 1; Error = 1 − max(0, 1) = 0
- C1=1, C2=5: Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6
- C1=2, C2=4: Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3

Comparison Among Splitting Criteria
For a 2-class problem, all three measures are zero for a pure node and peak when the classes are equally likely (p = 0.5), where entropy reaches 1 while Gini and misclassification error reach 0.5. [Figure: the three measures plotted against the fraction p of records in one class.]
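For a 2-class node with positive fraction p, the three measures reduce to closed forms, which makes the comparison in the figure easy to tabulate. A small sketch:

```python
import math

def gini2(p):     return 2 * p * (1 - p)    # = 1 - p^2 - (1-p)^2
def error2(p):    return min(p, 1 - p)      # = 1 - max(p, 1-p)
def entropy2(p):
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p)
                                         + (1 - p) * math.log2(1 - p))

# All three vanish for a pure node and peak at p = 0.5
for p in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"p={p:<4} entropy={entropy2(p):.3f} "
          f"gini={gini2(p):.3f} error={error2(p):.3f}")
```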
Stopping Criteria for Tree Induction
- Stop expanding a node when all the records belong to the same class.
- Stop expanding a node when all the records have similar attribute values.
- Early termination (discussed under overfitting below).

Decision Tree Based Classification
Advantages:
- Inexpensive to construct.
- Extremely fast at classifying unknown records.
- Easy to interpret for small-sized trees.
- Accuracy comparable to other classification techniques for many simple data sets.

Example: C4.5
- Simple depth-first construction; uses information gain.
- Sorts continuous attributes at each node.
- Needs the entire data set to fit in memory, so it is unsuitable for large datasets (would need out-of-core sorting).
- You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz

Practical Issues of Classification
- Underfitting and overfitting
- Missing values
- Costs of classification

Underfitting and Overfitting
Underfitting: when the model is too simple, both training and test errors are large. [Figure: training and test error as a function of model complexity.]

Overfitting due to Noise
The decision boundary is distorted by a noise point. [Figure omitted.]

Overfitting due to Insufficient Examples
A lack of data points in the lower half of the diagram makes it difficult to predict the class labels of that region correctly: too few training records in the region cause the decision tree to classify the test examples using other training records that are irrelevant to the classification task. [Figure omitted.]

How to Address Overfitting
- Pre-pruning (early stopping rule): stop the algorithm before it becomes a fully-grown tree.
  - Typical stopping conditions for a node: stop if all instances belong to the same class; stop if all the attribute values are the same.
  - More restrictive conditions: stop if the number of instances is less than some user-specified threshold; stop if the class distribution of the instances is independent of the available features (e.g., using a χ² test); stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).
- Post-pruning:
  - Grow the decision tree to its entirety, then trim the nodes of the tree in a bottom-up fashion.
  - If the generalization error improves after trimming, replace the sub-tree by a leaf node whose class label is determined from the majority class of instances in the sub-tree.
  - Can use MDL for post-pruning.
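As one concrete, simplified variant of post-pruning, here is a reduced-error pruning sketch that reuses the dict-shaped trees from the Hunt's algorithm sketch earlier. Using held-out validation records as a stand-in for the generalization-error estimate is our assumption, not the slides' exact procedure:

```python
from collections import Counter

def predict(tree, record, default):
    """Follow a dict-based tree (as built by the hunt() sketch) to a leaf."""
    while isinstance(tree, dict):
        tree = tree["children"].get(record[tree["test"]], default)
    return tree

def prune(tree, records, labels, default):
    """Bottom-up pruning: replace a subtree by its majority-class leaf
    whenever the leaf does no worse on the validation records."""
    if not isinstance(tree, dict) or not labels:
        return tree                 # a leaf, or no validation data reaches here
    for value, child in tree["children"].items():
        subset = [(r, y) for r, y in zip(records, labels)
                  if r[tree["test"]] == value]
        tree["children"][value] = prune(child,
                                        [r for r, _ in subset],
                                        [y for _, y in subset], default)
    majority = Counter(labels).most_common(1)[0][0]
    subtree_errors = sum(predict(tree, r, default) != y
                         for r, y in zip(records, labels))
    leaf_errors = sum(y != majority for y in labels)
    return majority if leaf_errors <= subtree_errors else tree
```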
Handling Missing Attribute Values
Missing values affect decision tree construction in three different ways:
- They affect how impurity measures are computed.
- They affect how to distribute an instance with a missing value to the child nodes.
- They affect how a test instance with a missing value is classified.

Other Issues
- Data fragmentation
- Search strategy
- Expressiveness
- Tree replication

Data Fragmentation
- The number of instances gets smaller as you traverse down the tree.
- The number of instances at the leaf nodes could be too small to make any statistically significant decision.

Search Strategy
- Finding an optimal decision tree is NP-hard.
- The algorithm presented so far uses a greedy, top-down, recursive partitioning strategy to induce a reasonable solution.
- Other strategies? Bottom-up; bi-directional.

Expressiveness
- A decision tree provides an expressive representation for learning discrete-valued functions, but it does not generalize well to certain types of Boolean functions.
  - Example, the parity function: Class = 1 if there is an even number of Boolean attributes with truth value True, Class = 0 if that number is odd. Accurate modeling requires a complete tree.
- Not expressive enough for modeling continuous variables, particularly when the test condition involves only a single attribute at a time.

Decision Boundary
- The border line between two neighboring regions of different classes is known as the decision boundary.
- The decision boundary is parallel to the axes because each test condition involves a single attribute at a time.

Oblique Decision Trees
- The test condition may involve multiple attributes, e.g. x + y < 1 separating Class = + from Class = −.
- More expressive representation.
- Finding the optimal test condition is computationally expensive.

Tree Replication
The same subtree can appear in multiple branches of the tree. [Figure omitted.]

Model Evaluation
- Metrics for performance evaluation: how to evaluate the performance of a model?
- Methods for performance evaluation: how to obtain reliable estimates?
- Methods for model comparison: how to compare the relative performance among competing models?

Metrics for Performance Evaluation
- Focus on the predictive capability of a model, rather than on how fast it classifies or builds models, scalability, etc.
- Confusion matrix:

                       PREDICTED CLASS
                       Class=Yes   Class=No
  ACTUAL   Class=Yes   a (TP)      b (FN)
  CLASS    Class=No    c (FP)      d (TN)

  a: TP (true positive); b: FN (false negative); c: FP (false positive); d: TN (true negative)
- The most widely-used metric:
  Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)

Limitation of Accuracy
- Consider a 2-class problem with 9990 examples of class 0 and 10 examples of class 1.
- If the model predicts everything to be class 0, its accuracy is 9990/10000 = 99.9%.
- Accuracy is misleading here because the model does not detect any class 1 example.

Cost Matrix
C(i|j): the cost of misclassifying a class j example as class i.

                       PREDICTED CLASS
                       Class=Yes     Class=No
  ACTUAL   Class=Yes   C(Yes|Yes)    C(No|Yes)
  CLASS    Class=No    C(Yes|No)     C(No|No)

Computing the Cost of Classification
Cost matrix: C(+|+) = −1, C(−|+) = 100, C(+|−) = 1, C(−|−) = 0.

Model M1 (rows = actual, columns = predicted):
     +    −
+    150  40
−    60   250
Accuracy = 80%, Cost = 3910

Model M2:
     +    −
+    250  45
−    5    200
Accuracy = 90%, Cost = 4255

Despite its lower accuracy, M1 has the lower cost.

Cost-Sensitive Measures
- Precision (p) = a / (a + c)
- Recall (r) = a / (a + b)
- F-measure (F) = 2rp / (r + p) = 2a / (2a + b + c)
- Precision is biased towards C(Yes|Yes) and C(Yes|No); recall is biased towards C(Yes|Yes) and C(No|Yes); the F-measure is biased towards all except C(No|No).
- Weighted Accuracy = (w1·a + w4·d) / (w1·a + w2·b + w3·c + w4·d)
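These definitions translate directly into code. A sketch that reproduces the M1 example and the accuracy-limitation example (the zero-denominator guards are ours):

```python
def metrics(a, b, c, d):
    """a=TP, b=FN, c=FP, d=TN, following the confusion matrix above."""
    accuracy  = (a + d) / (a + b + c + d)
    precision = a / (a + c) if a + c else 0.0
    recall    = a / (a + b) if a + b else 0.0
    f_measure = 2 * a / (2 * a + b + c) if a else 0.0   # = 2rp / (r + p)
    return accuracy, precision, recall, f_measure

def total_cost(a, b, c, d, c_tp, c_fn, c_fp, c_tn):
    """Weighted cost: each confusion-matrix cell times its C(i|j) entry."""
    return a * c_tp + b * c_fn + c * c_fp + d * c_tn

# Model M1: TP=150, FN=40, FP=60, TN=250 under the cost matrix above
print(metrics(150, 40, 60, 250)[0])                  # accuracy = 0.80
print(total_cost(150, 40, 60, 250, -1, 100, 1, 0))   # cost = 3910

# "Limitation of Accuracy": predict everything as class 0 with 10 positives
print(metrics(0, 10, 0, 9990))   # (0.999, 0.0, 0.0, 0.0): high accuracy,
                                 # yet no positive example is detected
```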
Methods for Performance Evaluation
- How to obtain a reliable estimate of performance?
- The performance of a model may depend on factors besides the learning algorithm: class distribution, cost of misclassification, and the size of the training and test sets.

Learning Curve
- A learning curve shows how accuracy changes with varying sample size. [Figure: accuracy vs. training sample size.]
- Requires a sampling schedule for creating the curve: arithmetic sampling (Langley et al.) or geometric sampling (Provost et al.).
- Effect of a small sample size: bias in the estimate, and variance of the estimate.

Methods of Estimation
- Holdout: reserve 2/3 for training and 1/3 for testing.
- Random subsampling: repeated holdout.
- Cross validation: partition the data into k disjoint subsets; k-fold: train on k−1 partitions, test on the remaining one; leave-one-out: k = n.
- Stratified sampling: oversampling vs. undersampling.
- Bootstrap: sampling with replacement.

ROC (Receiver Operating Characteristic)
- Developed in the 1950s for signal detection theory, to analyze noisy signals; characterizes the trade-off between positive hits and false alarms.
- A ROC curve plots TPR (on the y-axis) against FPR (on the x-axis).
- The performance of each classifier is represented as a point on the ROC curve; changing the threshold of the algorithm, the sample distribution, or the cost matrix changes the location of the point.

How to Construct a ROC Curve
- Use a classifier that produces a posterior probability P(+|A) for each test instance A.
- Sort the instances according to P(+|A) in decreasing order.
- Apply a threshold at each unique value of P(+|A).
- Count the number of TP, FP, TN, FN at each threshold; TP rate TPR = TP/(TP+FN), FP rate FPR = FP/(FP+TN).

Example (10 instances):
Instance  P(+|A)  True Class
1         0.95    +
2         0.93    +
3         0.87    -
4         0.85    -
5         0.85    -
6         0.85    +
7         0.76    -
8         0.53    +
9         0.43    -
10        0.25    +

Sweeping the threshold (shown in ascending order, one column per instance):
Threshold >=  0.25  0.43  0.53  0.76  0.85  0.85  0.85  0.87  0.93  0.95  1.00
TP            5     4     4     3     3     3     3     2     2     1     0
FP            5     5     4     4     3     2     1     1     0     0     0
TN            0     0     1     1     2     3     4     4     5     5     5
FN            0     1     1     2     2     2     2     3     3     4     5
TPR           1     0.8   0.8   0.6   0.6   0.6   0.6   0.4   0.4   0.2   0
FPR           1     1     0.8   0.8   0.6   0.4   0.2   0.2   0     0     0

The ROC curve is the plot of these (FPR, TPR) pairs.

ROC Curve
Reading points as (TPR, FPR):
- (0,0): declare everything to be the negative class.
- (1,1): declare everything to be the positive class.
- (1,0): ideal.
- Diagonal line: random guessing; below the diagonal line, the prediction is the opposite of the true class.

Using ROC for Model Comparison
- Neither model consistently outperforms the other: M1 is better for small FPR, M2 is better for large FPR. [Figure: ROC curves of models M1 and M2 crossing.]
- The area under the ROC curve summarizes performance: ideal area = 1; random guess area = 0.5.
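The construction above is a single sorted sweep. A sketch that reproduces the table's distinct-threshold points (grouping tied scores, so the three 0.85 instances move together, is our design choice; the slides' table lists the intermediate points within that tie as well):

```python
def roc_points(scores, labels, positive="+"):
    """Construct ROC points by sweeping a threshold down through the
    sorted P(+|A) scores, as described above; tied scores move together."""
    pos = labels.count(positive)
    neg = len(labels) - pos
    ranked = sorted(zip(scores, labels), reverse=True)
    points, tp, fp, i = [(0.0, 0.0)], 0, 0, 0   # threshold above every score
    while i < len(ranked):
        t = ranked[i][0]
        while i < len(ranked) and ranked[i][0] == t:
            if ranked[i][1] == positive:
                tp += 1
            else:
                fp += 1
            i += 1
        points.append((fp / neg, tp / pos))     # (FPR, TPR)
    return points

# The 10 scored test instances from the slides:
scores = [0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25]
labels = ["+", "+", "-", "-", "-", "+", "-", "+", "-", "+"]
print(roc_points(scores, labels))
# [(0.0, 0.0), (0.0, 0.2), (0.0, 0.4), (0.2, 0.4), (0.6, 0.6),
#  (0.8, 0.6), (0.8, 0.8), (1.0, 0.8), (1.0, 1.0)]
```

Summing the trapezoids under these points would give the area under the curve used for model comparison above.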