Data Mining Classification: Alternative Techniques
Lecture Notes for Chapter 5, Introduction to Data Mining, by Tan, Steinbach, Kumar

Rule-Based Classifier
Classify records by using a collection of "if...then..." rules.
Rule: (Condition) -> y
- Condition = (A1 op v1) AND (A2 op v2) AND ... AND (Ak op vk) is a conjunction of attribute tests, and y is the class label
- LHS: rule antecedent or condition
- RHS: rule consequent
- Examples of classification rules:
  (Blood Type=Warm) AND (Lay Eggs=Yes) -> Birds
  (Taxable Income < 50K) AND (Refund=Yes) -> Evade=No

Rule-Based Classifier (Example)
The vertebrate data set:

Name           Blood Type  Give Birth  Can Fly  Live in Water  Class
human          warm        yes         no       no             mammals
python         cold        no          no       no             reptiles
salmon         cold        no          no       yes            fishes
whale          warm        yes         no       yes            mammals
frog           cold        no          no       sometimes      amphibians
komodo         cold        no          no       no             reptiles
bat            warm        yes         yes      no             mammals
pigeon         warm        no          yes      no             birds
cat            warm        yes         no       no             mammals
leopard shark  cold        yes         no       yes            fishes
turtle         cold        no          no       sometimes      reptiles
penguin        warm        no          no       sometimes      birds
porcupine      warm        yes         no       no             mammals
eel            cold        no          no       yes            fishes
salamander     cold        no          no       sometimes      amphibians
gila monster   cold        no          no       no             reptiles
platypus       warm        no          no       no             mammals
owl            warm        no          yes      no             birds
dolphin        warm        yes         no       yes            mammals
eagle          warm        no          yes      no             birds

An example rule set for this data:
R1: (Give Birth = no) AND (Can Fly = yes) -> Birds
R2: (Give Birth = no) AND (Live in Water = yes) -> Fishes
R3: (Give Birth = yes) AND (Blood Type = warm) -> Mammals
R4: (Give Birth = no) AND (Can Fly = no) -> Reptiles
R5: (Live in Water = sometimes) -> Amphibians

Application of Rule-Based Classifier
A rule r covers an instance x if the attributes of the instance satisfy the condition of the rule.
- A hawk (warm-blooded, does not give birth, can fly, does not live in water) is covered by R1 => Bird
- A grizzly bear (warm-blooded, gives birth, cannot fly, does not live in water) is covered by R3 => Mammal

Rule Coverage and Accuracy
- Coverage of a rule: the fraction of records that satisfy the antecedent of the rule.
- Accuracy of a rule: the fraction of records that satisfy both the antecedent and the consequent, out of those that satisfy the antecedent.

The tax-evasion data set:

Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Example: the rule (Status=Single) -> No has Coverage = 40% (4 of the 10 records are Single) and Accuracy = 50% (2 of those 4 records have class No).
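To make the two definitions concrete, here is a minimal sketch (not from the original notes) that computes coverage and accuracy for the rule (Status=Single) -> No on the ten-record data set above; the dictionary-based record format is an assumption made purely for illustration.

```python
# Minimal sketch: coverage and accuracy of a rule on the tax-evasion data set.
records = [
    {"Refund": "Yes", "Status": "Single",   "Income": 125, "Class": "No"},
    {"Refund": "No",  "Status": "Married",  "Income": 100, "Class": "No"},
    {"Refund": "No",  "Status": "Single",   "Income": 70,  "Class": "No"},
    {"Refund": "Yes", "Status": "Married",  "Income": 120, "Class": "No"},
    {"Refund": "No",  "Status": "Divorced", "Income": 95,  "Class": "Yes"},
    {"Refund": "No",  "Status": "Married",  "Income": 60,  "Class": "No"},
    {"Refund": "Yes", "Status": "Divorced", "Income": 220, "Class": "No"},
    {"Refund": "No",  "Status": "Single",   "Income": 85,  "Class": "Yes"},
    {"Refund": "No",  "Status": "Married",  "Income": 75,  "Class": "No"},
    {"Refund": "No",  "Status": "Single",   "Income": 90,  "Class": "Yes"},
]

def coverage_and_accuracy(antecedent, consequent, data):
    """antecedent: dict of attribute -> required value; consequent: predicted class."""
    covered = [r for r in data if all(r[a] == v for a, v in antecedent.items())]
    coverage = len(covered) / len(data)
    accuracy = sum(r["Class"] == consequent for r in covered) / len(covered) if covered else 0.0
    return coverage, accuracy

print(coverage_and_accuracy({"Status": "Single"}, "No", records))  # (0.4, 0.5)
```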
How Does a Rule-Based Classifier Work?
Using rules R1-R5 above:
- A lemur (warm-blooded, gives birth) triggers rule R3, so it is classified as a mammal.
- A turtle (cold-blooded, does not give birth, lives in water sometimes) triggers both R4 and R5.
- A dogfish shark (cold-blooded, gives birth, lives in water) triggers none of the rules.

Characteristics of Rule-Based Classifier
- Mutually exclusive rules: the rules are independent of each other, so every record is covered by at most one rule.
- Exhaustive rules: the rule set accounts for every possible combination of attribute values, so each record is covered by at least one rule.

From Decision Trees to Rules
A decision tree that splits on Refund, then Marital Status, then Taxable Income yields one classification rule per leaf:
(Refund=Yes) ==> No
(Refund=No, Marital Status={Single,Divorced}, Taxable Income<80K) ==> No
(Refund=No, Marital Status={Single,Divorced}, Taxable Income>80K) ==> Yes
(Refund=No, Marital Status={Married}) ==> No
These rules are mutually exclusive and exhaustive, and the rule set contains as much information as the tree.

Rules Can Be Simplified
On the tax-evasion data set above, the initial rule (Refund=No) AND (Status=Married) -> No can be simplified to (Status=Married) -> No, because every married record in the data has class No regardless of Refund.

Effect of Rule Simplification
- Rules are no longer mutually exclusive: a record may trigger more than one rule. Solution: use an ordered rule set, or an unordered rule set with a voting scheme.
- Rules are no longer exhaustive: a record may not trigger any rule. Solution: use a default class.

Ordered Rule Set
- Rules are rank ordered according to their priority; an ordered rule set is also known as a decision list.
- When a test record is presented to the classifier, it is assigned the class label of the highest-ranked rule it triggers; if no rule fires, it is assigned the default class.
- Example: with rules R1-R5 in that order, a turtle triggers both R4 (Reptiles) and R5 (Amphibians); the higher-ranked rule R4 fires first, so the turtle is classified as a reptile. A decision-list classifier is sketched below.
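To illustrate how an ordered rule set is applied, the sketch below encodes R1-R5 as (condition, class) pairs and returns the class of the first rule that fires, falling back to a default class. The representation is an assumption for illustration only, not something prescribed by the original notes.

```python
# Minimal sketch: classifying with an ordered rule set (decision list).
rules = [
    ({"Give Birth": "no",  "Can Fly": "yes"},       "Birds"),       # R1
    ({"Give Birth": "no",  "Live in Water": "yes"}, "Fishes"),      # R2
    ({"Give Birth": "yes", "Blood Type": "warm"},   "Mammals"),     # R3
    ({"Give Birth": "no",  "Can Fly": "no"},        "Reptiles"),    # R4
    ({"Live in Water": "sometimes"},                "Amphibians"),  # R5
]

def classify(record, rules, default="Unknown"):
    for condition, label in rules:                      # rules are tried in priority order
        if all(record.get(a) == v for a, v in condition.items()):
            return label                                # the first matching rule wins
    return default                                      # no rule fires -> default class

turtle = {"Blood Type": "cold", "Give Birth": "no", "Can Fly": "no", "Live in Water": "sometimes"}
print(classify(turtle, rules))   # 'Reptiles' (R4 outranks R5)
```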
Rule Ordering Schemes
- Rule-based ordering: individual rules are ranked based on their quality.
- Class-based ordering: rules that belong to the same class appear together.

Rule-based ordering (rules listed by quality):
(Refund=Yes) ==> No
(Refund=No, Marital Status={Single,Divorced}, Taxable Income<80K) ==> No
(Refund=No, Marital Status={Single,Divorced}, Taxable Income>80K) ==> Yes
(Refund=No, Marital Status={Married}) ==> No

Class-based ordering (all rules for class No before the rule for class Yes):
(Refund=Yes) ==> No
(Refund=No, Marital Status={Single,Divorced}, Taxable Income<80K) ==> No
(Refund=No, Marital Status={Married}) ==> No
(Refund=No, Marital Status={Single,Divorced}, Taxable Income>80K) ==> Yes

Building Classification Rules
Direct method: extract rules directly from data (e.g., RIPPER, CN2, Holte's 1R).

Holte's 1R algorithm:
  for each attribute
    for each value of that attribute
      find the class distribution of records with attribute=value
      conc = most frequent class
      make rule: attribute=value -> conc
    calculate the error rate of this attribute's rule set
  select the rule set (attribute) with the lowest error rate

Indirect method: extract rules from other classification models such as decision trees or neural networks (e.g., C4.5rules).

Direct Method: Sequential Covering
1. Start from an empty rule.
2. Grow a rule using the Learn-One-Rule function.
3. Remove the training records covered by the rule.
4. Repeat steps 2 and 3 until a stopping criterion is met.

(Figure: sequential covering on a 2-D data set; the original data, followed by steps 1-3 in which rules R1 and R2 are grown one at a time, each covering and then removing a region of positive examples.)

Aspects of sequential covering: rule growing, instance elimination, rule evaluation, stopping criterion, and rule pruning.

Rule Growing
Two common strategies:
(a) General-to-specific: start from the empty rule {} and add one conjunct at a time, e.g. Refund=No, Status=Single, Status=Divorced, Status=Married, Income>80K, keeping the conjunct that gives the best class distribution among the covered records.
(b) Specific-to-general: start from a rule built from a single positive instance, e.g. (Refund=No, Status=Single, Income=85K) -> Yes, and drop conjuncts to generalize it, e.g. to (Refund=No, Status=Single) -> Yes.

Rule Growing (Examples)
CN2 algorithm:
- Start from an empty conjunct: {}
- Add the conjunct that minimizes the entropy measure: {A}, {A,B}, ...
- Determine the rule consequent by taking the majority class of the instances covered by the rule.
RIPPER algorithm:
- Start from an empty rule: {} => class
- Add the conjunct that maximizes FOIL's information gain measure:
  R0: {} => class (initial rule)
  R1: {A} => class (rule after adding a conjunct)
  Gain(R0, R1) = t * [ log2( p1 / (p1 + n1) ) - log2( p0 / (p0 + n0) ) ]
  where t is the number of positive instances covered by both R0 and R1, p0 and n0 are the numbers of positive and negative instances covered by R0, and p1 and n1 are the numbers of positive and negative instances covered by R1. A small helper for this calculation is sketched below.
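FOIL's gain can be computed directly from the coverage counts. The helper below is an illustration with hypothetical counts, not code from RIPPER itself.

```python
import math

def foil_gain(p0, n0, p1, n1):
    """FOIL's information gain for extending rule R0 (covering p0 pos, n0 neg)
    to rule R1 (covering p1 pos, n1 neg). t is the number of positives covered
    by both rules, which for a conjunctive extension equals p1."""
    t = p1
    return t * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

# Hypothetical counts: the empty rule covers 100 pos / 400 neg;
# adding conjunct A narrows coverage to 90 pos / 10 neg.
print(foil_gain(p0=100, n0=400, p1=90, n1=10))
```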
Instance Elimination
Why do we need to eliminate the instances covered by a rule? Otherwise the next rule learned would be identical to the previous one.
- Why remove positive instances? To ensure that the next rule is different.
- Why remove negative instances? To prevent underestimating the accuracy of subsequent rules. Compare rules R2 and R3 in the figure: if the negative instances already covered by earlier rules were left in, R3 would look less accurate than it really is.
(Figure: positive and negative instances in attribute space with three candidate rules R1, R2, R3.)

Rule Evaluation
Metrics for a rule that covers n instances, nc of which belong to the rule's class, with k classes and prior probability p of the class:
- Accuracy = nc / n
- Laplace = (nc + 1) / (n + k)
- M-estimate = (nc + k*p) / (n + k)
- FOIL's gain: FOIL_Gain = pos' * [ log2( pos' / (pos' + neg') ) - log2( pos / (pos + neg) ) ],
  where pos/neg are the numbers of positive/negative instances covered by the rule before adding a conjunct, and pos'/neg' after.

Stopping Criterion and Rule Pruning
Stopping criterion:
- Compute the gain of the new rule; if the gain is not significant, discard it.
Rule pruning (similar to post-pruning of decision trees):
- Pruning metric: FOIL_Prune(R) = (pos - neg) / (pos + neg), computed on a validation set.
- Reduced error pruning: remove one of the conjuncts in the rule, compare the error rate on the validation set before and after pruning, and keep the pruned rule if the error improves.

Summary of Direct Method
Grow a single rule, remove the instances it covers, prune the rule (if necessary), add the rule to the current rule set, and repeat. A bare-bones sketch of this loop follows.
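The sketch below is an illustration of sequential covering under simplifying assumptions: rules are plain attribute=value conjunctions, Learn-One-Rule greedily adds the conjunct that most improves accuracy (real systems use FOIL's gain or entropy and add pruning), and the toy data at the end is invented for the example.

```python
def matches(condition, record):
    return all(record.get(a) == v for a, v in condition.items())

def learn_one_rule(data, target):
    """Greedy general-to-specific growth: repeatedly add the single attribute=value
    conjunct that most improves the rule's accuracy on `data`."""
    condition = {}
    while True:
        covered = [r for r in data if matches(condition, r)]
        best, best_acc = None, sum(r["Class"] == target for r in covered) / len(covered)
        for r in covered:
            for a, v in r.items():
                if a == "Class" or a in condition:
                    continue
                cand = dict(condition, **{a: v})
                cov = [x for x in data if matches(cand, x)]
                acc = sum(x["Class"] == target for x in cov) / len(cov)
                if acc > best_acc:
                    best, best_acc = cand, acc
        if best is None:          # no conjunct improves the rule any further
            return condition
        condition = best

def sequential_covering(data, target):
    rules, remaining = [], list(data)
    while any(r["Class"] == target for r in remaining):
        rule = learn_one_rule(remaining, target)
        if not rule:              # could not improve on the empty rule: stop
            break
        rules.append((rule, target))
        remaining = [r for r in remaining if not matches(rule, r)]   # instance elimination
    return rules

toy = [{"Refund": "No",  "Status": "Single",   "Class": "Yes"},
       {"Refund": "No",  "Status": "Married",  "Class": "No"},
       {"Refund": "Yes", "Status": "Single",   "Class": "No"},
       {"Refund": "No",  "Status": "Divorced", "Class": "Yes"}]
print(sequential_covering(toy, "Yes"))
```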
Direct Method: RIPPER
- For a 2-class problem, choose one class as the positive class and the other as the negative class; learn rules for the positive class, and let the negative class be the default class.
- For a multi-class problem, order the classes by increasing class prevalence (the fraction of instances that belong to each class), learn the rule set for the smallest class first while treating the remaining classes as the negative class, and then repeat with the next smallest class as the positive class.

Growing a rule:
- Start from an empty rule and add conjuncts as long as they improve FOIL's information gain.
- Stop when the rule no longer covers negative examples.
- Prune the rule immediately using incremental reduced error pruning, with pruning measure v = (p - n) / (p + n), where p and n are the numbers of positive and negative examples covered by the rule in the validation set.
- Pruning method: delete any final sequence of conditions that maximizes v.

Building a rule set:
- Use the sequential covering algorithm: find the best rule that covers the current set of positive examples, then eliminate both the positive and negative examples covered by the rule.
- Each time a rule is added to the rule set, compute the new description length; stop adding rules when the new description length is d bits longer than the smallest description length obtained so far.

Optimizing the rule set:
- For each rule r in the rule set R, consider two alternatives: a replacement rule r* grown from scratch, and a revised rule r' obtained by adding conjuncts to extend r.
- Compare the rule set containing r against the rule sets containing r* and r', and choose the one that minimizes the description length (MDL principle).
- Repeat rule generation and rule optimization for the remaining positive examples.

Indirect Methods
A decision tree can be converted into a rule set, one rule per leaf. For a tree that splits on P, then on Q or R:
r1: (P=No, Q=No) ==> -
r2: (P=No, Q=Yes) ==> +
r3: (P=Yes, R=No) ==> +
r4: (P=Yes, R=Yes, Q=No) ==> -
r5: (P=Yes, R=Yes, Q=Yes) ==> +

Indirect Method: C4.5rules
Extract rules from an unpruned decision tree. For each rule r: A -> y:
- Consider an alternative rule r': A' -> y, where A' is obtained by removing one of the conjuncts in A.
- Compare the pessimistic error rate of r against all the r'.
- Prune r if one of the r' has a lower pessimistic error rate.
- Repeat until the generalization error can no longer be improved.
Instead of ordering individual rules, C4.5rules orders subsets of rules (class ordering):
- Each subset is a collection of rules with the same rule consequent (class).
- Compute the description length of each subset: Description length = L(error) + g * L(model), where g is a parameter that accounts for redundant attributes in the rule set (default value 0.5).

Example
The 20-animal vertebrate data set from earlier, extended with Lay Eggs and Have Legs attributes (Give Birth, Lay Eggs, Can Fly, Live in Water, Have Legs, plus the class label), is used to compare the models learned by C4.5, C4.5rules, and RIPPER.

C4.5 versus C4.5rules versus RIPPER
C4.5 decision tree:
- Give Birth? Yes -> Mammals
- Give Birth? No -> Live in Water? Yes -> Fishes; Sometimes -> Amphibians; No -> Can Fly? Yes -> Birds; No -> Reptiles

C4.5rules:
(Give Birth=No, Can Fly=Yes) -> Birds
(Give Birth=No, Live in Water=Yes) -> Fishes
(Give Birth=Yes) -> Mammals
(Give Birth=No, Can Fly=No, Live in Water=No) -> Reptiles
( ) -> Amphibians

RIPPER:
(Live in Water=Yes) -> Fishes
(Have Legs=No) -> Reptiles
(Give Birth=No, Can Fly=No, Live in Water=No) -> Reptiles
(Can Fly=Yes, Give Birth=No) -> Birds
( ) -> Mammals
C4.5 versus C4.5rules versus RIPPER: Confusion Matrices

C4.5 and C4.5rules (rows: actual class, columns: predicted class):

            Amphibians  Fishes  Reptiles  Birds  Mammals
Amphibians  2           0       0         0      0
Fishes      0           2       0         0      1
Reptiles    1           0       3         0      0
Birds       1           0       0         3      0
Mammals     0           0       1         0      6

RIPPER:

            Amphibians  Fishes  Reptiles  Birds  Mammals
Amphibians  0           0       0         0      2
Fishes      0           3       0         0      0
Reptiles    0           0       3         0      1
Birds       0           0       1         2      1
Mammals     0           2       1         0      4

Advantages of Rule-Based Classifiers
- As highly expressive as decision trees
- Easy to interpret
- Easy to generate
- Can classify new instances rapidly
- Performance comparable to decision trees

Instance-Based Classifiers
- Store the training records.
- Use the stored training records directly to predict the class label of unseen cases.
Examples:
- Rote-learner: memorizes the entire training data and classifies a record only if its attributes match one of the training examples exactly.
- Nearest neighbor: uses the k "closest" points (nearest neighbors) to perform classification.

Nearest Neighbor Classifiers
Basic idea: if it walks like a duck and quacks like a duck, then it's probably a duck. Compute the distance from the test record to the training records and choose the k "nearest" ones.

A nearest-neighbor classifier requires three things:
- The set of stored records
- A distance metric to compute the distance between records
- The value of k, the number of nearest neighbors to retrieve

To classify an unknown record:
- Compute its distance to the training records.
- Identify the k nearest neighbors.
- Use the class labels of the nearest neighbors to determine the class label of the unknown record (e.g., by taking a majority vote).

Definition of Nearest Neighbor
The k-nearest neighbors of a record x are the data points that have the k smallest distances to x (1-, 2- and 3-nearest neighborhoods grow around x accordingly).
A 1-nearest-neighbor classifier partitions the space into a Voronoi diagram: each training point owns the region of space closer to it than to any other training point.

Nearest Neighbor Classification
Compute the distance between two points, e.g. Euclidean distance:
  d(p, q) = sqrt( sum_i (p_i - q_i)^2 )
Determine the class from the nearest neighbor list:
- Take the majority vote of the class labels among the k nearest neighbors.
- Optionally weigh each vote according to distance, e.g. weight factor w = 1 / d^2.
A minimal implementation is sketched below.
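A minimal k-nearest-neighbor classifier following the steps above (Euclidean distance, then a majority or distance-weighted vote). It is written from the description here, with a small invented training set; it is not the book's implementation.

```python
import math
from collections import Counter

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def knn_classify(train, x, k=3, weighted=False):
    """train: list of (feature_vector, label). Majority vote, or votes weighted by 1/d^2."""
    neighbors = sorted(train, key=lambda item: euclidean(item[0], x))[:k]
    votes = Counter()
    for features, label in neighbors:
        d = euclidean(features, x)
        votes[label] += 1.0 / (d ** 2 + 1e-9) if weighted else 1.0   # epsilon avoids divide-by-zero
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 4.2), "B"), ((4.1, 3.9), "B")]
print(knn_classify(train, (1.1, 1.0), k=3))   # 'A'
```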
Choosing the value of k:
- If k is too small, the classifier is sensitive to noise points.
- If k is too large, the neighborhood may include points from other classes.

Scaling issues:
- Attributes may have to be scaled to prevent distance measures from being dominated by one attribute.
- Example: the height of a person may vary from 1.5m to 1.8m, the weight from 90lb to 300lb, and the income from $10K to $1M; without scaling, income dominates the distance.

Problem with the Euclidean measure:
- For high-dimensional data, the curse of dimensionality sets in.
- It can produce counter-intuitive results, e.g. with 12-bit vectors:
  111111111110 vs 011111111111: d = 1.4142
  100000000000 vs 000000000001: d = 1.4142
  The first pair agrees on ten attributes and the second on none, yet the distances are equal.
- One remedy: normalize the vectors to unit length.

k-NN classifiers are lazy learners:
- They do not build models explicitly, unlike eager learners such as decision tree induction and rule-based systems.
- Classifying unknown records is therefore relatively expensive.

Example: PEBLS
PEBLS: Parallel Exemplar-Based Learning System (Cost & Salzberg)
- Works with both continuous and nominal features. For nominal features, the distance between two values is computed using the modified value difference metric (MVDM).
- Each record is assigned a weight factor.
- Number of nearest neighbors: k = 1.

Distance between nominal attribute values (on the tax-evasion data set above):
  d(V1, V2) = sum_i | n1i/n1 - n2i/n2 |
where n1i (n2i) is the number of records with value V1 (V2) that belong to class i, and n1 (n2) is the total number of records with that value.

Class  Single  Married  Divorced        Class  Refund=Yes  Refund=No
Yes    2       0        1               Yes    0           3
No     2       4        1               No     3           4

d(Single, Married)   = | 2/4 - 0/4 | + | 2/4 - 4/4 | = 1
d(Single, Divorced)  = | 2/4 - 1/2 | + | 2/4 - 1/2 | = 0
d(Married, Divorced) = | 0/4 - 1/2 | + | 4/4 - 1/2 | = 1
d(Refund=Yes, Refund=No) = | 0/3 - 3/7 | + | 3/3 - 4/7 | = 6/7

Distance between records X and Y:
  d(X, Y) = wX * wY * sum_i d(Xi, Yi)^2
where wX = (number of times X is used for prediction) / (number of times X predicts correctly);
wX is approximately 1 if X makes accurate predictions most of the time, and wX > 1 if X is not reliable. The MVDM counts above are reproduced in the short sketch below.
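The MVDM values can be recovered with a few lines of counting code. The sketch below is an illustration (not PEBLS itself) and reproduces d(Single, Married) = 1 and d(Single, Divorced) = 0 from the (value, class) pairs of the ten-record data set.

```python
from collections import Counter

# (Marital Status, Class) pairs from the ten-record tax-evasion data set above.
pairs = [("Single", "No"), ("Married", "No"), ("Single", "No"), ("Married", "No"),
         ("Divorced", "Yes"), ("Married", "No"), ("Divorced", "No"), ("Single", "Yes"),
         ("Married", "No"), ("Single", "Yes")]

def mvdm(v1, v2, pairs, classes=("Yes", "No")):
    """Modified value difference metric: sum_i | n1i/n1 - n2i/n2 |."""
    c1 = Counter(cls for val, cls in pairs if val == v1)
    c2 = Counter(cls for val, cls in pairs if val == v2)
    n1, n2 = sum(c1.values()), sum(c2.values())
    return sum(abs(c1[c] / n1 - c2[c] / n2) for c in classes)

print(mvdm("Single", "Married", pairs))    # 1.0
print(mvdm("Single", "Divorced", pairs))   # 0.0
```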
Naïve Bayes Classifier
(Named after Thomas Bayes, 1702-1761.)

Motivating example: grasshoppers versus katydids. Insects are described by antenna length and abdomen length. With enough data we can build a histogram of, say, antenna length for each class, or summarize each histogram with a normal distribution for ease of visualization.

p(cj | d) = probability of class cj given that we have observed d.
- Antenna length is 3: P(Grasshopper | 3) = 10 / (10 + 2) = 0.833, P(Katydid | 3) = 2 / (10 + 2) = 0.167
- Antenna length is 7: P(Grasshopper | 7) = 3 / (3 + 9) = 0.250, P(Katydid | 7) = 9 / (3 + 9) = 0.750
- Antenna length is 5: P(Grasshopper | 5) = 6 / (6 + 6) = 0.500, P(Katydid | 5) = 6 / (6 + 6) = 0.500

Bayes Theorem
Given training data X, the posterior probability of a hypothesis H follows Bayes' theorem:
  P(H | X) = P(X | H) * P(H) / P(X)
Informally: posterior = likelihood x prior / evidence.
Predict that X belongs to class Ci iff P(Ci | X) is the highest among P(Ck | X) for all k classes.
Practical difficulty: this requires initial knowledge of many probabilities, at significant computational cost.

Example: is Chris male or female?
Name     Sex
Chris    Male
Claudia  Female
Chris    Female
Chris    Female
Alberto  Male
Karin    Female
Nina     Female
Sergio   Male
Chris    Female

p(male | Chris)   = P(Chris | male) * P(male) / P(Chris)     = (1/3 * 3/9) / (4/9) = 0.25
p(female | Chris) = P(Chris | female) * P(female) / P(Chris) = (3/6 * 6/9) / (4/9) = 0.75
=> Chris is classified as Female.

Bayes Classifier
A probabilistic framework for solving classification problems.
Conditional probability: P(C | A) = P(A, C) / P(A) and P(A | C) = P(A, C) / P(C).
Bayes theorem: P(C | A) = P(A | C) * P(C) / P(A).

Bayesian Classifiers
Consider each attribute and the class label as random variables. Given a record with attributes (A1, A2, ..., An), the goal is to predict the class C; specifically, we want the value of C that maximizes P(C | A1, A2, ..., An).
Can we estimate P(C | A1, A2, ..., An) directly from data? The approach:
- Compute the posterior probability P(C | A1, A2, ..., An) for all values of C using Bayes' theorem:
  P(C | A1 A2 ... An) = P(A1 A2 ... An | C) * P(C) / P(A1 A2 ... An)
- Choose the value of C that maximizes P(C | A1, A2, ..., An); this is equivalent to choosing the value of C that maximizes P(A1, A2, ..., An | C) * P(C).
- The remaining question is how to estimate P(A1, A2, ..., An | C).

Naïve Bayes Classifier
Assume independence among the attributes Ai when the class is given:
  P(A1, A2, ..., An | C) = P(A1 | Cj) * P(A2 | Cj) * ... * P(An | Cj)
Estimating the joint probability directly (e.g., for A1 = Height=170, A2 = Weight=50, ...) is difficult, but estimating each P(Ai | Cj) separately is much easier.
A new point is classified as Cj if P(Cj) * prod_i P(Ai | Cj) is maximal; the decision rule is sketched below.
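In code, the naïve Bayes decision rule is just the prior times a product of per-attribute conditionals. The sketch below assumes the probabilities are already available (the next section shows how to estimate them from data); the tiny usage example reuses the counts from the Chris example above.

```python
def naive_bayes_predict(record, priors, cond_prob):
    """priors: {class: P(class)};
    cond_prob: {class: {attribute: {value: P(value | class)}}}.
    Returns the class maximizing P(class) * prod_i P(a_i | class)."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for attr, value in record.items():
            score *= cond_prob[c][attr].get(value, 0.0)   # zero if value never seen with class c
        scores[c] = score
    return max(scores, key=scores.get)

priors = {"male": 3/9, "female": 6/9}
cond_prob = {"male": {"name": {"Chris": 1/3}}, "female": {"name": {"Chris": 3/6}}}
print(naive_bayes_predict({"name": "Chris"}, priors, cond_prob))   # 'female'
```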
How to Estimate Probabilities from Data?
(Attributes may be categorical or continuous; the class is discrete.) Using the tax-evasion data set above (class label Evade):
- Class prior: P(C) = Nc / N, e.g. P(No) = 7/10, P(Yes) = 3/10.
- For discrete attributes: P(Ai | Ck) = |Aik| / Nck, where |Aik| is the number of instances that have attribute value Ai and belong to class Ck.
  Examples: P(Status=Married | No) = 4/7, P(Refund=Yes | Yes) = 0.

Training data set (classes C1: buys_computer = yes, C2: buys_computer = no):

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31..40 high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31..40 low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31..40 medium  no       excellent      yes
31..40 high    yes      fair           yes
>40    medium  no       excellent      no

Naïve Bayesian Classifier: Example
Data sample X = (age<=30, income=medium, student=yes, credit_rating=fair).
Priors: P(buys_computer=yes) = 9/14, P(buys_computer=no) = 5/14.
Compute P(X | Ci) for each class:
  P(age<=30 | yes) = 2/9 = 0.222          P(age<=30 | no) = 3/5 = 0.6
  P(income=medium | yes) = 4/9 = 0.444    P(income=medium | no) = 2/5 = 0.4
  P(student=yes | yes) = 6/9 = 0.667      P(student=yes | no) = 1/5 = 0.2
  P(credit=fair | yes) = 6/9 = 0.667      P(credit=fair | no) = 2/5 = 0.4
  P(X | yes) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
  P(X | no)  = 0.6 x 0.4 x 0.2 x 0.4 = 0.019
  P(X | yes) * P(yes) = 0.044 x 9/14 = 0.028
  P(X | no) * P(no)   = 0.019 x 5/14 = 0.007
=> X belongs to class buys_computer = yes.

Example of Naïve Bayes Classifier (tax-evasion data)
Test record: X = (Refund=No, Married, Income=120K).
Conditional probabilities estimated from the data:
  P(Refund=Yes | No) = 3/7, P(Refund=No | No) = 4/7
  P(Refund=Yes | Yes) = 0, P(Refund=No | Yes) = 1
  P(Status=Single | No) = 2/7, P(Status=Divorced | No) = 1/7, P(Status=Married | No) = 4/7
  P(Status=Single | Yes) = 2/3, P(Status=Divorced | Yes) = 1/3, P(Status=Married | Yes) = 0
For taxable income (modeled with a normal distribution):
  class No: sample mean = 110, sample variance = 2975
  class Yes: sample mean = 90, sample variance = 25
P(X | Class=No)  = P(Refund=No | No) * P(Married | No) * P(Income=120K | No) = 4/7 x 4/7 x 0.0072 = 0.0024
P(X | Class=Yes) = P(Refund=No | Yes) * P(Married | Yes) * P(Income=120K | Yes) = 1 x 0 x 1.2e-9 = 0
Since P(X | No) * P(No) > P(X | Yes) * P(Yes), we have P(No | X) > P(Yes | X), so Class = No. This estimation and classification procedure is sketched in code below.
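The sketch below is an illustration (not the book's code): it counts the categorical conditionals, fits a normal distribution to Taxable Income within each class, and scores the test record X = (Refund=No, Married, Income=120K); class No wins, as in the worked example.

```python
import math

# Tax-evasion records: (Refund, Marital Status, Taxable Income in K, Class)
data = [("Yes","Single",125,"No"), ("No","Married",100,"No"), ("No","Single",70,"No"),
        ("Yes","Married",120,"No"), ("No","Divorced",95,"Yes"), ("No","Married",60,"No"),
        ("Yes","Divorced",220,"No"), ("No","Single",85,"Yes"), ("No","Married",75,"No"),
        ("No","Single",90,"Yes")]

def gaussian(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Estimate P(C) and P(categorical attribute | C) by counting, and fit a normal
# distribution to Taxable Income within each class.
prior, refund_p, status_p, income_stats = {}, {}, {}, {}
for c in ("No", "Yes"):
    rows = [r for r in data if r[3] == c]
    prior[c] = len(rows) / len(data)
    refund_p[c] = {v: sum(r[0] == v for r in rows) / len(rows) for v in ("Yes", "No")}
    status_p[c] = {v: sum(r[1] == v for r in rows) / len(rows)
                   for v in ("Single", "Married", "Divorced")}
    incomes = [r[2] for r in rows]
    mean = sum(incomes) / len(incomes)
    var = sum((x - mean) ** 2 for x in incomes) / (len(incomes) - 1)   # sample variance
    income_stats[c] = (mean, var)

# Score the test record X = (Refund=No, Married, Income=120K) for each class.
for c in ("No", "Yes"):
    mean, var = income_stats[c]
    likelihood = refund_p[c]["No"] * status_p[c]["Married"] * gaussian(120, mean, var)
    print(c, likelihood * prior[c])   # class "No" wins, as in the worked example above
```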
Example of Naïve Bayes Classifier (mammals versus non-mammals)
The 20-animal data set, relabeled into two classes (mammals and non-mammals) with attributes Give Birth, Can Fly, Live in Water, Have Legs. Test record A = (Give Birth=yes, Can Fly=no, Live in Water=yes, Have Legs=no).
With M = mammals and N = non-mammals:
  P(A | M) = 6/7 x 6/7 x 2/7 x 2/7 = 0.06
  P(A | N) = 1/13 x 10/13 x 3/13 x 4/13 = 0.0042
  P(A | M) * P(M) = 0.06 x 7/20 = 0.021
  P(A | N) * P(N) = 0.0042 x 13/20 = 0.0027
Since P(A | M) * P(M) > P(A | N) * P(N), the record is classified as a mammal.

How to Estimate Probabilities from Data? (continuous attributes)
- Discretize the range into bins: one ordinal attribute per bin (note that this violates the independence assumption).
- Two-way split: (A < v) or (A > v); choose only one of the two splits as the new attribute.
- Probability density estimation: assume the attribute follows a normal distribution, use the data to estimate the parameters of the distribution (e.g., mean and standard deviation), and then use the density to estimate the conditional probability P(Ai | c):
  P(Ai | cj) = 1 / ( sqrt(2*pi) * sigma_ij ) * exp( -(Ai - mu_ij)^2 / (2 * sigma_ij^2) )
  with one distribution for each (Ai, cj) pair.
- For (Income, Class=No) in the tax-evasion data: sample mean = 110, sample variance = 2975, so
  P(Income=120 | No) = 1 / ( sqrt(2*pi) * 54.54 ) * exp( -(120 - 110)^2 / (2 * 2975) ) = 0.0072

Naïve Bayes Classifier: Zero Probabilities
If one of the conditional probabilities is zero, the entire product becomes zero. Alternative probability estimates:
  Original:   P(Ai | C) = Nic / Nc
  Laplace:    P(Ai | C) = (Nic + 1) / (Nc + c)
  m-estimate: P(Ai | C) = (Nic + m*p) / (Nc + m)
where c is the number of classes, p is a prior probability, and m is a parameter. The sketch below compares the three estimates on a zero-count case.
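A small helper showing how the three estimates differ on the zero-count case P(Marital Status=Married | Yes) from the earlier example (Nc = 3 records of class Yes, none of them Married, c = 2 classes); the values of m and the prior p are illustrative assumptions.

```python
def original_estimate(n_ic, n_c):
    return n_ic / n_c

def laplace_estimate(n_ic, n_c, c):
    return (n_ic + 1) / (n_c + c)

def m_estimate(n_ic, n_c, m, p):
    return (n_ic + m * p) / (n_c + m)

# P(Marital Status = Married | Yes): 0 of the 3 "Yes" records are Married.
print(original_estimate(0, 3))          # 0.0 -> wipes out the whole product
print(laplace_estimate(0, 3, c=2))      # 0.2
print(m_estimate(0, 3, m=3, p=1/3))     # ~0.167 (with illustrative m and prior p)
```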
Naïve Bayes (Summary)
- Robust to isolated noise points.
- Handles missing values by ignoring the instance during the probability estimate calculations.
- Robust to irrelevant attributes.
- The independence assumption may not hold for some attributes; in that case use other techniques such as Bayesian Belief Networks (BBN).

Naïve Bayesian Classifier: Comments
Advantages:
- Easy to implement.
- Good results obtained in most cases.
Disadvantages:
- Assumes class-conditional independence, whereas in practice dependencies exist among variables. E.g., in hospital data, patient profile (age, family history, ...), symptoms (fever, cough, ...) and disease (lung cancer, diabetes, ...) are interdependent, and these dependencies cannot be modeled by a naïve Bayesian classifier.
How to deal with these dependencies? Bayesian Belief Networks.

Bayesian Networks
A Bayesian belief network allows a subset of the variables to be conditionally independent. It is a graphical model of causal relationships:
- Represents dependencies among the variables.
- Gives a specification of the joint probability distribution.
- Nodes are random variables; links denote dependency. For example, if X and Y are the parents of Z, and Y is the parent of P, there is no direct dependency between Z and P. The graph has no loops or cycles.

Bayesian Belief Network: An Example
Family History (FH) and Smoker (S) are parents of LungCancer (LC); Smoker is also a parent of Emphysema; LungCancer is a parent of PositiveXRay; LungCancer and Emphysema are parents of Dyspnea. Each variable has one conditional probability table (CPT) giving its probability for every combination of parent values. The CPT for LungCancer:

      (FH, S)  (FH, ~S)  (~FH, S)  (~FH, ~S)
LC    0.8      0.5       0.7       0.1
~LC   0.2      0.5       0.3       0.9

The probability of a particular combination of values is derived from the CPTs as:
  P(x1, ..., xn) = prod_{i=1..n} P(xi | Parents(Xi))

Classification: A Mathematical Mapping
Classification predicts categorical class labels. E.g., personal homepage classification: xi = (x1, x2, x3, ...), yi = +1 or -1, where x1 = count of the word "homepage", x2 = count of the word "welcome", and so on. Mathematically, x is in X = R^n, y is in Y = {+1, -1}, and we want a function f: X -> Y.

Linear Classification
In a binary classification problem, a line (hyperplane) can separate the data: points above the line belong to class 'x' and points below it belong to class 'o'. Examples of such classifiers: perceptron, SVM, probabilistic classifiers.

Neural Networks
- An analogy to biological systems (indeed a great example of a good learning system).
- Massive parallelism allows for computational efficiency.
- The first learning algorithm came in 1959 (Rosenblatt): if a target output value is provided for a single neuron with fixed inputs, one can incrementally change the weights to learn to produce these outputs using the perceptron learning rule.
- In the biological analogy, dendrites carry inputs x1, x2, ..., xn with weights w1, w2, ..., wn into the cell body, which sums them and sends the result down the axon; an artificial neuron is a binary classifier built from such a weighted sum and a threshold activation function.

Activation Function
The neuron unit computes u = sum_n w_n * x_n and passes it through an activation function, for example:
- Step (threshold) activation
- Sigmoid: y = 1 / (1 + e^(-u))
- Linear: y = u
These activation functions are illustrated in the sketch below.
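The activation functions above, and the weighted sum they are applied to, can be written as a short sketch; the example weights and inputs are invented for illustration.

```python
import math

def step(u, threshold=0.0):
    return 1.0 if u >= threshold else 0.0      # threshold (step) activation

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))          # smooth, differentiable

def linear(u):
    return u

def neuron_output(weights, inputs, activation=sigmoid):
    u = sum(w * x for w, x in zip(weights, inputs))   # u = sum_n w_n * x_n
    return activation(u)

print(neuron_output([0.5, -0.25], [1.0, 2.0]))          # sigmoid(0.0) = 0.5
print(neuron_output([0.5, -0.25], [1.0, 2.0], step))    # 1.0
```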
Neural Networks: Neuron Model
The general artificial neuron model has five components:
1. A set of inputs, xi.
2. A set of weights, wi.
3. A bias, u.
4. An activation function, f.
5. The neuron output, y.
The input vector x and the weight vector w are combined into a weighted sum, which the activation function maps to the output y.

Artificial Neural Networks (ANN)
Consider three binary inputs X1, X2, X3 and a binary output Y:

X1  X2  X3  Y
1   0   0   0
1   0   1   1
1   1   0   1
1   1   1   1
0   0   1   0
0   1   0   0
0   1   1   1
0   0   0   0

The output Y is 1 if at least two of the three inputs are equal to 1. This black box can be modeled as a single node with weight 0.3 on each input and threshold t = 0.4:
  y_hat = 1 if 0.3*X1 + 0.3*X2 + 0.3*X3 - 0.4 > 0, and 0 otherwise.

Perceptron Model
The model is an assembly of inter-connected nodes and weighted links. The output node sums up each of its input values according to the weights of its links and compares the sum against a threshold t:
  Y = I( sum_i wi*Xi - t > 0 )   or   Y = sign( sum_i wi*Xi - t )

General Structure of ANN
An ANN has an input layer, one or more hidden layers, and an output layer. Each neuron i computes a weighted sum Si of its inputs (I1, I2, I3, ... with weights wi1, wi2, wi3, ...) and applies an activation function g(Si) to produce its output Oi, compared against a threshold t. In a layered (feed-forward) network, inputs x1, ..., xn feed the hidden neurons, which feed the output neurons producing y1, ..., ym. Training an ANN means learning the weights of the neurons.

Perceptron Learning Algorithm
1. Let D = {(xi, yi) | i = 1, 2, ..., N} be the training instances.
2. Initialize the weight vector w(0) with random values.
3. repeat
4.   for each instance (xi, yi) in D
5.     compute the prediction y_hat_i
6.     for each weight wj
7.       wj(k+1) = wj(k) + lambda * (yi - y_hat_i) * xij
8.     end for
9.   end for
10. until the stopping condition is met
(lambda is the learning rate; yi - y_hat_i is the prediction error.)

Algorithm for Learning ANN
- Initialize the weights (w0, w1, ..., wk).
- Adjust the weights so that the output of the ANN is consistent with the class labels of the training examples.
- Objective function: SSE, E = sum_i ( yi - y_hat_i )^2.
- Find the weights wi that minimize the objective function, e.g. with the backpropagation algorithm.

Error Back-Propagation (BP) Training
For each training example, the input x is propagated forward through the hidden neurons to the output neuron, the error term e = 1/2 * (y - y_hat)^2 is computed against the training target, and the error is propagated backwards to update the output-layer and hidden-layer weights. The perceptron update rule above is implemented in the sketch that follows.
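The perceptron pseudocode translates almost line for line into the sketch below, which learns the "at least two of three inputs" function from the earlier truth table. The learning rate and the fixed number of epochs are assumptions made for the example.

```python
def predict(w, bias, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0

def train_perceptron(examples, lam=0.1, epochs=100):
    """examples: list of (input_vector, target). Perceptron update: w <- w + lam*(y - y_hat)*x."""
    w, bias = [0.0] * len(examples[0][0]), 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = y - predict(w, bias, x)          # -1, 0 or +1
            w = [wi + lam * err * xi for wi, xi in zip(w, x)]
            bias += lam * err
    return w, bias

# "Output Y is 1 if at least two of the three inputs are equal to 1."
data = [((1,0,0),0), ((1,0,1),1), ((1,1,0),1), ((1,1,1),1),
        ((0,0,1),0), ((0,1,0),0), ((0,1,1),1), ((0,0,0),0)]
w, b = train_perceptron(data)
print(all(predict(w, b, x) == y for x, y in data))   # True: the function is linearly separable
```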
Support Vector Machines
Find a linear hyperplane (decision boundary) that separates the data. Many separating hyperplanes are possible (B1, B2, ...): which one is better, and how do we define "better"?

SVMs answer this with the margin: for a boundary B1, the parallel hyperplanes b11 and b12 obtained by pushing B1 outward until they touch the nearest points of each class define the margin. Find the hyperplane that maximizes the margin; by this criterion B1 is better than B2.

For data points x:
  decision boundary:   w . x + b = 0
  margin hyperplanes:  w . x + b = +1 and w . x + b = -1
  f(x) = 1 if w . x + b >= 1, and -1 if w . x + b <= -1
  Margin = 2 / ||w||^2

Learning the model: we want to maximize the margin 2 / ||w||^2, which is equivalent to minimizing
  L(w) = ||w||^2 / 2
subject to the constraints
  f(xi) = 1 if w . xi + b >= 1, and -1 if w . xi + b <= -1.
This is a constrained optimization problem, solved with numerical approaches (e.g., quadratic programming).

What if the problem is not linearly separable? Introduce slack variables xi_i and minimize
  L(w) = ||w||^2 / 2 + C * sum_{i=1..N} xi_i^k
subject to
  f(xi) = 1 if w . xi + b >= 1 - xi_i, and -1 if w . xi + b <= -1 + xi_i.

Nonlinear Support Vector Machines
What if the decision boundary itself is not linear? Transform the data into a higher-dimensional space in which a linear boundary can separate the classes.

Ensemble Methods
- Construct a set of classifiers from the training data.
- Predict the class label of previously unseen records by aggregating the predictions made by multiple classifiers.

General idea: from the original training data D, create multiple data sets D1, D2, ..., Dt, build a classifier Ci on each, and combine the classifiers into C*.

Why does it work? Suppose there are 25 base classifiers, each with error rate eps = 0.35, and assume the classifiers are independent. The ensemble makes a wrong prediction only if a majority (13 or more) of the base classifiers are wrong:
  sum_{i=13..25} C(25, i) * eps^i * (1 - eps)^(25 - i) = 0.06
A quick numerical check of this figure follows.
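The 0.06 figure can be verified in a couple of lines, assuming 25 independent base classifiers with error rate 0.35 (the ensemble errs when 13 or more of them err):

```python
from math import comb

eps, T = 0.35, 25
ensemble_error = sum(comb(T, i) * eps**i * (1 - eps)**(T - i) for i in range(13, T + 1))
print(round(ensemble_error, 3))   # ~0.06
```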
Examples of Ensemble Methods
How to generate an ensemble of classifiers?
- By manipulating the training set (e.g., bagging, boosting)
- By manipulating the input features
- By manipulating the learning algorithm
- By manipulating the class labels

Bagging
Sampling with replacement: each bootstrap sample is drawn from the original data with replacement, e.g.

Original Data    1  2  3  4  5  6  7  8  9  10
Bagging Round 1  7  8  10 8  2  5  10 10 5  9
Bagging Round 2  1  4  9  1  2  3  2  7  3  2
Bagging Round 3  1  8  5  10 5  5  9  6  3  7

Build a classifier on each bootstrap sample. Each record has probability 1 - (1 - 1/n)^n (about 0.632 for large n) of being included in a given bootstrap sample of size n.

Boosting
An iterative procedure that adaptively changes the distribution of the training data by focusing more on previously misclassified records.
- Initially, all N records are assigned equal weights.
- Unlike bagging, the weights may change at the end of each boosting round: records that are wrongly classified have their weights increased, and records that are classified correctly have their weights decreased.

Original Data     1  2  3  4  5  6  7  8  9  10
Boosting Round 1  7  3  2  8  7  9  4  10 6  3
Boosting Round 2  5  4  9  4  2  5  1  7  4  2
Boosting Round 3  4  4  8  10 4  5  4  6  3  4

Example 4 is hard to classify; its weight is increased, so it is more likely to be chosen again in subsequent rounds.

Example: AdaBoost
Base classifiers: C1, C2, ..., CT.
Error rate of classifier Ci:
  eps_i = (1/N) * sum_{j=1..N} wj * delta( Ci(xj) != yj )
Importance of a classifier:
  alpha_i = (1/2) * ln( (1 - eps_i) / eps_i )
Weight update after round j:
  wi(j+1) = ( wi(j) / Zj ) * exp(-alpha_j) if Cj(xi) = yi, and ( wi(j) / Zj ) * exp(+alpha_j) if Cj(xi) != yi,
  where Zj is the normalization factor.
If any intermediate round produces an error rate higher than 50%, the weights are reverted to 1/N and the resampling procedure is repeated.
Classification:
  C*(x) = argmax_y sum_{j=1..T} alpha_j * delta( Cj(x) = y )

Illustrating AdaBoost
Initially every data point has weight 0.1. In round 1 the base classifier B1 misclassifies a block of points, whose weights rise to 0.4623 while the rest drop to 0.0094 (alpha = 1.9459). Round 2 (B2) and round 3 (B3) continue to shift the weights (e.g., 0.0009 / 0.3037 / 0.0422 with alpha = 2.9323, then 0.0276 / 0.1819 / 0.0038 with alpha = 3.8744), and the overall classifier combines B1, B2 and B3 weighted by their alphas.
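A small sketch of the AdaBoost bookkeeping described above: computing a classifier's importance alpha from its weighted error and updating the record weights. The toy labels and predictions are invented for illustration; they are not the data from the figure.

```python
import math

def classifier_importance(error):
    return 0.5 * math.log((1 - error) / error)

def update_weights(weights, predictions, labels, alpha):
    """Increase the weights of misclassified records, decrease those of correct ones, then normalize."""
    new_w = [w * math.exp(alpha if p != y else -alpha)
             for w, p, y in zip(weights, predictions, labels)]
    z = sum(new_w)                       # normalization factor Z_j
    return [w / z for w in new_w]

# Toy round: 10 records with equal weights, one of them misclassified.
weights     = [0.1] * 10
labels      = [1, 1, 1, 1, 1, -1, -1, -1, -1, -1]
predictions = [1, 1, 1, 1, 1, -1, -1, -1, -1, 1]   # the last record is wrong
error = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
alpha = classifier_importance(error)               # ~1.0986 for error = 0.1
print(alpha, update_weights(weights, predictions, labels, alpha))
```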