Data Mining Association Analysis: Basic Concepts and Algorithms
Lecture Notes for Chapter 6, Introduction to Data Mining, by Tan, Steinbach, Kumar

Association Rule Mining

Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction.

Market-basket transactions:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Example of association rules:
  {Diaper} → {Beer}
  {Milk, Bread} → {Eggs, Coke}
  {Beer, Bread} → {Milk}

Implication means co-occurrence, not causality!

Definition: Frequent Itemset

Itemset
– A collection of one or more items, e.g. {Milk, Bread, Diaper}
– k-itemset: an itemset that contains k items

Support count (σ)
– Frequency of occurrence of an itemset
– E.g. σ({Milk, Bread, Diaper}) = 2

Support (s)
– Fraction of transactions that contain an itemset
– E.g. s({Milk, Bread, Diaper}) = 2/5

Frequent itemset
– An itemset whose support is greater than or equal to a minsup threshold

Definition: Association Rule

Association rule
– An implication expression of the form X → Y, where X and Y are itemsets
– Example: {Milk, Diaper} → {Beer}

Rule evaluation metrics
– Support (s): fraction of transactions that contain both X and Y
– Confidence (c): measures how often items in Y appear in transactions that contain X

Example, for {Milk, Diaper} → {Beer} on the transactions above:
  s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
  c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 ≈ 0.67

Association Rule Mining Task

Given a set of transactions T, the goal of association rule mining is to find all rules having
– support ≥ minsup threshold
– confidence ≥ minconf threshold

Brute-force approach:
– List all possible association rules
– Compute the support and confidence for each rule
– Prune rules that fail the minsup and minconf thresholds
– Computationally prohibitive!

Mining Association Rules

Example of rules from the transactions above:
  {Milk, Diaper} → {Beer}   (s=0.4, c=0.67)
  {Milk, Beer} → {Diaper}   (s=0.4, c=1.0)
  {Diaper, Beer} → {Milk}   (s=0.4, c=0.67)
  {Beer} → {Milk, Diaper}   (s=0.4, c=0.67)
  {Diaper} → {Milk, Beer}   (s=0.4, c=0.5)
  {Milk} → {Diaper, Beer}   (s=0.4, c=0.5)

Observations:
– All of the above rules are binary partitions of the same itemset {Milk, Diaper, Beer}
– Rules originating from the same itemset have identical support but can have different confidence
– Thus, we may decouple the support and confidence requirements
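The support and confidence values above are easy to reproduce directly. The following is a minimal Python sketch (not part of the original slides): the helper names support_count, support, and confidence are illustrative, and the transaction list is the five market-basket transactions from the table.

```python
# Minimal sketch: support and confidence on the market-basket example.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(X): number of transactions that contain every item of X."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t)

def support(itemset, transactions):
    """s(X) = sigma(X) / |T|."""
    return support_count(itemset, transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """c(X -> Y) = sigma(X union Y) / sigma(X)."""
    x = set(antecedent)
    return support_count(x | set(consequent), transactions) / support_count(x, transactions)

# Rule {Milk, Diaper} -> {Beer}
print(support({"Milk", "Diaper", "Beer"}, transactions))       # 0.4
print(confidence({"Milk", "Diaper"}, {"Beer"}, transactions))   # 0.666...
```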
Mining Association Rules

Two-step approach:
1. Frequent itemset generation – generate all itemsets whose support ≥ minsup
2. Rule generation – generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

Frequent itemset generation is still computationally expensive.

Frequent Itemset Generation

(Figure: the itemset lattice over five items A–E, from the null set at the top down to ABCDE at the bottom.)

Given d items, there are 2^d possible candidate itemsets.

Brute-force approach:
– Each itemset in the lattice is a candidate frequent itemset
– Count the support of each candidate by scanning the database, matching each transaction against every candidate
– Complexity ~ O(NMw), where N is the number of transactions, M the number of candidates, and w the maximum transaction width
– Expensive since M = 2^d!

Reducing Number of Candidates

Apriori principle:
– If an itemset is frequent, then all of its subsets must also be frequent

The Apriori principle holds due to the following property of the support measure:

  ∀X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)

– The support of an itemset never exceeds the support of its subsets
– This is known as the anti-monotone property of support

Illustrating Apriori Principle

(Figure: the itemset lattice with one itemset marked "found to be infrequent"; all of its supersets are pruned.)

With minimum support = 3 on the market-basket transactions above:

Items (1-itemsets):
  Bread 4, Coke 2, Milk 4, Beer 3, Diaper 4, Eggs 1

Pairs (2-itemsets), with no need to generate candidates involving Coke or Eggs:
  {Bread, Milk} 3, {Bread, Beer} 2, {Bread, Diaper} 3, {Milk, Beer} 2, {Milk, Diaper} 3, {Beer, Diaper} 3

Triplets (3-itemsets):
  {Bread, Milk, Diaper} 3

If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 6 + 15 + 20 = 41 candidates.
With support-based pruning: 6 + 6 + 1 = 13 candidates.

Apriori Algorithm

Method:
– Let k = 1
– Generate frequent itemsets of length 1
– Repeat until no new frequent itemsets are identified:
  – Generate length-(k+1) candidate itemsets from length-k frequent itemsets
  – Prune candidate itemsets containing subsets of length k that are infrequent
  – Count the support of each candidate by scanning the DB
  – Eliminate candidates that are infrequent, leaving only those that are frequent

Reducing Number of Comparisons

Candidate counting:
– Scan the database of transactions to determine the support of each candidate itemset
– To reduce the number of comparisons, store the candidates in a hash structure: instead of matching each transaction against every candidate, match it only against the candidates contained in the hashed buckets

(Figure: each of the N transactions is matched against a hash structure with k buckets holding the candidate itemsets.)
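The pieces of the last few slides can be put together in a short level-wise miner. This is a minimal Python sketch rather than the book's pseudocode: the names apriori and minsup_count are mine, candidates are generated by a simple pairwise join, and the hash-tree counting step is replaced by a plain scan of the transaction list.

```python
from itertools import combinations

def apriori(transactions, minsup_count):
    """Level-wise frequent itemset mining. transactions: list of item sets;
    minsup_count: absolute support threshold. Returns {itemset: count}."""
    items = {i for t in transactions for i in t}
    counts = {frozenset([i]): sum(1 for t in transactions if i in t) for i in items}
    frequent = {s: c for s, c in counts.items() if c >= minsup_count}
    all_frequent = dict(frequent)
    k = 1
    while frequent:
        prev = list(frequent)
        # Candidate generation: join pairs of frequent k-itemsets, then prune
        # any candidate that has an infrequent k-subset (Apriori principle).
        candidates = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                union = prev[i] | prev[j]
                if len(union) == k + 1 and all(
                        frozenset(sub) in frequent for sub in combinations(union, k)):
                    candidates.add(union)
        # Support counting: one scan of the database per level (no hash tree here).
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: n for s, n in counts.items() if n >= minsup_count}
        all_frequent.update(frequent)
        k += 1
    return all_frequent
```

Applied to the five market-basket transactions from the earlier sketch with minsup_count = 3, this returns nine frequent itemsets, including frozenset({'Bread', 'Milk', 'Diaper'}) with count 3.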
Factors Affecting Complexity

Choice of minimum support threshold
– Lowering the support threshold results in more frequent itemsets
– This may increase the number of candidates and the maximum length of frequent itemsets

Dimensionality (number of items) of the data set
– More space is needed to store the support count of each item
– If the number of frequent items also increases, both computation and I/O costs may also increase

Size of database
– Since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions

Average transaction width
– Transaction width increases with denser data sets
– This may increase the maximum length of frequent itemsets (the number of subsets in a transaction increases with its width)

FP-growth Algorithm

Use a compressed representation of the database in the form of an FP-tree. Once an FP-tree has been constructed, FP-growth uses a recursive divide-and-conquer approach to mine the frequent itemsets.

FP-tree Construction

Transaction database:

TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}

After reading TID=1, the tree contains the single path null → A:1 → B:1.
After reading TID=2, a second path null → B:1 → C:1 → D:1 is added alongside null → A:1 → B:1.

After reading all ten transactions, the FP-tree is (children indented under their parent):

null
  A:7
    B:5
      C:3
        D:1
      D:1
    C:1
      D:1
        E:1
    D:1
      E:1
  B:3
    C:3
      D:1
      E:1

A header table with one entry per item (A, B, C, D, E) links all nodes carrying the same item; these pointers are used to assist frequent itemset generation. (A small Python construction sketch appears after the rule-generation slides below.)

FP-growth

Conditional pattern base for D:
  P = {(A:1,B:1,C:1), (A:1,B:1), (A:1,C:1), (A:1), (B:1,C:1)}

Recursively apply FP-growth on P.

Frequent itemsets found (with support > 1): AD, BD, CD, ACD, BCD.

Rule Generation

Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L – f satisfies the minimum confidence requirement.

If {A,B,C,D} is a frequent itemset, the candidate rules are:
  ABC → D, ABD → C, ACD → B, BCD → A,
  AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB,
  A → BCD, B → ACD, C → ABD, D → ABC

If |L| = k, then there are 2^k – 2 candidate association rules (ignoring L → ∅ and ∅ → L).

How to efficiently generate rules from frequent itemsets?
– In general, confidence does not have an anti-monotone property: c(ABC → D) can be larger or smaller than c(AB → D)
– But the confidence of rules generated from the same itemset does have an anti-monotone property, e.g. for L = {A,B,C,D}:
    c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
– Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule

Rule Generation for Apriori Algorithm

(Figure: the lattice of rules for {A,B,C,D}, from ABCD → {} at the top down to single-item antecedents. If CD → AB is found to be a low-confidence rule, all rules below it in the lattice, i.e. those whose consequent contains {A,B}, are pruned.)

A candidate rule is generated by merging two rules that share the same prefix in the rule consequent:
– join(CD → AB, BD → AC) would produce the candidate rule D → ABC
– Prune rule D → ABC if its subset AD → BC does not have high confidence
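The consequent-growing procedure just described can be sketched as follows. This is an illustrative Python sketch, not the book's ap-genrules: generate_rules and its arguments are hypothetical names, the support counts are assumed to be given, and the join of surviving consequents is simplified (it merges any two that differ by one item rather than checking every subset of the merged consequent).

```python
from itertools import combinations

def generate_rules(itemset, support_count, minconf):
    """Grow rule consequents one item at a time; once a rule fails minconf,
    no consequent extending it is generated (anti-monotone pruning)."""
    itemset = frozenset(itemset)
    rules = []
    consequents = [frozenset([i]) for i in itemset]  # 1-item consequents
    while consequents:
        surviving = []
        for cons in consequents:
            antecedent = itemset - cons
            if not antecedent:
                continue
            conf = support_count[itemset] / support_count[antecedent]
            if conf >= minconf:
                rules.append((set(antecedent), set(cons), conf))
                surviving.append(cons)
        # Merge surviving consequents that differ by one item (Apriori-style join).
        consequents = list({a | b for a, b in combinations(surviving, 2)
                            if len(a | b) == len(a) + 1})
    return rules

# Support counts taken from the five market-basket transactions:
counts = {frozenset(s): c for s, c in [
    (("Milk",), 4), (("Diaper",), 4), (("Beer",), 3),
    (("Milk", "Diaper"), 3), (("Milk", "Beer"), 2), (("Diaper", "Beer"), 3),
    (("Milk", "Diaper", "Beer"), 2),
]}
for lhs, rhs, conf in generate_rules({"Milk", "Diaper", "Beer"}, counts, minconf=0.6):
    print(sorted(lhs), "->", sorted(rhs), round(conf, 2))
```

With minconf = 0.6 this prints the four rules from the {Milk, Diaper, Beer} slide that meet the threshold and skips the two with confidence 0.5.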
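The FP-tree construction described earlier can also be sketched in a few lines. This is a minimal sketch with invented names (FPNode, build_fp_tree); it orders items by decreasing frequency, which is the common implementation choice, whereas the slide's figure keeps the fixed A–E ordering, so the resulting tree layout differs slightly while encoding the same transactions.

```python
class FPNode:
    """One FP-tree node: an item label, a count, a parent link, and children."""
    def __init__(self, item, parent=None):
        self.item = item
        self.count = 0
        self.parent = parent
        self.children = {}

def build_fp_tree(transactions, min_count=1):
    """Two-pass construction: count item frequencies, then insert each
    transaction as a path from the root, sharing prefixes where possible."""
    freq = {}
    for t in transactions:
        for item in t:
            freq[item] = freq.get(item, 0) + 1
    # Rank kept items by decreasing frequency (ties broken by label).
    kept = sorted((i for i in freq if freq[i] >= min_count),
                  key=lambda i: (-freq[i], i))
    rank = {item: r for r, item in enumerate(kept)}
    root = FPNode(None)
    header = {}  # header table: item -> list of tree nodes carrying that item
    for t in transactions:
        path = sorted((i for i in t if i in rank), key=rank.get)
        node = root
        for item in path:
            if item not in node.children:
                child = FPNode(item, parent=node)
                node.children[item] = child
                header.setdefault(item, []).append(child)
            node = node.children[item]
            node.count += 1
    return root, header

# The 10-transaction database from the slides:
transactions = [set("AB"), set("BCD"), set("ACDE"), set("ADE"), set("ABC"),
                set("ABCD"), set("BC"), set("ABC"), set("ABD"), set("BCE")]
root, header = build_fp_tree(transactions)
print(sum(n.count for n in header["D"]))  # 5: D occurs in five transactions
```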
Effect of Support Distribution

Many real data sets have a skewed support distribution. (Figure: support distribution of a retail data set.)

How to set an appropriate minsup threshold?
– If minsup is set too high, we could miss itemsets involving interesting rare items (e.g., expensive products)
– If minsup is set too low, mining becomes computationally expensive and the number of itemsets is very large

Using a single minimum support threshold may not be effective.

Multiple Minimum Support

How to apply multiple minimum supports?
– MS(i): minimum support for item i
– E.g.: MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
– MS({Milk, Broccoli}) = min(MS(Milk), MS(Broccoli)) = 0.1%
– Challenge: support is no longer anti-monotone
  – Suppose Support(Milk, Coke) = 1.5% and Support(Milk, Coke, Broccoli) = 0.5%
  – Then {Milk, Coke} is infrequent but {Milk, Coke, Broccoli} is frequent

Multiple Minimum Support (Liu 1999)

Order the items according to their minimum support (in ascending order):
– E.g.: MS(Milk) = 5%, MS(Coke) = 3%, MS(Broccoli) = 0.1%, MS(Salmon) = 0.5%
– Ordering: Broccoli, Salmon, Coke, Milk

Need to modify Apriori such that:
– L1: set of frequent items
– F1: set of items whose support is ≥ MS(1), where MS(1) = min_i MS(i)
– C2: candidate itemsets of size 2 are generated from F1 instead of L1

Modifications to Apriori:
– In traditional Apriori, a candidate (k+1)-itemset is generated by merging two frequent itemsets of size k, and the candidate is pruned if it contains any infrequent subset of size k
– The pruning step has to be modified: prune only if the subset contains the first item
  – E.g.: candidate = {Broccoli, Coke, Milk} (ordered according to minimum support)
  – Suppose {Broccoli, Coke} and {Broccoli, Milk} are frequent but {Coke, Milk} is infrequent
  – The candidate is not pruned, because {Coke, Milk} does not contain the first item, i.e., Broccoli
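The per-item threshold idea is simple to express in code. Below is a minimal Python sketch (names and the helper functions are mine, not from the slides or from Liu 1999) showing that an itemset's threshold is the minimum of its items' MS values, which reproduces the Milk/Coke/Broccoli example and the ascending-MS ordering used above.

```python
# Per-item minimum supports from the slide's example (as fractions).
MS = {"Milk": 0.05, "Coke": 0.03, "Broccoli": 0.001, "Salmon": 0.005}

def itemset_minsup(itemset):
    """MS(X) = min of the per-item thresholds of the items in X."""
    return min(MS[i] for i in itemset)

def is_frequent(itemset, support):
    """An itemset is frequent if its support meets its own threshold MS(X)."""
    return support >= itemset_minsup(itemset)

# Supports taken from the slide's example: anti-monotonicity no longer holds.
print(is_frequent({"Milk", "Coke"}, 0.015))             # False: 1.5% < MS = 3%
print(is_frequent({"Milk", "Coke", "Broccoli"}, 0.005))  # True: 0.5% >= MS = 0.1%

# Items ordered by ascending MS, as required by the Liu (1999) modification:
print(sorted(MS, key=MS.get))  # ['Broccoli', 'Salmon', 'Coke', 'Milk']
```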