Data Mining: Association Analysis
Introduction to Data Mining
by Tan, Steinbach, Kumar
© Tan, Steinbach, Kumar, 4/18/2004
Association Rule Mining

Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction.

Market-Basket transactions:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Example of Association Rules:
{Diaper} → {Beer}
{Milk, Bread} → {Eggs, Coke}
{Beer, Bread} → {Milk}

Implication means co-occurrence, not causality!
Definition: Frequent Itemset

Itemset
– A collection of one or more items
  Example: {Milk, Bread, Diaper}
– k-itemset: an itemset that contains k items

Support count (σ)
– Frequency of occurrence of an itemset
– E.g. σ({Milk, Bread, Diaper}) = 2

Support (s)
– Fraction of transactions that contain an itemset
– E.g. s({Milk, Bread, Diaper}) = 2/5

Frequent Itemset
– An itemset whose support is greater than or equal to a minsup threshold

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
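As a concrete illustration of these definitions, the following minimal Python sketch computes the support count and support of an itemset over the market-basket transactions above. The transaction list and helper names are illustrative, not part of the original slides.

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(itemset): number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset, transactions):
    """s(itemset): fraction of transactions containing the itemset."""
    return support_count(itemset, transactions) / len(transactions)

print(support_count({"Milk", "Bread", "Diaper"}, transactions))  # 2
print(support({"Milk", "Bread", "Diaper"}, transactions))        # 0.4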
Definition: Association Rule

Association Rule
– An implication expression of the form X → Y, where X and Y are itemsets
– Example: {Milk, Diaper} → {Beer}

Rule Evaluation Metrics
– Support (s): fraction of transactions that contain both X and Y
– Confidence (c): measures how often items in Y appear in transactions that contain X

Example: {Milk, Diaper} → {Beer}

s = σ({Milk, Diaper, Beer}) / |T| = 2/5 = 0.4

c = σ({Milk, Diaper, Beer}) / σ({Milk, Diaper}) = 2/3 = 0.67

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
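A short sketch of the two metrics, reusing the illustrative support_count helper and transaction list defined above (again an assumption for illustration, not code from the slides):

def rule_metrics(X, Y, transactions):
    """Return (support, confidence) of the rule X -> Y."""
    s = support_count(X | Y, transactions) / len(transactions)
    c = support_count(X | Y, transactions) / support_count(X, transactions)
    return s, c

print(rule_metrics({"Milk", "Diaper"}, {"Beer"}, transactions))  # (0.4, 0.666...)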
Association Rule Mining Task

Given a set of transactions T, the goal of association rule mining is to find all rules having
– support ≥ minsup threshold
– confidence ≥ minconf threshold

Brute-force approach:
– List all possible association rules
– Compute the support and confidence for each rule
– Prune rules that fail the minsup and minconf thresholds
⇒ Computationally prohibitive!
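To make the cost concrete, here is a deliberately naive sketch of the brute-force approach (enumerate every rule, then filter), reusing the support_count helper from above. Names are illustrative; the nested enumeration over all itemsets and all splits is exactly what makes this prohibitive as the number of items grows.

from itertools import combinations

def brute_force_rules(transactions, minsup, minconf):
    """Naive enumeration of all rules X -> Y with X, Y non-empty and disjoint."""
    items = sorted(set().union(*transactions))
    rules = []
    for r in range(2, len(items) + 1):          # every itemset Z of size >= 2
        for Z in combinations(items, r):
            Z = set(Z)
            for i in range(1, len(Z)):          # every non-empty proper subset X
                for X in combinations(sorted(Z), i):
                    X = set(X)
                    s = support_count(Z, transactions) / len(transactions)
                    if s < minsup:
                        continue
                    c = support_count(Z, transactions) / support_count(X, transactions)
                    if c >= minconf:
                        rules.append((X, Z - X, s, c))
    return rules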
Mining Association Rules

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Example of Rules:
{Milk, Diaper} → {Beer}   (s=0.4, c=0.67)
{Milk, Beer} → {Diaper}   (s=0.4, c=1.0)
{Diaper, Beer} → {Milk}   (s=0.4, c=0.67)
{Beer} → {Milk, Diaper}   (s=0.4, c=0.67)
{Diaper} → {Milk, Beer}   (s=0.4, c=0.5)
{Milk} → {Diaper, Beer}   (s=0.4, c=0.5)

Observations:
• All the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}
• Rules originating from the same itemset have identical support but can have different confidence
• Thus, we may decouple the support and confidence requirements
Mining Association Rules

Two-step approach:
1. Frequent Itemset Generation
   – Generate all itemsets whose support ≥ minsup
2. Rule Generation
   – Generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

Frequent itemset generation is still computationally expensive.
Frequent Itemset Generation

[Figure: itemset lattice over items A, B, C, D, E, from the null set at the top through all 1-, 2-, 3- and 4-itemsets down to ABCDE at the bottom.]

Given d items, there are 2^d possible candidate itemsets.
Frequent Itemset Generation

Brute-force approach:
– Each itemset in the lattice is a candidate frequent itemset
– Count the support of each candidate by scanning the database

[Figure: the N = 5 market-basket transactions matched against a list of M candidate itemsets; w is the maximum transaction width.]

– Match each transaction against every candidate
– Complexity ~ O(NMw) ⇒ expensive since M = 2^d !!!
Computational Complexity

Given d unique items:
– Total number of itemsets = 2^d
– Total number of possible association rules:

  R = Σ_{k=1}^{d-1} [ C(d, k) × Σ_{j=1}^{d-k} C(d-k, j) ] = 3^d − 2^(d+1) + 1

  If d = 6, R = 602 rules
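A quick sanity check of this count, comparing the direct summation with the closed form (an illustrative sketch, not part of the slides):

from math import comb

def rule_count(d):
    """Number of association rules over d items, by direct summation."""
    return sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
               for k in range(1, d))

d = 6
print(rule_count(d), 3**d - 2**(d + 1) + 1)  # both print 602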
Frequent Itemset Generation Strategies

Reduce the number of candidates (M)
– Complete search: M = 2^d
– Use pruning techniques to reduce M

Reduce the number of transactions (N)
– Reduce the size of N as the size of the itemset increases
– Used by DHP and vertical-based mining algorithms

Reduce the number of comparisons (NM)
– Use efficient data structures to store the candidates or transactions
– No need to match every candidate against every transaction
Reducing Number of Candidates

Apriori principle:
– If an itemset is frequent, then all of its subsets must also be frequent

The Apriori principle holds due to the following property of the support measure:

  ∀ X, Y : (X ⊆ Y) ⇒ s(X) ≥ s(Y)

– Support of an itemset never exceeds the support of its subsets
– This is known as the anti-monotone property of support
Illustrating Apriori Principle

[Figure: the itemset lattice over A–E again. One itemset is found to be infrequent, and all of its supersets are pruned from the search as a result.]
Illustrating Apriori Principle

Minimum Support = 3

Items (1-itemsets):
Item    Count
Bread   4
Coke    2
Milk    4
Beer    3
Diaper  4
Eggs    1

Pairs (2-itemsets) — no need to generate candidates involving Coke or Eggs:
Itemset         Count
{Bread,Milk}    3
{Bread,Beer}    2
{Bread,Diaper}  3
{Milk,Beer}     2
{Milk,Diaper}   3
{Beer,Diaper}   3

Triplets (3-itemsets):
Itemset              Count
{Bread,Milk,Diaper}  3

If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 41
With support-based pruning: 6 + 6 + 1 = 13
Apriori Algorithm

Method:
– Let k = 1
– Generate frequent itemsets of length 1
– Repeat until no new frequent itemsets are identified:
  • Generate length-(k+1) candidate itemsets from length-k frequent itemsets
  • Prune candidate itemsets containing subsets of length k that are infrequent
  • Count the support of each candidate by scanning the DB
  • Eliminate candidates that are infrequent, leaving only those that are frequent
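The sketch below follows these steps on the market-basket example, reusing the support_count helper from earlier. It is a minimal, illustrative implementation (candidate generation by joining itemsets that share a (k−1)-item prefix), not the book's reference code.

from itertools import combinations

def apriori(transactions, minsup_count):
    """Minimal Apriori sketch: returns {frozenset(itemset): support_count}."""
    # Frequent 1-itemsets.
    items = sorted(set().union(*transactions))
    frequent = {frozenset([i]): support_count({i}, transactions) for i in items}
    frequent = {s: c for s, c in frequent.items() if c >= minsup_count}
    all_frequent = dict(frequent)
    k = 1
    while frequent:
        # Generate (k+1)-candidates by joining frequent k-itemsets sharing a prefix.
        prev = sorted(tuple(sorted(s)) for s in frequent)
        candidates = set()
        for a, b in combinations(prev, 2):
            if a[:-1] == b[:-1]:
                candidates.add(frozenset(a) | frozenset(b))
        # Prune candidates that contain an infrequent k-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k))}
        # Count support and eliminate infrequent candidates.
        frequent = {c: support_count(c, transactions) for c in candidates}
        frequent = {c: n for c, n in frequent.items() if n >= minsup_count}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

# With minsup_count = 3, this returns the four frequent items plus the pairs
# {Bread,Milk}, {Bread,Diaper}, {Milk,Diaper}, {Beer,Diaper}.
print(apriori(transactions, 3))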
Reducing Number of Comparisons

Candidate counting:
– Scan the database of transactions to determine the support of each candidate itemset
– To reduce the number of comparisons, store the candidates in a hash structure
– Instead of matching each transaction against every candidate, match it against the candidates contained in the hashed buckets

[Figure: the N transactions on the left are matched against a hash structure of k buckets holding the candidate itemsets.]
Generate Hash Tree

Suppose you have 15 candidate itemsets of length 3:
{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}

You need:
• A hash function (here items 1, 4, 7 hash to the left branch; 2, 5, 8 to the middle branch; 3, 6, 9 to the right branch)
• Max leaf size: the maximum number of itemsets stored in a leaf node (if the number of candidate itemsets exceeds the max leaf size, split the node)

[Figure: the resulting candidate hash tree; internal nodes branch on the hash of successive items, and leaves hold small groups of candidates such as {1 4 5}, {1 2 4}/{4 5 7}, {1 2 5}/{4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}/{5 6 7}, {3 4 5}, {3 5 6}/{3 5 7}/{6 8 9}, {3 6 7}/{3 6 8}.]
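The branching rule can be written as a simple hash on each item. The sketch below only groups the 15 candidates by the hash of their first item (a full hash tree recurses on the second and third items as well); the modulo-3 hash is an assumption chosen to reproduce the 1,4,7 / 2,5,8 / 3,6,9 branches shown above.

def item_hash(item):
    """Map 1,4,7 -> 0; 2,5,8 -> 1; 3,6,9 -> 2 (assumed hash function)."""
    return (item - 1) % 3

candidates = [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9), (1,3,6),
              (2,3,4), (5,6,7), (3,4,5), (3,5,6), (3,5,7), (6,8,9), (3,6,7), (3,6,8)]

buckets = {0: [], 1: [], 2: []}
for c in candidates:
    buckets[item_hash(c[0])].append(c)   # branch on the first item only

for branch, group in buckets.items():
    print(branch, group)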
Association Rule Discovery: Hash Tree

[Figure: the candidate hash tree with the root's left branch highlighted — an item that hashes as 1, 4 or 7 is routed down this branch.]
Association Rule Discovery: Hash Tree

[Figure: the same candidate hash tree with the root's middle branch highlighted — an item that hashes as 2, 5 or 8 is routed down this branch.]
Association Rule Discovery: Hash Tree

[Figure: the same candidate hash tree with the root's right branch highlighted — an item that hashes as 3, 6 or 9 is routed down this branch.]
Subset Operation

Given a transaction t = {1 2 3 5 6}, what are the possible subsets of size 3?

[Figure: level-by-level enumeration. Level 1 fixes the first item (1, 2, or 3), Level 2 fixes the second item from the remaining suffix, and Level 3 lists all ten 3-item subsets: 123, 125, 126, 135, 136, 156, 235, 236, 256, 356.]
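In code, the same enumeration is simply the sorted 3-combinations of the transaction (a one-line sketch):

from itertools import combinations

t = (1, 2, 3, 5, 6)
print(list(combinations(t, 3)))
# [(1,2,3), (1,2,5), (1,2,6), (1,3,5), (1,3,6), (1,5,6),
#  (2,3,5), (2,3,6), (2,5,6), (3,5,6)]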
Subset Operation Using Hash Tree

[Figure: transaction 1 2 3 5 6 arrives at the root of the hash tree. Hashing on the first item splits the work into the prefixes 1+ 2356, 2+ 356 and 3+ 56, each routed to the corresponding branch.]
Subset Operation Using Hash Tree

[Figure: the recursion continues one level down; the prefix 1+ 2356 is expanded into 12+ 356, 13+ 56 and 15+ 6, each hashed on its second item to pick the next branch.]
Subset Operation Using Hash Tree

[Figure: the traversal reaches the leaves; only the leaves visited along the hashed paths are compared with the transaction.]

Match the transaction against 11 out of 15 candidates.
Projected Database

Original Database:
TID  Items
1    {A,B}
2    {B,C,D}
3    {A,C,D,E}
4    {A,D,E}
5    {A,B,C}
6    {A,B,C,D}
7    {B,C}
8    {A,B,C}
9    {A,B,D}
10   {B,C,E}

Projected Database for node A:
TID  Items
1    {B}
2    {}
3    {C,D,E}
4    {D,E}
5    {B,C}
6    {B,C,D}
7    {}
8    {B,C}
9    {B,D}
10   {}

For each transaction T, the projected transaction at node A is T ∩ E(A).
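One possible reading of this projection, chosen to reproduce the table above (transactions that do not contain A project to the empty set; the others keep only the items that come after A): this is an illustrative sketch, not the book's definition of E(A).

def project(transactions, item, item_order):
    """Project each transaction onto the items that come after `item` in
    item_order; transactions that do not contain `item` become empty.
    Illustrative sketch only."""
    later = set(item_order[item_order.index(item) + 1:])
    return [(t & later) if item in t else set() for t in transactions]

db = [{"A","B"}, {"B","C","D"}, {"A","C","D","E"}, {"A","D","E"}, {"A","B","C"},
      {"A","B","C","D"}, {"B","C"}, {"A","B","C"}, {"A","B","D"}, {"B","C","E"}]
print(project(db, "A", ["A", "B", "C", "D", "E"]))
# [{'B'}, set(), {'C','D','E'}, {'D','E'}, {'B','C'}, {'B','C','D'}, set(), {'B','C'}, {'B','D'}, set()]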
ECLAT

For each item, store a list of transaction ids (tids).

Horizontal Data Layout:
TID  Items
1    A,B,E
2    B,C,D
3    C,E
4    A,C,D
5    A,B,C,D
6    A,E
7    A,B
8    A,B,C
9    A,C,D
10   B

Vertical Data Layout (TID-lists):
A: 1, 4, 5, 6, 7, 8, 9
B: 1, 2, 5, 7, 8, 10
C: 2, 3, 4, 8, 9
D: 2, 4, 5, 9
E: 1, 3, 6
ECLAT

Determine the support of any k-itemset by intersecting the tid-lists of two of its (k-1)-subsets:

A: {1,4,5,6,7,8,9}  ∩  B: {1,2,5,7,8,10}  →  AB: {1,5,7,8}

3 traversal approaches:
– top-down, bottom-up and hybrid

Advantage: very fast support counting
Disadvantage: intermediate tid-lists may become too large for memory
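A minimal sketch of the vertical-layout intersection (variable names are illustrative):

# Vertical layout: item -> set of transaction ids.
tidlists = {
    "A": {1, 4, 5, 6, 7, 8, 9},
    "B": {1, 2, 5, 7, 8, 10},
    "C": {2, 3, 4, 8, 9},
    "D": {2, 4, 5, 9},
    "E": {1, 3, 6},
}

ab = tidlists["A"] & tidlists["B"]              # tid-list of the 2-itemset {A,B}
print(sorted(ab), "support count =", len(ab))   # [1, 5, 7, 8] support count = 4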
Rule Generation

Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement.

– If {A,B,C,D} is a frequent itemset, candidate rules:
  ABC → D,  ABD → C,  ACD → B,  BCD → A,
  A → BCD,  B → ACD,  C → ABD,  D → ABC,
  AB → CD,  AC → BD,  AD → BC,  BC → AD,
  BD → AC,  CD → AB

If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L).
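A straightforward sketch of this step, reusing the support_count helper from earlier (illustrative names; a real implementation would also exploit the confidence anti-monotonicity to prune consequents):

from itertools import combinations

def generate_rules(L, transactions, minconf):
    """All rules f -> L - f from frequent itemset L with confidence >= minconf."""
    L = frozenset(L)
    rules = []
    for r in range(1, len(L)):                 # non-empty proper subsets f
        for f in combinations(sorted(L), r):
            f = frozenset(f)
            conf = support_count(L, transactions) / support_count(f, transactions)
            if conf >= minconf:
                rules.append((set(f), set(L - f), conf))
    return rules

print(generate_rules({"Milk", "Diaper", "Beer"}, transactions, 0.6))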
Sequence Data

Sequence Database:
Object  Timestamp  Events
A       10         2, 3, 5
A       20         6, 1
A       23         1
B       11         4, 5, 6
B       17         2
B       21         7, 8, 1, 2
B       28         1, 6
C       14         1, 8, 7

[Figure: timeline from 10 to 35 showing the same data per object — Object A: {2,3,5}, {6,1}, {1}; Object B: {4,5,6}, {2}, {7,8,1,2}, {1,6}; Object C: {1,7,8}.]
Examples of Sequence Data

Customer
– Sequence: purchase history of a given customer
– Element (Transaction): a set of items bought by a customer at time t
– Event (Item): books, dairy products, CDs, etc.

Web Data
– Sequence: browsing activity of a particular Web visitor
– Element (Transaction): a collection of files viewed by a Web visitor after a single mouse click
– Event (Item): home page, index page, contact info, etc.

Event data
– Sequence: history of events generated by a given sensor
– Element (Transaction): events triggered by a sensor at time t
– Event (Item): types of alarms generated by sensors

Genome sequences
– Sequence: DNA sequence of a particular species
– Element (Transaction): an element of the DNA sequence
– Event (Item): bases A, T, G, C

[Figure: a sequence drawn as an ordered list of elements (transactions), e.g. {E1 E2} {E1 E3} {E2} {E2 E3 E4}, where each element is a set of events (items).]
Formal Definition of a Sequence

A sequence is an ordered list of elements (transactions):
  s = < e1 e2 e3 … >
– Each element contains a collection of events (items):
  ei = {i1, i2, …, ik}
– Each element is attributed to a specific time or location

The length of a sequence, |s|, is given by the number of elements in the sequence.

A k-sequence is a sequence that contains k events (items).
Examples of Sequences

Web sequence:
  < {Homepage} {Electronics} {Digital Cameras} {Canon Digital Camera} {Shopping Cart} {Order Confirmation} {Return to Shopping} >

Sequence of initiating events causing the nuclear accident at Three Mile Island
(http://stellar-one.com/nuclear/staff_reports/summary_SOE_the_initiating_event.htm):
  < {clogged resin} {outlet valve closure} {loss of feedwater} {condenser polisher outlet valve shut} {booster pumps trip} {main waterpump trips} {main turbine trips} {reactor pressure increases} >

Sequence of books checked out at a library:
  < {Fellowship of the Ring} {The Two Towers} {Return of the King} >
Formal Definition of a Subsequence

A sequence <a1 a2 … an> is contained in another sequence <b1 b2 … bm> (m ≥ n) if there exist integers i1 < i2 < … < in such that a1 ⊆ b_i1, a2 ⊆ b_i2, …, an ⊆ b_in.

Data sequence           Subsequence     Contain?
< {2,4} {3,5,6} {8} >   < {2} {3,5} >   Yes
< {1,2} {3,4} >         < {1} {2} >     No
< {2,4} {2,4} {2,5} >   < {2} {4} >     Yes

The support of a subsequence w is defined as the fraction of data sequences that contain w.

A sequential pattern is a frequent subsequence (i.e., a subsequence whose support is ≥ minsup).
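A compact sketch of this containment test (greedy matching suffices because the elements of the subsequence only need to map to later and later elements of the data sequence; names are illustrative):

def contains(data_seq, sub_seq):
    """True if sub_seq is contained in data_seq: each element of sub_seq is a
    subset of a strictly later element of data_seq."""
    i = 0
    for element in data_seq:
        if i < len(sub_seq) and sub_seq[i] <= element:
            i += 1
    return i == len(sub_seq)

print(contains([{2,4}, {3,5,6}, {8}], [{2}, {3,5}]))   # True
print(contains([{1,2}, {3,4}], [{1}, {2}]))            # False
print(contains([{2,4}, {2,4}, {2,5}], [{2}, {4}]))     # True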
Sequential Pattern Mining: Definition

Given:
– a database of sequences
– a user-specified minimum support threshold, minsup

Task:
– Find all subsequences with support ≥ minsup
Sequential Pattern Mining: Challenge

Given a sequence: <{a b} {c d e} {f} {g h i}>
– Examples of subsequences: <{a} {c d} {f} {g}>, <{c d e}>, <{b} {g}>, etc.

How many k-subsequences can be extracted from a given n-sequence?

<{a b} {c d e} {f} {g h i}> has n = 9 events.
Example for k = 4: choosing the events marked Y,
  a b | c d e | f | g h i
  Y _ | _ Y Y | _ | _ _ Y
gives the 4-subsequence <{a} {d e} {i}>.

Answer: C(n, k) = C(9, 4) = 126
Sequential Pattern Mining: Example

Object  Timestamp  Events
A       1          1, 2, 4
A       2          2, 3
A       3          5
B       1          1, 2
B       2          2, 3, 4
C       1          1, 2
C       2          2, 3, 4
C       3          2, 4, 5
D       1          2
D       2          3, 4
D       3          4, 5
E       1          1, 3
E       2          2, 4, 5

Minsup = 50%

Examples of Frequent Subsequences:
< {1,2} >        s = 60%
< {2,3} >        s = 60%
< {2,4} >        s = 80%
< {3} {5} >      s = 80%
< {1} {2} >      s = 80%
< {2} {2} >      s = 60%
< {1} {2,3} >    s = 60%
< {2} {2,3} >    s = 60%
< {1,2} {2,3} >  s = 60%
Extracting Sequential Patterns

Given n events: i1, i2, i3, …, in

Candidate 1-subsequences:
  <{i1}>, <{i2}>, <{i3}>, …, <{in}>

Candidate 2-subsequences:
  <{i1, i2}>, <{i1, i3}>, …, <{i1} {i1}>, <{i1} {i2}>, …, <{in-1} {in}>

Candidate 3-subsequences:
  <{i1, i2, i3}>, <{i1, i2, i4}>, …, <{i1, i2} {i1}>, <{i1, i2} {i2}>, …,
  <{i1} {i1, i2}>, <{i1} {i1, i3}>, …, <{i1} {i1} {i1}>, <{i1} {i1} {i2}>, …
Generalized Sequential Pattern (GSP)

Step 1:
– Make the first pass over the sequence database D to yield all the 1-element frequent sequences

Step 2: Repeat until no new frequent sequences are found
– Candidate Generation:
  Merge pairs of frequent subsequences found in the (k-1)th pass to generate candidate sequences that contain k items
– Candidate Pruning:
  Prune candidate k-sequences that contain infrequent (k-1)-subsequences
– Support Counting:
  Make a new pass over the sequence database D to find the support for these candidate sequences
– Candidate Elimination:
  Eliminate candidate k-sequences whose actual support is less than minsup
Candidate Generation

Base case (k=2):
– Merging two frequent 1-sequences <{i1}> and <{i2}> will produce two candidate 2-sequences: <{i1} {i2}> and <{i1 i2}>

General case (k>2):
– A frequent (k-1)-sequence w1 is merged with another frequent (k-1)-sequence w2 to produce a candidate k-sequence if the subsequence obtained by removing the first event in w1 is the same as the subsequence obtained by removing the last event in w2
– The resulting candidate after merging is the sequence w1 extended with the last event of w2:
  • If the last two events in w2 belong to the same element, then the last event in w2 becomes part of the last element in w1
  • Otherwise, the last event in w2 becomes a separate element appended to the end of w1
Candidate Generation Examples

Merging the sequences w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4 5}> will produce the candidate sequence <{1} {2 3} {4 5}>, because the last two events in w2 (4 and 5) belong to the same element.

Merging the sequences w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4} {5}> will produce the candidate sequence <{1} {2 3} {4} {5}>, because the last two events in w2 (4 and 5) do not belong to the same element.

We do not have to merge the sequences w1 = <{1} {2 6} {4}> and w2 = <{1} {2} {4 5}> to produce the candidate <{1} {2 6} {4 5}>, because if the latter is a viable candidate, it can be obtained by merging w1 with <{1} {2 6} {5}>.
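A sketch of the merge test and the merge result, representing a sequence as a list of tuples of events (an illustrative encoding, not the book's data structure):

def drop_first_event(seq):
    """Remove the first event of the first element of a sequence."""
    head, rest = seq[0], seq[1:]
    return ([head[1:]] if len(head) > 1 else []) + rest

def drop_last_event(seq):
    """Remove the last event of the last element of a sequence."""
    front, tail = seq[:-1], seq[-1]
    return front + ([tail[:-1]] if len(tail) > 1 else [])

def merge(w1, w2):
    """Merge two (k-1)-sequences into a candidate k-sequence, or return None."""
    if drop_first_event(w1) != drop_last_event(w2):
        return None
    last_event = w2[-1][-1]
    if len(w2[-1]) > 1:                       # last two events of w2 share an element
        return w1[:-1] + [w1[-1] + (last_event,)]
    return w1 + [(last_event,)]               # otherwise append a new element

w1 = [(1,), (2, 3), (4,)]
print(merge(w1, [(2, 3), (4, 5)]))      # [(1,), (2, 3), (4, 5)]
print(merge(w1, [(2, 3), (4,), (5,)]))  # [(1,), (2, 3), (4,), (5,)]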
GSP Example

Frequent 3-sequences:
< {1} {2} {3} >
< {1} {2 5} >
< {1} {5} {3} >
< {2} {3} {4} >
< {2 5} {3} >
< {3} {4} {5} >
< {5} {3 4} >

Candidate Generation:
< {1} {2} {3} {4} >
< {1} {2 5} {3} >
< {1} {5} {3 4} >
< {2} {3} {4} {5} >
< {2 5} {3 4} >

After Candidate Pruning:
< {1} {2 5} {3} >
Timing Constraints (I)

[Figure: a pattern {A B} {C} {D E}; the gap between consecutive elements must satisfy gap ≤ xg (max-gap) and gap > ng (min-gap), and the whole pattern must fit within ms (maximum span).]

xg = 2, ng = 0, ms = 4

Data sequence                          Subsequence      Contain?
< {2,4} {3,5,6} {4,7} {4,5} {8} >      < {6} {5} >      Yes
< {1} {2} {3} {4} {5} >                < {1} {4} >      No
< {1} {2,3} {3,4} {4,5} >              < {2} {3} {5} >  Yes
< {1,2} {3} {2,3} {3,4} {2,4} {4,5} >  < {1,2} {5} >    No
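A hedged sketch of containment under these constraints, assuming that consecutive elements of the data sequence sit at timestamps 1, 2, 3, … (as in the examples above); in a real database each element would carry its own timestamp. A backtracking search is used because greedy matching is no longer sufficient once a max-gap is imposed.

def contains_timed(data_seq, sub_seq, xg, ng, ms):
    """Containment check with max-gap (xg), min-gap (ng) and maximum span (ms).
    data_seq[i] is assumed to occur at time i + 1 (illustrative assumption)."""
    def match(j, prev_pos, first_pos):
        if j == len(sub_seq):
            return True
        for pos in range(prev_pos + 1, len(data_seq)):
            gap = pos - prev_pos
            if prev_pos >= 0 and not (ng < gap <= xg):
                continue
            if first_pos >= 0 and (pos - first_pos) > ms:
                break
            if sub_seq[j] <= data_seq[pos]:
                start = pos if first_pos < 0 else first_pos
                if match(j + 1, pos, start):
                    return True
        return False
    return match(0, -1, -1)

seq = [{2,4}, {3,5,6}, {4,7}, {4,5}, {8}]
print(contains_timed(seq, [{6}, {5}], xg=2, ng=0, ms=4))            # True  (Yes)
print(contains_timed([{1},{2},{3},{4},{5}], [{1}, {4}], 2, 0, 4))   # False (No)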
Mining Sequential Patterns with Timing Constraints

Approach 1:
– Mine sequential patterns without timing constraints
– Postprocess the discovered patterns

Approach 2:
– Modify GSP to directly prune candidates that violate the timing constraints
– Question: does the Apriori principle still hold?
Apriori Principle for Sequence Data

Object  Timestamp  Events
A       1          1, 2, 4
A       2          2, 3
A       3          5
B       1          1, 2
B       2          2, 3, 4
C       1          1, 2
C       2          2, 3, 4
C       3          2, 4, 5
D       1          2
D       2          3, 4
D       3          4, 5
E       1          1, 3
E       2          2, 4, 5

Suppose: xg = 1 (max-gap), ng = 0 (min-gap), ms = 5 (maximum span), minsup = 60%

<{2} {5}> has support = 40%, but <{2} {3} {5}> has support = 60%.

The problem exists because of the max-gap constraint; there is no such problem if the max-gap is infinite.
Contiguous Subsequences

s is a contiguous subsequence of w = <e1 e2 … ek> if any of the following conditions hold:
1. s is obtained from w by deleting an item from either e1 or ek
2. s is obtained from w by deleting an item from any element ei that contains more than 2 items
3. s is a contiguous subsequence of s' and s' is a contiguous subsequence of w (recursive definition)

Examples: s = < {1} {2} >
– is a contiguous subsequence of < {1} {2 3} >, < {1 2} {2} {3} >, and < {3 4} {1 2} {2 3} {4} >
– is not a contiguous subsequence of < {1} {3} {2} > and < {2} {1} {3} {2} >
Modified Candidate Pruning Step

Without the max-gap constraint:
– A candidate k-sequence is pruned if at least one of its (k-1)-subsequences is infrequent

With the max-gap constraint:
– A candidate k-sequence is pruned if at least one of its contiguous (k-1)-subsequences is infrequent
Timing Constraints (II)

[Figure: a pattern {A B} {C} {D E} with gap constraints ≤ xg (max-gap) and > ng (min-gap) between consecutive elements, a window size ws within which the events of one pattern element may be spread across the data sequence, and a maximum span ms for the whole pattern.]

xg = 2, ng = 0, ws = 1, ms = 5

Data sequence                      Subsequence      Contain?
< {2,4} {3,5,6} {4,7} {4,6} {8} >  < {3} {5} >      No
< {1} {2} {3} {4} {5} >            < {1,2} {3} >    Yes
< {1,2} {2,3} {3,4} {4,5} >        < {1,2} {3,4} >  Yes
Modified Support Counting Step

Given a candidate pattern <{a, c}>, any data sequence that contains
  <… {a c} …>,
  <… {a} … {c} …> (where time({c}) − time({a}) ≤ ws), or
  <… {c} … {a} …> (where time({a}) − time({c}) ≤ ws)
will contribute to the support count of the candidate pattern.
Other Formulation

In some domains, we may have only one very long time series
– Examples:
  • monitoring network traffic events for attacks
  • monitoring telecommunication alarm signals

The goal is to find frequent sequences of events in the time series
– This problem is also known as frequent episode mining

[Figure: a single long stream of events E1–E5 in which the episode <E1> <E3> occurs repeatedly.]

Pattern: <E1> <E3>
General Support Counting Schemes

[Figure: a single object's timeline with occurrences of events p and q spread over seven windows; the pattern being counted is the sequence (p) (q).]

Assume: xg = 2 (max-gap), ng = 0 (min-gap), ws = 0 (window size), ms = 2 (maximum span)

Method    Support Count
COBJ      1
CWIN      6
CMINWIN   4
CDIST_O   8
CDIST     5
Frequent Subgraph Mining

Extend association rule mining to finding frequent subgraphs.
Useful for Web mining, computational chemistry, bioinformatics, spatial data sets, etc.

[Figure: a small Web graph whose nodes are Homepage, Research, Artificial Intelligence, Databases and Data Mining.]
Graph Definitions

[Figure: (a) a labeled graph with vertex labels a, b, c and edge labels p, q, r, s, t; (b) a subgraph of it; (c) an induced subgraph of it.]
Challenges

Nodes may contain duplicate labels.

Support and confidence
– How to define them?

Additional constraints imposed by pattern structure
– Support and confidence are not the only constraints
– Assumption: frequent subgraphs must be connected

Apriori-like approach:
– Use frequent k-subgraphs to generate frequent (k+1)-subgraphs
– What is k?
Challenges…

Support:
– the number of graphs that contain a particular subgraph

The Apriori principle still holds.

Level-wise (Apriori-like) approach:
– Vertex growing: k is the number of vertices
– Edge growing: k is the number of edges
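As an illustration of this support definition, the sketch below counts how many graphs in a tiny made-up database contain a given labeled subgraph, using networkx (assumed to be installed) and its subgraph-isomorphism matcher with label matching. Note that GraphMatcher.subgraph_is_isomorphic() tests node-induced subgraph isomorphism, which is sufficient for this small example; the graphs, labels and helper names are illustrative only.

import networkx as nx
from networkx.algorithms import isomorphism

def labeled_graph(nodes, edges):
    """nodes: {node_id: label}; edges: [(u, v, edge_label)]. Illustrative helper."""
    g = nx.Graph()
    for n, lab in nodes.items():
        g.add_node(n, label=lab)
    for u, v, lab in edges:
        g.add_edge(u, v, label=lab)
    return g

# A tiny made-up graph database.
database = [
    labeled_graph({1: "a", 2: "b", 3: "c"}, [(1, 2, "p"), (2, 3, "r")]),
    labeled_graph({1: "a", 2: "b"},         [(1, 2, "p")]),
    labeled_graph({1: "a", 2: "c"},         [(1, 2, "q")]),
]

# Candidate subgraph: an a--b edge labeled p.
candidate = labeled_graph({1: "a", 2: "b"}, [(1, 2, "p")])

nm = isomorphism.categorical_node_match("label", None)
em = isomorphism.categorical_edge_match("label", None)

def contains_subgraph(g, sub):
    gm = isomorphism.GraphMatcher(g, sub, node_match=nm, edge_match=em)
    return gm.subgraph_is_isomorphic()

support = sum(contains_subgraph(g, candidate) for g in database)
print(support)  # 2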
Vertex Growing

[Figure: two graphs G1 and G2 that share a common core are merged into G3 = join(G1, G2) by adding the vertex (and its incident edge) in which they differ; the adjacency matrices M_G1, M_G2 and M_G3 show how the new row and column are appended.]
Edge Growing

[Figure: two graphs G1 and G2 sharing a common core are merged into G3 = join(G1, G2) by adding an edge; unlike vertex growing, the join does not necessarily introduce a new vertex.]
Apriori-like Algorithm

Find frequent 1-subgraphs
Repeat:
– Candidate generation
  • Use frequent (k-1)-subgraphs to generate candidate k-subgraphs
– Candidate pruning
  • Prune candidate subgraphs that contain infrequent (k-1)-subgraphs
– Support counting
  • Count the support of each remaining candidate
– Candidate elimination
  • Eliminate candidate k-subgraphs that are infrequent

In practice, it is not as easy; there are many other issues.
Example: Dataset

[Figure: four graph transactions G1–G4, each with vertices labeled a–e and edges labeled p, q or r. The accompanying table encodes each graph as a binary vector over (vertex label, vertex label, edge label) triples such as (a,b,p), (a,b,q), (a,b,r), (b,c,p), (b,c,q), (b,c,r), …, (d,e,r), with a 1 indicating that the corresponding labeled edge occurs in the graph.]
Example

Minimum support count = 2

[Figure: the level-wise search on the dataset above. For k=1 the frequent subgraphs are the single vertices a, b, c, d, e; for k=2 they are single labeled edges; for k=3 candidate subgraphs are generated from the frequent edges, and one candidate is pruned because it contains an infrequent 2-subgraph.]
Candidate Generation

In Apriori:
– Merging two frequent k-itemsets will produce a candidate (k+1)-itemset

In frequent subgraph mining (vertex/edge growing):
– Merging two frequent k-subgraphs may produce more than one candidate (k+1)-subgraph