Data Warehousing and Data Mining
— Chapter 3: Data Preprocessing —
2013/2014 Fall
1
Chapter 3: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Time-dependent data

Summary
2
Why Data Preprocessing?

Data in the real world is dirty:
- incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data (e.g., occupation="")
- noisy: containing errors or outliers (e.g., Salary="-10")
- inconsistent: containing discrepancies in codes or names, e.g.:
  - Age="42" but Birthday="03/07/1997"
  - rating was "1, 2, 3", now rating is "A, B, C"
  - discrepancy between duplicate records
3
Why Is Data Dirty?

Incomplete data comes from:
- "n/a" data values when collected
- different considerations between the time when the data was collected and when it is analyzed
- human/hardware/software problems
Noisy data comes from the process of data:
- collection
- entry
- transmission
Inconsistent data comes from:
- different data sources
- functional dependency violations
4
Why Is Data Preprocessing Important?

No quality data, no quality mining results!
- Quality decisions must be based on quality data, e.g., duplicate or missing data may cause incorrect or even misleading statistics.
- A data warehouse needs consistent integration of quality data.
- Data extraction, cleaning, and transformation comprise the majority of the work of building a data warehouse.
5
Multi-Dimensional Measure of Data Quality


A well-accepted multidimensional view:
- Accuracy
- Completeness
- Consistency
- Timeliness
- Believability
- Value added
- Interpretability
- Accessibility
Broad categories: intrinsic, contextual, representational, and accessibility.
6
Major Tasks in Data Preprocessing

Data cleaning
- Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
- Integration of multiple databases, data cubes, or files
Data transformation
- Normalization and aggregation
Data reduction
- Obtains a reduced representation in volume that produces the same or similar analytical results
Data discretization
- Part of data reduction, but with particular importance, especially for numerical data
7
Forms of data preprocessing
8
Chapter 3: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Time-dependent data

Summary
9
Data Cleaning


Importance
- "Data cleaning is one of the three biggest problems in data warehousing" (Ralph Kimball)
- "Data cleaning is the number one problem in data warehousing" (DCI survey)
Data cleaning tasks
- Fill in missing values
- Identify outliers and smooth out noisy data
- Correct inconsistent data
- Resolve redundancy caused by data integration
10
Missing Data

Data is not always available
- e.g., many tuples have no recorded value for several attributes, such as customer income in sales data
Missing data may be due to
- equipment malfunction
- inconsistency with other recorded data, and thus deleted
- data not entered due to misunderstanding
- certain data not being considered important at the time of entry
- failure to register history or changes of the data
Missing data may need to be inferred.
11
How to Handle Missing Data?

- Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably.
- Fill in the missing value manually: tedious + infeasible?
- Use a global constant to fill in the missing value: e.g., "unknown", a new class?!
- Use the attribute mean, median, or mode to fill in the missing value.
- Use the attribute mean of all samples belonging to the same class to fill in the missing value: smarter.
- Use the most probable value to fill in the missing value: inference-based, such as a Bayesian formula or a decision tree.
12
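As an illustration of the fill-in strategies above, here is a minimal pandas sketch; the table, the class column, and the fill values are made up for illustration:

```python
import pandas as pd

# Hypothetical customer table with missing income values.
df = pd.DataFrame({
    "cls":    ["gold", "gold", "silver", "silver", "gold"],
    "income": [50.0, None, 30.0, 35.0, None],
})

# Global constant fill (a new "unknown"-style value).
df["income_const"] = df["income"].fillna(-1)

# Fill with the overall attribute mean.
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Smarter: fill with the mean of samples belonging to the same class.
df["income_cls_mean"] = df["income"].fillna(
    df.groupby("cls")["income"].transform("mean"))

print(df)
```

The class-conditional mean keeps the filled values closer to the distribution of the tuple's own class than a single global constant does.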
Most Probable Value

Use the most probable value to fill in the missing value: inference-based, such as a Bayesian formula or a decision tree.
- Develop a submodel to predict the category or numerical value of the missing case, e.g., income = f(education, sex, age, ...)
- Candidate submodels:
  - a neural network model
  - a k-NN model
  - a decision tree
  - regression
  - ...
13
An Extreme Case



Suppose all values of an attribute are missing.
- This attribute may be considered as the unknown label in unsupervised learning.
- Assigning an object to a cluster can be thought of as filling in the missing values of that attribute.

Education | Gender | Income | Type
UGrad     | M      | 1-2    | ?
HighSc    | F      | 0.5-1  | ?
Grad      | F      | 0.5-1  | ?
14
Time Series and Spatial Data
- Filling missing values in time series and spatial data needs a different treatment.
[Figure: stock index over time (in days); "x" marks days where data exist, "o" marks missing values]
15
Noisy Data



Noise: random error or variance in a measured variable.
Incorrect attribute values may be due to:
- faulty data collection instruments
- data entry problems: human or computer error
- data transmission problems
- technology limitations
- inconsistency in naming conventions
- duplicate records
Some noise is inevitable and due to variables or factors that cannot be measured.
16
How to Handle Noisy Data?




Binning method:
- first sort the data and partition it into (equi-depth) bins
- then smooth by bin means, bin medians, bin boundaries, etc.
Clustering:
- detect and remove outliers
Combined computer and human inspection:
- detect suspicious values and check them by a human
Regression:
- smooth by fitting the data to regression functions
17
Simple Discretization Methods: Binning


Equal-width (distance) partitioning:
- divides the range into N intervals of equal size: a uniform grid
- if A and B are the lowest and highest values of the attribute, the width of the intervals is W = (B - A)/N
- the most straightforward approach
- but outliers may dominate the presentation, and skewed data is not handled well
Equal-depth (frequency) partitioning:
- divides the range into N intervals, each containing approximately the same number of samples
- managing categorical attributes can be tricky
18
Binning Methods for Data Smoothing
* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28,
29, 34
* Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
19
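The smoothing steps above can be reproduced with a short Python sketch; the prices are the slide's example, and rounding bin means to the nearest integer is an assumption that matches the smoothed values shown:

```python
# Equi-depth binning and smoothing, a minimal sketch of the slide's example.
prices = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
n_bins = 3
depth = len(prices) // n_bins
bins = [prices[i * depth:(i + 1) * depth] for i in range(n_bins)]

# Smoothing by bin means: every value in a bin is replaced by the bin mean.
by_means = [[round(sum(b) / len(b)) for _ in b] for b in bins]

# Smoothing by bin boundaries: each value is replaced by the closer bin boundary.
by_bounds = [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```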
Cluster Analysis
20
Regression
[Figure: scatter plot with fitted line y = x + 1; a noisy point (X1, Y1) is smoothed to the value Y1' on the line]
21
Example use of regression to determine
outliers

Problem: predict the yearly spending of AE customers from their income.
- Y: yearly spending in YTL
- X: average monthly income in YTL
22
Data set
X (income) | Y (spending)
50   | 200
100  | 50
200  | 70
250  | 80
300  | 85
400  | 120
500  | 130
700  | 150
750  | 150
800  | 180
1000 | 200

The first data point is probably an outlier. But by examining only the spending data (the distribution of Y alone), the 200 spending of the first customer cannot be identified as an outlier.
23
Regression
[Figure: scatter of income (x) vs. spending (y) with the fitted line y = 0.15x + 20]
- Examine the residuals: Y1 - Y1* is high for the first point (X1 = 50, Y1 = 200), whereas Y10 - Y10* is low for the last point (X10 = 1000), so data point 1 is an outlier.
24
Notes



Some methods are used for both smoothing and data reduction or discretization:
- binning: also used in decision trees to reduce the number of categories
- concept hierarchies: e.g., price, a numerical variable, mapped into concepts such as inexpensive, moderately priced, and expensive
25
Inconsistent Data

Inconsistent data may be due to:
- faulty data collection instruments
- data entry problems: human or computer error
- data transmission problems
- a change in scale over time: e.g., ratings "1, 2, 3" changed to "A, B, C"
- inconsistency in naming conventions
- duplicate records
Data integration: different units used for the same variable
- TL or dollar
- value added tax included in one source but not in the other
26
Mining Data Dispersion Characteristics

Motivation
- To better understand the data: central tendency, variation, and spread
Data dispersion characteristics
- median, max, min, quartiles, outliers, variance, etc.
Numerical dimensions correspond to sorted intervals
- Data dispersion: analyzed with multiple granularities of precision
- Boxplot or quantile analysis on sorted intervals
Dispersion analysis on computed measures
- Folding measures into numerical dimensions
- Boxplot or quantile analysis on the transformed cube
27
Measuring the Central Tendency


Mean (algebraic measure): $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$
- Weighted arithmetic mean: $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
- Trimmed mean: chop off extreme values
Median: a holistic measure
- Middle value if there is an odd number of values, or the average of the middle two values otherwise
- Estimated by interpolation for grouped data: $\text{median} = L_1 + \left( \frac{n/2 - (\sum f)_l}{f_{\text{median}}} \right) c$
Mode
- Value that occurs most frequently in the data
- Unimodal, bimodal, trimodal
- Empirical formula: $\text{mean} - \text{mode} \approx 3 \times (\text{mean} - \text{median})$
28
Symmetric vs. Skewed Data

Median, mean and mode of
symmetric, positively and
negatively skewed data
29
Measuring the Dispersion of Data
Quartiles, outliers, and boxplots
- Quartiles: Q1 (25th percentile), Q3 (75th percentile)
- Inter-quartile range: IQR = Q3 - Q1
- Five-number summary: min, Q1, M, Q3, max
- Boxplot: the ends of the box are the quartiles, the median is marked, whiskers extend out, and outliers are plotted individually
- Outlier: usually, a value more than 1.5 x IQR below Q1 or above Q3
Variance and standard deviation
- Variance $s^2$ (algebraic, scalable computation):
  $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2\right]$
- Standard deviation s is the square root of the variance $s^2$
30
Properties of Normal Distribution Curve

The normal (distribution) curve:
- from μ-σ to μ+σ: contains about 68% of the measurements (μ: mean, σ: standard deviation)
- from μ-2σ to μ+2σ: contains about 95% of them
- from μ-3σ to μ+3σ: contains about 99.7% of them
31
Boxplot Analysis

Five-number summary of a distribution: Minimum, Q1, M, Q3, Maximum
Boxplot
- Data is represented with a box
- The ends of the box are at the first and third quartiles, i.e., the height of the box is the IQR
- The median is marked by a line within the box
- Whiskers: two lines outside the box extend to the Minimum and Maximum
32
Positively and Negatively Correlated Data
33
Not Correlated Data
34
Graphic Displays of Basic Statistical Descriptions






Histogram: (shown before)
Boxplot: (covered before)
Quantile plot: each value x_i is paired with f_i, indicating that approximately 100·f_i % of the data are ≤ x_i
Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another
Scatter plot: each pair of values gives a pair of coordinates plotted as a point in the plane
Loess (local regression) curve: adds a smooth curve to a scatter plot to provide a better perception of the pattern of dependence
35
Quantile Plot


Displays all of the data (allowing the user to assess both the overall behavior and unusual occurrences)
Plots quantile information
- For data x_i sorted in increasing order, f_i indicates that approximately 100·f_i % of the data are below or equal to the value x_i
36
Quantile-Quantile (Q-Q) Plot



Graphs the quantiles of one univariate distribution against the corresponding quantiles of another
View: is there a shift in going from one distribution to the other?
Example: unit price of items sold at Branch 1 vs. Branch 2 for each quantile. Unit prices of items sold at Branch 1 tend to be lower than those at Branch 2.
37
Chapter 3: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Time-dependent data

Summary
38
Data Integration



Data integration:
- combines data from multiple sources into a coherent store
Schema integration
- integrate metadata from different sources
- entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ≡ B.cust-#
Detecting and resolving data value conflicts
- for the same real-world entity, attribute values from different sources differ
- possible reasons: different representations, different scales, e.g., metric vs. British units
- price may include value added tax in one source but not in the other
- names may be entered with different conventions: "name lastname" or "lastname name" or ...
39
Handling Redundant Data
in Data Integration

Redundant data occur often when multiple databases are integrated
- The same attribute may have different names in different databases
- One attribute may be a "derived" attribute in another table, e.g., annual revenue
Redundancy may be detected by correlation analysis:
  $r_{X,Y} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{(n-1)\,\sigma_X \sigma_Y}$
  where $\sigma_X$ and $\sigma_Y$ are the standard deviations of X and Y
- as r → 1: positive correlation (X↑ Y↑ or X↓ Y↓), one attribute is redundant
- r ≈ 0: X and Y are uncorrelated (treated as independent)
- as r → -1: negative correlation (X↑ Y↓ or X↓ Y↑), one attribute is redundant
40
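As a sketch of the correlation check above, plain NumPy is enough; the attribute names and the 0.95 threshold below are made up for illustration:

```python
import numpy as np

# Two hypothetical attributes collected from different source databases.
annual_revenue  = np.array([120.0, 95.0, 210.0, 180.0, 60.0, 140.0])
monthly_rev_x12 = np.array([118.0, 96.0, 208.0, 183.0, 61.0, 139.0])

# Pearson correlation coefficient r_{X,Y}; values near +/-1 flag a
# (near-)redundant attribute that can be dropped after integration.
r = np.corrcoef(annual_revenue, monthly_rev_x12)[0, 1]
print(f"r = {r:.3f}")
if abs(r) > 0.95:          # threshold is an arbitrary choice for the sketch
    print("attributes look redundant; consider keeping only one")
```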
Positively and Negatively Correlated Data
41
Not Correlated Data
42
Exercise



Construct a data set, or a functional relationship between two variables X and Y, where there is a perfect relation between X and Y:
- knowing the value of X, the Y corresponding to that X can be predicted with no error
- but the correlation coefficient between X and Y is ZERO
43
Handling Redundant Data
in Data Integration

how to detect redundancy for categorical variables?


Find a measure of association for categorical variables
Careful integration of the data from multiple sources may
help reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
44
Correlation Analysis (Categorical Data)

Χ2 (chi-square) test
  $\chi^2 = \sum \frac{(\text{Observed} - \text{Expected})^2}{\text{Expected}}$
- The larger the χ² value, the more likely the variables are related
- The cells that contribute the most to the χ² value are those whose actual count is very different from the expected count
45
Chi-Square Test

Under the null hypothesis that the attribute values are independent (not correlated), the χ² statistic is approximately distributed as a chi-square distribution with (k_A - 1)(k_B - 1) degrees of freedom
- k_A and k_B are the numbers of distinct values of attributes A and B
Given a confidence level α, reject the null hypothesis of no correlation if χ² > χ²(df, α)
46
Chi-Square Calculation: An Example

                         | Play chess | Not play chess | Sum (row)
Like science fiction     | 250 (90)   | 200 (360)      | 450
Not like science fiction | 50 (210)   | 1000 (840)     | 1050
Sum (col.)               | 300        | 1200           | 1500

χ² (chi-square) calculation (numbers in parentheses are expected counts calculated from the data distribution in the two categories):
  $\chi^2 = \frac{(250-90)^2}{90} + \frac{(50-210)^2}{210} + \frac{(200-360)^2}{360} + \frac{(1000-840)^2}{840} = 507.93$
Since $\chi^2((2-1)(2-1), 0.05) = 3.84 < 507.93$, the null hypothesis is rejected: like_science_fiction and play_chess are correlated in this group.
47
Calculating expected frequencies



Expected values are computed under the independence assumption:
  E(i, j) = p_i · p_j · N
- p_i: probability of observing variable A at state i
- p_j: probability of observing variable B at state j
- N: total number of observations
E.g., E(like_scfic, play_chess) = (450/1500) · (300/1500) · 1500 = 90
48
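The whole test can be reproduced from the contingency table above; a minimal sketch, assuming SciPy is available (Yates' continuity correction is disabled so the result matches the slide's 507.93):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts from the like_science_fiction vs. play_chess table.
observed = np.array([[250, 200],
                     [50, 1000]])

chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(expected)             # [[ 90. 360.] [210. 840.]]
print(round(chi2, 2), dof)  # 507.93 1
print(p_value)              # far below 0.05 -> reject independence
```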
Exercises



What is the limitation of chi-square measure of
correlation between categorical variables
Suggest a better way of measuring correlation
between categorical variables
Suggest a way of measuring association
between a categorical and a numerical variable
49
Correlation and Causation

Correlation does not imply causality


# of hospitals and # of car-theft in a city are
correlated
Both are causally linked to the third variable:
population
50
Exercise




The correlation coefficient shows the degree of linear association between two variables, say X and Y.
- Suppose the data set contains a third variable Z potentially correlated with X and Y.
- Then the association between X and Y is affected by the presence of the third variable Z.
- E.g., three variables: Y: crop yield, X: rainfall, Z: temperature. Assume:
  - no association between Y and X
  - positive association between Y and Z, and
  - negative association between X and Z
51
Exercise cont.




How do you interpret the associations?
Design such a dataset, so that the simple correlation coefficient between X and Y is misleading:
- it contradicts common-sense knowledge
Develop an association measure showing the association between Y and X while eliminating the effect of Z (the third variable) on both X and Y.
52
Another attribute relevance measure
Entropy and Information Gain


S contains s_i tuples of class C_i for i = {1, …, m}
Information measure: the information required to classify any arbitrary tuple:
  $I(s_1, s_2, \ldots, s_m) = -\sum_{i=1}^{m}\frac{s_i}{s}\log_2\frac{s_i}{s}$
Entropy of attribute A with values {a_1, a_2, …, a_v}:
  $E(A) = \sum_{j=1}^{v}\frac{s_{1j} + \cdots + s_{mj}}{s}\, I(s_{1j}, \ldots, s_{mj})$
Information gained by branching on attribute A:
  $\text{Gain}(A) = I(s_1, s_2, \ldots, s_m) - E(A)$
53
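A minimal Python sketch of I, E(A), and Gain(A) for one categorical attribute; the tiny income/response data set below is made up for illustration:

```python
import math
from collections import Counter

def info(labels):
    """I(s1,...,sm): entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(attribute_values, labels):
    """Gain(A) = I(S) - E(A) for one categorical attribute A."""
    n = len(labels)
    by_value = {}
    for a, y in zip(attribute_values, labels):
        by_value.setdefault(a, []).append(y)
    e_a = sum(len(part) / n * info(part) for part in by_value.values())
    return info(labels) - e_a

# Tiny illustrative data: does income range predict a promotion response?
income = ["low", "low", "mid", "mid", "high", "high"]
resp   = ["no",  "no",  "yes", "no",  "yes",  "yes"]
print(round(gain(income, resp), 3))   # 0.667
```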
Entropy and Information Gain


Choose as relevant the attributes whose information gain is greater than a critical value.
54
Exercise


Construct a simple data set of two categorical attributes A and B such that:
- knowing the values of A provides perfect information to predict B, but
- knowing the values of B does not provide any information to predict A
Entropy is not a symmetric measure of association, unlike the correlation coefficient.
55
Exercise

Consider a data set of three attributes A, B, and C. A has two distinct values, "positive": p and "negative": n, and can be considered the class label. The unconditional probabilities of p and n are 0.5 each. B and C have four and three distinct values, b1 b2 b3 b4 and c1 c2 c3, each having unconditional probabilities 1/4 and 1/3, respectively. Construct a data set such that B is perfectly relevant for predicting the class type A but C is perfectly irrelevant for the prediction of A.
56
Data Transformation

Smoothing: remove noise from the data
Aggregation: summarization, data cube construction
- e.g., from daily sales to monthly sales
Generalization: concept hierarchy climbing
- e.g., from original ages to categories: young, middle, senior
Normalization: scale to fall within a small, specified range
- min-max normalization
- z-score normalization
- normalization by decimal scaling
Attribute/feature construction
- new attributes constructed from the given ones
- e.g., profit, financial ratios
57
Data Transformation:
Normalization

min-max normalization:
  $v' = \frac{v - \min_A}{\max_A - \min_A}(\text{new\_max}_A - \text{new\_min}_A) + \text{new\_min}_A$
- When a score falls outside the original range, the algorithm may give an error or warning message.
- In the presence of outliers, the regular observations are squeezed into a small interval.
58
Data Transformation:
Normalization

z-score normalization:
  $v' = \frac{v - \text{mean}_A}{\text{stand\_dev}_A}$
- Good at handling outliers: as the new range is from -∞ to +∞, there is no out-of-range error.
59
Data Transformation:
Normalization

Decimal scaling:
  $v' = \frac{v}{10^j}$, where j is the smallest integer such that $\max(|v'|) < 1$
- Ex: max |v| = 984, then j = 3 and v' = 0.984
- Preserves the appearance of the figures.
60
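The three normalizations can be sketched together in a few lines of NumPy; the attribute values below are made up, and the [0, 1] target range for min-max is an arbitrary choice:

```python
import numpy as np

v = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])   # made-up attribute values

# Min-max normalization to the new range [0, 1].
new_min, new_max = 0.0, 1.0
minmax = (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

# Z-score normalization (sample standard deviation).
zscore = (v - v.mean()) / v.std(ddof=1)

# Decimal scaling: divide by 10^j with the smallest j making max(|v'|) < 1.
j = int(np.ceil(np.log10(np.abs(v).max() + 1)))
decimal = v / 10 ** j

print(minmax, zscore, decimal, sep="\n")
```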
Data Transformation: Logarithmic Transformation
- Logarithmic transformation: Y' = log Y
- Used in some distance-based methods, e.g., clustering
- For ratio-scaled variables, e.g., weight, TL/dollar
- Distance between objects is then related to percentage changes rather than actual differences
61
Linear transformations




Note that linear transformations preserve the shape of the distribution, whereas nonlinear transformations distort the distribution of the data.
Example: the logistic function f(x) = 1/(1 + exp(-x))
- transforms x into the interval (0, 1)
- the new data are always between 0 and 1
62
Attribute/feature construction






Automatic attribute generation
- using products and other operations
E.g., a regression model to explain the spending of a customer, denoted Y, using X1: age, X2: income, ...
- Y = a + b1*X1 + b2*X2: a linear model; a, b1, b2 are parameters to be estimated
- Y = a + b1*X1 + b2*X2 + c*X1*X2: a nonlinear model with parameters a, b1, b2, c, but still linear in the parameters, since X1*X2 can be computed directly from income and age
63
Ratios or differences


Define new attributes, e.g.:
- real variables in macroeconomics: real GNP
- financial ratios
- profit = revenue - cost
64
Chapter 3: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Time-dependent data

Summary
65
Data Reduction Strategies



A warehouse may store terabytes of data: complex data analysis/mining may take a very long time to run on the complete data set.
Data reduction
- Obtains a reduced representation of the data set that is much smaller in volume but yet produces the same (or almost the same) analytical results
Data reduction strategies
- Data cube aggregation
- Dimensionality reduction: remove unimportant attributes
- Data compression
- Numerosity reduction: fit data into models
- Discretization and concept hierarchy generation
66
Credit Card Promotion Database
Income Range | Magazine Promotion | Watch Promotion | Life Insurance Promotion | Credit Card Insurance | Gender | Age
40-50 K | Yes | No  | No  | No  | Male   | 45
30-40 K | Yes | Yes | Yes | No  | Female | 40
40-50 K | No  | No  | No  | No  | Male   | 42
30-40 K | Yes | Yes | Yes | Yes | Male   | 43
50-60 K | Yes | No  | Yes | No  | Female | 38
20-30 K | No  | No  | No  | No  | Female | 55
30-40 K | Yes | No  | Yes | Yes | Male   | 35
20-30 K | No  | Yes | No  | No  | Male   | 27
30-40 K | Yes | No  | No  | No  | Male   | 43
30-40 K | Yes | Yes | Yes | No  | Female | 41
40-50 K | No  | Yes | Yes | No  | Female | 43
20-30 K | No  | Yes | Yes | No  | Male   | 29
50-60 K | Yes | Yes | Yes | No  | Female | 39
40-50 K | No  | Yes | No  | No  | Male   | 55
20-30 K | No  | No  | Yes | Yes | Female | 19
67
Data Reduction

Eliminate redundant attributes
- correlation coefficients show that the watch and magazine promotions are associated, so eliminate one of them
Combine variables to reduce the number of independent variables
- principal component analysis
Sampling: reduce the number of cases (records) by sampling from secondary storage
Discretization
- e.g., age to categorical values
68
Dimensionality Reduction


Feature selection (i.e., attribute subset selection):
- Select a minimum set of features such that the probability distribution of the different classes given the values of those features is as close as possible to the original distribution given the values of all features
- Fewer attributes also make the discovered patterns easier to understand
Heuristic methods (due to the exponential number of choices):
- step-wise forward selection
- step-wise backward elimination
- combining forward selection and backward elimination
- decision-tree induction
69
Example of irrelevant features

Average spending of a customer by income and age:

        | income: low | mid | high
young   | 5           | 7   | 9
mid     | 5.1         | 6.9 | 8.9
senior  | 4.9         | 7.1 | 9.1

Dropping age and explaining spending by income only:

income: low = 5, mid = 7, high = 9

does not change the results significantly.
70
Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}
[Figure: decision tree that splits on A4, then on A6 and A1, with leaves labeled Class 1 and Class 2]
Reduced attribute set: {A1, A4, A6}
71
Example: Economic indicator problem




There are tens of macroeconomic variables, say 45 in total.
- Which one is the best predictor of the inflation rate three months ahead?
- Develop a simple model to predict inflation using only a couple of those 45 macro variables.
Step-wise forward feature selection (a minimal code sketch follows below):
- The single macro variable that best predicts inflation among the 45 is selected first: try 45 models; say the winner is $/TL.
- Repeat: the k-th variable is entered by trying the remaining 45 - (k - 1) variables.
- Stop introducing new variables at some point.
73
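A minimal sketch of greedy step-wise forward selection with a least-squares submodel; the random data, the sum-of-squared-errors criterion, and the fixed three-variable stopping rule are all assumptions made for illustration:

```python
import numpy as np

# X: (n_samples, n_features) candidate predictors (e.g. macro variables),
# y: target (e.g. inflation three months ahead). Placeholder data below.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
y = X[:, 3] * 2.0 - X[:, 7] + rng.normal(scale=0.1, size=120)

def sse(features):
    """Sum of squared errors of a least-squares fit on the given feature columns."""
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in features])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

selected, remaining = [], list(range(X.shape[1]))
for _ in range(3):                         # simple stopping rule: keep 3 variables
    best = min(remaining, key=lambda j: sse(selected + [j]))
    selected.append(best)
    remaining.remove(best)
print("selected features:", selected)
```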
Example cont.






Step-wise backward feature elimination
- Develop a model including all 45 variables.
- Remove just one of them: try 45 models, each excluding one of the 45 variables.
- Repeat, eliminating one more variable at each step, until a stopping criterion is met.
- Rarely used compared to forward selection.
74
Principal Component Analysis

Given N data vectors from k dimensions, find c <= k orthogonal vectors that can best be used to represent the data
- The original data set is reduced to one consisting of N data vectors on c principal components (reduced dimensions)
- Each data vector is a linear combination of the c principal component vectors
- Works for numeric data only
- Used when the number of dimensions is large
75
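A minimal PCA sketch via the SVD of the centered data matrix; the random data and the choice of c = 2 components are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 5))          # N = 200 vectors, k = 5 dimensions
c = 2                                     # keep c <= k principal components

centered = data - data.mean(axis=0)       # PCA works on centered data
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:c]                       # the c orthogonal directions (rows)
reduced = centered @ components.T         # N x c reduced representation

# Fraction of total variance captured by the c components.
explained = (S[:c] ** 2) / (S ** 2).sum()
print(reduced.shape, explained)
```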
Principal Component Analysis
[Figure: data points in the (X1, X2) plane with the new orthogonal axes Y1 and Y2 drawn along the directions of greatest variance]
76
Principal Component Analysis
[Figure: a data point Z expressed in the principal-component coordinate system; its coordinates (Z1, Z2) are its projections onto the unit vectors a1 and a2 of the axes Y1 and Y2]
77
Histograms
- A popular data reduction technique
- Divide the data into buckets and store the average (or sum) for each bucket
- Can be constructed optimally in one dimension using dynamic programming
- Related to quantization problems
[Figure: histogram of counts over value buckets from 10,000 to 100,000]
78



Equi-width: the width of each bucket range is uniform
Equi-depth (equi-height): each bucket contains roughly the same number of data samples
V-Optimal: least variance
- the histogram variance is a weighted sum of the variances of the values each bucket represents
- bucket weight = number of values in the bucket
79
MaxDiff

MaxDiff: place a bucket boundary between adjacent sorted values $x_i$ and $x_{i+1}$ if their difference $x_{i+1} - x_i$ is one of the $k-1$ largest adjacent differences in $x_1, \ldots, x_N$.
80
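A small sketch of the MaxDiff rule, reusing the earlier price values purely for illustration:

```python
# MaxDiff partitioning: boundaries go where the k-1 largest gaps between
# adjacent sorted values occur.
def maxdiff_buckets(values, k):
    xs = sorted(values)
    gaps = [(xs[i + 1] - xs[i], i) for i in range(len(xs) - 1)]
    cut_after = sorted(i for _, i in sorted(gaps, reverse=True)[:k - 1])
    buckets, start = [], 0
    for i in cut_after:
        buckets.append(xs[start:i + 1])
        start = i + 1
    buckets.append(xs[start:])
    return buckets

print(maxdiff_buckets([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34], k=3))
```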
V-optimal design









- Sort the values
- Assign an equal number of values to each bucket
- Compute the variance
- Repeat: move boundary values between buckets and compute the new variance, until there is no reduction in variance
- variance = (n_1·Var_1 + n_2·Var_2 + ... + n_k·Var_k) / N, with N = n_1 + n_2 + ... + n_k and
  $Var_i = \frac{1}{n_i}\sum_{j=1}^{n_i}(x_j - \bar{x}_i)^2$
- Note that V-Optimal in one dimension is equivalent to k-means clustering in one dimension
81
Sampling




Allow a mining algorithm to run in complexity that is potentially sub-linear in the size of the data.
Choose a representative subset of the data:
- simple random sampling may have very poor performance in the presence of skew
Develop adaptive sampling methods:
- stratified sampling:
  - approximate the percentage of each class (or subpopulation of interest) in the overall database
  - used in conjunction with skewed data
Sampling may not reduce database I/Os (a page at a time).
82
Sampling methods


Simple random sample without replacement (SRSWOR) of size n:
- n of the N tuples from D, n < N
- P(drawing any tuple) = 1/N: all are equally likely
Simple random sample with replacement (SRSWR) of size n:
- each time a tuple is drawn from D, it is recorded and then replaced
- it may be drawn again
83
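A minimal sketch of SRSWOR, SRSWR, and a stratified sample using the standard library; the table of tuple ids and the age-group strata are made up:

```python
import random

D = list(range(1, 101))      # a hypothetical table of 100 tuple ids
n = 10

# SRSWOR: simple random sample without replacement.
srswor = random.sample(D, n)

# SRSWR: simple random sample with replacement (a tuple may be drawn again).
srswr = random.choices(D, k=n)

# Stratified sample: an SRS from each stratum (e.g. one per age group),
# so even the smallest stratum is represented.
strata = {"<30": list(range(1, 21)),
          "30-50": list(range(21, 81)),
          ">50": list(range(81, 101))}
stratified = {name: random.sample(rows, max(1, int(n * len(rows) / len(D))))
              for name, rows in strata.items()}
print(srswor, srswr, stratified, sep="\n")
```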
Sampling methods cont.


Cluster sample: if the tuples in D are grouped into M mutually disjoint "clusters", then an SRS of m clusters can be obtained, where m < M
- Tuples in a database are retrieved a page at a time
- Each page can be considered a cluster
Stratified sample: if D is divided into mutually disjoint parts called strata, obtain an SRS at each stratum
- gives a representative sample when the data are skewed
- Ex: customer data with a stratum for each age group; the age group having the smallest number of customers is then sure to be represented
84
Confidence intervals and sample size





- The cost of obtaining a sample is proportional to the size of the sample, n.
- By specifying a confidence interval, you should be able to determine the number of samples n required so that the sample mean will be within the confidence interval with (1 - p)% confidence.
- n is very small compared to the size of the database N: n << N.
85
Sampling
Raw Data
86
Sampling
Raw Data
Cluster/Stratified Sample
87
Chapter 3: Data Preprocessing

Why preprocess the data?

Data cleaning

Data integration and transformation

Data reduction

Discretization and concept hierarchy generation

Time-dependent data

Summary
88
Discretization


Three types of attributes:
- Nominal: values from an unordered set
- Ordinal: values from an ordered set
- Continuous: real numbers
Discretization:
- divide the range of a continuous attribute into intervals
- some classification algorithms only accept categorical attributes
- reduce data size by discretization
- prepare for further analysis
89
Discretization and Concept Hierarchy
- Discretization: reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values.
Types of discretization:
- supervised (class information used) vs. unsupervised (class information not used or not available)
- dynamic vs. static
- local vs. global
- top-down vs. bottom-up
90
Discretization Process


Univariate: one feature at a time
Steps of a typical discretization process:
- sort the continuous values of the feature
- evaluate a cut-point for splitting, or adjacent intervals for merging
- split or merge intervals of continuous values
In the top-down approach intervals are split:
- choose the best cut-point to split the values into two partitions
- repeat until a stopping criterion is satisfied
91
Discretization Process cont.

In the bottom-up approach intervals are merged:
- find the best pair of intervals to merge
- repeat until a stopping criterion is satisfied
Stopping at some point:
- a lower number of intervals gives better understanding but less accuracy
- a higher number gives poorer understanding but higher accuracy
Stopping criteria:
- the number of intervals reaches a critical value
- the number of observations in each interval exceeds a value
- all observations in an interval are of the same class
- a minimum information gain is not satisfied
92
Generalized Splitting Algorithm


S = sorted values of feature f

splitting(S) {
    if stopping criterion satisfied
        return
    T  = GetBestSplitPoint(S)
    S1 = GetLeftPart(S, T)
    S2 = GetRightPart(S, T)
    splitting(S1)
    splitting(S2)
}
93
Discretization and concept hierarchy
generation for numeric data

Binning (see sections before)

Histogram analysis (see sections before)

Clustering analysis (see sections before)

Entropy-based discretization

Segmentation by natural partitioning
94
1R: a supervised discretization method
- After sorting the continuous values, 1R divides the range into a number of disjoint intervals and adjusts the boundaries based on the class labels.
- Example (values with their class labels, C or R):
  value: 11 14 15 18 19 20 21 | 22 23 25 30 31 33 35 | 36
  class:  R  C  C  R  C  R  C |  R  C  C  R  C  R  C |  R
  - one interval of (majority) class C from 11 to 21
  - another interval of class C from 22 to 35
  - a last interval of class R containing just 36
  - the two leftmost intervals are merged, as they predict the same class
- Stopping: each interval should contain a prespecified minimum number of instances.
95
Entropy




A measure of the (im)purity of a sample variable S is defined as
  $Ent(S) = -\sum_{s} p_s \log_2 p_s$
- s is a value of S and $p_s$ is its estimated probability
- entropy is the average amount of information per event
The information of an event is
  $I(s) = -\log_2 p_s$
- information is high for low-probability events and low otherwise
96
Entropy-Based Discretization

Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
  $E(S, T) = \frac{|S_1|}{|S|} Ent(S_1) + \frac{|S_2|}{|S|} Ent(S_2)$
- The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization.
- The process is recursively applied to the partitions obtained until some stopping criterion is met, e.g.,
  $Ent(S) - E(T, S) < \delta$
- Experiments show that it may reduce data size and improve classification accuracy.
97
Age Splitting Example









A sample of ages for 15 customers, with classes y or n (response to the life insurance promotion):
  age:   19 27 29 35 38 39 40 41 42 43 43 43 45 55 55
  class:  y  n  y  y  y  y  y  y  n  y  y  n  n  n  n
Entropy of the whole sample without partitioning:
  ent(S) = (-6/15)·log2(6/15) + (-9/15)·log2(9/15) = -0.4·(-1.322) + (-0.6)·(-0.734) = 0.969
98
Age Splitting Example









Splitting value of age = 41.5
- S1, age < 41.5: 7 y, 1 n
- S2, age > 41.5: 2 y, 5 n
E(S, T) = (#S1/#S)·ent(S1) + (#S2/#S)·ent(S2) = (8/15)·ent(S1) + (7/15)·ent(S2)
  ent(S1) = (-1/8)·log2(1/8) + (-7/8)·log2(7/8) = -0.125·(-3.000) + (-0.875)·(-0.193) = 0.544
  ent(S2) = (-2/7)·log2(2/7) + (-5/7)·log2(5/7) = -0.286·(-1.806) + (-0.714)·(-0.486) = 0.863
  E(S, T) = (8/15)·0.544 + (7/15)·0.863 = 0.692
99
Age Splitting Example







Information gain: 0.969 - 0.692 = 0.277
- Try all possible partitions and choose the partition providing the maximum information gain.
- Then recursively partition each interval until the stopping criterion is satisfied, i.e., when the information gain falls below a threshold value.
100
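A minimal sketch of the exhaustive binary-split search using the entropy measure above, run on the slide's age data; note that the exhaustive search may report a boundary other than 41.5, since 41.5 is just one candidate split used for illustration on the slides:

```python
import math

def ent(labels):
    """Ent(S): entropy of a list of class labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def best_split(values, labels):
    """Return the boundary T minimizing E(S, T) and the resulting information gain."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [c for _, c in pairs]
    n = len(xs)
    best_t, best_e = None, float("inf")
    for i in range(1, n):
        if xs[i] == xs[i - 1]:
            continue                      # cannot split between equal values
        t = (xs[i - 1] + xs[i]) / 2
        e = i / n * ent(ys[:i]) + (n - i) / n * ent(ys[i:])
        if e < best_e:
            best_t, best_e = t, e
    return best_t, ent(ys) - best_e

ages    = [19, 27, 29, 35, 38, 39, 40, 41, 42, 43, 43, 43, 45, 55, 55]
classes = ["y", "n", "y", "y", "y", "y", "y", "y", "n", "y", "y", "n", "n", "n", "n"]
print(best_split(ages, classes))
```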
Exercise
1.
Consider a data set of two attributes A and B. A is continuous, whereas B is categorical, having two values, "y" and "n", which can be considered the class of each observation. When attribute A is discretized into two equi-width intervals, no information about the class attribute B is provided, but when it is discretized into three equi-width intervals, there is perfect information about B. Construct a simple dataset obeying these characteristics.
101