LECTURE 3:
GENERAL DATA MINING ISSUES
3.1 MACHINE LEARNING
What do we mean by 'learning' when applied to machines?
 Not just committing to memory (= storage)
 Can't require consciousness
 Learn facts (data), or processes (algorithms)?
“Things learn when they change their behaviour in a way that makes them perform better”
(Witten)
 Ties to future performance, not the act itself
 But things change behaviour for reasons other than 'learning'
 Can a machine have the Intent to perform better?
3.2 INPUTS TO DATA MINING ALGORITHMS
The aim of data mining is to learn a model for the data. This could be called a concept of the
data, so our outcome will be a concept description. E.g., the task is to classify emails as
spam/not spam; the concept to learn is 'what is spam?'
Input comes as instances. Eg, the individual emails.
Instances have attributes. Eg sender, date, recipient, words in text
Using the attributes to determine what about an instance means it should be classified as a
particular class == learning!
Obvious input structure: Table of instances (rows) and attributes (columns)
1. DATA TYPES
Nominal: Prespecified, finite number of values eg: {cat, fish, dog, squirrel}. Includes
boolean {true, false} and all enumerations.
@ St. Paul’s University
1
Ordinal: Orderable, but no concept of distance eg: hot > warm > cool > cold
Domain specific ordering, but no notion of how much hotter warm is compared to cool.
Interval: Ordered, fixed unit. eg: 1990 < 1995 < 2000 < 2005
Difference between values makes sense (1995 is 5 years after 1990)
Sum does not make sense (1990 + 1995 = year 3985??)
Ratio: Ordered, fixed unit, relative to a zero point eg: 1m, 2m, 3m, 5m
Difference makes sense (3m is 1m greater than 2m) Sum makes sense (1m + 2m = 3m)
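The four data types above differ in which operations are meaningful. A minimal sketch (all variable names are illustrative) of those differences:

```python
# Nominal: only equality/membership makes sense.
pets = {"cat", "fish", "dog", "squirrel"}
assert "cat" in pets

# Ordinal: ordering makes sense, but not distance.
temperature_rank = {"cold": 0, "cool": 1, "warm": 2, "hot": 3}
assert temperature_rank["hot"] > temperature_rank["warm"]
# temperature_rank["hot"] - temperature_rank["warm"] is NOT a meaningful distance.

# Interval: differences make sense, sums do not.
year_a, year_b = 1990, 1995
assert year_b - year_a == 5          # "5 years apart" is meaningful
# year_a + year_b == 3985 is not a meaningful year.

# Ratio: differences and sums both make sense (true zero point).
length_a, length_b = 1.0, 2.0        # metres
assert length_a + length_b == 3.0
```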
2. DATA ISSUES: MISSING VALUES
The following issues will come up over and over again, but different algorithms have different
requirements.
What happens if we don't know the value for a particular attribute in an instance?
For example, the data was never stored, was lost, or could not be represented.
Maybe that data was important!
How should we process missing values?
Possible 'solutions' for dealing with missing values:
 Ignore the instance completely. (eg class missing in training data set)
 Not very useful solution if in test data to be classified!
 Fill in values by hand
Could be very slow, and likely to be impossible
 Global 'missingValue' constant
Possible for enumerations, but what about numeric data?
 Replace with attribute mean
 Replace with class's attribute mean
 Train new classifier to predict missing value!
 Just leave as missing and require algorithm to apply appropriate technique
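Two of the strategies above can be sketched as follows (function and data names are made up for illustration): replace a missing numeric value with the attribute mean, or with the mean for the instance's class.

```python
from statistics import mean

def fill_with_mean(instances, attr):
    """Replace None values of `attr` with the mean over all known values."""
    known = [i[attr] for i in instances if i[attr] is not None]
    m = mean(known)
    for i in instances:
        if i[attr] is None:
            i[attr] = m
    return instances

def fill_with_class_mean(instances, attr, class_attr):
    """Replace None values of `attr` with the mean over instances of the same class."""
    for i in instances:
        if i[attr] is None:
            same = [j[attr] for j in instances
                    if j[class_attr] == i[class_attr] and j[attr] is not None]
            i[attr] = mean(same)
    return instances

emails = [
    {"word_count": 120, "spam": True},
    {"word_count": None, "spam": True},
    {"word_count": 40, "spam": False},
]
fill_with_class_mean(emails, "word_count", "spam")
# The missing value is filled from the other spam email: 120
```

Note the class-mean variant only works on training data, where the class is known.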
3. NOISY VALUES
By 'noisy data' we mean random errors scattered in the data, for example due to inaccurate
recording or data corruption. Some noise will be very obvious:
 data has incorrect type (string in numeric attribute)
 data does not match enumeration (maybe in yes/no field)
 data is very dissimilar to all other entries (10 in an attribute otherwise 0..1)
Some incorrect values won't be obvious at all. Eg typing 0.52 at data entry instead of 0.25.
Some possible solutions:
Manual inspection and removal
 Use clustering on the data to find instances or attributes that lie outside the main body
(outliers) and remove them
 Use regression to determine function, then remove those that lie far from the predicted
value
 Ignore all values that occur below a certain frequency threshold
 Apply smoothing function over known-to-be-noisy data
If noisy values are removed, the missing-value techniques above can be applied in their place.
If they are not removed, they may adversely affect the accuracy of the model.
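A simple sketch of spotting "very dissimilar" values (data and threshold are illustrative): flag anything more than `threshold` standard deviations from the attribute mean. Note that a single extreme value inflates the standard deviation, so a modest threshold is used here.

```python
from statistics import mean, stdev

def find_outliers(values, threshold=2.0):
    """Return values lying more than `threshold` standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > threshold * s]

readings = [0.2, 0.5, 0.3, 0.4, 0.6, 0.1, 10.0]   # one value far outside 0..1
print(find_outliers(readings))                     # → [10.0]
```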
4. INCONSISTENT VALUES
Some values may be recorded in different ways. For example 'coke', 'coca cola', 'cocacola', 'Coca Cola', etc.
In this case, the data should be normalised to a single form. Can be treated as a special case
of noise.
Some values may be recorded inaccurately on purpose! Email address:
[email protected]
Entity resolution deals with alias names and namesakes.
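Normalising to a single form can be sketched as below (an illustrative rule, not a complete solution): lowercase and strip everything but letters and digits. A real system would also need a synonym table (e.g. mapping 'coke' to the same canonical form).

```python
import re

def canonical(value):
    """Collapse case, spacing, and punctuation variants to one canonical string."""
    return re.sub(r"[^a-z0-9]", "", value.lower())

variants = ["coca cola", "cocacola", "Coca Cola", "Coca-Cola"]
assert len({canonical(v) for v in variants}) == 1   # all collapse to 'cocacola'
```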
5. REDUNDANT VALUES
Just because the base data includes an attribute doesn't make it worth giving to the data
mining task.
For example, denormalise a typical commercial database and you might have:
ProductId, ProductName, ProductPrice, SupplierId, SupplierAddress...
SupplierAddress is dependent on SupplierId (remember SQL normalisation rules?), so they
will always appear together.
A 100% confidence, 100% support association rule is not very interesting!
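The redundancy described above can be checked mechanically. A sketch (data and names are illustrative): attribute B is functionally dependent on attribute A if each value of A always appears with the same value of B.

```python
def depends_on(instances, a, b):
    """Return True if attribute `b` is functionally dependent on attribute `a`."""
    seen = {}
    for row in instances:
        if row[a] in seen and seen[row[a]] != row[b]:
            return False       # same a-value with two different b-values
        seen[row[a]] = row[b]
    return True

rows = [
    {"SupplierId": 1, "SupplierAddress": "10 High St"},
    {"SupplierId": 2, "SupplierAddress": "5 Low Rd"},
    {"SupplierId": 1, "SupplierAddress": "10 High St"},
]
assert depends_on(rows, "SupplierId", "SupplierAddress")
# SupplierAddress adds nothing once SupplierId is known, so it can be dropped.
```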
6. NUMBER OF ATTRIBUTES
Is there any harm in putting in redundant values? Yes for association rule mining, and ... yes
for other data mining tasks too.
You can treat text as thousands of numeric attributes: term/frequency from our inverted
indexes. But not all of those terms are useful for determining (for example) if an email is
spam. 'the' does not contribute to spam detection.
The number of attributes in the table will affect the time it takes the data mining process to
run. It is often the case that we want to run it many times, so getting rid of unnecessary
attributes is important.
Reducing the number of attributes/values is also called 'dimensionality reduction'. We'll look
at techniques for this later in the course, but some simplistic versions:
 Apply upper and lower thresholds of frequency
 Noise removal functions
 Remove redundant attributes
 Remove attributes below a threshold of contribution to classification. (Eg if attribute
is evenly distributed, adds no knowledge)
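The last two bullets can be sketched as a simplistic filter (the cutoff is an illustrative choice, not a standard value): drop attributes that are constant, or whose values are spread (near-)evenly, since neither helps to separate classes.

```python
from collections import Counter

def uninformative(values, evenness_cutoff=0.95):
    """Return True if the attribute's values are constant or near-evenly spread."""
    counts = Counter(values)
    if len(counts) == 1:                       # constant: no information at all
        return True
    top = max(counts.values()) / len(values)   # frequency of the most common value
    even = 1 / len(counts)                     # frequency if perfectly even
    return top <= even / evenness_cutoff       # within ~5% of evenly distributed

assert uninformative(["a"] * 10)                  # constant attribute
assert uninformative(["a", "b"] * 5)              # perfectly even: adds no knowledge
assert not uninformative(["spam"] * 9 + ["ham"])  # skewed: potentially informative
```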
3.3 OVER-FITTING / UNDER-FITTING
Learning a concept must stop at the appropriate time. For example, could express the concept
of 'Is Spam?' as a list of spam emails. Any email identical to those is spam.
Accuracy: 100% on training data, 0% on new data.
Oops! This is called over-fitting. The concept has been tailored too closely to the training
data.
Story: US Military trained a neural network to distinguish tanks vs rocks.
 It would shoot the US tanks they trained it on very consistently and never shot any
rocks ... or enemy tanks. [probably fiction, but amusing]
Extreme case of over-fitting:
Algorithm tries to learn a set of rules to determine class.
Rule1: attr1=val1/1 and attr2=val2/1 and attr3=val3/1 = class1
Rule2: attr1=val1/2 and attr2=val2/2 and attr3=val3/2 = class2
One rule for each instance is useless.
Need to prevent the learning from becoming too specific to the training set, but also don't
want it to be too broad. Complicated!
Extreme case of under-fitting:
Always pick the most frequent class, ignore the data completely.
Eg: if one class makes up 99% of the data, then a 'classifier' that always picks this class will
be correct 99% of the time!
But probably the aim of the exercise is to determine the 1%, not the 99%... making it accurate
0% of the time when you need it.
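The extreme under-fitting 'classifier' above can be made concrete (data is illustrative): always predict the majority class. Accuracy looks great, but recall on the rare class, often the class we actually care about, is zero.

```python
labels = ["not_spam"] * 99 + ["spam"]            # 99% / 1% class split
majority = max(set(labels), key=labels.count)    # the class to always predict

predictions = [majority] * len(labels)
accuracy = sum(p == t for p, t in zip(predictions, labels)) / len(labels)
spam_recall = (sum(p == t == "spam" for p, t in zip(predictions, labels))
               / labels.count("spam"))

print(accuracy)      # → 0.99
print(spam_recall)   # → 0.0
```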
3.4 SCALABILITY
We may be able to reduce the number of attributes, but most of the time we're not interested
in small 'toy' databases, but huge ones.
When there are millions of instances, and thousands of attributes, that's a LOT of data to try
to find a model for.
Very important that data mining algorithms scale well.
 Can't keep all data in memory
 Might not be able to keep all results in memory either
 Might have access to distributed processing?
 Might be able to train on a sample of the data?
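Training on a sample can be sketched as below. `sample_and_train` and the `train` callable are stand-ins invented for illustration, not part of any particular library.

```python
import random

def sample_and_train(instances, sample_size, train, seed=0):
    """Train on a uniform random sample when the full data set is too large."""
    random.seed(seed)                # reproducible sample for this sketch
    subset = random.sample(instances, min(sample_size, len(instances)))
    return train(subset)

data = list(range(1_000_000))        # stand-in for a huge instance set
model = sample_and_train(data, 10_000,
                         train=lambda s: {"n_trained_on": len(s)})
assert model["n_trained_on"] == 10_000
```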
3.5 HUMAN INTERACTION
Problem Exists Between Keyboard And Chair.
Data Mining experts are probably not experts in the domain of the data.
 Need to work together to find out what is needed, and formulate queries
 Need to work together to interpret and evaluate results
 Visualisation of results may be problematic
 Integrating into the normal workflow may be problematic
 How to apply the results appropriately may not be clear
3.6 ETHICAL DATA MINING
Just because we can doesn't mean we should.
Should we include married status, gender, race, religion or other attributes about a person in a
data mining experiment? Discrimination?
But sometimes those attributes are appropriate and important ... medical diagnosis, for
example.
What about attributes that are dependent on 'sensitive' attributes? Neighbourhoods have
different average incomes... discriminating against the poor by using location?
Privacy issues?
Privacy Preserving Data Mining