Artificial Intelligence
EDA132
Lecture 5: Machine Learning
Pierre Nugues
Lund University
[email protected]
http://cs.lth.se/pierre_nugues/
February 1st, 2017
Why Machine Learning: Early Artificial Intelligence
Early artificial intelligence techniques used introspection to codify
knowledge, often in the form of rules.
Expert systems, one of the most notable applications of traditional AI, were
entirely based on the competence of experts.
This has two major drawbacks:
Need for expertise to understand and explain the rules
Bias introduced by the expert
Why Machine Learning: What has Changed
Terabytes of data are now available.
It is impossible to understand such volumes of data or to organize them using manually crafted rules.
This has triggered a major move to empirical and statistical techniques.
In fact, most machine-learning techniques come from traditional statistics.
Applications in natural language processing, medicine, banking, online
shopping, image recognition, etc.
The success of companies like Google, Facebook, Amazon, and
Netflix, not to mention Wall Street firms and industries from
manufacturing and retail to healthcare, is increasingly driven by
better tools for extracting meaning from very large quantities of
data. ‘Data Scientist’ is now the hottest job title in Silicon Valley.
– Tim O’Reilly
Some Definitions
1. Machine learning always starts with data sets: a collection of objects or observations.
2. Machine-learning algorithms can be classified along two main lines: supervised and unsupervised classification.
3. Supervised algorithms need a training set, where the objects are described in terms of attributes and belong to a known class or have a known output.
4. The performance of the resulting classifier is measured against a test set.
5. We can also use N-fold cross validation, where the test set is selected randomly from the training set N times, usually 10.
6. Unsupervised algorithms consider objects where no class is provided.
7. Unsupervised algorithms learn regularities in data sets.
A Data Set: Salammbô
A corpus is a collection – a body – of texts.
[Covers of the French original and of the English translation]
Supervised Learning
Letter counts from Salammbô
                    French                       English
Chapter    # characters       #A       # characters       #A
1                36,961     2,503            35,680     2,217
2                43,621     2,992            42,514     2,761
3                15,694     1,042            15,162       990
4                36,231     2,487            35,298     2,274
5                29,945     2,014            29,800     1,865
6                40,588     2,805            40,255     2,606
7                75,255     5,062            74,532     4,805
8                37,709     2,643            37,464     2,396
9                30,899     2,126            31,030     1,993
10               25,486     1,784            24,843     1,627
11               37,497     2,641            36,172     2,375
12               40,398     2,766            39,552     2,560
13               74,105     5,047            72,545     4,597
14               76,725     5,312            75,352     4,871
15               18,317     1,215            18,031     1,119
Data set: https://github.com/pnugues/ilppp/tree/master/programs/ch04/salammbo
Supervised Learning: Regression
Letter count from Salammbô in French
[Figure: number of A's (1,000 to 5,500) plotted against the total character count (10,000 to 80,000) for the data file "salammbo_a_fr.tsv", with a fitted regression line f(x)]
Models
We will assume that data sets are governed by functions or models.
For instance, given the set
$$\{(x_i, y_i) \mid 0 < i \leq N\},$$
there exists a function f such that:
$$f(x_i) = y_i.$$
Supervised machine learning algorithms will produce hypothesized functions
or models fitting the data.
Selecting a Model
Often, multiple models can fit a data set:
[Figure: three polynomials of degree 1, 8, and 9 fitted to the data set]
A general rule in machine
learning is to prefer the
simplest hypotheses, here the
lower polynomial degrees.
Source code:
https://github.com/pnugues/ilppp/tree/master/programs/ch04
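As an illustration, here is a minimal sketch (not the course's own source code) that fits polynomials of degree 1, 8, and 9 with numpy.polyfit to the French (character count, count of A's) pairs from the table above; the higher degrees fit the training points more closely but are ill-conditioned and generalize poorly.

# Least-squares fits of polynomials of degree 1, 8, and 9 to the French
# (character count, count of A's) pairs from the table above. High
# degrees fit the training points more closely, but numpy may warn that
# they are poorly conditioned, and they generalize poorly.
import numpy as np

x = np.array([36961, 43621, 15694, 36231, 29945, 40588, 75255, 37709,
              30899, 25486, 37497, 40398, 74105, 76725, 18317], dtype=float)
y = np.array([2503, 2992, 1042, 2487, 2014, 2805, 5062, 2643,
              2126, 1784, 2641, 2766, 5047, 5312, 1215], dtype=float)

for degree in (1, 8, 9):
    coeffs = np.polyfit(x, y, degree)               # least-squares coefficients
    sse = np.sum((y - np.polyval(coeffs, x)) ** 2)  # error on the training points
    print(degree, sse)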
Regression: Another Model, the Logistic Curve (Verhulst)
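The slide only shows a plot of the curve. As a reminder, the logistic (Verhulst) function is commonly written f(x) = L / (1 + e^(−k(x − x0))); the following minimal sketch evaluates it with arbitrary illustrative parameters.

# The logistic (Verhulst) curve f(x) = L / (1 + exp(-k * (x - x0))):
# L is the upper asymptote, k the growth rate, and x0 the midpoint.
# The parameter values and the sample points are arbitrary.
import math

def logistic(x, L=1.0, k=1.0, x0=0.0):
    return L / (1.0 + math.exp(-k * (x - x0)))

for x in (-4, -2, 0, 2, 4):
    print(x, round(logistic(x), 3))   # rises from near 0 to near L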
Supervised Learning: Classification
Given the data set $\{(x_i, y_i) \mid 0 < i \leq N\}$ and a model f:
Classification: f(x) = y is discrete,
Regression: f(x) = y is continuous.
Classification Data Set
Here a binary classification.
                    French                                 English
Chapter    # char.       #A   class (y)        # char.       #A   class (y)
1           36,961    2,503           1         35,680    2,217           0
2           43,621    2,992           1         42,514    2,761           0
3           15,694    1,042           1         15,162      990           0
4           36,231    2,487           1         35,298    2,274           0
5           29,945    2,014           1         29,800    1,865           0
6           40,588    2,805           1         40,255    2,606           0
7           75,255    5,062           1         74,532    4,805           0
8           37,709    2,643           1         37,464    2,396           0
9           30,899    2,126           1         31,030    1,993           0
10          25,486    1,784           1         24,843    1,627           0
11          37,497    2,641           1         36,172    2,375           0
12          40,398    2,766           1         39,552    2,560           0
13          74,105    5,047           1         72,545    4,597           0
14          76,725    5,312           1         75,352    4,871           0
15          18,317    1,215           1         18,031    1,119           0
Supervised Learning: Fisher’s Iris data set (1936)
Unsupervised Learning: Clustering
No class is given: the algorithm has to find regularities in the data by itself.
Objects, Attributes, and Classes. After Quinlan (1986)
Object   Outlook    Temperature   Humidity   Windy   Class
1        Sunny      Hot           High       False   N
2        Sunny      Hot           High       True    N
3        Overcast   Hot           High       False   P
4        Rain       Mild          High       False   P
5        Rain       Cool          Normal     False   P
6        Rain       Cool          Normal     True    N
7        Overcast   Cool          Normal     True    P
8        Sunny      Mild          High       False   N
9        Sunny      Cool          Normal     False   P
10       Rain       Mild          Normal     False   P
11       Sunny      Mild          Normal     True    P
12       Overcast   Mild          High       True    P
13       Overcast   Hot           Normal     False   P
14       Rain       Mild          High       True    N
Classifying Objects with Decision Trees. After Quinlan (1986)
Outlook (P: 9, N: 5)
    sunny → Humidity (P: 2, N: 3)
        high → N: 3
        normal → P: 2
    overcast → P: 4
    rain → Windy (P: 3, N: 2)
        true → N: 2
        false → P: 3
Decision Trees and Classification
Each object is defined by an attribute vector (or feature vector, or predictors) {A1, A2, ..., Av}.
Each object belongs to one class in {C1, C2, ..., Cn}.
The attributes of the examples are {Outlook, Temperature, Humidity, Windy} and the classes are {N, P}.
The nodes of the tree are the attributes.
Each attribute has a set of possible values. The values of Outlook are {sunny, rain, overcast}.
The branches correspond to the values of each attribute.
The optimal tree corresponds to a minimal number of tests.
Entropy, Decision Trees, and Classification
Decision trees are useful devices to classify objects into a set of classes.
Shannon's entropy is a quantification of the distribution diversity in a set of symbols.
It can help us learn decision trees automatically from a set of data.
The algorithm is one of the simplest machine-learning techniques to obtain a classifier.
Inducing (Learning) Decision Trees Automatically: ID3
It is possible to design many trees that classify the objects successfully
An efficient decision tree uses a minimal number of tests.
In the decision tree:
Each example is defined by a finite number of attributes.
Each node in the decision tree corresponds to an attribute that has as
many branches as the attribute has possible values.
At the root of the tree, the condition must be the most discriminating, that is, some branches should gather mostly positive examples while the others gather mostly negative ones.
ID3 (Quinlan 1986)
Each attribute scatters the set into as many subsets as there are values for
this attribute.
Outlook (P: 9, N: 5)
    sunny → P: 2, N: 3
    overcast → P: 4
    rain → P: 3, N: 2
At each decision point, the “best” attribute has the maximal separation power, that is, the maximal information gain.
Better Root?
Temperature (P: 9, N: 5)
    hot → P: 2, N: 2
    mild → P: 4, N: 2
    cool → P: 3, N: 1
Entropy
Information theory models a text as a sequence of symbols.
Let x1 , x2 , ..., xN be a discrete set of N symbols representing the characters.
The information content of a symbol is defined as
$$I(x_i) = -\log_2 P(x_i) = \log_2 \frac{1}{P(x_i)},$$
where
$$P(x_i) = \frac{\mathrm{Count}(x_i)}{\sum_{j=1}^{n} \mathrm{Count}(x_j)}.$$
Entropy, the average information content, is defined as:
$$H(X) = -\sum_{x \in X} P(x) \log_2 P(x).$$
By convention, $0 \log_2 0 = 0$.
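A minimal sketch of these definitions in Python, computing the entropy of a symbol sequence from its counts:

# The definitions above in Python: relative frequencies P(x) estimated
# from counts and the entropy H(X) = -sum P(x) log2 P(x). Symbols that
# do not occur contribute no term, which matches the 0 log2 0 = 0 convention.
from collections import Counter
from math import log2

def entropy(symbols):
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

print(entropy('salammbo'))   # entropy of a small symbol sequence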
Entropy of the Dataset
Outlook (P: 9, N: 5)
    sunny → P: 2, N: 3
    overcast → P: 4
    rain → P: 3, N: 2
Entropy of the top node:
$$I(p, n) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.940.$$
In the three nodes:
sunny: $I(p_1, n_1) = -\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5} = 0.971$
overcast: $I(p_2, n_2) = 0.0$
rain: $I(p_3, n_3) = -\frac{3}{5}\log_2\frac{3}{5} - \frac{2}{5}\log_2\frac{2}{5} = 0.971$
Understanding the Entropy
For a two-class set, we set:
$$x = \frac{p}{p+n} \quad\text{and}\quad \frac{n}{p+n} = 1 - x,$$
$$I(x) = -x \log_2 x - (1-x)\log_2(1-x), \quad x \in [0, 1].$$
The entropy reaches a maximum when there are as many positive as
negative examples in the data set. It is minimal when the set consists of
either positive or negative examples.
Gini Impurity
Gini impurity is an alternative to entropy:
$$G(X) = \sum_{x \in X} P(x)\,(1 - P(x)).$$
For a two-class set, we set:
$$2G(x) = 2\bigl(x(1-x) + (1-x)x\bigr) = 4x(1-x), \quad x \in [0, 1].$$
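A small sketch comparing the two measures on a two-class set as a function of x, the proportion of positive examples; using 2G(x) puts the Gini impurity on the same [0, 1] scale as the entropy:

# Entropy and scaled Gini impurity of a two-class set as a function of
# x, the proportion of positive examples. Both are 0 for a pure set and
# reach their maximum, 1.0, at x = 0.5.
from math import log2

def two_class_entropy(x):
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def two_class_gini_scaled(x):
    return 4 * x * (1 - x)    # 2 G(x) = 4 x (1 - x)

for x in (0.0, 0.1, 0.3, 0.5):
    print(x, round(two_class_entropy(x), 3), round(two_class_gini_scaled(x), 3))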
Entropy of a Text
The entropy of the text is
$$H(X) = -\sum_{x \in X} P(x)\log_2 P(x)$$
$$= -P(A)\log_2 P(A) - P(B)\log_2 P(B) - \ldots - P(Z)\log_2 P(Z) - P(\text{À})\log_2 P(\text{À}) - \ldots - P(\text{Ÿ})\log_2 P(\text{Ÿ}) - P(\text{blanks})\log_2 P(\text{blanks}).$$
The entropy of Gustave Flaubert's Salammbô in French is H(X) = 4.39.
Cross-Entropy
The cross entropy of M on P is defined as:
$$H(P, M) = -\sum_{x \in X} P(x)\log_2 M(x).$$
We have the inequality H(P) ≤ H(P, M).
The difference is called the Kullback-Leibler divergence.

                                          Entropy    Cross entropy   Difference
Salammbô, chapters 1-14, training set     4.39481    4.39481         0.0
Salammbô, chapter 15, test set            4.34937    4.36074         0.01137
Notre Dame de Paris, test set             4.43696    4.45507         0.01811
Nineteen Eighty-Four, test set            4.35922    4.82012         0.46090
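A minimal sketch of the computation behind such a table: estimate the model M from the character frequencies of the training text and P from those of a test text, then apply the definitions above. The two strings below are stand-ins for the real chapters, which would be read from files; smoothing of characters unseen in training is omitted.

# The model M is the character distribution of the training text,
# P that of a test text, H(P) = -sum P(x) log2 P(x) and
# H(P, M) = -sum P(x) log2 M(x).
from collections import Counter
from math import log2

def char_distribution(text):
    counts = Counter(text)
    total = sum(counts.values())
    return {ch: c / total for ch, c in counts.items()}

def entropy(p):
    return -sum(px * log2(px) for px in p.values())

def cross_entropy(p, m):
    return -sum(px * log2(m[ch]) for ch, px in p.items())

training = 'this string stands in for the training text'
test = 'this string stands in for the test text'
m = char_distribution(training)
p = char_distribution(test)
print(entropy(p), cross_entropy(p, m))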
ID3 (Quinlan 1986)
ID3 uses the entropy to select the best attribute to be the root of the tree
and recursively the next attributes of the resulting nodes.
The entropy of a two-class set with p positive and n negative examples is:
$$I(p, n) = -\frac{p}{p+n}\log_2\frac{p}{p+n} - \frac{n}{p+n}\log_2\frac{n}{p+n}.$$
The weighted average of all the nodes below an attribute A is:
$$E(A) = \sum_{i=1}^{v} \frac{p_i + n_i}{p + n}\, I\!\left(\frac{p_i}{p_i+n_i}, \frac{n_i}{p_i+n_i}\right).$$
The information gain is defined as $I(p, n) - E(A)$ (or $I_{before} - I_{after}$).
This measures the separating power of an attribute: the more the gain, the
better the attribute.
Example
The entropy of the data set is:
$$I(p, n) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.940.$$
Outlook has three values: sunny, overcast, and rain. The entropies of the corresponding subsets are:
sunny: $I(p_1, n_1) = -\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5} = 0.971$
overcast: $I(p_2, n_2) = 0.0$
rain: $I(p_3, n_3) = -\frac{3}{5}\log_2\frac{3}{5} - \frac{2}{5}\log_2\frac{2}{5} = 0.971$
Gain(Outlook) is then $0.940 - \frac{5}{14}\times 0.971 - \frac{5}{14}\times 0.971 = 0.246$, the highest possible among the attributes.
We have Gain(Temperature) = 0.029, Gain(Humidity) = 0.151, and Gain(Windy) = 0.048.
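These gains can be recomputed with a short sketch over Quinlan's data set (the printed values match the figures above up to rounding):

# A sketch that recomputes the gains above from Quinlan's data set.
# Each example is (outlook, temperature, humidity, windy, class).
from collections import Counter
from math import log2

DATA = [
    ('sunny', 'hot', 'high', False, 'N'), ('sunny', 'hot', 'high', True, 'N'),
    ('overcast', 'hot', 'high', False, 'P'), ('rain', 'mild', 'high', False, 'P'),
    ('rain', 'cool', 'normal', False, 'P'), ('rain', 'cool', 'normal', True, 'N'),
    ('overcast', 'cool', 'normal', True, 'P'), ('sunny', 'mild', 'high', False, 'N'),
    ('sunny', 'cool', 'normal', False, 'P'), ('rain', 'mild', 'normal', False, 'P'),
    ('sunny', 'mild', 'normal', True, 'P'), ('overcast', 'mild', 'high', True, 'P'),
    ('overcast', 'hot', 'normal', False, 'P'), ('rain', 'mild', 'high', True, 'N')]
ATTRIBUTES = ['Outlook', 'Temperature', 'Humidity', 'Windy']

def entropy(examples):
    counts = Counter(e[-1] for e in examples)
    total = len(examples)
    return -sum(c / total * log2(c / total) for c in counts.values())

def information_gain(examples, attr_index):
    total = len(examples)
    remainder = 0.0
    for value in set(e[attr_index] for e in examples):
        subset = [e for e in examples if e[attr_index] == value]
        remainder += len(subset) / total * entropy(subset)
    return entropy(examples) - remainder

for i, name in enumerate(ATTRIBUTES):
    print(name, round(information_gain(DATA, i), 3))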
Inducing Decision Trees Automatically: ID3
The algorithm to build the decision tree is simple.
The information gain is computed on the data set for all attributes in
the set of attributes, SA.
The attribute with the highest gain is selected to be the root of the
tree:
$$A = \arg\max_{a \in SA} \bigl(I(p, n) - E(a)\bigr).$$
The data set is split into v subsets $\{N_1, \ldots, N_v\}$, where the value of A for the objects in $N_i$ is $A_i$.
For each subset, a corresponding node is created below the root.
This process is repeated recursively for each node of the tree with the
subset it contains until all the objects of the node are either positive or
negative.
For a training set of N instances each having M attributes, ID3’s
complexity to generate a decision tree is O(NM).
Decision Tree Learning Algorithm (From the Textbook)
function DTL(examples, attributes, parent_examples) returns a tree
    if examples is empty then
        return PluralityValue(parent_examples)
    else if all examples have the same classification then
        return the classification
    else if attributes is empty then
        return PluralityValue(examples)
    else
        A ← arg max_{a ∈ attributes} InformationGain(a, examples)
        tree ← a new decision tree with root test A
        for all vk ∈ A do
            exs ← {e : e ∈ examples and e.A = vk}
            subtree ← DTL(exs, attributes − A, examples)
            add a branch to tree with label {A = vk} and subtree subtree
        return tree
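A minimal Python transcription of this pseudocode, reusing the entropy and information_gain helpers from the earlier sketch; a tree is represented either as a class label or as a dictionary mapping an attribute index to its branches:

# A minimal transcription of the DTL pseudocode above. It reuses the
# entropy and information_gain functions of the previous sketch.
# A tree is either a class label or a dict {attribute index: {value: subtree}}.
from collections import Counter

def plurality_value(examples):
    # Most common class among the examples
    return Counter(e[-1] for e in examples).most_common(1)[0][0]

def dtl(examples, attributes, parent_examples):
    if not examples:
        return plurality_value(parent_examples)
    classes = set(e[-1] for e in examples)
    if len(classes) == 1:
        return classes.pop()
    if not attributes:
        return plurality_value(examples)
    a = max(attributes, key=lambda attr: information_gain(examples, attr))
    tree = {a: {}}
    for value in set(e[a] for e in examples):
        exs = [e for e in examples if e[a] == value]
        tree[a][value] = dtl(exs, [x for x in attributes if x != a], examples)
    return tree

# Example call with the DATA list and attribute indices of the previous sketch:
# print(dtl(DATA, [0, 1, 2, 3], DATA))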
Learning Curve
The classical evaluation technique uses a training set and a test set.
Generally, the larger the training set, the better the performance.
This can be visualized with a learning curve: the percentage of correct classifications on the test set as a function of the training set size.
How do we know that h ≈ f? (Hume's problem of induction)
1) Use theorems of computational/statistical learning theory
2) Try h on a new test set of examples (use the same distribution over example space as the training set)
[Figure: learning curve, % correct on the test set (0.4 to 1) against the training set size (0 to 100). From the textbook, Stuart Russell and Peter Norvig, Artificial Intelligence, 3rd ed., 2010, page 703, Chapter 18, Sections 1-3.]
Overfitting
When two classifiers have equal performance on a specific test set, the simplest one is supposed to be more general.
A small decision tree is always preferable to a larger one.
Complex classifiers may overfit the training data and have poor performance when the data set changes.
In the case of decision trees, overfitting shows up as deep trees where numerous leaves contain few examples.
Pruning Trees
Decision tree pruning consists in removing and merging leaves with few
objects, for instance one or two.
To prune the tree:
1. Generate the complete decision tree.
2. Consider the nodes that have only leaves as children and merge the leaves if the information gain is below a certain threshold.
3. To determine the threshold, we use a significance test.
This is the χ² pruning.
Contingency Tables
Example: A set of 12 positive and 8 negative examples.
Let us compare two attributes:
Attribute 1 with three values: {v11, v12, v13}
Attribute 2 with three values: {v21, v22, v23}
Contingency tables are devices to visualize frequency distributions.

Attribute 1:
             v11    v12    v13    Total
Positive       6      3      3    12 (60%)
Negative       4      2      2     8 (40%)
Total         10      5      5    20 (100%)

Attribute 2:
             v21    v22    v23    Total
Positive       1      1     10    12 (60%)
Negative       8      0      0     8 (40%)
Total          9      1     10    20 (100%)
Significance Test
An attribute with no significance would have the same proportions of
positive and negative examples before and after the test.
Proportions before, for instance 12 and 8:
$$\frac{p}{p+n} \quad\text{and}\quad \frac{n}{p+n}$$
Proportions after, for $v_k$, a value of the attribute:
$$\frac{p_k}{p_k+n_k} \quad\text{and}\quad \frac{n_k}{p_k+n_k}$$
The expected values of a nonsignificant attribute (null hypothesis) are:
$$\hat{p}_k = \frac{p}{p+n}\times(p_k+n_k), \qquad \hat{n}_k = \frac{n}{p+n}\times(p_k+n_k)$$
The total deviation for an attribute with values $v_1, \ldots, v_d$ is:
$$\Delta = \sum_{k=1}^{d}\left(\frac{(p_k-\hat{p}_k)^2}{\hat{p}_k} + \frac{(n_k-\hat{n}_k)^2}{\hat{n}_k}\right)$$
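A small sketch of this deviation for the two contingency tables of the previous slide; with three values the degree of freedom is 2, whose 5% critical value is about 5.99, so Attribute 2 is significant while Attribute 1 is not:

# Deviation for the two contingency tables above (12 positive and
# 8 negative examples). Each attribute is a list of (p_k, n_k) pairs,
# one pair per attribute value.
def deviation(value_counts):
    p = sum(pk for pk, nk in value_counts)
    n = sum(nk for pk, nk in value_counts)
    delta = 0.0
    for pk, nk in value_counts:
        p_hat = p / (p + n) * (pk + nk)   # expected positives under the null hypothesis
        n_hat = n / (p + n) * (pk + nk)   # expected negatives under the null hypothesis
        delta += (pk - p_hat) ** 2 / p_hat + (nk - n_hat) ** 2 / n_hat
    return delta

print(deviation([(6, 4), (3, 2), (3, 2)]))    # Attribute 1: 0.0, irrelevant
print(deviation([(1, 8), (1, 0), (10, 0)]))   # Attribute 2: about 16.3, significant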
Pruning Trees (II)
The null hypothesis corresponds to an irrelevant attribute: ∆ = 0.
∆ values are distributed according to the χ² distribution.
If v is the number of values of an attribute, the degree of freedom is v − 1.
The corresponding χ² distributions can be found in statistical tables.
For three degrees of freedom: 10%: 6.25, 5%: 7.81, 1%: 11.34, 1‰: 16.27
The significance level is usually 5%
An alternate pruning method is to modify the termination condition of
the algorithm:
We stop when the information gain is below a certain threshold.
Dealing with Real Data
Unknown attribute values: Use the most frequent value of the
attribute or all the values;
Attributes with many values (large attribute domains): difficult to handle for decision trees. The book suggests using the gain ratio (a small sketch follows this list):
$$\text{Gain ratio} = \frac{\text{Information gain}}{\text{Information value}}, \quad\text{where}\quad \text{Information value} = -\sum_{i=1}^{v}\frac{p_i+n_i}{p+n}\log_2\frac{p_i+n_i}{p+n}$$
Numerical attributes: Find the binary split point that maximizes the
information gain
Numerical output: Regression tree
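The gain ratio mentioned above can be sketched as follows, reusing the information_gain function from the earlier ID3 sketch; the information value is the entropy of the split sizes, which penalizes attributes with many values:

# Gain ratio sketch, assuming information_gain(examples, attr_index)
# from the earlier ID3 sketch is available.
from math import log2

def information_value(examples, attr_index):
    total = len(examples)
    sizes = [sum(1 for e in examples if e[attr_index] == value)
             for value in set(e[attr_index] for e in examples)]
    return -sum(s / total * log2(s / total) for s in sizes)

def gain_ratio(examples, attr_index):
    iv = information_value(examples, attr_index)
    return 0.0 if iv == 0.0 else information_gain(examples, attr_index) / iv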
Evaluation
The standard evaluation procedure is to train the classifier on a
training set and evaluate the performance on a test set.
When we have only one set, we divide it into two subsets: the training set and the test set (or holdout data).
The split can be 90–10 or 80–20.
This often optimizes the classifier for a specific test set and leads to overfitting.
Cross Validation
N-fold cross-validation mitigates this overfitting.
The set is partitioned into N subsets, N = 5 for example, one of them
being the test set (red) and the rest the training set (gray).
The process is repeated N times with a different test set: N folds
[Figure: 5-fold cross-validation. The set is split into five parts; in fold k, part k is the test set (red) and the four remaining parts form the training set (gray).]
At the extreme, we have leave-one-out cross-validation.
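As a sketch, scikit-learn (listed with the other toolkits at the end of the lecture) provides cross_val_score for this; here a decision tree is evaluated with 5-fold cross-validation on (character count, count of A's) features taken from the first five French and English chapters of the classification table:

# 5-fold cross-validation with scikit-learn. The features are
# (character count, count of A's) for the first five French chapters
# (class 1) and the first five English chapters (class 0).
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X = [[36961, 2503], [43621, 2992], [15694, 1042], [36231, 2487], [29945, 2014],
     [35680, 2217], [42514, 2761], [15162, 990], [35298, 2274], [29800, 1865]]
y = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)  # one score per fold
print(scores, scores.mean())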
Model Selection
Validation, as described, applies to one classification method.
We can also use it to select a classification method and its parametrization.
This requires three sets: a training set, a development set, and a test set.
Loss
Depending on their type, errors can have different impacts.
Imagine a smoke detector that predicts fire.
This is equivalent to a classifier that uses input data from sensors and classifies them as fire or nonfire.
The loss function is defined as L(y , ŷ ) with ŷ = h(x), where x is the
vector of attributes, h the classifier, and y , the correct value.
The cost of L(fire, nonfire) is much higher than L(nonfire, fire).
L(fire, fire) = L(nonfire, nonfire) = 0
Common Loss Measures
L1(y, ŷ) = |y − ŷ|                 Absolute value loss
L2(y, ŷ) = (y − ŷ)²                Squared error loss
L0/1(y, ŷ) = 0 if y = ŷ, else 1    0/1 loss
Empirical Loss
We compute the empirical loss of a classifier h on a set of examples E
using the formula:
$$Loss(L, E, h) = \frac{1}{N}\sum_{(x,y)\in E} L(y, h(x)).$$
For continuous functions:
$$Loss(L, E, h) = \frac{1}{N}\sum_{(x,y)\in E} (y - h(x))^2.$$
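A direct transcription of this formula; the linear hypothesis h and the two data points in the example call are only illustrative:

# The empirical loss: the average of a loss function over the (x, y)
# examples in E.
def empirical_loss(loss, examples, h):
    return sum(loss(y, h(x)) for x, y in examples) / len(examples)

squared_loss = lambda y, y_hat: (y - y_hat) ** 2
zero_one_loss = lambda y, y_hat: 0 if y == y_hat else 1

E = [(36961, 2503), (43621, 2992)]      # (characters, A's) for two chapters
h = lambda x: 0.07 * x                  # an arbitrary linear hypothesis
print(empirical_loss(squared_loss, E, h))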
Evaluation
There are different kinds of measures to evaluate the performance of
machine learning techniques, for instance:
Precision and recall in information retrieval and natural language
processing;
The receiver operating characteristic (ROC) in medicine.
                        Classified as P        Classified as N
Positive examples: P    True positives: A      False negatives: C
Negative examples: N    False positives: B     True negatives: D

More on the receiver operating characteristic here:
http://en.wikipedia.org/wiki/Receiver_operating_characteristic
Recall, Precision, and the F-Measure
The accuracy is
$$\frac{|A \cup D|}{|P \cup N|}.$$
Recall measures how many of the relevant examples the system has classified correctly, for P:
$$\mathrm{Recall} = \frac{|A|}{|A \cup C|}.$$
Precision is the accuracy of what has been returned, for P:
$$\mathrm{Precision} = \frac{|A|}{|A \cup B|}.$$
Recall and precision are combined into the F-measure, which is defined as the harmonic mean of both numbers:
$$F = \frac{2 \cdot \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.$$
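These measures follow directly from the counts A, B, C, and D of the previous slide; the counts in the example call are made up:

# Accuracy, precision, recall, and F-measure from the counts of the
# 2 x 2 table: A true positives, B false positives, C false negatives,
# D true negatives.
def evaluation_measures(A, B, C, D):
    accuracy = (A + D) / (A + B + C + D)
    precision = A / (A + B)
    recall = A / (A + C)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

print(evaluation_measures(A=40, B=10, C=20, D=30))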
Measuring Quality: The Confusion Matrix
A task in natural language processing: Identify the parts of speech (POS)
of words.
Example: The can rusted
The human: The/art (DT) can/noun (NN) rusted /verb (VBD)
The POS tagger: The/art (DT) can/modal (MD) rusted /verb (VBD)
↓ Correct  Tagger →   DT    IN    JJ    NN    RB    RP    VB   VBD   VBG   VBN
DT                  99.4   0.3     –     –   0.3     –     –     –     –     –
IN                   0.4  97.5     –     –   1.5   0.5     –     –     –     –
JJ                     –   0.1  93.9   1.8   0.9     –   0.1   0.1   0.4   1.5
NN                     –     –   2.2  95.5     –     –   0.2     –   0.4     –
RB                   0.2   2.4   2.2   0.6  93.2   1.2     –     –     –     –
RP                     –  24.7     –   1.1  12.6  61.5     –     –     –     –
VB                     –     –   0.3   1.4     –     –  96.0     –     –   0.2
VBD                    –     –   0.3     –     –     –     –  94.6     –   4.8
VBG                    –     –   2.5   4.4     –     –     –     –  93.0     –
VBN                    –     –   4.6     –     –     –     –   4.3     –  90.6
After Franz (1996, p. 124)
The Weka Toolkit
Weka: A powerful collection of machine-learning algorithms
http://www.cs.waikato.ac.nz/ml/weka/.
The Weka Toolkit
Running ID3
ARFF: The Weka Data Format
Storing Quinlan’s data set in Weka’s attribute-relation file format (ARFF)
http://weka.wikispaces.com/ARFF:
@relation weather.symbolic
@attribute outlook {sunny, overcast, rainy}
@attribute temperature {hot, mild, cool}
@attribute humidity {high, normal}
@attribute windy {TRUE, FALSE}
@attribute play {yes, no}
@data
sunny,hot,high,FALSE,no
sunny,hot,high,TRUE,no
overcast,hot,high,FALSE,yes
rainy,mild,high,FALSE,yes
rainy,cool,normal,FALSE,yes
rainy,cool,normal,TRUE,no
overcast,cool,normal,TRUE,yes
Other Toolkits
Scikit-learn: http://scikit-learn.org/
Keras: https://keras.io/
R: http://www.r-project.org/
Matlab/Octave: http://www.gnu.org/software/octave/
H2O: http://www.h2o.ai/