CS 430 / INFO 430
Information Retrieval
Lecture 10
Probabilistic Information Retrieval
Course Administration
Assignment 1
You should have received results by email
Assignment 2
Will be posted on Wednesday
Calculation of tf.idf
If you wish to check your calculation of tf.idf, you can use the
test data in the two Excel spreadsheets linked from the testData
page on the web site.
Example: What is the tf and idf for the term monstrous?
Definition: tf_ij = f_ij / m_i
From DocumentFreq1.xls, there is one posting for monstrous
in file19.txt
From AllFiles1.xls, f_ij = 1, m_i = 16
tf_ij = 1/16 = 0.0625
Calculation of tf.idf (continued)
Definition: idf_j = log2 (n/n_j) + 1
From DocumentFreq1.xls, there is one posting for monstrous
idf_j = log2 (n/n_j) + 1
      = log2 (20/1) + 1
      = 5.322
tf_ij.idf_j = 0.0625 × 5.322 = 0.3326
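The arithmetic above can be checked with a few lines of Python; the variable names are just for illustration, the numbers are the ones from the spreadsheets:

```python
import math

# Values from the spreadsheet example for the term "monstrous":
f_ij = 1    # occurrences of "monstrous" in file19.txt
m_i = 16    # normalizing count for file19.txt (from AllFiles1.xls)
n = 20      # total number of documents in the collection
n_j = 1     # number of documents containing "monstrous"

tf = f_ij / m_i               # 1/16 = 0.0625
idf = math.log2(n / n_j) + 1  # log2(20) + 1 = 5.322
print(round(tf * idf, 4))     # 0.3326
```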
Three Approaches to Information
Retrieval
Many authors divide the methods of information retrieval into
three categories:
Boolean (based on set theory)
Vector space (based on linear algebra)
Probabilistic (based on Bayesian statistics)
In practice, the latter two have considerable overlap.
Probability: independent random
variables and conditional probability
Notation
Let a, b be two events, with probability P(a) and P(b).
Independent events
The events a and b are independent if and only if:
P(a ∧ b) = P(a) P(b)
Conditional probability
P(a | b) is the probability of a given b, also called the
conditional probability of a given b.
P(a | b) P(b) = P(a ∧ b) = P(b | a) P(a)
Example: independent random
variables and conditional probability
Independent
a and b are the results of throwing two dice
P(a=5 | b=3) = P(a=5) = 1/6
Not independent
a and b are the results of throwing two dice
t=a+b
P(t=8 | a=2) = 1/6
P(t=8 | a=1) = 0
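Both cases can be verified by enumerating the 36 equally likely outcomes of two dice; a brief sketch:

```python
from itertools import product

# All 36 equally likely outcomes (a, b) of throwing two dice.
outcomes = list(product(range(1, 7), repeat=2))

def p(event):
    """Probability of an event over all outcomes."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

def p_given(event, cond):
    """Conditional probability P(event | cond)."""
    sub = [o for o in outcomes if cond(o)]
    return sum(1 for o in sub if event(o)) / len(sub)

# Independent: P(a=5 | b=3) = P(a=5) = 1/6
print(p_given(lambda o: o[0] == 5, lambda o: o[1] == 3))  # 1/6
print(p(lambda o: o[0] == 5))                             # 1/6

# Not independent, with t = a + b:
print(p_given(lambda o: sum(o) == 8, lambda o: o[0] == 2))  # 1/6
print(p_given(lambda o: sum(o) == 8, lambda o: o[0] == 1))  # 0.0
```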
Probability Theory -- Bayesian Formulas
Notation
Let a, b be two events.
P(a | b) is the probability of a given b
Bayes Theorem
P(a | b) = P(b | a) P(a) / P(b)
P(ā | b) = P(b | ā) P(ā) / P(b)
where ā is the event not a
Derivation
P(a | b) P(b) = P(a ∧ b) = P(b | a) P(a)
Example of Bayes Theorem
Example
a: weight over 200 lb.
b: height over 6 ft.
[Quadrant diagram: the population is divided into four regions -- D (both a and b), C (a only), A (b only), B (neither) -- with each letter also standing for that region's probability.]
P(a | b) = D / (A + D) = D / P(b)
P(b | a) = D / (D + C) = D / P(a)
D is P(a ∧ b)
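With some made-up counts for the four quadrants (hypothetical numbers, purely for illustration), the two conditional probabilities and Bayes Theorem can be checked numerically:

```python
# Hypothetical population counts for the four quadrants:
# D: over 200 lb AND over 6 ft, C: over 200 lb only,
# A: over 6 ft only, B: neither.
A, B, C, D = 10, 70, 5, 15
N = A + B + C + D

p_a = (C + D) / N          # P(a): weight over 200 lb
p_b = (A + D) / N          # P(b): height over 6 ft

p_a_given_b = D / (A + D)  # P(a | b)
p_b_given_a = D / (D + C)  # P(b | a)

# Bayes Theorem: P(a | b) = P(b | a) P(a) / P(b)
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
print(p_a_given_b)  # 0.6
```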
Probability Ranking Principle
"If a reference retrieval system’s response to each request is a
ranking of the documents in the collection in order of
decreasing probability of usefulness to the user who submitted
the request, where the probabilities are estimated as accurately
as possible on the basis of whatever data is made available to the
system for this purpose, then the overall effectiveness of the
system to its users will be the best that is obtainable on the
basis of that data."
W.S. Cooper
Probabilistic Ranking
Basic concept:
"For a given query, if we know some documents that are
relevant, terms that occur in those documents should be given
greater weighting in searching for other relevant documents.
By making assumptions about the distribution of terms and
applying Bayes Theorem, it is possible to derive weights
theoretically."
Van Rijsbergen
Concept
R is a set of documents that are guessed to be relevant, and R̄ is
the complement of R.
1. Guess a preliminary probabilistic description of R and
use it to retrieve a first set of documents.
2. Interact with the user to refine the description.
3. Repeat, thus generating a succession of approximations
to R.
Probabilistic Principle
Basic concept:
The probability that a document is relevant to a query is assumed to
depend only on the terms in the query and the terms used to index the
document.
Given a user query q, the ideal answer set, R, is the set of all
relevant documents.
Given a user query q and a document d_j in the collection, the
probabilistic model estimates the probability that the user will
find d_j relevant, i.e., that d_j is a member of R.
Probabilistic Principle
Similarity measure:
The similarity(d_j, q) is the ratio of the probability that d_j is
relevant to q, to the probability that d_j is not relevant to q.
This measure runs from near zero, if the probability is small that
the document is relevant, to large as the probability of relevance
approaches one.
Probabilistic Principle
Given a query q and a document d_j, the model needs an estimate
of the probability that the user finds d_j relevant, i.e., P(R | d_j).

similarity(d_j, q) = P(R | d_j) / P(R̄ | d_j)

                   = P(d_j | R) P(R) / (P(d_j | R̄) P(R̄))   by Bayes Theorem

                   = k × P(d_j | R) / P(d_j | R̄)

where k = P(R) / P(R̄) is constant for a given collection.
P(d_j | R) is the probability of randomly selecting d_j from R.
Binary Independence Retrieval Model
(BIR)
Let x = (x_1, x_2, ..., x_n) be the term incidence vector for d_j:
x_i = 1 if term i is in the document and 0 otherwise.
Let q = (q_1, q_2, ..., q_n) be the term incidence vector for the query.
We estimate P(d_j | R) by P(x | R).
If the index terms are independent:
P(x | R) = P(x_1 | R) P(x_2 | R) ... P(x_n | R) = ∏ P(x_i | R)
Binary Independence Retrieval Model
(BIR)
S = similarity(d_j, q) = k ∏ P(x_i | R) / ∏ P(x_i | R̄)

Since each x_i is either 0 or 1, this can be written:

S = k [ ∏_{x_i = 1} P(x_i = 1 | R) / P(x_i = 1 | R̄) ] × [ ∏_{x_i = 0} P(x_i = 0 | R) / P(x_i = 0 | R̄) ]
Binary Independence Retrieval Model
(BIR)
For terms that appear in the query let
p_i = P(x_i = 1 | R)
r_i = P(x_i = 1 | R̄)
For terms that do not appear in the query assume
p_i = r_i

S = k [ ∏_{x_i = q_i = 1} p_i / r_i ] × [ ∏_{x_i = 0, q_i = 1} (1 - p_i) / (1 - r_i) ]

  = k [ ∏_{x_i = q_i = 1} p_i (1 - r_i) / (r_i (1 - p_i)) ] × [ ∏_{q_i = 1} (1 - p_i) / (1 - r_i) ]

The second product is constant for a given query.
Binary Independence Retrieval Model
(BIR)
Taking logs and ignoring factors that are constant for a given
query, we have:
similarity(d, q) = ∑ log{ p_i (1 - r_i) / ((1 - p_i) r_i) }
where the summation is taken over those terms that appear in
both the query and the document.
This similarity measure can be used to rank all documents
against the query q.
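The ranking formula can be sketched directly; the function name and the example term estimates below are illustrative assumptions, not part of the lecture:

```python
import math

def bir_similarity(doc_terms, query_terms, p, r):
    """Sum of log{ p_i (1 - r_i) / ((1 - p_i) r_i) } over the terms
    that appear in both the query and the document."""
    score = 0.0
    for term in query_terms & doc_terms:
        score += math.log((p[term] * (1 - r[term])) /
                          ((1 - p[term]) * r[term]))
    return score

# Hypothetical estimates for two query terms:
p = {"retrieval": 0.5, "probabilistic": 0.5}
r = {"retrieval": 0.25, "probabilistic": 0.1}

# Score = log(3) + log(9) = log(27); terms outside the query are ignored.
print(bir_similarity({"probabilistic", "retrieval", "model"},
                     {"probabilistic", "retrieval"}, p, r))
```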
Estimates of P(x_i | R)
Initial guess, with no information to work from:
p_i = P(x_i = 1 | R) = c
r_i = P(x_i = 1 | R̄) = n_i / N
where:
c is an arbitrary constant, e.g., 0.5
n_i is the number of documents that contain term i
N is the total number of documents in the collection
Improving the Estimates of P(x_i | R)
Human feedback -- relevance feedback (discussed later)
Automatically:
(a) Run query q using the initial values. Consider the t top-ranked
documents. Let s_i be the number of these documents that
contain term i.
(b) The new estimates are:
p_i = P(x_i = 1 | R) = s_i / t
r_i = P(x_i = 1 | R̄) = (n_i - s_i) / (N - t)
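Steps (a) and (b) can be sketched as a small function; the name and data structures are illustrative assumptions:

```python
def update_estimates(top_docs, term_docs, N):
    """Re-estimate p_i and r_i from the t top-ranked documents.

    top_docs:  set of ids of the t top-ranked documents
    term_docs: dict mapping each term to the set of document ids
               containing it (so n_i = len(term_docs[term]))
    N:         total number of documents in the collection
    """
    t = len(top_docs)
    p, r = {}, {}
    for term, docs in term_docs.items():
        s_i = len(docs & top_docs)       # top-ranked docs containing the term
        n_i = len(docs)
        p[term] = s_i / t                # p_i = s_i / t
        r[term] = (n_i - s_i) / (N - t)  # r_i = (n_i - s_i) / (N - t)
    return p, r
```

In practice a small constant (commonly 0.5) is added to numerator and denominator so that p_i and r_i never reach 0 or 1, which would make the log weights undefined; that refinement is not shown on the slide.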
Discussion of Probabilistic Model
Advantages
• Based on a firm theoretical foundation
Disadvantages
• Initial definition of R has to be guessed.
• Weights ignore term frequency
• Assumes independent index terms (as does vector model)