CS 511a: Artificial Intelligence
Spring 2013
Lecture 13: Probability / State-Space Uncertainty
03/18/2013
Robert Pless
Class directly adapted from Kilian Weinberger, with multiple slides adapted from Dan Klein and Deva Ramanan
Announcements
- Upcoming:
  - Project 3 out now (-ish)
  - Due Monday, April 1, 11:59pm + 1 minute
  - Midterm graded, to be returned on Wednesday.
Course Status:
- A search problem:
  - set of states s ∈ S
  - set of actions a ∈ A
  - start state
  - goal test
  - cost function
  - Solution is a sequence of actions (a plan) which transforms the start state to a goal state
- An MDP:
  - set of states s ∈ S
  - set of actions a ∈ A
  - transition function T(s, a, s'): probability that a from s leads to s'
  - reward function R(s, a, s') (sometimes just R(s) or R(s'))
  - start state (or distribution)
  - one or many terminal states
  - Solution is a policy dictating the action to take in each state.
- Reinforcement learning: MDPs where we don't know the transition or reward functions
Uncertainty
- General situation:
  - Evidence: the agent knows certain things about the state of the world (e.g., sensor readings or symptoms)
  - Hidden variables: the agent needs to reason about other aspects (e.g., where an object is, what disease is present, or how a sensor is failing)
  - Model: the agent knows something about how the known variables relate to the unknown variables
- Probabilistic reasoning gives us a framework for managing our beliefs and knowledge

[Demo]
Inference in Ghostbusters
- A ghost is in the grid somewhere
- Sensor readings tell how close a square is to the ghost:
  - On the ghost: red
  - 1 or 2 away: orange
  - 3 or 4 away: yellow
  - 5+ away: green
- Sensors are noisy, but we know P(Color | Distance). For example, at distance 3:

    P(red | 3)    = 0.05
    P(orange | 3) = 0.15
    P(yellow | 3) = 0.5
    P(green | 3)  = 0.3
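As a quick illustration, here is a minimal sketch of storing and querying that sensor model. The nested-dictionary layout and the function name are made up for this example; they are not the Ghostbusters project's actual data structures.

```python
# Minimal sketch (not the project's real API): P(Color | Distance) as a nested dict.
sensor_model = {
    3: {"red": 0.05, "orange": 0.15, "yellow": 0.5, "green": 0.3},
}

def reading_prob(color, distance):
    """Return P(Color = color | Distance = distance) by table lookup."""
    return sensor_model[distance][color]

print(reading_prob("yellow", 3))      # 0.5
print(sum(sensor_model[3].values()))  # ≈ 1.0: each conditional row sums to one
```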
Today
- Goal: modeling and using distributions over LARGE numbers of random variables
- Probability
  - Random Variables
  - Joint and Marginal Distributions
  - Conditional Distributions
  - Product Rule, Chain Rule, Bayes' Rule
  - Inference
  - Independence
- You'll need all this stuff A LOT for the next few weeks, so make sure you go over it now!
Random Variables
- A random variable is some aspect of the world about which we (may) have uncertainty
  - R = Is it raining?
  - D = How long will it take to drive to work?
  - L = Where am I?
- We denote random variables with capital letters
- Like variables in a CSP, random variables have domains
  - R in {true, false} (sometimes written as {+r, ¬r})
  - D in [0, ∞)
  - L in possible locations, maybe {(0,0), (0,1), …}
Probabilistic Models
Probability Distributions
- Unobserved random variables have distributions

    P(T):               P(W):
      T      P            W       P
      warm   0.5          sun     0.6
      cold   0.5          rain    0.1
                          fog     0.2999999999999
                          meteor  0.0000000000001

- A distribution is a TABLE of probabilities of values
- A probability (lower case value) is a single number
- Must have: P(X = x) ≥ 0 for every value x, and the values sum to one: Σ_x P(X = x) = 1
Joint Distributions
- A joint distribution over a set of random variables X_1, …, X_n specifies a real number for each assignment (or outcome): P(X_1 = x_1, …, X_n = x_n)
- Size of distribution if n variables with domain sizes d?
- Must obey: P(x_1, …, x_n) ≥ 0 and Σ over all assignments P(x_1, …, x_n) = 1

    Distribution over T, W:
      T      W      P
      hot    sun    0.4
      hot    rain   0.1
      cold   sun    0.2
      cold   rain   0.3

- A joint distribution table lists the probabilities of all combinations. For all but the smallest distributions, this is impractical to write out.
Probabilistic Models
- A probabilistic model is a joint distribution over a set of random variables
- Probabilistic models:
  - (Random) variables with domains; assignments are called outcomes
  - Joint distributions: say whether assignments (outcomes) are likely
  - Normalized: sum to 1.0
  - Ideally: only certain variables directly interact
- Constraint satisfaction problems:
  - Variables with domains
  - Constraints: state whether assignments are possible
  - Ideally: only certain variables directly interact

    Distribution over T, W:        Constraint over T, W:
      T      W      P                T      W      P
      hot    sun    0.4              hot    sun    T
      hot    rain   0.1              hot    rain   F
      cold   sun    0.2              cold   sun    F
      cold   rain   0.3              cold   rain   T
Events
- An event is a set E of outcomes: P(E) = Σ over outcomes (x_1, …, x_n) in E of P(x_1, …, x_n)
- From a joint distribution, we can calculate the probability of any event

      T      W      P
      hot    sun    0.4
      hot    rain   0.1
      cold   sun    0.2
      cold   rain   0.3

  - Probability that it's hot AND sunny?
  - Probability that it's hot?
  - Probability that it's hot OR sunny?
- Typically, the events we care about are partial assignments, like P(T = hot)
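A small sketch of answering those three questions by summing outcome probabilities from the table above. The dictionary representation and the helper name are illustrative, not course code.

```python
# Minimal sketch: event probabilities from the joint table, keyed by (T, W) outcomes.
joint_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
            ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def event_prob(joint, predicate):
    """P(E) = sum of P(outcome) over outcomes where predicate(outcome) is true."""
    return sum(p for outcome, p in joint.items() if predicate(outcome))

print(event_prob(joint_TW, lambda o: o == ("hot", "sun")))             # 0.4
print(event_prob(joint_TW, lambda o: o[0] == "hot"))                   # 0.5
print(event_prob(joint_TW, lambda o: o[0] == "hot" or o[1] == "sun"))  # ≈ 0.7
```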
Marginal Distributions
- Marginal distributions are sub-tables which eliminate variables
- Marginalization (summing out): combine collapsed rows by adding, e.g. P(t) = Σ_w P(t, w)

    Joint P(T, W):            Marginal P(T):      Marginal P(W):
      T      W      P           T      P            W      P
      hot    sun    0.4         hot    0.5          sun    0.6
      hot    rain   0.1         cold   0.5          rain   0.4
      cold   sun    0.2
      cold   rain   0.3
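Here is a minimal sketch of summing out a variable from that joint table; the function name and tuple-indexed representation are assumptions made for this example.

```python
# Minimal sketch: marginalization (summing out) over a joint table of (T, W) outcomes.
from collections import defaultdict

joint_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
            ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def marginalize(joint, keep_index):
    """Sum out every variable except the one at position keep_index of the outcome tuple."""
    marginal = defaultdict(float)
    for outcome, p in joint.items():
        marginal[outcome[keep_index]] += p
    return dict(marginal)

print(marginalize(joint_TW, 0))  # P(T): {'hot': 0.5, 'cold': 0.5}
print(marginalize(joint_TW, 1))  # P(W): {'sun': 0.6, 'rain': 0.4} (up to float rounding)
```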
Conditional Probabilities
- A simple relation between joint and conditional probabilities:
    P(a | b) = P(a, b) / P(b)
- In fact, this is taken as the definition of a conditional probability

      T      W      P
      hot    sun    0.4
      hot    rain   0.1
      cold   sun    0.2
      cold   rain   0.3

  e.g. P(W = sun | T = cold) = P(W = sun, T = cold) / P(T = cold) = 0.2 / 0.5 = 0.4
Conditional Distributions
- Conditional distributions are probability distributions over some variables given fixed values of others

    Joint distribution P(T, W):     Conditional distributions:
      T      W      P                P(W | T = hot):    P(W | T = cold):
      hot    sun    0.4                W     P             W     P
      hot    rain   0.1                sun   0.8           sun   0.4
      cold   sun    0.2                rain  0.2           rain  0.6
      cold   rain   0.3
Normalization Trick
- A trick to get a whole conditional distribution at once:
  - Select the joint probabilities matching the evidence
  - Normalize the selection (scale values so the table sums to one)
- Compute P(T | W = rain):

    Joint P(T, W):          Select W = rain:        Normalize:
      T      W      P         T      W      P         T      P
      hot    sun    0.4       hot    rain   0.1       hot    0.25
      hot    rain   0.1       cold   rain   0.3       cold   0.75
      cold   sun    0.2
      cold   rain   0.3

- Why does this work? The sum of the selection is P(evidence)! (here, P(W = rain))
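A minimal sketch of the select-then-normalize trick on the same table; the function name is an assumption made for this example.

```python
# Minimal sketch of the normalization trick: select rows matching the evidence,
# then rescale so the selection sums to one.
joint_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
            ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def condition_on_weather(joint, w):
    """Return P(T | W = w) by select-then-normalize."""
    selected = {t: p for (t, w2), p in joint.items() if w2 == w}
    evidence_prob = sum(selected.values())   # sum of the selection = P(W = w)
    return {t: p / evidence_prob for t, p in selected.items()}

print(condition_on_weather(joint_TW, "rain"))  # {'hot': 0.25, 'cold': 0.75}
```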
The Product Rule
- Sometimes we have conditional distributions but want the joint:
    P(x, y) = P(x | y) P(y)
- Example with a new weather condition, "dryness":

    P(W):             P(D | W):                P(D, W) = P(D | W) P(W):
      W     P           D     W      P           D     W      P
      sun   0.8         wet   sun    0.1         wet   sun    0.08
      rain  0.2         dry   sun    0.9         dry   sun    0.72
                        wet   rain   0.7         wet   rain   0.14
                        dry   rain   0.3         dry   rain   0.06
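The sketch below builds the joint from the conditional and the marginal, reproducing the rightmost table above; the variable names are chosen for this example only.

```python
# Minimal sketch of the product rule: P(D, W) = P(D | W) * P(W).
p_W = {"sun": 0.8, "rain": 0.2}
p_D_given_W = {"sun": {"wet": 0.1, "dry": 0.9},
               "rain": {"wet": 0.7, "dry": 0.3}}

joint_DW = {(d, w): p_D_given_W[w][d] * p_W[w]
            for w in p_W for d in p_D_given_W[w]}

print(joint_DW[("dry", "sun")])   # ≈ 0.72
print(sum(joint_DW.values()))     # ≈ 1.0, so the joint is normalized
```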
Inference
Probabilistic Inference
- Probabilistic inference: compute a desired probability from other known probabilities (e.g., a conditional from the joint)
- We generally compute conditional probabilities
  - P(on time | no reported accidents) = 0.90
  - These represent the agent's beliefs given the evidence
- Probabilities change with new evidence:
  - P(on time | no accidents, 5 a.m.) = 0.95
  - P(on time | no accidents, 5 a.m., raining) = 0.80
  - Observing new evidence causes beliefs to be updated
Inference by Enumeration
- P(W = sun)?
- P(W = sun | S = winter)?
- P(W = sun | S = winter, T = hot)?

    Joint P(S, T, W):
      S        T      W      P
      summer   hot    sun    0.30
      summer   hot    rain   0.05
      summer   cold   sun    0.10
      summer   cold   rain   0.05
      winter   hot    sun    0.10
      winter   hot    rain   0.05
      winter   cold   sun    0.15
      winter   cold   rain   0.20
Inference by Enumeration
- General case:
  - Evidence variables: E_1, …, E_k = e_1, …, e_k
  - Query* variable: Q
  - Hidden variables: H_1, …, H_r
  (all variables together: Q, E_1 … E_k, H_1 … H_r)
- We want: P(Q | e_1, …, e_k)
- First, select the entries consistent with the evidence
- Second, sum out H to get the joint of the query and evidence:
    P(Q, e_1, …, e_k) = Σ over h_1, …, h_r of P(Q, h_1, …, h_r, e_1, …, e_k)
- Finally, normalize the remaining entries to conditionalize
- Obvious problems:
  - Worst-case time complexity O(d^n)
  - Space complexity O(d^n) to store the joint distribution

* Works fine with multiple query variables, too
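Below is a minimal sketch of the three-step procedure (select, sum out, normalize) applied to the P(S, T, W) table from the previous slide; it also answers that slide's three queries. The list-of-dicts representation and function name are assumptions for this example.

```python
# Minimal sketch of inference by enumeration over the P(S, T, W) table.
joint = [
    ({"S": "summer", "T": "hot",  "W": "sun"},  0.30),
    ({"S": "summer", "T": "hot",  "W": "rain"}, 0.05),
    ({"S": "summer", "T": "cold", "W": "sun"},  0.10),
    ({"S": "summer", "T": "cold", "W": "rain"}, 0.05),
    ({"S": "winter", "T": "hot",  "W": "sun"},  0.10),
    ({"S": "winter", "T": "hot",  "W": "rain"}, 0.05),
    ({"S": "winter", "T": "cold", "W": "sun"},  0.15),
    ({"S": "winter", "T": "cold", "W": "rain"}, 0.20),
]

def enumerate_query(joint, query_var, evidence):
    """P(query_var | evidence): select consistent rows, sum out hidden vars, normalize."""
    totals = {}
    for outcome, p in joint:
        if all(outcome[v] == val for v, val in evidence.items()):  # select
            q = outcome[query_var]
            totals[q] = totals.get(q, 0.0) + p                     # sum out hidden variables
    z = sum(totals.values())                                       # = P(evidence)
    return {q: p / z for q, p in totals.items()}                   # normalize

print(enumerate_query(joint, "W", {}))                              # P(W = sun) = 0.65
print(enumerate_query(joint, "W", {"S": "winter"}))                 # P(W = sun | winter) = 0.5
print(enumerate_query(joint, "W", {"S": "winter", "T": "hot"}))     # P(W = sun | winter, hot) ≈ 0.67
```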
Monty Hall Problem
- A car is behind one of three doors
- The contestant picks door #1
- The host (Monty) shows that behind door #2 there is no car
- Should the contestant switch to door #3 or stay with #1?
- Random variables:
  - C (= car location), M (= Monty's pick)
- What is the query? What is the evidence?
- What is P(C)?
- What are the CPTs P(M | C = c)?
- How do you solve for P(C | evidence)? (A brute-force check appears in the sketch below; the following slides derive it by hand.)
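A minimal brute-force sketch (not part of any course project): enumerate the joint P(C, M) assuming the contestant picked door 1, then condition on Monty opening door 2.

```python
# Minimal sketch: Monty Hall posterior by enumeration, contestant picked door 1.
p_C = {1: 1/3, 2: 1/3, 3: 1/3}                       # prior over the car's door

def p_M_given_C(m, c):
    """Monty opens a door that is neither the contestant's pick (door 1) nor the car."""
    options = [d for d in (2, 3) if d != c]           # doors Monty is allowed to open
    return 1 / len(options) if m in options else 0.0

joint = {(c, m): p_C[c] * p_M_given_C(m, c) for c in (1, 2, 3) for m in (1, 2, 3)}
selected = {c: p for (c, m), p in joint.items() if m == 2}   # evidence: M = door 2
z = sum(selected.values())                                   # = P(M = 2)
posterior = {c: p / z for c, p in selected.items()}
print(posterior)   # {1: 1/3, 2: 0, 3: 2/3} -> switching to door 3 doubles the chance of winning
```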
Probability Tables
Assume (without loss of generality) the candidate picks door 1.
We need the prior P(C) and the CPTs P(M | C = c), each a table over {door1, door2, door3} (values filled in on the next slide):

    P(M | C = 1):     P(M | C = 2):     P(M | C = 3):     P(C):
      M       P         M       P         M       P         C       P
      door1             door1             door1             door1
      door2             door2             door2             door2
      door3             door3             door3             door3
Probability Tables
Assume (without loss of generality) the candidate picks door 1.

    P(M | C = 1):     P(M | C = 2):     P(M | C = 3):     P(C):
      M       P         M       P         M       P         C       P
      door1   0         door1   0         door1   0         door1   1/3
      door2   0.5       door2   0         door2   1         door2   1/3
      door3   0.5       door3   1         door3   0         door3   1/3

    Joint P(M, C) (to be filled in next):
      M       C       P
      door1   door1
      door2   door1
      door3   door1
      door1   door2
      door2   door2
      door3   door2
      door1   door3
      door2   door3
      door3   door3
Probability Tables
Assume (without loss of generality) the candidate picks door 1.
CPTs as above; the joint is P(M, C) = P(M | C) P(C):

    Joint P(M, C):
      M       C       P
      door1   door1   0
      door2   door1   1/6
      door3   door1   1/6
      door1   door2   0
      door2   door2   0
      door3   door2   1/3
      door1   door3   0
      door2   door3   1/3
      door3   door3   0
Probability Tables
Assume (without loss of generality) the candidate picks door 1.
CPTs and joint P(M, C) as above. Now we want the posterior P(C | M = door2):

    P(C | M = door2):
      C       P
      door1
      door2
      door3
Probability Tables
Assume (without loss of generality) the candidate picks door 1.
Select the joint entries consistent with M = door2 (un-normalized!!):

    C | M = door2 (un-normalized):
      C       P
      door1   1/6
      door2   0
      door3   1/3
Probability Tables
Assume (without loss of generality) the candidate picks door 1.
Normalize the selection to get the posterior:

    P(C | M = door2):
      C       P
      door1   1/3
      door2   0
      door3   2/3

So the contestant should switch: door 3 now has probability 2/3.
Humans don't get it… (figures)

Pigeons do! (figures; source: Discover)
The Chain Rule
- More generally, we can always write any joint distribution as an incremental product of conditional distributions:
    P(x_1, x_2, …, x_n) = Π_i P(x_i | x_1, …, x_{i-1})
- Why is this always true?
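As a quick numerical sanity check, the sketch below verifies the two-variable chain rule P(t, w) = P(t) P(w | t) on the T, W table used earlier; the variable names are chosen for this example.

```python
# Minimal sketch: check P(t, w) = P(t) * P(w | t) entry by entry.
joint_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
            ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

p_T = {"hot": 0.5, "cold": 0.5}                                        # marginal of T
p_W_given_T = {t: {w: joint_TW[(t, w)] / p_T[t] for w in ("sun", "rain")}
               for t in ("hot", "cold")}

for (t, w), p in joint_TW.items():
    assert abs(p - p_T[t] * p_W_given_T[t][w]) < 1e-12                 # chain rule holds
print("chain rule checked on every (t, w) entry")
```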
Bayes' Rule
- Two ways to factor a joint distribution over two variables:
    P(x, y) = P(x | y) P(y) = P(y | x) P(x)
- Dividing, we get:
    P(x | y) = P(y | x) P(x) / P(y)
- Why is this at all helpful?
  - Lets us build one conditional from its reverse
  - Often one conditional is tricky but the other one is simple
  - Foundation of many systems we'll see later (e.g., automated speech recognition, machine translation)
- In the running for most important AI equation!
Inference with Bayes' Rule
- Example: diagnostic probability from causal probability:
    P(cause | effect) = P(effect | cause) P(cause) / P(effect)
- Example: m is meningitis, s is stiff neck
    P(m | s) = P(s | m) P(m) / P(s)
  (example givens: P(s | m), P(m), P(s), with values shown on the slide)
- Note: the posterior probability of meningitis is still very small
- Note: you should still get stiff necks checked out! Why?
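The slide's actual given values are not in this transcript, so the numbers below are purely illustrative assumptions, used only to show the shape of the diagnostic computation.

```python
# ASSUMED numbers (NOT the slide's givens): sketch of P(m | s) = P(s | m) P(m) / P(s).
p_s_given_m = 0.8     # assumed: P(stiff neck | meningitis)
p_m = 0.0001          # assumed: P(meningitis)
p_s = 0.1             # assumed: P(stiff neck)

p_m_given_s = p_s_given_m * p_m / p_s    # Bayes' rule
print(p_m_given_s)                       # ≈ 0.0008 with these assumed numbers -> still tiny
```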
Probability Tables
Assume (without loss of generality) the candidate picks door 1.
CPTs as above: over (door1, door2, door3), P(M | C = 1) = (0, 0.5, 0.5), P(M | C = 2) = (0, 0, 1), P(M | C = 3) = (0, 1, 0); P(C) is uniform at 1/3.

Using Bayes' rule directly:

    P(C = 1 | M = 2) = P(M = 2 | C = 1) P(C = 1) / P(M = 2)
                     = P(M = 2 | C = 1) P(C = 1) / Σ_c P(M = 2 | C = c) P(C = c)
                     = 0.5 / (0.5 + 0 + 1) = 0.5 / 1.5 = 1/3

    P(C | M = door2):
      C       P
      door1   1/3
      door2   0
      door3   2/3
Ghostbusters, Revisited
- Let's say we have two distributions:
  - Prior distribution over ghost locations: P(G); let's say this is uniform
  - Sensor reading model: P(R | G)
    - Given: we know what our sensors do
    - R = reading color measured at (1,1)
    - e.g. P(R = yellow | G = (1,1)) = 0.1
- We can calculate the posterior distribution P(G | r) over ghost locations given a reading, using Bayes' rule:
    P(g | r) ∝ P(r | g) P(g)
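A minimal sketch of that posterior update on a tiny 2x2 grid. The sensor values below are made up for illustration; they are not the project's real sensor model.

```python
# Minimal sketch (ASSUMED sensor values): Bayes' rule posterior over ghost locations.
locations = [(0, 0), (0, 1), (1, 0), (1, 1)]
prior = {g: 1 / len(locations) for g in locations}            # uniform P(G)

def p_reading_given_ghost(reading, ghost):
    """ASSUMED model: 'red' is much more likely if the ghost is at the sensed square (1,1)."""
    at_sensor = ghost == (1, 1)
    return {("red", True): 0.7, ("red", False): 0.1,
            ("green", True): 0.3, ("green", False): 0.9}[(reading, at_sensor)]

reading = "red"
unnormalized = {g: p_reading_given_ghost(reading, g) * prior[g] for g in locations}
z = sum(unnormalized.values())                                # = P(R = red)
posterior = {g: p / z for g, p in unnormalized.items()}
print(posterior)   # probability mass shifts toward (1,1) after observing 'red'
```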
Summary
- Probabilistic model:
  - All we need is the joint probability table (JPT)
  - Can perform inference by enumeration
  - From the JPT we can obtain CPTs and marginals
  - (Can obtain the JPT from CPTs and marginals)
  - Bayes' Rule can perform inference directly from CPTs and marginals
- Next several lectures:
  - How to avoid storing the entire JPT
Why might these tables get so big?
- Markov property:
    p(pixel | rest of image) = p(pixel | neighborhood)
- Example application domain: image repair
  [figure: synthesizing a pixel from its neighborhood]