Visibility Graph
Voronoi Diagram
• Control is easy: stay equidistant from the closest obstacles
Exact Cell Decomposition
Plan over this graph
Localization
• Two types of approaches:
– Iconic: use raw sensor data directly. Match current sensor readings with what was observed in the past.
– Feature-based: extract features of the environment, such as corners and doorways. Match current observations against previously observed features.
Continuous Localization and Mapping
[Figure: over time, incoming sensor data is built into a local map; matching and registration of the local map against the global map yield x, y, θ offsets that correct the odometry.]
Continuous Localization and Mapping
[Flowchart: sensor data → construct local map → local map; encoder data → pose estimation → k possible poses; the local map is matched and scored against the global map at each candidate pose, and the best pose is registered into the global map.]
Matching
[Figure: the local map built around the robot, with an observed obstacle, is overlaid on the global map at different candidate positions ("Where am I on the global map?"), and each placement is assigned a probability.]
Examine different possible robot positions.
This sounds hard; do we need to localize?
https://www.youtube.com/watch?v=6KRjuuEVEZs
Matching and Registration
• Collect sensor readings and create a local map
• Estimate poses that the robot is likely to be in given the distance traveled from the last map update
– In theory k is infinite
– Discretize the space of possible positions (e.g., consider errors in increments of 5°)
– Try to model the likely behavior of your robot. Try to account for systematic errors (e.g., the robot tends to drift to one side)
Matching and Registration
• Collect n sensor readings and create a local map
• Estimate k poses (x, y, θ) that the robot is likely to be in given the distance travelled from the last map update
• For each of the k poses, score how well the local map matches the global map at that position
• Choose the pose with the best score. Update the position of the robot to the corresponding (x, y, θ) location.
What if you were tracking multiple possible poses?
How would you combine info from this with the previous estimate of global position + odometry?
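A minimal Python sketch of the match-and-score loop above, assuming both the local and the global map are 2-D occupancy grids at the same resolution and that the k candidate poses have already been generated around the odometry estimate; the function names and the simple cell-overlap score are illustrative assumptions, not the course's implementation.

import numpy as np

def score_pose(local_map, global_map, pose, resolution=0.1):
    # Count how many occupied cells of the local map land on occupied
    # cells of the global map when the local map is placed at `pose`.
    x, y, theta = pose
    rows, cols = np.nonzero(local_map)                  # occupied local cells
    pts = np.stack([cols, rows], axis=1) * resolution   # cell -> metric (x, y)
    c, s = np.cos(theta), np.sin(theta)
    world = pts @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    gc = np.round(world[:, 0] / resolution).astype(int)
    gr = np.round(world[:, 1] / resolution).astype(int)
    inside = (gr >= 0) & (gr < global_map.shape[0]) & \
             (gc >= 0) & (gc < global_map.shape[1])
    return np.sum(global_map[gr[inside], gc[inside]])

def best_pose(local_map, global_map, candidate_poses):
    # Score every candidate (x, y, theta) and return the best match.
    scores = [score_pose(local_map, global_map, p) for p in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]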
Representations
line-based map (~100 lines)
Representations
One location vs. location distribution
Grid-based map (3000 cells)
Topological map (50 features, 18 nodes)
Feature-Based Localization
• Extract features such as doorways, corners and intersections
• Either
– Use continuous localization to try and match features at each update
– Use topological information to create a graph of the environment
Topological Map of Office Building
• The robot has identified 10 doorways, each marked by a single number.
• Hallways between doorways are labeled by a gateway-gateway pairing.
Topological Map of Office Building
• What if the robot is told it is at position A but it's actually at B?
How could it correct that information?
Localization Problem(s)
• Position Tracking
• Global Localization
• Kidnapped Robot Problem
• Multi-Robot Localization
• General approach:
• A: action
• S: pose
• O: observation
Position at time t depends on the previous position and action, and on the current observation.
• Pose at time t determines the observation at time t
• If we know the pose, we can say what the observation is
• But, this is backwards…
• Hello Bayes!
Quiz!
• If events a and b are independent,
• p(a, b) =
• If events a and b are not independent,
• p(a, b) =
• p(c|d) = ?
Quiz!
• If events a and b are independent,
• p(a, b) = p(a) × p(b)
• If events a and b are not independent,
• p(a, b) = p(a) × p(b|a) = p(b) × p(a|b)
• p(c|d) = p(c, d) / p(d) = p(d|c) p(c) / p(d)
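A tiny numeric check of these identities in Python, using a made-up 2x2 joint distribution (chosen so that a and b happen to be independent); the numbers are illustrative only.

# Joint distribution over (a, b), values made up for illustration.
p = {('a', 'b'): 0.12, ('a', 'not_b'): 0.28,
     ('not_a', 'b'): 0.18, ('not_a', 'not_b'): 0.42}

p_a = p[('a', 'b')] + p[('a', 'not_b')]      # marginal P(a) = 0.4
p_b = p[('a', 'b')] + p[('not_a', 'b')]      # marginal P(b) = 0.3
p_b_given_a = p[('a', 'b')] / p_a            # P(b|a) = 0.3
p_a_given_b = p[('a', 'b')] / p_b            # P(a|b) = 0.4

# Independence: P(a, b) = P(a) P(b)
print(abs(p[('a', 'b')] - p_a * p_b) < 1e-9)
# Chain rule: P(a) P(b|a) = P(b) P(a|b)
print(abs(p_a * p_b_given_a - p_b * p_a_given_b) < 1e-9)
# Bayes: P(a|b) = P(b|a) P(a) / P(b)
print(abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-9)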
Bayes Filtering
• Want to have a way of representing uncertainty
• Probability Distribution
– Could be discrete or continuous
– Prob. of each pose in the set of all possible poses
• Belief
• Prior
• Posterior
Models of Belief
1. Uniform Prior
2. Observation: see pillar
3. Action: move right
4. Observation: see pillar
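A toy discrete Bayes filter ("histogram filter") walking through exactly these four steps, assuming a 1-D corridor of 10 cells with pillars at known cells and made-up sensor probabilities (0.6 for a reading that matches the map, 0.2 for one that does not); this is a sketch of the idea, not code from the lecture.

import numpy as np

# Corridor of 10 cells; 1 marks a cell where a pillar is visible.
pillar = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0, 0], dtype=float)

def observe(belief, sees_pillar, p_hit=0.6, p_miss=0.2):
    # Bayes observation update: weight each cell by how well it explains z.
    likelihood = np.where(pillar == sees_pillar, p_hit, p_miss)
    belief = likelihood * belief
    return belief / belief.sum()               # normalize

def move_right(belief):
    # Deterministic motion model; the corridor wraps around for simplicity.
    return np.roll(belief, 1)

belief = np.ones(10) / 10        # 1. uniform prior
belief = observe(belief, 1)      # 2. observation: see pillar
belief = move_right(belief)      # 3. action: move right
belief = observe(belief, 1)      # 4. observation: see pillar
print(belief.round(3))           # most mass at cell 4, the only pose consistent
                                 # with pillar -> move right -> pillar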
Modeling objects in the environment
http://www.cs.washington.edu/research/rse-lab/projects/mcl
Axioms of Probability Theory
• Pr(A) denotes probability that proposition A is true.
• Pr(¬A) denotes probability that proposition A is false.
1. 0 ≤ Pr(A) ≤ 1
2. Pr(True) = 1, Pr(False) = 0
3. Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
A Closer Look at Axiom 3
Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
[Venn diagram: A and B overlapping inside the universe True; the overlap A ∧ B is counted twice by Pr(A) + Pr(B).]
Discrete Random Variables
• X denotes a random variable.
• X can take on a countable number of values in {x1, x2, …, xn}.
• P(X = xi), or P(xi), is the probability that the random variable X takes on value xi.
• P(xi) is called the probability mass function.
• E.g. P(Room) = 0.2
Continuous Random Variables
• X takes on values in the continuum.
• p(X = x), or p(x), is a probability density function.
Pr(x ∈ (a, b)) = ∫_a^b p(x) dx
• E.g. [figure: a density curve p(x) plotted against x]
Probability Density Function
[Figure: a density curve p(x) over x.] The magnitude of the curve can be greater than 1 in some areas; the total area under the curve must add up to 1.
• Since continuous probability functions are defined for an infinite number of points over a continuous interval, the probability at any single point is always 0.
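A quick numerical check of these two facts, assuming a zero-mean Gaussian density with σ = 0.2 (chosen so that its peak exceeds 1); the numbers are illustrative.

import numpy as np

sigma = 0.2
x = np.linspace(-2.0, 2.0, 100001)
dx = x[1] - x[0]
p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(p.max())                     # ~1.99: a density value may exceed 1
print((p * dx).sum())              # ~1.00: total area under the curve

# Probability of an interval is the area over that interval:
a, b = -0.2, 0.2                   # +/- one standard deviation
mask = (x >= a) & (x <= b)
print((p[mask] * dx).sum())        # Pr(x in (a, b)) ~ 0.68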
Inference by Enumeration
P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache)
                       = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064)
                       = 0.4
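The same computation by enumeration in Python, using only the four joint entries shown on the slide (the toothache = true rows of the cavity/toothache/catch joint); the dictionary layout is just one way to organize it.

# Joint probabilities for the rows with toothache = True, as on the slide.
# (cavity, catch) -> P(cavity, catch, toothache)
joint_toothache = {
    (True,  True):  0.108,
    (True,  False): 0.012,
    (False, True):  0.016,
    (False, False): 0.064,
}

# Enumerate: sum the rows consistent with the evidence, then with the query.
p_toothache = sum(joint_toothache.values())
p_not_cavity_and_toothache = sum(
    p for (cavity, _), p in joint_toothache.items() if not cavity
)

print(p_not_cavity_and_toothache / p_toothache)   # 0.08 / 0.2 = 0.4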
Law of Total Probability
Discrete case:
Σ_x P(x) = 1
P(x) = Σ_y P(x, y)
P(x) = Σ_y P(x | y) P(y)
Continuous case:
∫ p(x) dx = 1
p(x) = ∫ p(x, y) dy
p(x) = ∫ p(x | y) p(y) dy
Bayes Formula
P(x, y) = P(x | y) P(y) = P(y | x) P(x)
⇒ P(x | y) = P(y | x) P(x) / P(y) = likelihood · prior / evidence
If y is a new sensor reading:
p(x)      → prior probability distribution
p(x | y)  → posterior (conditional) probability distribution
p(y | x)  → model of the characteristics of the sensor
p(y)      → does not depend on x
Bayes Formula
P(x, y) = P(x | y) P(y) = P(y | x) P(x)
⇒ P(x | y) = P(y | x) P(x) / P(y) = likelihood · prior / evidence
⇒ P(x | y) = P(y | x) P(x) / Σ_x P(y | x) P(x)
Bayes Rule with Background Knowledge
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Conditional Independence
P(x, y | z) = P(x | z) P(y | z)
equivalent to
P(x | z) = P(x | z, y)
and
P(y | z) = P(y | z, x)
Simple Example of State Estimation
• Suppose a robot obtains measurement z
• What is P(open | z)?
Causal vs. Diagnostic Reasoning
• P(open | z) is diagnostic.
• P(z | open) is causal.
• Often causal knowledge is easier to obtain.
• Bayes rule allows us to use causal knowledge:
P(open | z) = P(z | open) P(open) / P(z)
P(z | open) comes from the sensor model.
Example
P(z | open) = 0.6
P(z | ¬open) = 0.3
P(open) = P(¬open) = 0.5
P(o | z) = P(z | o) P(o) / Σ_o P(z | o) P(o)
P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open))
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67
z raises the probability that the door is open.
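The same arithmetic in a few lines of Python, with the evidence P(z) expanded by the law of total probability.

# Sensor model and prior from the slide.
p_z_given_open = 0.6
p_z_given_not_open = 0.3
p_open = p_not_open = 0.5

# Bayes rule with the evidence term expanded by total probability.
p_z = p_z_given_open * p_open + p_z_given_not_open * p_not_open
p_open_given_z = p_z_given_open * p_open / p_z
print(p_open_given_z)   # 0.666...: z raises the probability that the door is open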
Combining Evidence
• Suppose our robot obtains another observation z2.
• How can we integrate this new information?
• More generally, how can we estimate P(x | z1, ..., zn)?
Recursive Bayesian Updating
P(x | z1, ..., zn) = P(zn | x, z1, ..., zn-1) P(x | z1, ..., zn-1) / P(zn | z1, ..., zn-1)
Markov assumption: zn is independent of z1, ..., zn-1 if we know x.
P(open | z1, ..., zn) = P(zn | open) P(open | z1, ..., zn-1) / P(zn | z1, ..., zn-1)
For a single reading: P(open | z) = P(z | open) P(open) / P(z)
Example: 2nd Measurement
P(x | z1, ..., zn) = P(zn | x) P(x | z1, ..., zn-1) / P(zn | z1, ..., zn-1)
• P(z2 | open) = 0.5    P(z2 | ¬open) = 0.6
• P(open | z1) = 2/3
P(open | z2, z1) = P(z2 | open) P(open | z1) / (P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1))
                 = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3)
                 = 5/8 = 0.625
z2 lowers the probability that the door is open.
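A small Python sketch of the recursive update, reproducing both measurements from the example; the helper name bayes_update is illustrative.

def bayes_update(prior_open, p_z_given_open, p_z_given_not_open):
    # One recursive Bayes update of P(open | z1..zn) given a new reading zn.
    num = p_z_given_open * prior_open
    evidence = num + p_z_given_not_open * (1 - prior_open)
    return num / evidence

belief = 0.5                              # uniform prior P(open)
belief = bayes_update(belief, 0.6, 0.3)   # first reading z1 -> 2/3
belief = bayes_update(belief, 0.5, 0.6)   # second reading z2 -> 0.625
print(belief)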