CPSC 7373: Artificial Intelligence
Lecture 9: Planning
Jiang Bian, Fall 2012
University of Arkansas at Little Rock
Planning
• We defined AI to be the study and process of
finding appropriate actions for an agent.
• We have looked at problem solving search
over a state space.
– Given a state space and a problem description, we
can find a solution, a path to the goal.
• Problem solving approaches only work when
the environment is deterministic and fully
observable.
Problem Solving vs Planning
[Figure: the Romania road map from the route-finding example (Arad, Sibiu, Fagaras, Rimnicu Vilcea, Pitesti, Bucharest, and the other cities), used to contrast problem-solving search with planning.]
A Mystery: Why Can't We Walk Straight?
Walking Straight into Circles, by Souman et al.
Planning vs Execution
• Why do we need to interleave planning with execution?
– Properties of the environment make it hard
• STOCHASTIC: we don't know for sure what an action is going to do
• MULTIAGENT: other agents also act and change the world
• PARTIAL OBSERVABILITY: we can't see the full state of the world
– Unknown model: lack of knowledge of the world
• e.g., we have map or GPS software that's inaccurate or incomplete
– Hierarchical: the devil is in the details
• Instead of planning in the space of world states, we plan in the space of belief states.
Vacuum Cleaner Example
Search in the state space of belief states rather than in the state space of actual world states.
Sensorless Vacuum Cleaner World
[Figure: belief-state space for the sensorless vacuum world; the eight world states (1-8) are grouped into belief states connected by the movement actions L and R.]
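
To make the sensorless case concrete, here is a minimal Python sketch (not from the lecture; the state encoding and action model are assumptions) of how an action maps a belief state, a set of possible world states, to the union of their successors:

from itertools import product

# World state: (location, dirt_in_A, dirt_in_B); location is 'A' or 'B'.
ALL_STATES = {(loc, dA, dB) for loc in 'AB'
              for dA, dB in product([True, False], repeat=2)}

def result(state, action):
    # Deterministic successor of a single world state (assumed action model).
    loc, dA, dB = state
    if action == 'L':
        return ('A', dA, dB)
    if action == 'R':
        return ('B', dA, dB)
    if action == 'S':                          # Suck cleans the current square
        return (loc, False if loc == 'A' else dA, False if loc == 'B' else dB)
    raise ValueError(action)

def predict(belief, action):
    # Sensorless update: apply the action to every state we might be in.
    return frozenset(result(s, action) for s in belief)

# Starting from total ignorance (all 8 states), [R, S, L, S] provably cleans both squares:
belief = frozenset(ALL_STATES)
for a in ['R', 'S', 'L', 'S']:
    belief = predict(belief, a)
print(belief)      # only ('A', False, False) remains: everything clean, goal reached
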
Partially Observable Vacuum Cleaner
Observations alone can't introduce a new state.
[Figure: after moving R, the percept [B, Dirty] or [B, Clean] tells the agent which belief state it is in.]
Suppose we have what's called local sensing: the vacuum can see which location it is in and whether there is dirt in that location, but it cannot see whether there is dirt in any other location.
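
Under the same assumed encoding, a hedged sketch of the local-sensing update: the percept reveals only the current location and that square's dirt, and updating simply filters the belief state, which is why observations alone can never introduce a new state.

def percept(state):
    # Local sensing: the agent sees its location and that square's dirt only.
    loc, dA, dB = state
    return (loc, dA if loc == 'A' else dB)

def update(belief, observation):
    # Keep only the states that would have produced this observation.
    return frozenset(s for s in belief if percept(s) == observation)

# Updating never adds states: update(b, o) is always a subset of b.
belief = frozenset({('B', True, True), ('B', True, False)})
print(update(belief, ('B', False)))   # frozenset({('B', True, False)})
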
Stochastic Environment
[Figure: slippery (stochastic) vacuum world; candidate plans [S,R,S], [R,S,L,S], [S,R,R,S], [S,R,S,R,S] are classified by whether they achieve the goal always or only maybe; possible percepts include [A, Dirty], [B, Dirty], [B, Clean].]
Actions increase uncertainty; observations decrease uncertainty.
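
A small self-contained Python illustration (the "slippery wheels" outcome model is an assumption) of the slide's point that acting grows the belief state while observing shrinks it:

def move_outcomes(state, action):
    # Possible successors when wheels may slip: move, or stay put (assumed model).
    loc, dA, dB = state
    target = 'A' if action == 'L' else 'B'
    return {(target, dA, dB), state}

def predict(belief, action):
    # Action step: union over all outcomes of all states we might be in.
    return frozenset(s2 for s in belief for s2 in move_outcomes(s, action))

def update(belief, observation):
    # Percept step: keep only states consistent with (location, dirt_here).
    return frozenset((loc, dA, dB) for loc, dA, dB in belief
                     if (loc, dA if loc == 'A' else dB) == observation)

b = frozenset({('A', True, True)})
b = predict(b, 'R')            # acting: 2 possible states -> uncertainty grows
print(len(b))                  # 2
b = update(b, ('B', True))     # observing: 1 state -> uncertainty shrinks
print(len(b))                  # 1
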
Infinite Sequences
• e.g., the finite plan [S, R, S] (only maybe reaches the goal in the stochastic world)
• e.g., the looping plan [S, while A: R, S] (retries R until the agent observes it is in B)
[Figure: the looping plan drawn as a finite-state machine over locations A and B with actions S and R.]
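
The looping plan can be read as an execution policy that retries R until the observed location is B. A hypothetical Python simulation (the slippery-move model is assumed) shows that it terminates with probability 1 even though no finite bound on its length exists:

import random

def slippery_right(loc):
    # Moving Right may fail: the agent stays where it is or arrives in B (assumption).
    return random.choice([loc, 'B'])

def execute_loop_plan():
    # Execute [S, while A: R, S]: suck, retry R until observed in B, suck again.
    loc, dirt = 'A', {'A': True, 'B': True}
    dirt[loc] = False                    # S: clean the current square
    tries = 0
    while loc == 'A':                    # the observed location drives the loop
        loc = slippery_right(loc)
        tries += 1
    dirt[loc] = False                    # S: clean B
    assert dirt == {'A': False, 'B': False}
    return tries

print(execute_loop_plan())               # finite with probability 1, but unbounded
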
CLASSICAL PLANNING
• STATE SPACE: k Boolean variables, so 2^k world states
• WORLD STATE: complete assignment to the variables
• BELIEF STATE:
– Complete assignment
– Partial assignment
– Arbitrary formula
• ACTION SCHEMA
– Action(Fly(p, x, y),
• PRECOND: Plane(p) ^ Airport(x) ^ Airport(y) ^ At(p, x)
• EFFECT: ¬At(p, x) ^ At(p, y))
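
One possible way to encode the Fly action schema and progress a world state, here represented as a set of ground fluents with add and delete lists (this Python representation is an assumption, not the lecture's notation):

def fly(p, x, y):
    # Ground instance of Action(Fly(p, x, y)) with precondition, add and delete lists.
    return {
        'precond': {f'Plane({p})', f'Airport({x})', f'Airport({y})', f'At({p},{x})'},
        'add':     {f'At({p},{y})'},
        'delete':  {f'At({p},{x})'},
    }

def applicable(state, action):
    return action['precond'] <= state          # all preconditions hold in the state

def apply_action(state, action):
    return (state - action['delete']) | action['add']

s0 = {'Plane(P1)', 'Airport(SFO)', 'Airport(JFK)', 'At(P1,SFO)'}
a = fly('P1', 'SFO', 'JFK')
print(applicable(s0, a))                       # True
print(apply_action(s0, a))                     # At(P1,SFO) replaced by At(P1,JFK)
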
SEARCH in Planning
• Progression search (forward search)
– Like searching in problem solving: Init State -> Goal State
• Regression search (backward search)
– Goal State -> Init State
• Progression vs Regression
– e.g. (see the regression sketch after this list),
• Action(Buy(b),
» PRE: ISBN(b)
» EFF: OWN(b))
• GOAL: OWN(0136042597)
• Plan Space Search: searching in the space of plans rather than in the space of world states.
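
As referenced in the list above, a minimal sketch of one regression step for the Buy example: backward search regresses the single goal OWN(0136042597) to the subgoal ISBN(0136042597), whereas forward search would have to branch on Buy(b) for every possible ISBN (the representation is assumed, as in the Fly sketch):

def buy(b):
    return {'precond': {f'ISBN({b})'}, 'add': {f'OWN({b})'}, 'delete': set()}

def regress(goal, action):
    # Regressing a goal through an action: the action must achieve part of the goal
    # and must not delete any of it; what it adds is replaced by its preconditions.
    if not (action['add'] & goal) or (action['delete'] & goal):
        return None
    return (goal - action['add']) | action['precond']

goal = {'OWN(0136042597)'}
print(regress(goal, buy('0136042597')))   # {'ISBN(0136042597)'}: a single branch,
                                          # instead of trying Buy(b) for every book b
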