Berne University of Applied Sciences
School of Engineering and Information Technology
Artificial Intelligence
Dr. Bernhard Anrig
October 2006
Artificial Intelligence
1
Overview
1. Problem Solving
2. Uninformed Search Strategies
3. Informed Search Strategies
4. Local Search
5. CSP – Constraint Satisfaction Problems
Source: S. Russell, P. Norvig, “Artificial Intelligence: A Modern Approach”, Prentice Hall, 2nd ed., 2003
Problem Solving: Content
Problem Solving
Holidays in Romania
Problem Solving: The Steps
General Problem-Solving Agent
Problem Formulation
Problem Types
Searching for Solutions
Tree-Search
Search Strategies
Holidays in Romania
Problem: On holiday in Romania, you are currently in Arad;
your flight leaves tomorrow from Bucharest.
Formulate goal:
- be in Bucharest
Formulate problem:
- states: various cities
- actions: drive between cities
Find solution:
- sequence of cities, e.g. Arad, Sibiu, Fagaras, Bucharest
[Figure: road map of Romania with driving distances in km between cities, e.g. Arad–Sibiu 140, Sibiu–Fagaras 99, Fagaras–Bucharest 211, Pitesti–Bucharest 101]
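The map data is easy to carry around as a small data structure; the following Python sketch (not part of the original slides) records a few of the roads from the figure and sums the cost of a path:

```python
# Partial road map of Romania as an undirected weighted graph.
# Distances (km) are taken from the figure above.
roads = {
    ("Arad", "Zerind"): 75,
    ("Arad", "Sibiu"): 140,
    ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99,
    ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Fagaras", "Bucharest"): 211,
    ("Pitesti", "Bucharest"): 101,
}

def path_cost(path):
    """Sum of road distances along a sequence of cities."""
    total = 0
    for a, b in zip(path, path[1:]):
        total += roads.get((a, b)) or roads[(b, a)]
    return total

print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 450
```

The solution named on the slide, Arad–Sibiu–Fagaras–Bucharest, costs 450 km; the route via Rimnicu Vilcea and Pitesti is shorter (418 km).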
Steps
1. Goal formulation
   - Formulate a goal based on the current situation and the agent's performance measure
   - Goal = the set of world states in which the goal is satisfied
2. Problem formulation
   - What actions and what states are to be considered? ("Granularity")
3. Solution finding!
Agent
Function Simple-Problem-Solving-Agent(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation
  state ← Update-State(state, percept)
  if seq is empty then
      goal ← Formulate-Goal(state)
      problem ← Formulate-Problem(state, goal)
      seq ← Search(problem)
  action ← Recommendation(seq, state)
  seq ← Remainder(seq, state)
  return action
Remarks:
- This is offline problem solving
- Online problem solving = acting without complete knowledge
So:
- static environment: no changes in the environment are considered
- observable environment: the agent knows all cities and streets
- discrete environment
Hence quite a simple agent.
Steps of problem formulation
Initial state
- Where the agent starts, e.g. In(Arad)
Actions
- Description of the possible actions available to the agent
- Most common: a successor function which maps a state x to a set of ⟨action, successor⟩ ordered pairs
- E.g. SuccessorFn(In(Arad)) = {⟨Go(Sibiu), In(Sibiu)⟩, ⟨Go(Timisoara), In(Timisoara)⟩, ⟨Go(Zerind), In(Zerind)⟩}
- State space of the problem: the set of all states reachable from the initial state
- Implicitly defined by initial state + successor function; the state space forms a graph
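As an illustration (the city names follow the slide; the Python wrapper itself is hypothetical), the successor function for the Romania example can be written directly:

```python
# Successor function: maps a city to the set of (action, successor) pairs,
# mirroring SuccessorFn(In(Arad)) on the slide.
ROADS = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
}

def successor_fn(city):
    """Return the set of (action, successor-state) pairs for a city."""
    return {("Go(%s)" % c, "In(%s)" % c) for c in ROADS[city]}

print(successor_fn("Arad"))
```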
Steps of problem formulation
Goal test
- Determines whether a state is a goal state
- Explicit set of goal states, e.g. {In(Bucharest)}
- Implicit declaration: e.g. "checkmate" in chess
Path cost
- Function which assigns a numeric cost to each path
- The agent chooses a function that reflects its own performance measure
- E.g. if the agent wants a quick way to get to Bucharest, then cost function = length in kilometers
- In the sequel: path cost = sum of action costs, e.g. cost of a path to Bucharest = sum of distances between cities on the path
Steps of problem formulation
Solution
- A sequence of actions leading from the initial state to a goal state
Typology
- Deterministic, fully observable → single-state problem
  • Agent knows exactly which state it will be in
  • Solution is a sequence
- Non-observable → conformant problem
  • Agent may have no idea where it is
  • Solution (if any) is a sequence
- Nondeterministic and/or partially observable → contingency problem
  • Percepts provide new information about the current state
  • Solution is a tree or policy
  • Often interleave search and execution
- Unknown state space → exploration problem ("online")
Examples: Vacuum World
[Figure: the eight possible vacuum-world states, numbered 1–8]
Vacuum world:
States: 8 possible world states
Examples: Vacuum World
Initial state: any of the states
Successor function: generates the legal states that result from trying the three actions Left, Right and Suck
Goal test: checks whether all squares are clean
Path cost: each step costs 1
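A possible concrete encoding of this formulation (a sketch; the slide's state numbers 1–8 are replaced by explicit (location, dirty-squares) pairs):

```python
# Vacuum world: a state is (agent location, frozenset of dirty squares).
# The two squares are named "A" (left) and "B" (right); each action costs 1.
def successors(state):
    loc, dirt = state
    return [
        ("Left", ("A", dirt)),
        ("Right", ("B", dirt)),
        ("Suck", (loc, dirt - {loc})),
    ]

def goal_test(state):
    return not state[1]  # goal: no dirty squares left

# State 5 on the slide: agent on the left square, only the right square dirty.
state = ("A", frozenset({"B"}))
for action in ["Right", "Suck"]:
    state = dict(successors(state))[action]
print(goal_test(state))  # True
```

Executing the single-state solution [Right, Suck] from that state indeed reaches a goal state.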
Examples: Vacuum World
[Figure: the complete state space of the vacuum world; edges labelled L (Left), R (Right) and S (Suck)]
Examples: Vacuum World
- Single-state, start in #5:
  Solution: [Right, Suck]
- Conformant, start in {1, 2, 3, 4, 5, 6, 7, 8}:
  Solution: [Right, Suck, Left, Suck]
- Contingency, start in #5
  Murphy's Law: Suck can dirty a clean carpet
  Local sensing: dirt, location only
  Solution: [Right, if dirt then Suck]
Example: 8-puzzle
[Figure: start state and goal state of the 8-puzzle]
States: integer locations of the tiles
Initial state: any state can be given as the initial one
Actions: move blank left, right, up, down
Goal test: goal state (given)
Path cost: 1 per move
Example: 8-queens
[Figure: a board with 8 queens — not a solution...]
States: any arrangement of 0 to 8 queens on the board is a state
Initial state: no queens on the board
Actions: add a queen to an empty square
Goal test: 8 queens on the board, none attacked
Path cost: –
Example: Route finding
States: each state is a location (e.g. a railway station) and the current time
Initial state: specified by the problem
Successor function: returns the states resulting from taking any train leaving later than the current time from the current station to another
Goal test: do we arrive at the destination by some prespecified time?
Path cost: depends on ticket fare, waiting time, train quality, changes, ...
Example: Robotic assembly
States: real-valued coordinates of the robot joint angles and of the parts of the object to be assembled
Actions: continuous motions of robot joints
Goal test: complete assembly (with no robot included)
Path cost: time to execute
Search Tree
- explicit search tree
- generated by the initial state and the successor function
- offline, simulated exploration of the state space
- which search strategy?
- state space ≠ search tree!
Node in a search tree
Data structure for a node in the search tree:
State: the state in the state space to which the node corresponds
Parent-Node: the node in the search tree that generated this node
Action: the action that was applied to the parent to generate the node
Path-Cost: the cost g(n) of the path from the initial state to the node, as indicated by the parent pointers
Depth: the number of steps along the path from the initial state
Node in a search tree
[Figure: a node of the search tree for the 8-puzzle, with its Parent-Node pointer, Action = right, Depth = 6, Path-Cost = 6, and a pointer to its State]
Node: data structure constituting part of the search tree
State: representation of a physical configuration
Informal description:
Function Tree-Search(problem, strategy) returns a solution, or failure
  initialize the search tree using the initial state of problem
  loop do
      if there are no candidates for expansion then return failure
      choose a leaf node for expansion according to strategy
      if the node contains a goal state
          then return the corresponding solution
      else expand the node and add the resulting nodes to the search tree
The set of all nodes that have been generated but not yet expanded is called the fringe.
Elements of a formal description:
- MakeQueue(element, ...) creates a queue with the given element(s)
- Empty?(queue) returns true only if there are no more elements in the queue
- First(queue) returns the first element of the queue
- RemoveFirst(queue) returns First(queue) and removes it from the queue
- Insert(element, queue) inserts an element into the queue and returns the resulting queue (we do not yet say where to insert the element!)
- InsertAll(elements, queue) ...
Formal description:
Function Tree-Search(problem, fringe) returns a solution, or failure
  fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
  loop do
      if Empty?(fringe) then return failure
      node ← Remove-First(fringe)
      if Goal-Test[problem] applied to State[node] succeeds
          then return Solution(node)
      fringe ← InsertAll(Expand(node, problem), fringe)
The argument fringe must be an empty queue; the type of this queue implies the order of the search.
Formal description:
Function Expand(node, problem) returns a set of nodes
  successors ← the empty set
  for each ⟨action, result⟩ in SuccessorFn[problem](State[node]) do
      s ← a new Node
      State[s] ← result
      Parent-Node[s] ← node
      Action[s] ← action
      Path-Cost[s] ← Path-Cost[node] + StepCost(node, action, s)
      Depth[s] ← Depth[node] + 1
      add s to successors
  return successors
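Tree-Search and Expand translate almost line for line into Python. The sketch below is illustrative (not from the slides) and uses a FIFO fringe, which yields breadth-first order:

```python
from collections import deque

def tree_search(initial, successor_fn, goal_test):
    """Generic tree search; a FIFO fringe gives breadth-first order.
    Each fringe entry is (state, path-so-far)."""
    fringe = deque([(initial, [initial])])
    while fringe:
        state, path = fringe.popleft()      # Remove-First(fringe)
        if goal_test(state):
            return path                     # Solution(node)
        for nxt in successor_fn(state):     # Expand + InsertAll
            fringe.append((nxt, path + [nxt]))
    return None                             # failure

# Toy state space: each city maps to its neighbours (no cycles here,
# since plain tree search does not detect repeated states).
graph = {"Arad": ["Sibiu", "Timisoara"], "Sibiu": ["Fagaras"],
         "Timisoara": [], "Fagaras": ["Bucharest"], "Bucharest": []}
print(tree_search("Arad", lambda s: graph[s], lambda s: s == "Bucharest"))
```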
Strategy: defined through the order of node expansion!
Evaluation of strategies:
Completeness: does it always find a solution if one exists?
Time complexity: number of nodes generated and/or expanded
Space complexity: maximum number of nodes in memory
Optimality: does it always find a least-cost solution?
Time and space complexity are measured using
  b  maximum branching factor of the search tree
  d  depth of the least-cost solution
  m  maximum depth of the state space (can be ∞!)
Uninformed Search Strategies:
Content
Uninformed Search Strategies
Introduction
Breadth-first Search BFS
Uniform-cost Search UCS
Depth-first search DFS
Depth-limited search DLS
Iterative deepening search IDS
Bi-directional search
Comparison
Repeated steps
Search with partial information
Types of Searching
Uninformed search = blind search:
  The search strategy considers only the information provided by the problem definition, no additional information.
  Hence, all that can be done is to generate successors and distinguish goal states from non-goal ones.
Informed search = heuristic search:
  see Informed Search
Restrictions for this Subject:
In this part:
- Uninformed search
- Fully observable environment
- Deterministic environment
- Hence the agent can calculate exactly which state results from any sequence of actions and always knows which state it is in!
Key for distinguishing these search algorithms: the order in which the nodes are expanded!
Breadth-first Search – General Idea
Concept: expand the shallowest unexpanded node
Implementation: fringe is a FIFO queue, i.e. insert at the rear end!
[Figure: four snapshots of breadth-first search on a binary tree rooted at A]
Breadth-first Search Properties
Complete? Yes (if b finite!)
Time? 1 + b + b^2 + b^3 + · · · + b^d + b(b^d − 1) = O(b^(d+1)), exponential in d
Space? Keeps every node in memory, O(b^(d+1))
Optimal? Yes (if cost = 1 per step); not optimal in general
Big problem: space...
Breadth-first Search Example
Example: tree with b = 10; 10'000 nodes per second; 1 Kbyte per node

Depth   Nodes       Time          Memory
2       1'100       0.11 sec      1 megabyte
4       111'100     11 sec        106 megabytes
6       10^7        19 min        10 gigabytes
8       10^9        31 hours      1 terabyte
10      10^11       129 days      101 terabytes
12      10^13       35 years      10 petabytes
14      10^15       3'523 years   1 exabyte
Uniform-cost Search – General Idea
Concept: expand the least-cost unexpanded node
Implementation: fringe ordered by path cost
Equivalent to breadth-first if all step costs are equal!
Complete? Yes (if all step costs ≥ ε for some ε > 0)
Time? number of nodes with g ≤ cost of the optimal solution: O(b^(1+⌊C*/ε⌋)), where C* = cost of the optimal solution
Space? number of nodes with g ≤ cost of the optimal solution: O(b^(1+⌊C*/ε⌋))
Optimal? Yes! Nodes are expanded in increasing order of g(n)
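A minimal Python sketch of uniform-cost search (illustrative; `heapq` keeps the fringe ordered by path cost g):

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Expand the node with least g first; fringe = priority queue on path cost."""
    fringe = [(0, start, [start])]
    best = {}  # cheapest known cost per expanded state
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path              # first goal popped is optimal
        if best.get(state, float("inf")) <= g:
            continue                    # already expanded more cheaply
        best[state] = g
        for nxt, cost in edges.get(state, []):
            heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))
    return None

# Road distances from the Romania map.
edges = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
}
print(uniform_cost_search("Arad", "Bucharest", edges))
```

UCS finds the 418-km route via Rimnicu Vilcea and Pitesti, not the 450-km route via Fagaras.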
Depth-first search – General Idea
Concept: expand the deepest unexpanded node
Implementation: fringe is a LIFO queue, i.e. insert at the front end!
Depth-first search Example
[Figure: snapshots of depth-first search on a binary tree with nodes A–O; fully explored subtrees are removed from memory]
Depth-first search Properties
Complete? No: fails in infinite spaces or spaces with loops
  Complete in finite spaces if modified to avoid repeated states along paths
Time? O(b^m), which is very bad if m is much larger than d
  If solutions are "dense", may be much faster than BFS
Space? O(bm), linear!
Optimal? No
Remember: m = maximum depth of the state space!
Depth-limited search – General Idea, Properties
Concept: depth-first search with a pre-determined limit ℓ
  So all nodes at depth ℓ are assumed to have no successors!
Implementation: DFS + stop at depth ℓ
Complete? No: fails if ℓ < d, and in infinite spaces or spaces with loops
  Complete if ℓ ≥ d in finite spaces, if modified to avoid repeated states along paths
Time? O(b^ℓ)
Space? O(bℓ), linear!
Optimal? No
DLS Algorithm, Properties
Function Depth-Limited-Search(problem, limit) returns a solution, or failure, or cutoff
  return Recursive-DLS(MakeNode(InitialState[problem]), problem, limit)

Function Recursive-DLS(node, problem, limit) returns a solution, or failure, or cutoff
  cutoff-occurred? ← false
  if GoalTest[problem](State[node]) then return Solution(node)
  else if Depth[node] = limit then return cutoff
  else for each successor in Expand(node, problem) do
      result ← Recursive-DLS(successor, problem, limit)
      if result = cutoff then cutoff-occurred? ← true
      else if result ≠ failure then return result
  if cutoff-occurred? then return cutoff else return failure
IDS – General Idea
Concept: calls depth-limited search with increasing depth
Function Iterative-Deepening-Search(problem) returns a solution, or failure
  for depth ← 0 to ∞ do
      result ← Depth-Limited-Search(problem, depth)
      if result ≠ cutoff then return result
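The two functions can be sketched together in Python (illustrative; the cutoff case is signalled with a sentinel string, failure with None):

```python
def dls(state, goal_test, successor_fn, limit, path=()):
    """Recursive depth-limited search: a solution path, 'cutoff', or None."""
    if goal_test(state):
        return list(path) + [state]
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for nxt in successor_fn(state):
        result = dls(nxt, goal_test, successor_fn, limit - 1, tuple(path) + (state,))
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None

def ids(start, goal_test, successor_fn, max_depth=50):
    """Call depth-limited search with increasing limits 0, 1, 2, ..."""
    for depth in range(max_depth):
        result = dls(start, goal_test, successor_fn, depth)
        if result != "cutoff":
            return result

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}
print(ids("A", lambda s: s == "G", lambda s: tree[s]))  # ['A', 'C', 'G']
```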
IDS Example
[Figure: iterative deepening search on a binary tree with nodes A–O, showing the four iterations with limits 0, 1, 2 and 3]
IDS Properties
Complete? Yes
Time? (d + 1)b^0 + db^1 + (d − 1)b^2 + · · · + b^d = O(b^d)
Space? O(bd)
Optimal? Yes, if step cost ≡ 1
General idea
Concept: two simultaneous searches:
  one forward from the initial state
  one backward from the goal
Stop when the searches meet somewhere.
Example: 8-puzzle, route in Romania
Problem: not always applicable!
  How do you compute the predecessors of a node?
  Hard example: chess! How do you go backwards from all goal states, i.e. all checkmate states?
Graphical Example
[Figure: two circular search frontiers growing from Start and from Goal and meeting in the middle]
Search Techniques compared

Criterion   BFS         Uniform-cost     DFS      Depth-limited  Iter. Deep.  Bidirectional(e)
Complete?   Yes(a)      Yes(a,b)         No       No             Yes(a)       Yes(a,d)
Time        O(b^(d+1))  O(b^(1+⌊C*/ε⌋))  O(b^m)   O(b^ℓ)         O(b^d)       O(b^(d/2))
Space       O(b^(d+1))  O(b^(1+⌊C*/ε⌋))  O(bm)    O(bℓ)          O(bd)        O(b^(d/2))
Optimal?    Yes(c)      Yes              No       No             Yes(c)       Yes(c,d)

a: complete if b is finite
b: complete if step costs ≥ ε for some ε > 0
c: optimal if step costs are all identical
d: if both directions use breadth-first search
e: if applicable
Problem of Repeated Steps
If repeated states are not detected, a simple problem can turn into an exponential one!
[Figure: a small state space whose search tree contains each state many times]
Idea how to avoid repeating steps
Idea: compare the node to be expanded to the already expanded ones; if there is a match, delete one of the paths to this node.
Tradeoff: keep already expanded nodes in memory
Advantage: avoids loops
Disadvantage: memory...
Function Graph-Search(problem, fringe) returns a solution, or failure
  closed ← an empty set
  fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
  loop do
      if Empty?(fringe) then return failure
      node ← Remove-First(fringe)
      if Goal-Test[problem](State[node]) then return Solution(node)
      if State[node] is not in closed then
          add State[node] to closed
          fringe ← InsertAll(Expand(node, problem), fringe)
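A sketch of Graph-Search in Python (illustrative), demonstrated on a tiny graph with a loop where plain tree search would generate nodes forever:

```python
from collections import deque

def graph_search(start, goal_test, successor_fn):
    """Tree search plus a closed set: never expand the same state twice."""
    fringe = deque([(start, [start])])
    closed = set()
    while fringe:
        state, path = fringe.popleft()
        if goal_test(state):
            return path
        if state not in closed:
            closed.add(state)
            for nxt in successor_fn(state):
                fringe.append((nxt, path + [nxt]))
    return None

# A graph with the loop A <-> B.
graph = {"A": ["B"], "B": ["A", "C"], "C": []}
print(graph_search("A", lambda s: s == "C", lambda s: graph[s]))  # ['A', 'B', 'C']
```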
Partial Information
Sensorless problems or conformant problems:
- The agent has no sensors at all
- It can be in one of several possible initial states
- Each action can lead to one of several possible successor states
Example: Sensorless Problem
[Figure: belief-state space of the sensorless vacuum world; edges labelled L (Left), R (Right) and S (Suck)]
Example: Sensorless problems (cont.)
a) Start in {1, 2, 3, 4, 5, 6, 7, 8}
b) Action right
c) then in one of {2, 4, 6, 8}
d) Action suck
e) then in one of {4, 8}
f) Action left
g) then in one of {3, 7}
h) Action suck
i) then in 7
- Such sets of states = belief states
- The agent coerces the world into state 7
Contingency problems
- The environment is partially observable
- Or actions are uncertain
- The agent has sensors
- It can obtain new information from its sensors after acting
Each possible percept defines a contingency that must be planned for.
Special case: adversarial contingency, if the uncertainty is caused by the actions of another agent.
General solution: lectures on "planning"
Contingency problems: Example
Example: vacuum world with Murphy's law, i.e. sucking on a clean square might result in dirt on that square.
a) Percept: [L, dirty]
b) Hence in {1, 3}
c) Action suck
d) Hence in one of {5, 7}
e) Action right
f) then in one of {6, 8}
g) if dirt then action suck
h) then in 8
Informed Search Strategies:
Content
Informed Search Strategies
Introduction
Best-first search
Heuristic functions
Types of Searching
Uninformed search = blind search:
  The search strategy considers only the information provided by the problem definition, no additional information.
Informed search = heuristic search:
  Use problem-specific knowledge!
Best-First Search – General Idea
Problem-specific knowledge: an evaluation function for each node (heuristic function)
Desirability: nodes with low evaluation are more desirable
Idea: use desirability for choosing the node to expand
Implementation: fringe is a priority queue sorted by desirability
Two examples:
- Greedy best-first search
- A* search
Greedy best-first search
Evaluation function: an estimate of the cost from the node to the closest goal
Idea: greedy best-first search expands the node that appears to be closest to the goal
Example: Holidays in Romania
Evaluation function: h_SLD(n) = straight-line distance from n to Bucharest
h_SLD (straight-line distance to Bucharest):

Arad       366    Mehadia         241
Bucharest    0    Neamt           234
Craiova    160    Oradea          380
Dobreta    242    Pitesti         100
Eforie     161    Rimnicu Vilcea  193
Fagaras    176    Sibiu           253
Giurgiu     77    Timisoara       329
Hirsova    151    Urziceni         80
Iasi       226    Vaslui          199
Lugoj      244    Zerind          374

[Figure: road map of Romania with driving distances]
Greedy best-first search: an example
[Figure: greedy best-first search from Arad using h_SLD — after expanding Arad the fringe holds Sibiu (253), Timisoara (329) and Zerind (374); after expanding Sibiu it holds Arad (366), Fagaras (176), Oradea (380) and Rimnicu Vilcea (193); expanding Fagaras then yields Bucharest (0)]
Greedy best-first search: Properties
Complete? No! May get stuck in loops!
  Example: goal = Oradea, start = Iasi gives the sequence Iasi, Neamt, Iasi, Neamt, ...
Time? O(b^m), but a good heuristic can give big improvements
Space? O(b^m)
Optimal? No
A* search – General Idea
Idea: minimize the total estimated solution cost
Combine
  – the cost to reach the node, g(n)
  – the estimated cost to get from the node to the goal, h(n)
Evaluation function: f(n) = g(n) + h(n) = estimated cost of the cheapest solution through n
Heuristic: expand the node with minimal value of f
The heuristic is admissible if h(n) ≤ h*(n), where h*(n) is the true cost from n
Example: h_SLD(n) is never larger than the actual road distance!
A* search: an example
[Figure: A* search from Arad using f(n) = g(n) + h_SLD(n) — Arad (366 = 0 + 366) is expanded first, then Sibiu (393 = 140 + 253), Rimnicu Vilcea (413 = 220 + 193), Fagaras (415 = 239 + 176) and Pitesti (417 = 317 + 100); the goal Bucharest is finally reached with f = 418 = 418 + 0]
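The trace above can be reproduced with a short A* sketch (illustrative Python, using the road distances and straight-line distances from the slides):

```python
import heapq

def astar(start, goal, edges, h):
    """A*: expand the node with minimal f(n) = g(n) + h(n)."""
    fringe = [(h[start], 0, start, [start])]
    closed = set()
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if state in closed:
            continue
        closed.add(state)
        for nxt, cost in edges.get(state, []):
            heapq.heappush(fringe, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None

# Road distances and h_SLD values from the slides.
edges = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80), ("Oradea", 151)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97), ("Craiova", 146)],
    "Pitesti": [("Bucharest", 101), ("Craiova", 138)],
}
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Craiova": 160, "Bucharest": 0}
print(astar("Arad", "Bucharest", edges, h))
```

A* returns the optimal 418-km route Arad–Sibiu–Rimnicu Vilcea–Pitesti–Bucharest, exactly as in the trace (note that h_SLD is consistent, so discarding already-closed states is safe here).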
A* search: Properties
Complete? Yes, unless there are infinitely many nodes n with f(n) ≤ f(G), where G is the goal
Time? Exponential in (relative error in h × length of the solution)
Space? Keeps all nodes in memory
Optimal? Yes, if the heuristic is admissible!
A* search: Contour Map
[Figure: contour map of Romania using f-costs, with contours at 380, 400 and 420]
A* search: Optimality
A* is optimally efficient for any given heuristic function:
no other algorithm is guaranteed to expand fewer nodes than A* (except possibly through tie-breaking among the nodes with f(n) = C*).
But ...
- the number of nodes within the goal contour of the search space is still exponential in the length of the solution for most problems!
- A* usually runs out of space, as it keeps all nodes in memory!
Memory Bounded Heuristic Search
Several approaches:
IDA*: Iterative deepening A* – a combination of IDS and A*
RBFS: Recursive best-first search – uses the idea of best-first search with only linear space
MA*: Memory-bounded A*
SMA*: Simplified MA*
SMA*
- Like A*, it expands the best leaf until memory is full
- Then, it deletes the worst leaf node
- And it backs up the value of this leaf node to the corresponding parent node, hence the parent knows the quality of the best path in the subtree
SMA* search: Properties
Complete? Yes, if there is any reachable solution, i.e. the depth of the shallowest goal node is less than the memory size
Space? Linear
Optimal? Yes, if any optimal solution is reachable
8-Puzzle Example
Example: 8-puzzle
[Figure: start state and goal state of the 8-puzzle]
- Average solution cost: 22 steps
- Branching factor ≈ 3
- Exhaustive search: 3^22 ≈ 3.1 × 10^10 states
Heuristics for the 8-Puzzle
Typical heuristic functions for the 8-puzzle:
- h1 = the number of misplaced tiles
- h2 = the total city-block distance or Manhattan distance
For the start state of the figure:
  h1 = 8
  h2 = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
True solution cost = 26, hence both heuristics are admissible!
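Both heuristics are easy to compute; a sketch in Python, assuming the standard start and goal states of the figure (blank encoded as 0, board stored row by row):

```python
# h1 = number of misplaced tiles; h2 = sum of Manhattan distances.
START = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # assumed start state from the figure
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)    # assumed goal state (blank top-left)

def h1(state):
    """Number of non-blank tiles not in their goal position."""
    return sum(1 for tile, goal in zip(state, GOAL) if tile != 0 and tile != goal)

def h2(state):
    """Total city-block (Manhattan) distance of all non-blank tiles."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = GOAL.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(START), h2(START))  # 8 18
```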
Heuristics for the 8-Puzzle
For every n: h2(n) ≥ h1(n), i.e. h2 dominates h1
→ h2 is better for search!
Comparison of
- IDS: iterative deepening search
- A* with h1
- A* with h2
Data: 100 instances of 8-puzzles, various solution lengths
Results

        Search Cost                    Effective branching factor
d       IDS       A*(h1)   A*(h2)     IDS    A*(h1)  A*(h2)
2       10        6        6          2.45   1.79    1.79
4       112       13       12         2.87   1.48    1.45
6       680       20       18         2.73   1.34    1.30
8       6384      39       25         2.80   1.33    1.24
10      47127     93       39         2.79   1.38    1.22
12      3644035   227      73         2.79   1.42    1.24
14      –         539      113        –      1.44    1.23
16      –         1301     211        –      1.45    1.25
18      –         3056     363        –      1.46    1.26
20      –         7276     676        –      1.47    1.27
22      –         18094    1219       –      1.48    1.28
24      –         39135    1641       –      1.48    1.26
New Heuristic Functions
Inventing new admissible heuristic functions:
Relaxed problem = a problem with fewer restrictions on the actions than the original one
Idea: the cost of an optimal solution to a relaxed problem is a heuristic for the original problem!
  The optimal solution cost of the relaxed problem is no greater than the optimal solution cost of the real problem.
Admissible: clearly fulfilled!
New Heuristic Functions
Example: 8-puzzle
Original rule: A tile can move from square A to square B if A is
horizontally or vertically adjacent to B and B is blank
Relaxed problems:
a) A tile can move from square A to square B if A is adjacent to
B
b) A tile can move from square A to square B if B is blank
c) A tile can move from square A to square B
Derived heuristics:
a) ⇒ Manhattan distance h2
b) ⇒ ?
c) ⇒ Number of misplaced tiles h1
Local Search Strategies:
Content
Local Search Strategies
Introduction
Hill-climbing
Simulated annealing
Local beam search
Genetic Algorithms
Properties of local search algorithms
Local search algorithms . . .
- are also informed search algorithms
- do not care about the path to the solution!
- operate using a single current state
- use very little memory
- can often find reasonable solutions in large or infinite state spaces
Typical examples: Optimization problems!
Example
[Figure: a one-dimensional state-space landscape — objective function plotted over the state space, marked with the current state, a shoulder, a "flat" local maximum, a local maximum, and the global maximum.]
Algorithm
Function Hill-Climbing(problem) returns a state that is
a local maximum
inputs: problem, a problem
local variables: current, a node
neighbor, a node
current ← MakeNode(InitialState[problem])
loop do
neighbor ← a highest-valued successor of current
if Value[neighbor ] ≤ Value[current] then return State[current]
current ← neighbor
This is steepest-ascent hill climbing!
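The pseudocode above translates almost line by line into Python. The toy objective and the ±1 integer successor function below are assumptions for illustration; they stand in for the problem's successor and evaluation functions.

```python
# Sketch of steepest-ascent hill climbing, following the pseudocode above.
# `successors` and `value` stand in for the problem interface (assumptions).

def hill_climbing(start, successors, value):
    current = start
    while True:
        neighbor = max(successors(current), key=value)  # highest-valued successor
        if value(neighbor) <= value(current):           # no uphill move: local maximum
            return current
        current = neighbor

# Toy problem: maximize f(x) = -(x - 3)^2 over the integers, stepping by ±1.
f = lambda x: -(x - 3) ** 2
print(hill_climbing(0, lambda x: [x - 1, x + 1], f))    # climbs 0 → 1 → 2 → 3
```

On this unimodal toy landscape the climb always reaches the global maximum; the following slides show why that is not true in general.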
Example: 8-queens
- Each state has all 8 queens on the board, one per column
- Successor function: returns all possible states generated by moving one single queen within its column
- Each state has 8 × 7 = 56 successors
- Heuristic function: # of pairs of queens attacking each other (directly or indirectly!)
- Global minimum if h is zero!
Example: 8-queens (cont)
[Figure: left — an 8-queens state with h = 17, each square annotated with the h value of the successor obtained by moving that column's queen there; right — a local minimum with h = 1.]
Problems
- Getting stuck in local maxima
- Getting stuck on ridges (= sequences of local maxima that are not directly connected)
- Getting stuck on a plateau: a flat area
Example 8-queens: steepest-ascent hill-climbing gets stuck 86% of the time
Variants
Stochastic hill-climbing: choose at random from among the uphill moves
Convergence is slower, but sometimes finds better solutions!
First-choice hill-climbing: generate successors randomly until one is better than the current state
A good strategy if a state has very many successors!
Random-restart hill-climbing: "If at first you don't succeed, try, try again"
Simulated Annealing – General Idea
Annealing: a process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to coalesce into a low-energy crystalline state.
Idea: Think of getting a ping-pong ball into the deepest crevice of a bumpy surface.
If you simply let the ball roll, it settles in the nearest local minimum.
But if you shake the surface more or less hard . . .
Algorithm: First shake hard, then gradually reduce the intensity of the shaking.
Algorithm
Function Simulated-Annealing(problem, schedule) returns a sol. state
inputs: problem, a problem
schedule, a mapping from time to "temperature"
local variables: current, a node
next, a node
T, a "temperature" controlling the probab. of downward steps
current ← MakeNode(InitialState[problem])
for t ← 1 to ∞ do
T ← schedule[t]
if T = 0 then return current
next ← a randomly selected successor of current
∆E ← Value[next] − Value[current]
if ∆E > 0 then current ← next
else current ← next only with probability e^(∆E/T)
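The pseudocode can be sketched in Python on the same toy objective as before. The geometric cooling schedule and the ±1 successor function are assumptions for the example; any schedule that decreases T slowly enough toward 0 fits the algorithm.

```python
import math, random

# Sketch of Simulated-Annealing, following the pseudocode above.
# The cooling schedule and successor function are assumptions.

def simulated_annealing(start, successors, value, schedule):
    current = start
    for t in range(1, 10**6):
        T = schedule(t)
        if T < 1e-9:                                   # T = 0: stop
            return current
        nxt = random.choice(successors(current))
        delta = value(nxt) - value(current)            # ΔE
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt    # downhill moves accepted with probability e^(ΔE/T)
    return current

f = lambda x: -(x - 3) ** 2
schedule = lambda t: 10 * 0.95 ** t                    # gradually "cool down"
random.seed(0)
print(simulated_annealing(0, lambda x: [x - 1, x + 1], f, schedule))
```

At high T almost every move is accepted (hard shaking); as T → 0 downhill moves become ever less likely, so the state freezes near a maximum.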
Local beam search – General Idea
Idea: Start with k randomly selected states
Successors: Compute the successors of the k states in parallel
Information sharing: information is shared between the parallel searches
Example: If one state generates several good successors and the other states only poor ones, the first informs the others: "the grass is greener over here, come over!"
Variant: Stochastic beam search chooses the k successors at random
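A minimal sketch of the idea: pool all successors of the current k states and keep the k best overall, which is exactly the "information sharing" between the parallel searches. The toy objective and the fixed step count are assumptions for illustration.

```python
# Sketch of local beam search on a toy 1-D objective (assumptions:
# the objective, ±1 successors, and a fixed number of steps).

def local_beam_search(starts, successors, value, steps=50):
    k = len(starts)
    beam = list(starts)
    for _ in range(steps):
        pool = set(beam)                     # keep current states as candidates
        for state in beam:
            pool.update(successors(state))
        beam = sorted(pool, key=value, reverse=True)[:k]  # best k overall
    return max(beam, key=value)

f = lambda x: -(x - 3) ** 2
print(local_beam_search([-10, 0, 10], lambda x: [x - 1, x + 1], f))  # → 3
```

Note how the beam quickly concentrates around the best region: this is the strength of the method, but also why it can lose diversity (which stochastic beam search counteracts).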
Genetic Algorithms – General Idea
Idea: A variant of stochastic beam search in which successor states are generated by combining two parent states
Background: This corresponds to sexual reproduction rather than the asexual reproduction of beam search
Start: k randomly selected states (the population)
Each state (individual) is represented as a string over a finite alphabet (often 0/1)
8-queens example

(a) Initial     (b) Fitness   (c) Selection   (d) Crossover   (e) Mutation
    population      function
24748552        24 (31%)      32752411        32748552        32748152
32752411        23 (29%)      24748552        24752411        24752411
24415124        20 (26%)      32752411        32752124        32252124
32543213        11 (14%)      24415124        24415411        24415417

Fitness function = # of non-attacking pairs of queens (optimum = 28)
8-Queens (cont)
Crossover of the first two parents from the previous slide (crossover point after the third digit):
327|52411 + 247|48552 = 327|48552
[Figure: the two parent states and the offspring shown as 8-queens boards.]
Algorithm
Function Genetic-Algorithm(population, Fitness-Fn) returns individual
inputs: population, a set of individuals
Fitness-Fn, a function that measures the fitness
of an individual
repeat
new-population ← empty set
loop for i from 1 to Size(population) do
x ← Random-Selection(population,Fitness-Fn)
y ← Random-Selection(population,Fitness-Fn)
child ← Reproduce(x,y )
if (small random probability) then child ← Mutate(child)
add child to new-population
population ← new-population
until some individual is fit enough, or enough time has elapsed
return the best individual in population according to Fitness-Fn
Algorithm (cont)
Function Reproduce(x, y ) returns an individual
inputs: x,y , parent individuals
n ← Length(x)
c ← random number from 1 to n
return Append(Substring(x,1,c),Substring(y ,c + 1,n))
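Reproduce (and a simple Mutate, which the main loop also needs) can be sketched directly for the 8-queens representation: individuals are 8-character digit strings as on the slides. The optional fixed crossover point c is an addition for illustration; the algorithm picks it at random.

```python
import random

# Sketch of Reproduce and Mutate for the 8-queens GA.
# Individuals: 8-character digit strings (column encoding from the slides).

def reproduce(x, y, c=None):
    if c is None:
        c = random.randint(1, len(x) - 1)   # random crossover point
    return x[:c] + y[c:]                    # head of x + tail of y

def mutate(child, alphabet="12345678"):
    i = random.randrange(len(child))        # random position, random new digit
    return child[:i] + random.choice(alphabet) + child[i + 1:]

# The crossover from the 8-queens slide, split after the 3rd digit:
print(reproduce("32752411", "24748552", 3))  # → 32748552
```

With these two helpers, Genetic-Algorithm from the previous slide is a straightforward loop over Random-Selection, reproduce, and (with small probability) mutate.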
Constraint Satisfaction Problems: Content
Constraint Satisfaction Problems
Introduction
Formal Definition
Visualization: Constraint Graph
Typology
Backtracking Search
Improving Efficiency
Propagating information
Introduction
Search problems as seen so far:
- Each state is a black box!
- It supports goal-test, successor, eval
Constraint Satisfaction Problems (CSP):
- A state is defined by variables with values from domains
- The goal test is a set of constraints specifying the allowable values for some variables
Definitions
A CSP is defined by
Variables: X1, X2, . . . , Xn
Constraints: C1, C2, . . . , Cm
- Each variable Xi has a domain Di
- Each constraint Ci involves some variables and constrains their values
- A state of the problem is an assignment of values to variables
- An assignment is consistent if it does not violate any constraint
- Complete assignment = assignment to all variables
- Solution of a CSP = complete consistent assignment
Example: Map-Coloring
[Figure: map of Australia with regions Western Australia (WA), Northern Territory (NT), Queensland (Q), South Australia (SA), New South Wales (NSW), Victoria (V), Tasmania (T).]
- Variables: WA, NT, Q, NSW, V, SA, T
- Domains: Di = {red, green, blue}
- Constraints: adjacent regions must have different colors
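The map-coloring CSP is small enough to write down as plain data. The dict encoding below is an assumption for illustration; the variables and adjacencies follow the map.

```python
# Hedged sketch: the Australia map-coloring CSP as plain Python data.

neighbors = {
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":   ["SA", "NSW"],
    "T":   [],                     # Tasmania borders no other region
}
domains = {v: {"red", "green", "blue"} for v in neighbors}

def consistent(var, value, assignment):
    """Would assigning var = value violate an adjacency constraint?"""
    return all(assignment.get(n) != value for n in neighbors[var])

print(consistent("SA", "red", {"WA": "red"}))  # → False (SA borders WA)
print(consistent("T", "red", {"WA": "red"}))   # → True
```

Every constraint here involves exactly two variables, so this is a binary CSP; the `neighbors` dict is precisely the arc list of its constraint graph.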
Example
[Figure: the constraint graph for the Australia map — nodes WA, NT, SA, Q, NSW, V, with arcs between adjacent regions, and the isolated node T.]
Binary CSP: each constraint relates at most two variables
Constraint graph: nodes are variables, arcs show constraints
CSP as search problem
A CSP can be seen as a search problem:
Initial state: the empty assignment {}; all variables are unassigned
Successor function: assign a value to any unassigned variable, provided there is no conflict with previously assigned variables and the constraints
Goal test: the assignment is complete
Path cost: a constant cost for every step
Types of CSPs
Discrete variables, finite domains: map-coloring, 8-queens
If the domain size is d and there are n variables, there are O(d^n) possible complete assignments
Boolean CSP: each variable has only two values; example: 3SAT
Discrete variables, infinite domains: scheduling
Needed: a constraint language, e.g. StartJob1 + 5 ≤ StartJob3
Linear constraints: algorithms exist for integer variables
Non-linear constraints may be undecidable
Continuous variables: linear programming
Types of CSPs (cont)
Unary constraints: involve a single variable, e.g. SA ≠ green
Binary constraints: involve pairs of variables, e.g. SA ≠ WA
Higher-order constraints: involve more variables . . .
Preference constraints: e.g. green is better than red
Example: Cryptarithmetic

    T W O
  + T W O
  -------
  F O U R

Constraints (X1, X2, X3 are the carry digits):
O + O = R + 10·X1
X1 + W + W = U + 10·X2
X2 + T + T = O + 10·X3
X3 = F
Alldiff(F, T, U, W, R, O)
Typical Examples
- Assignment problems, timetabling problems
- Hardware configuration
- Spreadsheets
- Transportation scheduling
- Floorplanning
- ...
Backtracking
Naive depth-first search:
- branching factor at the top: nd (any of d values can be assigned to any of n variables)
- on the next level, branching factor (n − 1)d
- so n! · d^n leaves in total
- but there are "only" d^n possible complete assignments!!!
What helps us: commutativity of variable assignments!
Hence assign only a single variable at each node: then b = d and there are only d^n leaves.
Depth-first search for CSPs with single-variable assignment = backtracking search!
Algorithm
Function Backtracking-Search(csp) returns solution/failure
return Recursive-Backtracking({},csp)
Function Recursive-Backtracking(assignment, csp) returns sol/fail
if assignment is complete then return assignment
var ← Select-Unassigned-Variable(Variables[csp],assignment,csp)
for each value in Order-Domain-Values(var, assignment, csp) do
if value is consistent with assignment
according to Constraints[csp] then
add {var =value} to assignment
result ← Recursive-Backtracking(assignment, csp)
if result ≠ failure then return result
remove {var =value} from assignment
return failure
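Recursive-Backtracking can be transcribed almost directly for the Australia map-coloring example. The data layout is an assumption for illustration; variable and value ordering are left naive, as in the basic algorithm (the ordering heuristics come on the next slides).

```python
# Sketch of Recursive-Backtracking on the Australia map-coloring CSP.
# Naive variable/value ordering; data layout is an assumption.

neighbors = {
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":   ["SA", "NSW"],
    "T":   [],
}
variables = list(neighbors)
colors = ["red", "green", "blue"]

def backtracking(assignment):
    if len(assignment) == len(variables):               # complete → solution
        return assignment
    var = next(v for v in variables if v not in assignment)  # select unassigned var
    for value in colors:
        if all(assignment.get(n) != value for n in neighbors[var]):  # consistent?
            assignment[var] = value
            result = backtracking(assignment)
            if result is not None:
                return result
            del assignment[var]                         # undo and try next value
    return None                                         # failure

print(backtracking({}))
```

For this instance the naive ordering finds a solution without ever backtracking; the point of MRV and least-constraining-value (below) is to make that the typical case.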
Example
[Figure: part of the backtracking search tree for map-coloring — the root branches into WA=red, WA=green, WA=blue; under WA=red the successors NT=green and NT=blue; under WA=red, NT=green the successors Q=red and Q=blue.]
Improving Efficiency
Key questions for improving the efficiency of backtracking search:
- Which variable should be assigned next?
- In what order should its values be tried?
- Can inevitable failures be detected early?
- Can we take advantage of the problem structure?
Selecting variable + value
var ← Select-Unassigned-Variable(Variables[csp],assignment,csp)
⇒ Choose the variable with the minimum remaining values (MRV)!
⇒ For tie-breaks: choose the variable involved in the largest number of constraints
⇒ For this variable, choose the least constraining value, i.e. the one that rules out the fewest values in the remaining variables
Forward-checking FC
Idea: Keep track of remaining legal values for unassigned variables
Terminate search when any variable has no legal value left
FC propagates information from assigned to unassigned variables.
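One forward-checking step can be sketched as follows: after assigning var = value, remove value from the domains of var's unassigned neighbors and report failure as soon as some domain becomes empty. The data layout matches the map-coloring example; the copy-on-write style is an assumption for illustration.

```python
# Hedged sketch of one forward-checking step on the Australia CSP.

neighbors = {
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":   ["SA", "NSW"],
    "T":   [],
}

def forward_check(var, value, domains, assignment):
    """Return a pruned copy of the domains, or None if a domain is wiped out."""
    new = {v: set(d) for v, d in domains.items()}
    new[var] = {value}
    for n in neighbors[var]:
        if n not in assignment:
            new[n].discard(value)       # n can no longer take this value
            if not new[n]:
                return None             # n has no legal value left: terminate
    return new

domains = {v: {"red", "green", "blue"} for v in neighbors}
d1 = forward_check("WA", "red", domains, {})
print(sorted(d1["NT"]))                 # → ['blue', 'green']
```

Note that only the neighbors of the assigned variable are pruned: this is exactly the sense in which FC propagates information from assigned to unassigned variables, and why it misses failures that full constraint propagation would catch.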
Example of Propagating information

Forward checking on the Australia map — remaining domains after each assignment (R = red, G = green, B = blue):

               WA    NT    Q     NSW   V     SA    T
Initial        RGB   RGB   RGB   RGB   RGB   RGB   RGB
After WA=red   R     GB    RGB   RGB   RGB   GB    RGB
After Q=green  R     B     G     RB    RGB   B     RGB
After V=blue   R     B     G     R     B     –     RGB

After V=blue, SA has no legal value left: the search terminates here.
Constraint propagation CP
FC does not detect all failures early
Example: after WA=red and Q=green, NT and SA cannot both be blue!
Types of CP:
- arc consistency
- k-consistency
- path consistency
- strong k-consistency
Exercises
1 Give the initial state, goal test, successor function, and cost
function for each of the following. Choose a formulation that
is precise enough to be implemented!
a) You have to color a planar map using only four colors, in
such a way that no two adjacent regions have the same
color.
b) A 3-foot-tall monkey is in a room where some bananas
are suspended from the 8-foot ceiling. He would like to
get the bananas. The room contains two stackable,
movable, climbable 3-foot-high crates.
c) You have a program that outputs the message “illegal
input record” when fed a certain file of input records.
You know that processing of each input record is
independent of the other records. You want to discover
what record is illegal.
d) You have three jugs, measuring 12 gallons, 8 gallons, and
3 gallons, and a water faucet. You can fill the jugs up or
empty them out from one to another or onto the ground.
You need to measure exactly one gallon.
e) Three missionaries and three cannibals find themselves on
one side of a river. They have agreed that they would all
like to get to the other side. But the missionaries are not
sure what else the cannibals have agreed to. So the
missionaries want to manage the trip across the river in
such a way that the number of missionaries on either side
of the river is never less than the number of cannibals
on the same side. The only boat available holds
only two people at a time. How can everyone get across
the river without the missionaries risking being eaten?
Exercises
2 Consider a state space where the start state is number 1 and
the successor function for state n returns two states,
numbers 2n and 2n + 1.
a) Draw the portion of the state space for states 1 to 15.
b) Suppose that the goal state is 11. List the order in which
nodes will be visited for breadth-first search, depth-limited
search with limit 3, and iterative deepening search.
c) Would bidirectional search be appropriate for this
problem? If so, describe in detail how it would work.
d) What is the branching factor in each direction of the
bidirectional search?
e) Does the answer to c) suggest a reformulation of the
problem that would allow you to solve the problem of
getting from state 1 to a given goal state with almost no
search?
Exercises
3 Consider again the Water Jug Problem (Exercise 1d).
a) Solve the problem once using BFS, once using DFS.
b) Is bidirectional search appropriate for this problem?
4 Consider again the Missionaries + Cannibals problem
(Exercise 1e).
a) Draw a diagram of the complete state space.
b) Implement and solve the problem optimally using an
appropriate search algorithm. Is it a good idea to check
for repeated states?
c) Why do you think people have a hard time solving this
puzzle, given that the state space is so simple?
Exercises
5 Consider the problem of finding the shortest path between two
points on a plane that has convex polygon obstacles, as shown
in the figure below.
[Figure: a plane scattered with convex polygonal obstacles, with a start point S and a goal point G.]
a) Suppose the state space consists of all positions (x, y) in
the plane. How many states are there? How many paths
are there to the goal?
Exercises
b) Explain why the shortest path from one polygon vertex to
any other in the scene must consist of straight-line
segments joining some of the vertices of the polygons.
Define a good state space now. How large is the state
space?
c) Define the necessary functions to implement the search
problem, including a successor function that takes a
vertex as input and returns the set of vertices that can be
reached in a straight line from the given vertex. (Do not
forget the neighbors on the same polygon.) Use the
straight-line distance for the heuristic function.
d) Apply one or more of the algorithms in this chapter to
solve a range of problems in the domain, and comment
on their performance.
Exercises
6 Write a program that takes as input two web page URLs
and finds a path of links from one to the other.
a) What is an appropriate search strategy? Is bidirectional
search a good idea?
b) Could a search engine be used to implement a
predecessor function?
c) Consider the two URLs below as input for your algorithm.
How many URLs are on the shortest path your
algorithm finds?
http://www.gnu.org and http://www.hti.bfh.ch
Exercises
7 Prove each of the following statements:
- Breadth-first search is a special case of uniform-cost
search.
- Breadth-first search, depth-first search, and uniform-cost
search are special cases of best-first search.
- Uniform-cost search is a special case of A* search.
8 The straight-line distance heuristic leads greedy best-first
search astray on the problem of going from Iasi to Fagaras (in the
Romania example), but the heuristic is perfect in the opposite
direction, i.e. going from Fagaras to Iasi. Are there problems
for which the heuristic is misleading in both directions? If so,
give an example!
Exercises
9 The traveling salesman problem (TSP) can be solved via the
minimum spanning tree (MST) heuristic, which is used to
estimate the cost of completing a tour, given that a partial
tour has already been constructed. The MST cost of a set of
cities is the smallest sum of the link costs of any tree that
connects all cities.
a) Show how this heuristic can be derived from a relaxed
version of the TSP.
b) Show that the MST heuristic dominates straight-line
distance.
c) Write a problem-generator for instances of the TSP
where the cities are represented by random points in the
unit square (= square with length and height equal to 1).
d) Implement a good algorithm for constructing the MST.
Use this algorithm with an admissible search algorithm to
solve instances of the TSP.
Exercises
10 Program a genetic algorithm for the n-queens problem.
a) Run the program for different n and random initial
states. Compare it to other algorithms that solve the
problem.
b) Systematically increase and decrease the mutation
probability p. Make a graphical comparison over a large
set of n-queens tests with different p's.
Exercises
11 How many solutions are there for the map-coloring problem
(the Australia example)?
12 Solve the cryptarithmetic problem (TWO + TWO = FOUR) by
hand using backtracking, forward checking, and the MRV and
least-constraining-value heuristics.
Exercises
13 Consider the following logic puzzle: In five houses, each with
a different color, live five persons of different nationalities, each
of whom prefers a different brand of cigarette, a different drink,
and a different pet. Given the following facts, the question to
answer is: "Where does the zebra live, and in which house do
they drink water?"
- The Englishman lives in a red house.
- The Spaniard owns the dog.
- The Norwegian lives in the first house on the left.
- Kools are smoked in the yellow house.
- The man who smokes Chesterfields lives in the house
next to the man with the fox.
- The Norwegian lives next to the blue house.
Exercises
- The Winston smoker owns snails.
- The Lucky Strike smoker drinks orange juice.
- The Ukrainian drinks tea.
- The Japanese smokes Parliaments.
- Kools are smoked in the house next to the house where
the horse is kept.
- Coffee is drunk in the green house.
- The green house is immediately to the right (your right!)
of the ivory house.
- Milk is drunk in the middle house.
Discuss different representations of this problem as a CSP.
Why would one prefer one representation over another?
Solve the problem!