Forward and Backward
Chaining
Rule-Based Systems
 Instead of representing knowledge in a relatively
declarative, static way (as a collection of things that
are true), rule-based systems represent knowledge
in terms of rules that tell you what
you should do or what you can conclude in
different situations.
 A rule-based system consists of a set of IF-THEN
rules, a set of facts, and some
interpreter controlling the application of the rules,
given the facts.
Two broad kinds of rule system
 Forward chaining systems and
 Backward chaining systems.
Forward chaining
is a data-driven method of deriving a particular
goal from a given knowledge base and set of
inference rules.
Inference rules are applied by matching facts to
the antecedents of the consequence relations in the
knowledge base.
Cont’d…
 In a forward chaining system you start with
the initial facts, and keep using the rules to
draw new conclusions (or take certain
actions) given those facts.
 Forward chaining systems are primarily
data-driven
 Inference rules are successively applied to
elements of the knowledge base until the
goal is reached
Cont’d…
 A search control method is needed to select
which element(s) of the knowledge base to
apply the inference rule to at any point in
the deduction.
 Facts in the system are represented in a
working memory, which is continually
updated.
Cont’d…
 Rules in the system represent possible actions to
take when specified conditions hold on items in
the working memory
 they are sometimes called condition-action rules
 The conditions are usually patterns that must
match items in the working memory.
Forward Chaining Systems
 Actions usually involve adding or deleting items
from the working memory.
 The interpreter controls the application of the rules,
given the working memory, thus controlling the
system's activity.
 It is based on a cycle of activity sometimes known
as a recognize-act cycle:
 The system first checks to find all the rules whose
conditions hold,
 selects one, and performs the actions in the action
part of the rule.
 The selection of a rule to fire is based on fixed
strategies, known as conflict resolution strategies.
Forward Chaining Systems
 The actions will result in a new working
memory, and the cycle begins again.
 This cycle will be repeated until either no
rules fire, or some specified goal state is
satisfied.
 Rule-based systems vary greatly in their
details and syntax, so the following
examples are only illustrative
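As one such illustration, the recognize-act cycle can be sketched in a few lines of Python. This is a minimal sketch, assuming a propositional representation: each rule is a (conditions, conclusion) pair, working memory is a set of facts, and conflict resolution is simply "first match".

```python
# Sketch of the recognize-act cycle for a forward chaining system.
# Rules are (frozenset_of_conditions, conclusion) pairs; working memory
# is a set of facts. Conflict resolution here is simply "first match".

def forward_chain(rules, facts, goal=None):
    memory = set(facts)                           # working memory
    while goal is None or goal not in memory:
        # Recognize: collect every rule whose conditions all hold and
        # whose conclusion is not already in working memory.
        conflict_set = [r for r in rules
                        if r[0] <= memory and r[1] not in memory]
        if not conflict_set:
            break                                 # no rule fires: stop
        conditions, conclusion = conflict_set[0]  # conflict resolution
        memory.add(conclusion)                    # act: update memory
    return memory
```

The cycle stops either when no rules fire or when the specified goal state is satisfied, exactly as described above.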
Forward Chaining Example
 Knowledge Base:
– If [X croaks and eats flies] Then [X is a frog]
– If [X chirps and sings] Then [X is a canary]
– If [X is a frog] Then [X is colored green]
– If [X is a canary] Then [X is colored yellow]
– [Fritz croaks and eats flies]
 Goal:
– [Fritz is colored Y]?
Forward Chaining Example: Trace
Working memory initially contains the fact [Fritz croaks and eats flies].
Step 1: [Fritz croaks and eats flies] matches the antecedent of
If [X croaks and eats flies] Then [X is a frog]
with X = Fritz, so [Fritz is a frog] is added to the working memory.
Step 2: [Fritz is a frog] matches the antecedent of
If [X is a frog] Then [X is colored green]
so [Fritz is colored green] is added to the working memory.
Step 3: [Fritz is colored green] matches the goal [Fritz is colored Y]
with Y = green, and the search stops.
CPSC 433 Artificial Intelligence
Backward chaining
 Backward chaining is a goal-driven method
of deriving a particular goal from a given
knowledge base and set of inference rules.
 Inference rules are applied by matching the
goal of the search to the consequents of the
relations stored in the knowledge base
Cont’d...
 When such a relation is found, the
antecedent of the relation is added to the
list of goals (and not into the knowledge
base, as is done in forward chaining)
 Search proceeds in this manner until a goal
can be matched against a fact in the
knowledge base
Cont’d…
 As with forward chaining, a search control
method is needed to select which goals will
be matched against which consequence
relations from the knowledge base
 backward chaining systems are goal-driven
Backward Chaining Systems
 In a backward chaining system you start
with some hypothesis (or goal) you are
trying to prove, and keep looking for rules
that would allow you to conclude that
hypothesis, perhaps setting new sub goals
to prove as you go
Cont’d
 So far we have looked at how rule-based systems can be
used to draw new conclusions from existing data, adding
these conclusions to a working memory
 This approach is most useful when you know all the
initial facts, but don't have much idea what the conclusion
might be
 If you DO know what the conclusion might be, or have
some specific hypothesis to test, forward chaining
systems may be inefficient
Backward chaining system
 Note that a backward chaining system does
NOT need to update a working memory.
 Instead it needs to keep track of which goals
it still needs to prove to establish its main hypothesis.
 Let's take an example
 ……………..
Forward chaining vs. backward
chaining
 Data-driven, forward chaining
– Starts with the initial given data and searches for the goal.
– At each iteration, the new conclusion (RHS) becomes the pattern to look for
next.
– Working memory contains true sentences (RHS's).
– Stops when the goal is reached.
 Goal-driven is the reverse.
– Starts with the goal and searches for the initial given data.
– At each iteration, the new premises (LHS) become the new sub goals, the
patterns to look for next.
– Working memory contains sub goals (LHS's) to be satisfied.
– Stops when all the premises (sub goals) of the fired productions are reached.
 The sense of the arrow is in effect reversed.
 Both repeatedly pick the next rule to fire.
Cont’d…..
condition → action
premise → conclusion

                    Forward chaining    Backward chaining
Starts with         premise             conclusion
Searches for        conclusion          premise
Working memory      true statements     sub goals to be proved
Stopping criterion  goal is reached     initial data are reached
                    Data-driven         Goal-driven
Combining forward- and
backward-chaining
 Begin with data and search forward until
the number of states becomes
unmanageably large.
 Switch to goal-directed search to use sub
goals to guide state selection.
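The two phases above can be sketched as follows. This is an assumption about one simple way to combine them, not a standard algorithm: chain forward until working memory exceeds a size limit, then switch to goal-directed backward search over the rules.

```python
# Sketch of a combined strategy: a bounded forward phase, then a
# backward phase that treats rule antecedents as sub goals.
def combined(rules, facts, goal, limit=10):
    memory = set(facts)
    changed = True
    while changed and len(memory) < limit:       # bounded forward phase
        changed = False
        for conditions, conclusion in rules:
            if conditions <= memory and conclusion not in memory:
                memory.add(conclusion)
                changed = True

    def prove(g):                                # backward phase
        if g in memory:
            return True
        return any(c == g and all(prove(s) for s in conds)
                   for conds, c in rules)

    return prove(goal)
```

Whatever the forward phase derived acts as an enlarged fact base for the backward phase, so the sub goal search tends to bottom out sooner.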
Strategies
 These strategies may help in getting reasonable
behavior from a forward chaining system,
 but the most important thing is how we write the
rules.
 They should be carefully constructed, with the
preconditions specifying as precisely as possible
when different rules should fire.
 Otherwise we will have little idea or control of
what will happen
Conclusion
 Data-driven, forward chaining
– conditions first, then actions
– working memory contains true statements
describing the current environment
 Goal-driven, backward chaining
– actions first, then conditions (sub goals)
– working memory contains sub goals to be
shown as true
Forwards vs. Backwards
Reasoning
 Whether you use forward or backwards reasoning
to solve a problem depends on the properties of
your rule set and initial facts.
 Sometimes, if you have some particular goal (to
test some hypothesis), then backward chaining
will be much more efficient, as you avoid
drawing conclusions from irrelevant facts.
 However, sometimes backward chaining can be
very wasteful - there may be many possible ways
of trying to prove something, and you may have to
try almost all of them before you find one that
works.
 Forward chaining may be better if you have lots of
things you want to prove,
 when you have a small set of initial facts, and
when there tend to be lots of different rules which
allow you to draw the same conclusion.
 Backward chaining may be better if you are trying
to prove a single fact, given a large set of initial
facts, and where, if you used forward chaining,
lots of rules would be eligible to fire in any cycle.