DCP 1172
Introduction to Artificial Intelligence
Chang-Sheng Chen
Topics Covered:
• Review of DCP 1172
Summary/Review
• Introduction to Artificial Intelligence
• Problem-solving
• Problem formulation
• CSP, Game Search, Planning,…etc.
• Search
• Best-first search, A*, GA, etc.
• Heuristics
• local search, Means-ends analysis, etc.
• Knowledge Representation
• Logic/Ontology/Computation model
• Rules/Semantic networks/Frame/… etc
• Logical Reasoning
• Propositional logic, predicate logic
• Forward reasoning, backward reasoning
• Logic programming
• Monotonic reasoning, non-monotonic reasoning
Classification of knowledge systems
KNOWLEDGE SYSTEMS
• Symbolic (derivability, soundness, completeness)
• Rules (efficient computability)
• Semantic networks (eloquence, simplicity)
• Frames (modularity, inheritance)
Representation we have included
• Propositional and predicate logic
• Semantic nets
• Search trees
• Frames
Knowledge-Based Agents
• A knowledge representation is a formal scheme that dictates how an agent is
  going to represent its knowledge in the knowledge base.
  • Syntax (語法): rules that determine the possible strings in the language.
  • Semantics (語意): rules that determine a mapping from sentences in the
    representation to situations in the world.
• Knowledge Representation = Logic + Ontology + Computation
Knowledge Representation
= Logic + Ontology + Computation
• Knowledge representation (KR) is a multidisciplinary subject that applies theories and
techniques from three fields:
• Logic
  • provides the formal structure and rules of inference.
• Ontology
  • defines the kinds of things that exist in the application domain.
• Computation
  • supports the applications that distinguish knowledge representation from
    pure philosophy.
KR = Logic + Ontology + Computation (cont.)
• Knowledge representation is the application of logic
and ontology to the task of constructing computable
models for some domain.
• Without logic, a knowledge representation is
vague, with no criteria for determining whether
statements are redundant or contradictory.
• Without ontology, the terms and symbols are
ill-defined, confused, and confusing.
• Without computable models, the logic and
ontology cannot be implemented in a computer
program.
Monotonic Reasoning
• Monotonic reasoning
• If a conclusion C can be derived from a set of expressions,
S, then any number of additional expressions being added
to S cannot change the truth value of C, provided the
expressions in S remain consistent.
• Propositional logic and predicate logic are monotonic
reasoning systems.
• In other words, a monotonic reasoning system that stores
facts about the real world can deduce new facts from its
existing facts but would never have cause to delete or
modify an existing fact (unless the world changed).
• Hence, the number of facts the system stores will
increase monotonically.
Review
• Agents can use search to find useful actions
based on looking into the future
• Agents can use logic to complement search to
represent and reason about
• Unseen parts of the current environment
• Past environments (where are my keys)
• Future environments
• And they can play a mean game of chess
A Framework : What is AI?
• Systems that think like humans: "The exciting new effort to make computers
  think ... machines with minds, in the full and literal sense" (Haugeland, 1985)
• Systems that think rationally: "The study of mental faculties through the use
  of computational models" (Charniak et al., 1985)
• Systems that act like humans: "The art of creating machines that perform
  functions that require intelligence when performed by people" (Kurzweil, 1990)
• Systems that act rationally: "A field of study that seeks to explain and
  emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990)
The Architectural Components of AI Systems
• State-space search
• Knowledge representation
• Logical reasoning
• Reasoning under uncertainty [not covered]
• Learning [not covered]
Acting Humanly: The Turing Test
• Alan Turing's 1950 article Computing Machinery and
Intelligence discussed conditions for considering a
machine to be intelligent
• “Can machines think?” → “Can machines behave intelligently?”
• The Turing test (The Imitation Game): Operational
definition of intelligence.
What would a computer need to pass the
Turing test? (1)
• Natural language processing: to communicate with
examiner.
• Knowledge representation: to store and retrieve
information provided before or during interrogation.
• Automated reasoning: to use the stored information to
answer questions and to draw new conclusions.
• Machine learning: to adapt to new circumstances and to
detect and extrapolate patterns.
Acting Humanly: The Full Turing Test
Problem:
1) The Turing test is not reproducible,
constructive, or amenable to
mathematical analysis.
2) What about physical
interaction with interrogator
and environment?
What would a computer need to pass the
Turing test? (2)
• Vision (for Total Turing test): to recognize the
examiner’s actions and various objects presented by the
examiner.
• Motor control (total test): to act upon objects as
requested.
• Other senses (total test): such as audition, smell, touch,
etc.
Summary - Intelligent Agents
• Intelligent Agents:
• Anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through its
actuators to maximize progress towards its goals.
• PAGE (Percepts, Actions, Goals, Environment)
• Described as a Perception (sequence) to Action Mapping: f : P* → A
  • Using a look-up table, closed form, etc.
• Agent Types: Reflex, state-based, goal-based, utility-based
• Rational Action: The action that maximizes the
expected value of the performance measure given
the percept sequence to date
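As a minimal illustration of the f : P* → A mapping realized with a look-up table (an invented example, not from the slides; the percepts, actions, and table entries are placeholders):

    # Sketch of a table-driven agent realizing f : P* -> A.
    # The percepts, actions, and table entries are invented for illustration.
    percepts = []
    table = {
        ("dirty",): "suck",
        ("clean",): "move-right",
        ("clean", "dirty"): "suck",
    }

    def table_driven_agent(percept):
        percepts.append(percept)
        # Unlisted percept sequences fall back to a default action.
        return table.get(tuple(percepts), "no-op")

    print(table_driven_agent("clean"))   # -> move-right
    print(table_driven_agent("dirty"))   # -> suck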
Summary - Problem solving and search
• Problem formulation usually requires abstracting away
real-world details to define a state space that can be
explored using computer algorithms.
• Uninformed search (depth-first, breadth-first)
• Variety of uninformed search strategies; difference lies in
method used to pick node that will be further expanded.
• Informed search (Heuristic functions, best-first,
A*)
• Iterative deepening search only uses linear space and not
much more time than other uninformed search strategies.
• Once a problem is formulated in abstract form, complexity
analysis helps us pick the best algorithm to solve it.
The Problem Space
• Formal way to specify the details of a specific problem;
captures critical features that influence problem
solving
• Problem Space: Includes the initial, intermediate
and goal states of the problem.
• Also includes the problem solver’s knowledge at each
of these steps.
The Problem Space, Illustrated
Planning as a Real-World Problem
• What is Planning?
• Planning is a search problem that requires finding an efficient sequence of
actions that transforms a system from a given starting state to a goal state.
• Planning problems have a wide range of applications in the real world:
• planning in daily life
• game world
• workflow management
Features of Planning Problems
• Large search space
• Action is associated with system states
• Restrictions on the action sequence
• Valid solution may not exist
• Optimization requirement
Planning - Information Processing Approach
• Elements in the Representation of a Problem:
• The givens (or start state)
• The goal (or end state)
• The operators
• The constraints
• Example: Tower of Hanoi Problem
• Concept of the problem solving process
• Heuristics for moving through the problem space
• Means-ends analysis
Problem Space: Operators and Goals
• Operators: The set of legal moves that can be
performed during problem solving.
• Goal: Ultimate solution to the problem.
• Well-defined problems explicitly specify the final goal.
  • Tower of Hanoi -- all disks moved to target peg
  • Algebra equation -- x = ??
• Ill-defined problems only vaguely specify the goal state, the operators, or both.
  • Write a coherent essay about …
  • Become a millionaire!
Algorithms vs. Heuristics
• Algorithms
• guarantee a solution to a problem
• e.g., algebra: 3x + 4 = 2x
• Usually problem specific
• Heuristics
• don't guarantee a solution to a problem,
• cut down search
• can be used on a lot of problems
Heuristics: Means-Ends Analysis
• Identify a difference between current state and
goal state
• Set a subgoal to reduce the difference.
• Apply an operator to reduce the difference
• (If operator can’t be applied, new
subgoal = remove obstacle that prevents
applying the operator)
Heuristics: Means-Ends Analysis
• Unlike search techniques, means-ends analysis can
select an action even if it is not possible in the
current state.
• If a planner selects an action that results in the goal
state but is not currently possible, then the conditions
necessary for carrying out that action are set as a new goal.
• Break the problem down into subgoals and solve these one
at a time (E.g. Tower of Hanoi )
• First concentrate on getting the large disk to the
third peg
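A rough sketch of the means-ends analysis loop described above, in Python. The interface is assumed for this illustration, not part of the course material: differences(state, goal) returns the differences with the most important first, relevant_ops(diff) returns operators that could reduce a difference, and each operator exposes applicable(state), apply(state), and preconditions().

    # Sketch of means-ends analysis (MEA) with an assumed, illustrative interface.
    def means_ends(state, goal, differences, relevant_ops):
        diffs = differences(state, goal)
        if not diffs:
            return state                                  # no difference: goal reached
        for op in relevant_ops(diffs[0]):                 # attack the main difference
            if op.applicable(state):
                result = means_ends(op.apply(state), goal, differences, relevant_ops)
            else:
                # Operator not applicable: new subgoal = remove the obstacle,
                # i.e. reach a state satisfying the operator's preconditions.
                sub = means_ends(state, op.preconditions(), differences, relevant_ops)
                result = (means_ends(op.apply(sub), goal, differences, relevant_ops)
                          if sub is not None and op.applicable(sub) else None)
            if result is not None:
                return result
        return None                                       # no operator helps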
Best-first Search Summary
• Best-first search = general search, where the minimum-cost
nodes (according to some measure) are expanded first.
• Greedy search = best-first with the estimated cost to
reach the goal as a heuristic measure.
- Generally faster than uninformed search
- not optimal
- not complete.
• A* search = best-first with measure = path cost so far
+ estimated path cost to goal.
- combines advantages of uniform-cost and greedy searches
- complete, optimal and optimally efficient
- space complexity still exponential
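To make the A* measure (path cost so far plus estimated remaining cost) concrete, here is a small sketch of A* over an explicit graph. This is my illustration, not course code; the toy graph, cost values, and zero heuristic are invented for the example:

    import heapq

    def a_star(start, goal, neighbors, h):
        # A* sketch: neighbors(n) yields (step_cost, successor) pairs and h(n)
        # estimates the remaining cost to the goal, so f(n) = g(n) + h(n).
        frontier = [(h(start), 0, start, [start])]        # (f, g, node, path)
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for cost, succ in neighbors(node):
                new_g = g + cost
                if new_g < best_g.get(succ, float("inf")):
                    best_g[succ] = new_g
                    heapq.heappush(frontier, (new_g + h(succ), new_g, succ, path + [succ]))
        return None, float("inf")

    # Invented toy graph: a -> b -> c, with a zero heuristic (reduces to uniform cost).
    graph = {"a": [(1, "b")], "b": [(1, "c")], "c": []}
    print(a_star("a", "c", lambda n: graph[n], lambda n: 0))   # (['a', 'b', 'c'], 2)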
Local Search Summary
• We talked about two very simple approaches:
  • Hill-climbing
  • Simulated annealing
• These searches are sometimes referred to as iterative
improvement algorithms.
• This name stems from the fact that they always have
some solution available.
• They operate by successively attempting to improve
the known solution.
Local Search Summary
• The time complexity of heuristic algorithms depends on the
quality of the heuristic function.
• Good heuristics can sometimes be constructed by
examining the problem definition or by generalizing
from experience with the problem class.
• Iterative improvement algorithms keep only a single
state in memory.
• Can get stuck in local extrema;
• simulated annealing provides a way to escape local
extrema, and is complete and optimal given a slow
enough cooling schedule.
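A compact sketch of the simulated annealing idea: always accept uphill moves, and accept downhill moves with a probability that shrinks as the temperature cools. The objective, neighbor function, and cooling schedule below are placeholders invented for illustration:

    import math, random

    def simulated_annealing(start, value, random_neighbor, schedule):
        # Sketch: value(s) is the objective to maximize, random_neighbor(s)
        # proposes a successor, schedule(t) gives the temperature at step t.
        current = start
        for t in range(1, 10_000):
            T = schedule(t)
            if T <= 1e-9:
                return current
            nxt = random_neighbor(current)
            delta = value(nxt) - value(current)
            # Always accept improvements; accept worse moves with prob. e^(delta/T).
            if delta > 0 or random.random() < math.exp(delta / T):
                current = nxt
        return current

    # Invented toy usage: maximize -(x-3)^2 over the integers by +/-1 moves.
    best = simulated_annealing(
        0,
        value=lambda x: -(x - 3) ** 2,
        random_neighbor=lambda x: x + random.choice([-1, 1]),
        schedule=lambda t: 100 * 0.95 ** t,
    )
    print(best)   # typically close to 3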
Constraint Satisfaction Problem (CSP)
• A constraint satisfaction problem (CSP) consists of variables with
constraints on them.
• In CSP problems, states are represented as sets of variables, each with
values chosen from some domain.
• A goal test consists of satisfying constraints on sets of variable/value
combinations.
• A goal state is one that has no constraint violations.
Constraint Satisfaction Problems (1)
• CSP Formulation 1 - incremental formulation as a standard search problem:
  • Initial state: the empty assignment {}; the start state has no variables assigned.
  • Successor function: assign a value to a variable at each step.
  • Goal test: the current assignment is complete (applied to completed states).
  • Path cost: a constant cost (e.g., 1) for every step.
• CSP formulation 2 - the complete-state formulation:
• Every state is a complete assignment that might or might not
satisfy the constraints.
• Local search methods work well for this formulation.
CSP Heuristics
• How do these heuristics differ from the ones that we
examined in the context of a standard state-space
search (for example, in route finding or the 8-puzzle)?
• Hint: domain-specific knowledge
• There are two places heuristics can help (see the sketch below):
  • Which variable to assign next
    • Minimum remaining values (MRV) heuristic: choose the most constrained variable.
    • Degree heuristic: choose the variable involved in the largest number of constraints.
  • Which value to assign to a variable
    • Least-constraining value heuristic: choose the value that leaves the most
      choices remaining for the other variables.
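Here is a small sketch of backtracking CSP search that uses MRV for variable ordering and the least-constraining value for value ordering. The map-coloring instance (regions, adjacency, colors) is invented for illustration and is not from the course:

    # Sketch: backtracking map coloring with MRV and least-constraining value.
    neighbors = {                       # hypothetical adjacency map
        "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"],
    }
    colors = ["red", "green", "blue"]

    def consistent(var, color, assignment):
        return all(assignment.get(n) != color for n in neighbors[var])

    def legal_values(var, assignment):
        return [c for c in colors if consistent(var, c, assignment)]

    def backtrack(assignment):
        if len(assignment) == len(neighbors):
            return assignment
        # MRV: choose the unassigned variable with the fewest legal values left.
        var = min((v for v in neighbors if v not in assignment),
                  key=lambda v: len(legal_values(v, assignment)))
        # Least-constraining value: prefer colors leaving neighbors the most options.
        def options_left(color):
            trial = {**assignment, var: color}
            return sum(len(legal_values(n, trial))
                       for n in neighbors[var] if n not in assignment)
        for color in sorted(legal_values(var, assignment), key=options_left, reverse=True):
            result = backtrack({**assignment, var: color})
            if result is not None:
                return result
        return None

    print(backtrack({}))   # e.g. {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}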
Why Study Games ? (1)
• Game playing was one of the first tasks undertaken in AI.
  • By 1950, chess had been studied by many forerunners in AI
    (e.g., Claude Shannon, Alan Turing, etc.)
• For AI researchers, the abstract nature of games makes
them appealing subjects for study.
• The state of a game is easy to represent,
• and agents are usually restricted to a small number
of actions,
• whose outcomes are defined by precise rules.
Two-player games
• A game formulated as a search problem:
• Initial state: board position and turn
• Successor function: definition of legal moves
• Terminal state: conditions for when game is over
• Utility function: a numeric value that describes the outcome of the game,
  e.g., -1, 0, 1 for loss, draw, win (AKA payoff function)
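A compact minimax sketch over this formulation (an illustration only; it omits alpha-beta pruning, and the successors, terminal, and utility functions are assumed to be supplied by the game):

    def minimax(state, to_move, successors, terminal, utility):
        # Minimax value sketch: to_move is +1 for the maximizing player and -1 for
        # the minimizing player; utility(state) is from the maximizer's viewpoint.
        if terminal(state):
            return utility(state)
        values = [minimax(s, -to_move, successors, terminal, utility)
                  for s in successors(state, to_move)]
        return max(values) if to_move == +1 else min(values)

Choosing an actual move is then just picking the successor with the best minimax value for the player to move.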
Why Study Games ? (2)
• Games are interesting because they are too
hard to solve.
• Games require the ability to make some decision
even when calculating the optimal decision is
infeasible.
• Games also penalize inefficiency severely.
• Game-playing research has therefore spawned a
number of interesting ideas on how to make the
best possible use of time.
Summary – Game Playing
Knowledge-Based Agents
• A knowledge-based agent is composed of a knowledge base and an inference
  mechanism.
  • A knowledge base is simply a repository of domain-specific things you know
    (sentences about the world), represented in some useful way.
• A knowledge-based agent operates by storing sentences about the world in its
  knowledge base, using the inference mechanism to infer new sentences, and
  using these sentences to decide what action to take.
  • The knowledge base cannot be a simple table, because an agent should be
    able to conclude facts about the world that are not already represented in
    the knowledge base.
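As an illustration of this operating cycle, here is a skeleton of the knowledge-based agent loop. It is only a sketch: the kb.tell/kb.ask interface and the query terms are assumptions for the example, not a specific system:

    class KnowledgeBasedAgent:
        # Sketch of the generic knowledge-based agent loop: TELL the KB what was
        # perceived, ASK it what to do, then TELL it which action was taken.
        def __init__(self, kb):
            self.kb = kb          # assumed to expose tell(sentence) and ask(query)
            self.t = 0            # time step

        def agent_program(self, percept):
            self.kb.tell(("percept", percept, self.t))
            action = self.kb.ask(("best_action", self.t))
            self.kb.tell(("did", action, self.t))
            self.t += 1
            return action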
Semantic networks
Linguists noticed long ago that the structure of a sentence can
be represented as a network.
• Words of the sentence are nodes, and they are bound by arcs
expressing relations between the words.
• The network as a whole represents in this way a meaning of the
sentence in terms of meanings of words and relations between
the words.
• This meaning is an approximation of the meaning people can
assign to the sentence, analogous in a way to other
approximate representations of the meaning, for instance, how
floating point numbers represent the approximate meaning of
real numbers.
Example
John must pick up his report in the morning and have
a meeting after lunch. After the meeting he will give the report to me.
[Figure: a semantic network for these two sentences. Its nodes are morning,
lunch, "pick up", "have a meeting", "give", report, John, his, and me; its arcs
carry time relations (before, after, at the time) and role relations
(what?, who?, whose?, to whom?).]
Example continued
Inferences can be made, depending on the properties of the relations
of a semantic network. Let us consider only time relations of the
network in our example, and encode the time relations by atomic
formulas as follows:
General knowledge:
    before(lunch, morning)
    after(morning, lunch)
Specific knowledge:
    after(lunch, have a meeting)
    after(have a meeting, give)
    at-the-time(morning, pick up)
Example continued
Inference rules:
    before(x,y), before(y,z)      ⇒  before(x,z)
    after(x,y)                    ⇒  before(y,x)
    at-the-time(x,z), before(y,z) ⇒  before(y,x)
Applying these rules, we can infer
    after(lunch, have a meeting)
    before(have a meeting, lunch)
    at-the-time(pick up, morning)
    before(lunch, pick up)
    before(lunch, morning)
etc.
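To show how such inferences can be mechanized, here is a small forward-chaining sketch in Python (my illustration, not part of the notes) that applies the three rules exactly as stated to the facts above, as (relation, x, y) tuples, until no new facts appear:

    # Sketch: naive forward chaining with the three time-relation rules above.
    facts = {
        ("before", "lunch", "morning"),
        ("after", "morning", "lunch"),
        ("after", "lunch", "have a meeting"),
        ("after", "have a meeting", "give"),
        ("at-the-time", "morning", "pick up"),
    }

    def apply_rules(facts):
        new = set()
        for (r1, a, b) in facts:
            if r1 == "after":                       # after(x,y) => before(y,x)
                new.add(("before", b, a))
            for (r2, c, d) in facts:
                # before(x,y), before(y,z) => before(x,z)
                if r1 == "before" and r2 == "before" and b == c:
                    new.add(("before", a, d))
                # at-the-time(x,z), before(y,z) => before(y,x)
                if r1 == "at-the-time" and r2 == "before" and d == b:
                    new.add(("before", c, a))
        return new

    # Iterate to a fixed point: reasoning is monotonic, facts are only ever added.
    while True:
        derived = apply_rules(facts) - facts
        if not derived:
            break
        facts |= derived

    print(sorted(facts))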
Rules
Rules are a well-known form of knowledge which is easy
to use. A rule is a pair
(condition, action)
which has the meaning: "If the condition is satisfied,
then the action can be taken." Also other modalities for
performing the action are possible - "must be taken", for
instance.
Using rules
• Let us have a set of rules called rules and functions
cond(p) and act(p) which select the condition part and
action part of a given rule p and present them in the
executable form.
• The following is a simple algorithm for problem solving
with rules:
    while not good do
        found := false;
        for p ∈ rules do
            if cond(p) then act(p); found := true fi
        od;
        if not found then failure fi
    od
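The same loop as a Python sketch (illustrative only; the rule set, cond, act, and the goal test good are assumed to be supplied by the application):

    def run_rules(rules, cond, act, good):
        # Fire applicable rules until the goal test 'good' is satisfied.
        # Fails if a full pass over the rules fires nothing.
        while not good():
            found = False
            for p in rules:
                if cond(p):
                    act(p)
                    found = True
            if not found:
                raise RuntimeError("failure: no applicable rule")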
Decision trees
A simple way to represent rules is a decision tree:
• a tree with nodes for attributes and arcs for attribute
values. Example:
    legs?
      two  -> hands?
                yes -> furry?
                         yes -> monkey
                         no  -> man
                no  -> bird
      four -> furry?
                yes -> animal
                no  -> table
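The same tree written as a nested dictionary with a small classifier, as a sketch (the encoding is my own, not from the notes):

    # Sketch: the decision tree above as nested dicts; inner nodes name an
    # attribute, arcs are attribute values, leaves are classifications.
    tree = {"legs": {
        "two":  {"hands": {"yes": {"furry": {"yes": "monkey", "no": "man"}},
                           "no": "bird"}},
        "four": {"furry": {"yes": "animal", "no": "table"}},
    }}

    def classify(tree, example):
        while isinstance(tree, dict):
            attribute = next(iter(tree))          # the attribute tested at this node
            tree = tree[attribute][example[attribute]]
        return tree

    print(classify(tree, {"legs": "two", "hands": "yes", "furry": "no"}))  # man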
Frames
1. The essence of the frame is that it is a module of knowledge
about something which we can call a concept. This can be a
situation, an object, a phenomenon, a relation.
2. Frames contain smaller pieces of knowledge: components,
attributes, actions which can be (or must be) taken when
conditions for taking an action occur.
3. Frames contain slots which are places to put pieces of
knowledge in. These pieces may be just concrete values of
attributes, more complicated objects, or even other frames. A
slot is being filled in when a frame is applied to represent a
particular situation, object or phenomenon.
Inheritance
An essential idea developed in connection with frames was inheritance.
Inheritance is a convenient way of reusing existing knowledge in describing new
frames. Knowing a frame f, one can describe a new frame as a kind of f, meaning
that the new frame inherits the properties of f, i.e. it will have these properties
in addition to its newly described properties. The inheritance relation expresses
very precisely the relation between super- and sub-concepts.
[Figure: two example inheritance hierarchies. One relates general concepts such
as ideas, events, actions, states, things, and abstract things; the other
specializes polygons into triangles and quadrangles, quadrangles into
parallelograms, and parallelograms into rectangles and rhombuses.]
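Frame-style inheritance maps naturally onto class inheritance. Here is a small sketch (my own illustration, with invented slot names) in which a new frame is described as "a kind of" an existing one and inherits its slots:

    # Sketch: frames as classes; slots with default fillers are inherited and can
    # be overridden or extended by sub-frames.
    class Polygon:
        slots = {"sides": None, "regular": False}

    class Quadrangle(Polygon):                          # "a kind of" Polygon
        slots = {**Polygon.slots, "sides": 4}

    class Parallelogram(Quadrangle):
        slots = {**Quadrangle.slots, "opposite_sides_parallel": True}

    class Rectangle(Parallelogram):
        slots = {**Parallelogram.slots, "angle": 90}

    print(Rectangle.slots)
    # {'sides': 4, 'regular': False, 'opposite_sides_parallel': True, 'angle': 90}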