CSE4715 Artificial Intelligence Segment-1

Segment – 1
Concepts of Artificial Intelligence
© 2004 Prentice Hall, Inc. All rights reserved.
AI pre-history
• Philosophy: logic, methods of reasoning, mind as a physical system, foundations of learning, language, rationality
• Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability, probability
• Economics: utility, decision theory
• Neuroscience: physical substrate for mental activity
• Psychology: phenomena of perception and motor control, experimental techniques
• Computer engineering: building fast computers
• Control theory: design systems that maximize an objective function over time
• Linguistics: knowledge representation, grammar
Abridged history of AI
• 1943      McCulloch & Pitts: Boolean circuit model of brain
• 1950      Turing's "Computing Machinery and Intelligence"
• 1956      Dartmouth meeting: "Artificial Intelligence" adopted
• 1952—69   Look, Ma, no hands!
• 1950s     Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
• 1965      Robinson's complete algorithm for logical reasoning
• 1966—73   AI discovers computational complexity; neural network research almost disappears
• 1969—79   Early development of knowledge-based systems
• 1980--    AI becomes an industry
• 1986--    Neural networks return to popularity
• 1987--    AI becomes a science
• 1995--    The emergence of intelligent agents
The State of the Art
Robotic vehicles
Speech recognition
Game playing
Logistics planning
Robotics
Machine Translation
The State of the Art
• Computer beats human in a chess game.
• Computer-human conversation using speech recognition.
• Expert system controls a spacecraft.
• Robot can walk on stairs and hold a cup of water.
• Language translation for web pages.
• Home appliances use fuzzy logic.
• And many more
Agents
• An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon that
environment through actuators
• Human agent:
– eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators
• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators
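As a rough illustration of this percept-action view (not from the slides; the class and function names below are assumptions), an agent can be modelled as a mapping from percepts to actions, coupled to an environment by a sense-decide-act loop:

```python
class Agent:
    """An agent maps what its sensors perceive to an action for its actuators."""

    def program(self, percept):
        """Return an action for the given percept."""
        raise NotImplementedError


def run(agent, environment, steps=10):
    """Sense-decide-act loop; assumes the environment exposes percept() and execute()."""
    for _ in range(steps):
        percept = environment.percept()    # read sensors
        action = agent.program(percept)    # decide on an action
        environment.execute(action)        # drive actuators
```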
Rationality
• What is rational depends on:
– Performance measure defining success
– Agent's prior knowledge of the environment
– Actions the agent can perform
– Agent's percept sequence to date
Rationality
• A rational agent is different from an omniscient agent
– Percepts may not supply all relevant information
– An omniscient agent knows the actual outcome of its actions and can act accordingly
– E.g., in a card game, you don't know the other players' cards
• Rational is different from being perfect
– Rationality maximizes expected outcome, while perfection maximizes actual outcome
PEAS
• PEAS: Performance measure, Environment, Actuators,
Sensors
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi
driver:
– Performance measure: Safe, fast, legal, comfortable trip,
maximize profits
– Environment: Roads, other traffic, pedestrians, customers
– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer, engine
sensors, keyboard
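The PEAS description above can be written down as a simple data structure; this is only an illustrative sketch (the PEAS class and its field names are assumptions, not from the slides):

```python
from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

# Automated taxi driver, as specified on this slide
automated_taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
```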
PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's score on test
• Environment: Set of students
• Actuators: Screen display (exercises, suggestions, corrections)
• Sensors: Keyboard
Properties of task environment
• Fully observable (vs. partially observable)
– Access to the complete state vs. access to only a partial state
• Deterministic (vs. stochastic)
– Next state completely determined by the current state and the agent's action; otherwise stochastic
• Episodic (vs. sequential)
– Experience divided into atomic episodes vs. the current decision could affect all future decisions
• Static (vs. dynamic)
– Static environments don't change while the agent deliberates; dynamic environments do
• Discrete (vs. continuous)
– A finite number of distinct states vs. continuous states
• Single agent (vs. multiagent)
– An agent operating by itself in an environment vs. many agents acting in the same environment
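As a hedged sketch of how these dimensions are used, two example task environments can be tagged along each property; the classifications follow the usual textbook examples and the dictionary layout is purely illustrative:

```python
# property values: observable, deterministic, episodic, static, discrete, agents
task_environments = {
    "crossword puzzle": {"observable": "fully", "deterministic": True,
                         "episodic": False, "static": True,
                         "discrete": True, "agents": "single"},
    "taxi driving":     {"observable": "partially", "deterministic": False,
                         "episodic": False, "static": False,
                         "discrete": False, "agents": "multi"},
}

for name, props in task_environments.items():
    print(f"{name}: {props}")
```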
Agent types and architecture
• Four basic types in order of increasing generality:
– Simple reflex agents
– Model-based reflex agents (reflex agents with state)
– Goal-based agents
– Utility-based agents
Simple reflex agents
Simple but very limited intelligence.
The action does not depend on the percept history, only on the current percept.
Therefore there are no memory requirements.
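A minimal sketch of a simple reflex agent, using the two-square vacuum world that appears later in these slides: the agent looks only at the current percept (location, status) and applies fixed condition-action rules, keeping no memory.

```python
def reflex_vacuum_agent(percept):
    location, status = percept      # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                           # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
```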
Model-based reflex agents
• Maintain an internal state and a model of:
– How the world evolves independently of the agent
(e.g., an overtaking car gets closer from behind)
– How the agent's actions affect the world
(e.g., turning the wheel clockwise takes you right)
• Model-based agents update their state from the model and the current percept.
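A hedged sketch of that idea: the agent carries an internal state, updated from the last action and the new percept through a world model, and only then applies its condition-action rules. Here update_state and rules are placeholders for the model and the rule set.

```python
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.update_state = update_state  # model: (state, last_action, percept) -> new state
        self.rules = rules                # mapping: state -> action
        self.state = None
        self.last_action = None

    def program(self, percept):
        # fold the new percept and the effect of the last action into the state
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action
```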
Goal-based agents
• Is knowing the current state of the environment enough?
– The taxi can go left, right, or straight
• The agent also needs a goal
– A destination to get to
• It uses knowledge about the goal to guide its actions
– E.g., search, planning
Goal-based agents
• A reflex agent brakes when it sees brake lights; a goal-based agent reasons:
– Brake light → the car in front is stopping → I should stop → I should apply the brake
Utility-based agents
• Goals alone are not always enough
– Many action sequences get the taxi to its destination
– We also care about other things: how fast, how safe, …
• A utility function maps a state onto a real number
which describes the associated degree of “happiness”,
“goodness”, “success”.
• Where does the utility measure come from?
– Economics: money.
– Biology: number of offspring.
– Your life?
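As an illustrative sketch (the function names below are assumptions), a utility-based agent can choose among actions by scoring the state each action is predicted to lead to with its utility function:

```python
def best_action(state, actions, result, utility):
    """result(state, action) -> predicted next state; utility(state) -> real number."""
    return max(actions, key=lambda action: utility(result(state, action)))
```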
Problem-solving agents
A problem-solving agent first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.
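A hedged sketch of that formulate-search-execute cycle; formulate_goal, formulate_problem, search and execute stand in for whatever concrete procedures the agent uses:

```python
def problem_solving_agent(state, formulate_goal, formulate_problem, search, execute):
    """One cycle: formulate a goal and a problem, search for a solution, execute it."""
    goal = formulate_goal(state)
    problem = formulate_problem(state, goal)
    plan = search(problem)              # a sequence of actions, or None on failure
    for action in plan or []:
        state = execute(action)         # carry out the actions one at a time
    return state
```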
Example: Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
– be in Bucharest
• Formulate problem:
– states: various cities
– actions: drive between cities
• Find solution:
– sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
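A sketch of this formulation in code: a fragment of the Romanian road map as an adjacency list (distances omitted) and a breadth-first search that returns a city sequence from Arad to Bucharest. Only part of the map is included, so this is illustrative rather than complete.

```python
from collections import deque

roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad", "Lugoj"],
    "Lugoj": ["Timisoara"],
    "Bucharest": ["Fagaras", "Pitesti"],
}

def bfs_route(start, goal):
    """Breadth-first search over the road map; returns a list of cities."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in roads.get(path[-1], []):
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

print(bfs_route("Arad", "Bucharest"))   # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```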
Example: Romania
(Road map of Romania with city-to-city distances)
Problem types
• Deterministic, fully observable → single-state problem
– Agent knows exactly which state it will be in; solution is a sequence
• Non-observable → sensorless problem (conformant problem)
– Agent may have no idea where it is; solution is a sequence
• Nondeterministic and/or partially observable → contingency problem
– Percepts provide new information about the current state
– Often interleave search and execution
• Unknown state space → exploration problem
Example: vacuum world
(Refers to the eight-state vacuum-world figure: two squares A and B, each clean or dirty, robot in either square)
• Single-state, start in #5.
Solution?
Example: vacuum world
• Single-state, start in #5.
Solution? [Right, Suck]
• Sensorless, start in {1,2,3,4,5,6,7,8}
e.g., Right goes to {2,4,6,8}
Solution?
Example: vacuum world
• Sensorless, start in {1,2,3,4,5,6,7,8}
e.g., Right goes to {2,4,6,8}
Solution? [Right, Suck, Left, Suck]
Single-state problem formulation
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
– e.g., S(Arad) = {<Arad → Zerind, Zerind>, <Arad → Sibiu, Sibiu>, <Arad → Timisoara, Timisoara>}
3. goal test, which determines whether a given state is a goal state
– explicit, e.g., x = "at Bucharest"
– implicit, e.g., Checkmate(x)
4. path cost (additive)
– e.g., sum of distances, number of actions executed, etc.
– c(x,a,y) is the step cost, assumed to be ≥ 0
• A solution is a sequence of actions leading from the initial state to a goal state
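The four items above map naturally onto a small class; this is a hedged sketch, with illustrative names, of that interface:

```python
class Problem:
    def __init__(self, initial, goal=None):
        self.initial = initial          # 1. initial state
        self.goal = goal

    def successors(self, state):
        """2. successor function S(x): set of (action, next_state) pairs."""
        raise NotImplementedError

    def goal_test(self, state):
        """3. goal test: explicit comparison by default; override for implicit tests."""
        return state == self.goal

    def step_cost(self, state, action, next_state):
        """4. step cost c(x, a, y) >= 0; path cost is the sum of step costs."""
        return 1
```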
Vacuum world state space graph
• states?
• initial state?
• actions?
• goal test?
• path cost?
Vacuum world state space graph
• states? integer dirt and robot locations
• initial state? any state can be the initial state
• actions? Left, Right, Suck
• goal test? no dirt at all locations
• path cost? 1 per action
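A hedged sketch instantiating this formulation with the Problem interface sketched earlier: a state is (robot location, dirt in A, dirt in B), actions are Left, Right and Suck, the goal is no dirt anywhere, and every action costs 1 (the default step cost).

```python
class VacuumProblem(Problem):           # Problem as sketched in the earlier formulation
    def successors(self, state):
        loc, dirt_a, dirt_b = state
        pairs = [("Left",  ("A", dirt_a, dirt_b)),
                 ("Right", ("B", dirt_a, dirt_b))]
        if loc == "A":
            pairs.append(("Suck", ("A", False, dirt_b)))
        else:
            pairs.append(("Suck", ("B", dirt_a, False)))
        return pairs

    def goal_test(self, state):
        _, dirt_a, dirt_b = state
        return not dirt_a and not dirt_b

problem = VacuumProblem(initial=("A", True, True))   # robot in A, both squares dirty
```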
Example: The 8-puzzle
• states?
• actions?
• goal test?
• path cost?
Example: The 8-puzzle
• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move
[Note: optimal solution of the n-puzzle family is NP-hard]
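A hedged sketch of this formulation: a state is a flat 3×3 tuple with 0 for the blank, actions slide the blank left/right/up/down, each move costs 1, and the goal test compares against the given goal board.

```python
GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}   # index deltas in the flat tuple

def successors(state):
    """Return (action, next_state) pairs reachable by sliding the blank one square."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for action, delta in MOVES.items():
        if (action == "Left" and col == 0) or (action == "Right" and col == 2) \
           or (action == "Up" and row == 0) or (action == "Down" and row == 2):
            continue                       # blank would leave the board
        target = blank + delta
        board = list(state)
        board[blank], board[target] = board[target], board[blank]
        result.append((action, tuple(board)))
    return result

def goal_test(state):
    return state == GOAL
```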