Chapter 1
Introduction
General Concepts
• The field of Artificial Intelligence attempts to understand, model, and simulate (to some extent) the behavior of intelligent entities.
• Artificial Intelligence encompasses areas such as perception, reasoning, planning, and theorem proving.
• Artificial Intelligence is the study of ideas that enable computers to act intelligently.
Understanding Intelligence
• The perspective of AI complements the
traditional perspectives of psychology,
linguistics, and philosophy.
– Computer metaphors aid thinking
– Computer models force precision
– Computer implementations quantify task
requirements
– Computer programs exhibit unlimited patience
AI Definition Categories
• The definitions of A.I. fall into four categories:
1. Systems that act like humans
2. Systems that act rationally
3. Systems that are concerned with thought processes and reasoning
4. Systems that are concerned with behavior
Acting Humanly
• The Turing Test approach: system intelligence is achieved when a computer is interrogated by a human over a teletype, and the human cannot tell whether there is a computer or a human at the other end.
• System capabilities needed to pass the Turing Test:
– Natural language processing
– Knowledge representation
– Automated reasoning
– Machine learning
– Computer vision
– Robotics
Thinking Humanly
• Bringing together computer models from
Artificial Intelligence and experimental
techniques from psychology to try to
construct precise and testable theories of the
workings of the human mind.
Thinking Rationally
• Patterns for argument structures that always give correct conclusions given correct premises. These patterns were thought to govern the operation of the mind, and their study initiated the field of logic. For example, the syllogism "Socrates is a man; all men are mortal; therefore Socrates is mortal" instantiates such a pattern.
• For many years this approach dominated the area of A.I.
• Issues:
– Uncertain knowledge
– Intractable problems
Acting Rationally
• Acting rationally means acting so as to achieve
one’s goals, given one’s beliefs.
• An agent is just something that perceives and acts.
• In the laws of thought approach to AI, the whole
emphasis is on correct inferences. However, this is
only part of rational behavior.
• We need the ability to represent knowledge and
reason with it because it enables us to reach good
decisions in a wide variety of situations.
Rational Agents
• The study of AI as rational agent design has two advantages:
– It is more general than the "laws of thought" approach, because correct inference is only a useful mechanism for achieving rationality, not a necessary one.
– It is more amenable to scientific development than approaches based on human behavior or human thought.
Example – Simple Reflex Agent
function SIMPLE_REFLEX_AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state ← INTERPRET_INPUT(percept)
  rule ← RULE_MATCH(state, rules)
  action ← RULE_ACTION(rule)
  return action
end
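Below is a minimal Python sketch of the pseudocode above, assuming a tiny two-location vacuum world; the rule table and helper names (interpret_input, rule_match) are illustrative assumptions, not part of the lecture.

# Illustrative sketch of the simple reflex agent above.
# The vacuum-world percepts and rules are assumptions for the example.

def interpret_input(percept):
    """Map the raw percept to an abstract state description."""
    location, status = percept
    return (location, status)

def rule_match(state, rules):
    """Return the action of the rule whose condition matches the state."""
    return rules.get(state)

# Condition-action rules: state -> action
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept, rules=RULES):
    state = interpret_input(percept)
    action = rule_match(state, rules)
    return action

print(simple_reflex_agent(("A", "Dirty")))  # -> "Suck"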
Intelligent Agents
• An agent perceives its environment through sensors and acts upon that environment through effectors.
• A rational agent is one that does the right thing. The right action is the one that will cause the agent to be most successful.
• The problem becomes how and when to evaluate the agent's success.
• How to evaluate
– Performance measure: the criteria that determine how successful an agent is
• When to evaluate
– Measure performance over the long run
• Issue: rationality vs. omniscience. An omniscient agent knows the actual outcome of its actions and can act accordingly (impossible in reality).
• Rationality: the expected result given what has been perceived
Intelligent Agents
• In summary, what is rational at any given time depends on four things:
1. The performance measure that defines the degree of success
2. Everything that the agent has perceived so far. We will call this complete perceptual history the percept sequence.
3. What the agent knows about the environment
4. The actions that the agent can perform
• Ideal Rational Agent: for each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
• Ideal mapping: percept sequence → action
• Possible to describe an agent with a table of the actions that the agent does in response to each percept sequence (a sketch follows below)
• Possible to try out all possible sequences and observe the agent's action response
• Possible to define a specification without an exhaustive enumeration
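As an illustration of the table idea, here is a small Python sketch of a table-driven agent that indexes actions by the complete percept sequence; the vacuum-world percepts and the tiny table are assumptions made for the example.

# Sketch of a table-driven agent: actions are looked up by the
# complete percept sequence. The tiny table below is an assumption
# for illustration; real tables grow exponentially with sequence length.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table          # maps percept sequences (tuples) to actions
        self.percepts = []          # complete perceptual history

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = TableDrivenAgent(table)
print(agent(("A", "Clean")))   # -> "Right"
print(agent(("B", "Dirty")))   # -> "Suck"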
Intelligent Agents
• An agent lacks autonomy if its actions are based solely on built-in knowledge, not on its percepts.
• A system is autonomous to the extent that its behaviour is determined by its own experience.
• It is not realistic to expect complete autonomy from the very start.
The structure of intelligent agents
• Agent = Architecture + Program (a minimal run-loop sketch follows this list)
• Architecture
– Makes percepts available to the program
– Runs the program
– Passes the program's actions to the effectors as they are generated
• Agent Programs
– Table-Driven Agents
– Simple Reflex Agents
– Reflex Agents with Internal State
– Goal-based Agents
– Utility-based Agents
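A minimal Python sketch of the Architecture + Program split, assuming a stub environment: the architecture loop makes percepts available to the program, runs it, and passes the returned actions to the effectors. The environment class and the simple reflex program below are illustrative assumptions, not part of the lecture.

# Sketch of the Architecture + Program decomposition.
# The architecture feeds percepts to the agent program and passes
# the returned actions to the effectors; the environment is a stub.

def run_architecture(agent_program, environment, steps=5):
    for _ in range(steps):
        percept = environment.sense()      # make percept available to the program
        action = agent_program(percept)    # run the program
        environment.act(action)            # pass the action to the effectors

class DummyEnvironment:
    """Illustrative two-location vacuum world stub."""
    def __init__(self):
        self.status = {"A": "Dirty", "B": "Dirty"}
        self.location = "A"

    def sense(self):
        return (self.location, self.status[self.location])

    def act(self, action):
        if action == "Suck":
            self.status[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def reflex_program(percept):
    """A tiny agent program: suck if dirty, otherwise move to the other square."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

run_architecture(reflex_program, DummyEnvironment())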
Application Areas
• In business, computers can suggest financial strategies and give marketing advice
• In engineering, computers can check design rules, recall relevant precedent designs, and offer design suggestions
• In manufacturing, computers can perform dangerous or labor-intensive tasks
• In farming, computers can help selectively harvest crops and prune trees
• In mining, computers can suggest exploration sites and perform work in environments hostile to humans
• In schools, computers can understand students' mistakes and act as superbooks
• In hospitals, computers can help in diagnosis, medical imaging, and administering therapies
• In the household, computers can help in planning and controlling devices
The Foundations of AI
• Philosophy (428 B.C. – present)
– Socrates, Plato, Aristotle (laws governing the rational part of the mind)
– René Descartes (dualism)
– Wilhelm Leibniz (materialism)
– Francis Bacon (empiricist movement)
– David Hume (induction)
– Bertrand Russell (logical positivism)
– Aristotle, Newell & Simon (means-ends analysis, GPS)
The Foundations of AI
• Mathematics (800 – present)
– Al-Khowarazmi (algorithms, notation)
– Boole (algebra of logic)
– Hilbert (limits of proof procedures)
– Gödel (incompleteness theorem)
– Dantzig, Edmonds (reduction)
– Cook (computability, NP-completeness)
– Von Neumann (decision theory)
The Foundations of AI
• Psychology (1879 – Present)
• Computer Engineering (1940 – Present)
• Linguistics (1957 – Present)
State of the Art
• Technologies
– Knowledge-based systems
– Hidden Markov Models
– Belief networks
– Neural networks
• Applications
– Diagnosis
– Medical imaging
– Speech recognition
– Exploration
– Planning