Agent Technology
Course overview and what is an intelligent agent
©Intelligent Agent Technology and Application, 2008, Ai Lab NJU
Before we start
– Software Agent: Prof. Tao Xianping
– Intelligent Agent: Assoc. Prof. Dr. Gao Yang
– Email: [email protected]
– Tel: 83686586 (O)
– Ai Lab, CS Dept., NJU
– Room 403-A, Mengminwei Building
– Http://cs.nju.edu.cn/gaoy
– Courseware can be found on my homepage.
Motivation
– Agents: the next paradigm for software?
– Agent-oriented taking over from object-oriented?
– Are agents crucial for open distributed systems?
– Are agents the most natural entity in e-business and other e-***?
– Are agents and peer-to-peer / sensor-network technologies inseparable?
– What is the killer application of agent technology?
What will you learn from this course?
– Upon completing this course, a student should:
  • Know what an agent and an agent system are.
  • Have a good overview of important agent issues:
    • Agent negotiation, coordination and communication.
    • Micro and macro agent architectures.
    • Agent learning.
    • Agent models and theory.
    • Agent-oriented software engineering.
  • Get valuable hands-on experience in developing an agent system.
Lectures: Part A
– 1st Week: Course overview and what is an intelligent agent
– 2nd Week: Negotiation in MAS (i)
– 3rd Week: Negotiation in MAS (ii)
– 4th Week: Agent learning (i)
– 5th Week: Agent learning (ii)
– 6th Week: Agent communication languages
– 7th Week: Applications: RoboCup, Trading Agent Competition & intelligent games
– 8th Week: Agent architectures
– 9th Week: Agent models and theory
Other Issues
– Architectures of multi-agent systems (macro)
– Coordination in MAS
– Agent-oriented software engineering
– Agent-oriented programming
– Agents and P2P computing
– Agents and Grid computing
– Classification of agents and their applications
Recommended books
– Michael Wooldridge. "An Introduction to MultiAgent Systems", 2002.
– Shi Zhong-zhi. "Intelligent Agent and Its Application" (in Chinese). Science Press, 2000.
– G. Weiss, editor. "Multiagent Systems". MIT Press, 1999.
– J. Ferber. "Multi-Agent Systems". Addison-Wesley, 1999.
– G. M. P. O'Hare and N. R. Jennings, editors. "Foundations of Distributed AI". Wiley Interscience, 1996.
– M. Singh and M. Huhns. "Readings in Agents". Morgan Kaufmann Publishers, 1997.
– And other selected papers and websites.
Assessment
– Lectures: 10%
– Experiments: 30%
– Final exam (open): 60%
What is an intelligent agent?
– Fields that inspired the agent field:
  – Artificial Intelligence
  – Software Engineering
  – Game Theory and Economics
  – Distributed Systems and Computer Networks
– [Figure: these fields surround the central "Agent" concept, contributing topics such as agent intelligence and the micro-agent, the agent as an abstract entity, negotiation, and agent architecture, MAS and coordination.]
– There are two kinds of definitions of "agent":
  – Often quite narrow
  – Extremely general
General definitions
– American Heritage Dictionary
  • "... One that acts or has the power or authority to act ... or represent another."
– Russell and Norvig (a small sense-act sketch follows this slide)
  • "An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors."
– Maes, Pattie
  • "Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed."
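To make the Russell and Norvig sensor/effector view concrete, here is a minimal sketch of an agent as a mapping from percepts to actions, run in a simple sense-act loop. The class and function names (Environment, Agent, run_episode) and the toy integer-state environment are illustrative assumptions, not part of the quoted definitions.

```python
# A minimal sketch of the sense-act view of an agent: percepts in, actions out.
# All names and the toy environment are illustrative assumptions.

class Environment:
    """Toy environment: an integer state the agent tries to drive to zero."""
    def __init__(self, state=5):
        self.state = state

    def percept(self):
        return self.state            # what the agent's "sensors" deliver

    def apply(self, action):
        self.state += action         # effect of the agent's "effectors"


class Agent:
    """Chooses an action from the current percept (a very simple policy)."""
    def act(self, percept):
        if percept > 0:
            return -1
        if percept < 0:
            return +1
        return 0


def run_episode(agent, env, steps=10):
    for _ in range(steps):
        action = agent.act(env.percept())
        env.apply(action)
    return env.state


if __name__ == "__main__":
    print(run_episode(Agent(), Environment(5)))   # -> 0
```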
Agent: more specific definitions
– Smith, Cypher and Spohrer
  • "Let us define an agent as a persistent software entity dedicated to a specific purpose. 'Persistent' distinguishes agents from subroutines; agents have their own ideas about how to accomplish tasks, their own agendas. 'Special purpose' distinguishes them from multifunction applications; agents are typically much smaller."
– Hayes-Roth
  • "Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions."
Agent: industrial definitions
– IBM
  • "Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in doing so, employ some knowledge or representations of the user's goals or desires."
Agent: weak notions
– Wooldridge and Jennings
  – An agent is a piece of hardware or (more commonly) software-based computer system that enjoys the following properties (a small interface sketch follows this slide):
    • Autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state;
    • Pro-activeness: agents do not simply act in response to their environment; they are able to exhibit goal-directed behavior by taking the initiative;
    • Reactivity: agents perceive their environment and respond in a timely fashion to changes that occur in it;
    • Social ability: agents interact with other agents (and possibly humans) via some kind of agent-communication language.
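As a rough illustration of the four weak-notion properties, the following interface sketch assigns one method to each; the class and method names are assumptions made for illustration, not a standard API. Autonomy shows up as the agent owning its control loop rather than having its behavior invoked from outside.

```python
# Illustrative interface only: one hook per weak-notion property.
from abc import ABC, abstractmethod

class WeakNotionAgent(ABC):

    @abstractmethod
    def handle_percept(self, percept):
        """Reactivity: respond in a timely fashion to changes in the environment."""

    @abstractmethod
    def pursue_goals(self):
        """Pro-activeness: take the initiative to exhibit goal-directed behavior."""

    @abstractmethod
    def send(self, recipient, message):
        """Social ability: interact with other agents via a communication language."""

    @abstractmethod
    def receive(self, sender, message):
        """Social ability: handle incoming messages from other agents."""

    def run(self, percept_stream):
        """Autonomy: the agent drives its own control loop; nothing external
        invokes its behavior directly."""
        for percept in percept_stream:
            self.handle_percept(percept)   # reactivity
            self.pursue_goals()            # pro-activeness
```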
Agent: strong notions
– Wooldridge and Jennings
  – The weak notion, in addition to:
    • Mobility: the ability of an agent to move around a network;
    • Veracity: an agent will not knowingly communicate false information;
    • Benevolence: agents do not have conflicting goals, and each always tries to do what is asked of it;
    • Rationality: an agent will act in order to achieve its goals and will not act in such a way as to prevent its goals being achieved.
Summary of agent definitions
– An agent acts on behalf of a user or another entity.
– An agent has the weak agent characteristics (autonomy, pro-activeness, reactivity, social ability).
– An agent may have the strong agent characteristics (mobility, veracity, benevolence, rationality).
Dear child gets many names…
– Many synonyms of the term "intelligent agent"
  – Robots
  – Software agents or softbots
  – Knowbots
  – Taskbots
  – Userbots
  – ……
Why the buzz around agents?
– Lack of a programming paradigm for distributed systems.
– Attempts to address the problems of the "closed world" assumption in object-orientation.
– "Agent" is a frequently used term to describe software in general (due to its vague definition).
– Massive media hype in the era of the dot-coms.
Autonomy is the key feature of an agent
– Examples (a small thermostat sketch follows this slide)
  – Thermostat
    • Control / regulator
    • Any control system
  – Software daemon
    • Print server
    • HTTP server
    • Most software daemons
– [Figure: the agent receives input from the environment through its sensors and produces actions that act back on the environment.]
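A minimal thermostat sketch, matching the control-system example above; the setpoint and hysteresis values are invented for illustration, not taken from the course.

```python
class Thermostat:
    """A minimal control-system 'agent': it senses temperature and acts on a heater."""

    def __init__(self, setpoint=20.0, hysteresis=0.5):
        self.setpoint = setpoint          # desired temperature (illustrative value)
        self.hysteresis = hysteresis      # dead band to avoid rapid switching
        self.heater_on = False

    def step(self, sensed_temperature):
        """One sense-act cycle: decide whether the heater should be on."""
        if sensed_temperature < self.setpoint - self.hysteresis:
            self.heater_on = True
        elif sensed_temperature > self.setpoint + self.hysteresis:
            self.heater_on = False
        return self.heater_on


thermostat = Thermostat()
for temp in [18.0, 19.8, 20.6, 21.0]:
    print(temp, thermostat.step(temp))
```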
Thinking…
– Give other examples of agents (not necessarily intelligent) that you know of. For each, define as precisely as possible:
  – (a) the environment that the agent occupies, the states that this environment can be in, and the type of environment;
  – (b) the action repertoire available to the agent, and any pre-conditions associated with these actions;
  – (c) the goal, or design objectives, of the agent – what it is intended to achieve.
Thinking again…
– If a traffic light (together with its control system) is considered an intelligent agent, which of the agent properties does it employ? Illustrate your answer with examples.
Type of environment
– An agent will not have complete control over its environment, but it does have partial control, in that it can influence the environment.
  – (Contrast this with scientific computing or MIS in traditional computing.)
– Classification of environment properties [Russell 1995, p. 49]:
  – Accessible vs. inaccessible
  – Deterministic vs. non-deterministic
  – Episodic vs. non-episodic
  – Static vs. dynamic
  – Discrete vs. continuous
Accessible vs. inaccessible
– An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state (also called fully observable vs. partially observable).
– Accessible: the sensors give the complete state of the environment.
– In an accessible environment, the agent need not keep track of the world through its internal state.
Deterministic vs. non-deterministic
– A deterministic environment is one in which any action has a single guaranteed effect: there is no uncertainty about the state that will result from performing an action.
– That is, the next state of the environment is completely determined by the current state and the action selected by the agent.
– Non-deterministic: a probabilistic model may be available.
Episodic vs. non-episodic
– In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between the performance of the agent in different scenarios. The agent need not reason about the interaction between the current and future episodes (such as a game of chess).
– In an episodic environment, the agent doesn't need to remember the past and doesn't have to think ahead to the next episode.
Static vs. dynamic
– A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent.
– A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control.
Discrete vs. continuous
– An environment is discrete if there is a fixed, finite number of actions and percepts in it.
Why classify environments?
– The type of environment largely determines the design of the agent.
– Classifying the environment can help guide the agent's design process (like systems analysis in software engineering); a small sketch of recording these properties follows this slide.
– The most complex general class of environments:
  – Inaccessible, non-deterministic, non-episodic, dynamic, and continuous.
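One lightweight way to use this classification during design is to record the five [Russell 1995] properties explicitly for each target environment. The sketch below is illustrative only (the class name and helper method are assumptions); the instance shown encodes the "most complex general class" named above.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    """The five environment properties from [Russell 1995] used in this lecture."""
    accessible: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

    def is_hardest_class(self):
        """True for the most complex general class: inaccessible, non-deterministic,
        non-episodic, dynamic, and continuous."""
        return not any([self.accessible, self.deterministic, self.episodic,
                        self.static, self.discrete])


# The hardest class of environments, as named on the slide above.
hardest = EnvironmentProperties(False, False, False, False, False)
print(hardest.is_hardest_class())   # -> True
```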
Discussing an environment: Gripper
– Gripper is a standard example of a probabilistic planning model
  – The robot has three possible actions: paint (P), dry (W) and pickup (U)
  – The state has four binary features: block painted, gripper dry, holding block, gripper clean
  – Initial state:
  – Goal state:
Discussing an environment: Gripper
– [Figure: Gripper state-transition diagram over states s1–s12; each edge is labelled with an (action, probability, reward) triple, e.g. (P, 1, 1), (U, 0.95, 1), (U, 0.05, -0.1), (W, 0.8, -0.1), (P, 0.9, -0.1).]
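The transitions in such a diagram can be encoded as a table mapping (state, action) to weighted outcomes, in the usual MDP style. The sketch below is a minimal illustration of that encoding; the specific entries are assumptions, since not every value in the original diagram is recoverable here.

```python
import random

# (state, action) -> list of (next_state, probability, reward).
# Entries are illustrative only, loosely in the style of the Gripper diagram.
TRANSITIONS = {
    ("s4", "U"): [("s8", 0.95, 1.0), ("s4", 0.05, -0.1)],   # pickup mostly succeeds
    ("s4", "W"): [("s4", 0.80, -0.1), ("s2", 0.20, -0.1)],  # drying may fail
    ("s2", "P"): [("s4", 0.90, -0.1), ("s2", 0.10, -1.0)],  # painting may fail
}

def step(state, action):
    """Sample a successor state and reward for (state, action)."""
    outcomes = TRANSITIONS[(state, action)]
    r = random.random()
    cumulative = 0.0
    for next_state, prob, reward in outcomes:
        cumulative += prob
        if r <= cumulative:
            return next_state, reward
    return outcomes[-1][0], outcomes[-1][2]   # guard against rounding error

if __name__ == "__main__":
    print(step("s2", "P"))
```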
Thinking…
– Determine the environment type of each of the following: is it accessible? deterministic? episodic? static? discrete?
  – Chess
  – Poker
  – Minesweeper
  – E-shopping
Intelligent agent vs. agent
– An intelligent agent is one that is capable of flexible autonomous action in order to meet its design objectives, where flexibility means three things:
  – Pro-activeness: the ability to exhibit goal-directed behavior by taking the initiative.
  – Reactivity: the ability to perceive the environment and respond in a timely fashion to changes that occur in it.
  – Social ability: the ability to interact with other agents (including humans).
Pro-activeness
– In a functional system (where the goal must remain valid at least until the action completes), pre-conditions and post-conditions can be applied to realize goal-directed behavior (see the sketch after this slide).
– But in a non-functional (dynamic) system, an agent blindly executing a procedure, without regard to whether the assumptions underpinning the procedure are still valid, is a poor strategy:
  – Observation may be incomplete
  – The environment may be non-deterministic
  – Other agents can affect the environment
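A minimal sketch of the pre-condition idea: each plan step carries a pre-condition that is re-checked before the step executes, so the agent abandons (or could replan) the procedure when its assumptions no longer hold. The helper names and the toy painting task are assumptions made for illustration.

```python
# Illustrative only: goal-directed execution with pre-condition checks.

def execute_plan(plan, world):
    """plan: list of (precondition, action) pairs; world: mutable dict of facts."""
    for precondition, action in plan:
        if not precondition(world):       # assumption no longer valid
            return False                  # stop / replan instead of acting blindly
        action(world)
    return True

# Hypothetical steps for a paint-then-dry task.
plan = [
    (lambda w: w["holding_block"], lambda w: w.update(block_painted=True)),
    (lambda w: w["block_painted"], lambda w: w.update(gripper_dry=True)),
]

world = {"holding_block": True, "block_painted": False, "gripper_dry": False}
print(execute_plan(plan, world), world)
```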
Reactivity
– The agent must be responsive to events that occur in its environment.
– Building a system that achieves an effective balance between goal-directed and reactive behavior is hard.
Social ability
– The agent must negotiate and cooperate with others.
Agent vs. object
– Objects are defined as computational entities that encapsulate some state, are able to perform actions, or methods, on this state, and communicate by message passing.
  – They are computational entities.
  – They encapsulate some internal state.
  – They are able to perform actions, or methods, to change this state.
  – They communicate by message passing.
Agent and object
– Differences between agents and objects
  – An object can be thought of as exhibiting autonomy over its state: it has control over it. But an object does not exhibit control over its behavior.
  – Other objects invoke an object's public methods; an agent can only request other agents to perform actions.
  – "Objects do it for free, agents do it for money."
  – (Can agents be implemented using object-oriented technology?) …… Think about it.
Agent and object
– The standard object model has nothing whatsoever to say about how to build systems that integrate reactive, pro-active and social behavior.
– Each agent has its own thread of control; in the standard object model, there is a single thread of control in the system.
– (An agent is similar to an active object.)
– Summary (see the sketch after this slide):
  – Agents embody a stronger notion of autonomy than objects
  – Agents are capable of flexible behavior
  – A multi-agent system is inherently multi-threaded
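A small sketch of the "objects do it for free, agents do it for money" contrast: an object's public method runs whenever a caller invokes it, while an agent only receives a request and decides for itself whether to perform the action. The class names and the queue-based refusal policy are illustrative assumptions.

```python
class PrinterObject:
    """An object: its public method runs whenever it is invoked ("for free")."""
    def print_document(self, doc):
        return f"printed {doc}"


class PrinterAgent:
    """An agent: it receives requests and retains control over its own behavior."""
    def __init__(self, max_queue=2):
        self.queue = []
        self.max_queue = max_queue

    def request(self, sender, doc):
        if len(self.queue) >= self.max_queue:
            return ("refuse", sender, doc)   # the agent may decline a request
        self.queue.append(doc)
        return ("agree", sender, doc)


obj = PrinterObject()
agent = PrinterAgent()
print(obj.print_document("report.pdf"))
print(agent.request("alice", "report.pdf"))
print(agent.request("bob", "thesis.pdf"))
print(agent.request("carol", "poster.pdf"))   # refused: queue is full
```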
Agent and expert system
– An expert system is one that is capable of solving problems or giving advice in some knowledge-rich domain.
– The most important distinction:
  – An expert system is disembodied, rather than situated.
  – It does not interact directly with any environment; it gives feedback or advice to a third party.
  – Expert systems are not required to interact with other agents.
Example of agents
[Figure: a peer-to-peer network of agents (peers), each agent serving mobile customers.]
Distributed Artificial Intelligence (DAI)
– DAI is a sub-field of AI
– DAI is concerned with problem solving where agents solve (sub-)tasks (the macro level)
– Main areas of DAI
  – Distributed problem solving (DPS)
    • Centralized control and distributed data (massively parallel processing)
  – Multi-agent systems (MAS)
    • Distributed control and distributed data (coordination is crucial)
– Some history
DAI is concerned with……
– Agent granularity (agent size)
– Agent heterogeneity (agent type)
– Methods of distributing control (among agents)
– Communication possibilities
– MAS
  – Coarse agent granularity
  – And high-level communication
– [Figure: Venn diagram relating Artificial Intelligence, Distributed Computing, Distributed AI, Distributed Problem Solving, and Multi-Agent Systems.]
DAI is not concerned with……
– Issues of coordination of concurrent processes at the problem-solving and representational level.
– Parallel computer architectures, parallel programming languages, or distributed operating systems.
– No semaphores, monitors, threads, etc.
– Higher-level semantics of communication (the speech-act level).
Motivation behind MAS
– To solve problems too large for a centralized agent
  – E.g. web crawling
– To allow interconnection and interoperation of multiple legacy systems
  – E.g. financial systems
– To provide a solution to inherently distributed systems
– To provide a solution where expertise is distributed
– To provide conceptual clarity and simplicity of design
Benefits of MAS
– Faster problem solving
– Decreased communication
  – Higher-level semantics of communication (the speech-act level)
– Flexibility
– Increased reliability
Heterogeneity degrees in MAS
– Low
  – Identical agents, different resources
– Medium
  – Different agent expertise
– High
  – Share only an interaction protocol (e.g. FIPA or KQML)
Cooperative and self-interested MAS
– Cooperative
  – Agents are designed by interdependent designers
  – Agents act for the increased good of the system (i.e. the MAS)
  – Concerned with increasing the system's performance, not the individual agents'
– Self-interested
  – Agents are designed by independent designers
  – Agents have their own agenda and motivation
  – Concerned with the benefit of each agent ('individualistic')
  – Is the latter more realistic in an Internet setting?
Our categories of MAS
– Cooperative
  – All agents have a common objective
– Competitive
  – Each agent has a different objective, and these objectives are contradictory
– Semi-competitive
  – Each agent has a different, conflicting objective, but the overall system has one explicit (or implicit) objective
– The first is now known as TEAMWORK.
Distributed AI perspectives
[Figure: mind-map of Distributed AI perspectives, with branches including agent theory, architectures (deliberative, reactive, hybrid), languages, group/cooperation, coordination, coherent behavior, negotiation, planning, specific approaches, methods, design, analysis, testbeds, tools, and applications.]
Our thinking about MAS
– Individual benefit vs. collective benefit
– No need for central control
– Social intelligence vs. single-agent intelligence
– Self-organizing systems
  – Self-forming, self-evolving
– Intelligence is emergent, not innate
– …..
Conclusions of the lecture
– Agents have a general definition, a weak definition and a strong definition
– Classification of the environment
– Differences between an agent and an intelligent agent, an agent and an object, and an agent and an expert system
– Multi-agent systems address the macro issues of agent systems
References
– [Russell 1995] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice-Hall, 1995.