AI Entities
Intelligent Agents
• When creating artificial intelligence, the purpose is to
produce entities that are able to operate independently
of human direction
• Agent = a software entity that exists in an
environment and acts on that environment based on
its perceptions and goals
– an agent may act on a human's behalf, though
• These entities need to have the following properties:
– autonomy = needs no direct human involvement to perform its duties
– reactivity = must be able to perceive and react to its
environment
– proactivity = must exhibit goal-directed behaviour
– (sociability = interacts with other agents)
• Another possible agent example:
– NAVLAB (video)
CSC384 Lecture Slides © Steve Engels, 2005
Slide 1 of 10
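The percept-act cycle behind these properties can be sketched in code. This is a minimal illustration, not from the slides: the names (Agent, perceive, act, environment.apply) and the loop structure are my assumptions.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """A software entity that exists in an environment and acts
    on it, based on its perceptions and goals (illustrative sketch)."""

    @abstractmethod
    def perceive(self, environment):
        """Observe the environment; return a percept."""

    @abstractmethod
    def act(self, percept):
        """Choose an action based on the percept (and the agent's goals)."""

def run(agent, environment, steps):
    """Autonomy: once started, the loop needs no human involvement."""
    for _ in range(steps):
        percept = agent.perceive(environment)   # reactivity
        action = agent.act(percept)             # proactivity
        environment.apply(action)               # act on the environment
```

Sociability would add a channel for exchanging messages with other agents; it is omitted here to keep the sketch minimal.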
Agent Environments
• Usually described in terms of "worlds"
– e.g. "Blocks world" = planning domain, moving blocks from
one configuration to another, given movement rules
(figure: blocks A, B, C in a start and a goal configuration)
– Steve's favorite: Vacuum-cleaner world
• environment: rooms with connections between the rooms and
dirt in zero or more rooms
• perceptions: current room, adjacent rooms, existence of dirt
• actions: suck, move, no-op
• goal: remove dirt from all rooms
Slide 2 of 10
Degrees of Intelligence
• Focusing on rational agents (agents that try to do the
best thing under the circumstances), one tends to
measure the intelligence of an agent according to:
1. the ability to perceive and understand a variety of
circumstances
2. the ability to adopt complex goals and manage them to
arrive at an optimal objective
3. the ability to act effectively to achieve its interests
4. the ability to adapt to new circumstances
Slide 3 of 10
Types of Agents
• Simple reflex agents
– operate on the current state of the environment, ignoring
all past perceptions (knee-jerk reactions)
– very simple, not very intelligent
– good for simple agents (bar code scanners, e.g.)
– VC example:
• if dirt exists, suck
• otherwise, move to a random room
Slide 4 of 10
Types of Agents (cont'd)
• Model-based agents
– also act on the current state, but keep track of the other
parts of the world that they can't currently see
– need to store a representation of the
current world, somehow
Slide 5 of 10
• Goal-based agents
– incorporate goals into the decision-making
process when selecting an action to perform
– try to achieve a desirable state
– often need to have more
intuition about the environment
– hard to verify success for complex tasks
Slide 6 of 10
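The vacuum-cleaner world and the simple reflex rule ("if dirt exists, suck; otherwise move") can be sketched together. The three-room layout and the function names are illustrative assumptions, not part of the slides.

```python
import random

# Vacuum-cleaner world: rooms with connections, dirt in zero or more rooms.
rooms = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}   # assumed adjacency
dirt = {"A": True, "B": False, "C": True}

def reflex_vacuum(room):
    """Simple reflex agent: decides from the current percept only,
    ignoring all past perceptions."""
    if dirt[room]:                       # percept: dirt in current room
        return "suck"
    return random.choice(rooms[room])    # otherwise move to a random adjacent room

def step(room):
    """Apply one action; returns the room the agent ends up in."""
    action = reflex_vacuum(room)
    if action == "suck":
        dirt[room] = False
        return room
    return action                        # the action names the destination room

# goal: remove dirt from all rooms
room = "A"
for _ in range(20):
    room = step(room)
```

Because the agent keeps no state, it may revisit clean rooms indefinitely; a model-based agent would remember which rooms it has already cleaned.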
Types of Agents (cont'd)
• Utility-based agents
– when multiple paths exist to an agent's goals, the possible
actions are given a weight called a utility, which allows the
agent to select the best action
– usually involves a utility function
that maps the next state to a
number value indicating the
agent's "happiness"
• Learning-based agents
– "experiment" with various actions to try to find the set that
produces the optimal results, and then follow those actions
– require a database of actions, a performance evaluation, and a
learning mechanism
– e.g. car-driving robot
Slide 7 of 10
Multi-Agent Systems
• When multiple agents work towards a collective goal,
the rules for each agent change: it is no longer
sufficient for agents to act solely in their own interests
– Example: the Prisoner's Dilemma

                           Agent A
                      Confess        ¬ Confess
  Agent B   Confess   A: 5 years     A: 10 years
                      B: 5 years     B: 1 year
          ¬ Confess   A: 1 year      A: 2 years
                      B: 10 years    B: 2 years

– What is the optimal strategy here?
Slide 8 of 10
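The dilemma in the payoff table can be checked programmatically. This sketch uses my own encoding of the table (confess/refuse labels, a dict of payoffs in years of prison, lower being better) to show that confessing is each agent's self-interested best response, even though mutual refusal is best collectively.

```python
# Years in prison for (A's choice, B's choice); lower is better.
payoff = {
    ("confess", "confess"): (5, 5),
    ("confess", "refuse"):  (1, 10),
    ("refuse",  "confess"): (10, 1),
    ("refuse",  "refuse"):  (2, 2),
}

def best_response_for_A(b_choice):
    """A's self-interested best move, given B's fixed choice."""
    return min(["confess", "refuse"],
               key=lambda a: payoff[(a, b_choice)][0])

# Confessing is best for A no matter what B does (a dominant strategy)...
assert best_response_for_A("confess") == "confess"   # 5 years < 10 years
assert best_response_for_A("refuse") == "confess"    # 1 year < 2 years

# ...yet the collective total is lowest when neither agent confesses.
totals = {moves: sum(years) for moves, years in payoff.items()}
best_joint = min(totals, key=totals.get)             # ("refuse", "refuse")
```

Self-interested play therefore converges on mutual confession (5 + 5 = 10 years total), while the collectively optimal outcome is mutual refusal (2 + 2 = 4 years), which is exactly why agents in a collective cannot act solely in their own interests.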
Multi-Agent Systems
• States and perceptions are changed to accommodate
the collective:
– a larger model of the environment is developed by the
collective explorations of the agents
– the number of possible actions rises exponentially with the
number of agents
• if each agent has n moves and there are m agents, there are
O(n^m - nm) more actions to consider in the collective process
than in the isolated process
– utilities and goals are evaluated collectively to determine the
overall benefit of the action combinations
Slide 9 of 10
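The blow-up is easy to verify numerically: each of the m agents independently picks one of its n moves, so the collective process faces n^m joint combinations versus the n·m moves considered in isolation. A quick check, with 4 moves per agent as an assumed toy value:

```python
def joint_actions(n, m):
    """Joint action combinations in the collective process: each of
    m agents independently picks one of its n moves, giving n**m."""
    return n ** m

def isolated_actions(n, m):
    """Actions considered when each agent plans in isolation:
    n moves per agent, times m agents."""
    return n * m

# The gap O(n^m - nm) grows exponentially with the number of agents m.
gaps = [joint_actions(4, m) - isolated_actions(4, m) for m in (1, 2, 3, 5)]
```

With n = 4, the extra actions for m = 1, 2, 3, 5 agents are 0, 8, 52, and 1004, so even five agents already make exhaustive joint planning far costlier than isolated planning.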
Multi-Agent Applications
• RoboCup
– robot soccer league
– international competition
– also offers search & rescue, RoboCup Junior, and a dance
competition
• Radiation cleanup, bomb disposal robots
• A lot of multi-agent research is still being conducted today
Slide 10 of 10