MCS 8100/CSC 2114 : Artificial Intelligence
Week 1 : Introduction to Artificial Intelligence
Ernest Mwebaze, [email protected]
School of Computing & IT, Makerere University
September, 2015

What is AI
Four views: systems that think like humans, systems that act like humans, systems that think rationally, and systems that act rationally.
• Acting Humanly : the Turing Test
• Thinking Humanly : cognitive science
• Thinking Rationally : logic / laws of thought
• Acting Rationally : acting right!

What is AI
• Computational models of human behavior? - Programs that behave (externally) like humans
• Computational models of human thought processes?
- Programs that operate (internally) the way humans do
• Computational systems that behave intelligently? - What does it mean to behave intelligently?
• Computational systems that behave rationally? - Agents that act right.
• Loosely : AI applications - monitor trades, detect fraud, schedule shuttle loading, etc.

Rational Agents
An agent is an entity that perceives and acts. This course is about designing rational agents. Abstractly, an agent is a function from percept histories to actions:
f : P∗ → A
For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance.
More generally: software that gathers information about an environment and takes actions based on that information, e.g.
• a robot
• a web shopping program
• a traffic control system

The Agent and the Environment
[Diagram: the agent's sensors receive percepts from the environment; its actuators perform actions on the environment]
Agents include humans, robots, softbots, thermostats, etc.
The agent function maps from percept histories to actions: f : P∗ → A
The agent program runs on the physical architecture to produce f.
How do we begin to formalize the problem of building an agent? → Make a dichotomy between the agent and its environment.
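The agent function f : P∗ → A can be made concrete with a minimal sketch. The two-cell vacuum world used here (locations, statuses, and action names) is a hypothetical illustration, not something specified in the lecture:

```python
# Minimal sketch of an agent as a function from percept histories to actions,
# f : P* -> A. The vacuum-world percepts/actions below are hypothetical.

from typing import List, Tuple

Percept = Tuple[str, str]   # (location, status), e.g. ("A", "Dirty")
Action = str                # "Suck", "Left", or "Right"

def agent(percept_history: List[Percept]) -> Action:
    """Map a percept history to an action for a two-cell vacuum world."""
    location, status = percept_history[-1]  # this agent only uses the latest percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(agent([("A", "Dirty")]))   # -> Suck
print(agent([("A", "Clean")]))   # -> Right
```

Note that although f is defined on whole percept histories P∗, a particular agent is free to ignore most of its history, as this one does.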
The World Model
• A, the action space
• P, the percept space
• E, the environment: A∗ → P
• Alternatively, define:
  • S, the internal state [may not be visible to the agent]
  • the perception function: S → P
  • the world dynamics: S × A → S

Agent Design
• U, the utility function: S → R (or S∗ → R)
• The agent design problem: find P∗ → A
  • a mapping of sequences of percepts to actions
  • that maximizes the utility of the resulting sequence of states (each action maps one state to the next)

Rationality
• A rational agent takes actions it believes will achieve its goals.
• Assume I don’t like to get wet, so I bring an umbrella. Is that rational?
• Depends on the weather forecast and whether I’ve heard it.
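The agent design problem above, choosing actions to maximize the utility of the resulting states, can be sketched with a one-step lookahead. The tiny number-line world, its dynamics, and its utility function are all hypothetical illustrations:

```python
# One-step sketch of the agent design problem: pick the action whose resulting
# state has the highest utility, using world dynamics S x A -> S and a utility
# function U : S -> R. The number-line world here is hypothetical.

State = int
Action = str

def dynamics(s: State, a: Action) -> State:
    """World dynamics S x A -> S: step left or right on a number line."""
    return s + 1 if a == "right" else s - 1

def utility(s: State) -> float:
    """U : S -> R: states closer to the (hypothetical) goal state 3 are better."""
    return -abs(s - 3)

def best_action(s: State) -> Action:
    """Choose the action that maximizes the utility of the next state."""
    return max(["left", "right"], key=lambda a: utility(dynamics(s, a)))

print(best_action(0))  # -> right
print(best_action(5))  # -> left
```

A full solution to the design problem would optimize over whole action sequences rather than a single step; this sketch only shows the shape of the objective.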
If I’ve heard the forecast for rain (and I believe it), then bringing the umbrella is rational.
• Rationality ≠ omniscience.
• Assume the most recent forecast is for rain, but I did not listen to it and I did not bring my umbrella. Is that rational?
• Yes, since I did not know about the recent forecast!
• Rationality ≠ success.
• Suppose the forecast is for no rain, but I bring my umbrella and I use it to defend myself against an attack. Is that rational?
• No; although successful, it was done for the wrong reason.

Limited Rationality
• There is a big problem with our definition of rationality:
• The agent might not be able to compute the best action (subject to its beliefs and goals).
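One common way to make "acting as well as you can under computational constraints" concrete is bounded lookahead: search ahead only as far as a fixed computation budget allows, then act on the best estimate so far. The budgeted-search scheme and the toy world below are illustrative assumptions, not the lecture's prescribed method:

```python
# Sketch of limited rationality as budget-bounded lookahead search.
# The number-line dynamics and utility are hypothetical.

State = int

def dynamics(s, a):
    """World dynamics S x A -> S on a number line."""
    return s + (1 if a == "right" else -1)

def utility(s):
    """U : S -> R; the (hypothetical) goal state is 4."""
    return -abs(s - 4)

def lookahead(s, depth, budget):
    """Return (best value, nodes used) searching at most `depth` steps
    ahead and expanding at most `budget` nodes."""
    if depth == 0 or budget <= 0:
        return utility(s), 1
    best, used = float("-inf"), 1
    for a in ("left", "right"):
        v, n = lookahead(dynamics(s, a), depth - 1, budget - used)
        used += n
        best = max(best, v)
        if used >= budget:   # out of computation: act on the estimate so far
            break
    return best, used

def limited_rational_action(s, budget=20):
    """Choose the action with the best bounded-lookahead value."""
    return max(("left", "right"),
               key=lambda a: lookahead(dynamics(s, a), depth=3, budget=budget)[0])

print(limited_rational_action(0))  # -> right
```

Shrinking the budget degrades the decision gracefully instead of making the agent hang, which is exactly the trade-off limited rationality is about.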
• So, we want to use limited rationality: "acting in the best way you can subject to the computational constraints that you have".
• The (limited rational) agent design problem: find P∗ → A
  • a mapping of sequences of percepts to actions
  • that maximizes the utility of the resulting sequence of states
  • subject to our computational constraints!
• Learning...

Environment Types
• Accessible (vs. inaccessible) - can you see the state of the world directly?
• Deterministic (vs. non-deterministic) - does an action map one state into a single other state?
• Static (vs. dynamic) - can the world change while you are thinking?
• Discrete (vs. continuous) - are the percepts and actions discrete (like integers) or continuous (like reals)?
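These environment properties (plus the single-agent vs. multi-agent distinction used when classifying examples) can be captured as a small record so that task environments are easy to compare. The three example classifications follow the lecture's own table; the record type itself is just an illustrative sketch:

```python
# Sketch: the environment-type checklist as a record, so task environments
# can be compared programmatically. Example values follow the lecture's table.

from dataclasses import dataclass

@dataclass
class EnvType:
    observable: bool      # accessible vs. inaccessible
    deterministic: bool   # deterministic vs. non-deterministic
    static: bool          # static vs. dynamic
    discrete: bool        # discrete vs. continuous
    single_agent: bool    # single-agent vs. multi-agent

solitaire         = EnvType(True,  True,  True,  True,  True)
internet_shopping = EnvType(False, False, False, True,  True)
ai_taxi           = EnvType(False, False, False, False, False)

# The real world sits at the hard end of every dimension, like the taxi task:
real_world = EnvType(False, False, False, False, False)
print(ai_taxi == real_world)  # -> True
```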
Environment Types

                    Observable  Deterministic  Static  Discrete  Single Agent
Solitaire           YES         YES            YES     YES       YES
Internet Shopping   NO          NO             NO      YES       YES
AI-Taxi             NO          NO             NO      NO        NO

The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Agent Types
Four basic types can be generalized:
• Simple reflex agents
• Reflex agents with state
• Goal-based agents
• Utility-based agents
A learning component can be added to any of these to form a learning agent.
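The simplest of the four types, the simple reflex agent, selects an action by matching condition-action rules against the current percept only, with no internal state. A minimal sketch (the percepts and rules here are hypothetical):

```python
# Sketch of a simple reflex agent: condition-action rules applied to the
# current percept only (no internal state). The rules are hypothetical.

def simple_reflex_agent(percept):
    """Map 'what the world is like now' to 'what action I should do now'."""
    condition_action_rules = [
        (lambda p: p == "obstacle-ahead", "turn"),
        (lambda p: p == "dirt",           "suck"),
        (lambda p: True,                  "forward"),   # default rule
    ]
    for condition, action in condition_action_rules:
        if condition(percept):
            return action

print(simple_reflex_agent("dirt"))  # -> suck
```

Because the rules see only the current percept, this agent fails whenever the right action depends on something it can no longer perceive, which motivates the stateful variant.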
Simple Reflex Agent
[Diagram: sensors → "what the world is like now" → condition-action rules → "what action I should do now" → actuators, in a loop with the environment]

Reflex Agent with State
[Diagram: as above, plus an internal state updated using "how the world evolves" and "what my actions do"]

Goal-Based Agent
[Diagram: adds "what it will be like if I do action A" and the agent's goals to the choice of action]

Utility-Based Agent
[Diagram: adds a utility measure, "how happy I will be in such a state", to the choice of action]

Learning Agent
[Diagram: a performance element plus a learning element; a critic compares behavior against a performance standard and gives feedback, and a problem generator proposes exploratory learning goals]
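The reflex agent with state differs from the simple reflex agent only in that its condition-action rules fire on an internal world model, updated from each percept, rather than on the raw percept alone. A minimal sketch (the target-tracking task is a hypothetical illustration):

```python
# Sketch of a reflex agent with state: it keeps an internal model of the
# world, updated from each percept, and its rules fire on that model.
# The target-tracking task below is hypothetical.

class ReflexAgentWithState:
    def __init__(self):
        self.state = {"last_seen": None}   # internal model of the world
        self.last_action = None

    def update_state(self, percept):
        """Fold the new percept into the model ('how the world evolves')."""
        if percept is not None:
            self.state["last_seen"] = percept

    def act(self, percept):
        self.update_state(percept)
        # Condition-action rule over the *state*, not just the raw percept:
        # head toward wherever the target was last seen.
        target = self.state["last_seen"]
        self.last_action = "wait" if target is None else f"move-{target}"
        return self.last_action

agent = ReflexAgentWithState()
print(agent.act("left"))   # -> move-left
print(agent.act(None))     # -> move-left (the state remembers the target)
```

The second call shows the payoff of keeping state: a simple reflex agent receiving the empty percept would have no basis for action, while this one still acts on what it remembers.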