Intelligent Agents
Reading assignment
• Chapters 1, 2
• Sections 3.1 and 3.2

What is artificial intelligence?
• Act rationally
• Integrate sub-areas of AI into intelligent agents
– A full breadth of potential applications
– Play games
– Control space rovers
– Cure cancer
– Trade stocks
– Fight wars

AI-complete dream
• Robot that saves the world
– Robot that cleans your room
• But…
– It's definitely useful, but…
• Really narrow
– Hardware is a real issue
• Will take a while
• What's an "AI-complete" problem that will be useful to a huge number of people in the next 5-10 years?
• What's a problem accessible to a large part of the AI community?

What makes a good AI-complete problem?
• A complete AI-system loop:
– Sensing: gathering raw information from the world
– Translating: processing the information
– Reasoning: drawing high-level conclusions from the information
– Planning: deciding what to do
– Acting: carrying out the actions
– Feedback (back to sensing)
• But also
– Hugely complex
– Can get access to real data
– Can scale up and layer up
– Can make progress
– Very cool and exciting

Factcheck.org
• Take a statement
• Collect information from multiple sources
• Evaluate the quality of the sources
• Connect them
• Make a conclusion AND provide an analysis

Automated fact checking
• (Pipeline diagram: a query ("fact or fiction?") flows through models and inference to a conclusion and justification, with active user feedback on the sources and the proof)

Agent
• A concept to help us formalize the problem-solving process
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
– Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators

An AI agent
• http://www.youtube.com/watch?v=hyGYasf5rKc

Vacuum-cleaner world
• Percepts: location and contents, e.g., [A, Dirty]
• Actions: move-left, move-right, suck

Rational agents
• An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful
• Performance measure: an objective criterion for the success of an agent's behavior

Vacuum-cleaner world
• Percepts: location and contents, e.g., [A, Dirty]
• Actions: move-left, move-right, suck
• Performance measure: award one point for each clean square at each time step, over a lifetime of 1000 time steps
• What would the rational actions be?

Rational agents
• Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has

A simple agent function

Percept sequence              Action
[A, Clean]                    Right
[A, Dirty]                    Suck
[B, Clean]                    Left
[B, Dirty]                    Suck
[A, Clean], [A, Clean]        Right
[A, Clean], [A, Dirty]        Suck
…                             …

• This agent is rational given the performance measure above and the fact that the geography is known (why?)
• What if a different performance measure is used?
– e.g., deduct one point each time the vacuum moves
• What if the geography is not known?
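The agent function above is just a mapping from percept sequences to actions, so it can be written down directly as a lookup table. Below is a minimal table-driven sketch in Python; the table copies only the rows shown above, and the function names are illustrative assumptions rather than course code.

    # Table-driven agent: the agent function is a lookup table keyed on the
    # entire percept sequence observed so far.
    TABLE = {
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("B", "Dirty"),): "Suck",
        (("A", "Clean"), ("A", "Clean")): "Right",
        (("A", "Clean"), ("A", "Dirty")): "Suck",
        # ... in general the table must cover every possible percept sequence
    }

    def make_table_driven_agent(table):
        percepts = []                              # percept sequence seen so far
        def agent(percept):
            percepts.append(percept)
            return table.get(tuple(percepts))      # look up the whole sequence
        return agent

    agent = make_table_driven_agent(TABLE)
    print(agent(("A", "Dirty")))                   # -> Suck

The table grows without bound with the length of the percept sequence, which is why the table-driven view is a specification of behavior rather than a practical implementation.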
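To make the performance measure concrete, here is a small simulation sketch that scores one point for each clean square at each time step over 1000 steps. It assumes a two-square world in which dirt never reappears, and it uses a simple reflex rule (act on the current percept only) in place of the full table; both assumptions are mine, not part of the slides.

    def reflex_vacuum_agent(percept):
        # Acts on the current percept only: suck if dirty, else move to the other square.
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    def run(agent, steps=1000, dirty=("A", "B")):
        dirt = set(dirty)                 # assumption: dirt never reappears
        location, score = "A", 0
        for _ in range(steps):
            action = agent((location, "Dirty" if location in dirt else "Clean"))
            if action == "Suck":
                dirt.discard(location)
            elif action == "Right":
                location = "B"
            elif action == "Left":
                location = "A"
            score += 2 - len(dirt)        # one point per clean square at this step
        return score

    print(run(reflex_vacuum_agent))       # 1998 out of the 2 * 1000 = 2000 ceiling

Under the variant measure that also deducts a point for each move, only the scoring line changes, and a rational agent would stop moving once both squares are clean.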
Another example
• The savage game
– Performance measure: minimize the total number of steps
– Environment known
• How do we design a rational agent?

PEAS: specifying the setting for the agent
• PEAS:
– Performance measure
– Environment
– Actuators
– Sensors

PEAS
• We must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
– Performance measure: safe, fast, legal, comfortable trip, maximize profits
– Environment: roads, other traffic, pedestrians, customers
– Actuators: steering wheel, accelerator, brake, signal, horn
– Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

Environment types
• Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time
• Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent
• (Example figures: a partially observable, stochastic environment)

Environment types
• Static (vs. dynamic): the environment is unchanged while the agent is deliberating
• Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions
• Single agent (vs. multiagent): an agent operating by itself in an environment

State space formulation
• Let us start from the simplest setting:
– Fully observable, deterministic, static, discrete, single agent
• A natural way to represent such a problem is the state-space formulation
– Consider the savage game example
– Key idea: represent the facts by states and the actions by state transitions

Example: Romania
• (Map of Romania showing cities and the roads between them)

Example: Romania
• On holiday in Romania; currently in Arad
• Flight leaves tomorrow from Bucharest
• Formulate the goal:
– be in Bucharest
• Formulate the problem:
– states: the various cities
– actions: drive between cities
• Find a solution:
– a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

State-space problem formulation
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions, or a successor function S(x) = set of action–state pairs
– e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. goal test, which can be
– explicit, e.g., x = "at Bucharest"
– implicit, e.g., Checkmate(x)
4. path cost (additive)
– e.g., sum of distances, number of actions executed, etc.
– c(x, a, y) is the step cost, assumed to be ≥ 0
• A solution is a sequence of actions leading from the initial state to a goal state
• What is the problem formulation for two travelers?

Abstraction
• The real world is absurdly complex, so the state space must be abstracted for problem solving
• (Abstract) state = set of real states
• (Abstract) action = complex combination of real actions
– e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
• (Abstract) solution = set of real paths that are solutions in the real world

Example: vacuum world
• What is the state-space transition graph?
• Single-state, start in #5. Solution?

Vacuum world state space graph
• states?
• actions?
• goal test?
• path cost?

Vacuum world state space graph
• states? the robot's location and the dirt status of each square
• actions? Left, Right, Suck
• goal test? no dirt at any location
• path cost? 1 per action
• (A code sketch of this formulation appears at the end of this section)

Example: The 8-puzzle
• states?
• actions?
• goal test?
• path cost?

Example: The 8-puzzle
• states? the locations of the tiles
• actions? move the blank left, right, up, or down
• goal test? state = goal state (given)
• path cost? 1 per move

Multiplication of state spaces
• Often the problem involves multiple entities – a combination of multiple subproblems
• The joint search space is the Cartesian product of the individual state spaces
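The four items of the formulation (initial state, successor function, goal test, path cost) map directly onto code. The sketch below formulates the two-square vacuum world this way and runs a breadth-first search to extract a solution as a sequence of actions; the function names and the choice of BFS are my own illustrative assumptions, not something prescribed by the slides.

    from collections import deque

    # State: (robot location, frozenset of dirty squares) -- 2 x 4 = 8 states.
    INITIAL = ("A", frozenset({"A", "B"}))

    def successors(state):
        """Successor function S(x): yields (action, next state) pairs; every step costs 1."""
        loc, dirt = state
        yield ("Right", ("B", dirt))
        yield ("Left", ("A", dirt))
        yield ("Suck", (loc, dirt - {loc}))

    def goal_test(state):
        return not state[1]              # goal: no dirt at any location

    def bfs(initial):
        """Breadth-first search; with unit step costs it returns a cheapest action sequence."""
        frontier = deque([(initial, [])])
        explored = {initial}
        while frontier:
            state, path = frontier.popleft()
            if goal_test(state):
                return path
            for action, nxt in successors(state):
                if nxt not in explored:
                    explored.add(nxt)
                    frontier.append((nxt, path + [action]))
        return None                      # no solution exists

    print(bfs(INITIAL))                  # ['Suck', 'Right', 'Suck']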
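The same recipe extends to the Cartesian-product case: for two travelers on the same map, a joint state is simply a pair of individual states, so the joint space is the product of the individual spaces. The road graph below is a small hand-picked fragment of the Romania map (adjacency only, no distances), kept tiny only so the example stays self-contained; the names and the "drive or wait" modeling choice are mine.

    from itertools import product

    # Fragment of the Romania road map: city -> neighboring cities.
    ROADS = {
        "Arad": ["Zerind", "Sibiu"],
        "Zerind": ["Arad"],
        "Sibiu": ["Arad", "Fagaras"],
        "Fagaras": ["Sibiu", "Bucharest"],
        "Bucharest": ["Fagaras"],
    }

    cities = list(ROADS)
    joint_states = list(product(cities, cities))      # Cartesian product of the two spaces
    print(len(cities), len(joint_states))             # 5 individual states -> 25 joint states

    def joint_successors(state):
        """Each traveler either drives to a neighboring city or waits on this step."""
        c1, c2 = state
        for n1, n2 in product(ROADS[c1] + [c1], ROADS[c2] + [c2]):
            yield (("drive", n1, n2), (n1, n2))

    def joint_goal_test(state):
        return state == ("Bucharest", "Bucharest")    # both travelers reach Bucharest

The bfs routine from the previous sketch can be reused on this joint formulation (pointed at joint_successors and joint_goal_test), which is the payoff of the state-space view: combining subproblems multiplies the number of states but does not change the search machinery.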