Great Ideas of CS with Java

Part 1: WWW & computer programming in the language Java
Ch 1: The World Wide Web
Ch 2: Watch out: Here comes Java
Ch 3: Numerical computation & Functions
Ch 4: Subroutines & Databases
Ch 5: Graphics
Ch 6: Simulation
Ch 7: Software engineering

Part 2: Understanding what a computer is and how it works
Ch 8: Machine architecture
Ch 9: Language translation
Ch 10: Virtual Environment for Computing
Ch 11: Security, Privacy, and Wishful thinking
Ch 12: Computer Communication

Part 3: Advanced topics
Ch 13: Program Execution Time
Ch 14: Parallel Computation
Ch 15: Noncomputability
Ch 16: Artificial intelligence
Ch 16. Artificial Intelligence
Copyright © SNU OOPSLA Lab.

Artificial Intelligence: Reference Material
What Is AI?

Artificial Intelligence (AI)
- Intelligent behavior in artifacts
- Designing computer programs to make computers smarter
- Making computers do things at which, at the moment, people are better
- Perception, reasoning, learning, and communicating in complex environments

Long-term goals of AI
- Develop machines that do things as well as humans can, or possibly even better
- Understand intelligent behavior
Early Failures of AI Research

- Overlooked fact: imitating human intelligence is extremely complex
- Representative failure case
  - Machine translation attempts of the 1950s and 1960s
  - AI research subsequently shrank until the 1980s
- Since the 1980s, continuous research on how computers can learn
Brain vs. Computer

- Human brain: parallel processing, fuzzy logic
- Computer: serial processing, binary logic
Subfields of Artificial Intelligence

- Problem solving: solving problems such as chess games and military strategy
- Natural language processing (NLP): human-machine interfaces using natural language
- Expert systems: using the computer as an expert in a specific domain
- Robotics: giving machines the abilities of sight, touch, and speech
Artificial Intelligence (overview diagram)

- Research areas: learning, reasoning, knowledge, intelligence
- Application areas: intelligent agents, information retrieval, data mining, expert systems, intelligent robots, natural language processing
- Approaches: rationalist (logic/symbols), empiricist (probability/statistics), connectionist (neural elements), evolutionary (genetic evolution), biological (artificial life)
- Realized as: algorithms, mechanisms, representation schemes, system architecture
Computer Learning

- Characteristics of artificial intelligence
  - The ability to learn new things
  - Producing improved results based on errors made in the past
- Basic components of AI software
  - Knowledge base: a set of facts and rules
  - Inference engine: derives new facts from the given facts and rules
Example of AI Software Learning

- Knowledge base (given)
  - Fact: Amy is Ken's wife.
  - Rule: If X is Y's wife, then Y is X's husband.
- Inference (derived)
  - Ken is Amy's husband.
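For a feel of how an inference engine applies a rule to a knowledge base, here is a minimal Java sketch of the Amy/Ken example above. The Fact record and the forward-chaining loop are illustrative assumptions, not code from the textbook.

```java
import java.util.*;

public class TinyInference {
    // A fact is a (subject, relation, object) triple, e.g. Amy wife-of Ken.
    record Fact(String subject, String relation, String object) {}

    public static void main(String[] args) {
        Set<Fact> kb = new HashSet<>();
        kb.add(new Fact("Amy", "wife-of", "Ken"));   // given fact

        // Rule: if X is Y's wife, then Y is X's husband.
        Set<Fact> derived = new HashSet<>();
        for (Fact f : kb)
            if (f.relation().equals("wife-of"))
                derived.add(new Fact(f.object(), "husband-of", f.subject()));
        kb.addAll(derived);                          // inference engine output

        for (Fact f : kb)
            System.out.println(f.subject() + " " + f.relation() + " " + f.object());
        // prints: Amy wife-of Ken, and the derived Ken husband-of Amy
    }
}
```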
The Artificial Intelligence Debate

- A long-debated issue: can computers think?
- So far, computers differ greatly from the human way of solving problems
  - Humans: simultaneous activity of neurons → immediate, parallel
  - Computers: tend toward logical, step-by-step problem solving
The Turing Test

- Proposed as a criterion for judging intelligence
- Proposed by the British mathematician Alan Turing
- A human and a computer serve as subjects; through questions and answers alone, an interrogator tries to tell which is the human and which is the computer
- If no difference can be detected, the machine is considered intelligent
Turing Test Example (1)

- Eliza
  - A 1960s program that imitates a psychiatrist
  - Example dialogue:
    Patient: I am desperate.
    Eliza: I know you are desperate.
    Patient: My mother never liked me.
    Eliza: Tell me more about your family.
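Eliza produced its replies by shallow keyword and pattern matching rather than genuine understanding. Below is a minimal Java sketch in that style, with two patterns invented to reproduce the dialogue above (this is not Weizenbaum's actual script):

```java
import java.util.regex.*;

public class MiniEliza {
    static String reply(String input) {
        // Pattern 1: reflect "I am X" back as "I know you are X."
        Matcher m = Pattern.compile("I am (.+?)\\.?", Pattern.CASE_INSENSITIVE)
                           .matcher(input);
        if (m.matches()) return "I know you are " + m.group(1) + ".";
        // Pattern 2: any mention of family members triggers a canned prompt.
        if (input.toLowerCase().matches(".*(mother|father|family).*"))
            return "Tell me more about your family.";
        return "Please go on.";   // default when nothing matches
    }

    public static void main(String[] args) {
        System.out.println(reply("I am desperate."));           // I know you are desperate.
        System.out.println(reply("My mother never liked me.")); // Tell me more about your family.
    }
}
```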
Approaches to AI

Two main approaches: symbolism vs. connectionism

- Symbolism approach
  - "Good Old-Fashioned AI" (GOFAI)
  - Physical symbol system hypothesis
  - Logic-based, top-down, designed behavior, knowledge-intensive
- Connectionism approach
  - Neural networks, evolutionary machines
  - Intelligent behavior is the result of subsymbolic processing
  - Biological, bottom-up, emergent behavior, learning-based
Brief History of AI Research

Symbolism AI
- 1943: Production rules
- 1956: "Artificial Intelligence"
- 1958: LISP AI language
- 1965: Resolution theorem proving
- 1970: PROLOG language
- 1971: STRIPS planner
- 1973: MYCIN expert system
- 1982-92: Fifth generation computer systems project
- 1986: Society of mind
- 1994: Intelligent agents

Connectionism AI
- 1943: McCulloch-Pitts neurons
- 1959: Perceptron
- 1965: Cybernetics
- 1966: Simulated evolution
- 1966: Self-reproducing automata
- 1975: Genetic algorithm
- 1982: Neural networks
- 1986: Connectionism
- 1987: Artificial life
- 1992: Genetic programming
- 1994: DNA computing
Brief History of AI (1)

1940s to 1950s
- The name "Artificial Intelligence" was coined
- Programs that perform elementary reasoning tasks
- Alan Turing: first modern article dealing with the possibility of mechanizing human-style intelligence
- McCarthy: predicate calculus as a language for representing and using knowledge in a system called the "advice taker"
- Rosenblatt: perceptron for learning and pattern recognition
Brief History of AI (2)

1960s to 1970s: many AI techniques and programs appeared
- Technologies
  - Problem representations
  - Search techniques
  - General heuristics
- AI programs
  - Simple puzzle solving, game playing (chess, checkers)
  - Theorem proving in plane geometry
  - GPS (General Problem Solver)
Brief History of AI (3)

Late 1970s to early 1980s
- Development of more capable programs that contained the knowledge required to mimic expert human performance
- Methods of representing problem-specific knowledge
- Expert systems
  - DENDRAL
    - Input: chemical formula, mass spectrogram analyses
    - Output: predicted structure of organic molecules
  - MYCIN: 1974, medical diagnosis
- DEEP BLUE (1997/5/11)
  - Chess-playing program
Textbook: Table of Contents

- Introduction
- Representing Knowledge
- Understanding
- Learning
- Frames
- Reasoning
- Summary
Introduction (1/2)

- The dream of artificial intelligence
  - Machines may be programmed to become more and more powerful at solving problems, until their abilities equal or exceed those of human beings
- Computers can already do some things much better than human beings
  - Adding numbers faster
  - Remembering facts better
  - Searching for a fact faster
Introduction (2/2)

- The difference between human and machine capabilities depends on the intrinsic complexity of the task
  - If the complexity is low, an efficient program can be written and machines will do a superior job
  - If the complexity is high, either the programs are too slow or the code cannot be written at all
- Graph showing the relationship between task complexity and human and computer capabilities (Fig. 16.1)
  - The goal of artificial intelligence is to raise the level of the machine curve as much as possible, especially in the regions on the right
Figure 16.1: Task complexity vs. human and machine capabilities
Table of Contents

- Introduction
► Representing Knowledge
- Understanding
- Learning
- Frames
- Reasoning
- Summary
Representing Knowledge (1/3)

- Knowledge of an object or event
  - The set of facts and relationships pertaining to it
  - For example, knowledge of a particular chair
    - Its position, material, color, size and shape, owner, cost, age, history, ...
    - A particular data item becomes part of the knowledge of the chair if there are processing operations related to the chair that reference that item
    - The data items required to use the chair, move it, describe it, or perform many other operations compose the knowledge about the chair
Representing Knowledge (2/3)

- Knowledge should be organized so that the machine can use it efficiently to do its job
- Semantic networks (Fig. 16.2)
  - One way of representing knowledge
  - A set of labeled nodes with labeled arcs connecting them
    - Nodes represent objects, names, properties, and other entities
    - Arcs show the relationships between these entities
  - For example, a semantic network partly describing a chair
Figure 16.2: Semantic Network for a Chair
Representing Knowledge (3/3)

- How can we store this network in the machine?
  - Every arc → its initial node, its arc label, and its final node

      x1  part of  a1
      x2  part of  a1
      ...
      a1  name  chair
      ...

  - This listing contains all the information in the network and is suitable for storage in the machine
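A direct way to realize this arc-list storage in Java: keep each arc as a (from, label, to) triple and answer queries by scanning the list. The class and method names are our own; the textbook describes only the listing itself.

```java
import java.util.*;

public class SemanticNet {
    record Arc(String from, String label, String to) {}

    private final List<Arc> arcs = new ArrayList<>();

    void add(String from, String label, String to) {
        arcs.add(new Arc(from, label, to));
    }

    // All nodes reachable from 'node' along arcs carrying 'label'.
    List<String> follow(String node, String label) {
        return arcs.stream()
                   .filter(a -> a.from().equals(node) && a.label().equals(label))
                   .map(Arc::to)
                   .toList();
    }

    public static void main(String[] args) {
        SemanticNet net = new SemanticNet();
        net.add("x1", "part of", "a1");   // a part of the chair
        net.add("x2", "part of", "a1");
        net.add("a1", "name", "chair");
        System.out.println(net.follow("a1", "name"));   // [chair]
    }
}
```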
Table of Contents

- Introduction
- Representing Knowledge
► Understanding
- Learning
- Frames
- Reasoning
- Summary
Understanding (1)

- Suppose
  - the knowledge structure of the chair is stored in the machine
  - an image has been perceived and is stored in a nearby memory region
- If linkages can be made between the knowledge structure and the image, the system understands this image
- So, understanding means finding a linkage between these two structures
Figure 16.3
Understanding (2)

The process of searching for an understanding of the image with respect to this knowledge:

- Where is the seat of the chair?
  - First, link node x6 with a randomly selected region, say (x,y) = (100,120), in the image (Fig. 16.4), and store this linkage in a table
  - To confirm the linkage, scan the periphery of the selected region in the image to find the five objects connected to x6
  - A search of the surrounding area yields only one obviously connected part, so the first linkage seems incorrect
  - Break the linkage and try connecting x6 to other regions
  - Eventually, the linkage in Fig. 16.5 will be tried and confirmed: the selected region has five objects connected to it
Figure 16.4: First trial of the search

Figure 16.5: Successful trial
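The confirmation step can be pictured as checking a degree constraint: a region is an acceptable match for the seat node x6 only if five parts touch it. A toy Java sketch of that check, where the region names and adjacency are invented for illustration:

```java
import java.util.*;

public class LinkageSearch {
    public static void main(String[] args) {
        // Image: region -> set of touching regions (invented adjacency).
        // r1 plays the role of the seat; r2..r6 are the legs and back.
        Map<String, Set<String>> touches = Map.of(
            "r1", Set.of("r2", "r3", "r4", "r5", "r6"),
            "r2", Set.of("r1"), "r3", Set.of("r1"), "r4", Set.of("r1"),
            "r5", Set.of("r1"), "r6", Set.of("r1"));

        int required = 5;   // knowledge: x6 connects to four legs and the back

        // Try regions one by one, as the text tries (100,120) first:
        // keep a linkage only if the periphery shows five connected parts.
        for (String region : touches.keySet()) {
            if (touches.get(region).size() == required) {
                System.out.println("x6 (seat) linked to region " + region);
                return;   // linkage confirmed, as in Fig. 16.5
            }
            // otherwise break the linkage and try the next region
        }
        System.out.println("no linkage found");
    }
}
```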
Understanding (3)

The process of fully understanding the image:

- Perhaps the region designated (100,120) can be associated with some other node
  - Since it touches the floor, we propose that it is x3
  - The fact that x3 is connected to x6 can be confirmed in the image
  - Carrying this process on, three more objects can be identified as legs, and one can be identified as the back
  - The system can then assume that the knowledge structure has been correctly related to the image (Fig. 16.6)
Figure 16.6: Full Understanding of Chair
Understanding (4)

Now we say that the system understands the image:

- The system can find the name of the object of which the identified regions are parts
  - Follow the "part of" links to a1 and trace the name link to "chair" (the system can output "This is a chair")
  - It can follow the name links for all the parts and name them
  - It can also follow ownership, cost, history, location, and other links, if present, to obtain additional information about the chair
Understanding (5)

- A being may incorrectly set up linkages
  - Misunderstanding will occur
  - Incorrect inferences may be made
  - Wood-block example shown in Fig. 16.7
- A being may also fail to understand a perception even though its knowledge is adequate
  - Discovering the proper linkage may involve a calculation outside its repertoire (Fig. 16.8)
Figure 16.7: Understanding a wood-block chair

Figure 16.8: Understanding a fallen chair
Understanding (6)

- There are kinds of knowledge that can never be satisfactorily represented in machines
  - Knowledge of our own humanity
    - The vast array of human emotions and experiences has been earned by each of us in a multitude of events
Table of Contents

- Introduction
- Representing Knowledge
- Understanding
► Learning
- Frames
- Reasoning
- Summary
Learning (1/5)

- How to build adequate stores of knowledge
  - Preparing a knowledge base in an appropriate form
  - Entering it directly as data
  - Having the system learn the knowledge
- There are two kinds of learning
  - Rote learning (the most primitive type)
  - Concept learning (a more profound type)
Learning (2/5)

- Rote learning
  - Suppose a being is presented with the image in Fig. 16.10
    - The being has no concept "chair", has never seen a chair, and has never encountered the term chair
    - The being can distinguish some primitive elements (a group of oaken boards) in the scene (Fig. 16.11)
  - We assume a teacher says, "This is a chair"
    - The being notes the exact nature of the set of objects
    - The being assigns them a representation as a group
    - The being attaches the name "chair" (Fig. 16.12)
Figure 16.10
Figure 16.11

Figure 16.12
Learning (3/5)

- Rote learning (continued)
  - We assume the teacher also names the components of the chair and demonstrates its use for sitting
    - The system increases its knowledge structure to account for these additional data (Fig. 16.13)
  - Characteristics of rote learning
    - A single data item has been observed
    - A memory representation has been created for it
    - Each new object gets its own configuration and memorized structure
    - Thousands of objects → thousands of representations
Figure 16.13: Adding Knowledge in Rote Learning
Learning (4/5)

- Rote learning (continued)
  - Suppose the system is given the three-legged image (Fig. 16.14) and asked to identify it
    - It would fail, because it has no way to build a correspondence between the image and any existing knowledge representation
  - Let's indicate to the system that "this new image is also a chair"
    - We similarly name its parts and demonstrate it as a useful auxiliary for sitting
    - → The system now has two representations of "chair" (Fig. 16.15)
Figure 16.14

Figure 16.15: Two Representations of Chair in Rote Learning
Learning (5/5)

- Concept learning
  - Merges existing knowledge structures and generalizes to some extent (Fig. 16.16)
  - Generalization: something is a chair if it has a seat and a back, is useful for sitting, and has any number of legs
- Advantages of concept learning
  - It saves memory space
  - The merged representation allows the learner to deal with new situations never before encountered
Figure 16.16: Merging Knowledge Representations in Concept Learning
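The contrast between the two kinds of learning can be made concrete. In the sketch below (the feature-list representation is our assumption, not the textbook's structures), rote learning only recognizes exactly memorized chairs, while the merged concept also accepts a never-seen five-legged chair:

```java
import java.util.*;

public class ChairLearning {
    record Example(boolean hasSeat, boolean hasBack, int legs) {}

    public static void main(String[] args) {
        // Rote learning: one memorized representation per observed object.
        List<Example> rote = new ArrayList<>();
        rote.add(new Example(true, true, 4));   // the four-legged chair
        rote.add(new Example(true, true, 3));   // the three-legged chair

        // A novel five-legged chair, never seen before.
        Example novel = new Example(true, true, 5);

        boolean roteSays = rote.contains(novel);   // false: no exact match stored
        // Concept learning: merged, generalized test; any number of legs.
        boolean conceptSays = novel.hasSeat() && novel.hasBack() && novel.legs() >= 1;

        System.out.println("rote: " + roteSays + ", concept: " + conceptSays);
        // prints: rote: false, concept: true
    }
}
```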
Table of Contents

- Introduction
- Representing Knowledge
- Understanding
- Learning
► Frames
- Reasoning
- Summary
Frames (1)

- Knowledge representation method proposed in 1974 by M. Minsky
- A set of slots and slot fillers representing the typical components of a concept
- Slot: describes one aspect of the object
- Filler: the entity assigned to a slot
  - Actual values, default values, procedures, other frames
- Well suited to representing knowledge that forms a single chunk
- A frame ≈ a generalized concept
Frame example: Whenever the being needs information about chairs, the CHAIR frame will be called upon for understanding and for guiding correct action.

Generic chair:

  CHAIR
    SPECIALIZATION-OF: FURNITURE
    NUMBER-OF-LEGS:
      DEFAULT: 4
      IF-NEEDED: use procedure COUNT 'EM
    NUMBER-OF-ARMS: 0, 1, or 2
    SEAT:
    BACK:
      DEFAULT: same as SEAT
    STYLE:
    GOOD-FOR: sitting

Individual chair (a specialization of the generic chair):

  MY-DESK-CHAIR
    SPECIALIZATION-OF: CHAIR
    NUMBER-OF-LEGS:
    NUMBER-OF-ARMS: 2
    SEAT: cushioned
    BACK: cushioned
    STYLE: high-tech
    GOOD-FOR: storing unread papers
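A small Java sketch of the frame machinery this example implies: slot lookup falls back to an IF-NEEDED procedure and then to the parent frame along the SPECIALIZATION-OF link. The class design is our assumption; only the slot names come from the slide.

```java
import java.util.*;
import java.util.function.Supplier;

public class Frames {
    static class Frame {
        final String name;
        final Frame parent;                                    // SPECIALIZATION-OF
        final Map<String, Object> slots = new HashMap<>();
        final Map<String, Supplier<Object>> ifNeeded = new HashMap<>();
        Frame(String name, Frame parent) { this.name = name; this.parent = parent; }

        Object get(String slot) {
            if (slots.containsKey(slot)) return slots.get(slot);            // filler/default
            if (ifNeeded.containsKey(slot)) return ifNeeded.get(slot).get(); // IF-NEEDED
            return parent != null ? parent.get(slot) : null;                // inherit
        }
    }

    public static void main(String[] args) {
        Frame chair = new Frame("CHAIR", null);
        chair.slots.put("NUMBER-OF-LEGS", 4);              // DEFAULT: 4
        chair.ifNeeded.put("NUMBER-OF-ARMS", () -> 0);     // stand-in for COUNT 'EM
        chair.slots.put("GOOD-FOR", "sitting");

        Frame myDeskChair = new Frame("MY-DESK-CHAIR", chair);
        myDeskChair.slots.put("SEAT", "cushioned");
        myDeskChair.slots.put("GOOD-FOR", "storing unread papers");  // overrides parent

        System.out.println(myDeskChair.get("NUMBER-OF-LEGS")); // 4, inherited default
        System.out.println(myDeskChair.get("GOOD-FOR"));       // storing unread papers
        System.out.println(chair.get("NUMBER-OF-ARMS"));       // 0, via IF-NEEDED
    }
}
```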
Frames (2)

- Hypothesis
  - Human brains might hold many thousands of frames
  - Frames may be built up over the years through a process of thousands of concept-learning mergers
- Consider the actions of a person walking down a hall
  - A walking frame, a hall frame
  - The weather frame, the politics frame
  - ...
Table of Contents

- Introduction
- Representing Knowledge
- Understanding
- Learning
- Frames
► Reasoning
- Summary
Reasoning (1/6)

- Definition
  - The process of constructing a linkage from one entity in memory to another
- Three components of reasoning
  - An initial entity, a target entity, and a way of choosing paths among entities
- Constructing links from the initial entity to the target entity
  - A sequence of actions for going from the initial state to the goal
  - AI software has to find this sequence of actions
Reasoning (2/6)

- Example: the Monkey and Bananas problem
  - Suppose the following are in a room: a monkey, a chair, and some bananas hanging from the ceiling
  - The goal is for the monkey to get the bananas
    - Find a sequence of actions beginning at the current state and reaching the goal state
  - In terms of semantic nets, the initial state is represented in Fig. 16.25
  - The goal state is represented by the subnet in Fig. 16.26
Figure 16.25: Initial state
Figure 16.26: Goal state

  A: the position on the floor below the bananas
  B: the position on the floor below the chair
  C: the position on the floor below the monkey
  A': the position well above A, reachable only by standing on the chair
Reasoning (3/6)

- The transition from one state to another
  - Five operations are available for this problem
  - Suppose the monkey is at C and tries the operation "go X" to go to A or B
    - The resulting states achieved in each case are shown in Fig. 16.27
Figure 16.27: State transitions
Reasoning (4/6)

- The problem is to find the correct sequence of actions from the initial state to the goal state (Fig. 16.28)
- This is not necessarily an easy problem
  - There are many things the monkey could do
  - Only a few of the sequences lead to success (Fig. 16.29)
Figure 16.28: AI software has to find the sequence of actions leading to the goal state

Figure 16.29: Search space
Reasoning (5/6)

- A search algorithm tries to find a path from the initial node to the goal node in the tree
  - It works by examining nodes farther and farther down the tree until the goal is found
  - Data structures
    - ACTIVE list: stores the set of nodes that need to be examined next
    - CLOSED list: stores the set of all nodes that have been examined and found not to be goals
Figure 16.30: Searching Example
Search Algorithm

  (a) Put the initial node on the ACTIVE list; the CLOSED list begins empty;
  While there are nodes on ACTIVE do
    (b) Remove a node from ACTIVE using criterion C and call it N;
    (c) If N is a goal, then
          print "success";
          print the list of actions on the arcs from the initial node to N;
          halt;
    (d) Find all children of N that are not on ACTIVE or CLOSED
        and add them to ACTIVE;
    (e) Add N to CLOSED;
  (f) Print "failure";

- Criterion C is the strategy for guiding the search
  - If no helpful strategy is available, C may choose the next node from ACTIVE randomly
  - If there is a way to gauge the direction, C is the mechanism or subroutine making the decision
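A runnable Java version of this search skeleton. Criterion C is fixed here to first-in-first-out removal, which makes the search breadth-first; the action-path printing of step (c) is omitted, and the method simply returns the goal node if one is found. The names are ours, not the textbook's.

```java
import java.util.*;

public class GraphSearch {
    static Optional<String> search(String initial, String goal,
                                   Map<String, List<String>> children) {
        Deque<String> active = new ArrayDeque<>();   // (a) ACTIVE list
        Set<String> closed = new HashSet<>();        //     CLOSED list begins empty
        active.add(initial);
        while (!active.isEmpty()) {
            String n = active.removeFirst();         // (b) criterion C: FIFO
            if (n.equals(goal)) return Optional.of(n);        // (c) success
            for (String child : children.getOrDefault(n, List.of()))
                if (!active.contains(child) && !closed.contains(child))
                    active.addLast(child);           // (d) add unseen children
            closed.add(n);                           // (e) N is not a goal
        }
        return Optional.empty();                     // (f) failure
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = Map.of(
            "A", List.of("B", "C"),
            "B", List.of("D", "E"),
            "C", List.of("F", "G"));
        System.out.println(search("A", "F", tree));  // Optional[F]
    }
}
```

Swapping removeFirst() for removeLast() on the same deque would give the depth-first strategy of the following slides.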
- The monkey and bananas problem can be solved with the previous search algorithm
  - We do not discuss here how C makes its selections
  - We assume that C usually makes good decisions
Reasoning (6/6)

- Criterion C greatly affects the operation of the search algorithm
- There are three strategies for C
  - Breadth-first strategy
  - Depth-first strategy
  - Heuristic strategy
Fig. 16.32: Breadth-First Search

- The order of nodes selected:
  A, B, C, D, E, F, G, H, I, J, K, L, M, N, O
Fig. 16.33: Depth-First Search

- The order of nodes selected:
  A, B, D, H, I, E, J, K, C, F, L, M, G, N, O
Fig. 16.34: Heuristic Strategy

- Assumptions
  - The system has some knowledge about how to find the goal
  - Node selections are made in the direction of the earliest possible achievement of the goal
- Example (Fig. 16.31)
  - If L is a goal node → C (the heuristic function) might select A, C, F, and L
  - If the information were weaker → C might choose A, B, C, F, G, M, L
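One simple way to realize the heuristic criterion C is a priority queue ordered by an estimated distance to the goal, so the node that seems closest is expanded first. The estimates below are invented solely for illustration:

```java
import java.util.*;

public class HeuristicChoice {
    public static void main(String[] args) {
        // Assumed heuristic estimates: lower means "closer to the goal L".
        Map<String, Integer> estimate = Map.of("A", 3, "B", 9, "C", 2, "F", 1, "L", 0);

        // Criterion C: always take the ACTIVE node with the smallest estimate.
        PriorityQueue<String> active =
            new PriorityQueue<>(Comparator.comparing(estimate::get));
        active.addAll(List.of("A", "B", "C"));   // candidates currently on ACTIVE

        System.out.println(active.poll());       // C, the most promising node
    }
}
```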
Table of Contents

- Introduction
- Representing Knowledge
- Understanding
- Learning
- Frames
- Reasoning
► Summary
Summary

- Components of intelligent software
  - Knowledge representation and understanding
    - Semantic networks
    - Frames
  - Learning
    - Rote learning
    - Concept learning
  - Reasoning & searching methods
- Recently, combining AI and BT (biotechnology) has become a hot topic!
Ch 16: AI
Text Review Time