Data representation techniques
for adaptation
Alexandra I. Cristea
USI intensive course “Adaptive Systems” April-May 2003
Overview: Data representation
1. Data or knowledge?
2. Subsymbolic vs. symbolic techniques
3. Symbolic representation
4. Example
5. Subsymbolic representation
6. Example
Data or knowledge?
• Data for AS (adaptive systems) often becomes knowledge
– data < information < knowledge
• We divide it into:
– Symbolic
– Sub-symbolic knowledge representation
Data representation techniques
for adaptation
• Symbolic AI and knowledge
representation, such as:
– Concept Maps
– Probabilistic AI (belief networks)
• see UM course
• Sub-symbolic: Machine learning, such as:
– Neural Networks
Symbolic Knowledge
Representation
Symbolic AI and knowledge
representation
• Static knowledge
– Concept mapping
– terminological knowledge
– concept subsumption (inclusion) inference
• Dynamic Knowledge
– ontological engineering, e.g., temporal representation
and reasoning
– planning
Concept Maps
Example
Proposition: Without the industrial chemical reduction of atmospheric nitrogen, starvation would be rampant in third world countries.
[Concept map diagram: concepts such as Food, Starvation and Famine, Population Growth, Human Health and Survival, Protein, Essential Amino Acids, Plants, Animals, Grains, Legumes, Agricultural Practices, Pesticides, Herbicides, Irrigation, Genetics & Breeding, Fertilizer, Atmospheric N2, the Haber Process, NH3, Symbiotic Bacteria and "Fixed" Nitrogen, connected by labelled links such as "leads to", "required for", "made by" and "such as".]
Constructing a CM
• Brainstorming Phase: list all concepts and facts related to the topic.
• Organizing Phase: create groups and subgroups of related items.
• Layout Phase: arrange the concepts, with the most general ones on top.
• Linking Phase: connect related concepts with lines with arrows, labelled with the relationship.
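As an illustration only (not from the course material), a concept map can be held in code as a set of concepts plus labelled links, where each labelled link is one proposition; a minimal Python sketch:

# Minimal sketch: a concept map as concepts plus labelled links
# (illustrative structure and names, not from the course material).

class ConceptMap:
    def __init__(self):
        self.concepts = set()
        self.links = []  # (source, label, target) propositions

    def add_concept(self, name):
        self.concepts.add(name)

    def link(self, source, label, target):
        # A labelled arrow "source --label--> target" forms a proposition.
        self.add_concept(source)
        self.add_concept(target)
        self.links.append((source, label, target))

    def propositions(self):
        return [f"{s} {lbl} {t}" for s, lbl, t in self.links]

cm = ConceptMap()
cm.link("Legumes", "possess", "Symbiotic Bacteria")
cm.link("Symbiotic Bacteria", "produce", "'Fixed' Nitrogen")
print(cm.propositions())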
Reviewing the CM
• Accuracy and Thoroughness.
– Are the concepts and relationships correct? Are
important concepts missing? Are any
misconceptions apparent?
• Organization.
– Was the concept map laid out in a way that
higher order relationships are apparent and easy
to follow? Does it have a representative title?
• Appearance.
  – Is the map neat and legible? Is the spelling correct?
• Creativity.
Sub-symbolic knowledge
representation
Subsymbolic systems
• human-like information processing:
• learning from examples,
• context sensitivity,
• generalization,
• robustness of behaviour, and
• intuitive reasoning
Some notes on NN
Example
Why NN?
• To learn how our brain works (!!)
• High computation rate technology
• Intelligence
• User-friendliness
Why NNs? Applications
Man-machine hardware comparison
Man-machine information processing
What are humans good at and
machines not?
• Humans:
  – pattern recognition
  – reasoning with incomplete knowledge
• Computers:
  – precise computing
  – number crunching
The Biological Neuron
(very small) Biological NN: the Purkinje cell
Spike (width 0.2 – 5 ms)
Firing
• Resulting signal
  – Excitatory: encourages firing of the next neuron
  – Inhibitory: discourages firing of the next neuron
What does a neuron do?
• Sums its inputs
• Decides whether to fire or not with respect to a threshold
• But: limited capacity:
  – A neuron cannot fire all the time
  – Refractory period: 10 ms – min. time before it can fire again
  – So: max. firing frequency: 100 spikes/sec
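A minimal sketch of this behaviour in Python, with illustrative values for the threshold and for the refractory period named above:

# Toy model of the firing behaviour described above (values illustrative).

THRESHOLD = 1.0
REFRACTORY_MS = 10   # min. time between spikes -> max. ~100 spikes/sec

def fires(inputs, last_spike_ms, now_ms):
    """Return True if the neuron spikes now."""
    if now_ms - last_spike_ms < REFRACTORY_MS:
        return False                     # still in the refractory period
    return sum(inputs) > THRESHOLD       # fire only above threshold

print(fires([0.6, 0.7], last_spike_ms=0, now_ms=5))   # False: refractory
print(fires([0.6, 0.7], last_spike_ms=0, now_ms=12))  # True: above threshold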
Hebbian learning rule (1949)
• If neuron A repeatedly and persistently contributes to the firing of neuron B, then the connection between A and B will get stronger.
• If neuron A does not contribute to the firing of neuron B for a long period of time, then the connection between A and B becomes weaker.
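As a sketch, assuming the common formulation of the rule (strengthen the weight when both neurons are active together, let it decay otherwise; the learning rate and decay factor are illustrative):

# Hebbian update sketch: strengthen w when A and B are active together,
# let it decay otherwise (learning rate and decay are illustrative).

def hebbian_update(w, a, b, lr=0.1, decay=0.01):
    if a > 0 and b > 0:
        return w + lr * a * b   # A contributes to B firing: strengthen
    return w * (1 - decay)      # no co-activity: connection weakens

w = 0.5
for a, b in [(1, 1), (1, 1), (0, 1), (0, 0)]:
    w = hebbian_update(w, a, b)
    print(f"a={a} b={b} -> w={w:.3f}")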
Different size synapses
Summarizing
• A neuron doesn’t fire if the accumulated activity is below the threshold
• If the activity is above the threshold, the neuron fires (produces a spike)
• Firing frequency increases with accumulated activity until the max. firing frequency is reached
The ANN
The Artificial Neuron
[Diagram: Input → functions (inside: synapse; outside: f = threshold) → Output]
An ANN
[Diagram: Input → Layer 1 → Layer 2 → Layer 3 → Output, the layers forming a "Black Box"]
• Let’s look in the Black Box!
NEURON LINK
neuron1 (value v1) → neuron2 (value v2 = w * v1), where w is the weight of the connection
ANN
• Pulse train – modeled by its average firing frequency ≥ 0
• Model of synapse (connecting element)
  – Real number w > 0 : excitatory
  – Real number w < 0 : inhibitory
• N(i) – set of neurons that have a connection to neuron i
  – j ∈ N(i)
  – wij – weight of the connection from j to i
neuron computation
Inputs v1, v2, …, vn arrive over connections with weights w1, w2, …, wn.
Internal activation fct: S = Σ i=1..n vi * wi – b
External activation fct: O = f(S)
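The same computation in code, as a minimal sketch using the standard sigmoid (introduced on the next slide) as the external activation function f:

import math

# One artificial neuron: internal activation S = sum(v_i * w_i) - b,
# output O = f(S), here with the standard sigmoid as f.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(values, weights, bias):
    s = sum(v * w for v, w in zip(values, weights)) - bias  # internal activation
    return sigmoid(s)                                        # external activation

print(neuron([0.5, 1.0, -0.3], [0.8, -0.2, 0.4], bias=0.1))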
Typical input output relation f
1. Standard sigmoid fct.: f(z) = 1/(1 + e^(–z))
2. Discrete neuron: fires at max. speed, or does not fire
   xi ∈ {0,1}; f(z) = 1 if z > 0; 0 if z ≤ 0
Other I-O functions f
3. Linear neuron: f(z) = z
   output xi = zi = Σ j∈N(i) wij vj – θi
4. Stochastic neuron: xi ∈ {0,1}; output 0 or 1
   input zi = Σ j∈N(i) wij vj – θi
   probability that the neuron fires: f(zi)
   probability that it doesn’t fire: 1 – f(zi)
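The four input-output functions side by side, as a Python sketch (the stochastic neuron draws a random number against f(z)):

import math, random

def sigmoid(z):             # 1. standard sigmoid
    return 1.0 / (1.0 + math.exp(-z))

def discrete(z):            # 2. fires at max. speed or not at all
    return 1 if z > 0 else 0

def linear(z):              # 3. output equals activation
    return z

def stochastic(z):          # 4. fires with probability f(z)
    return 1 if random.random() < sigmoid(z) else 0

z = 0.3
print(sigmoid(z), discrete(z), linear(z), stochastic(z))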
Feedforward NNs
Recurrent NNs
Summarizing ANNs
• Feedforward network, layered
  – No connections from outputs back to inputs, neither between layers nor at the level of a single neuron
• Recurrent network
  – Anything is allowed: cycles, etc.
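A minimal sketch of a strictly layered feedforward pass; because values only flow forward, no cycles can arise, which is exactly the restriction a recurrent network relaxes:

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(values, weight_rows, biases):
    """One layer: each row of weights feeds one neuron of the next layer."""
    return [sigmoid(sum(v * w for v, w in zip(values, row)) - b)
            for row, b in zip(weight_rows, biases)]

def feedforward(inputs, layers):
    # Strictly layer-by-layer: no connections back toward the input,
    # so no cycles are possible (unlike in a recurrent network).
    values = inputs
    for weight_rows, biases in layers:
        values = layer(values, weight_rows, biases)
    return values

net = [([[0.5, -0.6], [0.3, 0.8]], [0.1, -0.1]),   # layer 1: 2 -> 2 neurons
       ([[1.0, -1.0]], [0.0])]                      # layer 2: 2 -> 1 neuron
print(feedforward([1.0, 0.5], net))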