Consciousness, Emotion and Learning in Human Mind
A New Approach
Murali Sangubhatla
Computer Science
University of Minnesota, Twin Cities.
[email protected]
Abstract
Advanced research in the fields of Cognitive Science, Neuroscience, Psychology, Computer
Science and Physics is setting a stage for understanding the architecture of mind. This paper
portrays a new role for consciousness and emotion. Consciousness is employed by the mind as
a tool for filtering perceived information. Emotion is internal information generated by the
mind in the service of choosing what to do next, and it helps in organizing the information.
All information is represented in a form that accounts for Chalmers’ view that patterns and
information are two aspects of the same thing – pattern-information. A learning mechanism
for the mind that describes how sophisticated thoughts arise from basic incentives is
proposed and illustrated with an example. The paper also attempts to address the uncertainty
left in the Pandemonium model concerning consciousness by formulating a timer mechanism.
Since the discussion assumes an efficient knowledge representation, the need for knowledge
to exhibit intelligent behavior is emphasized.
1 Introduction
The advent of computers has accelerated the rate at which people gain new conveniences. Online
shopping, intelligent agents that bid for cheap fares and robots replacing pets are the norm. The
current focus is on developing more and more agents so that people have enough time to take care of
other things. But are we ready to accept that these entities are intelligent?
Even the scientists in the fields concerned with Artificial Intelligence are divided on this. Without
elaborating on the opinions of the strong and weak AI communities, the Oxford English Dictionary
defines intelligence as the faculty of understanding; intellect. I am willing to accept an entity as
intelligent as long as it performs its tasks in the given domain well. With knowledge of the bidding
constraints, online bidding agents are intelligent, for they understand a string as an ask quote and
issue a price based on an embedded strategy. But what is the measure of intelligence?
The human mind has been the standard metric for intelligence, yet a comprehensive model of the
human mind has been elusive. Advanced research in the fields of Cognitive Science, Neuroscience,
Psychology, Computer Science and Physics is setting a stage for understanding the architecture of
mind. This paper portrays a new role for consciousness and emotion. Consciousness is employed by
the mind as a tool for filtering perceived information. Emotion helps in organizing the
information. A different learning mechanism for the mind is proposed and illustrated with an example.
Section 2 introduces how the paper uses the multi-perspective terms mind, consciousness, cognition and
emotion. Section 3 emphasizes the necessity of knowledge for intelligent behavior. Section 4 elaborates
on issues concerning consciousness. Section 5 describes the role of consciousness and emotion with
respect to cognition. Section 6 proposes a learning mechanism for the human mind and the mechanism
responsible for switching consciousness.
2 Background
Before we get into the discussion, it would be appropriate to clearly define the terms mind, cognition,
consciousness and emotion. Here are some of the definitions from the Oxford English Dictionary:
“Mind is subjective, identified with conscious mind; mind is incorporeal and needs to be distinguished
from body.”
The brain is the physical mass that exists out there in the head; it is the framework, the hardware
for the mind. Minds are what brains do. The nervous system and the brain are a snapshot, while the
mind is dynamic. We will also see how mind and body are interrelated.
Consciousness:
1. Internal knowledge or conviction; knowledge as to which one has the testimony within oneself;
esp. of one's own innocence, guilt, deficiencies, etc.
2. The state or fact of being mentally conscious or aware of anything.
Consciousness, in simple words, is the awareness that a fact is being recorded in the knowledge
database. While we are asleep, we do not remember most external perceptions, since we are not
aware of recording any event; as John Jackson (1987) proposed, we shut off the consciousness
interference. If one argues that he remembered what he dreamed, I argue that he was not in sound
sleep but conscious of the fact that it was being recorded.
Cognition:
1. The action or faculty of knowing; knowledge, consciousness; acquaintance with a subject.
2. Apprehension, perception.
3. The action or faculty of knowing taken in its widest sense, including sensation, perception,
conception, etc., as distinguished from feeling and volition; also, more specifically, the action of
cognizing an object in perception proper.
4. A product of such an action: a sensation, perception, notion, or higher intuition.
The dictionary does not draw a clear line between cognition and consciousness. Definition 4 of
Cognition is significant for this discussion. A perception from the environment could lead to the creation
of a new fact that is added to the knowledge database. I wish to use the terms cognition and knowledge
database interchangeably. As we see later, we assume that knowledge is key to intelligence.
Emotion:
1. A moving, stirring, agitation, perturbation (in physical sense).
2. Any agitation or disturbance of mind, feeling, passion; any vehement or excited mental state.
3. A mental ‘feeling’ or ‘affection’ (e.g. of pleasure or pain, desire or aversion, surprise, hope or fear,
etc.), as distinguished from cognitive or volitional states of consciousness. Also abstr. ‘feeling’ as
distinguished from the other classes of mental phenomena.
Emotion is strongly interrelated with cognition. A precondition or context in the knowledge database
could arouse a particular emotion. Similarly, an emotion could have a great impact on the event being
recorded in the knowledge database.
3 Is Knowledge Essential For Intelligence?
We have come across several knowledge-based systems. IBM Deep Blue had a huge knowledge
database comprising all the previous games of Kasparov. Using this knowledge, the game strategy
would place the next move. Had there been no knowledge encoded in the form of programs and
statistics, all the system could have done was choose a random move. Surely, I would not be
convinced to accept a randomizer as intelligent. Even a simple natural language processing agent,
Non Stop Chatter Box1, required about 75 columns (about 600 bytes) per word entry and pointers to
several other such words (indexed by grouping). The semantic parser and the responder programs were
selection processes that relied heavily upon the encoded rules and the database. The following
sections proceed on the same assumption: intelligent action needs efficient knowledge representation.
1 Non Stop Chatter Box, designed by me along with Komal K and Mythreyi V, was a simple natural
language processing program implemented in Java that made use of a relational database for
accumulating knowledge.
An intelligent system has to constantly acquire new knowledge and integrate it with the old knowledge.
This way the machine can learn and adapt to changing environments.
If we are invited to a party, several questions pop up in our head. What dress should I wear? What
gift should I purchase? There must be some way of rolling out the contextual knowledge associated
with the word ‘party’. Then, our thought processor switches to query mode, raising several questions
from this working memory. If we did not store any information, it is difficult to understand how the
acoustics of the word ‘party’ would convey anything to us. Thus, knowledge is essential for
intelligent behavior.
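To make the claim concrete, here is a minimal sketch, not from the paper, of how a stored knowledge database could turn the percept ‘party’ into the questions a thought processor raises; the structure and all names are hypothetical.

```python
# Minimal sketch: a knowledge database mapping a perceived word to
# contextual associations, from which a "query mode" raises questions.
knowledge_db = {
    "party": ["dress", "gift", "venue", "time"],
}

def raise_questions(percept, db):
    """Roll out contextual knowledge for a percept and turn it into queries."""
    associations = db.get(percept, [])   # no stored knowledge -> no questions
    return [f"What {item} should I choose?" for item in associations]

print(raise_questions("party", knowledge_db))
# Without stored knowledge, the bare acoustics of a word convey nothing:
print(raise_questions("xyzzy", knowledge_db))  # -> []
```

The empty result for an unknown word is the point of the argument: with no stored information, perception alone yields no intelligent behavior.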
The above discussion only emphasizes the necessity for knowledge. But should it be represented
within the same entity? To represent or not is the famous Third AI Debate. Brooks’ subsumption
architecture and his two robots Allen and Newell2 suggest that no internal representation is
necessary for intelligence. Even the Pengi world developed by Agre and Chapman does not employ any
sort of internal representation. Pengi gets the knowledge by rescanning the environment and exhibits
intelligent behavior.
But then, what about autonomous agents for complex tasks? What if we had two Pengis and they needed
to communicate in order not to hurt each other, or needed to trap a bee? The necessity for
representation would surely have arisen. Freeman says that brains create internal information in the
service of choosing what to do next. Thus, brains not only store the information out there in the
world but also create internal information in the process of choosing an action.
2 Rodney Brooks implemented two sophisticated robots and named them Allen and Newell. Allen simply
avoids obstacles, while Newell is more layered and picks up empty cans inside a room. Both are based
on the subsumption architecture.
4 Consciousness
Chalmers (1991) talks about consciousness from two different approaches: the third-person approach
regards consciousness as a scientific problem, subject to the usual methods of science, while the
first-person approach treats it as a metaphysical issue. He also claims that there are three problems
concerning consciousness. The first is the problem of sensory qualia (why does red look like red?).
The second is the problem of mental content: thoughts are about something, say white elephants. By
our physicalist assumption, thought arises from neural firings; what do neural firings have to do
with white elephants? The third is the existence of subjective experience. Why should subjective
states exist in the first place?
Qualia are the qualitative aspects of our mental states, such as color sensations, the taste of a
chocolate, pleasure and pain. One might conclude that qualia cannot be understood in terms of
physical brain functions, and that something else is needed to explain them. Chalmers’ idea is that
“patterns carry some information; all information is carried out in this world by some pattern or
another; patterns and information are two aspects of the same thing”. The following sections also
proceed along these lines (the interchangeability of pattern and information). Chalmers’ viewpoint
is that any agent that uses information derived from patterns is conscious to some degree.
Can a behavior model simulate the full functionality of the human brain? The prospect of computers
competing with the full range of human capabilities generates strong, often adverse feelings, as well as
no shortage of arguments that such a specter is theoretically impossible. In 1989, Roger Penrose put
forth a conjecture that draws on the famous “incompleteness theorem”
proved by Gödel. The theorem states that in a mathematical system powerful enough to generate the
natural numbers, there inevitably exist propositions that can be neither proved nor disproved. A
corollary of Gödel’s theorem is that there are mathematical propositions that cannot be decided by an
algorithm. In essence, these Gödelian impossible problems require an infinite number of steps to be
solved. So, Penrose conjectures that machines cannot do what humans can do because machines can only
follow an algorithm. An algorithm cannot solve a Gödelian unsolvable problem, but humans can;
therefore, humans are better. Penrose goes on to state that humans can solve such unsolvable
problems because our brains do quantum computing.
The second conjecture made by Penrose is relevant to this discussion and is more difficult to
resolve. It is that an entity exhibiting quantum computing is conscious; he states that it is human
quantum computing that accounts for human consciousness. Thus, quantum computing and quantum
decoherence yield consciousness. Since observing quantum uncertainty causes quantum decoherence,
there is a link between consciousness and quantum decoherence. Even applying quantum logic,
Penrose’s conjecture does not seem to follow. But, due to the strong nexus between quantum
decoherence and consciousness, we cannot reject it.
Since humans are conscious, does the human brain contain quantum computers? Whether something is
conscious or not is a philosophical question. But, assuming that humans are conscious, the following
section portrays a new role for consciousness.
5 Consciousness, Emotion in a new role
After a general discussion on intelligence and knowledge, let us now focus on the human mind. The
issues that need to be addressed are:
• What is to be stored?
• How should it be stored?
The human eye performs many scans per second, and considering the size of each image, we would
exhaust huge memories. Clearly, the British Museum method does not suffice. It is worthwhile to
recall Ornstein’s statement that the sensory system is not a window into the environment but a
filter of the information out there. Although we look at several things, we are not conscious of
most of them (and hence, according to our definition, do not record or store most of the
information). This is plausible in the sense that the human ear is insensitive to acoustic waves
outside the range of 20 Hz - 20 kHz. Similarly, our eyes are incapable of seeing light outside the
visible spectrum (violet to red). The definition of senses as filters reduces the problem, but only
to a certain extent. Even in the perceivable range there exists a lot of information. Then, what is
responsible for filtering the perceivable information? Apparently, we are not aware of each and
every event in the environment, even if our senses have perceived it.
This is the responsibility of consciousness. We make a note of events only when we are conscious.
Thus, we could say that the mind employs consciousness as a tool to filter the information available
in the perceivable range. Consciousness determines what is to be stored and processed. The following
figure is an attempt to illustrate this feature.
The sensory system filters the information first with the help of receptors. The perceived
information is then carried to the response generator. This demon consults the memory / knowledge
database and employs an action selection mechanism in generating an appropriate response. The action
selection mechanism is influenced by the environment and the biological condition of the body.
Observe that emotions are generated as a side effect. Meanwhile, consciousness filters the
information on its way to the information processor. When we are conscious, we let the information
flow through; when unconscious, the information is filtered out. Recall what John Jackson (1987)
asked in his Pandemonium model: “Should there be a demon that acts on the subarena (consciousness)
to abort sensory input?” This issue is dealt with in the next section.
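The flow just described can be sketched as a pipeline; this is a toy illustration, not the paper's model, and it assumes (hypothetically) that consciousness acts as a simple boolean gate between the sensory filter and the information processor.

```python
# Sketch of the described pipeline: sensory receptors filter first,
# then consciousness gates what reaches the information processor.
AUDIBLE_HZ = (20, 20_000)          # the ear's receptor-level filter

def sensory_filter(stimuli):
    """Keep only stimuli inside the perceivable range."""
    return [s for s in stimuli if AUDIBLE_HZ[0] <= s["hz"] <= AUDIBLE_HZ[1]]

def consciousness_gate(percepts, conscious):
    """When unconscious, percepts are filtered out before storage/processing."""
    return percepts if conscious else []

stimuli = [{"hz": 5, "label": "infrasound"},
           {"hz": 440, "label": "tone"},
           {"hz": 40_000, "label": "ultrasound"}]

perceived = sensory_filter(stimuli)                 # receptor-level filter
stored = consciousness_gate(perceived, conscious=True)
print([s["label"] for s in stored])                 # only the audible tone survives
```

With `conscious=False` nothing reaches storage, which is the sense in which the mind uses consciousness as a second filter on top of the senses.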
The functions of information processor are:
• To represent the information appropriately (find a suitable pattern).
• To build the contextual knowledge for this information. It could get the aid of a pattern
recognition system for this purpose.
• To “organize” the memory / knowledge database with the new pattern and its contextual
knowledge.
Contextual knowledge is important for the response generator demon in generating an appropriate
response. The same pattern ‘fire’ triggers a response of fear in the context of ‘sudden occurrence’
while not triggering any exclamatory response in the context of ‘keep warm’. In simple terms, it can
be viewed as a tuple indexed by the pattern, with several columns for different external/internal
situations. The information processor keeps filling these columns. A good candidate for implementing
contextual knowledge would be K-Lines, introduced by Minsky in 1980. K-Lines are active agents in
memory that can be used to construct hierarchical memories; more information about them can be found
in Minsky (1980), Cognitive Science. This concerns the second issue raised at the beginning of the
section, about how information is to be stored.
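As a loose sketch of the “tuple indexed by the pattern” view, the following is a hypothetical data structure for illustration only, not Minsky's actual K-Line mechanism; the patterns, contexts and responses are invented.

```python
# Hypothetical sketch of contextual knowledge: each pattern indexes a
# tuple-like record whose columns hold responses for different contexts.
contextual_knowledge = {
    "fire": {"sudden occurrence": "fear", "keep warm": None},
}

def respond(pattern, context, ck):
    """The response generator consults the pattern's context columns."""
    return ck.get(pattern, {}).get(context)

def learn(pattern, context, response, ck):
    """The information processor keeps filling in new context columns."""
    ck.setdefault(pattern, {})[context] = response

print(respond("fire", "sudden occurrence", contextual_knowledge))  # fear
print(respond("fire", "keep warm", contextual_knowledge))          # no response
learn("fire", "cooking", None, contextual_knowledge)               # a new column
```

The same pattern yields different responses purely through the context column consulted, which is the property the ‘fire’ example relies on.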
The separation of response generator from the information processor is important to account for the
time difference between acting on reflex and being aware of it. Even during sleep, the bite of a mosquito
draws an immediate reflex from the response generator.
Emotions are triggered as a side effect of a response (internal information created in the process
of choosing the next action). Vandamme (1988) describes emotion as crucial to the survival of the
individual and the species:
....emotion plays a crucial role in the survival of the individual. Although it is difficult to indicate
where, in the evolution of species, emotion starts, there is no doubt that in those species where
emotion is present as a specialization (all animals(?)), the species adapts more quickly to its
environment.
Are we adapting better just because we are emotional? Perhaps. The key idea is to view emotion as an
enabler that concentrates attention and energy on certain aspects of the situation, which thereby
becomes organized, reorganized, and even hierarchized. We do not easily forget events that had a
great emotional impact. If we close our eyes and try to recall something, we recall events in the
order determined by their emotional impact on us. Thus, the information processor uses emotions as a
tool to organize the information.
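That organizing role can be caricatured as follows; the events and impact scores are invented for illustration, and the single-number “impact” is an assumption, not the paper's claim about how emotion is encoded.

```python
# Toy sketch: the information processor records an emotional impact with
# each event; recall returns the most emotionally charged events first.
memories = [
    {"event": "bought groceries", "impact": 0.1},
    {"event": "won an award",     "impact": 0.9},
    {"event": "missed the bus",   "impact": 0.4},
]

def recall(mem):
    """Recall events in the order determined by their emotional impact."""
    return [m["event"] for m in sorted(mem, key=lambda m: m["impact"], reverse=True)]

print(recall(memories))
# ['won an award', 'missed the bus', 'bought groceries']
```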
6 Learning Mechanism in Human Mind
A child is born with a genetic build-up. Initially, the human brain consists of just the hardware.
Primary drives, such as crying when hungry, are hardwired. The knowledge database is initially
empty. Once the child starts interacting with the environment, the brain keeps building the special
software that updates the knowledge database. By means of learning, this hardwired brain evolves
into a mind. For example, initially the child has no idea why it is crying; it is simply a
biological reaction. The hypothalamus and genetic factors responsible for hunger trigger the cry
mechanism. As the child recognizes the environment, the mind recognizes the feeling (since it has
already happened once) as hunger and, instead of triggering the cry mechanism, searches (by
initiating a food-fetch demon) for the pattern called food that fulfilled hunger previously.
[Figure: initial hardwiring. Hunger, a biological process, triggers the cry mechanism.]
The above illustration is meant to bring forth a learning model for the mind. Observe that the
knowledge database not only stores the information out there in the world (food) but also internal
information (feelings, emotions). The mind records the precondition, the action and the
postcondition3. Such blocks of information are used in building the contextual knowledge. Also, the mind
starts framing goals as illustrated above; goal formation is initiated by primary drives such as
hunger, thirst, etc., which are essential for survival. As human cognition improves (i.e., as the
knowledge database grows), the mind learns to frame complex goals. Instead of searching for just one
pattern of food, the mind searches for patterns similar to milk, water, etc. After every successful
consumption, the knowledge database has a record of each of the food items. From the contexts of
food and where it can be found, the mind learns that food has to be fetched, frames goals to search
around, and finds out the means of getting it. As it learns, it strengthens its ability to inhibit
the cry mechanism. Thus, a grown-up person, instead of crying in hunger, tries to find food first.
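The hunger example can be sketched in code; the two-demon structure, the strengths and the 0.5 threshold are invented for illustration and are only a caricature of the mechanism described, not a behavior-network implementation.

```python
# Sketch of the hunger example: a hardwired cry demon competes with a
# learned food-fetch demon; learning strengthens the inhibitory link.
class Mind:
    def __init__(self):
        self.inhibition = 0.0      # learned inhibition of the cry mechanism
        self.known_foods = []      # knowledge database: food patterns

    def learn_food(self, pattern):
        """Each successful consumption records a pattern and strengthens inhibition."""
        self.known_foods.append(pattern)
        self.inhibition = min(1.0, self.inhibition + 0.5)

    def on_hunger(self):
        """Primary drive: cry unless the learned food-fetch demon wins."""
        if self.known_foods and self.inhibition >= 0.5:
            return f"fetch {self.known_foods[-1]}"
        return "cry"

mind = Mind()
print(mind.on_hunger())            # newborn: only the hardwired reaction
mind.learn_food("milk")
print(mind.on_hunger())            # learned: search for the food pattern
```

The point of the sketch is the inhibitory link: the hardwired drive is never removed, only outcompeted once the knowledge database holds a better response.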
Similarly, we could explain the feeling of fear. When a pattern is perceived by the eyes and sent to
the information processor, or when we are dreaming, the context associated with the pattern could
trigger an emotion of fear (the release of adrenaline, etc.). The inhibitory link in this case would
be the person’s bravery.
Each interaction with the environment enriches its own learning mechanism and its knowledge
database. Thus, secondary drives evolve as a result of experience and learning. As the mind learns more,
it begins to build better goals and deal with different situations well.
If everyone is born with an empty knowledge database, how could only Einstein formulate the
mass-energy equivalence? The greatest scientists of all time are believed to be the ones with the
ability to use their minds as laboratories. They keep questioning (perhaps they are born with a
lower threshold for the node responsible for raising questions), and this keeps their information
processor busy. Information is frequently retrieved from the knowledge database, and new facts are
derived (after gaining more patterns from books, discussions, etc.) and integrated with the existing
ones.
3 Inspired by Maes’ system of behavior networks. Source: Franklin, Stan (1995). Artificial Minds.
It is now time to discuss the mechanism that switches consciousness. I propose that the conscious
mind, as a side effect of formulating goals, sets a timer for each goal (this is plausible as a
direct extension of time-framed real-world goals). When the biological situation of the body demands
it, the mind loses consciousness; this can be thought of as a biological process (example: sleep).
When the first timer for a goal goes off, the mind becomes conscious. For example, during infancy,
when the mind has no goals but primary drives, the only clocks are the biological clocks, and the
only states are sleep (unconscious) and wake (conscious). As we grow up, we formulate new goals
(wake up early and prepare for the exam, etc.). Sometimes we wake up even before the alarm clock
(real world) goes off. How do we manage to do that? It has to be some timer mechanism set by the
conscious mind.
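The proposed timer mechanism can be sketched as follows; the goals and wake times are invented, and representing a timer as a single number on a 24-hour clock is an assumption made only to keep the illustration small.

```python
# Sketch of the proposed timer mechanism: formulating a goal sets a
# timer as a side effect; the mind becomes conscious when the first
# (earliest) timer goes off.
goal_timers = {}                    # goal -> wake time (hours, 24h clock)

def formulate_goal(goal, time_hours):
    """Setting a goal sets a timer as a side effect."""
    goal_timers[goal] = time_hours

def wake_time():
    """The mind becomes conscious at the earliest goal timer."""
    return min(goal_timers.values()) if goal_timers else None

formulate_goal("prepare for the exam", 5.5)
formulate_goal("catch the flight", 7.0)
alarm_clock = 6.0                    # the real-world alarm
print(wake_time())                   # the earliest goal timer fires first
print(wake_time() < alarm_clock)     # awake before the alarm goes off
```

An infant's mind, with no goals, would have an empty timer table here and only the biological sleep/wake cycle, matching the two-state picture above.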
7 Conclusion
The discussion was at a certain level of abstraction, assuming an efficient representation of
information, and focused mainly on the roles of consciousness and emotion in relation to cognition.
The learning mechanism borrows certain ideas from Maes’ behavior networks, Minsky’s mechanisms of
mind, and Chalmers’ idea of the mutual relationship between information
and patterns. Behavior networks as in Maes’ system would be apt for experimenting with the learning
mechanism described. The initial strengths of the synapses and the thresholds would be determined by
the genetic build-up, and the network topology would be just suitable to support the primary drives.
The learning mechanism could be integrated with a difference engine that triggers terror (terror is
explained as being due to uncertainty). Similarly, it could be extended to explain several
mechanisms of mind.
The discussion also addressed the uncertainty left by John Jackson in the Pandemonium model about
the demon responsible for switching consciousness. Dennett explains that hallucinations are a result
of the sensory system’s deprivation of percepts for a substantial time. Perhaps the timer mechanism
proposed towards the end can be extended to the sensory system.
8 References
Agre, Philip E (1997). Computation and Human Experience. Cambridge University Press.
Brown, Julian (1999). Minds, Machines And The Multiverse – The Quest for the Quantum Computer.
Simon & Schuster.
Campbell, H.W, Stuart Graham D (1988). “Cognition and The General Theory of Symbolic
Expressions”. Research Group for Informatica Humana, The National University of Groningen, The
Netherlands. Kluwer Academic Publishers.
Dennett, Daniel C (1991). Consciousness Explained. Little, Brown and Company.
Erdi P. (1988). “From Brain Theory To Future Generations Computer Systems”. Central Research
Institute For Physics, Hungarian Academy of Sciences, Hungary. Kluwer Academic Publishers.
Franklin, Stan (1995). Artificial Minds. The MIT Press, Cambridge, Massachusetts.
Kurzweil, Ray (1999). The Age Of Spiritual Machines. The Penguin Group, NY.
Penrose, Roger (1997). The Large, The Small And The Human Mind. Cambridge University Press.
Vandamme, Fernand (1988). “Emotion, Cognition, Intelligence and Meaning In An Artificial
Intelligence Perspective”. Labo for Applied Epistemology, Blandijnberg. Kluwer Academic
Publishers.
Waldrop, Mitchell M (1987). Man-Made Minds. Walker and Company.