The Many Problems of Representation
I.
Background.
A representation is anything that is about something. Philosophers call the property of being
about something Intentionality. Intentionality and intentional are technical terms in philosophy; they do
not mean a representation was necessarily made ‘intentionally’ in the everyday sense; just that it is
about something. When we think, we think about things, e.g., about my birthday party yesterday, so
thoughts are ways of representing. The party plays a role in my thought in two distinct ways. On the
one hand, there is the actual party I was at yesterday which my current thought is directed at or refers
to. This is called the target (referent or intentional object--these terms are not all quite synonymous) of
the thought. On the other hand, there is how the party is presented in my thought; for example, I may
think of it as, ‘yesterday's party’, or as ‘my last birthday party.’ These two thoughts have different
contents because they present the same referent (intentional object) in different ways (different modes
of presentation). If I have a thought with the content, ‘my party was yesterday,’ then the thought is true
(target and content match); but if the thought had the content ‘my party happened a week ago’, the
thought would be false. It is because target and content are distinct that thoughts can be true or false
(target and content match or mismatch); and that deception, illusions and hallucinations are thus
possible. Similarly, the distinction is what allows my boss and my spouse's lover to be thought of as
different people (same referent person, different contents); and thus more generally allows people to be
at cross-purposes and for life to be interesting.
Representation is a central concept in cognitive science. The fact that some representations are
not mental (e.g. photos) has motivated the idea that understanding simple physical representations
might enable understanding the mind in a naturalistic way. Some feel that understanding representation
might give us a handle on consciousness itself. Perhaps the goal of psychology in general is precisely to
understand to what extent and in what way the mind represents. According to Perner (1991), how
children come to understand the mind as representational is central to child development. Further,
understanding representation is central to the philosophy of language, of science and of rationality.
Here we will focus just on issues relevant to the nature of mental representation.
II.
The Representational Theory of Mind.
If you believe that there is a real external world, and you are not a complete relativist about
truth, you perhaps accept the claim that people, in having mental states, can represent that external world. This claim is a representational theory of mind in its weakest sense. It is bolder to assert that the best
explanations of the mind will be in terms of representations (a move not all are willing to make, as
discussed below). Explaining the mind in terms of representations offers the appeal of having a
naturalistic explanation of the mind, if only representations could be understood naturalistically. If a
student thinks of a party, then presumably the student has some neural activity (representational
vehicle/medium) that is about the party. There is reason to think we should be able to understand how
physical things, like neural activity, could acquire the apparently non-physical relation of aboutness.
Computer programs, for example, consist of internal electrical processes that represent things outside
the machine. Other familiar physical entities like linguistic inscriptions or pictures are also able to
acquire aboutness. For instance, a photo of the party is a physical object (representational vehicle: piece
of paper with patterns of colour) that shows a scene of the party (target) in a certain way (content).
Moreover, it seems clear what each of these elements (vehicle, content and target) is. However, when
one asks why the photo shows the scene of the party the answer defers to the beholders of the photo:
they see that scene in it; similarly for language: the readers understand the linguistic inscriptions in the
book, etc. If it is we human users of these representations who determine content, then this makes them useless for explaining the origin of the mind's Intentionality, because it is only derived from their users'
Intentionality (derived Intentionality, see Searle 2005). There is no user endowed with Intentionality
reading our nervous activity. Philosophers have since made various proposals for how Intentionality can
be naturalized by explaining how mental representations can have content without the help of an agent
imbued with Intentionality. This quest for naturalization has also been made pressing by the shift from
classical computer programming, where the programmer starts with meaningful expressions, to
connectionist and robotic systems, where the system starts with nodes and connections that carry no
meaning. Meaning has to be built up by interacting with the environment. By what criteria does an element in such a network represent something? We discuss possible answers below. A failure to
answer the question pushes one to either adopt dualism (intentionality is an inexplicable primitive
property of minds) or to deny a representational explanation of the mind is possible.
According to the representational theory of mind proposed by Fodor (amongst others; see Sterelny,
1990) a mental state like thinking of a party consists of a mental representation of the party as well as
the attitude of thinking, captured by the functional role the representation of the party plays in relation
to other mental states. Different functional roles define different attitudes, including wishing,
wondering, dreaming and so on (in fact, all the different types of mental states recognised in ordinary
language, i.e. as recognised in ‘folk psychology’). Some people who endorse explaining the mind in
terms of representations nonetheless reject the claim that the folk categories of believing, wanting, etc.
are real natural kinds (for example, Churchland). Fodor’s suggestion of a representational theory of
mind included the claim that the mental representations were of a special sort, namely symbolic. Others
have denied mental representations are symbolic, an issue we also comment on below.
III.
Foundational Problems of Naturalizing Representation:
III.1. Referents vs Targets
Linguistic expressions have referents, i.e., the thing they pick out in order to provide more
information about it. If I say “The tree is green” I pick out the tree (referent) and volunteer the
information that it is green. The target, a term introduced by Cummins, is the whole situation the
content of the representation has to be matched against to determine its adequacy, i.e. whether the
representation represents accurately or misrepresents. The target of “the tree is green” is not just the
tree indicated but also its colour. If its colour is indeed green my sentence represents the tree and its
colour accurately (is true), if the tree is autumn yellow my sentence is false (inaccurate). A
representation misrepresents if it does not match its target. Theories of representation need to specify
not only what determines content but also the target of a representation; this is a largely neglected
problem.
Not all representations have referents. Take a photo of the green tree. It is otiose trying to
decide whether the tree or the colour is the referent. Only different verbal descriptions of what the
photo shows partition the content into referent and predicate, e.g., ‘The tree is green’ makes the tree the
referent and being green the predicate, while ‘green is the colour of this tree,’ makes green the referent
and being the colour of this tree the predicate. But the photo has a target, namely the fact that the tree is
green. If the photo matches this fact it is accurate otherwise inaccurate.
Another interesting case is talk about nonexisting entities. If I say ‘the phlogiston in metal has
negative weight’, there is no phlogiston I could be referring to. This led Russell to deny that—with few
exceptions—there are any referential expressions. Another line of thought, made popular in Discourse Representation Theory (DRT), is that the expression 'phlogiston' has a discourse referent, so that chemists at
the time could coherently talk about the seemingly same thing, even though—as we now know with
hindsight—there are no external referents for this particular discourse referent.
An open question is whether all representations have targets. I may imagine a horse of a certain
sort but not apply it to any actual horse. Hence the question about the accuracy of my image does not arise.
As in the case of non-existing referents we have the option to claim that there are no targets for such
images, or we can assume that the image brings with it an imagined situation as its target. The content
of the image fixes its target, which means that such images cannot be inaccurate. However, by fixing a
target it becomes meaningful to speak of subsequent images of the same situation and to judge their
accuracy. In other words, only by assuming that fiction creates a fictional world can we account for the human capacity to have endless theological disputes.
III.2
Representational content.
The content of a representation is how it portrays the world as being. We discuss four main approaches to how the content of a representation is determined. There is no wide agreement yet that
any of them has solved the problem.
III.2.a. According to Dennett, there is no fact of the matter as to what the content of a representation is,
or even whether it is a representation (a position called irrealism). It is sometimes useful for an
observer to view a system as an intentional one; in that way the observer takes an intentional stance.
For example, I might say 'my computer is thinking about the problem' or 'my car wants to stop now', if I
find this useful in predicting behaviour. In taking this stance one works out what beliefs and desires the
system ought to have if it were a rational agent with a given purpose. Hence one can work out what
behaviour the system ought to engage in. On other occasions it might be more useful to take a physical
stance, analysing the system in terms of physical laws and principles, a process that can take
considerable effort. What type of stance we take is largely a matter of convenience. Similarly, even the
thoughts we seem introspectively to be having are actually interpretations we make: stories we spin
about ourselves (for example, that we are having a certain thought) for which there is no fact of the
matter. The account raises a number of problems: Why is the intentional stance so good at making
predictions (at least for people) if it fails to describe and explain anything real? How can we as
observers make interpretations of ourselves without already having intentionality? There is also the
problem that people are not always optimally rational; but the intentional stance needs some criterion
like rationality to constrain the stories that are spun. The remaining theories are realist about
representations: They presume there is some fact of the matter as to whether something represents and
what its content is.
III.2.b Causal/informational theories, for example Dretske’s early work, assume that a neural activity
‘cow’ becomes a representation meaning COW if it is reliably caused by the presence of a cow, hence
reliably carries the information that (indicates) a cow is present. A recognized problem with this
approach is that on this theory representations cannot misrepresent. If ‘cow’ is sometimes caused by the
presence of a horse on a dark night, then it simply accurately represents the disjunction COW-OR-HORSE-ON-A-DARK-NIGHT (the disjunction problem) instead of misrepresenting the horse as a cow, as one would intuitively expect. A possible solution to this problem is to distinguish an error-free learning period, which establishes meaning, from an application period in which misrepresentation can occur. But
this distinction is arbitrary and has no basis in reality.
III.2.c Functional role semantics, for example Harman's, are conceived in the spirit of Sellars' 'theory theory' of concepts. Here the meaning of a representation is determined by the permissible inferences
that can be drawn from it or, in more liberal versions, how the representation is used more generally.
‘Cow’ means COW because it licenses inferences that are appropriate about cows. The idea can be
motivated by considering a calculator. If one inspected the wiring diagram, one could work out which
button or state corresponded to the number ‘3’, which to the process of addition, and so on, just by
determining how the states were related to each other. To be able to represent the number ‘3’, surely
one needs to have appropriate relations between that representation and the one for ‘2’, for the concept
of ‘successor’, etc. And maybe it is just the same for concepts of cows, bachelors, assassins, and so on.
One problem with the approach as a general solution is that any pattern of connections in the head can always license an interpretation of its states simply in terms of some numerical function. This problem can be avoided by not restricting the meaning-determining uses to the inside of the head, but letting them include interactions with the world, e.g. behaviour towards cows for a COW representation.
Another problem with functional role semantics is how learning anything new, by changing the
inferences one is disposed to draw, may change the meaning of all one’s representations (a
consequence called holism). As I am always learning, the meaning of all my representations must
therefore be changing from one moment to the next. Thus, I can never think the same thought twice.
Nor can any two people think the same thought, given their belief structures are different, if only
slightly. This makes it impossible to state useful generalisations concerning mental states and
behaviour. Much of psychology is thus impossible. How can we respect the difference between
changing one’s belief about a concept and changing one’s concept? It is also not clear how conceptual
role semantics can allow misrepresentation: because the meaning of a representation is determined by
its uses, how can a representation meaning one thing be used as if it meant something else? The
solution to both problems is to restrict which uses determine meaning, but there is no agreement on how to do this in general.
III.2.d Dretske and Millikan proposed teleosemantics as a theory of naturalising content. Whereas
causal/informational theories focus on the input relationship of how external conditions cause
representations, and functional role semantics focuses on the output side of what inferences a
representation enables, teleosemantics sees representations as midway between input and output and
focuses on function as determined by history. Representations do not represent simply because they
reliably indicate, or simply because they are used, but because of functions brought about by an
evolutionary or learning history. Because of an evolutionary history, a heart is an object with the
function of pumping blood. So certain patterns of neural activity may come to have the function of
indicating, for example, cows. The basic assumption is that there must be enough covariation between the presence of cows and the tokening of 'cow' for the token to guide cow-appropriate behaviour often enough to confer a selective advantage, without which the cow-'cow' causal link would not have evolved. The 'cow' token thereby acquires the function of indicating cows. Just as historically defined function allows a heart to both function and malfunction, a representation can malfunction, making misrepresentation possible. If a horse happens to cause a token naturally selected for indicating
cows, then the token misrepresents the horse as a cow. The token still means COW (disjunction
problem solved) because the causing of ‘cow’ by horses was not what established this causal link.
Although this theory seems to be the current leading contender, it is far from uncontested (see Neander, 2004). For example, intuitions are divided over whether history can be relevant to Intentionality; if a cosmic accident resulted in an atom-by-atom replica of you in a swamp, would not swamp-you have intentional states despite having no evolutionary or learning history? It also seems conceptually clear that a representation could be adaptive but not accurate, or true but maladaptive to believe; yet the distinction between accurate and adaptive cannot be drawn within teleosemantics.
III.3. Broad versus narrow content:
Let us say when Joe and I think ‘there is my mother’, the representational medium is identical;
say, the same pattern of activation in our brain (produced by the same mechanisms and with the same
dispositions to produce other relevant inferences – philosophers typically have Joe and me be
doppelgangers on parallel duplicate earths to ensure our inferences are really all the same in relevant
ways). The thought ‘There is my mother’ would misrepresent if I thought it of Joe’s mother on a dark
night. On the other hand, Joe’s thought ‘There is my mother’ would be true when it targets his mother
on a dark night. Same target situation, but I misrepresent and Joe does not. Thus (a common
conclusion goes), our contents must be different. Same vehicle in our head; thus (a common conclusion
goes), content (in general) is not constituted by what occurs in our head. There is an opposite intuition
too: Joe’s and my thought surely have the same content, namely ‘there is my mother’. To the extent the
content can be fixed by what happens in our head, by having the same vehicle state, the content is
called narrow. To the extent that the content can be fixed only by reference to the world as well as the
vehicle state, it is called broad or, equivalently, wide. People who believe content is in general narrow
are called internalists; those who believe content is in general broad are called externalists. The two
camps have been vigorously debating for some decades now.
Narrow and broad content do not refer to the fact that the content of our mental states can be
caused by external events, for example that perception of a tree can be caused by a tree. Internalists accept that content depends on the world in that sense. Similarly, internalists do not deny that thoughts can
be typed by external objects: A thought of a tree is the thought it is because it is about trees, which are
external objects in our world. It is just that internalists believe that the state of thinking e.g. ‘the tree is
green’ (as given by a certain vehicle state) would be the same mental act with the same content even in
possible worlds where trees did not exist. For example, according to an internalist but not an
externalist, a person could think the thought ‘the tree is green’ if he were in the Matrix and so there
were no real trees and never had been.
Classic arguments for externalism were provided by Putnam and Burge. Putnam argued that
before we knew that water was H2O we nonetheless meant H2O by the word 'water'. When we
christened water we meant ‘water is whatever stuff THIS is’. If we later saw some other transparent
fluid that was not H2O but behaved similarly and cried ‘Lo, some water!’ we would be wrong. But
nowhere in our head, at least before 1750, would we find anything specifying that we meant H2O by
‘water’. So content is not fixed exclusively by what is in the head. Similarly, Burge argued that when
we use words we defer to accepted usage in our linguistic community in determining meaning. If I call
a pain ‘arthritis’, what I mean by that depends on what my linguistic community means by the word
‘arthritis’. Hence what I mean by ‘arthritis’ depends on something outside my head, namely the use of
the word in my linguistic community.
Internalism has been defended against the standard arguments by pointing out all the examples
involve an indexical component: 'MY mother', 'THIS stuff', 'THIS community' and so on. Searle (e.g.
2005) argues an indexical thought has as part of its content that certain relations exist with respect to
the thinker of the thought. The specification of what those relations consist of is inside the head (e.g.
that ‘I’ refers to the thinker of the thought); hence, in a sense, the content is fixed by what is inside the
head. Nonetheless, the adequacy of a representation depends on both the specification in the head and
the context in determining whether the target has been matched. Perhaps it is just a matter of
terminology whether we say the content involves only the specifications in the head (narrow content)
or both the specifications and the context (broad content).
Externalism and internalism are motivated by different theories of content. On teleological theories, for example, content is necessarily broad because content is determined by a selectional history. Exactly the same neural wiring may constitute a mouse detector or a potoroo detector in different possible worlds in which it was selected for mapping its on/off state onto mice or potoroos, respectively. No amount of examination of the vehicle alone need answer the question of which it represented. Similarly, two different neural wirings may both be potoroo detectors given relevant
selectional pressures. Conversely, on a strict functional role semantics, content depends on the
inferences that could be drawn, so content depends only on what goes on inside the head.
III.4. Thoughts about the non-existent.
The attempts to naturalize representational content have almost exclusively focused on the use of
representations in detecting the state of the environment and acting on the environment. In other words
the analysis—in particular of causal and informational approaches—is restricted to cases of real,
existing targets. The teleosemantic approach can be extended to the content of desires since these
representations can also contribute to the organism's fitness in a similar way to beliefs. Papineau made
desires the primary source of representation. However, little has been said about how natural processes can
build representations of hypothetical, counterfactual assumptions about the world or even outright
fiction. One intuitively appealing step is to copy informative representations (caused by external states)
into fictional contexts, thereby decoupling or quarantining them from their normal indicative functions.
Although intuitively appealing, the idea presupposes the standard assumption of symbolic computer languages that there exist symbols (albeit with only derived intentionality) that can be copied while keeping their meaning. It remains to be shown how such copying can be sustained on a naturalist theory.
III.5 Implicit vs explicit representation.
A representation need not make all of its content explicit. According to Dienes and Perner (see
our 2002) something is implicitly represented if it is part of the representational content but the
representational medium does not articulate that aspect. This can be illustrated with bee dances to
indicate direction and amount of nectar. From the placement in space and the shape of the dance bees
convey to their fellow bees the direction and distance of nectar. The parameters of the dance, however,
only change with the nectar’s direction and distance. Nothing in the dance indicates specifically that it
is about nectar. Yet nectar is part of the meaning. It is represented implicitly within the explicit
representation of its direction and distance. Just as a bee dance is about nectar without explicitly
representing that fact, arguably a person ignorant of chemistry means H2O when she says ‘water’ even
though such a person does not explicitly represent the distinction between H2O and some other
structure.
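As a rough illustration (the class and function names below are hypothetical, chosen only for this sketch), the dance can be modelled as a vehicle whose parameters articulate only direction and distance; that the message concerns nectar is contributed by the system that consumes the dance rather than by any parameter of the dance itself.

```python
from dataclasses import dataclass

@dataclass
class BeeDance:
    direction_deg: float  # explicit: this parameter varies with the nectar's direction
    distance_m: float     # explicit: this parameter varies with the nectar's distance
    # Note: no field says 'nectar'; that part of the content is implicit.

def interpret(dance: BeeDance) -> dict:
    # The consumer of the dance supplies the implicit part of the content.
    return {"about": "nectar",
            "direction_deg": dance.direction_deg,
            "distance_m": dance.distance_m}

print(interpret(BeeDance(direction_deg=180.0, distance_m=90.0)))
```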
Bee dances only make the properties of distance and direction explicit. The English sentence
‘the nectar is 100 yards south’ makes explicit the full proposition that the location is predicated of a
certain individual, the nectar. Similarly, if a word ‘butter’ is flashed to a person very quickly, they may
form an active BUTTER representation, but not represent explicitly that the word in front of them has
the meaning butter (subliminal perception). The sentence ‘I see the nectar is 100 yards south’ or ‘I see
the word in front of me is butter' makes something further explicit, namely the fact that one knows the proposition. Explicit representation is fundamental to consciousness, as we discuss below.
IV.
Application in Sciences:
Despite foundational problems, research in psychology, functional neuroscience and computer science
has proceeded using the notion of representation. The tacit working definition is in most cases
covariation between stimuli and mental activity inferred from behaviour or neural processes.
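A minimal sketch of that working definition, using made-up numbers (the variable names and data are hypothetical): a unit or region counts as carrying a representation of a stimulus to the extent that its activity covaries with the stimulus's presence across trials.

```python
import numpy as np

rng = np.random.default_rng(0)
stimulus_present = rng.integers(0, 2, size=200)                    # 1 = stimulus shown on that trial
activity = 5 + 8 * stimulus_present + rng.normal(0, 2, size=200)   # noisy, stimulus-driven activity

covariation = np.corrcoef(stimulus_present, activity)[0, 1]
print(f"stimulus-activity correlation: {covariation:.2f}")
# A high correlation is taken, on this working definition, as evidence that the
# activity represents the stimulus.
```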
IV.1. Cognitive Neuroscience:
Cognitive neuroscience investigates the representational vehicle (neural structures) directly. The
intuitive notion of representation relies heavily on covariation. If a stimulus (including task instructions
for humans) reliably evokes activity in a brain region it is inferred that this region contains the mental
representations required to carry out the given task. Particularly pertinent was the success of finding
single cortical cells responsive to particular stimuli (single cell recording), which led to the probably
erroneous assumption that particular features are represented by single cells, parodied as "grandmother
cells". In the research programme investigating the Neural Correlates of Consciousness (often
abbreviated NCC), conscious contents of neural vehicles are determined by correlating verbal reports
with neural activities. Where the NCC are located, and whether any such thing exists, is an open and active area of research.
IV.2. Cognitive Psychology:
Cognitive psychology assumes that experimental stimuli and task instructions are mentally
represented. Theories consist of postulating how the representations are transformed. These
representations are modelled and task performance (errors or reaction times) predicted. The quality of
predictions provides the evidential basis of mental representations. For the investigation of longer
reasoning processes sometimes the introspective verbal report is taken as evidence for the underlying
mental representations. The enterprise is predicated on the notion that not only do people represent by
having mental states, but thinking, perception and memory in turn can be explained in terms of sub-personal representations. The implicit assumption is that sub-personal intentionality will be naturalised,
uniting cognitive psychology with the rest of the sciences.
One extensively argued issue is whether, or in what domains, thinking is symbolic or non-symbolic (normally, connectionist). A representation is symbolic if a token of it can be copied in
different contexts and it retains the same meaning (normally explained because it has the ‘same
shape’). The notion originates from the use of a central processor in normal serial computers. Because
different tokens of a symbol in different lines of program will only be processed when they pass
through the central processor, of course different tokens can all be treated the same way. It is not clear
that the same logic applies to the brain, for which there is no CPU. Nonetheless, Fodor has argued that
the symbolic style of representation is the only theory we have for why people think in systematic and
indefinitely productive ways.
Connectionist networks potentially have two types of qualitatively different representation
types: patterns of activation of units and patterns of weight strength between units. Given a teleological
theory of content, for example, a learning algorithm may bring it about that a pattern of activation maps
onto some external state of affairs so that the rest of the network can do its job. Then the pattern of
activation represents that state of affairs. Similarly, the learning algorithm may bring it about that a
pattern of weight strengths maps onto enduring statistical structures in the world so that the rest of the
network can do its job. Then the pattern of weight strengths represents the external statistical structures.
Psychologists naturally think in terms of activation patterns being representational because they
naturally think in terms of representations being active or not; correspondingly, they sometimes regard weights as non-representational. On a teleological story, however, weights are representational: they can misrepresent statistical structure in the environment, so they can represent it. The process by which
knowledge becomes embedded in weights in people is an example of implicit learning, the acquisition
of unconscious knowledge.
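A small sketch may make the two candidate vehicles vivid (the toy environment and learning rule below are hypothetical, not drawn from any particular model): a momentary activation pattern is about the current stimulus, while the slowly accumulated weights come to mirror an enduring statistical regularity in the environment.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_trials = 4, 1000
weights = np.zeros((n_features, n_features))

def sample_stimulus():
    """Toy environment with a stable regularity: feature 1 accompanies feature 0."""
    stimulus = rng.integers(0, 2, n_features).astype(float)
    if stimulus[0] == 1:
        stimulus[1] = 1.0
    return stimulus

for _ in range(n_trials):
    activation = sample_stimulus()                # activation pattern: the current state of affairs
    weights += np.outer(activation, activation)   # Hebbian-style accumulation

weights /= n_trials                               # weights: the environment's co-occurrence structure
print(np.round(weights, 2))                       # the 0-1 entry is elevated relative to other off-diagonal entries
```

On a teleological reading of this sketch, the weight pattern is the thing that could represent, and misrepresent, that co-occurrence structure.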
Cognitive approaches to perception can be contrasted with ecological and sensorimotor
approaches; these latter attempt to do without a notion of representation at all. The anti-representationalist style of argument often goes like this: we need less rich representations than we thought to solve a task; therefore representations are not necessary at all. The premise is a valuable scientific insight, but the conclusion is a non sequitur. To take a case in ecological psychology, while
people in running to catch a ball do not need to represent the trajectory of the ball, as might first be
supposed, they do represent the angle of elevation of their gaze to the ball. The representation is
brutally minimal, but the postulation of a content-bearing state about something (the angle) is necessary
to explain how people solve the task. There is a similar conflict between cognitive (representational)
and dynamical systems (anti-representational) approaches to the mind, e.g. development. For example,
how children learn to walk may be best understood in terms of the dynamic properties of legs getting
bigger, the muscles getting elastic in certain ways; nothing need change in how the child represents
how to walk. The debate has focussed people’s minds on a very useful research heuristic: Try to work
out the minimal representations needed to get a task done. Only when such a simple story has been
discredited should one increase the complexity of the content of the postulated representations.
IV.3. Artificial Intelligence:
The traditional approach (symbolic AI, or Good Old Fashioned AI, GOFAI) assumed that the
mind can be captured by a program, which is designed to have building blocks with representational
content (derived intentionality). The New Robotics is a reaction to trying to program explicitly all
required information into an AI, a strategy that largely failed. The new strategy is to start from the
bottom up, trying to construct the simplest sort of device that can actually interact with an environment.
Simplicity is achieved by having the device presuppose as much as possible about its environment by
being embedded in it; the aim is to minimise explicit representation where a crucial feature need only
be implicitly represented. In fact, the proclaimed aim of New Robotics is sometimes to do without
representation altogether (see Clark, 1997).
V.
Representation and Consciousness:
V.1 Content of Consciousness
According to the strong representationalist position all the contents of consciousness are just
the contents of suitable representations (e.g. Tye, 1995). This is an attempt to naturalise consciousness
by explaining it in terms of what may be a more tractable problem, representations. Others, like Block,
believe that conscious states can be qualitative in ways that are not representational; for example, orgasms, moods and pains are just states and not states about anything. A strong representationalist may
specify representational contents of these states (e.g. generalised anxiety is both about a certain state of
the body and cognitive system, and a representation that the world is dangerous). The sceptic replies
there is still something it is like to be in these states that is left out (i.e. qualia). A further position is that
there is nothing left out; qualia are identical to certain intentional contents, but those intentional
contents cannot be reduced or naturalised in other terms.
V.2
Access consciousness.
Block (see 1995) introduced the term access consciousness (in contrast to phenomenal
consciousness, considered next) to indicate consciousness as a matter of whether mental
representations are accessible for general use by the system. The contents of access consciousness are
commonly taken to be just the contents of certain representations. Access consciousness is frequently
tied to the distinction between implicit and explicit knowledge. Dienes & Perner (2002) explain why.
To return to the bee dance, the fact that direction and distance are predicates of nectar is not made
explicit in the bee dance. However, for referential communication and directing attention, explicit predication becomes crucial: in order to direct one's attention to an object one needs to identify it under one aspect and then learn more about its other aspects.
Whether predication and the ability to refer are sufficient is another question. When one is
conscious of some state of affairs one is also conscious of one's own mental state with which one
beholds that state of affairs, which suggests that one needs explicit representation of one's attitude
(explicit attitude). This criterion for consciousness goes hand in hand with the Higher Order Thought
theory of consciousness (see Rosenthal, 2002), according to which a mental state is a conscious state if
one has a higher-order thought representing it. Higher Order Thought theory motivates testing the
conscious or unconscious status of knowledge in perception and learning by asking subjects to report or
discriminate the mental state they are in (guessing vs knowing) rather than simply discriminating the
state of the world. Sceptics of unconscious mental states in psychology generally take the ability of a
person to make discriminations of worldly states of affairs to indicate the presence of conscious
knowledge. If a person flashed a word can discriminate between possible words that might have been
flashed at above chance levels, such sceptics include there was conscious perception. According to
Higher Order Thought theory such behaviour is perfectly compatible with all the perception being
unconscious. However, if the person performed above chance at such a discrimination while they
believed they were guessing (guessing criterion of unconscious knowledge) there would be evidence
for unconscious perception, because that would be evidence for a failure to represent one’s mental state
of seeing.
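The guessing criterion lends itself to a simple analysis sketch (the trial data below are invented for illustration): restrict attention to trials where the person reported guessing, and test whether discrimination on just those trials exceeds chance.

```python
from math import comb

# Hypothetical trial records: (correct?, reported attitude) from a two-alternative task.
trials = ([(True, "guess")] * 70 + [(False, "guess")] * 40 +
          [(True, "know")] * 55 + [(False, "know")] * 5)

guess_correct = [correct for correct, attitude in trials if attitude == "guess"]
k, n = sum(guess_correct), len(guess_correct)

# One-sided exact binomial test against chance performance (p = .5) on guess trials only.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"{k}/{n} correct while reportedly guessing, p = {p_value:.4f}")
# Above-chance accuracy here would count as evidence of unconscious knowledge on the
# guessing criterion: the person discriminates the world while failing to represent
# their own mental state of seeing.
```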
V.3.
Phenomenal consciousness.
Phenomenal consciousness concerns the subjective feel of conscious experiences.
Whether phenomenal consciousness can be reduced to the contents of any representation is
controversial. Some argue it can be. First-order representationalists (e.g. Tye, 1995) argue that the
relevant representations just need to have certain properties, in addition to being poised for general
access, like being fine-grained (i.e. containing nonconceptual content). Higher Order
Representationalists, like Carruthers, argue that subjective feel requires higher order representations
about the subjective nature of one’s first order mental state. This is required—according to
Carruthers—because first order representations only give a subjective view of the world but no
appreciation that the view is subjective, i.e., no feeling of subjectivity. For this feeling the higher order
state has to metarepresent the difference between what is represented by the lower level (the objective
world, albeit from the perceiver’s subjective point of view) and how it is represented as being
(subjective content). Since such metarepresentation is thought to develop over the first few years, this
carries the implication that children up to 4 years may not have phenomenal consciousness; an
implication used either to discredit higher-order thought theories of consciousness or as an inducement
to weaken the requirements for entertaining higher-order thoughts.
Dr Zoltán Dienes
University of Sussex
Prof Josef Perner
University of Salzburg
GLOSSARY
Broad content: representational content that cannot be fixed by properties of the representational
vehicle alone; features of its past or current context must be taken into account
Conceptual role semantics: the theory that the meaning of a concept arises from the inferences (and
perhaps other uses) the concept is disposed to take part in
Intentionality: Having the property of being about something
Metarepresentation: Representing a representation as a representation
Representation: Something that is about something else. It portrays the something else as being a
certain way, the representational content.
Symbol: A representation that can have copies made which keep the same meaning in different contexts
Teleosemantics: The theory that meaning is a certain sort of function, arising from an evolutionary or
learning history. Indicative representations (like statements or perceptions) have the function of
indicating; imperative representations (like orders or intentions) have the function of bringing about
their content.
RECOMMENDED READING
Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences,
18, 227-247.
Clark, A. (1997). Being there: Putting brain, body and world together again. MIT Press.
Crane, T. (2005). The mechanical mind, 2nd Edition. Routledge.
Cummins, R. (1996). Representations, targets, and attitudes. MIT Press.
Dennett, D. C. (1996). Kinds of minds: Towards an understanding of consciousness. Phoenix.
Dienes, Z., & Perner, J. (2002). A theory of the implicit nature of implicit learning. In R. M. French & A. Cleeremans (Eds.), Implicit Learning and Consciousness: An Empirical, Philosophical, and Computational Consensus in the Making? (pp. 68-92). Psychology Press.
Greenberg, M., & Harman, G. (2006). Conceptual role semantics. In E. Lepore & B. Smith (Eds.), The Oxford Handbook of Philosophy of Language. Oxford University Press.
Kim, J. (2006). The philosophy of mind. 2nd edition. Westview Press.
Neander, K. (2004). Teleological theories of mental content. In the Stanford Encyclopedia of
Philosophy.
Perner, J. (1991). Understanding the representational mind. MIT Press.
Rosenthal, D. M. (2002). Consciousness and higher order thought. Encyclopedia of Cognitive Science.
Macmillan Publishers Ltd (pp. 717-726).
Searle, J. (2004). Mind: A brief introduction. Oxford University Press.
Sterelny, K. (1990). The representational theory of mind: An introduction. Blackwell.
Tye, M. (1995). Ten problems of consciousness: A representational theory of the phenomenal mind.
MIT Press.