NEUROMYTHS
Science advances through trial and error. From observation, a theory is constructed, which other phenomena then confirm, modify, or refute. Another theory, complementary or contradictory to the previous one, is then created, and the process continues in this manner indefinitely. This rough advance of science is the only one possible, but it has its flaws: hypotheses that have been invalidated leave their mark on the mind. When such marks become prominent enough, “myths” take root: beliefs that science has demolished but that remain widely held and are relayed, by various media, into the public mind.
Neuroscience, of course, is also involved in this phenomenon. Some expressions in the English language confirm its involvement: the “number sense,” for example, derives from the research of a German anatomist and physiologist, Franz Joseph Gall (1758-1828). By analyzing the heads of living criminals and dissecting the brains of the deceased, Gall established the theory of phrenology: a particular talent was supposed to produce an outgrowth on the brain, which pushed on the bone and distorted the skull. By feeling the head, Gall boasted that he could distinguish the criminal from the honest man, a “math” person from a “literary” person. Although phrenology has long since been discredited, it is now known that certain areas of the brain are indeed specialized, some being more particularly associated with certain functions than others. Contrary to the regions Gall thought he had identified, however, these are functional specializations (such as image formation, word production, tactile sensibility, etc.), not moral characteristics like kindness, hope, or combativeness.1
Of course, science is not solely responsible for the emergence of such myths. It is not always easy to understand all the subtleties of a study's findings, any more than its protocols and other methodological choices. Moreover, human nature is often content with, or even delights in, quick, simple, and univocal explanations,2 which inevitably lead to faulty interpretations, questionable extrapolations and, all things considered, the genesis of false ideas.3
This chapter examines one by one the myths belonging to brain science; more time will be devoted to
those that can influence learning methods. For each myth, a historical look will explain how the false idea
came about, and then the current state of scientific research on the subject will be reviewed.
Paradoxically, some myths have been beneficial to education, in that they enabled it to diversify.
Nonetheless, the majority can bring about unfortunate consequences and must, for this reason among
others, be contested.
The Brain is More Adaptive in Your Early Years
“The early years are sometimes taken as if you planted a seed, and if you didn’t plant it at the right time,
you cannot harvest it, but if you did plant it at the right time, you are in good shape… I think that the
challenge in talking about the early years is that for some abilities the early years are formative, while for
others the early years are no more or less important than later years, and in that sense they are all
important for the educational opportunity.” (Gabrieli, 2003).
1
Gall had also presupposed the existence of areas suited to languages and arithmetic.
2
The mass media, whose influence in forming opinions is critical, use and abuse this tendency: because their discursive logic plays on that “human, all too human” need, they often represent the height of oversimplification.
3
Scientists are in no way impervious to this tendency. They are expected to be rigorous in their own field (even if that is not always the case), but, on subjects far from their research, they are, like every human being, subject to subjective and emotional influences.
If you enter the keywords “birth to three” into a search engine, you get an impressive number of websites explaining that a child's first three years are crucial for his or her future development and that practically everything is decided at this age. You will also find numerous commercial products designed to stimulate a young child's intelligence before he or she reaches the famous, “fateful” age of three. Some physiological phenomena that take place during brain development can indeed lead to the belief that the critical learning stages occur between birth and age three. This myth is exploited by certain policymakers, educators, toy manufacturers, and parents, who overwhelm their children with gymnastics for newborns and stimulating music played from tape recorders and CD players attached above the baby's bed.
What are the physiological phenomena, and the research, behind such a belief?
The basic component of information processing in the brain is the nerve cell, or neuron. A human brain contains about 100 billion neurons. Each one can be connected to thousands of others, which allows nerve information to circulate massively and in several directions at a time. Through the connections between neurons (synapses), nerve impulses travel from one cell to another and support skill development and learning capacity. Learning is thus the creation of new synapses, or the strengthening or weakening of existing ones. The brain is thus constantly changing its fine anatomy (though these changes are not visible to the naked eye!).
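The order of magnitude implied by these figures can be checked with simple back-of-envelope arithmetic. The sketch below only illustrates the text's numbers; the per-neuron connection count is an assumption, reading "thousands" as roughly 1,000 to 10,000.

```python
# Back-of-envelope estimate of the total number of synapses, using the
# figures quoted above: ~100 billion neurons, each connected to
# "thousands" of others (assumed here to mean 1,000-10,000 connections).
NEURONS = 100e9  # ~100 billion neurons, as stated in the text

for connections_per_neuron in (1_000, 10_000):
    synapses = NEURONS * connections_per_neuron
    print(f"{connections_per_neuron:>6} connections/neuron -> ~{synapses:.0e} synapses")
```

Under these assumptions the estimate lands between 10^14 and 10^15 synapses, which helps explain why changes in the brain's "fine anatomy" are invisible to the naked eye: they are spread across an astronomically large population of connections.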
Compared to that of an adult, the number of synapses in a newborn is low. From two months after birth, the synaptic density of the brain increases exponentially and comes to exceed that of an adult, peaking at around ten months. A steady decline follows until about age ten, when the “adult number” of synapses is reached, and a relative stabilization then occurs. The process by which synapses are produced en masse is called synaptogenesis; the process by which they decline is referred to as pruning. Both are natural mechanisms, necessary for growth and development.
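The trajectory just described (low density at birth, a peak around ten months, decline to the adult level around age ten, then stabilization) can be sketched numerically. The values below are purely hypothetical, chosen only to reproduce the qualitative shape, not measured data.

```python
# Hypothetical synaptic-density curve, normalized so the adult level = 1.0.
# Ages are in years; 0.83 years is roughly ten months, the reported peak.
# These values are illustrative only -- they are NOT empirical measurements.
ages = [0.0, 0.17, 0.83, 2.0, 5.0, 10.0, 20.0, 40.0]
density = [0.3, 0.8, 1.5, 1.35, 1.15, 1.0, 1.0, 1.0]

peak_age = ages[density.index(max(density))]
print(f"peak density at ~{peak_age} years (about ten months)")
print(f"adult level reached at age {ages[density.index(1.0)]}, stable thereafter")
```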
For a long time, science believed that the maximum number of neurons was fixed at birth and that, unlike most other cells, neurons do not regenerate. Each individual therefore seemed to lose neurons steadily, and, in the same way, nerve cells destroyed by a brain lesion did not seem to be replaced. Over the past twenty years or so, however, studies (Terry et al., 1987) have qualified these positions by revealing two previously unsuspected phenomena: not only do new neurons appear throughout life (neurogenesis), but, in some cases at least, the total number of neurons remains stable throughout life.
Be that as it may, synaptogenesis is intense in the early years of human life. And since learning seems to be connected to the creation of new synapses (intuitively, the idea is attractive), it is a short step to the conclusion that a child's early years are those in which he or she is most capable of learning. Or, in another version more current in Europe: very young children must be constantly stimulated during their first two or three years, so that their learning capacities are subsequently strengthened.
And yet, to date, not a single study has demonstrated a significant and unquestionable correlation between the number of modified synapses (created, strengthened, or destroyed) and learning processes. A consensus has certainly formed in the scientific community that any learning leads to a change in the junctions between neurons. But, however important this discovery may be, it does not by itself establish a causal link between synaptic development and learning.
An experiment conducted twenty years ago helped fuel this myth. Laboratory studies showed that synaptic density in rodents could increase when the subjects were placed in a complex environment, defined in this case as a cage shared with other rodents and containing various objects to explore. When these rats were subsequently given a maze-learning test, they performed better (and faster) than rats in control groups living in “impoverished” or “isolated” environments.4 The conclusion drawn was that rats living in “enriched” environments had increased synaptic density and were thus better able to perform the learning task.
All the ingredients of a myth then came together: a striking experiment, rather easy to understand (even if tricky to perform), and findings that match the expected outcome (stimulating environment, good learning). However, the experiment concerns laboratory conditions, far from real ones (are there rats in the wild living in an impoverished environment?),5 and it was conducted on rodents. Non-specialists took experimental data on rats, obtained with unquestionable scientific rigor, combined it with current ideas about human development, and concluded that educational intervention, to be more effective, should be coordinated with synaptogenesis; or that, during infancy, “enriched environments” save synapses from pruning, or even create new ones, thereby contributing to greater intelligence or at least to a higher learning capacity. This is a case of taking the facts of a pertinent study and assigning them a meaning that goes well beyond the evidence the original study presents.
Once again, the limits not to be exceeded are rather clear: there is little neuroscientific data, for humans, on the predictive relationship between synaptic density early in life and improved learning capacity; similarly, little data is available on the predictive relationship between the synaptic densities of children and of adults; and there is no direct neuroscientific evidence, in either animals or humans, linking adult synaptic density to a greater learning capacity. This does not mean that the plasticity of the brain in general, and synaptogenesis in particular, bear no relation to learning, but simply that new research is needed.
For further reading, the reader could usefully consult the work of John Bruer: The Myth of the First Three
Years (Bruer, 1999). The author was the first to contest this myth, which he presented as “rooted in our
cultural beliefs about children and childhood, our fascination with the mind-brain, and our perennial need
to find reassuring answers to troubling questions.” Bruer goes back to the 18th century to find its origin: it
was already believed that a mother's education was the most powerful force to map out the life and fate of
a child; the successful children were those who had interacted “well” with their family. In his book, Bruer
eliminates one by one the myths based on faulty interpretations of early synaptogenesis.
There Are Periods When Certain Instruction Is Essential
While young children do indeed undergo intense synaptogenesis, its influence on the adult brain is not yet known. Nevertheless, adults do seem to be less capable of learning certain things: anyone who starts to learn a foreign language late in life, for example, will very likely always have a “foreign accent” in that language. The same applies to learning a musical instrument: the virtuosity
4
Diamond, M. et al. (1987), “Rat cortical morphology following crowded-enriched living conditions,” Experimental
Neurology, Vol. 96, No. 2, pp. 241-247.
5
In the wild, rats live in stimulating environments (docks, pipes, etc.) and so presumably have exactly the number of synapses needed to survive there. If they are put in an artificially impoverished environment, their brains will have exactly the synaptic density appropriate for that environment; in other words, they will be just as “smart” as they need to be to live in a laboratory cage. If the same line of reasoning applies to human beings (which remains to be proven), then, since most children are raised in normally stimulating environments, their brains are precisely adjusted to their particular environments. Furthermore, research has shown that even children growing up in what would traditionally be defined as an impoverished environment (a ghetto, for example) may continue, over time, to excel in school and go on to earn degrees in higher education. Lastly, there would be too many factors to take into account in defining what an “enriched” environment should be for the majority of students. As a result, such results are, in their current state, unusable as far as education is concerned.
of a late learner will probably never equal that of someone confronted with the same instruction from age five. Do periods exist, then, in which certain tasks can no longer be learned at all? Or are there merely ages at which tasks are learned more slowly?
Although it was long believed that the brain loses neurons with age, new technologies have challenged this certainty. Terry and his colleagues showed that the total number of neurons in each area of the cerebral cortex is not age-dependent; age matters only when the number of “large” neurons is counted. Nerve cells shrink, which increases the count of small neurons, but the aggregate number remains the same. Other parts of the brain, like the hippocampus, have recently been found to generate new neurons throughout the lifespan. The hippocampus is, among other things, involved in spatial memory and navigation (Burgess, N. and O'Keefe, J., 1996). Intriguing research comparing London taxi drivers with other citizens suggests a strong relationship between the relative size and activation of the hippocampus, on the one hand, and a good capacity for navigation on the other. Similarly, there is a positive correlation between enlargement of the auditory cortex and the development of musical talent, and motor areas of the brain enlarge following intense training of finger movements. In this last case, changes in neuron network configuration brought about by the learning process could be measured using brain imaging from the fifth day of training, that is, after an extremely brief period.
The processes that remodel the brain (synaptogenesis, pruning, and the development and modification of neurons) are grouped together under one term: brain plasticity. The brain is said to be “plastic.” Numerous studies have shown that the brain remains plastic throughout the lifespan, not only in the number of neurons (see above) but also in the number of synapses. We must not lose sight of the fact that the acquisition of skills results not only from training and the strengthening of certain connections, but also from the pruning of certain others. It is thus necessary to distinguish between two types of synaptogenesis: the one that occurs naturally early in life, and the one that results from exposure to complex environments throughout the lifespan. In the first case, researchers speak of experience-expectant learning; in the second, of experience-dependent learning.
By way of example: grammar is best learned, that is to say fastest and most easily, when young (roughly, before age sixteen), but the capacity to enrich one's vocabulary improves throughout the lifespan.6 Sensitive-period learning processes, like grammar learning, correspond to experience-expectant phenomena: for learning to take place without excessive difficulty, the pertinent experience must ideally occur during a given lapse of time (the sensitive period). This type of experience-expectant learning is thus optimal during certain periods of life.
Learning processes that do not depend on a sensitive period, such as lexicon acquisition, are said to be experience-dependent phenomena: the period during which the learning experience occurs is not constrained by age or time. This type of learning can improve as the years go by, and that is in fact what happens. Similarly, while certain data show a general decline in cognitive capacities from age 20 to 80 (in tasks such as letter comparison, pattern comparison, letter rotation, arithmetic, reading, and memorization), by contrast, and in accordance with certain discoveries concerning the adult brain, there is a notable increase in some cognitive capacities until age 70 (with, however, a certain decline by age 80).
Are there critical periods, that is to say unique moments outside of which certain types of learning cannot be carried out successfully? In other words, can certain skills (even certain knowledge) only be acquired during a relatively short “window of opportunity,” which would close once and for all at a precise moment in brain development? This “critical period” concept dates back to experiments, relatively well known to the general public, conducted by the ethologist Konrad Lorenz in the 1930s. He had
6
Helen Neville, REF.
observed that young birds, at the time of hatching, became permanently attached to the most prominent moving object in their environment. Usually, the object in question is their mother, and Lorenz named this attachment phenomenon “imprinting.” By taking the place of the mother, Lorenz managed to get broods of fledglings attached to him, following him everywhere. The period that allows this attachment is very short, right after hatching, and it proved impossible to change the attachment object afterwards: the fledglings permanently followed the substitute instead of their mother. In such a case, the term “critical period” is apt: an event (or the lack of an event) during a specific period brings about a permanent, irreversible situation.
No critical period has yet been found for humans, which does not mean that there are none. We prefer to speak of “sensitive periods”: spans of time during which learning, in some specific field, is easier. The scientific community acknowledges that there are sensitive periods, particularly for language learning (which has several, some in adulthood). Additional research is needed before it can be determined whether education-system programs match the succession of sensitive periods. Brain imaging may bring new explanations of the biological processes linked to these periods.
Language learning provides good examples of such periods. Research has shown that, at birth, children can distinguish all the sounds of any language, even those of a language very far from that of their parents. But it is well known that, for example, Japanese adults have difficulty distinguishing the “r” sound from the “l” sound: “ra” and “la” are perceived as identical, even though any very young Japanese baby distinguishes between them. Sound perception is quickly shaped by the child's sound environment over the course of its first twelve months. By the end of its first year, the child no longer perceives sound differences to which it was not exposed; the ability to differentiate foreign sounds diminishes between the sixth and the twelfth month. During this time, the child's brain changes so that he or she can become a very competent speaker of the native language. Since the acquisition of the native sound repertoire is not an acquisition of new sounds but, quite the opposite, a “loss” of non-perceived (and therefore non-produced) sounds, it is possible to hypothesize that this process is accomplished by successive pruning of synapses. Setting aside the question of whether this loss is irreversible (important individual differences probably exist here; indeed, adults learning a new language sometimes succeed in speaking it without a foreign accent), we prefer in any case to speak of “sensitive periods,” among other reasons because what occurs is a loss of information, whereas “critical periods” would rather suggest a gain of information. Be that as it may, there is no doubt that the ability to reproduce the sounds of a language (phonology, accent) and the capacity to effectively internalize a grammar are optimal during childhood. Only the faculty of acquiring vocabulary (learning new words) seems to endure in the same terms throughout the entire lifespan.
The work of Piaget greatly influenced the organization of school systems over the last third of the 20th century. The basic Piagetian idea about development is the following: children go through specific periods of cognitive development and are not capable of learning to read or count until age six or seven. In the school systems of OECD countries, reading, writing, and arithmetic are accordingly not officially taught before this age. Piaget and his colleagues affirmed, among other things, that children come into the world without any preconceived ideas about numbers. Nevertheless, recent research on the workings of the brain has shown that children are born with an innate representation of numbers (Dehaene, 1997). The OECD PISA study (OECD, 2003, www.pisa.oecd.org) on the academic achievement of children confirms this new point of view. This is how a scientific theory is continually refined. It is not a matter of calling all of Piaget's findings into question: as he had sensed, there are well and truly sensitive periods. But it now seems that children are more gifted at birth than researchers long thought (Gopnik et al., 2005). This is why Piaget's theories must be put into perspective in light of current and future research.
We Use 10% of Our Brain
It is often said that humans use only 10 or 20% of their brain. Where did this myth come from? It is difficult to say. Some attribute it to Einstein, who supposedly replied in an interview that he used only 10% of his brain. Early brain research also supported the myth: in the 1930s, Karl Lashley explored the role of certain areas of the brain using electric shocks. Since many areas of the brain did not react to these shocks, he concluded that they had no function, which is how the term “silent cortex” came to be. This theory is now judged incorrect.
Faulty interpretations of the workings of the brain also fueled this myth. Today, thanks to imaging techniques, the brain is precisely described in terms of functional areas. For example, each sense corresponds to one or several primary functional areas: a primary visual area, which receives information perceived by the eye; a primary auditory area, which receives information perceived by the ear; and so on. Furthermore, several regions are linked to the production and comprehension of language. All these areas are sometimes described separately by physiologists, and the general public, remembering only these partial descriptions, falls under the impression that the brain functions area by area and that, on the whole, only a small region of the brain is active at each instant. Nevertheless, this is not what occurs. First, the primary areas are surrounded by secondary areas. For example, information from images perceived by the eye is sent to the primary visual areas and is then analyzed in the secondary visual areas, where the three-dimensional reconstitution of the perceived objects takes place. At the same time, information from the subject's memory circulates in the brain to recognize objects, while semantic information from the language areas attaches to it, so that the subject can quickly name the objects. Moreover, at any time, the brain areas that deal with posture and movement are in action under the effect of nerve signals from the entire body, allowing the subject to know whether he is sitting or standing, with the head turned to the right or the left, etc. A piecemeal description of the areas of the brain can therefore lead to a misinterpretation of how the brain works.
Another origin of the myth lies in the fact that the brain contains about ten glial cells for every neuron. Glial cells have a nutritional role and support the nerve cells, but do not themselves transmit information. Since, in terms of transmission of nerve impulses, only the neurons are recruited (roughly 10% of the cells comprising the brain), this is where the 10% figure comes from. But this vision of cell functions is rather simplistic: although the glial cells play a different role from that of the neurons, they are no less essential to the functioning of the whole.
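The arithmetic behind this version of the myth is straightforward: with ten glial cells for every neuron, neurons account for one cell in eleven, about 9%, loosely rounded up to the mythical 10%. A quick check:

```python
# With a 10:1 glia-to-neuron ratio, neurons make up 1/(1+10) of all
# brain cells -- about 9.1%, which the myth rounds up to "10%".
glia_per_neuron = 10
neuron_fraction = 1 / (1 + glia_per_neuron)
print(f"neurons: ~{neuron_fraction:.1%} of brain cells")  # neurons: ~9.1% of brain cells
```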
Today, all neuroscience data shows that the brain is 100% active. In neurosurgery, when it is possible to observe the workings of the brain in patients under local anesthetic (the cortex has no pain receptors), electric stimulation has revealed no inactive areas, even where no movement, sensation, or emotion is elicited. No area of the brain is completely inactive, even during sleep, and finding one would indicate a serious functional disorder. Similarly, neurological cases show that the loss of far less than 90% of brain tissue has serious consequences for the workings of the brain: no region of the brain can be damaged without causing physical or mental deficits. The examples of people who have lived for years with a bullet in the brain (or with some other trauma) do not indicate the existence of “useless areas.” If it is indeed possible, in some cases, to recover completely from such a shock, it is not because the lesions touched one of these “useless areas,” but rather thanks to the extraordinary plasticity described above: neurons, or networks of neurons, can take over from those that were destroyed (the brain reconfigures itself), and thereby overcome the resulting deficits.
Finally, a physiological argument backs up the demonstration: evolution does not tolerate waste. Like the other organs, and probably more than any other, the brain has been molded by natural selection. It represents only 2% of the total weight of the human body but consumes 20% of the available energy. Given this high energy cost, it seems quite improbable that evolution would have allowed the development of an organ 90% of whose structure was useless.
A Right Brain and a Left Brain
The brain is made up of neuronal networks (see above, description of the first myth). It has functional
areas that interact among themselves (see above, description of the second myth). Moreover, it is
composed of two hemispheres: a left and a right. Each hemisphere seems more specialized in certain
fields than in others, and, thus, you often hear strange statements, such as: “me, I’m more left-brained” or
“women have a more developed right brain,” etc. Can such remarks really be supported? Is there really a
right brain and a left brain? A quick analysis of the origin of these terms is needed to determine if they
correspond to realities or facts, or if it is, once again, a matter of questionable extrapolations of scientific
data.
First, it is important to know that the two hemispheres are not separate functional and anatomic entities:
nerve structures connect them together (the corpus callosum7); many neurons have their cell nucleus in
one hemisphere and extensions in the other. This fact alone should prompt reflection.
The “left brain” is described as the seat of rational thinking, intellectual thinking, analysis, and speech. It is also said to process numerical information deductively or logically. It dissects information by analyzing, distinguishing, and structuring the parts of a whole, arranging the data linearly. The left hemisphere is held to be the best equipped for tasks related to language (writing and reading), algebra, mathematical problem solving, and logical operations. Thus, people who preferentially use their “left brain” are believed to be rational, intellectual, and logical, with a good analytical sense; they tend to be mathematicians, engineers, researchers, etc.
In contrast to the “left brain,” the specialist in analytical thinking, the “right brain” is described as the seat of intuition, emotion, non-verbal thinking, and synthetic thinking, which allows representation in space, creation, and emotion. The “right brain” tends to synthesize the whole: it sees the forest but not the trees. It is the one that recreates three-dimensional forms, notices similarities rather than differences, and understands complex configurations; it is credited with recognizing faces and perceiving space. On this view, people who preferentially use their “right brain” are intuitive, emotional, and imaginative, and easily find their way around; they tend to engage in artistic and creative professions.
This “left brain/right brain” opposition originates in early neurophysiology research. Intellectual capacities were then often described in two classes: critical and analytical aptitudes on the one hand, creative and synthetic aptitudes on the other. One of the major doctrines of 19th-century neurophysiology associated each class with a hemisphere. In 1844, Arthur Ladbroke Wigan published A New View of Insanity: Duality of the Mind, in which he describes the two hemispheres of the brain as independent, attributing to each its own will and way of thinking; they usually work together, but in some diseases they can work against each other. This concept was popularized by, among other things, Robert Louis Stevenson's famous 1886 book The Strange Case of Dr Jekyll and Mr Hyde, which exploits the idea of a cultivated left hemisphere opposed to a primitive and emotional right hemisphere that easily loses all control. Paul Broca, a French neurologist, went beyond fiction to provide real functional results. Broca was the first to localize different roles in the two hemispheres. Between 1861 and 1863, he examined post-mortem the brains of more than twenty patients whose language functions had been impaired. In all the brains examined, he noticed lesions in the frontal lobe of the left hemisphere, whereas the right hemisphere was intact. He concluded that the production of spoken language had to be located in the front part of the left hemisphere. A few years
7
REF: see chapter 4
later, Wernicke, a German neurologist, completed Broca's theory of language. Like his French colleague, he examined post-mortem the brains of patients with language disorders. From these observations, Wernicke suggested that the capacity to understand language is situated in the temporal lobe of the left hemisphere. Thus, Broca and Wernicke associated the same hemisphere of the brain, the left, with two essential components of language processing: comprehension and oral production.
Until the 1960s, observations regarding the lateralization of language (language is in the left hemisphere) were based on post-mortem studies of patients with brain lesions of varying locations and severity. Some neurologists claimed that language was not completely lateralized, arguing that the absence of lesions in the right hemisphere did not mean that it played no role in this context: the presence of lesions only on the left side could be coincidental. Proof of the pertinence of this intuition was provided by studies on “split-brain” patients, whose corpus callosum has been severed in order to stop epileptic attacks from spreading from one hemisphere to the other. Even though the primary goal of the operation is to reduce epileptic fits,
researchers can, on these patients, study the role of each hemisphere. The first studies of this type were
conducted in the 1960s and 1970s (the Nobel Prize winner in Medicine Roger Sperry and his team from
the California Institute of Technology played a dominant role here). Sperry and his team succeeded in
supplying information to a single hemisphere in their “split-brain” patients, asking them, for example, to
use each hand separately to identify objects without looking at them. To understand this
experimental protocol, it must be understood that basic sensory and motor functions are divided between
the two hemispheres of the brain in a crossed fashion: the left hemisphere receives (almost) all sensory
information from (and controls movements of) the right side of the body, while the right hemisphere receives
(almost) all sensory information from (and controls movements of) the left side of the body. Therefore,
sensory information perceived by the right hand, when it feels an object, is received in the left hemisphere
and vice versa. When patients touched an object with their right hand, they could easily name the object.
When the object was touched with the left hand, they could not name it. Proof that the left hemisphere is
the seat of principal language functions was thus produced.
This unequal localization of language functions launched the idea that the left hemisphere is the verbal
one, the right the non-verbal one. Since language has often been perceived as the noblest function of the
human race, the left hemisphere was declared “dominant.”
Other experiments with the same type of patients clarified the role of the right hemisphere. A video by
Sperry and Gazzaniga about the split-brain patient “W.J.” shows one of the most surprising
demonstrations of the superiority of the right hemisphere for spatial vision. The patient was given several
dice, each with two red sides, two white sides, and two sides with alternating white and red diagonal
stripes. The task of the patient was to arrange the dice according to patterns presented on cards. The
beginning of the video shows “W.J.” quickly arranging the dice in the required pattern using his left hand
(which is controlled, remember, by the right hemisphere). However, he has great difficulty completing the
same task using his right hand (which is controlled by the left hemisphere). He is slow and moves the dice
indecisively, but once his left hand intervenes, he becomes quick and precise again. When the
researchers hold back his left hand, he again becomes slow and indecisive. Other research by Sperry also
showed the domination of the right hemisphere in spatial vision processes. This role was then confirmed
by clinical case studies. Patients suffering from lesions in the right hemisphere were not able to recognize
familiar faces; other patients had difficulty with spatial orientation.
Some patients with lesions in the right hemisphere showed defects in identifying emotional intonation of
words and recognizing emotional facial expressions. Behavioral studies back up the clinical studies:
prosody is best perceived if the stimuli are received by the left ear (and the information goes to the right
hemisphere); images seen by the left visual field provoke a greater emotional reaction. It was also
deduced from this that the right hemisphere was also specialized in the processes related to emotions.
In 1970, the psychologist Robert Ornstein, in The Psychology of Consciousness, hypothesized that
“Westerners” use mainly the left half of their brain: they have a well-trained left hemisphere, due to their
focus on language and logical thinking. However, they neglect their right hemisphere, and therefore, their
emotional and intuitive thinking. In short, Ornstein associates the left hemisphere with the logical and
analytical thinking of “Westerners,” and the right hemisphere with the emotional and intuitive thinking of
“Easterners.” As a result, the traditional dualism between intelligence and intuition found a physiological
origin based on the difference between the two hemispheres of the brain. Beyond the ethically
questionable aspect of Ornstein’s theories, this idea resulted from an accumulation of
misinterpretations and false assertions about previous scientific findings.
Another rather widespread model on the subject stipulates that the left hemisphere tends to process rapid
changes and analyzes the details and characteristics of stimuli, while the right processes their simultaneous
and overall characteristics. This model, without scientific foundation, remains entirely
speculative. As a result, from the differences between the verbal
hemisphere (the left) and the non-verbal hemisphere (the right), a constantly growing number of abstract
concepts and relationships between mental functions and hemispheres has appeared on the neuromyth
market. Furthermore, all these extrapolations are moving further and further away from the scientific
findings.
Then, little by little, the two hemispheres were no longer described as two ways of thinking, but as
revelations of two types of personality. The concept of “left and right brain thinking,” together with the
idea of a dominant hemisphere, led to the notion that each individual depended predominantly on one
hemisphere. Cognitive styles were then created: a rational and analytical person is “left-brained;” an
intuitive and emotional person is “right-brained.” These cognitive styles, relayed among others by certain
media (periodicals, “self-knowledge” books, conferences, etc.), became very popular and raised big
questions relating to their application in education: was it necessary, according to the characteristics of the
learner in terms of cerebral lateralization, to imagine teaching methods that are more effectively adapted to
the use of one or other of the hemispheres? Were the school programs well adapted to a teaching method
that uses the entire brain or, with their focus on arithmetic and language, did they concentrate too much
on the “left brain?”
The idea that western societies focus on only half of our mental capacities (“our left brain thinking”) and
neglect the other half (“our right brain thinking”) became widespread. Western education systems jumped
on the bandwagon. Renowned didacticians, such as E.P. Torrance or M. Hunter, recommended schools
change their teaching methods according to the dominant hemisphere concept. Hunter claimed that
educational programs were principally made for “left brains.” Torrance asserted that schools favor left
brain-dependent activities, such as sitting still or learning algebra, while favoring the right
hemisphere would mean allowing students to stretch out, learn geometry, etc. These remarks led to the
methods that engage the two hemispheres, some even going so far as to reinforce activities related to the
right hemisphere. An example of these new methods is “show and tell.” Instead of merely reading texts to
the students (left hemisphere action), the teacher also shows images and graphs (right hemisphere
actions). Other methods use music, metaphors, role-playing, meditation, drawing, etc., to activate the
synchronization of the two hemispheres. These methods enriched educational practice by
diversifying it. Nevertheless, they are based on a scientific misinterpretation, in that the two halves of
the brain cannot be so clearly separated.
Indeed, no scientific evidence indicates a correlation between the degree of creativity and the activity of
the right hemisphere. A recent analysis of 65 brain-imaging studies on the processing of
emotions highlights that this task cannot be associated exclusively with the right hemisphere. Similarly,
no scientific evidence validates the idea that analysis and logic depend on the left hemisphere or that the
left hemisphere is the special seat of arithmetic or reading. Conversely, Stanislas Dehaene
(1997) found that the two hemispheres are active when identifying Arabic numerals (e.g. 1 or 2 or 5). In
addition, other recent data establish that, when reading processes are analyzed at the level of smaller
components, subsystems of the two hemispheres are activated (e.g. decoding written words or recognizing
sounds for the higher-level processes, such as reading a text). In fact, even a capacity associated in
essence with the right hemisphere, encoding spatial relationships, proves to be within the competence of
both hemispheres, but in a different way. The left hemisphere is more skillful at encoding “categorical”
spatial relationships (e.g. high/low or right/left), while the right hemisphere is more skillful at encoding
metric spatial relationships (i.e. continuous distances). Moreover, brain imaging has shown that, even in
these two specific cases, areas of both hemispheres were activated and that these areas were working
together. Even more surprising, researchers found that the dominant hemisphere for language is not
systematically linked to handedness, as was once thought. Indeed, a very widespread idea is that “right-handed
people have their language on the left, left-handed people on the right.” And yet, 5% of right-handed
people have the main areas related to language in the right hemisphere and 30% of left-handed people
have them in the left hemisphere (REF ?).
Thus, based on the latest studies, scientists think that the hemispheres of the brain do not work
individually, but together, for all cognitive tasks, even if there are functional asymmetries. The brain is a
highly integrated system; it is rare that one of its parts works in isolation. There are some tasks, such as
recognizing faces and producing speech, that are dominated by a given hemisphere, but most require
that the two hemispheres work at the same time. In light of these notions, the use of “left brain” and “right
brain” concepts is improper. Even if, as previously mentioned, such concepts, though for the most part
incorrect, did diversify educational methods (a collateral benefit…), falling into the trap of classifying
students or cultures according to a supposedly dominant hemisphere is not only highly questionable
scientifically and potentially dangerous socially, but also strongly debatable (to say the least) ethically. It
is thus important to avoid such mistakes.
A Male Brain is Different than a Female Brain
The 2003 PISA study8 reveals gender-related learning differences in most countries. For example, boys do
better in math. In recent years, works claiming to be inspired by scientific findings have also
appeared, purporting to show that men and women think differently and that this distinction is due to
different brain development.9 Which of these claims actually come from real research? Can there be a “feminine brain”
and a “masculine brain”? Is it desirable to propose a teaching style specialized according to gender?
Research about the brain reveals functional and morphological differences between the male and female
brain: the male brain is larger; during language tasks, the language areas are more strongly activated in
the female brain. But determining what these differences mean is extremely difficult. Currently, no study
has shown gender-specific processes when building up neuronal networks during learning. Additional
research is needed.
Even the terms “feminine brain” and “masculine brain” correspond more to a cognitive “way of being”
than to a biological reality. Baron-Cohen, who used these expressions to describe autism and related
disorders (Baron-Cohen, 2003), affirms that men are more “systematic” (better able to understand mechanical
systems) and women are better communicators (better able to communicate with and understand others). He then
suggests that autism can be perceived as an extreme form of the “masculine brain,” but does not maintain
that men and women have radically different brains, or that autistic women have a masculine brain; he
tends to employ the terms “masculine and feminine brain” to refer to particular cognitive profiles. These
8
REF
9
Why Men Don’t Listen and Women Can’t Read Maps: How We’re Different and What to do About It by Allan and
Barbara Pease, published in 1999 (first edition), could be quoted.
references are a little unfortunate, in that they contribute to spreading false ideas concerning the workings
of the brain.
Let us now suppose it were truly established that, on average, a girl’s brain makes her less capable of
learning math. Would that be enough to justify a specialized education? If the goal of education is to
produce over-specialized human beings, then the question may arise. But if its most important role is
to create citizens with a basic culture, the nature of the debate changes, and such a finding would lose
its relevance and its meaning in terms of educational policy. Even granting that such differences
exist and that one managed to prove it, the odds are that they would be minimal and, what is more,
based on averages. Indeed, individual variations may well be so great that it would be
impossible to know whether a girl, taken at random, will be less capable of learning than a boy
taken at random, etc.10
(add a § here about the Vancouver experiments)
Multilingualism Myths
In this day and age, half of the world population speaks at least two languages and multilingualism is
considered an asset. Nevertheless, it was believed for a long time that learning several languages is
problematic, and there are a few signs remaining from this superstition whose foundation is, to say the
least, dubious. Most false ideas about this subject are based on the representation of language in the brain.
A first myth is that the more one learns a new language, the more one loses the other. Another false
representation imagines that two languages in the brain are located in two separate areas without any
contact points. It was then believed that knowledge acquired in one language could not be transferred to
another. From these ideas, it was inferred that the simultaneous learning of two languages during infancy
could create a mixture of the two languages in the brain, which would slow down the development of the
child: the native language had to be learned “correctly” before beginning to learn another language.
These false ideas come from a combination of factors. Since language is an important cultural and
political entity, it was employed in numerous arguments, including brain research findings, to favor one
“official” language, to the detriment of others. A few medical observations also share the responsibility:
because some patients, after head trauma, completely forgot one language but not another at all, the idea that
languages occupy separate areas in the brain spread automatically. Studies conducted at the beginning
of the 20th century found that bilingual individuals had an inferior intelligence.11 These works had a
biased protocol, in that they primarily involved migrant children living in difficult cultural and social
conditions, for the most part undernourished. It would have also been necessary to take into account
that these children had started learning the language of their host country around the age of 5 or 6, or even
later, and that, not yet having a strong command of said language, they had problems learning other
subjects. As a result, these studies compared the intelligence of monolingual children from native (and
well-off) families to that of multilingual children from underprivileged environments whose family
knowledge of the dominant language was a strong social handicap. No scientific research is
inevitably “pure,” and it is vital that the conditions of an experiment always be carefully analyzed;
unfortunately, epistemological questions are difficult.
10
So far, only one individual has been awarded the Nobel Prize in two scientific disciplines: Marie Curie (Nobel
Prize in Physics in 1903, shared with Pierre Curie and Henri Becquerel, and Nobel Prize in Chemistry in 1911).
Let’s imagine a system based on averages, and in which little girls and boys would not have equal access to the
sciences…
11
The term “intelligence” must be used in moderation. It does not have a real scientific definition.
Recent studies revealed an overlapping of language areas in the brain of people who have a strong
command of several languages.12 This point could be twisted in order to confirm the myth according to
which the brain has limited resources (in volume) to store information relating to language. However,
other studies on bilingual subjects showed the activation of distinct areas a few millimeters apart when these
people described what they had done that day in their native language, then in a language learned much later
(Kim, 1997). The question of “language areas” in multilingual individuals has thus not yet been resolved.
It is wrong to claim that the strong command of one’s native language is weakened when a second
language is learned. The numerous multilingual experts are living proof of that. Many studies found that
students who learn a foreign language at school do not get weaker in their native language. They tend to
advance in both of them.13
The knowledge acquired in one language would not be accessible (or transferable) in another: this myth is
probably the most counter-intuitive of all. Anyone who learns a difficult concept in one language, for
example evolution, can understand it in another language. Any inability to explain the concept
is most likely due to a lack of vocabulary rather than a loss of knowledge. Experiments found that the
more knowledge is acquired in different languages, the more it is stored in areas far away from the area
reserved for language: it is not only preserved in the form of words but also in other forms, for
example images. Multilingual individuals sometimes no longer remember what language they learned
certain things in: they can forget if they saw a film in French, in German, or in English.14
The idea that you have to first speak your native language well before learning a second language implies
that the languages must be learned separately. However, studies have shown that children who master two
languages understand the structure of the language better and apply it in a more conscious way.
Therefore, multilingualism enables acquiring other competences related to language. These positive
effects are clearer when the second language is acquired early; a multilingual education does not lead to a
delay in development. It is true that, sometimes, very young children confuse languages. But in most
cases, this phenomenon later disappears.15
So far, theories on bilingualism and multilingualism are based mainly on cognitive approaches. Future
school programs on language learning should rely on successful examples of teaching practices.
Additional research on the brain is needed to discover possible periods favorable to language learning
(sensitive periods as described above).
Memory Myths
Memory, an essential function in learning (can one learn without memory?), is also a privileged subject of
fantasies and false ideas. “Improve your memory!” “Increase your memory capacity!” “How to Get an
Exceptional Memory Fast!” These are some examples of advertising slogans for books or pharmaceutical
products. During exam time, these types of slogans are heard with increased insistence. Do we know
12
The conditions of the creation of such an overlapping are controversial. One theory stipulates that the areas
reserved for languages overlap when the languages are learned at a young age; when the second language (or the
others) is learned late, there is no overlapping. Another theory affirms that an overlapping appears when the two
languages are mastered.
13
Studies conducted in 1990 on Turkish immigrant children in the Federal Republic of Germany found that the
number of mistakes made by these children diminished in both Turkish and German, provided that they followed
regular schooling.
14
Note about storing and reproducing here.
15
If such is not the case, that can be due to a defect in language acquisition (poor differentiation of sounds, for
example). The study of a second language could then represent an additional burden and aggravate the delay, if the
primary defect is not recognized and treated.
enough now to understand the processes and to envisage the creation of products and methods that
improve memorization? Do we need the “same memory” today as we did fifty or a hundred years ago,
since techniques have evolved and professions have changed? Are there different memories: visual,
lexical, emotional? Finally, do learning methods still use memory the way they did fifty years ago?
In recent years, research on memory processes has advanced. We now know that memory does not
correspond to a single type of phenomenon and that it is not located in only one part of the brain. However,
contrary to popular belief, memory is not infinite. Infinite memory is scientifically impossible because the
information is stored in neuronal networks and the number of these networks is finite (even if it may be
enormous). No one can hope to memorize the entire Encyclopedia Britannica. Research has also found
that the capacity to forget is necessary for a good memorization. The case of a patient followed by the
neuropsychologist Alexander Luria is rather enlightening: he had a memory that seemed infinite but
no capacity to forget; this patient was incapable of holding a steady job, except as a “memory
champion.” It seems that the forgetting rate of children is the optimal rate to build up an efficient
memory. (Anderson, 1990)
And what about those people who have an almost photographic visual memory, very good at memorizing a
long list of numbers drawn at random, capable of simultaneously playing several games of chess,
blindfolded?... Researchers in neurology attribute these performances to specialized ways of thinking,
rather than to a type of visual memory. DeGroot (1965) took an interest in the great chess masters,
subjecting them to experiments where the layout of the chessboard was briefly shown; these excellent
players then had to recreate the layout of the pieces, which they succeeded in doing perfectly, except
when the layout shown had no chance of happening during a real game of chess. The ability of the great
players to recreate the layout of the chessboard was not therefore due to a visual memory, but rather a
capacity to mentally organize the information of a game that they know perfectly (conclusion proposed by
DeGroot). Thus, the same stimulus is perceived and understood differently according to the knowledge
the subject has of the situation.
Nevertheless, some people do seem to truly have an incredible visual memory, which keeps an
image practically intact. We then speak of “eidetic memory.” These people can, for example,
spell out an entire page written in an unknown language that they saw very briefly, a little as if they had
taken a picture of the page. However, the eidetic image is not formed in the brain like a picture; it is not a
reproduction, but a construction. It takes time to form it; those individuals with this type of memory must
look at the image for at least three to five seconds to be able to examine each point. Once this image is
formed in the brain, the subjects are able to describe what they saw, as if they were looking at what they
describe. By contrast, normal subjects (without eidetic memory) are more hesitant in their description. It
is both interesting and unsettling to know that a larger proportion of children than adults seem to possess
an eidetic memory (Haber and Haber, 1964), as if learning, or age, weakens this capacity. Haber and
Haber also showed that 2 to 15% of primary school children have an eidetic memory. Leask and his
colleagues (1969) found that verbalization while observing an image would interfere with the eidetic
capture of the image, thus giving a possible explanation for the loss of eidetic memory with age. S.M.
Kosslyn (1980) also tried to explain this negative correlation between visual memorization and age.
According to his studies, adults can encode information using words, but children have not yet finished
developing their verbal aptitudes. There is still a lack of scientific evidence to confirm or contradict this
theory. Brain imaging studies are needed.
There are a great number of techniques to improve memory, but each acts on a particular type of memory
only, whether through mnemonic devices, repetition of the same stimulus, the creation of concept
maps (giving meaning to things that do not necessarily have any in order to learn them more easily), etc.
Joseph Novak studied concept maps extensively16 and noticed, among high school physics students, a significant
increase in their ability to solve problems thanks to the use of concept maps. However, this theory still
lacks brain imaging studies to define the brain areas activated during these different processes. It
was nevertheless observed that, depending on whether or not the subject was a novice in the field
concerned, different areas of the brain were activated.17
Numerous neurological studies are thus still needed to understand how memory works. Considerable
individual diversities exist, and one individual, throughout the lifespan, will use his/her memory
differently depending on age. What science has concretely confirmed is that physical exercise, an active
use of the brain, and a well-balanced diet, including fatty acids, help develop memory and reduce the risk
of degenerative diseases.
Questions relating to the use of memory in current teaching methods (and, particularly, to the critical role
played by memory in evaluation/certification systems) will in all likelihood have to be reconsidered in
the future, in light of new neuroscientific discoveries. Many programs bring memory more into play than
comprehension. The answer to the question “Is it not better to learn to learn?” certainly exceeds the
boundaries of neuroscience; but be that as it may, it must be asked, no matter the findings of future
studies regarding the workings of the brain.
16
A recently published article in Cell Biology Education (Novak, 2003) summarizes his research.
17
This confirms a number of other observations regarding assessment and the way in which it is reflected in the
brain structures (see REF, chapter 4 ff).