Paying Attention:
ADHD and Our Children,
Inside and Out.
By
Matthew Carter, MFT
The American Psychiatric Association defines attention-deficit/hyperactivity disorder
(ADHD) as a “persistent pattern of inattention and/or hyperactivity-impulsivity that is more
frequently displayed and more severe than is typically observed in individuals at a comparable
level of development”1. ADHD is further divided into three subtypes: predominantly
hyperactive-impulsive type, predominantly inattentive type, and combined type. According to
the APA, ADHD primarily affects school-age children, approximately 3%-7% of all children in
the U.S.1, with approximately 30-50% retaining the disorder as adults2. Not only are these
numbers cause for concern, they are increasing: there has been a three- to four-fold increase in
diagnosis of ADHD since the late 1980s3.
Undoubtedly ADHD is a serious problem, though ADHD is largely, and suspiciously, an
American problem. For every two hundred and fifty U.S. children diagnosed with and treated for
ADHD, only one child would similarly have been diagnosed and treated in all of Germany,
England, France, and Italy combined4. In England alone, the rate of clinical diagnosis is
estimated as only about 1 in 3000 children, or 0.3%5. ADHD as a diagnosis is virtually unknown
in Japan6, and many other countries. Clearly ADHD has been woven into the cultural fabric of
our nation, and our nation in particular. Or is it the other way around? In this paper I will explore
how Attention-Deficit/Hyperactivity Disorder relates to, and in many seemingly conspicuous
ways parallels, socio-historical developments within the country that gave it birth. I will propose
that U.S. cultural values and expectations not only exposed the so-called problem of ADHD, but
first helped create it. I will further argue that the label ADHD is not so much a road sign to a
mental disorder, but more a road sign to a social one. Lastly, I will suggest that if we are to have
any hope of reversing this growing problem, we as helping professionals need to turn our
attention outward, to the social matrix that sustains and reinforces it.
So what is this thing called ADHD, and where did it come from? (Or has it always been
there, lying in wait to be “discovered”?) I am going to go out on a limb and say that there have
always been overly energetic, rambunctious children. I was one of them. So when did this
become a mental illness? That ball seems to have gotten rolling around the turn of the century with
the 1902 lectures of George Fredrick Still to the Royal College of Physicians in England, in
which Still presented twenty children from his clinical practice who exhibited poor impulse
control, or what he called “inhibitory volition.”7 Still proposed that these children shared a basic
“defect in moral control,” which he related to a biological defect inherited from some sort of
injury at birth. Of course, the roots of Still’s work, and the oh so American diagnosis of ADHD,
go back further than 1902, to the sociohistorical context that made it all possible.
Nearly 60 years earlier, in Philadelphia, a group of thirteen psychiatrists organized and
established the Association of Medical Superintendents of American Institutions for the Insane,
the forefather of the American Psychiatric Association8. At the time, a psychiatrist’s job was
largely “an institutional and legal task,”9 with most psychiatrists working or training in asylums,
dealing in “madness.” American psychiatry was then, as it is now, grounded in the physiological
model of illness, emphasizing organic brain damage, heredity, and moral degeneracy as the
causes of mental illness10.
As the 19th century progressed, so did the “business” of madness. The diagnostic criteria
for insanity significantly broadened, while new mental disorders such as hysteria and
neurasthenia were being “discovered” and increasingly diagnosed10. What Hippocrates defined
as a “disease of the womb” over two thousand years prior was suddenly reborn under Victorian
era physicians (like Jean-Martin Charcot) as a nervous disorder, purportedly caused by sexual
repression, perverted habits of thought, and/or idleness. This new hysteria was diagnosable by
any number of symptoms, including faintness, nervousness, insomnia, fluid retention, heaviness
in abdomen, muscle spasm, shortness of breath, irritability, loss of appetite for food or sex, and a
“tendency to cause trouble”11. Of course, what also hadn’t changed in some two thousand years
was the fact that this new “disorder” primarily affected those people born with a uterus. In 1859,
a physician claimed that a quarter of all women suffered from hysteria12.
While some Americans were enjoying the spoils of the Industrial Revolution (roughly
1760-1830) and the subsequent economic expansion of the mid- to late nineteenth century, the
gap between
the social classes was widening, as was the power differential between men and women,
especially amongst the upper and middle classes. Women in Victorian America (1837-1901)
were increasingly restricted, desexualized, and kept from positions of power.10 Besides
domesticity, women were expected to be pious, pure, and submissive to men,12 forcing many
educated and politically motivated women into constricted and powerless lives.10 Without a
voice, without an outlet for the anger and angst, despair and disaffection that the Victorian
woman must have felt, it’s no surprise that these sentiments would, as Philip Cushman (1995)
writes, “show up in forms that were accessible within the Victorian horizon of understandings: as
somatic symptoms. Physical symptoms were one of the few avenues of expression available to
women within the Victorian terrain.”
The same sociopolitical milieu that limited those without a voice to psychosomatic
expression, also limited American physicians to biological explanations. That was their frame of
reference. A generation later (1920s), Freud would understand hysterical symptoms as
“symbolic representations of a repressed unconscious event, accompanied by strong emotions
that could not be adequately expressed or discharged at the time.”13 Although both Freud and his
American counterparts situate the problem within the individual (in this case, within the woman),
for Freud, hysteria was rooted in repressed emotions, not biology, stemming from unintegrated
trauma. Freud’s theory spoke to the social and situational nature of emotional distress, albeit in
individualistic, intrapsychic terms. American physicians, on the other hand, explained the
emotional distress these women were experiencing as neurological dysfunction, likely from a
genetic defect. By explaining what was largely a product of social disorder in terms of mental
disorder, by placing the impetus for change on the individual, rather than sociopolitical
institutions and conditions, these doctors unknowingly became part of the very problem they
were trying to treat. These women needed a voice; instead they were told they were ill (and
likely born that way). In doing so, these doctors overlooked and unwittingly exonerated the
sociopolitical structures that made them so, while further internalizing pathology. The symptoms
of hysteria were a call for help, a sign of times that needed changing. Unfortunately, not many
listened. Which brings us to neurasthenia.
Neurasthenia, as a diagnostic category, was developed in 1869 by American neurologist
George Miller Beard. Neurasthenia, like hysteria, was characterized by a wide range of
symptoms, including “general lassitude, irritability, lack of concentration, worry, and
hypochondria,”14 which Beard believed resulted from a deficiency in nervous energy, or
“nervous exhaustion.”10 Beard, looking across the ever-expanding, ever-accelerating social
terrain of modern America, believed this “nervous exhaustion” to be a result of the stresses of
urbanization and the pressures of an increasingly competitive business environment.15 He also
saw it as affecting men more than women,16 although women were considered particularly
vulnerable, due to their “weaker nervous systems.”17 While initially a disease of the upper
classes, neurasthenia rapidly spread throughout the social classes,18 so much so that by the turn
of the century neurasthenia was being described as “the fashionable disease.”17 Across the
Atlantic, neurasthenia was being referred to as “the American Disease.”18
America post-Industrial Revolution was ambitious and energetic, and expected as much
from its citizens. With the U.S. expanding economically, as well as geographically, the country
needed its citizens to have initiative and be hard working. That’s what was valued, that’s what
was promoted. Seen in this light, it’s no surprise that the absence of these traits would come to be
considered an illness. To his credit, Beard recognized the psychosocial nature of neurasthenia,
but like his contemporaries, he too located the problem within the individual, and looked to treat
it there. Beard reasoned that since neurasthenia was an exhaustion of nervous energy, those
afflicted needed a recharge or a jumpstart. Keeping with the belief of the time that electricity was
vital to proper functioning of the nervous system, 16 Beard (not coincidentally a friend and
collaborator of Thomas Edison) treated his patients with low-voltage electricity administered
directly to the body, what he called “electrotherapy.”10 This “treatment” remained popular even
after Beard’s death in 1883, although serious doubts about the efficacy of electrization began to
appear in the 1890s.19 What Beard knew as neurasthenia died shortly after World War I, and
with it died the shared messages of those who somatically protested a society growing
increasingly competitive and capitalistic, urban and isolated. Once again, nobody really listened,
and time goes on.
At the turn of the 20th century, while Psychoanalysis was slowly emerging in Europe, the
Eugenics movement had already landed on American shores and was taking root.20 The work of
Francis Galton (1822-1911), borrowing heavily from his cousin Charles Darwin (1809-1882),
was furthering the notion that individual differences, including mental illness, were largely
hereditary.21 Galton’s work both reflected and reinforced American pragmatism (itself a
reflection and furthering of post-Enlightenment scientific reductionism), which seemed to be
looking for an empirical cause and cure for everything. With no cure for mental illness in sight,
and with increasing acceptance of organic causes as opposed to environmental ones, there was
increasing pessimism as to treatment of mental disorders. American psychiatry, joining with the
spirit of the times, began focusing more on categorization and studies than on treatment.8
Like his psychiatric brethren, George Still was also looking for the biological causes of
mental illness. But what distinguished Still, and his 1902 lectures, was the fact that Still was the
first (at least publicly) to look at children who were inattentive and badly behaved through
the lens of mental illness.22 Still didn’t see these behaviors as learned, much less psychosomatic;
he saw them as being caused by something wrong with the brain or body.22 Even though there
was no evidence to support Still’s theories, his work focused on childhood behavior, clearing a
path for what was to become the disease theory of attentional problems.
American interest in childhood behavioral problems rose dramatically shortly after WWI,
following a major epidemic of encephalitis (1917 to 1928) that affected large numbers of
children.7 Physicians began noticing that many of the children who survived the brain infection
exhibited impairment in attention, activity regulation and impulsivity, symptoms that came to be
known as Postencephalitic Behavior Disorder.7 For those looking for biological causes of
childhood mental illness, here was pretty clear evidence that damage to the brain can produce
behavioral problems. With newfound evidence in tow, researchers of the 1920s and 30s began
studying other childhood sources of brain injury, including birth trauma, lead toxicity and
epilepsy, and their relationship to behavioral problems.
The 1930s also saw the first use of stimulant drugs to treat behavioral problems, by
American psychiatrist Charles Bradley.7 Bradley had been treating children who suffered post-
pneumoencephalographyI headaches with Benzedrine, speculating that the stimulant would spur
the production of more spinal fluid. The Benzedrine did not do much for the headaches, but
teachers noticed that some of the children taking the stimulant experienced a striking
improvement in their schoolwork.23 Bradley, looking to replicate these findings in children with
behavior problems, set up a controlled trial with 30 such children. The results of the study,
published in the American Journal of Psychiatry (1937), showed that 14 of the 30 subjects
exhibited a "spectacular change in behavior (and) remarkably improved school performance,"
during one week of treatment with Benzedrine.23 These landmark results established the utility of
psychostimulants in the treatment of behavioral problems, while lending further credence to the
disease (biological) theory of attentional problems.
The 1939 German invasion of Poland marked the beginning of World War II (1939-1945), and the coming of widespread social changes across Europe, America, and throughout the
world. The fighting and instability in Europe drove many psychoanalysts to flee the continent,
most of whom settled in England and America.24 Naturally, this migration had a major effect on
American psychiatry. Freud’s 1909 visit to Clark University had already planted psychoanalytic
seeds that had been growing for decades. Now there were more gardeners. By the 1940s the
majority of the world’s psychoanalysts lived in America,25 and psychoanalysis was growing
increasingly popular and influential within American psychiatry, and within the American
cultural landscape. More on this later.
I
A painful and now obsolete medical procedure in which cerebrospinal fluid is drained from around the brain and
replaced with air to allow the structure of the brain to show up more clearly on an X-ray picture (Wikipedia, 2005).
The end of World War II (1945) ushered in an era of unprecedented economic
growth in the U.S., and of course, more social change. While the war cost over 400,000
Americans their lives, it also mobilized the American workforce, cut unemployment, and
increased commercial productivity. A surge in jobs and manufacturing, together with newfound
disposable income and rising consumer confidence, made for an economic boom (as well as a
“baby boom”). Of course, the two main industries leading America’s post-war economic charge
were cars and television.
Television was formally launched in July 1941 when the FCC authorized the first two
commercial TV stations.26 By January 1942, Pearl Harbor had been attacked, America had
entered the war, and nearly all television broadcasting worldwide had come to a screeching halt.27
By the time the war had ended in 1945, nine commercial TV stations were authorized, but only
six of them were on the air. Early television, constrained by poor picture quality, a lack of quality
programming, and a relatively high price tag, attracted only modest public interest. In 1946 only
0.5% of U.S. households owned a television set.27
As the number of TV stations and quality programs continued to grow, and the price of
TV sets continued to drop (as a result of mass production and technological advances), the
demand for television steadily increased. In 1950 8.8% of U.S. households owned a TV.28 By
1954, the year color TVs hit the market, the percentage of households with a TV had risen to
55.7%.27 By 1962, the year the Beatles first appeared on TV, 90% of American homes owned at
least one television set.27 Television was no longer a luxury item; it had become an integral part of
the culture, an American way of life.
The TV boom of the 1950s was largely fueled by TV ad revenues,29 the money behind
increases in production, technological advances, and station expansion. The effectiveness of TV
ads, vis-à-vis merchandise sales, in turn fueled the rapid growth and influence of corporations.
Naturally, as the popularity and influence of TV and TV ads rose, so did the cost of air time,
which of course resulted in shorter and more frequent commercial spots.29 TV had become the
center of American attention; commercials gave viewers ever more to attend to. Which brings us
back to our previously scheduled program.
In the late 1940s neuropsychiatrist Alfred E. Strauss and colleagues at the Wayne County
Training School in Northville, Michigan, had been studying the psychological effects of brain
injury in a group of mentally retarded children.7 Based on their findings, Strauss and his
colleagues isolated a number of behavioral characteristics, such as aggressiveness, impulsivity,
distractibility, and hyperactivity, which they believed could discriminate between groups of
mentally retarded children with and without brain damage. Hyperactivity was seen as the most
valid indicator.30 Strauss generalized these findings to all children displaying these characteristics,
presuming that they too had what he called “minimal brain damage.” For Strauss, children with
“MBD” were essentially overstimulated, due to their inability to filter out extraneous stimuli, and
therefore acted out.30
Around the same time the American Psychiatric Association was busy working on the
Diagnostic and Statistical Manual of Mental Disorders (DSM). The purpose of the DSM was to
create a common nomenclature based on a consensus of the contemporary knowledge about
psychiatric disorders.31 The APA sent questionnaires to 10% of its members (who were mostly
analysts), asking for comments on the proposed categories. The final version, which assigned
categories based on lists of symptoms, was approved by vote of the membership and published in
1952.31 The DSM-I included three categories of psychopathology: organic brain syndromes,
functional disorders, and mental deficiency. These categories contained 106 diagnoses. Only one
diagnosis, Adjustment Reaction of Childhood/Adolescence, could be applied to children.31 There
was no mention of “hyperactivity.”
Throughout the 1950s the concept of “minimal brain damage” gained considerable
popularity and influence; at the same time, its rise to prominence was met by widespread
criticism. Strauss had no hard evidence to support his theories, and needless to say he was taking
quite a leap by assuming that children had brain damage simply by looking at their behavior. In
1963 the Oxford International Study Group of Child Neurology released a publication which
stated “that brain damage should not be inferred from behavior alone” and suggested replacing the
term minimal brain damage with “minimal brain dysfunction.”32 And so it was. “Minimal brain
dysfunction” became the latest in a growing line of fashionable terms for hyperactivity, only to be
supplanted, like its predecessors, a few years later.
In 1957 American psychiatrist Maurice Laufer, a student of Charles Bradley, coined the
term “hyperkinetic impulse disorder” to describe children who presented with a hyperactive
pattern of behavior. Based on his work with emotionally disturbed children, Laufer concluded that
hyperactivity was not a result of childhood disease or brain injury, but rather a symptom of
developmental delay of the central nervous system.33 Laufer, like his mentor, asserted that
stimulant drugs were the treatment of choice for this “disorder.”
Three years later, in 1960, an article published by eminent child psychiatrist Stella Chess
(also an American) furthered the notion that hyperactivity, or what she called “hyperactive child
syndrome,” was biological in nature, rather than the result of an injury. Although seemingly more
concerned with classification and clinical descriptions than etiology, Chess believed that this
“syndrome” was genetically inherited, and relatively common. Chess described a child with
hyperactive child syndrome as “one who carries out activities at a higher rate of
speed than the average child, or who is constantly in motion, or both.”34 Evidently, Chess’
hyperactive child syndrome was a bit more benign than what George Still described as a “defect
in moral control,” or what Alfred Strauss thought was “minimal brain damage.”
It was around this time, the 1950s and 60s, that the perspective on hyperactivity taken in
the U.S. began to diverge from that taken in Europe, particularly in Great Britain.34 As you can
see in the work of Laufer and Chess, American psychiatrists increasingly saw hyperactivity as a
relatively common behavioral disturbance of childhood, not typically associated with symptoms
of brain damage. Laufer shrewdly acknowledged that most normal children display some degree
of hyperactive, impulsive behavior, asserting that hyperactivity was but an extreme degree of such
behavior.
Most European psychiatrists, on the other hand, continued to see hyperactivity as an
extreme state of excessive activity that was highly uncommon, and that usually occurred in
conjunction with other signs of brain damage (such as epilepsy or retardation) or insult (such as
trauma or infection).34 This divergence in views led to large discrepancies between American and
European
psychiatrists and psychologists in their estimates of prevalence rates, diagnostic criteria, and
preferred treatment modalities, discrepancies which remain (although to a lesser extent) even
today.
In 1968 an eight-member committee of the American Psychiatric Association developed
and revised the DSM, publishing the DSM-II. The main goal of the DSM-II was compatibility
with the International Classification of DiseasesII, 8th edition (ICD-8), to further facilitate
communication among professionals.35 After the “revision,” which borrowed heavily from the
ICD-8, the DSM contained 11 major diagnostic categories, and 182 diagnoses.31 The DSM-II, like
its predecessor, described disorders as “reactions,” reflecting the psychoanalytic view that mental
disorders were reactions of the personality to biological, psychological, and social factors (while
also reflecting the enduring popularity and influence of psychoanalysis within the APA).
The DSM-II also paid increased attention to the problems of children and adolescents,
adding the category Behavior Disorders of Childhood/Adolescence. This category included
“Hyperkinetic Reaction of Childhood,” the DSM’s first recognition of hyperactivity, thought by
many to be the precursor to ADD.7 The diagnosis was based, as ADHD is today, on behavioral
criteria, particularly activity level, with little emphasis on symptoms of inattention.36 Hyperkinetic
Reaction of Childhood immediately became the standard psychiatric term for hyperactivity, with
terms like minimal brain dysfunction, hyperkinetic impulse disorder, and hyperactive child
syndrome virtually disappearing from the psychiatric landscape.31
Although psychoanalytic thought was very much a part of the DSM-II, the 1960s and 70s
saw a rapid decline in the popularity and influence of psychoanalysis within the U.S. The
psychopharmacological revolution ushered in by the development, widespread use, and
effectiveness of psychotropic medications such as benzodiazepines (like Valium), anti-psychotics
(like Thorazine), and lithium, brought with it increasing doubts about the effectiveness of “the
talking cure.”37 Neuroscience was uncovering increasing evidence relating brain functioning, in
II
A classification system of diseases, health conditions, and procedures developed by the World Health
Organization, which represents the international standard for the labeling and numeric coding of diseases and health
related problems. (WHO, 2005).
this case malfunctioning, to neurosis. Psychoanalysis had nothing to offer in the way of biological
evidence. Biological psychiatry (i.e. psychopharmacology) appealed to American pragmatism,
was compatible with the increasingly popular and influential field of Behaviorism, and it
“worked.” For these reasons, biological psychiatrists rapidly grew in number and in influence,
taking over many leadership positions in the field of psychiatry,37 and within the APA.
In 1969, America experienced another revolution, in children’s television. Children of the
sixties had been enjoying shows such as Romper Room, Captain Kangaroo, and Mr. Rogers’
Neighborhood, all in black & white until the mid 1960s. These shows were characteristically
simple and calm, centering on interpersonal (and inter-“puppetal”) relations and relationships.
They emphasized creative thought over comprehension, life lessons over vocabulary lessons, and
they did it all with little or no animation.
In 1969, Sesame Street made its television debut. This innovative children’s show
intermixed humans, puppets, and animation, within small, rapidly changing skits designed to be
like commercials: quick, catchy, and memorable. It utilized bright colors, flashy graphics, and
catchy phrases and songs (i.e. constant visual and auditory stimulation) to grab and hold the
attention of America’s children (which for busy parents was, as you can imagine, a godsend).
Needless to say, Sesame Street was an instant success, reaching more than half of the nation's 12
million 3- to 5-year-old children in just its first season,38 quickly becoming the model for
children’s television.
Sesame Street could not have been introduced at a more opportune time. In the 1960s and
70s the divorce rate in the U.S. rose dramatically,39 and with it the number of single-
parent homes. To make matters worse, the 70s also saw a dip in the economy and a dramatic rise
in the cost of living,40 making dual incomes an increasing necessity. Many parents had to return to
work, work harder and longer, or even take second (or third) jobs to make ends meet.
Understandably, parents had less and less “quality” time to spend with their kids. America’s
children, as children do, needed stimulation and attention (which the TV characters provided).
Busy, overwhelmed parents needed a babysitter. Shows like Sesame Street offered both.
While Sesame Street helped children learn to spell, to count, to do math, even to sign, it
also created the expectation that learning should be fun, all of the time. In neurological terms,
sustained exposure to highly stimulating shows like Sesame Street develops areas of the brain that
scan and shift attention at the expense of those that focus attention,41 conditioning the brain to
generally expect such stimulation and variety. Now imagine a child who has (literally) grown up
with shows like Sesame Street being asked to sit at uncomfortable desks for hours at a time,
listening to monotone lectures in monotonous environments, where the child is no longer the
center of attention (as they are when they watch Sesame Street). Given what these kids are used
to, is it not understandable why they would be restless and easily distracted at school?
In 1971, Canadian psychologist Virginia Douglas, in a Presidential address to the
Canadian Psychological Association, presented her theory that deficits in sustained attention and
impulse control were more likely to account for the difficulties of these children than
hyperactivity. Based on her research, Douglas argued that hyperactive children were not
necessarily more distractible than other children and that sustained attention problems could
emerge under conditions where no distractions existed.42 Douglas’s research and ideas were
published the following year (1972) in the landmark article Stop, look and listen: The Problem of
Sustained Attention and Impulse Control in Hyperactive and Normal Children, an article that
almost single-handedly shifted the focus of research within the field from hyperactivity to
attention issues. By the end of the 1970s there were over 2,000 published studies on attention
deficits.7
In 1980 the DSM was revised again, largely to address growing concerns about the
manual’s reliability and validity.43 Work began on the DSM-III in 1974, and by the time it was
finished, what started out as a revision had turned into a major overhaul. Gone was any semblance
of psychoanalytic thought (including the term “reaction,” which was replaced by the term
“disorder”), a casualty of the growing supremacy of the biomedical model. The DSM-III was
purportedly based on research and empirical evidence, and “atheoretical”43 (although it was
widely criticized, especially by psychodynamically-oriented clinicians, as being inherently
biased).35 To improve reliability and validity, the DSM-III included a more descriptive approach,
specific diagnostic criteria, differential diagnoses, and the benchmark multiaxial system.43
The DSM-III contained 265 diagnoses (compared to the DSM-II’s 182), one of them
being the newly termed “Attention Deficit Disorder” (notice the move towards disease model
terminology). What had been known as a “hyperkinetic reaction” was thus redefined as primarily
an issue of inattention, rather than of hyperactivity (reflecting the work of Virginia Douglas and
others). To account for the pervasiveness of hyperactivity in those with “ADD,” the DSM-III
further delineated two types of attention deficit disorder: ADD with hyperactivity and without it,34
a move that was immediately met with criticism and skepticism. Not only was the relegation
of hyperactivity to a secondary symptom criticized by many as hasty and empirically unfounded
(especially considering that it was the primary feature of the disorder for decades),36 many argued
that what the APA termed “ADD without hyperactivity” (i.e. inattentiveness) was itself a separate
and distinct disorder,34 an argument that continues today.
Criticisms aside, the DSM-III was widely used and accepted,43 although it continued to be
a work in progress. In 1987 the APA published a revised edition of the DSM-III, with the goals of
expanding coverage, increasing reliability, and updating the research base.44 The DSM-III-Revised
Edition, or DSM-III-R, contained over 100 changes in its diagnostic criteria,45 as well as changes
in its multiaxial system of diagnosis.44 Diagnostic categories were added (raising the number of
diagnoses to 297), some were removed, and some were renamed. One of those renamed was, of
course, ADD. In response to growing research evidence that ADD rarely occurs without
hyperactivity,1 the name of the disorder was changed to Attention Deficit Hyperactivity Disorder
(ADHD), thus reversing the trend Virginia Douglas had started. The diagnostic criteria for ADHD
included symptoms of hyperactivity, impulsivity, and inattentiveness, with no subtypes, thus
doing away with the possibility that an individual could have the disorder without being
hyperactive.45 To account for individuals presenting with attention deficits without hyperactivity,
the DSM-III-R included the term Undifferentiated Attention Deficit Disorder (UADD), with the
specification that insufficient research existed to establish diagnostic criteria for it at that time.34
In retrospect, the APA’s decision to eliminate ADD without hyperactivity as a diagnostic
category might have caused more problems than it was intended to solve. Because the DSM-III-R
limited clinicians to two choices when diagnosing attentional problems, either Attention-Deficit
Hyperactivity Disorder or Undifferentiated Attention Deficit Disorder (for which there
were no diagnostic criteria), many children who did not present with symptoms of hyperactivity
were inappropriately diagnosed as having ADHD,46 while others were just not diagnosed at all. It
took the APA seven years to identify and correct this oversight. Meanwhile, countless children
received the wrong kind of help (e.g. unnecessary medication), or worse, no help at all.
By the time the DSM-IV was published in 1994, the term Undifferentiated Attention
Deficit Disorder had been scrapped, and the diagnostic category ADHD had been completely
made over. Although still called ADHD, the disorder now had two separate
symptom lists (inattention and hyperactivity-impulsivity) and was split into three subtypes.
Children who presented with at least six out of nine symptoms of inattention (like being “easily
distracted”), “to a degree that is maladaptive and inconsistent with developmental level,” would
be considered as having ADHD, Predominantly Inattentive Type. Children who presented with at
least six out of nine symptoms of hyperactivity/impulsivity (such as “often talks excessively”)
would be considered as having ADHD, Hyperactive-Impulsive Type. Children who met criteria
for both of these subtypes would be diagnosed with ADHD, Combined Type.46 For good measure,
the DSM-IV also included a diagnostic category for children who met some but not enough
criteria for ADHD. These children would be diagnosed with ADHD, “Not Otherwise Specified.”
As you can see, the DSM-IV’s version of ADHD was exceedingly, and perhaps
excessively, comprehensive. The DSM-IV provided a laundry list of what the APA determined to
be symptoms of pathology, including “often interrupts or intrudes on others,” or “often does not
seem to listen...,” or my personal favorite, “often runs about or climbs excessively...,” all of which
are common, dare I say normal, childhood behaviors (especially for boys). Since it is unlikely that a child would present
with six or more (if any) of these “symptoms” in a doctor’s office, the diagnosis was, as it is now,
largely based on second- or third-party (word-of-mouth) reports, usually from parents or teachers
(generally the ones most affected by hyperactive or inattentive children).
With six or more diagnostic criteria in tow, it is then up to the clinician to determine if
these symptoms are “persistent” and “maladaptive,” criteria so vague that the DSM-IV essentially
leaves the diagnosis to the discretion of the clinician. With such broad criteria, and with so much open to
interpretation, the DSM-IV made it so that just about any child could be diagnosed with ADHD.
Surely it is no coincidence that the diagnosis of ADHD skyrocketed shortly after the DSM-IV was
published.22
You might be thinking, “If the diagnosis is so flawed, why has it been so widely used and
accepted, while remaining practically unchanged, for more than ten years?” To answer that
question, we must first ask: who benefits from a child being diagnosed with ADHD? First and
foremost, pharmaceutical companies have benefitted immensely from the sharp increase in the
diagnosis of ADHD. Pharmaceutical companies collect hundreds of millions of dollars annually
from the sale of stimulant drugs, such as Ritalin (methylphenidate), used to “treat” ADHD.7 The
more children diagnosed, the more drugs prescribed, the greater the profits. And yes, the use of
Ritalin skyrocketed shortly after the DSM-IV was published.47
Proponents of stimulant drug treatments like Ritalin contend that these drugs are safe and
effective at treating symptoms that would otherwise disrupt a child’s ability to stay on task, and to
learn. While stimulant drugs may be “effective” at reducing hyperactivity and improving
attentiveness in some children, these short-term gains come with serious short-term and long-term
risks. Most readily observable are the side effects, which can include nervousness, decreased
appetite, insomnia, headaches, stomachaches, dizziness and drowsiness.7 Lesser-known but more
serious side effects include cognitive impairment (e.g. inhibited range of affect, diminished ability
to perform complex and abstract thinking), which occurs in over 40% of all children taking
stimulant drugs,22 and temporary growth suppression, because stimulant drugs directly
lower the production of growth hormone.7 There is also evidence suggesting that stimulant drugs
impair the body’s immune system,21 impair sexual function,47 and even precipitate substance
abuse.7
Despite all the well-known risks, American physicians continue to widely prescribe
Ritalin, while the rest of the world’s physicians hardly use the drug. In fact, American
pharmacists distribute five times more Ritalin than the rest of the world combined,48 leaving one
to two percent of all U.S. children (roughly one million) and ten percent of school-aged boys
on the drug.49 So why do American physicians, and especially American parents,
continue to give children stimulant drugs? One need only look across the American cultural
landscape to find that answer.
America, founded on the Protestant ethic, has always valued hard work and delayed
gratification, and we expect as much from our kids. American children are supposed to obey their
parents and be “seen, not heard.” When children are less than compliant, when they become
disruptive, many parents figure there’s got to be a reason why their children aren’t behaving as
they “should.” And so they take their child to a doctor to find out what’s “wrong.” Now it is quite
possible that such behavior is a result of poor parenting (poor limit setting, neglect, abuse, etc.),
but I doubt a parent would readily accept such an explanation, even if it were true. Blaming
parents, no matter what they have or haven’t done, also goes against the all-too-American value
of individual responsibility, and is generally frowned upon in our “get-over-it” culture. Needless
to say, the poor-parenting road is rarely taken, and clinicians who do take it tread lightly.
On the other hand, a clinician could sit the parent(s) down and demonstrate how recent
sociohistorical developments, such as rising divorce rates, the breakdown of the nuclear and
extended family, the quickening of American lifestyles, and the shortening of American attention
spans have affected the psychosocial development of American children. Of course, the clinician
would also have to explain cognitive development and how social experiences directly affect
neural development, particularly gene expression. The clinician could cite studies, such as the one
done by pediatric researcher Dr. Dimitri Christakis, demonstrating how TV viewing in young
children contributes to attention problems later in life,50 III or the American Academy of
Pediatrics’ recommendation against television for children age 2 or younger,51 as clear evidence
that TV viewing can and does affect cognitive development.
The clinician could even refer to any number of controlled studies linking diet, especially
food additives and coloring, to childhood behavioral problems.52 The clinician could do all this,
but do parents really want to hear such scary stuff? Do they want to hear about the breakdown
of America and American families? Do they want to hear that TV, food additives, even
pollution52 may be influencing their child’s behavior? We Americans are known for our social
apathy; for many of us, hearing about such serious, deeply entrenched social problems triggers feelings
of helplessness and despair. These issues can be equally overwhelming for clinicians, who in an
effort to help are trying to isolate and treat problems. But how do you treat a problem that is
deeply rooted in and reflects the very framework of our country and culture?
I propose that for many Americans, myopia is a defense against the painful realities of our
world. A simple answer (that faulty genes or haywire neurotransmitters are to blame for our
children’s anxieties), with a simple solution (like stimulant drugs), is a lot more comforting than
the aforementioned alternatives. A single, albeit theoretical, “truth” (ADHD) is a lot less
anxiety-provoking than the world of possible causes for our children’s difficulties, to which we as
Americans are all accessories. And so we accept this unsubstantiated disorder, this “disease” that
was once relatively uncommon (and still is in countries like England), that now affects an
estimated 3 to 7% of all American children,1 because it is easy to accept. The spirit of our times
has made it so. We, the pill-popping “quick-fixers” of the world, then give our children stimulant
drugs (and usually very little else), which for reasons that remain unclear make them more docile
and compliant, and we consider the problem “treated.” Yet the prevalence of ADHD continues to
rise.
If we are to have any hope of reversing the growing problem of ADHD, we need to open
our hearts and minds to what these somatic messages are telling us. Children are hyperactive
(act-out) or inattentive (act-in) for a reason, if not for many reasons. Maybe these symptoms are telling
us that their world is moving too fast, that they are over-stimulated (and increasingly pressured),
and that we need to slow their world down a bit. Maybe they are telling us that our children need
to be eating better and exercising more. Maybe these symptoms are a cry for more attention,
more love. Perhaps they are a way of telling us that they are being physically or emotionally
abused, and that they need our help. Whatever the reason(s), these kids need love and attention;
they don’t need to be labeled as different or defective, and they don’t need dangerous,
mind-altering drugs. We can do better.

III Christakis’ controlled study of 1,200 6- to 8-year-old children, published in 2004, revealed that “each hour of television
watched per day at ages 1 through 3 increases the risk of attention problems by almost 10 percent at age 7.”
Nearly 140 years ago, adults with too little energy were told that they had a disease
called neurasthenia and were treated with electroshock. These days, children with too much energy are
being told that they have a disease called ADHD, and are being treated with amphetamines. No
one listened to the somatic protests of increasingly oppressed 19th-century women; instead, they
were labeled “hysterical” and silenced. I can only hope that we as a culture can take a break from
our ambitious, fast-paced lives to listen, to really listen to what these children are trying to tell us,
consciously or otherwise. ADHD is not a road sign to a mental disorder, but a road sign to a social
one. It is about time we paid attention.
References
1) American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th
ed.). Arlington, VA: Author.
2) Elia J, Ambrosini PJ, Rapoport JL (March 1999). "Treatment of attention-deficit-hyperactivity
disorder". N. Engl. J. Med. 340 (10): 780–8.
3) Maughan, B., Iervolino, A., & Collishaw, S. (2005). Time trends in child and adolescent mental
disorders. Current Opinion in Psychiatry, 18(4), 381-385.
4) Furman, R. A. (2002). Attention deficit-hyperactivity disorder: An alternative viewpoint. Journal of
Infant, Child and Adolescent Psychotherapy. 2: 125-144
5) Ford, I. (1996). Socio-educational and Biomedical Models in the Treatment of Attention Deficit /
Hyperactivity Disorder and related Neurobehavioural Disorders in Childhood and Adolescence, and their
Implications for Adult Mental Health. Retrieved on 11/1/05 from http://www.priory.com/psych/iford.htm
6) TePas, T. (1996). Attention-Deficit Hyperactivity Disorder Revisited. Retrieved on 9/22/05 from
http://www.nutrition4health.org/NOHAnews/NNF96ADHD.html
7) Armstrong, T. (1997) The Myth of the ADD Child. Penguin Putnam. New York, NY.
8) Merkel, L. (2003). The History of Psychiatry. Lecture. Retrieved on 9/20/05 from
http://www.healthsystem.virginia.edu/internet/psych-training/seminars/history-of-psychiatry-804.pdf
9) International Encyclopedia of the Social Sciences. (1968).
10) Cushman, P. (1995). Constructing The Self, Constructing America: A Cultural History of
Psychotherapy. Addison-Wesley Publishing. NY, NY.
11) Maines, Rachel P. (1998). The Technology of Orgasm: "Hysteria", the Vibrator, and Women's Sexual
Satisfaction. Baltimore: The Johns Hopkins University Press
12) Wikipedia (2005). History of Women in the United States. Retrieved on 10/1/05 from
http://en.wikipedia.org/wiki/History_of_women_in_the_United_States
13) Columbia Encyclopedia, Sixth Edition (2001-05). Retrieved on 10/2/05 from
http://www.bartleby.com/65/hy/hysteria.html
14) Psychnet-UK (2005) Neurasthenia Disorder. Retrieved on 10/06/05 from http://www.psychnet-uk.com/dsm_iv/neurasthenia.htm
15) Wikipedia (2005). Neurasthenia. Retrieved on 10/8/05 from http://en.wikipedia.org/wiki/Neurasthenia
16) Marlowe, D. (2003) Conceptual and Theoretical Medical Developments in the 19th and Early 20th
Centuries. Chapter four of Psychological and Psychosocial Consequences of Combat and Deployment
with Special Emphasis on the Gulf War. Rand publishing.
17) Taylor, R. (2000) Death of neurasthenia and its psychological reincarnation. The British Journal of
Psychiatry (2001) 179: 550-557
18) Goetz, C. G. (2001). Poor Beard!!: Charcot’s internationalization of neurasthenia, the "American
disease.” Neurology 2001; 57: 510-514.
19) Brown, E. MD (1980). An American Treatment for the 'American Nervousness':
George Miller Beard and General Electrization. Retrieved on 10/8/05 from
http://bms.brown.edu/HistoryofPsychiatry/Beard.html
20) Selden, S. (2005) Eugenics Popularization. Retrieved on 9/20/05 from
http://www.eugenicsarchive.org/html/eugenics/essay6text.html
21) Shultz, S., & Shultz, D. (2000). A History of Modern Psychology. Harcourt Brace & Company,
Orlando, FL.
22) Stein, D. (2001). Unraveling the ADD/ADHD Fiasco. Andrews McMeel Publishing. Kansas City, MO.
23) American Journal of Psychiatry (1998). Images in Psychiatry: Charles Bradley, M.D., 1902–1979. Am
J Psychiatry 155:968, July 1998. Retrieved on 10/18/05 from
http://ajp.psychiatryonline.org/cgi/content/full/155/7/968
24) New England Institute for Psychoanalytic Studies (1997). What is the History of Psychoanalysis?
Retrieved on 10/24/05 from http://www.neips.org/aboutpsych/history.htm
25) Healy, D. (2000) A dance to the music of the century: Changing fashions in 20th-century psychiatry.
Psychiatric Bulletin. 24: 1-3
26) Tvhandbook.com (2005) The History of Television. Retrieved on 10/26/05 from
http://www.tvhandbook.com/History/History_TV.htm
27) Tvhistory.tv (2005) Television History- The First 75 Years. Retrieved on 10/25/05 from
http://www.tvhistory.tv/
28) PCWorld (2005). TV Facts Then & Now. Retrieved on 10/27/05 from
http://www.pcworld.com/news/article/0,aid,118945,tfg,tfg,00.asp
29) Museum of Broadcast Communications (2005) Advertising. Retrieved on 10/26/05 from
http://www.museum.tv/archives/etv/A/htmlA/advertising/advertising.htm
30) Schwartz, S. and Johnson, J.H. (1985). Psychopathology of Childhood: A Clinical - Experimental
Approach (2nd Edition). New York: Pergamon Press.
31) Blashfield, R.K. (1998). Diagnostic models and systems. In A.S. Bellack, M. Herson, & Reynolds,
C.R. (Eds.), Clinical Psychology:
Assessment, Vol. 4, ( pp. 57-79). New York: Elsevier Science.
32) Sandberg, Seija and Barton, Joanne. (2002). Historical Development. In S. Sandberg (Ed.),
Hyperactivity and Attention Disorders of Childhood (pp. 1-29). Cambridge: Cambridge University Press.
33) Schrag, P. & Divoky, D. (1975). The myth of the hyperactive child: and other means of child control.
New York, NY: Pantheon Books.
34) Barkley, R. (1998). Attention-Deficit Hyperactivity Disorder: A Handbook for Diagnosis and
Treatment. New York: Guilford Press.
35) Spitzer, R. (2001). Values and Assumptions in the Development of DSM-III and DSM-III-R: An
Insider’s Perspective and a Belated Response to Sadler, Hulgus, and Agrich’s “On Values in Recent
American Psychiatric Classification.” Vol. 189, No. 6. The Journal of Nervous and Mental Disease.
Printed in USA, 2001, by Lippincott Williams & Wilkins
36) Goodyear, P. & Hynd, G. (1992). Attention-Deficit Disorder With (ADD/H) and Without (ADD/WO)
Hyperactivity: Behavioral and Neuropsychological Differentiation. Retrieved on 10/30/05 from
http://www.questia.com/PM.qst?a=o&d=81021965
37) Metzl, J. (2003). ‘Mother’s Little Helper’: The Crisis of Psychoanalysis and the Miltown Resolution.
Retrieved on 11/1/05 from
http://www.med.umich.edu/psych/faculty/metzl/07_Metzl.pdf
38) Palmer, A. (2000). The Street that Changed Everything. Retrieved on 10/25/05 from
http://www.apa.org/monitor/oct03/street.html
39) Kesselring, R. & Bremmer, D. (2005) Female Income, the Ego Effect and the Divorce
Decision: Evidence from Micro Data. Retrieved on 11/01/05 from http://www.rose-hulman.edu/~bremmer/professional/divorce_micro.pdf
40) Malone, B. (1997). One Cloud, Fifty Silver Linings. Retrieved on 11/01/05 from
http://www.findarticles.com/p/articles/mi_qa3647/is_199707/ai_n8771267
41) Jensen PS, Mrazek D, Knapp PK, Steinberg L, Pfeffer C, Schowalter J, Shapiro T (1997) “Evolution
and Revolution in Child Psychiatry: ADHD as a Disorder of Adaptation” Journal of the American
Academy of Child and Adolescent Psychiatry 36:1672-1679.
42) About.com (2005). A Brief History of ADHD. Retrieved on 10/15/05 from
http://add.about.com/library/weekly/aa090597.htm
43) Moon, K. (2004). The History of Psychiatric Classification: From Ancient Egypt to Modern America.
Retrieved on 10/25/05 from http://www.arches.uga.edu/~kadi/index.html
44) Gilles-Thomas, D. (1989) Lecture Notes for a course in Abnormal Psychology.Retrieved on 11/02/05
from http://ccvillage.buffalo.edu/Abpsy/
45) Peele, R. (1986). Report of the speaker-elect. American Journal of Psychiatry, 143, 1348-1353.
46) Wheeler, J. & Carlson, C. (1993). Attention Deficit Disorder Without Hyperactivity:
ADHD, Predominantly Inattentive Type. Retrieved on 10/20/05 from http://www.kidsource.com/LDA-CA/ADD_WO.html
47) Jacobovitz, D., Sroufe, L.A., Stewart, M., and Leffert, N. (1990). Treatment of attentional and
hyperactivity problems in children with sympathomimetic drugs: A comprehensive
review.Journal of the American Academy of Child and Adolescent Psychiatry, 29, 677-688.
48) Cancer Prevention Coalition (2005). Ritalin: Stimulant for Cancer. Retrieved on 11/05/05 from
http://www.preventcancer.com/patients/children/ritalin.htm
49) Kane, A. (2005) The Stimulants: Ritalin and its Friends. Retrieved on 10/18/05 from
http://addadhdadvances.com/ritalin.html
50) Christakis, D. (2004). Early Television Exposure and Subsequent Attentional Problems in Children.
Pediatrics, 113(4), 708-713.
51) American Academy of Pediatrics (2005). Television and the Family. Retrieved on 11/02/05 from
http://www.aap.org/family/tv1.htm
52) Feingold Association (2005). Diet and ADHD. Retrieved on 11/12/05 from
http://www.feingold.org/research1.html
World Health Organization (2005). International Classification of Diseases (ICD). Retrieved on 10/30/05
from http://search1.who.int/search?ie=utf8&site=who_main&client=who_main&proxystylesheet=who_main&output=xml_no_dtd&oe=utf8&q=ICD