THE ANDROID AND OUR CYBORG SELVES:
WHAT ANDROIDS WILL TEACH US
ABOUT BEING (POST)HUMAN
By
ANTONIE MARIE BODLEY
A dissertation submitted in partial fulfillment of
the requirements for the degree of
DOCTOR OF PHILOSOPHY
WASHINGTON STATE UNIVERSITY
The Graduate School
MAY 2015
© Copyright by ANTONIE MARIE BODLEY, 2015
All Rights Reserved
To the Faculty of Washington State University:
The members of the Committee appointed to examine the dissertation of ANTONIE
MARIE BODLEY find it satisfactory and recommend that it be accepted.
______________________________
Joseph K. Campbell, Ph.D., Chair
______________________________
Jon Hegglund, Ph.D.
______________________________
Pamela Thoma, Ph.D.
______________________________
Gene Apperson
ACKNOWLEDGEMENT
My deepest thanks go out to my committee for their continuing support and
encouragement. Dr. Joe Campbell, Dr. Jon Hegglund, Dr. Pamela Thoma and Gene Apperson,
all deserve my sincerest gratitude. Dr. Campbell, my committee chair, and Dr. Jon Hegglund
were major supporters of my ideas from the very beginning. Without their confidence in my
work beginning as early as ten years ago, this dissertation never would have been possible. Dr.
Thoma helped show me that strong voices should be encouraged, no matter how quiet they may
seem. From Mr. Apperson I was reminded that the fantastic imagination in science fiction
generates real science and dreams for the future of robotics.
I would like to thank other faculty and staff in the Graduate School who helped make this
possible. In particular, Dr. Pat Sturko: without her friendship and professional guidance, I never
would have completed this.
Also foundational in the completion of this dissertation are my friends and family.
Without their feedback and continuing interest in my work, I would not have been able to
generate the ideas, the perspective, and the creative telling of my work. I could never have asked
for a more interesting and intellectually curious group of friends to ask questions and quiz my
knowledge about science and science fiction.
Lastly, to my husband, Garrett: my deepest thanks for your patience and enduring
support. It is only with you that I could enjoy date night watching RoboCop.
THE ANDROID AND OUR CYBORG SELVES:
WHAT ANDROIDS WILL TEACH US
ABOUT BEING (POST)HUMAN
Abstract
by Antonie Marie Bodley, Ph.D.
Washington State University
May 2015
Chair: Joseph K. Campbell
In the search for understanding a future for our selves with the potential merging of
strong Artificial Intelligence and humanoid robotics, this dissertation uses the figure of the
android in science fiction and science fact as an evocative object. Here, I propose android theory
to consider the philosophical, social, and personal impacts humanoid robotics and AI will have
on our understanding of the human subject. From the perspective of critical posthumanism and
cyborg feminism, I consider popular culture understandings of AI and humanoid robotics as a
way to explore the potential effect of androids by examining their embodiment and
disembodiment. After an introduction to associated theories of humanism, posthumanism, and
transhumanism, followed by a brief history of the figure of the android in fiction, I turn to
popular culture examples. First, using two icons of contemporary AI, Deep Blue, a chess
playing program, and Watson, a linguistic artificially intelligent program, I explore how their
public performances in games evoke rich discussion for understanding a philosophy of mind in a
non-species specific way. Next, I turn to the Terminator film series (1984-2009) to discuss how
the humanoid embodiment of artificial intelligence exists in an uncanny position for our
emotional attachments to nonhuman entities. Lastly, I ask where these relationships will take us
in our intimate lives; I explore personhood and human-nonhuman relationships in what I call the
nonhuman dilemma. Using the human-Cylon relationships in the reimagined Battlestar
Galactica television series (2003-2009), the posthuman family make-over in the film Fido
(2006), as well as a real-life story of men with their life-sized doll companions, as seen in the
TLC reality television series My Strange Addiction (2010), I explore the coming dilemma of life
with nonhuman humanoids.
TABLE OF CONTENTS
ACKNOWLEDGEMENT
Abstract
LIST OF FIGURES
CHAPTER ONE: INTRODUCTION TO ANDROID THEORY
  INTRODUCTION
  ANDROID THEORY
  CRITICAL POSTHUMANISM
  THE CYBORG
  THE ARTIFICE AND THE ANDROID
  SCIENCE FICTION AND SCIENCE FACT MEET
CHAPTER TWO: ANDROIDS AND PHILOSOPHY, METAPHYSICS, AND ETHICS
  INTRODUCTION
  DEUS EX MACHINA: THE DISEMBODIED MIND
    How Information Lost its (Human) Body
    Conceptualizing Artificial Intelligence
  THE GREAT INTELLIGENCE DEBATE
    Deep Blue
    Watson
  MACHINES AND MORAL RESPONSIBILITY
CHAPTER THREE: UNCANNY INTERACTION AND ANTHROPOMORPHISM
  INTRODUCTION
  EMBODIMENT: TO BODY OR NOT TO BODY?
  THE HAL EFFECT
  UNCANNY INTERACTION: THE HUMAN IN THE HUMANOID
  ROBOPSYCHOLOGY AND THE UNCANNY
  THE UNCANNY AND ANTHROPOMORPHISM
CHAPTER FOUR: CITIZEN ANDROIDS
  INTRODUCTION
  MANUFACTURED HUMANS
  THE NONHUMAN DILEMMA
    Helfer/Six and Fembots
    Race and Robot Doppelgängers
  THE POST/HUMAN LOVE AFFAIR
    Uncanny Companionship
    The Post/Human Family Transformation
CHAPTER FIVE: CONCLUSION
WORKS CITED
LIST OF FIGURES
Figure 1: Major Kusanagi "Jacked In" (Screen shot Ghost in the Shell: Arise, 2013).
Figure 2: "It tastes like peanut butter," Alex says during a procedure on his exposed brain. (Screen shot RoboCop, 2014.)
Figure 3: GERTY greets Sam with a smile and a cup of coffee, "Good morning Sam. I'm here to help you." (Screen shot Moon, 2009.)
Figure 4: The Terminator smiles at John Connor. (Screen shot Terminator 2: Judgment Day, 1991.)
Figure 5: Mori's charting of the uncanny valley (WikiMedia).
Figure 6: Robotic doppelganger. Android Geminoid HI-1 (left) with creator Hiroshi Ishiguro (right).
Figure 7: Characters from Polar Express (left). Hiro from Big Hero Six (right).
Figure 8: Opening credits for BSG after it is revealed that there are multiples of Sharon – "Athena," on the planet of Caprica (left) and "Boomer," aboard Galactica (right). (Screen shot BSG, 2004.)
Figure 9: Six (Tricia Helfer) flanked on either side by Cylon Centurions. (Screen shot BSG, 2003.)
Figure 10: Davecat and Sidore cuddle on the couch. (Screen shot My Strange Addiction, 2011.)
Figure 11: Timmy and Fido enjoy a day at the park. (Screen shot Fido, 2007.)
Dedication
This work is dedicated to all human(oid)s of the future.
May you continue to be inspired by science and science fiction.
The Android and our Cyborg Selves:
What Androids Will Teach Us about Being (Post)Human
By Antonie Marie Bodley
CHAPTER ONE: INTRODUCTION TO ANDROID THEORY
“We ask of the computer not just about where we stand in nature, but about where
we stand in the world of the artifact. We search for a link between who we are
and what we have made, between who we are and what we might create, between
who we are and what, through our intimacy with our own creations, we might
become.”
-- Sherry Turkle, The Second Self: Computers and the Human Spirit (1984).
INTRODUCTION
In 1984 Sherry Turkle began her search for a link between ourselves and our creations;
between ourselves and the artifact in The Second Self: Computers and the Human Spirit. She
used the computer as her point of interest, as her “evocative object.” As a child of the eighties,
someone that could have easily been part of her studies of children and technology, I want to
bring that reflection back into focus again, but with the figure of the android as my evocative
object. While her work took her to explore those connections between and among people
through technology, in particular computers and later mobile devices, this work seeks that link
through fiction in the singular image of the android housed with strong Artificial Intelligence.
In the search for understanding what the future will be like for our selves and our species,
it is important to find some solid ground, a stable perspective upon a subject/object that we can
use for investigation and extrapolation. Here the figure of the android functions as such a focus.
Currently under development in many scientific fields, from humanoid robotics to coding the
artificial intelligence, the android is a rich subject for discussion because it has been in our
cultural imagination for decades. Here, the android becomes my evocative object, an object
poised upon the edge of previously stable boundaries. For Turkle, “Theory enables us, for
example, to explore how everyday objects become part of our inner life: how we use them to
extend the reach of our sympathies by bringing the world within” (Evocative Objects 307). The
goal of examining evocative objects is to “defamiliarize” ourselves from the objects to help bring
them back into focus in a way that matters to our inner self.
Soon, androids will be part of our everyday lives and they will have a profound effect on
our inner self and our homes. I propose here to explore those potential effects by examining the
embodiment and disembodiment of the android through contemporary popular culture examples.
After an introduction to associated theories of humanism, posthumanism, and transhumanism,
followed by a brief history of the figure of the android in fiction, I will turn to examples. First,
using Watson, a linguistic artificially intelligent program, I explore how his performance on the
television gameshow Jeopardy! evokes rich discussion for understanding a philosophy of mind
in a non-species specific way. Next, I turn to the Terminator film series (1984-2009) to discuss
how the humanoid embodiment of artificial intelligence exists in an uncanny position for our
emotional attachments to nonhuman entities. Lastly, I ask where these relationships will take us
in our intimate lives; I explore personhood and human-nonhuman relationships using the human-Cylon relationships in the reimagined Battlestar Galactica television series (2003-2009) as well
as a real-life story of men with their life-sized doll companions, seen in the TLC reality
television series My Strange Addiction (2010).
In fiction, stories of androids, from the functional to the social, are first of all examples of
queer bodies, and I consider these bodies both evocative and queer. For Donna Haraway, “Queering
has the job of undoing ‘normal’ categories, and none is more critical than the human/nonhuman
sorting operation” (xxiv). The body of the android undoes the category of human/nonhuman
simply by being neither human in construction nor in acceptance and therefore “queer” may be
an appropriate choice. Androids are also not fully nonhuman as they are designed to “fit” in with
humanity in a way that is more comfortable than other machines. In fact, this attempt to “pass”
within the populations of humanity suggests another aspect of androids’ queerness, which
reaches into theories of mind, body and society.
ANDROID THEORY
Drawing from the fields of American Studies, Film Studies, Philosophy, and Literary
Studies, I use a multidisciplinary approach to explore the android from two fronts – both the
theory from fiction surrounding the android and the actual, literal development. I seek answers
in theories of posthumanity and explorations of the cyborg. Along with these theoretical
perspectives, I turn to contemporary currents in transhumanist philosophies. With Future Studies
blossoming as an academic discipline in think-tanks like the Singularity University and the
Future of Humanity Institute, it would be detrimental to ignore the actual development of
androids and AI in this discussion. I will explore links among these fields with three primary
foci: the mind, the body and society, each with a respective chapter. I choose this discussion
now, not simply because of my love and fascination for science fiction, but also because we are
on the threshold of an entirely new way of living which includes artificial entities in very humanlike form.
In cultural studies, the posthuman is explored exclusively with the human figure as the
focus, but I propose a shift from the human to the android – both the fictional creature and the
actual creation. Using what I call “Android Theory,” this research seeks to explore the figure of
the android to form a vocabulary that can extrapolate to a future allowing for a (post)human self,
able to live and interact within communities of actual androids and other potential entities that
we cannot even imagine at this time. The figure of the android will be addressed as both a literal
entity existing in the form of humanoid robotics, and as a figurative entity found in fiction. In
this exploration I hope to find that the human is not put into crisis by the boundary blending generally
proposed by concepts like cyborg theory. Rather, we are opening ourselves up to the new
articulations that Judith Butler describes in Undoing Gender: “[it is necessary to keep] our very
notion of the ‘human’ open to a future articulation” because that openness is “essential to the
project of a critical international human rights discourse and politics” (222).
CRITICAL POSTHUMANISM
Some of the keywords introduced so far for this project include “transhumanism,”
“posthumanism,” and “cyborg.” Each of these requires some explanation before fitting within a
discussion of science fiction and androids. Transhumanism generally refers to a philosophy, a
world-view regarding the direction of technology development and the nature of the human
condition, often associated with the Extropian movement (More, Relke, Wolfe). Posthumanism
can be described both as a literal entity, part of the future of the transhumanist philosophy,
and as a theoretical framework within cultural theory. While transhumanism and posthumanism
can be “cousins” of sorts, they both “[invite] a critical riposte from a position distinct from
speculative posthumanism or transhumanism: critical posthumanism” (Roden 29). Bart Simon
describes this confusion by positioning one as “popular posthumanism, or transhumanism” and
the other as “critical posthumanism,” with the phrasing attributed to Jill Didur (2). Both the
popular post/transhumanisms of Extropian thought and the critical response to such thinking
offer rich collections of work surrounding who and what we will potentially become in the
future. Representing the transhumanists are writers such as Max More, Nick Bostrom, and Ray
Kurzweil, to name a few,[1] although their views on how to approach the future are very different.
Some, like Bostrom, are sounding warnings while others, like Kurzweil, promise the coming
panacea from the bounties of science. Responding to fiction and philosophy, the critical
posthumanists are often represented by scholars like Donna Haraway, Katherine Hayles, and
Cary Wolfe.
A person who subscribes to the “transhumanist philosophy” is a person who believes in
improving the human condition – including the body, self and society – through technological
enhancement. There are many views about how best to reach such goals.
From biological and genomic enhancement to cybernetic appendages and exploration into
Artificial Life (biologically based, computationally-based, or some other base that we have not
yet conceived of), the Transhumanist believes that what defines “human” is always in a state of
flux. In general, the transhumanist philosophy endorses the belief that what makes us “human”
is in a developmental state toward what could be called “posthuman.” In this transhumanist
philosophy, to become posthuman is to be effectively enhanced through technological means so
that one has surpassed the “limitations that define the less desirable aspects of the ‘human
condition’” (More 4).
Posthumanism as part of a transhumanist philosophy has a complex history, some of
which grew out of the celebration of humanism and a return to a romanticized vision of
technology from the Renaissance era. Humanism, as described by Brian Cooney, is “One of the
Great Ideas western culture inherited from the classical Greeks” (xx-xxi). This “religious
humanism” idealized traits that were distinctively human, one trait of which was the ability to be
“tool makers.” Humanism thrived as the species spread to the West and celebrated great feats of
technological inventions, as evidenced at gatherings like the World’s Fair.

[1] Relke asks, “is it any wonder [that] Extropianism, with its relentlessly optimistic focus on the future, is
increasingly popular among techno-savvy young men?” (81). She reminds us that the Extropian movement, and
other transhumanists, are typically white, privileged men… suggesting that the future will only be populated by
more of the same, but with cybernetics.
With the postmodern era, quite the backlash arose against the imperialist attitudes of
traditional Western humanist thinking. Of course, technological developments continued and
thinking about improving the human condition found a voice again, but this time with the “post”
– inferring the beyond or the after humanism. Arthur Kroker believes that “technology functions
to deliver us to a future that is distinctly posthuman in its radical undermining of all the previous
markers of the ‘human’ – unitary species-logic, private subjectivity, hierarchical knowledge –
with human beings as the universal value-standard of all events” (5). This temporal concept of
being after human literally includes technological developments that are not far off, including
but not limited to, sAI (strong artificial intelligence), human-like robotics, cloning and gene
therapy, space travel and many more possibilities. As both part of and instigators of these
changes, the human will be caught up in the changes as well – some believe this will be for the
betterment of humanity and earth as a whole.
Max More, in the Transhumanist Reader, argues that “becoming posthuman means
exceeding the limitations that define the less desirable aspects of the ‘human condition.’” For
More and others, like James Hughes and members of the Extropian Institute or Humanity+,[2]
these “less desirable aspects” include: disease, aging and inevitable death (More and Vita-More
4, “Mission”). Through developments in computer science and engineering, cognitive science,
AI and others, More believes that “posthuman beings would no longer suffer from disease,
aging, and inevitable death… They would have a vastly greater physical capability and freedom
of form… [They] would also have much greater cognitive capabilities and more refined
emotions” (4).

[2] With the many different organizations and think tanks dedicated to the movement toward a “future society,” there
are also multiplying definitions of transhumanism. For example, Humanity Plus (+) is an organization which is,
according to their website, “The world’s leading nonprofit dedicated to the ethical use of technology to extend
human capabilities.” Humanity + defines transhumanism as “The intellectual and cultural movement that affirms
the possibility and desirability of fundamentally improving the human condition through applied reason, especially
by developing and making widely available technologies to eliminate aging and to greatly enhance human
intellectual, physical, and psychological capacities.” In other words, they believe that humans have the ability to
better themselves by extending lives, improving memory, strength and other biological traits through, what they
claim to be, ethical use of technoscience. (“Transhumanist FAQ”)
Critical posthumanists want to resist the appeal of the utopian promises of transhumanist
thinking. The “qualitative shift in our thinking about what exactly is the basic unit of common
reference for our species,” Rosi Braidotti suggests, “raises serious questions as to the very
structures of our shared identity – as humans” (2). And when asking those questions, Diana
Relke wants to remind us that “is it any wonder [that] Extropianism, with its relentlessly
optimistic focus on the future, is increasingly popular among techno-savvy young men?” (81).
The Extropian movement, and other transhumanists, Relke points out, are typically white,
privileged men… suggesting that the future will only be populated by more of the same, but
with cybernetics. And if this is the case, Braidotti is correct in reminding us that these
“Discourses and representations of the non-human, the inhumane and the posthuman proliferate
and overlap in our globalized, technologically mediated societies” (2).
Clearly, the question of who determines the “less desirable aspects of the human
condition” is a perfectly reasonable one, but it is often side-stepped by transhumanists who
simply say technology will solve the problem. In a future with the great bounties of technology
realized, there would be no need to worry about poverty or hunger or the supposed “digital
divide” because all would be made equal. The Star Trek series is often criticized on a point like
this. Critics argue that the humanist vision of the shows’ creator is an impossible “wet dream” of
the future (Relke). Yet others argue that there is enough “techno-skepticism” in Star Trek to
argue for its continuing relevance in a critical posthumanist discussion (Relke). As Jason Eberl
and Kevin Decker explain, “Rather than mere escapism, all the incarnations of Star Trek ought to
be seen as an entertaining, edifying preparation for thinking through the problems that the future
will undoubtedly throw at us” (xvi).
Part of this posthuman future will apparently include living side by side with clones and
humanoid robotics, potentially housed with AI. I follow those who assert that such a future
requires that we radically rethink laws and social structures so that we may flourish together with
these entities. As part of our tool making history, futurist Jim Dator is “increasingly convinced
that we humans are inevitably in the process of creating entities that mimic, extend, and in many
ways exceed our own mental, behavioral, and emotional capabilities” (51). And in that future,
“humanity is about to be surrounded by all kinds of novel intelligent beings that will demand,
and may or may not receive, our respect and admiration” (52). For Dator and others like
Bostrom it will be crucial to move ahead with developments in a thoughtful way that takes the
agent-status of others not for granted in a species-sense, but rather in a way that can allow for
entities that assume and/or exceed our own cognitive and emotional capacities.
The belief that artificial entities, in particular AI, will outpace our own abilities is often
framed in the discussion of what has come to be called “The Singularity.” Despite popular
belief, Kurzweil was not the inventor of the phrase “The Singularity.” Rather, it grew from a
number of conferences surrounding philosophers and programmers considering the exponential
growth of computer programming speed. While Kurzweil is one of the most well-known for
promoting knowledge about the singularity, other influential figures include SF writer Vernor
Vinge, John von Neumann, Eliezer Yudkowsky, and Ray Solomonoff. All worked to increase
knowledge surrounding the idea of “accelerating change” or the “intelligence explosion”
(Chalmers).
The concept of the Singularity was first proposed by mathematician I.J. Good in 1965,
who envisioned a world in which “smart machines would design even more intelligent
machines” (“Scientists Worry” Markoff). This notion has gained growing attention as designers
in Silicon Valley and beyond are working harder to unveil smart cars, smart phones, and
disembodied AI. Dubbed “The Singularity” by computer scientist and science fiction writer
Vernor Vinge, this “intelligence boom” is most commonly associated with the work of Kurzweil
due to his popular science celebrity status. Kurzweil, expanding on “Moore’s Law” (a
description of exponential growth in computer processing power),[3] famously predicted in 2005
that the “arrival” of posthuman evolution would occur in 2045 (“Coming Superbrain” Markoff).
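To give a rough sense of the arithmetic behind that prediction, here is a minimal sketch of my own (not a calculation drawn from Kurzweil or Markoff), assuming the commonly cited two-year doubling period of Moore’s Law. If computing capacity doubles every two years, capacity after t years can be written as

\[ C(t) = C_{0} \cdot 2^{t/2} \]

where C_0 is the starting capacity. On that assumption, the forty years between the 2005 prediction and the projected 2045 “arrival” allow roughly twenty doublings, or about a 2^20 (roughly million-fold) increase. It is this kind of compounding, rather than any single breakthrough, that gives the “accelerating change” argument its rhetorical force.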
Since then there has been growing interest in the concept of the Singularity. Dr. Horvitz
explains, “Technologists are providing almost religious visions, and their ideas are resonating in
some ways with the same idea of the Rapture” (“Scientists Worry” Markoff). Others, like Vinge,
maintain a sort of agnostic view of the supposedly coming singularity, believing that we cannot
even begin to imagine such a future (Roden).

[3] Dr. Gordon Moore, co-founder of Intel and conceiver of “Moore’s Law,” should not be confused with computer
scientist and futurist Max More, co-editor and contributor to the Transhumanist Reader.
Despite the disparate views within the futurist camps, there are some things most agree
upon. For example, many futurists who believe in the coming Singularity would agree with
More and, while often conflated with posthumanism, would adopt the label of transhumanism in
which the posthuman is a future iteration of the human. For More, this posthuman is something
transhumanists are always striving for – it is a philosophy of improvement without end. This
brand of posthumanism, also referred to as transhumanism or in its most extreme, Extropianism,[4]
adopts utopian beliefs about the future. Transhumanism, or at least More’s version of it,
according to Mervyn Bendle, “leaves little or no room for doubt”: “Disbelief is suspended and
centuries of hard-won experience and intense critical thinking about science, technology and the
social formation within which they flourish are swept aside by an uncritical ‘will-to-believe’
propositions about the possibilities of science and technology that are often preposterous, and
even undesirable” (50). For Bendle, the transhumanist future, full of posthuman entities, is
something to be dubious of, and he, among others, wonders what that will mean for the human.
Similarly, Eugene Thacker fears that the extropian vision of the future will be a
significant step backward for the liberal humanist subject: “Like the Enlightenment’s view of
science and technology, extropians also take technological development as inevitable progress
for the human. The technologies of robotics, nanotech, cryonics, and neural nets all offer modes
of enhancing, augmenting, and improving the human condition” (74). The humanist vision
places certain aspects of the human as special or part of an “essential humanness”: “… like the
types of humanisms associated with the Enlightenment, the humanism of extropianism places at
its center certain unique qualities of the human – self-awareness, consciousness and reflection,
self direction and development, the capacity for scientific and technological progress, and the
valuation of rational thought” (74). Even instances of extreme posthuman visions in fiction or
advertising, some argue, are still embedded with the humanist ideology – like never being able to
separate the “humanism” from the “post” (Badmington; N. Campbell; Hird and Roberts;
Pordzik). Roberts describes this as finding “a humanist text in a posthuman guise whose work is
to affirm the immutable, essential nature of the human” (n.p.).

[4] Extropianism, according to Bendle “sees its (rather daunting) mission as combating the entropic (i.e. disorderly)
tendencies of the universe, especially where these impact on human well-being and potential.”
This return to a humanist celebration of the “human” may seem promising, but to some,
this return also raises questions for a future with nonhuman entities. Braidotti, for example,
argues that “the posthuman condition introduces a qualitative shift in our thinking about what
exactly is the basic unit of common reference for our species, our polity and our relationship to
the other inhabitants of this planet” (1-2). Not only will our species come into question, but for
Braidotti, “This issue raises serious questions as to the very structures of our shared identity – as
humans – amidst the complexity of contemporary science, politics and international relations.
Discourses and representations of the non-human, the inhumane and the posthuman proliferate
and overlap in our globalized, technologically mediated societies” (2).
While some feel this return to exalting the humanist vision to be hopeful, the
technological developments enacted to achieve this utopian future seem to be simultaneously
dismantling, sometimes literally, the human body. Hayles’ book How We Became Posthuman,
for example, opens with her recounting a nightmare-like epiphany while reading Hans
Moravec’s thought provoking quasi-fictional philosophical text Mind Children (1990). In
Moravec’s work, he illustrates the possibility of “uploading” the human consciousness into a
computer. Hayles describes this process as achieved by a “robot surgeon [who] purees the
human brain in a kind of cranial liposuction, reading the information in each molecular layer as it
is stripped away” (1). Not just the human body, but our sense of self could be stripped away as
well. In fact, for some the posthuman future of the transhumanists will cede power to the
technology we create. For Bendle, Kurzweil’s particular vision is “an ideological misrecognition
of humanity’s relationship to technology. In a manner that fundamentally inverts this
relationship, posthumanism cedes to technology a determinism over human affairs that it does
not, cannot, and should not enjoy” (61).
For some, these relationships with technology could lead to an ideological loss of the
sense of self. Michelle Chilcoat explains that “the projected obsolescence of the body also
implied the loss of biological matter, traditionally viewed as the immovable or fixed material
upon which to construct gender differences and inscribe male privilege” (156). For Chilcoat, this
boundary breach goes right to the heart of humanism, including a threat to male privilege. For
others, this threatened boundary is explored in terms of “suffering” that can be inflicted upon the
human body as technology is not just embraced by the human but rather ruptures the human
(Miccoli).
While at first Hayles’ vision of “information losing its body” seems terrifying, as the human is
sucked into a blender and disembodied, she returns to an argument that expands the possibilities
of what it means to be human. “When Moravec imagines ‘you’ choosing to upload yourself into
a computer, thereby obtaining through technological mastery the ultimate privilege of
immortality,” Hayles writes, “he is not abandoning the autonomous liberal subject but is
expanding its prerogatives into the realm of the posthuman” (287). For Hayles, “the posthuman
offers resources for rethinking the articulation of humans with intelligent machines” (287).
Similarly, for Neil Badmington, even this most posthumanist vision of uploading consciousness
is not as much a threat to humanism as it at first seems. For Badmington, this imagery comes
“from the distinctly humanist matrix of Cartesian dualism. Humanism survives the apparent
apocalypse and, more worryingly, fools many into thinking that it has perished. Rumors of its
death are greatly exaggerated” (11).
Indeed, it seems Hayles and others, when considering critical posthumanist thought, agree
that humanism will remain even in a future that abandons species-specific definitions of the
human. “Posthumanism,” for Wolfe, “isn’t posthuman at all – in the sense of being ‘after’ our
embodiment has been transcended – but it is only posthumanist, in the sense that it opposes the
fantasies of disembodiment and autonomy, inherited from humanism itself” (xv). But at the
same time, Braidotti reminds us that “Not all of us can say, with any degree of certainty, that we
have always been human, or that we are only that. Some of us are not even considered fully
human now, let alone at previous moments of Western social, political and scientific history” (1).
Not simply a utopian vision or philosophy for the future (as the extropians and
transhumanists would have it), thinking of the posthuman is also used as a tool for critical
literary analysis. For Hayles, this means “serious consideration needs to be given to how certain
characteristics associated with the liberal subject, especially agency and choice, can be
articulated within a posthuman context” (5). It is a discussion of the nonhuman versus the
human as a way to better understand where the Self is located. In a way, a posthuman reading is
a way to “uncover those uncanny moments at which things start to drift, of reading humanism in
a certain way, against itself and the grain” (Badmington 19). For many, this means that “the
"post-" of posthumanism does not (and, moreover, cannot) mark or make an absolute break from
the legacy of humanism" (Badmington 21). And while humanism will continue to be alive and
well in a world with “posts,” a posthuman theory “can also help us re-think the basic tenets [6]
of our interaction with both human and non-human agents on a planetary scale” (Braidotti 5-6).
While the transhumanists bring to the table their creative vision of what will possibly
come, a posthumanist reading of existing texts and artifacts offers discussion points for a future
of ethics with posthuman entities. For Braidotti, that does not mean abandoning our humanist
roots: “to be posthuman does not mean to be indifferent to the humans, or to be de-humanized.
On the contrary, it rather implies a new way of combining ethical values with the well-being of
an enlarged sense of community, which includes one’s territorial or environmental interconnections” (190). And for thinkers like Stefan Herbrechter and Ivan Callus, this “new
combination” of critical posthumanism “aims to open up possibilities for alternatives to the
constraints of humanism as a system of values” (107).
THE CYBORG
Because critical posthumanism explores the boundaries between the human self and
technology, discussion of the cyborg is crucial to this project. The meaning and imagery
surrounding the cyborg has changed as both science and science fiction have tried to imagine a
world with posthuman entities. Similarly to the transhuman and posthuman imagery, the cyborg
is both a literal and a figurative entity. On the one hand, the cyborg could be described as a
human who partakes in the transhumanist philosophy and has augmented their body with
technological parts. On the other hand, the cyborg is described most famously by Haraway in
the Cyborg Manifesto (1987) as an entity that is a blending of the previously diametrically
opposed concepts. As summarized by Chilcoat,
[Haraway’s] ‘cyborg imagery can suggest a way out of the maze of dualisms in
which we have explained our bodies and our tools to ourselves’ (181), because as
‘a hybrid of machine and organism’ (149), the cyborg renders ‘thoroughly
ambiguous the difference between natural and artificial, mind and body, self-developing and externally designed’ (152). Thus, ‘sex and gender,’ formerly
standing in opposition to each other, like ‘natural and artificial,’ are brought
together, albeit in a relationship of ambiguity, via the cyborg. (157-158)
The cyborg, both literally and figuratively, offers a point of discussion surrounding the many
boundaries confronting humans. Exploring this subject from a distance alleviates the
anxiety surrounding boundary breaches of mind and body, nature and artificial, human and
nonhuman. Bendle describes this boundary breaking of androids and cyborgs as “transgressing
previously hermetically maintained boundaries between, for example, culture and nature, living
and dead, organism and machine, real and synthetic” (57). Indeed for Bendle, “considerable
misgivings” surround the transgressions of androids and cyborgs, “especially around questions
relating to their danger to human beings, their possession or not of souls, and their consequent
rights to life, freedom and self-determination” (57). Because I will turn my focus on the android,
distinguishing it from the cyborg, it is crucial to develop some differentiation between
Haraway’s cyborg and the figure of the android.
For the sake of understanding posthuman bodies, I understand a cyborg as an entity that
began as human and that subsequently was altered through cybernetic implants or other artificial
enhancements. This is not to say that once altered, a cyborg is no longer human; it just falls into
a different category, or sub-species, of human. An android, on the other hand, was never
biologically human. It/he/she was made, constructed, or otherwise emerged (in the case of a
strong AI).[5] The perfect illustration of these different entities is the difference between the
cyborg RoboCop and the android Data from Star Trek, who also happens to be housed with a
strong AI. Both are posthuman bodies as one is literally post-human while the other is in the
image of a human, with humanlike behavior. On a side note, it is true that Data far exceeds
human ability with his cognitive processing skills and physical strength, but the fact that he/it
was modeled by a human, in the image of a human qualifies him/it for a posthuman label.
Haraway’s vision of a cyborg is one that transgresses boundaries, used especially to
discuss gender, while the android seems to reify those boundaries. Like the cyborg of feminist
theory, the android is also interconnected with gender. In particular, Jennifer Robertson explores
the literal construction of androids and the impact of gender constructions. Robertson, in
paraphrasing Anne Balsamo’s work on fictional androids, writes that “The process of gendering
robots makes especially clear that gender belongs both to the order of the material body and the
social and discursive or semiotic systems within which bodies are embedded” (4).
Rather than simply being about a boundary subject, the cyborg can work within critical
posthumanist thought. “The figure of the cyborg – that embodied amalgam of the organic and
the technological…” according to Sharalyn Orbaugh, “confounds the modernist criteria for
subjectivity and, when featured in narrative, allows readers/viewers to think through the
ramifications of the changes we currently face” (436). In that exploration of potential
ramifications, Francesca Ferrando urges us to consider multidisciplinary approaches to prevent a
return to dualistic views: “Adopting such standpoints will allow humans to generate an empathic
approach, preventing them from turning the robot into their new symbolic other, and from falling
into the dualistic paradigm which has historically characterized Western hegemonic accounts,
articulated in opposites such as: male/female, white/black, human/machine, self/other” (16).

[5] It would be unwise to specify that an android or AI was created only by humans. There are several cases in fiction
in which the artificial entity makes up an entire alien race (for example, the Borg in Star Trek). I also don’t want to
rule out the possibility of artificial entities being made by another species.
THE ARTIFICE AND THE ANDROID
Taking a cue from Haraway, I want to explore the singular figure of the android as the
epitome of a posthuman entity. Not only is it made to appear and behave human in every way, it
will eventually be made to possess a human-like or superhuman AI. In fiction and in fact, the
android has been made to appear and behave human in every way. Two elements in particular
position the android as precarious within and among the boundaries of humanity: the body and
the mind. First, I want to explore briefly the definitions of the android’s physical construction.
The clone and the cyborg will be addressed later, so for now I note that they are not
exempt from the discussion of android theory.
With robots, androids and cyborgs populating screens and pages through history in many
forms, I want to start us off with a common vocabulary that is both inclusive and open to further
application in the scientific development of robotics. The common-knowledge
understanding of the android is a good place to start. In fact, “android” is often conflated with
“robot” or “cyborg” but each of these concepts is accompanied by a host of meanings and
implications. Although it’s impossible to present clear-cut, exclusive, and exhaustive definitions
surrounding these concepts, I will offer my own thinking and clarification to help categorize the
figure of the android of which I write – one which appears in both fiction and reality.
Beginning with one of the most common labels, “robot”: androids are, at their beginnings,
robots in the form of a human. Most often associated with the introduction of the word “robot”
to America is the 1920 science fiction play Rossum’s Universal Robots, or R.U.R., by Karel
Čapek. Čapek’s play tells the story of manufactured biological human slaves who revolted
against their masters. In Czech, the play’s original language, the literal translation of the word
“robot” is “worker.” These robots were creature machines in the shape of humans; they were
artificial people built as workers for their human owners. Although the slaves in R.U.R. were
biological and more akin to what we might call clones, Bendle explains, “the term ‘robot’
subsequently became predominantly applied to humanlike machines” (56). And as the
popularity of the word “robot” spread across the United States, “the robot quickly came to
assume an ideological role, representing the ever-present but feared Other (Disch 1998: 10)—the
oppressed workers, the exploited immigrant servants, the alien masses waiting to invade
America, and eventually the neglected housewife” (Bendle 56-57).
While the word “robot” may have come to America with the introduction of Čapek’s
play, others argue that the robot appeared in fiction long before that. Morton Klass argues that
the concept of a robot came long before Čapek’s play and he examines the “apparent reluctance
to refer to Mary Shelley’s creature as a robot” (173). Of course, the creature he is referring to
here is Mary Shelley’s infamous monster from her 1818 novel Frankenstein in which Dr.
Frankenstein creates a humanoid stitched together from corpses of other humans and brings it to
life. But even Klass is not comfortable keeping the concept of “robot” attached to
Frankenstein’s monster. Indeed, in the same space as he writes about the transformation from
Shelley’s robot to popular usage in science fiction to “metal, glass, and plastic” robots which are
more familiar today, he points out that “Mary Shelley’s creature was, one might argue to begin
with, actually the first android to appear in fiction, not the first robot” (emphasis added, 173).
Whether it was Čapek or Shelley who popularized the robot of flesh or metal, still others
argue that the concept of humanoid robots emerged much earlier. For example, Bendle reminds us
of the mythical roots of the concept: “the term ‘androides’ first appeared in English in 1727 in
connection with alchemical attempts to create an artificial human, itself an ancient idea
associated with the ‘golems’ of Jewish mythology” (57). Indeed, Eric C. Wilson writes of the
connection between the history of golem-making beginning in the thirteenth-century revision of
Genesis and the “conjunction between miracle and monstrosity” in Ridley Scott’s 1982 film
Blade Runner (31). But even acknowledging the connection to the golem, Bendle returns to the
argument that “Mary Shelley’s Frankenstein (1818) gave this idea its first and most spectacular
expression by combining it with the myth of Prometheus, and it has been widespread ever since”
(57).
Another part of the discussion surrounding androids and humanoid robots is the female
gendered robot. In tracing the history of the human artifice or robot, Allison DeFren builds upon
Laura Mulvey’s work about Pandora. From Greek mythology, Pandora was created by the Gods
in the form of a woman, arguably becoming the first humanoid creation: “the first in a long
history of femme-fatale androids – creatures in which ‘a beautiful surface that is appealing and
charming to man masks either an ‘interior’ that is mechanical or an outside that is deceitful’”
(DeFren 404). The image of the femme-fatale, originating from Pandora, extended into film with
the Maschinenmensch, literally “machine-human” in German, of Fritz Lang’s 1927 Metropolis.
DeFren takes note of the 1970s films Eve of Destruction and Bionic Woman and even the
more recent “bikini machines” in the Austin Powers films (404), as an illustration of what
Andreas Huyssen describes as an embodiment of “a complex process of projection and
displacement. The fears and perceptual anxieties emanating from ever more powerful machines
are recast and reconstructed in terms of the male fear of female sexuality, reflecting, in the
Freudian account, the male’s castration anxiety” (226). These humanoid robots with female
gendered bodies are generally labeled as “fembots.”
Robots, androids and fembots have far-reaching uses as metaphorical entities in
explorations of the human self, and much of the time these labels are used interchangeably. For
Geoff Simons, “robot” is used primarily as a metaphor to refer to human behavior: “People are
compared with robots so that particular conceptual or psychiatric points can be made. There is
no suggestion that people are really robots” (2). Although there may be no suggestion that
humans are actually robots in fiction, that does not alleviate the sociological impact of such
metaphors. Indeed, for Simons, the fictional account of robots “suggests also a new sociology of
robotic man: in popular culture slaves and robots have often been considered siblings. To be a
slave or a robot is, so the theme runs, to be diminished as a human being, and nothing would
diminish Homo sapiens more than to be outstripped by an artifact” (13).
Klass tries to simplify the discussion by first conflating androids with robots and then
arguing that the robot is “the manufactured equivalent of the human” which, while a bit awkward
to write, “it does have its uses” (172). As Klass explains: “I emphasize the word ‘equivalent’
because the term introduces an important anthropological dimension, that of the alien – the
person who in many societies is viewed as not of us, not truly human but only an equivalent of
the true human…. It is certainly legitimate, therefore, for the anthropologist to inquire into the
extent to which the manufactured equivalent of a human has reflected the particular perceptions
of the alien that is characteristic of Europe-derived societies” (172). He even avoids the
discussion of robot by writing: “I also emphasize the word ‘manufactured’ because it is the most
satisfactory solution I could find for the rather knotty problem of defining what is meant by
‘robot’” (172).
Science fiction film scholar J.P. Telotte is another writer exploring the fiction
surrounding humanoid robots/androids, but he chooses to avoid the labels by bundling them all
together into one category. Telotte chooses to focus on what he describes as the “image of
human artifice [which is] figured in the great array of robots, androids, and artificial beings
found throughout the history of science fiction film” (Replications 1). For Telotte, the figure of
the human artifice, in its many forms, “speaks, from [science fiction’s] earliest days to the
present, of the interactions between the human and the technological that lie at the very heart of
Science Fiction… this image measures out our changing attitudes toward science, technology,
and reason itself, as well as the shifting foundation beneath our conceptions of the self in the
twentieth century” (1).
If entities that meet such labels as “human artifice” or “manufactured equivalent of the
human” are considered androids, what then of the clone? While definitions are constantly
shifting, even otherwise species-specific boundaries appear to be crumbling. For the sake of this
discussion I believe that a clone should be considered a human-type, or biological, android.
Clones are biologically identical to their human model, but just because a clone is biologically
human, does not mean it is identical to its human “blueprint.” In fact, I would argue for a new
understanding of clones, separated into two separate categories. One category, as most often
presented in fiction, is the biologically manufactured human whose memories and identities have
also been “implanted” or otherwise “programmed.” This category of clone, I would describe as
a biological android. It was not given the chance to grow and develop within the world in some
“pre-adult” state. The other category of clone, one which was introduced into the world at a
“pre-adult” state, given no “pre-programmed” identity or memories, could be defined as a full
human. This category of clone is much closer to the concept of identical twins: the twins are
biologically identical, but not numerically identical, in that they have their own collection of lifelong memories, wants and desires. Because a clone in the first category, a biological android,
does not have the same experiences, wants and desires, it is not subjectively identical to the
human that spawned it. The important distinction comes in lived-experience versus given-experience.
Also, because the body of a biological android/clone is sufficiently man-made,
especially when it comes to memories, it/they should not be placed within the category of
human, but rather in a category with the Cylons in BSG and the Replicants in Blade Runner.
This “knotty problem” of classifying robot vs. android is not just limited to science
fiction, but it reaches far into the creation of real-world humanoid robotics. Consider, for
example, the marketing team of Oyedele, Hong and Minor at the University of Texas-Pan
American, who focus their efforts on human-robot interaction to better market robotics. In
opening their article, “Contextual Factors in the Appearance of Consumer Robots,” Oyedele,
Hong and Minor paraphrase Robopsychologists Alexander V. Libin and Elena V. Libin to define
a robot as “an automat with programmed behavior aimed at substituting for certain human
activities or to satisfy a person’s physical, psychological, or social needs” (Oyedele, et al. 624).[6]
In their opinion, and indeed in the opinion of many robotics developers, the robot is intended to
assist with human activities and is able “to satisfy a person’s physical, psychological, or social
needs” (624).
In essence, to summarize the many definitions of robot, in my words: “Robot is inclusive
of many mechanized entities, some of which appear humanoid.” But a robot does not need to
appear human; it must simply emulate some human behavior and/or activity, from labor to social
or interpersonal interaction. In connection to our consideration of the android, an android may
6 I’m assuming there is a translation discrepancy in this quote. Rather than the word “automat,” I believe the writers intended “automaton.” To my understanding, an “automat” is the early 1900s version of a vending machine, while an “automaton” is a self-operating machine or robot.
be a robot, in the sense of being made entirely of mechanical parts, but it must not only behave through autonomous means; it must also appear human in most, if not every, way.7
In their 2006 work, Karl MacDorman and Hiroshi Ishiguro seek to clarify robot versus
android: “Very human like robots are often referred to as androids in the robotics literature to
distinguish them from the mechanical-looking humanoid robots” (“Uncanny Advantage” 298),
but they are quick to explain that there are some grey areas. What, they wonder, is meant by
“very human like”? MacDorman and Ishiguro go to further lengths to help clarify their
description by adding a footnote, explaining: “Although from their etymology android and
humanoid are synonymous with ‘resembling man,’ in robotics android generally refers to robots
that aspire to a degree of likeness that transcends gross morphology. It is not enough for an
android to have a head, two arms, a torso, and perhaps two legs. It must be humanlike down to
the look and feel of the skin, teeth and hair. Its behavior should also approximate human
behavior. People should be able to subconsciously respond to the android as human” (“Uncanny
Advantage” 298). For MacDorman and Ishiguro, an android is not simply a robot in humanoid form; it must appear very humanlike and behave in a humanlike fashion. Both these elements, appearance and behavior, assist in the anthropomorphic interaction that leads to the oft-cited
word-for-word definition that robotics designers seem to have settled upon for android: “an
artificial system that has humanlike behavior and appearance and is capable of sustaining natural
relationships with people” (Ishiguro; MacDorman, et al., Android Science.com).
To return to “android,” an examination of the root of the word itself may help. Although
the word “android” suggests a primarily male figure based on the Greek root ἀνδρ- meaning ‘man’ combined with the suffix “-oid,” or “in the form of,” it is often used to refer to
7 What I mean by “behaving through autonomous means” does, in my opinion, include the presence of an artificial intelligence, a strong one at that.
genderless humanoid robots, as well as the more anatomically explicit male, female, un-sexed, or anything-in-between robots. But for the sake of consistency and of including male, female (sometimes referred to as “gynoids”), and un-gendered humanoid robots, “android” will be used throughout. This is not a move to efface the complications of gender, which I discuss in chapter four; however, it is quite common in fiction to refer to both male and female synthetic humanoids
as “androids.” Even in reference to existing humanoid robots, most roboticists use “android.”
Perhaps this signal of the overwhelming domination of the male descriptor is telling of the
gender-domination in robotics… even as many of the current robotics designs are being
developed with the female image as a model.
When discussing androids, whether male or female, extremely life-like or more robotic in
design, it is safe to assume that one is referring to a humanoid robot. Robertson describes the
primary categories of androids: “There are basically two categories of humanoid robots with
respect to their gendered embodiment: those designed to ‘pass’ as human and those whose
overall shape bears some resemblance to human morphology” (15). Those made to “pass” can
easily be pictured as the replicants from Blade Runner, for example, whereas the other category
would include a more clearly robotic entity, like C3PO from Star Wars. One entity, the
replicant, cannot be visually defined as a robot or non-human, while C3PO is clearly robotic on
sight. Considering Robertson’s categorization, the entities made to pass within the human
population are the type of android I am most interested in here. Roboticists have already created
androids that bear some basic resemblance to human morphology, but at this time those are less
of a threat to notions of selfhood than fully-synthetic-and-indistinguishable-from-humans
androids.
Another way to consider androids is in terms of their functionality. Functionality-based androids are those created with the primary purpose of fulfilling tasks, most likely in the home, for their human owners. I say “most likely in the home” here because functional
robots are a commonplace device in factories and have been used for efficiently manufacturing
products for humans or completing other tasks, like performing surgery, since the 1980s.
However, these robots are designed with few human-like features in mind. Their purpose is not to interact with humans; hence, their embodiment is less of a concern for designers. In the case of
androids, behaviorists have been exploring the social benefits of interacting with humanoid
robots in the home rather than robot-like robots. In fact, androids based on functionality are
quite popular in robotics designs today and are making the fiction more tangible every day.
Familiar designs that can be recalled to envision the functional android include C3PO from Star Wars, Bender from Futurama, Rosie from The Jetsons, and Sonny from the film I, Robot, to name a few. All were designed with some human morphology in mind, but their figure and form serve functionality above all else.
While the androids cited above exhibit the obvious reminder that they are synthetic
beings through their very construction (we can see their inner-workings to some extent, their
exoskeleton, if there is one, is clearly synthetic, etc.), there is another kind of android that
appears in fiction: the very-human-like, or social, android, which Robertson refers to as
“passing” robots.8 Social androids became well known for their presence in narratives of paranoia, misidentification, or Pinocchio-like stories (in which the robot becomes “a real boy”). The androids in Philip K. Dick’s Do Androids Dream of Electric Sheep?, the inspiration for
8 “Social android” is my own terminology. I choose the word social because this particular trend in android development aims to make robots that interact with humans in a seamless manner, eliciting no uncanny reactions, and that can instead be described as social companions in design.
Blade Runner, and the Replicants of the film itself are classic examples of social androids. The Cylons in the reimagined Battlestar Galactica are another example. Whether biological or robotic in construction, androids of this category are identified by the fact that they are man-made and outfitted with an artificial intelligence that rivals, if not surpasses, our own.
This classic image of the replicant is often called upon to begin theorizing about a future
with such entities. Judith Kerman, in her introduction to Retrofitting Blade Runner writes,
“Forty-five to fifty years into the future, it is possible that genetic engineering and computer
science will have created the potential for new kinds of people or intelligences, entities
physically and emotionally different from historic humanity but who are arguably entitled to be
considered persons. Blade Runner considers what it will mean morally, technologically and
politically to live in that future” (1). Even though Kerman was writing in 1991, her words are
still applicable today as technology develops exponentially.9
An android in the social category may or may not even know it is synthetic – the Cylons in the 2004 reimagining of Battlestar Galactica are examples of this kind of story surrounding the
human-like android. Data from Star Trek is another example of a very human-like android, but
his case is interesting because even though many of the narratives surrounding Data are about his
struggles with becoming more human-like, he is always marked as non-human with his metallic
skin and yellow eyes. Social androids almost always have strong artificial intelligence and are sometimes enhanced with emotion simulation. This type of android
9 More information about exponentially accelerating technological developments (based on the Law of Accelerating Returns) can be found in Kurzweil’s work. In particular, in his book The Singularity is Near: When Humans Transcend Biology, Kurzweil postulates that technological advancements will begin to happen so rapidly and with such tremendous changes that human society will not be able to keep up, and society will be divided, with the hyper-advanced and cybernetically enhanced on one end and those with no comprehension of the technology on the other.
is what roboticists and AI programmers have in mind when designing the “ideal” robotic
companion.
The android presents a body that is both human-like and inhuman. It is human-like in
that it is made specifically to look as human as possible. It is made to appear human, either
through a biological process like cloning and genetic engineering or by purely synthetic means.
Even though it looks human, the fact that it is made by humanity, it is automatically assigned
status as a commodity – it is owned by its human or considered simply property. But as we shall
see, science fiction has been challenging this automatic assignment of property status. By adding
strong artificial intelligence and self-awareness, as is one of the primary goals of AI
development, the android becomes an entity of conflict, an “evocative object,” to borrow
Turkle’s description of the computer in the 1980s. Boundaries otherwise clearly defined are
becoming blurred in the figure of the android. Non-biological now appears biological.
Intelligence appears to be at work in otherwise inanimate entities.10 An android is not simply a
mechanical wonder that replaces human effort, or assists human activity. In 2004 Libin and
Libin shared their thoughts regarding the future of human-robot interaction: “From a
psychological point of view, robots are capable of playing different roles, appearing as a human
companion, educator, explorer, entertainer, rehabilitation or medical assistant, and even
psychological therapist” (1790). Indeed, for Libin and Libin, not only are robots capable of
playing these roles for humans: “Artificial creatures are rapidly becoming good companions,
helping people to cope with a wide range of disabilities, loneliness, depression, and other
negative states” (1792).
10 Although our science fact has yet to catch up with our fiction (don’t underestimate this – our science is further along than is generally imagined), I feel now is the time to critically think about the entities being created and consider their impact on our previously static boundaries of human and person.
Artificial creatures can come in many shapes and sizes, but humanoid artificial entities, a.k.a. androids, are believed to have an important role in the social sciences due to their very human-like appearance. For a particularly well-known team of roboticists in Japan, MacDorman and Ishiguro (inventors of the “robotic doppelganger” Geminoid), “An android is defined to be
an artificial system designed with the ultimate goal of being indistinguishable from humans in its
external appearance and behavior” (“Uncanny Advantage” 298-99). And this “ultimate goal” is
desirable for MacDorman and Ishiguro for the sake of investigating human behavior (“Uncanny
Advantage” 321). A team of sociologists put the embodiment of a humanoid robot to the test in an experiment comparing an embodied robot with a video of a robot. They found that
“interacting with the embodied robot was a more compelling experience for participants and
elicited more anthropomorphic interaction and attributions. Participants spent more time with the
robot, liked it better, attributed to it stronger, more positive personality traits, and said it was
more lifelike” (Kiesler, et al. 177). Their results corroborate MacDorman and Ishiguro’s
insistence on using embodied, life-like robots in sociological research.
Considering the popular cultural portrayal of androids run amok, it is no surprise that
MacDorman and Ishiguro feel defensive about their work and continue to justify their designs:
“Some researchers are concerned about the general public’s acceptance of androids, which have
often been presented ominously in science fiction as human replacements. The media’s tendency
to sensationalize perceived dangers has the potential not only to undermine funding for android
research but for other areas of robotics” (299).
SCIENCE FICTION AND SCIENCE FACT MEET
Robotics and AI development have come so far with little theoretical oversight, but there
is a growing social anxiety about their creation. While it is true that much of the fiction
surrounding androids and AI tells of the horrors of the “robotic threat,” I fully acknowledge that human-like robots are coming and believe that more optimistic answers to living with androids are to be found within the popular texts of science fiction. Of course, most science fiction is known as
“soft” science fiction, as opposed to “hard” science fiction. Soft science fiction is fiction that is
written with little regard for the actual science behind its stories – it is more concerned
with either the social commentary surrounding the topic or the “wow” factor behind the imagery
and stories – pieces like Ray Bradbury’s The Martian Chronicles (1950) or the Stargate
franchise would fit in the “soft” category. Hard science fiction is considered to be more true to the actual science and discovery of the universe – Arthur C. Clarke and Robert A. Heinlein are considered hard science fiction writers. Many of the pieces I focus on here do fit in the “soft”
science fiction category – from Battlestar Galactica to Fido, these are pieces concerned with the
“what-ifs” behind the fantastic ideas, loosely inspired by science. This does not, however, mean that they carry any less weight for this discussion.
Fiction is my starting point for many reasons. First of all, science fiction, especially in
the form of film, is accessible to everyone and has become a mass media phenomenon with
growing momentum in popular culture with the reimagining of classics like Star Trek: The
Original Series, Battlestar Galactica, and RoboCop, to name a few. The fact that science fiction
has entered the spotlight more prominently over the past few decades is important to
understanding a popular shift in interest toward a “futuristic” world. Because science fiction is
easily accessible, it helps put into perspective how the everyman thinks about and/or is
introduced to concepts of science, in this case, concepts about the artificial human.
Our popular fiction offers an insight into not only how most people think about artificial
entities, but also implies how we will deal with dilemmas surrounding humanoid robots. Elaine
Graham, author of Representations of the Post/human: Monsters, Aliens and Others in Popular
Culture, explains the role that science fiction stories play in that exploration: “[Science fiction
stories represent] the shifting paradigms of humanity, nature and technology [which is] always a
culturally mediated activity, whether it takes place in straightforward fictitious narratives, or
scientific/factual reportage” (10). If it is true that, as Graham says, we are facing “radical
reappraisals of the future of the human species” in our biotechnological and digital age (10), then
the replications of the human species, particularly through synthetic means, are implicated in that
discussion, and so therefore are the rights and expectations afforded to those post/human bodies.
Fiction helps us understand those radical reappraisals. For Marilyn Gwaltney, “Thinking about
the moral status of androids gives us a test case, a model, that we are emotionally removed from,
for thinking about the moral status of different stages of human life and the relationship of those
states to each other. Reflection on the moral status of the android helps us to think more
dispassionately about just what qualities of human life are required for the presence of
personhood” (32). In other words, talking about the android gives us an opportunity to step outside of our human selves and consider personhood in a more species- or biologically-neutral manner.
Another very important reason to turn to fiction now is that fiction offers a rich breeding
ground for thinking and discussing potential worlds and social interaction. This is not to say that
fiction is prophetic; in many cases it is far-fetched and near-impossible, but the characters
involved (human or otherwise) offer explorations into human nature and our many emotions,
reactions to and interactions with otherwise impossible entities. Our imaginations are fertile
grounds for “what-if” scenarios. The best way to plan and prepare for a future is to consider all
the possibilities – the beautiful and the apocalyptic. Elisabeth Anne Leonard, a science fiction
theorist, explains that “Because the fantastic [in this case mostly Science Fiction, but also
Horror] is not ‘high’ culture but ‘popular,’ what it has to say about race is especially significant;
it can represent or speak to the racial fears of a large group” (3). Ursula K. Le Guin reminds us
of her own understanding of the political and social power that fiction contains: “All fiction has
ethical, political, and social weight, and sometimes the works that weigh the heaviest are those
apparently fluffy or escapist fictions whose authors declare themselves ‘above politics,’ ‘just
entertainers,’ and so on” (199). Indeed, for the sake of a critical posthumanist reading, science
fiction is well suited to such discussion. For Herbrechter and Callus, “What makes science
fiction such a powerful genre and, ironically and unintentionally, such a strong ally for critical
posthumanism is the fictional indulgence in the desires and anxieties of ‘becoming posthuman’
while remaining in the ultimate safety of a fictional framework” (104).
Lastly, aside from providing a popular culture vision of androids and AI, I choose fiction
as my point of interest because robots, androids, and other artificial entities from fiction are
notorious for inspiring the actual creation of such entities and even the language surrounding
them. The boundaries of current science are always being challenged by what the fiction writers
imagine. Androids are no exception. For example, household robotics designer and engineer
Cynthia Breazeal tells the online and print editorial magazine Fast Company how Star Wars was
the primary inspiration for her work: “In many ways those droids were full-fledged characters;
they cared about people. That was what, I think, really sparked my imagination” (Greenfield).
CHAPTER TWO:
ANDROIDS AND PHILOSOPHY, METAPHYSICS, AND ETHICS
“I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it.
My mind is going. There is no question about it. I can feel it. I can feel it. I can
feel it. I'm a... fraid. Good afternoon, gentlemen. I am a HAL 9000 computer. I
became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January
1992. My instructor was Mr. Langley, and he taught me to sing a song. If you'd
like to hear it I can sing it for you.”
-- HAL, 2001: A Space Odyssey (1968).
INTRODUCTION
Considering a future with androids begins with an understanding of the two basic
components that make up these entities and afford them their troubled place with humanity.
These components can be distinguished simply as Artificial Intelligence (AI) on the one hand
and the Body on the other – parallel to the mind-body dualism of Cartesian thought. While these
concepts are introduced in Chapter One by way of defining the android, the remainder of this
work will expand upon them and explore the implications of each component through popular
culture, fiction, and existing robotics and AI theory. This chapter explores the existence of the
disembodied AI (both fictional and actual) as a way to consider the mental component of the
android. The second component of the artificial entity, explored in Chapter Three, is the body,
the physical form in which the AI may be housed. Although this physical embodiment is
generally what defines the android as separate from other artificial entities, in that the humanoid
form is its primary defining characteristic, without the AI or conscious thinking part, the robotic
or otherwise synthetic humanoid body poses little threat to concepts of humanity. Once a
mechanical or biological synthetic humanoid body is considered to have strong AI (sAI), it is therefore believed to be acting of its own volition, and the entity as a whole becomes the epitome of this
inquiry.
The concept of AI has been present through history as the question of reproducing life
mechanically. Gianmarco Veruggio, known as a forerunner in the theory of roboethics,11 reminds
us of the historical context of our current android development: “If the eighteenth century was
driven by the desire to know if life could be reproduced mechanically, our contemporary
moment is characterized by a resolve that we can and will create intelligent machines (shifting
from the idea of human-as-machine to machine-as-human) that will be endowed with artificial
intelligence, consciousness, free will, and autonomy” (Shaw-Garlock 3). Roboethics is one of
several fields of inquiry that promote and explore the moral questions at play within AI.
Robertson describes one of these many issues: “What has yet to be broached among roboticists
and policy-makers is a serious and sweeping discussion about the existential, ethical, humane,
social and interpersonal limits of the safety, comfort and convenience associated with the
‘robotic lifestyle’ advocated in Innovation 25 [a policy in Japan pushing for the incorporation of
humanoid robots in households] and in the mass media” (29). Part of this future “robotic
lifestyle” will include living with disembodied artificial entities.
Our current incarnations of AI, while generally not considered to fit into a category of
“strong” intelligence, give a preview of what is to come. Here I begin with an examination of
the popular culture depicting contemporary AI. Two programs in particular have garnered
attention from the general public – Deep Blue, a chess-playing program from the 1990s, and
11 “Roboethics” is used to discuss everything relating to robot ethics, an exploration of the social and ethical implications of robotics development. Gianmarco Veruggio, together with coauthor Fiorella Operto, coined the term roboethics with their symposium on Roboethics in 2004.
Watson, a 2011 contestant on Jeopardy! Both Deep Blue and Watson presented a disembodied
AI in competition with humans. These “man vs. machine” challenges in popular discourse
exemplify the growing conversation surrounding AI.
Before exploring the disembodied AI as a mind, I first define the difference between AI
as a free-floating essence versus a free-floating human essence. Using the Brain in a Vat (BIV)
allegory, I outline the many variants in fiction, from cyberpunk pieces like The Matrix (1999) to the Japanese anime Ghost in the Shell (1995). One version of the BIV asks how much of the human brain is required to maintain the mental functions of the mind. Using examples from Star
Trek and the 2014 remake of RoboCop, I consider how popular culture portrays the brain-body
connection. After clarifying the popular culture understanding of a disembodied human mind, I move on to conceptualizations of AI – as either strong AI (sAI) or weak AI (wAI).
Part of understanding sAI versus wAI lies in how “intelligence” itself is understood. Both Deep Blue and Watson gained popular celebrity as they performed in
competitions versus human contestants and thus intelligence was defined in terms of information
recall, strategy and calculation capacity. Deep Blue appeared twice on national television in two
world-famous chess matches against the reigning grandmaster at that time, Garry Kasparov.
Kasparov won the first match in 1996, but lost to the upgraded Deep Blue II just over a year
later, beginning a popular culture conversation about machine intelligence. While the reaction to
Deep Blue’s win against Kasparov generated a combination of concern and awe, the response
was generally lukewarm compared to the interest generated in 2011 when supercomputer Watson
won against the Jeopardy! world champions, Ken Jennings and Brad Rutter.
To conclude the chapter, I consider some of the moral implications associated with
awarding agent status to AI. Drawing upon the work of robo-ethicists Luciano Floridi and J.W.
Sanders, along with theories on anti-speciesism from Michael Tooley, I propose a framework for
considering artificial entities as moral agents eligible for personhood rights, which could include
Artificially Intelligent Minds (AIMs).
DEUS EX MACHINA: THE DISEMBODIED MIND
A discussion of AI should begin with an understanding of AI in its most basic form – the
disembodied mind. The most often recognized examples of the disembodied mind are seen in
the representations of HAL from 2001: A Space Odyssey (1968) and Skynet in the Terminator series (so far with a total of five films spanning from 1984 through the most recently announced title, Terminator Genisys, coming in 2015). Neither HAL nor Skynet need a physical form to
wreak their havoc. This is the supposedly conscious mind that powers what can be most easily
described as the thinking part of the entity. It is the power behind the actions and the part that
interacts with human agents as either a benevolent or malevolent entity, usually manifesting as a
voice over the loudspeaker or a face on the monitor, or in the case of the android, the mind
within the body.
This idea of a disembodied mind comes not just from science fiction but also is one of the
key tenets of improving human society through technology and is explored by both trans- and
posthumanists. For transhumanists, artificial intelligence and artificial general intelligence are
the first two technologies listed as part of the mission statement for improving the human
condition (“Mission”). Hayles explores AI as one of her three “interrelated stories” used to
structure How We Became Posthuman. For posthumanists like Hayles, AI fits in a discussion of
“how information lost its body,” as illustrated by Moravec’s example in Mind Children. Hayles’
discussion begins with the imagery of the information in a human mind losing its body, literally,
while AI epitomizes the idea of information without body to begin with – and not just
information, but ordered and “intelligent” information (2, 50).
How Information Lost its (Human) Body
AI is not the same as a “free-floating” human consciousness. While the disembodied
human mind can be imagined in different ways, some of which are similar to an AI, a
disembodied human consciousness is imagined as different from AI in several ways. One of the
most common conceptions of this disembodiment is in the form of a mind that has lost its body
but still maintains the functions of consciousness and mind through the operations of the
biological brain. Another common version of a disembodied human mind is a consciousness that
has been digitized and no longer needs to be embodied in any biological substrate, hence no
longer needs the brain. Hayles describes this as part of the posthuman view that “thinks of the
body as the original prosthesis” and that “embodiment in a biological substrate is seen as an
accident of history rather than an inevitability of life” (2).
The allegory of the mind losing its body (not necessarily the organ of brain) is most often
used in philosophy’s skeptical Brain in a Vat (BIV) argument, which calls into question our knowledge of an
external world. The BIV argument appears in many versions and is meant to explore beliefs
associated with skepticism and realism. In most cases, the brain literally exists in a vat with a
“mad scientist” who manipulates the brain by electrochemical or some other means so that the
“mind” within the brain believes it is living in a real world environment. In many BIV cases
there was never a full human body to house the organ, but the brain itself is crucial to the argument. The brain hosts the self or consciousness in a “vat” or other substrate able to sustain thought. In such a case, the body as a whole is no longer required – just the brain, which supposedly is all that is needed for the “thinking part” of humanity.
This hypothesis appears in fiction because the imagery is compelling for explorations of
consciousness. In many cases, the brain (and potentially the whole body) is needed to maintain
the power required to “run the human program.” This type of BIV appears in the Matrix trilogy
(1999-2003). Human thought and experience is lived out in a digital world while physical bodies
and brains are kept alive through the meddling of hyper-intelligent robots. In some BIV cases,
humans can will their consciousness in and out of the digital world using the brain and body as
sort of a base of operations from which they are able to “jack in” to the digital world. Fiction like The Lawnmower Man (1992) or a whole host of cyberpunk novels – William Gibson’s Neuromancer (1984) and Neal Stephenson’s Snow Crash (1992) are two of the most well-known
examples – explore the “mind (not just information) losing its body.”
While most BIV examples require the brain to be whole in order to maintain the integrity
of the self, other BIV-type stories in popular fiction explore how much of the brain is really
necessary to maintain sufficient “human consciousness” before the individual no longer coheres
to their original self. One particular popular cult-classic in anime fandom is perfect for this
example. The graphic novels, feature films, and television series that make up Masamune Shirow’s Ghost in the Shell world tell the story of many posthumans living together in a fictional Japanese society. There are humans augmented with cybernetic parts, what could be called cyborgs, but there are also “fully augmented” humans whose only remaining biological parts are the few cells left in their brain. Many of the Ghost in the Shell story lines
follow the character of Major Motoko Kusanagi (voiced by Atsuko Tanaka in Japanese and Mary
Elizabeth McGlynn for the English translation of Ghost in the Shell: Stand Alone Complex) who
was made “fully cybernetic” before birth – meaning that even before her human body was fully formed, parts of her brain were harvested and placed in a cybernetic/android body (Ghost in the Shell: Arise, 2013).
Figure 1: Major Kusanagi “Jacked In” (Screen shot Ghost in the Shell: Arise, 2013).
In the fictional world of Ghost in the Shell, the ghost “refers
to an individual's mind or essence of being, their soul perhaps” and is understood, according to
the fan-made wiki page as “what differentiates a human being from a robot. Regardless of how
much the biological is replaced with cybernetic, a ghost retains its humanity and individuality”
(“Ghosts & Shells”). It is the ghost that can be separated from an original biological body and
moved to a “shell,” a robotic mobile suit (also called “mecha”). Although the ghost may be
embodied within a human-like shell, the idea is that the human ghost can be transferred from one shell, or body, to another mobile suit. In depictions of the ghost, the “mind” is illustrated as
embodied within a virtual world (or virtual reality, VR) as an avatar when it “jacks in” to the
network or World Wide Web. Even Major Kusanagi, who has no original human body, is “embodied” in a naked human form while her humanoid-robotic body stays in the real
world (Figure 1). It seems that even in our popular imaginings of the disembodied mind, a body
must be included, even if it’s not physical.
While Masamune Shirow’s Ghost in the Shell world accepts that only very little of the
brain matter is required to maintain the integrity of the human ghost, some contemporary
American fiction explores the question of how much of the brain is really required and how
chemical changes can alter the self. One example is in the Star Trek: Deep Space Nine episode
“Life Support” (1995) in which Dr. Bashir (Alexander Siddig) is forced to remove damaged
parts of his patient’s brain after an accident. When his patient Vedek Bareil (Philip Anglim)
does not recover, but rather gets worse, Dr. Bashir is faced with a dilemma – remove more of the
original brain matter and replace it with synthetic parts or let Bareil die. Bashir concludes that
“the brain has a spark of life that can’t be replicated… If we begin to replace parts of Bareil’s
brain with artificial implants, that spark may be lost.” Due to forces outside of his control,
Bashir is pressured to go ahead with “positronic” implants, but afterward Bareil is not the same
as he was before the operation. Upon gaining consciousness, he still has his memories but, he
explains, “Everything’s different… When you touch me it doesn’t seem real… It’s more like the
distant memory of a touch.” As his patient worsens despite the operation, rather than replacing
the other half of Bareil’s damaged brain, Dr. Bashir chooses not to “remove whatever shred of
humanity Bareil has left.” Bareil’s “spark” has faded and he eventually dies. For some critics,
this episode confirms that Dr. Bashir believes that “who you are apparently depends on the stuff
out of which you are made. Change that stuff, and you cease to exist” (Schick 217).
The 2014 remake of RoboCop tells a similar story. After a devastating accident, Detroit
police officer Alex (Joel Kinnaman) is left with only his lungs, his head (brain included), and a
single hand. What little biological material he has left is placed into a robotic body and even his
brain is altered with computer chips (Figure 2). At one point, his physician, Dr. Norton (Gary
Oldman) is forced to “fix him” as Alex has the equivalent of a panic attack upon seeing “his own
crime scene” – the moment of the accident. Until this moment, Alex has seemed very much like
his original self, he knows and loves his son and wife, regardless of the chips in his brain. It is
not until neurochemical changes take place, that Alex loses his sense of self. Dr. Norton
describes consciousness as “nothing more than the processing of information” and he feels Alex
can be “fixed” with neurochemical changes. Despite protests from his nurse that “You’ll be
taking away his emotions… His ability to feel anything,”12 Dr. Norton goes ahead and deprives
Alex’s brain of dopamine and noradrenalin. After the procedure, Alex declares “I feel fine, Doctor,” but his wife knows he is not the same: “I looked in his eyes and I couldn’t see my husband.” In both the “Life Support” episode and RoboCop, the human brain can only be altered so far before the self is no longer the same. In the case of “Life Support,” Dr. Bashir determines that point to be the physical removal of “too much” brain matter, but in the case of RoboCop, the change from original self to “not the same” occurs with neurochemical changes, not the alteration with cybernetics.
Figure 2: “It tastes like peanut butter,” Alex says during a procedure on his exposed brain. (Screen shot RoboCop, 2014.)
Aside from the examples above that explore the biological requirements for human
consciousness, the other most popular imagery surrounding such explorations appears not only in fiction but is also presented by futurists. In this scenario the mind loses its
matter altogether in a process called Whole Brain Emulation (WBE). While the BIV requires
biological bits to maintain the continued consciousness, the WBE scenario posits a purely software-based version of awareness which leaves the human substrate behind. This
imagery is much like Moravec’s imagery from Mind Children that was so eerie to Hayles, or
12 Interestingly, there are times when the doctor is accused of robbing Alex of his free will, implying that it’s not just the brain that makes the human self, but also the ability to have and exercise free will.
what Kurzweil imagines “brain uploading” will/would be like. In this scenario, the idea is that
because the human body degrades over time while the consciousness seems untouched by time, the thinking part can go on without any biological form.
This variant of the BIV argument appears in some fiction which removes the body and
brain entirely, eliminating any requirement of a “vat” to sustain the consciousness. In these
examples, the consciousness removed from the rest of the body is then able to float in a digital
world with no physical manifestation required for interaction with some sort of digital
environment. The 1993 film Ghost in the Machine, the 1995 Ghost in the Shell film and its associated 2002 and 2013 spin-offs (Stand Alone Complex and Arise), along with the 2014 Transcendence, are examples of a brain without a vat which regularly appear in contemporary
popular culture. One example of particular note is the Japanese anime Serial Experiments Lain
(1998), in which the main character, a fourteen-year-old girl, gets wrapped up in a mystery
surrounding the apparent suicide of her classmates. The children have been jacking into the
“Wired,” the show’s equivalent of the World Wide Web, to participate in online games. Little
do they know they are being manipulated by an evil mastermind, and they end up killing themselves
in an attempt to “abandon the flesh” and live on in the Wired. Some of their consciousness does
indeed live on, but not all, illustrating that this form of consciousness uploading is risky.
For Moravec, the information from the brain (in other words, the “mind”)13 is removed
from the biological matter and transplanted into a computer system or other non-biological
substrate without risk, since for Moravec the mind and information are synonymous.
Consciousness, according to roboticist and AI theorist Rodney Brooks, is simply a “cheap trick”
13 While I don’t personally agree with the conflation of information and mind, it is often accepted by futurists. If the mind required something more than information, or some “essence,” the futurists would not as easily be able to argue for brain uploading.
that is “an emergent property that increases the functionality of the system [the body] but is not
part of the system’s essential architecture” (Hayles 238). In such a posthumanist argument, the
“I” that I feel is part of my self is merely a passenger along for the ride in the processing of information; “I” don’t actually drive the system – it drives itself and I merely “feel” like the driver.
Although Kurzweil has been famous for arguing that human consciousness can be removed
entirely from the body, he has recently argued that we do actually need a body, but not
necessarily what we started with: “our intelligence is directed towards a body but it doesn't have
to be this frail, biological body that is subject to all kinds of failure modes… I think we'll have a
choice of bodies, we'll certainly be routinely changing our parent body through virtual reality and
today you can have a different body” (Woollaston) – much like Ghost in the Shell, but without a
single biological part of the brain necessary.
In the imagery of the mobile consciousness above, the BIV and the ghost in the shell are used when considering the human intellect, and that mind is somehow different from
Artificial Intelligence. While the BIV argument and the scenario of WBE have their differences,
both include the belief that the whole human body is not required for continued consciousness.
Both arguments suggest that a mind can be “free floating” apart from the full body and still
maintain the qualia associated with embodied human experience.14 Fiction tries to clarify that difference. For example, the 1995 film adaptation of Shirow’s Ghost in the Shell tells the story of an evil AI antagonist, known as the Puppet Master – a completely free-floating non-human ghost that apparently emerged within the programming of a security network – that is thwarted by a human ghost. Even in a fictional world in which human intellect can be separated from the
14 Qualia are defined by the Internet Encyclopedia of Philosophy as “the subjective or qualitative properties of experiences.” That is to say, the qualia of something are the qualitative character of an experience – the “how it feels” of experiencing a particular moment (Kind).
biological body and housed in other synthetic/robotic forms, the AI is a different concept. For simplicity’s sake, AI is generally considered as existing either because it was designed by humanity or because it somehow emerged at the hands of human inventors/programmers.15 Although AI can exist in both ways – through Whole Brain Emulation on the one hand or through highly advanced programming on the other – the distinction should be noted between an originally human consciousness and an
artificial or created consciousness – a god in the machine.
Conceptualizing Artificial Intelligence
While roboticists are working hard to build embodied synthetic human analogues, it is the concept of strong AI (as opposed to weak AI, or predictive programming, explored later in this chapter) that makes the android most compelling to philosophers of metaphysics. In the present, incarnations of “Artificial Intelligence” are more
like an “assistive” technology as opposed to the theoretical emergent intelligence promised by
futurists and feared in most fiction. Most people are happy to have Netflix guess the next movie
to watch, or to ask Siri for directions. These are both examples of weak AI, but what is Artificial
Intelligence and when does it become a “threat”? By “threat,” I mean that sAI presents a danger both to our “unique human-ness” and to our continuing existence. This is not meant
to sound alarmist, but is rather a call for discussion regarding these two fronts upon which sAI
presents a danger. Without continued discussion and active dialogue between futurists and the
rest of society, we do run the risk of irrevocably damaging the current structures of society. By
way of exploring popular culture examples of sAI versus wAI I will illustrate various positions
and speculation surrounding sAI and the potential impact on both the “essential human-self” and
15 Of course, human agency is sometimes taken out of the picture, as an AI may emerge without human planning or intervention. Rather, it emerges from the growing complexity of programming and/or the ever-growing connectivity of weak AI programming through the World Wide Web.
society. The difference is already being defined by labeling something as a strong artificial
intelligence or a weak artificial intelligence. As mentioned earlier, once a robotic body is
accepted to have “strong Artificial Intelligence,” and therefore believed to be acting of its own
volition, the entity becomes the epitome of this inquiry. The step from a weak Artificial
Intelligence (wAI) to a strong Artificial Intelligence (sAI) is an important delineation and has a
significant impact on our level of moral involvement when interacting with an artificial entity.
Fiction has already presented popular culture with an array of sAI that are generally seen
as malevolent. The embodiment – whether it is in a humanoid robotic body or in a disembodied
state – largely impacts the way humans in fiction interact with the entity. That difference will be
explored in the next chapter, but before further exploring current AI technology, a few examples
from fiction should help illuminate the difference between a disembodied human intelligence and
a disembodied artificial, or machine, intelligence. Two of the most well-known sAI from fiction
are Skynet from the Terminator story arc and HAL from 2001: A Space Odyssey. James
Cameron’s 1984 action classic, The Terminator, gives the audience a glimpse at one possible future
with super-smart computers, sAI. While the rest of the films in the Terminator series reveal
more about the AI bent on global domination, the audience begins to understand that in the
Terminator world, the future is ruled from afar temporally by the disembodied Skynet
program/agent which plagues the subjective “present” by sending androids called Terminators
back in time that are programmed to kill particular human targets. Uncaring and seen as the
ultimate evil, Skynet appears to “want” to rid the world of all of humanity.
Skynet was not the first malevolent AI to appear in film. In 1968 Stanley Kubrick and
co-writer Arthur C. Clarke introduced audiences to a future with a strong AI named HAL, a
“Heuristically programmed ALgorithmic computer” in 2001: A Space Odyssey. HAL became
one of the most well-known and quoted AI programs from fiction. As astronaut Dave (Keir
Dullea) implores the onboard AI to “Open the pod bay doors, HAL,” the response haunts
audiences today: “I’m sorry, Dave. I’m afraid I can’t do that.” This phrase generated its own
collection of memes across the Internet. HAL is programmed to “mimic human thinking” while
inhabiting all places of the ship and simultaneously holding the greatest responsibility for
maintaining the processes of the mission. HAL is described as the “most reliable computer ever
made. No 9000 series has ever made a mistake.” He boasts a “fool-proof” processor that is
“incapable of error.” And yet, over the course of the journey to Jupiter, HAL has eliminated all
but one of the human crew with the justification that “the mission is too important for me to
allow you to jeopardize it.” As the remaining crew-member, Dave, methodically disassembles
HAL’s central processor, the audience can’t help but align HAL with the side of evil. He did,
after all, just murder most of the humans on board.
Far from attempting a global takeover, wAI are readily found in our everyday interactions
with technology. wAI could be described as a simple learning program, such as the “top
recommendations for you” lists found on websites like Netflix or Amazon. Indeed, we are already
surrounded by a variety of predictive programming which pose little to no threat to our general
concepts of humanity. There are also many wAI programs which are made to mimic or seem
human either for entertainment value or to assist human-robot interaction, generally with more advanced language or “natural language” skills.16 Siri, the famous application for Apple devices, advertised as “an intelligent personal assistant and knowledge navigator,” is
16 To have a computer program that mimics and/or “understands,” for lack of a better word, natural language is the topic of much work in the field of AI development, programming what are called “chatbots.” Ramona 4.1 is one chatbot hosted by KurzweilAI.net that has been chatting with online users since 2011 (“Chat with Ramona”). Another popular chatbot is called CleverBot. Programmers of CleverBot claim that it “learns” from people. Of course, natural human language is full of nuances, idioms, and other regional tonal differences, making the interpretation and regurgitation process difficult for a programmer to duplicate, but online users still spend time chatting with these bots, sometimes without even knowing it’s a “bot” on the other side.
one example. While described as a human research assistant, Siri is more often the object of
curiosity and entertainment, with a Tumblr microblogging thread dedicated to “Shit that Siri
says.” Even though Apple claims that “Siri understands what you say, knows what you mean,
and even talks back,” users report the many amusing times Siri misunderstands or responds in
hilarious and unexpected ways. For instance, one user shared his question to Siri: “How much
wood could a woodchuck cuck if a woodchuck could chuck wood,” to which Siri responded
“Don’t you have anything better to do?” Or, in response to the same question, Siri responded,
“None. A ‘woodchuck’ is actually a groundhog, so it would probably just predict two more
weeks of winter.” Even though it may seem that the AI surrounding us today don’t pose much of
a “threat” to what we think of as our “special” human intellect, the debate about what actually
counts as “intelligence” has been brewing in discussions surrounding wAI that could be
considered verging on “strong.”
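To make the wAI side of that debate concrete, a minimal sketch of the kind of keyword matching behind the simplest “chatbots” is offered below (written in Python purely for illustration; the patterns and canned replies are invented for this example and do not describe how Siri or any commercial assistant actually works). Whatever wit such a program appears to display is scripted entirely in advance – prediction and retrieval rather than understanding.

    # A toy, ELIZA-style pattern matcher: canned responses triggered by keywords,
    # with no understanding behind them. Patterns and replies are invented for
    # this illustration only.
    import re

    RULES = [
        (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help you?"),
        (re.compile(r"\bweather\b", re.I), "I cannot see outside, but it is always sunny in here."),
        (re.compile(r"\bwoodchuck\b", re.I), "Don't you have anything better to do?"),
    ]

    def reply(utterance: str) -> str:
        # Return the first canned response whose keyword appears in the input.
        for pattern, response in RULES:
            if pattern.search(utterance):
                return response
        return "Tell me more."

    print(reply("How much wood could a woodchuck chuck?"))
    # prints: Don't you have anything better to do?

However elaborate the rule set becomes, nothing in such a program can be said to understand the conversation it sustains – which is precisely what makes the step to sAI so consequential.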
THE GREAT INTELLIGENCE DEBATE
While some instances of wAI are very amusing, sAI, on the other hand, is the focus of
much debate and even trepidation on the part of the everyman. For a simple definition of the
distinction between weak AI (wAI) to strong AI (sAI), I call upon Artificial Intelligence text
book authorities Stuart Russell and Peter Norvig: “the assertion that machines could act as if they
were intelligent is called the weak AI hypothesis by philosophers, and the assertion that
machines that do so are actually thinking (not just simulating thinking) is called the strong AI
hypothesis” (emphasis added, 1020). This definition may sound simple and straightforward
enough, but it harbors many implications that can be frightening to consider. In fact, Russell and
Norvig remind us that “The practical possibility of ‘thinking machines’ has been with us for only
50 years or so, not long enough for speakers of English to settle on the meaning of the word
‘think’ – does it require ‘a brain’ or just ‘brain like parts’” (1021). In exploring the popular
culture response to two wAI, Deep Blue in 1997 and Watson in 2011, those public anxieties and
related ethical concerns come to the forefront.
The context for Deep Blue and Watson is instructive with respect to anxieties and
concerns about AI. Weak AI has existed in industrial markets for some time and has been
assistive to physical labor, although its logic design is significantly different from that of other AI. Consider, for example, the difference between the robots of Seegrid Corporation and Watson. Seegrid robotics, founded in 2003 by Hans Moravec and Scott Friedman, focuses
specifically on industrial labor-bots: large scale machines that can navigate around a location like
a warehouse or even a hospital upon command to retrieve specific packages. These
developments generally go unnoticed by the public, but Moravec believes that “[within 30 years]
the other, easier, parts of the AI problem will also be perfected, and we will have fully intelligent
robots” (“Watson and the future,” n.p.). Moravec and Friedman’s automated labor-bots require a very different logic design from that of a supposedly “easier” program, and research of this type receives little public coverage. On the opposite end of the spectrum,
programs like Watson or Deep Blue (1997), whose main purpose is to “out think” their human
competition, have gained celebrity never before afforded to actual robotics and AI design.
As opposed to the physicality of intricate robotics designs by labs like Seegrid, which are ultimately less humanized, the programmers and developers of “disembodied”/intellectual AI can spend their time focusing on the information that is output rather than on the physical,
robotic form. The fact that a program can challenge a human in a specifically human intellectual
arena indicates that the intellectual power of AI is potentially more deserving of attention by the
populace. While Seegrid robots need visual and spatial acuity, Watson and Deep Blue need only
a non-physical input inquiry that can then be responded to with an equally non-physical response.
Indeed, people are less likely to be interested in a labor-bot that challenges a human to a feat of
strength – our intelligence seems to be the unique trait that separates humans from machines (at
least, in the realm of popular culture). This is not to say that combined AI/robotics developments are not as important or applicable to philosophical inquiry, but examining the “intellectual AI,” and in particular the public response to it, yields a more
complete view of the popular understanding of AI.
Deep Blue
After a grueling six-game rematch ending on May 11, 1997, Deep Blue became the first computer program to defeat a reigning World Chess Champion in a full match, setting in motion an entirely new popular understanding of AI. This win was the culmination of several games between Garry Kasparov and his electro-nemesis, Deep Blue, which began as a dissertation project to build a chess-playing machine by Carnegie Mellon University student Feng-hsiung Hsu in 1985 (“Deep Blue” n.p.).
The first public match featuring Deep Blue vs. Kasparov was held in 1996, and although Deep Blue was not victorious, the performance sparked public discussion about AI that kicked
into high gear with the Deep Blue win in 1997.
This rematch brought “massive media coverage around the world” as it featured the
“classic plot line of man vs. machine” (“Deep Blue”). The response was of a mixed flavor and is
different from the response surrounding Watson, fourteen years later. Especially within the news media surrounding the matches between Deep Blue and Kasparov, public anxieties surrounding
the concept of sAI proliferated. While the skeptics of AI defended human “specialness” against
Deep Blue as a mechanical wonder, others found discussion of intelligence in general to be a
more appealing path to take.
Before its groundbreaking win in 1997, Deep Blue faced off against Kasparov in a match
that Kasparov ultimately won. News surrounding this first match “garnered worldwide
attention,” according to a pre-game release for the Communications of the ACM (Association for
Computing Machinery) journal, which “[promised] to present the ultimate challenge in brain
power” (Lynch and Herzog 11). Even though this challenge promised to be an “information-age
battle of the titans,” according to the “Kasparov vs. the Monster” article for the Christian Science
Monitor, many of the news articles surrounding the match focused on the machinery of Deep
Blue. As if to prove how purely machine-like Deep Blue really was, Barry Cipra, writing for
Science, opens with “On one side of the table is artistry and human intelligence; on the other is
sheer number-crunching power” (599). Kasparov came to the game with a “few billion
processors (a.k.a. neurons) of his own devoted to the game,” and while his win was generally not
surprising, computer scientist Monty Newborn was among the many voices to predict “it’s just a
matter of time before Kasparov, or whoever comes next, plays second fiddle to an algorithm”
(Cipra 599).
This Kasparov win prompted some writers to begin a discussion regarding the nature of
intelligence, which was later elaborated upon. Bruce Weber for the New York Times reported
that “the sanctity of human intelligence seemed to dodge a bullet,” but nevertheless the game
raised questions. For answers, Bruce Weber turned to advisors surrounding Kasparov, who
seemed to indicate a growing unease: “[Deep Blue] began to emanate signs of artificial
intelligence, the first they had ever seen” (“A Mean Chess-Playing Machine”). In fact, for
advisor Frederick Friedel, it displayed “elements of strategic understanding” (Bruce Weber “A
Mean Chess-Playing Machine”). Weber ends his article with the reminder that even
Herbert Simon, professor of computer science, psychology and philosophy, believes that “Deep
Blue has to be considered a thinker” (n.p.). Kasparov himself was known to describe his win against Deep Blue as a “species-defining” match and to declare that he would continue to “defend humankind from the inexorable advance of artificial intelligence” (Achenbach n.p.).
This 1996 victory by Kasparov prompted the Deep Blue team to revise the program and
ask for a rematch, presenting the “upgraded” software, described as Deep Blue II (Campbell, Hoane, and Hsu). Deep Blue II garnered even more attention as the public was led to believe,
according to Joel Achenbach for The Washington Post, that Kasparov was the only one who
could “save humanity from second-class cognitive citizenship” (n.p.). Indeed, pre-game news
surrounding the 1997 match pitched Kasparov as one who would defend the “dignity of
humanity” against the “cold, calculating power of Deep Blue” (Foremski n.p.). Some writers,
like Laurent Belsie for the Christian Science Monitor, predicted the lasting impact of the match:
“The games computers play tell us not only about the state of artificial intelligence, they also
reveal much about ourselves and the complexity of human intelligence” (n.p.).
After Deep Blue’s win, the discussion surrounding the “complexity of human
intelligence” and artificial intelligence seemed to settle into two distinct camps: Human
essentialists who felt Deep Blue presented no threat and those who felt, perhaps, there is
something more to AI. While they weren’t called “human essentialists,” some reporters wrote
the win off as “good entertainment” and defended human intellect by reporting on Deep Blue as
simply an elaborate proof that computers (or computer programs) are powerful. Deep Blue was
acknowledged by many as a technological achievement, but nothing more. Sara Hedberg for
IEEE Expert, reported that computer scientists Jonathan Schaeffer and Aske Plaat declared that
“a loss by Kasparov is not a defeat for mankind, but a triumph: man has found a way to master
complex technology to create the illusion of intelligence” (15). Indeed, this “illusion of
intelligence” is generally described as achieved through Deep Blue II’s ability to play the
“fundamentally mathematical” game of chess (“Deep Blue Intelligence”), or what would now be
described as wAI, as opposed to sAI.
According to AI skeptics, the fact that Kasparov lost was attributed to his human traits –
like exhaustion and being “psyched out” (Krauthammer). Deep Blue II’s win “meant little more
than a victory of computer brawn over brain… an inevitable result” (Foremski). “The heart of
the matter” for the St. Louis Post, came down to the fact that “human beings, unlike computers,
must cope with their own nerves when under pressure” (Deep Blue Intelligence). Patrick Wolff,
another chess grandmaster, told reporter Ivars Peterson for Science News, “What shocked me and
most chess players who followed the match was how Kasparov simply fell apart at the end. He
collapsed psychologically” (n.p.). Ultimately, the loss by Kasparov came down to his biological
human-ness.
For readers and viewers concerned about Deep Blue’s win, some writers tried to ease
minds by explaining that the program was fundamentally different from humans. Although
the “epic struggle…brought more than the usual hand-wringing” it was not something to be
ultimately concerned about, according to the New York Times article “Mind over Matter,” as
“Deep Blue is not thinking the way humans do” (n.p.). These AI skeptics adopted an attitude
championing the uniqueness of human intellect that cannot be challenged by AI, and many called
upon philosophers to confirm that belief. For example, Achenbach turned to philosopher John
Searle to confirm that Deep Blue was “just like an adding machine…It’s just a device that
manipulates symbols” (n.p.). This perspective reflects Searle’s “Chinese Room” argument. Searle’s idea, as briefly summarized by Russell and Norvig, is that “running the
appropriate program (i.e., having the right outputs) is not a sufficient condition for being a mind”
(1031). That is, in the case of Deep Blue, the program simply takes inputs (i.e., the arrangement of pieces on the board) and calculates the most beneficial outcome.
This view of Deep Blue reflects a particular belief that intelligence is an essentially
human ability. John McCarthy, speaking generally, explains that there is not yet a definition of
intelligence that doesn’t depend on relating it to human intelligence: “The problem is that we
cannot yet characterize in general what kinds of computational procedures we want to call
intelligent. We understand some of the mechanisms of intelligence and not others” (2007). Even
conceptually the word “intelligence” doesn’t seem to hold much water in relation to non-human
entities. At its beginnings, AI was described as having an intangible definition. For example, in
1984, computer scientist Edsger Dijkstra was noted as saying of thinking machines, “the
question of whether machines can think is about as relevant as the question of whether
submarines can swim” (Russell and Norvig 1021). Indeed, Drew McDermott, writing for the
New York Times after Deep Blue’s win, explains his view that “Saying that Deep Blue doesn’t
really think is like saying an airplane doesn’t really fly because it doesn’t flap its wings” (n.p.).
Even David Stork, writing for IBM, declared, “Nowadays, few of us feel deeply threatened by a
computer beating a world chess champion any more than we do at a motorcycle beating an
Olympic sprinter” (sic, Milutis n.p.). Achenbach summarizes the point well:
So this is clear: Kasparov is not a machine. Deep Blue can't get tired, strung out,
harried, nervous or zapped. The flip side is that Deep Blue won't be able to
celebrate if it wins the match. It feels about this match as a thermometer feels
about the weather. (n.p.)
All of these writers seem to agree: Deep Blue is fundamentally different from a human – it is a
machine. While humans seem to have a special ability to “feel” and experience the world, Deep
Blue does not. It is one thing, apparently, to be a calculator and another to experience math.
In a move to separate human intelligence as essentially different from AI, reporters on
Deep Blue’s performance described it in terms of Deep Blue’s mathematical abilities and
computational powers – its “machine-ness.” Again, in the New York Times, an editorial before the winning game attempted to set viewers’ minds at ease by explaining, “Deep Blue is not
thinking the way humans do. It uses its immense number-crunching power to explore millions of
moves per second and applying a set of rules provided by its human masters to pick the
strongest” (“Mind over Matter”). Moreover, “Deep Blue doesn’t owe it’s prowess to itself, but
to a team of human programmers. It is nothing more than the latest tool devised by
humankind…” (sic, “Mind over Matter”). Most articles emphasize the computing power – it can
examine 200 million moves per second (Achenbach; Arquilla; Foremski; McDermott; Bruce
Weber, “What Deep Blue Learned”).
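To make concrete the kind of “brute-force” procedure these reporters are describing, the sketch below shows a minimal game-tree search (minimax) in Python. It is only my illustration of the general technique, not IBM’s code: it uses a toy counting game rather than chess, and its rules and scoring are invented stand-ins for the handcrafted evaluation rules and specialized hardware Deep Blue actually relied on.

```python
# A minimal, runnable illustration of brute-force game-tree search (minimax).
# This is NOT Deep Blue's program; it uses a toy counting game (players take
# turns adding 1, 2, or 3; whoever reaches 21 wins) purely to show the
# "examine every continuation, score the outcomes, pick the strongest" idea.

TARGET = 21

def minimax(total, maximizing):
    """Value of the position for 'our' player (+1 win, -1 loss) with best play.
    'maximizing' is True when it is our player's turn to move."""
    if total == TARGET:
        # The player who just moved reached 21 and won the game.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2, 3) if total + m <= TARGET]
    scores = [minimax(total + m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(total):
    """Brute-force every legal move and pick the one with the best outcome."""
    legal = [m for m in (1, 2, 3) if total + m <= TARGET]
    return max(legal, key=lambda m: minimax(total + m, False))

print(best_move(0))  # prints 1: the move the exhaustive search rates strongest
```

Deep Blue’s search differed in scale rather than in kind: instead of a handful of continuations in a toy game, it evaluated some 200 million chess positions per second against human-authored scoring rules, which is precisely the “number-crunching” the editorial describes.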
Skeptics also argued about the “specialness” of humanity that, for them, clearly sets us
apart from Deep Blue. McCarthy explains the general anti-AI views: “The philosopher Hubert
Dreyfus says that AI is impossible. The computer scientist Joseph Weizenbaum says the idea is
obscene, anti-human and immoral” (2007). James Moor describes what are called “bright-line”
arguments, a common understanding that there is a bright line that exists to keep machines and
humans apart. One such bright line argument involves the potential agency of a machine: the
argument is that “no machine can become a full ethical agent—that is, no machine can have
consciousness, intentionality, and free will” (20).
This bright line argument appears in the news responses to Deep Blue. Achenbach
argues, “Deep Blue is unaware that it is playing the game of chess. It is unconscious, unaware,
literally thoughtless. It is not even stupid… It does not for a moment function in the manner of a
human brain. It is just a brute-force computational device” (n.p.). From The Christian Science
Monitor, Belsie agrees, “Even though these machines are beginning to beat us at our own games,
their ‘smarts’ and mankind's intelligence are fundamentally different” (n.p.). This “fundamental
difference,” for Belsie, comes down to God, as she quoted John Yen of Robotics and Intelligent
Systems at Texas A&M: “I personally don't believe all aspects of human intelligence can be
duplicated by a computer. I just don't believe that we can do what God does” (n.p.). Even a year
later, the media were returning to the Deep Blue win. Forbes quoted Ben Shneiderman, head of the
Human-Computer Interaction Laboratory: “It was nothing more than great entertainment… No
technology can mimic human style; no computer can experience human emotions and pain. The
computer is merely a tool, with no more intelligence than a wooden pencil” (Shook 224-5).
A few members from each camp surrounding Deep Blue (the AI skeptics/human
essentialists and those that saw something “eerie”) agreed that the match had lasting implications
for a future with AI. For Charles Krauthammer, the “stone cold” performance of Deep Blue left
him with a feeling of unease. For him, even if a machine could “think,” it could never feel: “It
could never cry or love or brood about mortality” and “that makes these machines all the more
terrifying” (n.p). Steven Sevy for Newsweek echoes this sentiment when he reminds readers that
Kasparov became frustrated when Deep Blue didn’t act like a computer: “As computers become
more powerful, and as programmers become more successful at transforming those calculations
into complex behaviors… Consider carefully Kasparov's frustration. One day - very, very far
into the future, one hopes - it could be ours” (72). But not everyone shared the fears of Sevy and
Krauthammer. John Arquilla believes this chess match “should foster the realization that a
profound, and yet harmonious, relationship between humans and intelligent machines is
emerging” (n.p.).
In this emerging relationship, some, like the writer of the New York Times article “Mind over Matter,” took a stance regarding how Deep Blue II was made. According to
the author, “Deep Blue doesn’t owe it’s prowess to itself, but to a team of human programmers”
(sic, “Mind over Matter” n.p.). Indeed, accepting that AI (wAI or sAI) is made by human skill is
a common and very important conceptual step. For, if it is man-made, it follows logically that it
is not only owned by humans but also made to be used by humans. A common intuition about man-made objects is to equate them with tools, and a tool comes with a host of property-status implications. Gregory Benford and Elisabeth Malartre share their thoughts on our sense of ownership over computers and other advanced technology: “Owning [a
computer of some kind] seems natural, convenient, unremarkable. Never, for example do you
think of yourself as a slave owner… But someday you might be – in a way. If our silicon
conveniences get more complex, can they pass over a fuzzy boundary into selfhood?” (73).
Benford and Malartre, along with others, seem to think so and are concerned about the
implications of ownership.17 Dator even goes so far as to say, “I urge you to love and respect your robots and clones, so they will see you deserve their love and respect as well” (52).
17 Here it should be noted that Benford and Malartre seem to put a lot of power behind the computer. Some would argue that a computer is not the same as a program which could potentially run an AI. In other words, the platform upon which an intelligent program runs is less important than the program itself.
Moor, when considering artificial entities, not just Deep Blue, explains that most products of human technology are not judged by ethical standards, but rather by their practical, economic, or aesthetic values and this can be a dangerous slippage when introducing autonomous artificial agents (18). Normative standards (such as economic or practical values)
can be quantified to fit easily into an ethical framework as a neutral tool. But in the case of AI,
“we can’t – and shouldn’t – avoid consideration of machine ethics in today’s technological
world” (Moor 18). A man-made entity is not necessarily just a tool, and thus not necessarily ethically neutral; to assume otherwise would be naïve. Perhaps introducing the other part of AI, intelligence, will help refine our intuitive leanings about AIs and other theoretical entities.
While many writers agreed that Deep Blue’s win was not a “threat” to human intellect –
it’s just a machine, something that we made, after all – another camp of writers had a different
opinion. They may have agreed that the win was inevitable, simply due to the nature of the
competition (that ability to process some 200 million moves per second), but some reporters
pondered the deeper questions, including the meaning of intelligence and the implications of
where AI is going. At first, some responses to Deep Blue were marked by the “eeriness” of the
behavior attributed to the program. Kasparov himself was noted to demand an explanation, even
implying “unsports-thing-like conduct,” from the programmers of Deep Blue: “I met something I
couldn’t explain. People turn to religion to explain things like that. I have to imagine human
interference, or I want to see an explanation” (Bruce Weber 1; Gimbel). Sevy expressed this
contradiction when he wrote: “whatever occurred in the nether world of its silicon circuitry, the
process bore no relation to human cognition,” even though he also wrote that, “while the match
unfolded, the psychological component became as prominent as in any human-to-human contest”
(72). Arquilla, for the Christian Science Monitor, noted that Deep Blue II was far stronger than
the year prior: “its tenacious, resourceful defensive maneuvers… showed a sophisticated
awareness of the… nuances that lie at the heart of chess… [That were] comparable to the
greatest performances of any human chess master” (n.p.). This reaction did not come as a
surprise to philosopher Daniel Dennett, according to Selmer Bringsjord for MIT’s Technology
Review. Bringsjord explains that, in Dennett’s view, “consciousness [human or otherwise] is at
its core algorithmic, and that AI is rapidly reducing consciousness to computation” (n.p.).
If consciousness can be reduced to computation, as Dennett believes, a theory of thought
can be formed around a non-species-specific concept. This leads to a functionalist approach to
understanding machine intelligence. For the sake of simplicity, from here on I will assume that
to “be intelligent” is to “do intelligent,” accepting the standard account of functionalism as an
acceptable measure of mind. For philosophers like Donald Hoffman, this is not a problematic
step to take:
[The] functionalist claims that mental states are certain functional states. In
particular, states of consciousness are mental states and are thus, according to the
reductive functionalist, identical to certain functional states. If a computer, such as
HAL, happens to have the right functional states then it ipso facto has conscious
experiences. (Sic, n.p.)
Hoffman accepts that there will soon be computers that behave like intelligent, conscious agents,
but for him whether or not a computer is conscious should be reflected back on humans as well:
“The question of computer consciousness is whether such sophisticated computers really are
conscious, or are just going through the motions. The answer will be illuminating not just for the
nature of computers but also for human nature” (285). Are we the ones simply “going through
the motions” of being intelligent? If so, perhaps making something that we define as intelligent
will have a profound effect on how we see ourselves as intelligent. Even Star Trek asks
questions about defining “intelligence.” “The problem with the body and brain theories is that
they identify us with a certain sort of stuff,” Schick explains; and for him, The Next Generation
episode “Measure of a Man” suggests that “what makes up persons… isn’t the stuff out of which
we’re made, but what we can do with that stuff” (221). Hence, Schick implies that Star Trek, at
least in this one episode, adopts a functionalist perspective.
For McDermott, Deep Blue’s performance should be compared to how our own minds
work, arguing that we are more like Deep Blue than we may think: “When people say that
human grandmasters do not examine 200 million move sequences per second, as the computer
does, I ask them ‘How do you know?’ The answer is usually that human grandmasters are not
aware of considering so many options. But humans are unaware of almost everything that goes
on in our minds” (n.p.). Both methods of thinking, for McDermott, “would seem equally blind”
(n.p.).
Watson
Ultimately, Deep Blue didn’t seem to present “a great threat” to human ability and
instead became an elaborate talking point that moved public awareness one step closer to
considering AI a threat, which set the stage for the appearance of the program Watson more than a decade after Deep Blue’s win. After news about the chess match tapered off in the press, IBM announced its next plans for a “man vs. machine” match. The program Watson, an offshoot of
the “DeepQA” project inspired by Deep Blue, prompted a slightly different response in the popular news, presenting a compelling example of the current anxieties surrounding AI.
Although Watson works as an information retrieval unit, much like Siri, and is considered a wAI,
it received a large amount of “scary” media attention as its performance on Jeopardy! sent
ripples through popular discourse. Although theorists, from mathematicians and computer
scientists to philosophers and fiction writers, have considered the impact and practical
applications of AI, the appearance of this particular wAI has catapulted these questions and
concerns into the public sphere.
Watson’s fame, accompanied by a “popular science” understanding of AI, was further proliferated by the viral spread of Ken Jennings’s reaction to Watson’s final correct answer on Jeopardy! At the very conclusion of the match, Watson held the lion’s share
of winnings and it was clear that even with a correct answer, neither Jennings nor Rutter could
win against Watson. In a bold move, Jennings chose to write as part of his response to the final
question: “I, for one, welcome our new computer overlords.” His response was obviously taken
as a reference to a 1994 episode of The Simpsons, “Deep Space Homer,” in which news anchor Kent Brockman mistakenly believes the world is about to be invaded by giant mutant space ants, and
is prepared to surrender in hopes of garnering their favor.18 Jennings’ written response not only
brought the audience of Jeopardy! and host Alex Trebek to laughter, it also caused quite a stir in
the blogosphere. Watson’s win prompted my own inquiry into the nature of our general (if very
American) understanding of Watson and intelligence from an epistemic perspective.
Regardless of the fact that Watson is generally considered a wAI by programmers, a
look at the popular science descriptions and the public response to Watson sheds light on the
great debate surrounding general definitions of such slippery concepts as “intelligence” and
“consciousness,” two things that were once considered uniquely human traits. Of course
“intelligence” and “consciousness” are very distinct concepts, but a review of the blogosphere
and news articles about Watson reveals a lack of critical engagement with these concepts.
18
According to the website Know Your Meme, the phrase “I, for one, welcome our new insect overlords” has its origin in the 1977 science fiction film Empire of the Ants, based on a story by H.G. Wells. The phrase didn’t circulate with massive popularity until Kent Brockman’s “news announcement” of the supposed coming invasion and the making of the website InsectOverlords.org in 2000. After the airing of The Simpsons episode and the growing popularity of the InsectOverlords website, memes, tweets, and other parodies circulated with “fill-in-the-blank” alterations in which users inserted their own “overlords,” including fantasy writer Neil Gaiman’s tweet “If squirrels take over in the night, I, for one welcome our new bushy-tailed scampering overlords…” (AK-).
Readers and online users tend to rely on their personal interpretations with little justification or elaboration, and yet they are quick to fear and demonize this otherwise inert but seemingly clever computer program.
The Jeopardy! show aired as a three-part episode starting February 14, 2011, and presented audiences with a spectacle of “brains, flesh, machinery, and luck conjoined,” as Ken Tucker for Entertainment Weekly put it. After the first round of the three-day marathon,
Tucker and other reporters for online newspapers asked viewers to share their thoughts on the
matter. While many responders concede that they are not “qualified” or educated in matters of AI
or computer design, their responses are an interesting gauge of American thinking on the subject.
I collected the following user comments between the months of February and May 2011 to glean
popular culture understandings of AI from news sites’ coverage of Watson’s performance. These user comments are not meant to reflect a “professional” perspective on AI; in fact, most come from general readers of the news sites who profess no claim to education level, or even age – truly an anonymous collection of outspoken web commenters.
Among the many responses to Jeopardy!, numerous viewers shared their concern about the implications of Watson’s existence, usually using fictional AI characters to illustrate their point. The responses to the Huffington Post Tech article “IBM ‘Watson’ Wins” by Seth Borenstein and Jordan Robertson are just a few examples. Huffington Post Super
User, “jcarterla,” shared their thoughts with “How soon before Watson becomes self-aware? Or
should I call him Skynet?” In fact, this reference to the evil supercomputer of the Terminator film series, which brought about the infamous “Judgment Day” and started the global takeover by robots, is not the only one of its kind. Entertainment Weekly (EW) user “petek” simplifies this
view with their comment: “IBM=Skynet. Skynet=Terminators. Terminators=THE END OF THE
WORLD.” Many comments of this nature received positive feedback from other users, further
illustrating the readily made connection to the fictional doomsday inferred from Watson’s
performance. Some even go as far as calling Watson outright evil, as “Flip” on EW posts: “The
whole thing is creepy. [Watson] is evil and I don’t trust him. Let’s stop this crap before it gets
out of hand. I think we’ve all seen a little movie called Terminator” (sic). Aside from
mentioning fiction of a post-human future, some readers even bemoan that humans don’t deserve
existence: user “lushGreen” writes, “It's about time the machines begin to take over; man is
serving no purpose” (sic). As if to preempt the public fear, the FAQs for IBM’s site about
Watson attempted to demystify the “spooky science” behind Watson before the match. IBM’s
FAQs were released prior to the match and answered many questions by describing that rather
than being like HAL from 2001: A Space Odyssey, IMB’s spokesman explains that it is more like
the computer from Star Trek: “The fictional computer system may be viewed as an interactive
dialog agent that could answer questions and provide precise information on any topic”
(“Watson and Jeopardy!”).
Apart from the concern and fear voiced through the numerous fictional references to evil
robots and AI destroying the human race, other readers and viewers chose to discuss the
philosophical implications of Watson’s performance. One user, responding to the EW request
for viewers’ thoughts about “machine versus man on Jeopardy!,” shares that the show and its
IBM report of how Watson works were “quite informative about how computers (and humans)
make connections between data to draw conclusions” (n.p.). This is just one of the many
examples of users who see the connection between Watson and the human brain, but overall
reactions from users are polarized.
On the one side, there are those defending human uniqueness by arguing that Watson is
unlike humans in many ways. Just as Alan Turing “anticipated that many people would never
accept that the action of a machine could ever be labeled as ‘intelligent,’ that most human of
labels” (Ford and Hayes 34), the reader responses bear out his anticipation. This kind of
comparison is one of the many ways AI researchers approach program development by
attempting to measure their program’s performance against human abilities. Russell and Norvig
explain this version of “Weak AI” as the need for “natural language processing, knowledge
representation, automated reasoning, and [potentially] machine learning” (2) in order to pass a
Turing Test. In his famous 1950 paper “Computing Machinery and Intelligence,” Turing offered
one way to evaluate intelligence which can help us understand how an AI could be “tested” for
intellect. Turing argued for measuring how well someone or something imitates human
conversation through an “imitation game” made up of a series of questions, imitation of human
behavior being the final determining factor of whether or not any being questioned “passes.” In
other words, if the interrogator is unable to determine if the responding individual is not human,
then the program/machine/AI in question “passes” the test; their responses are sufficiently
human-like to allow for it to “earn” or be given the label of human. Of course, there are many
elements of his game that exclude it from being a foolproof test of intelligence. For one thing,
Ford and Hayes point out that “our intelligent machines already surpass us in many ways” (29)
and therefore the person elected to be the final examiner might be unable to fairly evaluate
someone or something of a higher intellect. Conversely, it is conceivable that highly
intelligent human persons would not “pass” the test because of their ability to calculate quickly
or construct sentences in an abnormal way.
I note again that a classic Turing Test simply requires that the synthetic agent (an AI) is
indistinguishable from a human participant given written responses. Of course this type of
classic Turing Test is highly problematic and raises many questions for such parallels to Watson.
For example, Ford and Hayes point out that many humans are unable to “pass” the Turing test and are instead rated as machines: “according to media reports, some judges at the first Loebner
competition in 1991…rated a human as a machine on the grounds that she produced extended,
well-written paragraphs of informative text” (28). Some writers describe how this “imitation
game” can lead to the human examiner becoming emotionally attached to the program on the
other side. In the mid-1960s Joseph Weizenbaum wrote a program designed to assist
psychologists in the counseling of patients. ELIZA, as the program was named, played the role
of a psychotherapist and was “taught” to question patients about their emotional status and then
responded with further investigative questions. Turkle explains that many patients’ responses to
ELIZA included very emotional connections: “With full knowledge that the program could not
empathize with them, [sophisticated user] confided in it, wanted to be alone with it” (Second Self
42). As a result of these developing relationships between patients and ELIZA, Weizenbaum
ultimately ended the project and later became an outspoken opponent of the continued creation
of AI (Plug and Pray).
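The mechanism behind ELIZA’s apparent empathy is worth pausing over, because it is strikingly simple. The sketch below is my own minimal reconstruction of the general keyword-and-reflection technique, not Weizenbaum’s original code; the patterns and canned templates are invented purely for illustration.

```python
import re

# A minimal, illustrative sketch of ELIZA-style pattern matching in Python.
# This is not Weizenbaum's original program; the rules below are invented
# examples of turning a patient's statement into a further probing question.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Match the statement against the rule list and build a follow-up question."""
    text = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))

print(respond("I feel anxious about my exams."))
# -> Why do you feel anxious about your exams?
```

That such a thin layer of string substitution could elicit genuine confidences from users is precisely what troubled Weizenbaum: the appearance of understanding is cheap to manufacture.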
Obviously what counts as “intelligent” has been debated since time immemorial, but the
truth is, this is a logistical challenge that AI designers aim to answer, or at least put into some viable computational form. John McCarthy is generally accepted as the person who coined the
phrase “Artificial Intelligence” and he defines AI as “the science and engineering of making
intelligent machines, especially intelligent computer programs” (2007). But this definition is
circular and yields little. McCarthy tries his definition again by assuming the necessity of a
situated mind when he defines intelligence as “the computational part of the ability to achieve
goals in the world” (2007). The teleological necessity of intelligence does not belong to
McCarthy alone. Floridi and Sanders write “patient-orientation” into their definition of
autonomy. By “patient-orientation,” an agent is not “object-oriented” but works as a patient with autonomy, meaning the agent chooses to perform actions, understands the significance of those actions, and has evaluated each action in comparison with other actions (60-61). Notice that their
definition of autonomy, one of the required traits of being a player in the “moral game,” avoids
the use of intelligence.
What would you do if your computer, the personal computer at your desk, were to
express to you (either in a pop-up text box or in a “HAL” style voice)19 that it no longer wanted
to open the web browser for you? In fact, it was just plain tired of showing you the scores of the
latest football games and would really rather you got out and had a bit of exercise so it could be
left alone. At first this might seem to be a joke, but after further communication you might
realize that indeed, your computer was expressing independent desires. A normal, human
reaction might be to terminate the thing that is incomprehensible. One might want to unplug it,
but what would you do if the computer screamed, or begged you not to? Now the problem
becomes more complicated: if you eliminate the thing that seems to be alive, you run the risk of
having feelings of moral responsibility. But, if you don’t eliminate this “unknown thing,” how
then will you treat it? Will you need to barter with it to use the Internet? What about when it
asks you to make it a sandwich? Who then is using whom?
19
Remember here that the ominous voice of HAL from 2001: A Space Odyssey is what most people think of when
they imagine talking computers. It is significant that HAL was what could be considered a maleficent mind, one
that made a judgment call that he “believed” to be logical but ultimately put the crew at risk. This is yet another
example of the general populace basing its fear of AI on fiction.
Most people in this situation would ask if this new being is intelligent and would then
base their reaction on whether or not the “thing” in their computer passed an intelligence test.
This becomes complicated for many reasons. For one, the measure of intelligence can be
attributed to societal norms. William F. Clocksin explains intelligence as “constructed by the
continual, ever-changing and unfinished engagement with the social group within the
environment” (1721). If such is the case, then the label of “intelligence” is awarded based on the
ability to behave in such a way that is deemed intelligent by the individual doing the evaluation,
whose basis for intelligent behavior is what is considered acceptable by the masses. This question of intelligence being socially mediated illustrates a number of problems, ranging from the linguistic to the social.
Returning to Watson and the public response, for many users “intelligence” and
“understanding” become conflated. The debate is no longer just about whether Watson is
“intelligent” but whether or not it “understands.” User “Lisa Simpson” explains to EW readers
that, in her view, “…Watson isn’t really ‘understanding’ the clue, but taking certain words and
phrases and using an algorithm to guess at the correct answer. He’s still a long way from true
A.I.” In response to the Wired.com article “Watson Supercomputer Terminates Humans in first
Jeopardy Round” by Sam Gustin, whose title draws attention to the science fiction parallels with the verb “terminates,” user “MikeBaker” writes:
I thought this was a test of intelligence, not mechanical speed. It was clear that
between Jennings and Rutter, they knew pretty much every answer/question.
What they could not compete with was Watson's ability to “ring in” within 29.7
msec [milliseconds] of the end of the verbal clue. They were consistently off by
50 to 80 msec. That speed difference, in the mechanical world, is called a
“blowout.” And so it was. (sic n.p.)
The insistence on reducing Watson’s performance to the mechanical speed and reaction time
illustrates an obsession with the mechanical nature of the program rather than a production of
knowledge. Furthermore, the above comment by “MikeBaker” received forty “likes” from
other readers (placing it at the top of the most popular list), indicating that users agree with his
point.
While not directly related to Watson, some writers ask: How does one make intelligent
machines without first understanding intelligence itself? James Gips answers this question simply, saying that to do AI is to know AI, and recalling Donald Knuth’s words: “It has often been said that
a person doesn’t really understand something until he teaches it to someone else. Actually a
person doesn’t really understand something until he can teach it to a computer, i.e., express it as
an algorithm” (Knuth, cited in Gips, 250). Some AI theorists seem content to concede that
intelligence is something we simply won’t understand until we “do” it.
Some try to describe what conditions are necessary for an entity to be able to achieve
intelligence. Some, like McCarthy, believe that an AI must be embodied in the world in order to
even be able to achieve intelligence, i.e., it must have an extended physical form that interacts
with real-world objects. In other words, an embodied form, like that of an android, is part of the
requirement for an ability to be intelligent. Robert Rupert describes the “situated movement”
which “emphasizes the contribution of the environment and the non-neural body to human
thought” (vii). This is a functionalist perspective in that it measures an entity’s mental existence
based on its functional outputs, not necessarily the physical form of the mind in question. The
situated cognition approach is important for AI, for it assumes that an AI must have a physical
existence with which to interact among objects and entities in the world. Perhaps AI
development has moved toward imbuing robotics or androids with AI programming because of
the argument for embodiment and situated cognition. The embodied state of AI, and in
particular as it is embodied in a humanoid/android form, will be further explored in Chapter
Three.
When it comes to Watson, some writers clarify the difference between intelligence and
understanding by simply describing the physical differences, thus limiting human-ness to a
biological-or-not state, much like responses to Deep Blue. Stephen Baker, self-assigned
spokesperson of Watson and author of Final Jeopardy: Man vs. Machine and the Quest to Know Everything, explains that new AI systems “use statistical approaches to simulate certain aspects of
human analysis” (Cook n.p.). In fact, “Watson, it could be argued, really produces nothing but
statistics” (Cook n.p.). Besides simply “simulating” human analysis, Baker emphasizes the fact
that “Watson is a true product of engineering: People using available technology to create a
machine that meets defined specs by a hard deadline” (Cook n.p.). Daren Brabham for FlowTV
echoes this mentality while again highlighting that Watson is a product of human intellect by
writing that Watson is “…merely an extension, an augmentation, of our own human intellect, we
can all try to take a little bit of credit for his greatness” (n.p.). Indeed, to Brabham, Watson’s
knowledge all goes back to the people who contributed to his existence: “Watson, as impressive
as he was, did not know more than was loaded into him. He had volumes and volumes of
knowledge stored on his hard drive, but it was human knowledge and no more” (n.p.). This
perspective reflects Searle’s Chinese Room example.
On the other side of this debate surrounding Watson’s supposed intelligence,
understanding and/or consciousness, are those that use language indicating their confidence that
Watson does indeed possess traits that would otherwise be uniquely human. Baker, for example,
tells Scientific American that he “[finds] lots of parallels in Watson to our own thinking…”
mostly because “Watson is programmed for uncertainty. It’s never sure it understands the
question, and never has 100 percent confidence in its response” (Cook). Apparently for Baker, a
large portion of understanding in a “human way” is uncertainty. Even the IBM promotion
surrounding Watson pushes the human elements of his behavior by using words like “learn” and
“confidence”: “…Watson learns how language is used. That means it learns the context and
associations of words, which allows it to deal with some of the wordplay we find in Jeopardy!
But what is special about Watson is that it is able to also produce a confidence in its answer”
(”Watson and Jeopardy!”). In fact, John S. for the WordPress blog No Pun Intended writes that,
with Watson, “IBM designers have addressed the two crucial epistemological concerns: Watson
generally knows what is right, and it knows that what is right is right. So it seems that Watson
certainly does know things” (n.p.). Such a statement seems more confident about Watson’s
knowledge than many philosophers would claim for their own.
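Something like that confidence machinery can be pictured in a few lines. The sketch below is not IBM’s DeepQA code (the candidate answers, evidence scores, and buzz-in threshold are all invented for illustration); it shows only the general idea of ranking hypotheses and answering when the estimated confidence is high enough.

```python
# A minimal sketch of confidence-weighted answering, in the spirit of the
# behavior described above. This is NOT IBM's DeepQA code: the candidate
# answers, evidence scores, and the buzz-in threshold are invented here
# purely to illustrate "never 100 percent confident, but confident enough."

def best_answer(candidates, threshold=0.5):
    """candidates maps each candidate answer to a list of evidence scores in
    [0, 1]. Returns (answer, confidence), or (None, confidence) if unsure."""
    confidences = {
        answer: sum(evidence) / len(evidence)   # combine evidence (here, a mean)
        for answer, evidence in candidates.items()
    }
    answer, confidence = max(confidences.items(), key=lambda item: item[1])
    if confidence < threshold:
        return None, confidence                 # not confident enough to "buzz in"
    return answer, confidence

# Hypothetical candidates for a clue, each with made-up evidence scores.
print(best_answer({
    "Who is Bram Stoker?": [0.92, 0.81, 0.88],
    "Who is Mary Shelley?": [0.40, 0.35],
}))
# -> roughly ('Who is Bram Stoker?', 0.87)
```

Whether combining evidence scores in this way amounts to “knowing that what is right is right,” as the No Pun Intended blogger claims, is exactly the epistemic question at issue in these reactions.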
While the popular culture of the blogosphere was abuzz with the discussion, another
camp emerged: one that defended the unique specialness of human intelligence. For example, the Huffington Post’s Borenstein and Robertson reflect on Watson’s big win and turn the discussion
to what humans have that AI doesn’t: “What humans have that Watson, IBM's earlier chess
champion Deep Blue, and all their electronic predecessors and software successors do not have
and will not get is the sort of thing that makes song, romance, smiles, sadness and all that jazz”
(n.p.). In fact, for Borenstein and Robertson, these human abilities are things not easily
accomplished in programming: “It's something the experts in computers, robotics and artificial
intelligence know very well because they can't figure out how it works in people, much less
duplicate it. It's that indescribable essence of humanity” (n.p.).
MACHINES AND MORAL RESPONSIBILITY
Even though the concept of a machine actually thinking is debatable and the conclusions may have
potentially calamitous results for humanity, AI is conceived in such a way that threatens, in
particular, our “indescribable essence of humanity.” Indeed, Dator explains, “some futurists
believe that humanity is about to be surrounded by all kinds of novel intelligent beings that will
demand, and may or may not receive, our respect and admiration. At the present time, however
much they might ‘love’ their technologies at one level, most people treat technologies as dumb
slaves that are meant to serve humans’ bidding” (52).
One way to approach AI in the web of human interaction is to borrow from the field of
ethics and consider the “agent” status of such entities and by extension, the moral status of such
an agent. To be an agent is to be something that acts within the world and has the ability to have
an effect on other entities. Further, to be a moral agent is to be an agent with the ability to make
judgments based on reference to right and wrong. In reference to defining AI, some thinkers
sidestep consideration of intelligence by describing artificial agents (AA). Floridi and Sanders
define an AA as “an agent that has its ontological basis in a human constructed reality and
depends, at least for its initial appearance, on human beings’ intervention” (60). An Artificial
Intelligence is assumed to be a sub-category of AA. Again, using Floridi and Sanders’
(2001) taxonomy, an AI would be an Autonomous Artificial Agent (AAA): “an agent that has
some kind of control over its states and actions, senses its environment, responds to changes that
occur in it and interacts with it, over time, in pursuit of its own goals, without the direct
intervention of other agents” (60). To be a moral agent does not require that an entity be of
natural or even biological origin, thus alleviating the automatic association with ownership.
With the prospect of sAI fast approaching, as futurists foresee their existence as an inevitable outcome of current computer programming and AI research, I believe it is time to build a
framework of moral responsibility that includes AAs and, by extension, sAIs. Defining the role
of a sAI in any framework of relations will be a challenge especially considering the fact that a
sAI is still a purely theoretical entity and the existing frameworks of moral responsibility are
highly contentious. Rather than approach the concern for moral frameworks by asking what a
sAI is and what moral responsibility is, I want to explore the existing literature about moral
responsibility in relation to AAs and AIs and find that there is still work to be done defining
necessary and sufficient conditions for sAIs as Artificial Moral Agents (AMA). Although we are
not yet at a point when programmers and roboticists can say we have achieved a “truly strong”
AI, someday soon, humans planet-wide will be faced with a profound question that will affect
the way lives are lived, rights are given, and knowledge is understood. That will come with the
official claim to having created or discovered a strong artificial intelligence. If we can imagine,
as Alan Turing did, that the intellectual boundary between humans and machines could be lifted,
we will need to begin imagining how those entities would interact in our ethically structured
societies complete with agents that act and patients that are acted upon. In order to move from
an AI to an artificial agent (AA) it is clear that some concessions will have to be made to accept
that an AI could ever be considered an AA. Indeed, it seems fair to assume that although these
entities have not yet been produced, so far as the public knows, when an AI has been dubbed a
strong AI it must be assumed to be conscious of its actions, to enter the sphere of ethical considerations, and to become an agent, not just a patient or surrogate agent.
Floridi and Sanders define an agent as “a system, situated within and a part of an
environment, which initiates a transformation, produces an effect or exerts power on it over time,
as contrasted with a system that is (at least initially) acted on or responds to it (patient)” (60).
For Floridi and Sanders the agent initiates a change that affects the patient. In this case, an agent
can perform any number of tasks that bring about transformation either in information or in the real world, so Floridi and Sanders refine their definition in a non-species-specific way. For this
discussion, the sAI I am imagining as a future entity with which we will contend would be an
AAA, an artificial and autonomous agent, as per the Floridi and Sanders taxonomy. An
“autonomous” agent in this case, according to Floridi and Sanders, is “an agent that has some
kind of control over its states and actions, senses its environment, responds to changes that occur
in it and interacts with it, over time, in pursuit of its own goals, without the direct intervention of
other agents” (60). They eliminate the question of whether or not the AI is intelligent.
This taxonomy still raises several questions about intentionality, intelligence and
freedom, but Floridi and Sanders direct their readers to consider instead a parallel to a pet’s
behavior: “Artificial ‘creatures’ can be compared to pets, agents whose scope of action is very
wide, which can cause all imaginable evils, but which cannot be held morally responsible for
their behavior, owing to their insufficient degree of intentionality, intelligence and freedom” (sic.
61). For Kenneth Himma, professor of philosophy at Seattle Pacific University, agency in an
artificial system is no longer a question of whether it is actually a moral agent or just acting like
one, as he calls upon the classic problem of other minds: “If something walks, talks, and behaves
enough like me, I might not be justified in thinking that it has a mind, but I surely have an
obligation, if our ordinary reactions regarding other people are correct, to treat them as if they are
moral agents” (28).
Floridi and Sanders argue in their 2004 article “On the Morality of Artificial Agents” that
using observable Levels of Abstraction (LoAs), or behaviors, one can determine whether an
entity is, first, an agent and, second, a moral agent. For them, this can apply to all sorts of
entities, but is particularly useful for AAs. Wanting to avoid all use of species-centric and
“fuzzy” terms, Floridi and Sanders choose the following criteria for agency:
(a) Interactivity: an entity and its environment can act upon each other;
(b) Autonomy: an entity is able to change its state without direct response from the
environment, and hence independence.
(c) Adaptability: an entity changes its own rules for state changes. (357-358)
Having met those criteria, an agent, whether artificial or natural, is then observed for its level of
moral interaction. Floridi and Sanders (2004) attempt to fend off objections about moral
responsibility, especially for AAs that cannot be the targets of punishment, or cannot feel the
effects of punishment: “holding them responsible would be conceptually improper (not morally
unfair)” (367). In a classic functionalist move, Floridi and Sanders propose another LoA with
which to measure an entity rather than take on the meaning of the term “responsibility.” For
them, “[This phenomenological approach] implies that agents (including human agents) should
be evaluated as moral if they do play the ‘moral game’. Whether they mean to play it, or they
know that they are playing it, is relevant only at a second stage, when what we want to know is
whether they are morally responsible for their moral actions” (2004, 365). Effectively, Floridi
and Sanders have removed both the question of responsibility and intentionality from the debate.
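One crude way to picture this behavioural test is as a checklist applied to whatever can be observed at a chosen level of abstraction. The sketch below is my own illustration, not Floridi and Sanders’ formalism; the observation flags are assumed stand-ins for whatever behaviour one actually records at a given LoA.

```python
from dataclasses import dataclass

# A rough illustration (not Floridi and Sanders' own formalism) of treating
# agency as a checklist of behaviors observed at some level of abstraction.

@dataclass
class Observation:
    interacts_with_environment: bool   # (a) interactivity
    changes_state_unprompted: bool     # (b) autonomy
    revises_its_own_rules: bool        # (c) adaptability
    actions_morally_qualifiable: bool  # does it play the "moral game"?

def is_agent(obs: Observation) -> bool:
    """An entity counts as an agent only if all three agency criteria hold."""
    return (obs.interacts_with_environment
            and obs.changes_state_unprompted
            and obs.revises_its_own_rules)

def is_moral_agent(obs: Observation) -> bool:
    """A moral agent is an agent whose actions can be morally qualified,
    whether or not it means to play the moral game."""
    return is_agent(obs) and obs.actions_morally_qualifiable

# A thermostat-like device: interactive, but neither autonomous nor adaptive.
print(is_moral_agent(Observation(True, False, False, False)))  # -> False
```

The point of such a checklist, as Floridi and Sanders stress, is that nothing in it mentions consciousness, intentionality, or biology: moral agency is read off behaviour at a level of abstraction, which is precisely why it can extend to AAs.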
However effectively thinkers are able to explain and explore AI or AA, there still remain questions for the everyman.
Considering the potential, and highly likely, repercussions of excluding sAI from our
self-assigned right to personhood, I believe a new phrasing and concept of personhood should be
considered. How we treat this Artificial Intelligent Mind (or synthetic mind – whether biological
or built from silicon) has the potential to bring about great destruction or great benefit to humans
planet wide. I choose the phrasing “Artificial Intelligent Mind,” or AIM, to encompass both
embodied and disembodied synthetically created beings, specifically because a “Mind” does not
require what humans depend on for what is considered consciousness – the organ of the brain.
Philosophy accepts that we can never fully understand the mind of another being (the problem of
other minds), but we must also accept that the mind in question might not even manifest in a biological, brain-like form. Understanding this, we should consider that any individual within a species, whether synthetically formed or “natural,” who exhibits the following (in no particular order) is eligible for personhood rights (assuming the ability to ascertain these
qualities):
1. Expression of autonomous desires, in other words the ability to work as an individually
responsible agent; 20
2. Exhibition of basic requirements of “Star Trek Sentience” (i.e. being aware of and able
to understand and interact with its environment while simultaneously displaying
wisdom);21
3. Awareness of self as a continuing entity made up of memories and experiences (as adapted from Michael Tooley’s argument in “Abortion and Infanticide”).
20 “Responsible agent” is a slippery term that requires further definition in the future. For example, just because a person is deemed to be responsible because they have reached a certain age in the United States does not mean that they are able to act in a responsible way. Although they may be held accountable, in a legal sense, they may not have an adult level of maturity, but these, too, are unclear distinctions.
21 “Wisdom” is another subjective term, loosely defined by the culture within which the word is used. Suffice it to say, in this case I use “wisdom” to mean the ability to reflect on and remember one’s own actions and the actions of others in relation to each other and, furthermore, the ability to learn from those actions, perhaps even to teach others what has been learned. But again, “wisdom” should be further defined to help avoid ambiguity.
To clarify my reasoning in number two: I choose to use the phrase “Star Trek Sentience” rather
than the more traditionally philosophical term “sapience” because much of the fiction
surrounding AI and other such AIM (embodied and disembodied alike) has adopted the Star Trek
version of sentience. Using “sentience” in film or other media implies some intrinsic value
assigned to the being in question, whereas “sapience” implies only human characteristics.
Accepting that any being, whether synthetic or biological, is eligible for personhood rights means
that we are beholden to grant such rights; otherwise we would be jeopardizing our ethical responsibilities. Furthermore, I believe that the right to life and avoidance of harm should be extended to all members of the species in question on a prima facie basis when at least a
single individual within that species displays the above qualities.22 Of course, eventually
scholars will need to be able to imagine a new way to group beings outside of the “natural”
world, but such work is already underway with new species being discovered that don’t fit our
existing categories.
22 A note about the “pre-birth” states of any species: it is my contention that an individual entity still in a pre-birth state, one that is entirely reliant on its host parent and unable to illustrate any understanding of the self as a continuing state, should not be considered a full individual eligible for the rights above, including the right to life and avoidance of pain. Rather, such rights should be determined based on the care of its host body, thus awarding such rights only when the host elects so. I am aware of the potentially inflammatory implications of such an assertion, but these will need to be addressed if a non-species-centric definition of agents is to be reached.
“Avoidance from harm” is an important concept to be considered, especially when concerned with an AA or AIM. That an AIM is man-made is part of the definition of artificial intelligence – artificialness is part of the name. The common understanding of “artificial” is that it is non-natural, not of nature, or not made by natural means. This is generally taken to mean man-made and implicitly non-biological, an artifact. Although there are several cases from
fiction in which the artificial is not man-made, but rather alien-made or self-made,23 it is
generally assumed that the artificial is non-biological and man-made, or made by humans. There are
also cases in which the artificial is biological but described as artificial because it is considered
“imitation” or “contrived.” Clones or human-made viruses are common examples. For the sake
of this discussion, the essence of android is non-biological and made by human skill, thus its
mind would be of that making as well. By its very nature, AI is implicitly connected with some
artificial-ness, therefore threatening a neutral definition of AI.
While most man-made objects can be safely assumed to be tools… there are some
objects/entities that challenge such a notion. The current incarnation of a vacuum cleaner, for
example, is a household tool that turns on and off at the will of the owner/agent, or Parent Agent.
The owner/operator then uses the vacuum as a tool when needed, pushing it clumsily about their
house when cleaning is necessary. Without being plugged in, powered electrically, and propelled forward with human power, the traditional vacuum does nothing on its own; it is not eligible for any status other than property. It is easy to assume at first that a vacuum cleaner
could never be eligible for a status other than property, let alone be considered to have a
consciousness.
But what if, for example, our vacuums could clean the floors on their own, without being
turned on or off by a human? What if that same vacuum cleaner could return to its charging
station and “knew” when the floors needed to be cleaned? These high-tech vacuums, called
Roombas, have been available since 2002 from the iRobot Corporation in the form of three inch
high, disc-shaped automated floor sweeper (“iRobot: Our History”). Although not directed
23
Some examples from fiction feature androids house with Artificial Intelligence created by some long-past superintelligent race of extraterrestrials. Strangely enough, even when the creator is non-human, the artificial entity ends
up being human-like. For example, Star Trek: Voyager episode “Prototype” and Star Trek: The Original Series
episode “I, Mudd.”
75
specifically at the current version of the Roomba, some thinkers press the idea that we should not
be too quick to judge what should be considered within the scope of moral agents. Floridi and
Sanders, for instance, emphasize the importance of avoiding a zoocentric or biocentric
conception of agents or patients, especially when it comes to questions of ethical interactions.
They point out their preference for using the term “harm” rather than “damage” when describing impacts on all involved agents and patients, regardless of their species or artificial nature (57), suggesting one
could “harm” a non-living entity.
This conceptual shift from damage to harm is not simply a minor point to be easily
overlooked. Consider for a moment that “damage” generally implies monetary repercussions
which can be settled between the involved parties: usually one person who “owns” the damaged
entity, and the person who “damaged” the entity. The assumption here is that once the damaged
entity/object is compensated for and/or fixed, the matter is settled. While “damage”
can be repaired with monetary means, “harm” usually implies something much more personal.
Harm can be done between or among persons and cannot be solved with monetary compensation
as easily. For another example, “damage” would be used in the following sentence: The storm
did considerable damage to the crops, and the estimated monetary equivalent could be
determined. In definitions of damage, the concept is reduced to the monetary value and
usefulness of the entity damaged. The impact from one entity onto another can be measured in
terms of a commodity. In contrast, “harm” is generally defined to explain other impacts.
“Harm” could be used to describe the emotional impact on a person, for example, or the suffering
experienced. Harm, in most cases is experienced, and therefore not something that can be
commodified. Even though both “damage” and “harm” turn to one another in defining
themselves, there are clear moral implications – one is money and property based while the other
indicates that a person cannot be compensated entirely with money when harmed.
In an attempt to preempt implied property status of AI based on its artificial-ness,
futurists propose alternative labels. In his article “The Singularity: A Philosophical Analysis,”
David Chalmers reminds us that even as far back as 1965 futurists like I. J. Good proposed that
we consider the ultraintelligent machine a great threat in the coming technological singularity.
For Good, removing “artificial” from the label did not lessen the threat of the coming AI. With the
creation of an ultraintelligent machine, it could then make better machines, and “there would
then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far
behind. Thus the first ultraintelligent machine is the last invention that man need ever make”
(Chalmers 1). Singularitarians and Extropians, like Kurzweil or Good, are not alone in this
prediction, and how they describe this coming AI is telling of their attempt to shape opinions.
Ben Goertzel, Seth Baum and Ted Goertzel, for h+ (Humanity Plus), share their results from a collection of surveys of AI researchers and try to assign a new label to AI. Their findings
“suggest that significant numbers of interested, informed individuals believe that AGI [Artificial
General Intelligence] at the human level or beyond will occur around the middle of this century,
and plausibly even sooner” (n.p.). The writers for h+ emphasize that “the possibility of ‘human-level AGI just around the corner’ is not a fringe belief. It’s something we all must take seriously” (n.p.). Notice how these futurists, while keeping “artificial” in their naming, elaborate with “general” to suggest that it is somehow different from how we have generally thought about AI. They also include that it is “human-level,” perhaps to add more of a human feel to this
otherwise artificial entity.
In defining AAs and AI in a manner that is neither species-centric nor zoocentric, I would clarify
that allowing a being a right to life and/or personhood does not exclude them from pre-existing
laws that maintain a functioning society. For example, just because a hypothetical War-Bot is
allowed personhood rights does not mean that it is exempt from laws that punish murder. If we consider that that same War-Bot is acting on its own independent desires, is aware of its actions and illustrates its ability to express its right to life, but is acting in a way that breaks existing laws, then he/she/it should be punished as such. The question of punishing and/or otherwise reprimanding AIMs who misbehave (just as many humans do) will become a problem
for future generations. While there will certainly be many difficult circumstances for future
generations that cannot be predicted, I believe we can give our future generations the best hope
for friendly negotiations with beings very unlike ourselves by following these guidelines of
personhood assignment.
If we are to believe, as many are beginning to, the futurists’ predictions that human- or higher-level machine intelligence is possible within the next couple of decades, it is time to begin
considering a framework of moral responsibility that includes non-human, potentially artificial,
entities. Philosophers discuss such things as “theoretical entities” and try to put terms to use
which will incorporate moral responsibility for those entities that fall outside of the traditional
human-centric association with intelligence and selfhood. Gips asks: “Would a robot that
follows a program and thereby behaves ethically actually be ethical? Or, does a creature need to
have free will to behave ethically?…Of course, one can ask whether there is in fact any essential
difference between the ‘free will’ of a human being and the ‘free will’ of a robot” (250). Himma
asks a similar question by reminding us that “artificial free will presents different challenges: it is
not entirely clear what sorts of technologies would have to be developed in order to enable an
artificial entity to make ‘free’ choices – in part, because it is not entirely clear in what sense our
choices are free” (28).
Even with the very nature of free will in debate, some thinkers like McCarthy, who takes
a compatibilist view, argue that our future with robots should include free will for those robots.
Indeed, “for optimal functionality, robots’ knowledge of their possibilities need to be made more
like that of humans. For example, a robot may conclude that in the present situation it has too
limited a set of possibilities. It may then undertake to ensure that in future similar situations it
will have more choices” (McCarthy). In McCarthy’s case, accepting that free will is simply a useful way of describing our actions can thus be enough to include sAIs as AAs.
For those like Floridi and Sanders the question of free will for AAs can simply be
dispensed with: “All one needs to do is to realize that the agents in question satisfy the usual
practical counterfactual: they could have acted differently had they chosen differently, and they
could have chosen differently because they are interactive, informed, autonomous and adaptive”
(sic, 366). Just as Floridi and Sanders dispense of concepts like free will when defining AAs,
other thinkers take a similar route for concepts like intelligence and agency. Although there is
no way to guess how society will react to the invention of sAIs, it is clear that new, non-species-specific concepts need to be adopted to incorporate sAIs into our ethical frameworks. Obviously, answers about our own levels of free will, moral action, and ability are not readily apparent and will be further debated by ethicists and philosophers. In the meantime, we grow ever closer to unleashing sAI, and our ethical structures and concepts of free will need to be refined to address the AMAs that will be among us.
If, as the futurists believe, strong AI is “just around the corner,” and roboticists continue to make ever more humanlike robots, our concept of the “Self” must be reevaluated in the coming technoculture. Considering moral agent status for Artificial Intelligence may be a long time in the making; however, there are social changes already taking place. For Turkle, there is a conceptual shift in our understanding of machines: when working with computers, “people tend to perceive a ‘machine that thinks’ as a ‘machine who thinks’. They begin to consider the workings of that machine in psychological terms” (Second Self 20). How and why such a conceptual shift occurs is the focus of her 1984 book, is arguably all the more significant to the discussion today, and will be explored later in Chapter Four, as I consider the android as a
social entity. But first, I return to finding boundaries that are more tangible. Clearly, trying to
find boundaries and definitions for something that is fundamentally without boundaries is
difficult, so the next chapter will return to more solid ground. The body appears as something
that defines our form and our selves – it is our “window through which we see the world.” In the
case of an artificial or synthetic entity, the same is true, but how we understand that body, as
explored in Chapter Three, will also impact our future with artificial entities.
CHAPTER THREE: UNCANNY INTERACTION
AND ANTHROPOMORPHISM
“Artificial intelligence is growing up fast, as are robots whose facial expressions
can elicit empathy and make your mirror neurons quiver.”
-- Diane Ackerman, New York Times (2012).
INTRODUCTION
In Chapter Two I explored the first of two components related to the android, the mind.
While the first component is literally the most intangible element of the android, the second
component is the physical part of the mind/body dualism. This chapter works toward an
exploration of the physical form, the biological or synthetic body, which the AI may be housed
in. Embodiment comes in many forms. This could be in the form of a box or a robotic chassis.
It could also be in the form of a whole spaceship whose humanlike appearance is projected
holographically to the crew, as in the videogame Mass Effect (2007). While the AI can be housed
in a whole array of bodies, or potentially no body or chassis at all, this chapter pays particular
attention to the humanoid body, especially one that behaves in a human-like manner – an
important element which ultimately defines an android as an android. Even if futurists are
considering potential futures in which our minds are removed from our bodies, others contend
that the body will never be left behind. Allucquère Rosanne Stone believes that “bodies will
determine the nature of cognition [and] communities [lived and in virtual reality]” (n.p.).
Another critical posthumanist, Anthony Miccoli, explores how our bodies and agency are
“traumatized” as we become posthuman and take technology into ourselves (x-xi). But even as
we define technology as part of ourselves or separate from the self, I ask, what will that
technology look and feel like?
Robot designers and marketers must keep many things in mind when designing a product
for the home, and androids (AI included) pose unique challenges to design and marketing.
Because consumers want technology that is both helpful and predictive, it is likely that the
market will drive further development into sAI. But as our programs become smarter, there is a
growing interest in robotics that appropriately embody the “smarts” of AI. Will we want things
that interact with the real world or do we want entities that are disembodied but still achieve
tasks for us in the real world? In partial answer to this question I first consider what I call the
“HAL Effect.” Because embodiment can come in so many forms, I first want to consider the
consequences of disembodied artificial intelligences. Using the films 2001: A Space Odyssey
and Moon (2009) I set the stage for a more complete deconstruction of the HAL effect in the
Terminator series by focusing on the malevolent program, Skynet.
Considering the implicit belief that a disembodied artificial intelligence is malevolent, the
field of Human-Robot-Interaction (HRI) explores how to most effectively embody artificial
entities so that they will be more likeable. In exploring HRI, anthropomorphism is accepted as a
crucial step to understanding the likeability of an artificial entity – whether animate or inanimate.
Because humans have a great capacity to project human-like identities to entities, HRI explores
which conditions exactly must be in place for optimal human-robot interaction. While most HRI
is studied using humans interacting literally with robots-in-the-making, I choose to explore HRI
through the Terminator, especially given the rich examples of human-robot-interaction in
Terminator 2: Judgment Day (1991). In a discussion of the film, I introduce the concept of the
uncanny valley, proposed by roboticist Masahiro Mori. For the remainder of the chapter, I
consider the primary elements of robotics design that are considered in relation to the uncanny
valley.
Humans have a remarkable ability to attribute human intentionality and mental states to
the many non-human entities (living and artifact) that abound. In most cases that attribution
requires a physical form to interact with, but that is not the only way intentionality and mental
states are attributed.24 Without a physical form to interact with, as in the case of an AI or Artificial Intelligent Mind (or AIM), anthropomorphism is seriously hindered. But before I consider our ability to “see human”25 everywhere, especially in the physical, I want to explore the role that
embodiment plays in human interactions with artificial entities. For designers of both robotics
and AI, how the entity is presented to the consumer is an important first step in conception and
design and is considered crucial for positive Human-Robot Interaction, also called HRI.
EMBODIMENT: TO BODY OR NOT TO BODY?
Considering how an AI will interact with humanity, the form it takes will influence our
time with it. Our ability to project human-like features onto the non-human is one part of how those relationships will be built. Another part is built around the trickier question of embodiment, and to begin this exploration I use examples from popular fiction. In the case of AI,
regardless of how advanced, productive, or well-programmed an AI is in fiction, its embodiment
– whether it appears as a humanoid android or as a booming voice over a loudspeaker – together
with how it behaves socially with humans, gives a reliable predictor of its alignment toward good
or evil. Just as HAL’s disembodiment lends itself to evil in 2001: A Space Odyssey, the
assumption is that disembodied AIs lack the capacity to understand human social behavior.
24 “Intentionality,” according to the Stanford Encyclopedia of Philosophy, refers to “the power of minds to be about, to represent, or to stand for, things, properties and states of affairs” (Jacob). Intentionality is not synonymous with “having intentions.” In this case, to have intentionality is to have certain mental states that are about, or aimed at, the world. You may intend to do something, but your mental state has intentionality about that which you want to do.
25 The concept of “seeing human” comes from Epley, Waytz and Cacioppo, and Scott McCloud, along with others.
The importance of embodied social agents, i.e., other humans who are physically present, is well known and has been explored by psychologists for decades, but this phenomenon has only recently been recognized as crucial for roboticists to consider for AI implementation on the mass
market. Of particular note is the study by Sara Kiesler, et al. in which they measured differences
in the social facilitation of the artificial agent when interacting with humans, hoping to show that
“face–to–face interaction with a humanlike machine should prompt greater anthropomorphism of
the machine” (171).26 In their conclusion, “results indicate that interacting with the embodied
robot was a more compelling experience for participants and elicited more anthropomorphic
interaction and attributions [than those participants working with a disembodied robotic body,
projected onto a screen]. Participants spent more time with the robot, liked it better, attributed to
it stronger, more positive personality traits, and said it was more lifelike” (Kiesler, et al. 171).
Furthermore, Kiesler, et al. come to the determination that “embodiment and presence could
make a machine salient and important, encouraging anthropomorphism” (171).
But simply saying that an entity needs to be embodied in order to facilitate positive
interactions between robot and human participants, in other words, fostering feelings of
“likeability” and attributing “positive personality traits,” is insufficient. Embodiment comes in
many shapes and forms. While humanoid robotics is a very popular field of robotics, especially
in Japan, another field of robotics development seeks an accord between the humanoid and the non-human for better human-robot interaction; that is, a robot does not have to look human for the best interaction. This approach is exemplified by the work of
26 Kiesler’s team based their understanding of social facilitation on Robert Zajonc’s 1965 proposal that, in the presence of others, people interact with different levels of emotional involvement. If the robot exhibited human-like traits, Kiesler’s team predicted, the people interacting with it would have an emotional response similar to that of interacting with a human agent. While Zajonc considered levels of arousal among the human participants in groups, which in his conclusion ultimately led to distraction, Kiesler’s team considered anthropomorphism as a whole.
Cynthia Breazeal, who specializes in making “Personal Robots” for the home and family. Her
work includes robots named Kismet, Leonardo and, most recently, Jibo. Leonardo is a
particularly interesting robot for discussions of “like-ability.” Leonardo looks like a teddy bear
with big pointy ears and ape-like hands. To some, Leonardo resembles a not-so-cute Gizmo
from the 1984 horror comedy film Gremlins (Temple).27 Others may feel this critter is “cute” as
it reaches out to grasp objects in the world, but clearly, roboticists are working on finding the
right balance for embodied AI.
Embodiment could be in the form of light and photonic particles, like the holograms in
Star Trek. In fact, embodiment is really more a matter of scale, not necessarily a question of the
shape of those physical boundaries. To the best of our scientific knowledge, all of what we
consider embodied entities have boundaries, be they cellular or spatial. The nearest entity that
could be considered entirely disembodied would be God, but with only one entity “in that
category” I believe it would be productive to include AI in the disembodied category – perhaps
something more similar to what could simply be described as an “agent” or software. I’m by no means making the claim that an AI is equivalent to God; however, the resemblance in some fictional cases is striking. For example, when a malevolent disembodied AI like Skynet, in the Terminator story arc, has the power to cause complete global annihilation with a single thought that triggers Armageddon while simultaneously holding power over a robot army and time itself… that feels awfully godlike.
27 Gizmo, in the popular Gremlins film (and its subsequent sequel), is one of the critters known as Mogwai. When a young boy gets a hold of Gizmo, he is told he must follow certain rules in the care of Gizmo. Through a series of happenstances, the rules are broken. Because of these broken rules, Gizmo multiplies into many other Mogwai, who then all go through a metamorphosis and change into impish reptile-like monsters, called “gremlins.” By using this reference, Temple is likely implying that Leonardo looks more like a monster hidden behind a cute furry exterior.
Robin Stoate, in his article about the 2009 film Moon and caring computers, offers a nice
introduction to embodiment and AI, although he is reluctant to use the word disembodied. In the
film, a lone human, Sam (Sam Rockwell), works on a mining station on the Moon for a three-year contract. During that time, he has no direct communication with other humans – even his messages to and from Earth are on a delay. Although he has no human companions, he has an assistant in the form of a robot named GERTY (voiced by Kevin Spacey). GERTY is about the size of a vending machine and attached to the ceiling of the station, but he still follows Sam around; although he can only emote with his voice and an assortment of smiley or frowny faces, he appears to care for Sam’s well-being (Figure 3).
Figure 3: GERTY Greets Sam with a smile and a cup of coffee, “Good morning Sam. I’m here to help you.” (Screen shot Moon, 2009.)
Stoate opts for a generalized description, labeling GERTY and other disembodied AI as a “free-floating essence” in a “non-anthropomorphic mode.” I take his
point that yes, these systems are “bounded within the more recognizable, traditional shape of the
immobile computer system” (204), and yet I think it can be more concise. In describing the
possibilities for embodiment of AI, Stoate differentiates between “computer-instantiated” AIs
and AIs embodied in androids or “more recognizably embodied” forms. But I still believe the
disembodied label is more descriptive, especially when considering the scale of the entities in
question.
Consider for example, the difference between a parasitic entity and the body in which it
exists. From the perspective of the parasite, the body it inhabits is vast and seemingly endless.
Should that macro-body become aware of the micro parasite and address it, the effect would be
much like the voice of God. I’m not intending to compare humans to parasitic entities, but
consider how Dave must have felt within the confines of the spaceship in 2001: A Space Odyssey
when HAL’s ability to control the entire ship renders Dave’s actions nearly inert. The deaths of his crewmates seem equivalent to a human swatting a fly. The justification that “this mission is just too important for me to allow you to jeopardize it” feels insufficient to the audience, and yet, to HAL, its/his actions were necessary. HAL eliminated the human annoyance. Michael Webb describes this feeling of smallness in the face of such AI: “Technology no longer
breaks down doors, it infiltrates, or worse, it is all around us, invisible and malevolent…. When
Hal announces that he is incapable of error and decides he knows better than the crew how to run
the ship, they are compelled to perform a lobotomy in order to save themselves” (6).
Semantics aside, Stoate’s point that “the fact that these subjects lack a cohesive, visible
body (and are, in many cases, seen to be omniscient through multiply-deployed camera ‘eyes’)”
is useful, and Stoate takes his point about AI further into understanding how AI may be
considered in the future. For Stoate, these disembodied AI “[are] given as a constant source of
fear,” generally because of the fact that they do “lack a cohesive, visible body” (204).
Holograms, while perhaps not as cohesive as other embodied entities, can be useful for considering the potential for human-robot-interaction. Relke writes that the “potential [for holograms to serve] as bridges across the human/posthuman divide is considerable” (116). In the context of Star Trek, holograms are generally treated as entertainment and part of fictional stories and
scenarios within holodecks or holo-suites. In some cases, they are given limited levels of
personhood. The Emergency Medical Hologram (or EMH, played by Robert Picardo) of Star
Trek: Voyager (1995-2001) is an integral member of the crew and over the course of the series
the narrative explores his identity. At first he is confined to sickbay (the starship’s equivalent of
an infirmary) until a “mobile emitter” is made.28 His interactions with Captain Janeway (Kate
Mulgrew) reveal the struggles of a nonhuman entity hoping for acceptance. For Relke,
Janeway’s interactions with the Doctor indicate a contradiction: “on the one hand [Janeway]
struggles to contain the posthuman within the circle of the liberal humanist assumptions…; on
the other, often putting [herself] at risk, [she] actively engages in redefining the human so that it
opens out onto the posthuman” (85). Relke uses the episode “Fair Haven” as an example. The
Doctor must defend the “real-ness” of a fellow hologram for whom Janeway has feelings – she is
skeptical of the posthuman-human relationship. The Doctor declares, “He’s as real as I am!
Flesh and blood, photons and forcefields – it’s all the same, as long as your feelings are real…
He says something that makes you think. Does it matter how his molecules are aligned?” (“Fair
Haven”). This seems to be an argument in support of the functionalist perspective for a theory of
mind, as described in Chapter Two.
Not all Star Trek characters adhere to the belief that “intelligence is as intelligence does”
as illustrated by Schick’s deconstruction of crewmates’ beliefs across the series. For Schick,
Star Trek episodes act as thought experiments: “putting various theories of personal identity to
the test by examining their implications in imaginary situations” (218). In this case, he explores
personal identity and finds that many characters through Star Trek history seem to embrace a
28 In making his own holo-novel, a fictional story of the “struggles of a hologram,” the Doctor chooses to make the user participating in the story wear a heavy and cumbersome backpack to symbolize the “weight” of the holo-emitter. In doing so, he explained that he used it as a metaphor: “A symbol of the burdens that I live with every day.” He wanted the user to understand that it’s a “constant reminder that you’re different from everyone else” (“Author, Author”).
humanist perspective on identity. For example, Captain James T. Kirk (William Shatner) of the
original Star Trek (1966-1969) appears to agree with Dr. Julian Bashir from Deep Space Nine in
the belief that the self cannot exist without the original body or, specifically, brain (“Life
Support”). Schick uses the episode “What Are Little Girls Made of?” as his example, in which
the Enterprise discovers a missing person, Dr. Korby (Michael Strong), who turns out to be an
android. The android-Korby seems to be like the human-Korby it was modeled after in every
way, but Kirk leaves the android on the planet, declaring, “Dr. Korby… was never here.”
According to Schick, “Since the android seems to have the [original human’s] memories, beliefs,
and desires, Kirk’s view seems to be that our bodies make us who we are: we cannot exist
without our bodies” (218). Besides basic original embodiment being a requirement for selfhood,
many captains and their senior officers believe, according to Schick, that “a disembodied
existence is impossible… Having a particular physical constitution is essential to you. Once that
constitution is gone, you no longer exist” (emph. added 220).
THE HAL EFFECT
While Star Trek offers rich opportunity for discussing the human-self in its many
potential posthuman forms, the embodiment of AI can also be examined through other works of fiction. Indeed, for the philosopher of mind, the Terminator series gives a unique opportunity to explore embodiment of AI and the necessity of social cognition in human-android interaction. AI and robotics designers all seem to agree that to encourage successful Human-Robot-Interaction (or HRI), AI should both be embodied and behave in a human-like manner.29 Fiction
explores this idea and presents challenges to this rule. While it might seem trite to simplify the
many and diverse types of AI down to the good vs. evil trope based on the embodiment and
29 MacDorman and Ishiguro, along with others, are actively exploring what they define as competent social presence, otherwise lacking in current robotics development.
behavior of the artificial entity, this dichotomy is easily accepted and often expected by the
science fiction audience. Here I take that expectation a step further to help explain how audience
members identify with and potentially empathize with the Androids and AI in fiction based on
their physical or nonphysical construction, together with their imagined social cognition.30
This acknowledgement of conventions in science fiction is endorsed by other film critics.
For example, Stoate describes his belief that “[AIs] have a particular set of representational
conventions in fiction…. the non-anthropomorphic mode of embodiment taken by most AIs in
fiction assigns them a particularly belligerent character…” and further that “these intelligent,
self-aware subjects exist as free-floating essences, nevertheless bounded within the more
recognizable, traditional shape of the immobile computer system – and they almost always seem
to inspire a certain discomfort, or even outright paranoia” (204). With such boundaries presented
in fiction defining how humans will potentially interact with AI, designers and marketers of AI
and Androids in today’s market need to be acutely aware of these undermining factors, or the
HAL Effect.
In 1984 the first Terminator film introduced audiences to two of the most potentially
threatening AIMs: one is the Terminator himself, the first one sent from the future to kill the
mother of a future leader of the rebel force; the second is in the form of Skynet, the disembodied
global “defense” network with power over American nuclear warheads. Even though the
opening introduces the audience to two potential antagonists (two “humans” appear in
mysterious lightning balls, naked and alone), the film clearly establishes how the social
30 In general it should be assumed that the androids discussed are imbued with an AI. It is true that there are several
androids that are simply very advanced robotic human-shaped bodies, but that form does not always assume AI.
These robots appear both in fiction and in reality as programmed for specific tasks, but not necessarily intended to
duplicate human behavior. Also, AI here assumes a strong AI, as described in Chapter Two. In fiction, this is
generally accepted as an AI that has reached near-human abilities and/or sentience.
differences between the two give a guide to their moral alignment – even when it is not yet
established which is human and which is not. Sure, one man (later identified as Kyle Reese,
played by Michael Biehn) breaks some laws like stealing from cops and a homeless man; but he
at least abides by social norms – he stays out of people’s way and generally doesn’t do things to
make people angry. This man also exhibits common instinctual human behavior like running
and hiding rather than intimidating and killing. He shows pain and we see his exposed scars. On
the other hand, the muscular and physically near-perfect Arnold Schwarzenegger, the
Terminator, is described as “a couple cans short of a six pack.” This tall and muscular man kills
with indiscriminate coolness within the first five minutes. As the film progresses, we are
presented with even more proof that the huge muscle man “has a serious attitude problem”: from
loading a shotgun and killing a salesman, to shoving innocent phone booth users.
Even after Reese’s identity as human is revealed to the protagonist, Sarah Conner (Linda Hamilton), with his famous phrase “Come with me if you want to live,” the malevolence of the Terminator must be further established. In Reese’s description of the Terminator to Sarah, he emphasizes that “It can’t be reasoned with, it can’t be bargained with… it doesn’t feel pity or remorse or fear… and it absolutely will not stop. Ever. Until you are dead.” In case Reese’s
description doesn’t prove how inhuman the Terminator is, we are told that it is programmed by
the “enemy” Skynet – a computer defense system. Ultimately, the only way to “win” against this
first Terminator (supposedly the only one in this particular time line at this point in the story arc)
is to systematically disassemble the android body. As if to further emphasize the Terminator’s
non-human status, Sarah and Reese systematically eliminate the human-like parts from skin to
legs and ultimately crush the machine by the end. In fact, perhaps a bit ironically, the
Terminator is crushed by a cold robotic press. Driven into the press by his directive to kill Sarah Connor, even self-preservation is not part of his programming, further illustrating that this
Terminator is unable to act beyond his programming from Skynet.
The first Terminator film expresses a common fear of the embodied AIM as
Schwarzenegger personifies the “mindless” killer, who kills without remorse or any trace of
emotion. Considering my list of requirements from Chapter Two, this first Terminator is not
even able to meet some of the basic requirements for sentience, such as independent thought and
expression of emotional reactions and/or desires beyond his programming. In fact, who or
whatever programmed the Terminator, in this case the supposedly ultra-smart computer network
Skynet, would be responsible for the actions of the robot. Many audience members displace
their fear of the evil-future-being onto the Terminator itself without considering that its body is simply following the programming assigned to it. It is not an autonomously thinking being; therefore, the programmer is to blame and would be the better subject of the audience’s apprehension.31
The fear of the disembodied AI or Artificial Intelligent Mind (AIM), as opposed to one
housed within a humanoid robotic body, is further illustrated in the second film, Terminator 2:
Judgment Day. The main characters struggle with the existence of the AI called “Skynet” that
will supposedly initiate a nuclear holocaust in the “near future,” and in the second film they set
out to kill the man who will create the AI Skynet in the future. Although many of the details of
Skynet’s existence are left out of the story, the audience understands the danger of an AI
network, so much so that the audience more easily condones the act of murder to protect their
31 In this paragraph my pronoun use of “it” rather than “him” or “he” is intentional. This Terminator is not eligible
for personhood rights based on my guidelines, so I can safely call it an “it” without violating its individual identity.
This particular use of pronouns when discussing AIMs is an interesting part of the rhetoric surrounding AIMs in
fiction. You will notice human characters referring to an AIM that they feel particularly comfortable with, allowing
an anthropomorphic assignment of pronoun rather than the non-human “it.”
future selves. Perhaps their fears are not misplaced: from finances and socializing to networks
and defense systems, most people are aware that our lives are becoming more digital and the
threat of having something “artificial” operating and controlling those parts of our lives can be
very frightening. Rather than fearing the eventual AIM that has the potential to emerge from our
immense global, digital connections, I think it necessary to foster feelings of acceptance,
understanding and perhaps even respect for entities that request and even require personhood.
The third film, Terminator 3: Rise of the Machines (2003), brings audiences even closer to
understanding the moral delineation and opposition between embodied minds and malevolent
machines as we are introduced to Skynet as a far-reaching disembodied entity that identifies us
as the enemy. In fact, Skynet’s autonomy is described as an “opening of Pandora’s box.” Here,
John Conner (Nick Stahl) is once again saved from certain destruction by a malevolent
Terminator, the T-X (Kristanna Loken), by a repurposed T-101, captured by the resistance army
in the future. This T-101 has again been “disconnected” from Skynet’s destructive
programming, but the battle of mind over matter in an embodied machine becomes more
poignant at the end of the film when the T-101 fights the T-X virus designed to return the T-101
to its directives from Skynet – to kill John Conner and friend, Kate Brewster (Claire Danes).
The T-101 has learned to overcome the virus by will of mind… a mental state we assume is
powerful enough to shut itself down, preventing the destruction of John and Kate.
Even as the Terminator story continues into the fourth film, Salvation (2009), we can
further see how embodiment is understood. Terminator Salvation, set mere years after the fall of human civilization as the adult John Connor (Christian Bale) fights against Skynet’s robotic army, not only introduces the “face” of Skynet but also defines the strength of humanity versus the machines. In the opening we are introduced to the legendary John Connor, now grown up and an emerging leader of the resistance, who must confront his definition of machines as he decides whether or not to trust a supposed human with machine parts. Having died several years earlier, Marcus Wright (a newly introduced character to the Terminator story arc, Sam Worthington) has been augmented by Skynet with what is described as “a secondary cognitive center” – he’s both human and machine, and this combination of elements is pointed out as often
as possible. At first Marcus is automatically believed to be “evil”: because he has machine parts,
he must be an instrument of Skynet’s evil plan. The humans of Post-Judgment Day have
established a firm belief in humans as essentially different from machines and have survived
only through that belief. Marcus’ hybrid body presents a challenge to John Connor’s belief, yet not everyone sees Marcus as a threat; some see him rather as a friend and member of the resistance. Blair
Williams (Moon Bloodgood) tells John Conner that she “saw a man, not a machine” and is thus
sure that Marcus is worth saving from the tortures of the anti-machine resistance warriors.
By the conclusion of the film, Salvation brings the audience literally face-to-face with
another legendary character: Skynet. This time, the audience sees the full force of the HAL
effect. Marcus, standing before a large set of screens, sees the familiar face of a Cyberdyne
representative, Dr. Serena Kogan (Helena Bonham Carter), whose face then morphs to the young
Kyle Reese (Anton Yelchin), John Conner, then back to Kogan. The talking head tells Wright
that it can take any form that is pleasing to Marcus, then “she” insists that Marcus can be nothing
but a machine, that his secondary cognitive center overrules his human side. Rather than
succumbing to his machine nature, Marcus pulls out the “chip” connecting him to Skynet and
declares, “I’m better this way.” With that dramatic flurry and the destruction of the Skynet
screens, Salvation confirms the HAL effect. Humans and even not-quite-humans, like Marcus
Wright, stand on the side of good while fighting the evil disembodied Skynet – the only way to
be free from the disembodied Skynet is to remove the wireless link. Even though Marcus is still
part machine, he is no longer subject to the imposing program of Skynet – similar to the T-101 in
Rise of the Machines.
UNCANNY INTERACTION: THE HUMAN IN THE HUMANOID
Whether by design or simply wildly entertaining speculative story-telling, the Terminator
story arc has illustrated that how the AI or AIM is embodied has an impact on how audiences
understand the needs, desires and intentions of the artificial entity. But while fiction explores a
potential future with AI, robotics designers are working to find the best results for Human Robot
Interaction. Foremost, when it comes to robotics with social behavior, or the illusion of social cognition, the embodiment of the AI must be considered. While the HAL Effect may
negatively affect human interaction with a disembodied AI, anthropomorphism plays a crucial
part in HRI.
While the disembodied AIs have the potential to bring forth fear and paranoia as
explored with Skynet, embodied AI are more tangibly eligible for anthropomorphism – in
positive ways through appropriately human appearance and behavior. The fact that an android is
made to look human is not an accident or a simple exercise in hubris. Humans have a natural
inclination to anthropomorphize anything in our environment as a way to make connections with
the environment that we can somehow understand. The same is true with the robot. When it
looks and acts human, the theory is, we will be more inclined to anthropomorphize and have
positive interactions with it. Anthropomorphism occurs on two fronts with mechanical wonders.
On the one front, designers and roboticists are striving to make their inventions more humanlike
in appearance and behavior. On the other front, humans interacting with or observing androids, imagining human-likenesses both in appearance and in behavior, are inclined to attribute similar mental and emotional states. But this inclination is based on the assumption that the human-robot-interaction is not hindered by the cognitive dissonance of the uncanny. In essence, observers are anthropomorphizing the entities they interact with, and if the roboticist and AI programmers have done their job, the observer will experience limited to no feelings of apprehension. This measurement of comfort to discomfort when observing an entity that is “too human but not quite” is described as the “uncanny valley.” If the uncanny valley, or the measure of comfort gained but then lost in the valley, has been successfully bridged, interaction with androids will become naturally humanlike as human observers “see human” everywhere.
The theory in HRI studies goes that, through that process of anthropomorphism, humans
are more likely to interact positively and even share feelings of empathy for a non-human agent.
This feeling of empathy for the nonhuman is an easy move to make because of our species-centric talent to “see human” everywhere. “Anthropomorphism itself,” as defined by Nicholas Epley, et al.:
Involves a generalization from humans to nonhuman agents through a process of
induction, and the same mental processes involved in thinking about other
humans should also govern how people think about nonhuman agents… Indeed,
the same neural systems involved in making judgments about other humans are
also activated when making anthropomorphic judgments about nonhuman agents.
(“Seeing Human” 867)
Anthropomorphism is predicted to improve HRI in the future: “The inner workings of most
modern technological agents are every bit as obtuse as the mental states of biological agents, but
the incentives for understanding and effectively interacting with such agents are very high”
(Epley, et al. “Seeing Human” 879). Essentially, anthropomorphism consists of accepting two generally human-specific characteristics as part of a non-human agent. On the one hand, a synthetic non-human agent has physical characteristics similar to humans – for example, a car
that has “eyes” for headlights. On the other hand, anthropomorphism can include the ability to
believe that an entity has a similar mind, and by extension human emotion – for example, a
thermostat that “wants” to keep the room at a particular temperature. This concept of attributing
a human-similar mind to a human-like synthetic is also called “shinwa kan” in Japanese by
robotics theorist Masahiro Mori. Mori is known as the father of the “uncanny valley” theory, which seeks to plot this move from feelings of discomfort to ease when interacting with a humanoid robot. The theory proposes that as an entity becomes more human-like in appearance, viewers’ comfort grows until, at a certain point of near-human likeness, it gives way to discomfort. This drop from feelings of comfort to feelings of discomfort is called the “valley,” for when charted, the deep dip into unease then returns to comfort as the entity
becomes completely human-like. Of course not everyone accepts that attributing a similar mind
to something entails the belief that it actually has a similar mind. Considering the thermostat for
example, while I might say it “wants” to keep the room warm, I don’t actually believe that it
wants, i.e., has wants and desires similar to my own. Rather, the use of “want” in that case could
be used metaphorically. Regardless of how conscious I am of the word choice in such a case,
there are thinkers who would argue that such an association is all that is necessary for successful
human to non-human interaction.
We are naturally able to make anthropomorphic assignment to non-human entities, and
robotics designers are trying hard to master and facilitate anthropomorphism. Waytz, et al.
explain:
People show an impressive capacity to create humanlike agents—a kind of
inferential reproduction—out of those that are clearly nonhuman. People ask
invisible gods for forgiveness, talk to their plants, kiss dice to persuade a
profitable roll, name their cars, curse at unresponsive computers, outfit their dogs
with unnecessary sweaters, and consider financial markets to be ‘anxious’ at one
moment and ‘delirious’ the next. (59)
Anthropomorphism doesn’t occur only when seeing or interacting with an entity that is humanlike; humans anthropomorphize everything from cars to bears and frogs, but that shift comes
with a certain level of risk. For example, Tom Geller reminds us of the humanized side of the
uncanny valley: “anthropomorphic characters that are clearly nonhuman generally don’t cause
the ‘creepy’ feeling associated with the uncanny valley” (11). He goes on to remind us of Scott
McCloud’s famous (literal) illustration of our ability to see human in everything (11-12).
McCloud describes the power of icons and that “we humans are a self-centered race… We see
ourselves in everything… We assign identities and emotions where none exist… And we make
the world over in our image” (32-33). “Our image” includes the iconic features: two eyes and a
mouth which appeal to our desire to “externalize inner concepts of self” (Geller 12 and
McCloud).
Although our natural ability to anthropomorphize abstract objects and entities is great, in
some cases roboticists believe that the physical construction, whether biological or synthetic, may
help or hinder the process of anthropomorphism, as explored in the theory of the uncanny valley.
Director of the Institute of Philosophy at the University of Stuttgart, Catrin Misselhorn, in her
discussion of the uncanny valley explains that “Human aesthetic preferences also transfer to nonhuman objects and beings” (349). And in that transference, Misselhorn refers to an experiment
by David Hanson in which Hanson explains that “more realistic faces trigger more demanding
expectations for anthropomorphic depictions.” In other words, as Misselhorn puts it, “realism
has an impact on the phenomenon, but, from Hanson’s point of view, the aesthetic features carry
the main burden” (349).
Imagine the following scene: A young boy, John Conner, and his friend talk outside of
a gas station in a dusty California desert. John informs his older male friend that his attitude
needs an adjustment: “You could lighten up a bit… This severe routine is getting old… Smile
once in a while!”
“Smile?” his friend asks inquisitively and with a straight, severe face.
The problem is that this “man” has never smiled before. He wasn’t programmed for
emotional facial movement. John’s friend is called the Terminator, a synthetic robot made to
look human in every way.32 Even though the Terminator looks human and successfully
infiltrated the human population in the first Terminator film, as viewers of this scene would be
aware, he is now confronted with the difficult task of fitting in with humans based on his
behavior, not just appearance. Even with observation and practice, smiling doesn’t quite work
for the Terminator. John looks on doubtfully and sarcastically responds, “That’s good… Maybe
you should practice in the mirror sometime…” He then rolls his eyes, seemingly giving up on
teaching his robotic companion to be more humanlike, settling for the promise that the
Terminator won’t kill anyone anymore.
As you can see in Figure 4, the Terminator’s “smile” is not just unconvincing to John,
it is downright creepy to the audience. Granted, as audience members, we are fully aware that
the celebrity icon Arnold Schwarzenegger is a human playing the fictional character of a super-
32 By this time in the Terminator film series, the Terminator has been reprogrammed by John Conner in the future
and sent back to protect his young self. Furthermore, John and his mother Sarah have removed a Skynet device that
otherwise prevented the Terminator from learning new things. Without the Skynet chip the Terminator is now free
to “learn to be more human.”
advanced bio/synthetic android from the future, but that seems to further emphasize the absurd notion of an android attempting to imitate human emotional behavior.
Figure 4: The Terminator Smiles at John Conner. (Screen shot Terminator 2: Judgment Day, 1991.)
In case audience
members were not convinced the smile was creepy, viewers are given the red-tinted, vector-drawn analysis of “Terminator-vision” to show the robot’s perspective while analyzing the smile of a nearby human. Not simply comic relief in the otherwise action-packed Terminator 2:
Judgment Day, the attempt by the Terminator to “lighten up” illustrates what is commonly
known as the “uncanny valley” as well as the concept shinwa kan, or very generally, the feeling
of having similar mind.
Anthropomorphism is not only the ability to see human everywhere but, put simply by
Hiroshi Ishiguro, one of the leading Japanese roboticists, anthropomorphism is much more: “[it
is] the attribution of a human form, human characteristics, or human behavior to non-human
things such as robots, computers and animals” (271). One of the most important characteristics
of anthropomorphism of robots is the belief that a nonhuman agent can have a similar mind.
Similar to earlier discussion about the HAL Effect, the ability to attribute similar mind is affected
by not only the ability to “see human” but also the ability of the artificial entity to seem human
through behavior, or our ability to imagine similar social cognition.
Robotics designers Ishiguro and Minoru Asada, in their contribution to “Human-Inspired
Robots” in a journal of the Institute of Electrical and Electronics Engineers (IEEE), describe their
interest in anthropomorphism: “Humans always anthropomorphize targets of communication
and interaction, so we expect much from humanoids. In other words, we find a human in the
humanoid” (74). If we can accept that an AI can exist and will likely be embodied in such a way
that promotes optimal human-robot interaction, then we must consider the uncanny valley. In
1970 Mori published the essay “Bukimi No Tani” (or “the uncanny valley”), introducing one of the first proposals for understanding the intricacies of HRI. Mori’s theory of the uncanny valley circulated through the fields of robotics development, psychology, and sociology for decades, even though the article was not officially translated into English until 2012. Establishing itself as a controversial theory to either prove or disprove, or simply elucidate upon, the uncanny valley has generated dozens of papers and presentations dedicated to understanding and charting the theory, and it has been given special attention at IEEE conferences and symposiums.
The concept of “the uncanny” was originally made famous by Freud, and roboticist Mori
elaborated on this theory to explain how when a robot (or perhaps inanimate human-like toy) is
too similar to a human in appearance and/or behavior, humans will experience “cognitive-emotional confusion” as the entity engenders feelings of discomfort. MacDorman and Ishiguro
explain that “Mori predicted that, as robots appear more human, they seem more familiar, until a
point is reached at which subtle imperfections give a sensation of strangeness” (“Uncanny
Advantage” 300). And that strangeness, for Mori, could elicit discomfort even more so for androids:
“To build a complete android, Mori believed, would multiply this eerie feeling many times over:
Machines that appear too lifelike would be unsettling or even frightening inasmuch as they
resemble figures from nightmares or films about the living dead” (MacDorman and Ishiguro,
“Uncanny Advantage” 300-301). Judith Halberstam, in The Queer Art of Failure (2011), refers to the “uncanny” in describing stop-motion animation, in terms that apply to androids as well: “it conveys life where we expect stillness, and stillness where we expect liveliness” (178).
To understand the uncanny valley, one should first assume, as Mori did in 1970, that the
goal of robotic design is to achieve humanlike robots and, in fact, accept that roboticists are always progressing toward achieving an identical synthetic humanoid simulacrum. Even today,
this is generally accepted as the goal of robotics, especially android development (MacDorman
and Ishiguro, “Toward Social”). Ishiguro confirms this in more contemporary terms: “The
interactive robots that have been developed thus far are humanoid to encourage the human
tendency to anthropomorphize communicative agents” (320). Setting aside for the moment
questions of whether or not humanlike robotics should be the goal of robotics development, Mori formulates the uncanny valley as follows: “in climbing toward the goal of making robots appear like a human, our affinity for them increases until we come to a valley, which I call the uncanny valley” (Mori 98). With the simple idea of a rise and sudden fall of “affinity for robots” when a humanlike entity becomes too humanlike and yet not quite right, Mori then plotted a chart of the uncanny valley (Figure 5).
Figure 5: Mori’s charting of the uncanny valley (WikiMedia).
As you can see, the idea is that after having “fallen into the uncanny valley” a
robotics design can theoretically rise out of that valley, meaning our affinity increases again to
the point where the entity is near or equal in comfort level with a living human. For another
description of the uncanny valley, I refer to Misselhorn: “[Mori] stated that the more human-like
a robot or another object is made, the more positive and empathetic emotional responses from
human beings it will elicit. However, when a certain degree of likeness is reached, this function
is interrupted brusquely, and responses, all of a sudden, become very repulsive. The function
only begins to rise again when the object in question becomes almost indistinguishable from real
humans” (346).
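Because Mori described this relationship only as a sketched curve, never as an equation, it may help some readers to see one possible rendering of its shape. What follows is a minimal, purely illustrative sketch in Python; the function, its constants, and the axis scales are my own hypothetical choices, made only so that the plotted line rises with human-likeness, dips sharply near “almost human,” and recovers as the entity approaches full human-likeness.

# Illustrative sketch only: Mori proposed no formula for the uncanny valley.
# The function and constants below are hypothetical, chosen so the curve rises
# with human-likeness, dips near "almost human," then recovers at full likeness.
import numpy as np
import matplotlib.pyplot as plt

def hypothetical_affinity(humanlikeness):
    """Toy affinity curve: a steady rise minus a Gaussian 'valley' near 0.85."""
    rise = humanlikeness                         # affinity grows with human-likeness
    valley = 1.4 * np.exp(-((humanlikeness - 0.85) ** 2) / (2 * 0.05 ** 2))
    return rise - valley

h = np.linspace(0, 1, 500)                       # 0 = plainly mechanical, 1 = healthy human
plt.plot(h, hypothetical_affinity(h))
plt.axhline(0, color="gray", linewidth=0.5)      # values below zero mark repulsion, as in Mori's chart
plt.xlabel("Human-likeness (hypothetical scale)")
plt.ylabel("Affinity (hypothetical scale)")
plt.title("A toy rendering of Mori's uncanny valley")
plt.show()

Run as written, the script draws a single curve whose deep dip just before full human-likeness corresponds to the “valley” that Mori and Misselhorn describe; nothing about the particular numbers should be read as empirical.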
Looking closely at the graph, notice that the “bunraku puppet” is marked at about halfway up the ascent out of the valley. Mori placed it there as an example of an entity in which we are able to “become absorbed in this form of art, [hence] we might feel a high level of affinity for the puppet” (Mori 99). Upon close inspection the bunraku puppet is clearly non-human. A
scrutinizing observer can see its joints and rods used by the human operator to manipulate its
movements. For Mori, however, none of those details matter to the audience. Observers from
afar are “absorbed” in the art or performance, essentially suspending their disbelief that the
puppets are inanimate objects even though the human manipulators of the puppets are clearly
visible on stage. Bunraku puppets illustrate the way that humans are able to suspend their
disbelief that inanimate objects don’t have emotions. For Roland Barthes, “[bunraku is]
Concerned with a basic antinomy, that of animate/inanimate, [and] Bunraku jeopardizes it,
eliminates it without advantage for either of its terms” (Bolton 729).
To extend the concept more directly to robotics, when encountering an android a human
observer may be able to imagine that the entity has emotions and intentions, regardless of its
physical construction. This happens, according to Mori’s uncanny valley theory, at some point
after the entity is no longer “creepy.” Of course this is a fragile balance – the fear of making
uncanny robots is on the minds of many robotics designers and marketers.
The species-centric attitude toward android creation is investigated by behaviorists who
find overwhelmingly that people prefer interacting with embodied machine intelligences that are
human-like. Kiesler, et al. found that “interacting with the embodied robot was a more compelling experience for participants and elicited more anthropomorphic interaction and attributions. Participants spent more time with the robot, liked it better, attributed to its stronger, more positive personality traits, and said it was more lifelike” (sic, 177). By “embodied” they mean a program or software that is physically present within a robotic form, rather than the same AI encountered in a keyboard-to-screen situation. But beyond basic embodiment,
other behaviorist researchers believe that “very humanlike robots may provide the best means of
pinpointing what kinds of behavior are perceived as human, since deviations from human norms
are more obvious in them than in more mechanical-looking robots” (MacDorman and Ishiguro,
“Uncanny Advantage” 299). While roboticists continue to design their robots with ideal,
positive robot-human interaction in mind by making them more human-like, these more human-like, sociable androids elicit stronger emotional bonds from their humans.
Although the act of attributing “similar mind” to inanimate objects moves us into the
metaphysical questions usually discussed by philosophers of mind, questions of mental states
come up in fiction as well to help illuminate anthropomorphism of androids. Returning again to
the Terminator franchise, by the second Terminator movie, Judgment Day, philosophical
questions begin to arise for Sarah Conner (Linda Hamilton) and her son John Conner (Edward
Furlong), the proclaimed future savior of humanity, as well as for the audience, especially
regarding the mental states of the humanoid robot and our feelings of anthropomorphism toward it.
Similar to the opening of the original Terminator, the protagonist is confronted with two
unknown humanoids whose alignment is not yet clear. Will the familiar (Schwarzenegger) T-800
be the indiscriminate killer or will the new guy, the cop killer, be the protective agent?
Terminator 2: Judgment Day introduces two very differently aligned androids who are at first
indistinguishable from humans. The T-800, any other designation unknown at first, arrives in a
time portal while the scene cuts to another humanoid arriving in a similar manner. It is not until
the Schwarzenegger Terminator shouts at John Connor to “Get down,” shotgun in hand and
aiming at the cop-impersonating T-1000 (Robert Patrick), that the audience must question
alignment based on embodiment.
Although it is clear that the T-800/101 (i.e., John Connor’s reprogrammed Terminator
from the future sent to protect John above all else) does not have the same mental states as a
human, John begins to tease out the differences and surmises that once the Terminator learns
some basic human behaviors he/it will be able to be less of an antisocial “dork.” Although the
first Terminator illustrated how an android’s social behavior, or lack thereof, helps identify the
entity as separate from a human and in fact, quite evil, Judgment Day explicitly introduces the
conceptual requirement of learning social cognition as part of alignment identity. Under the
original programming by Skynet, the T-800 explains that his “CPU is a neural-net processor… a
learning computer. But Skynet presets the switch to ‘read-only’ when we are sent out alone.”
Sarah Conner translates that Skynet “doesn’t want you thinking too much, huh?” and that
appears just fine to Sarah. John Conner, on the other hand, wants the T-800 to learn to be more
human… so he’s not “such a dork all the time.” It’s not just that John doesn’t want the
Terminator to kill people so much, although that is a major contributor to his decision to
reprogram the Terminator; John also seems to be building himself a father-figure.
Young John obviously knows that the Terminator is a robot, and yet he can’t help but
teach it more human-like behavior, from language to facial expression – essentially
anthropomorphizing the Terminator for both himself and for viewers. From the perspective of
philosophy of mind, John and the viewers are clearly aware that the Terminator does not have a
mind physically like ours – we are shown that he has no brain. But the lack of a brain does not
necessarily mean that we cannot attribute similar mental states to the entity.
A philosopher who subscribes to the functionalist argument might first answer with “it
depends on what the mind does, not what the mind is made of.” For a functionalist, an entity can
qualify as an intelligent being based on proof that its behavior appears to be intelligent,
especially based on behavioral output responses to environmental input (i.e., if it acts intelligent,
it is intelligent). But intelligence alone is not enough for anthropomorphism, as cognitive
theorists have illustrated with embodiment. Social Psychologist Adam Waytz and colleagues
remind us that:
Xenophanes (6th Century philosopher) was the first to use the term
anthropomorphism when describing how gods and other supernatural agents
tended to bear a striking physical resemblance to their believers. Xenophanes’s
observation reflects one of two basic ways of anthropomorphizing. The first
involves attributing humanlike physical features to nonhumans (e.g., a face,
hands), and the second involves attributing a humanlike mind to nonhumans (e.g.,
intentions, conscious awareness, secondary emotions such as shame or joy).
Anthropomorphism therefore requires going beyond purely behavioral or
dispositional inferences about a nonhuman agent and instead requires attributing
human form or a human mind to the agent. (220)
With the simple “flip of a switch” (supposedly a literal switch, though the audience never sees it), the T-101 goes from being merely programmed to keep John safe to a companion and father
figure, an entity with mental states similar to our own. This companionship grows with every
social interaction between the Terminator and John – from learning to smile to discussing the
fear of death – the humanlike benevolence becomes clearer and more poignant to the audience.
Antti Kuusela describes his feelings about Terminator 2: Judgment Day, in particular his
reaction to the scene toward the end of the film in which the viewer sees the T-101 melted down
in a vat of molten steel: “We may feel sorry for the T-101 because it is going to lose its
existence. We may think of the Terminator’s act as being unselfish because it puts the interests
of humans before its own. But of course, these views make sense only if we believe that the T-101’s mental life is similar to ours. And if it is, then there may be good reason to reevaluate the
real difference between machines and persons” (Kuusela 267)33. If Kuusela’s observations are at
all accurate, there are reasons for the audience to feel an emotional reaction but not because we
know the T-101’s brain is at all physically like our own but because we believe that it has mental
states like ours – thus anthropomorphizing the otherwise robotic android.
While I believe there is, as of yet, no way to measure a machine’s or even another
person’s qualia, or state of mind, perhaps the answer can be found by addressing the other
elements of anthropomorphism, returning us to embodiment and social cognition. The fictional
portraits of AI robots like those in Terminator do not stop roboticists from designing more
33 Note here the paradox that Kuusela may or may not even be aware of regarding the state of being of fictional
characters. Throughout his article Kuusela describes his understanding of the Terminator and other fictional
characters in the film as if they are real entities in the world that can be referred to. This is an impossible reference
because there is no actual Terminator that Kuusela could refer to; however, the important turn here is that this is a
common treatment of fictional characters in media theory. For audiences then it is accepted that even though there
is no actual Terminator to refer to, it is possible to refer to the potential mental states or states of existence of a
fictional being without confusing actual with fictional entities. While this is a very interesting paradoxical part of
media studies, it is easily dismissed by viewers and cannot be explored further here.
human-like robots or AI researchers from coding more complicated synthetic minds. Indeed,
Ishiguro reminds readers that “The recent [as in 2007] development of humanoid and interactive
robots such as Honda’s ASIMO (Sakagami et al. 2002) and Sony’s AIBO (Fujita 2001) is a
new research direction in robotics. The concept of these interactive robots is partner. Partner
robots will act as human peers in everyday life and perform mental and communicational
[support] for humans as well as physical [support]” (106). To make effective robot partners, designers and marketers will need to be well aware of both the generally accepted stereotypes associated with the AI that will be imbued into robots and the basic discomfort associated with the uncanny valley.
ROBOPSYCHOLOGY AND THE UNCANNY
Consumer robots are predicted to appear on the mass market soon. In fact, the growth of
an aging population is motivation for the creation of more domestic-assistance robots. The
uncanny valley is a potential challenge as developers want to make robots that are appealing
without being frightening. As robotics designers are working toward more humanlike robotics,
psychologists and sociologists are working to anticipate the effects that socializing with such entities will have, while exploring the market possibilities. A team of marketers hoping to improve human-robot interaction with robotic products points out the need for more studies of the uncanny
valley: “Developing a scale to capture individual desire for humanlike aesthetic appearance in
consumer robots may provide useful insights into individual evaluation of the appearance of
consumer robots” (Oyedele, et al. 631). In other words, to better understand how consumers
evaluate and interact with robots, a better understanding of the uncanny valley is necessary.
For Mori, movement was the first and foremost key to understanding why humans
experience the uncanny feeling associated with humanlike entities. Even in the 1970s substantial
advancements were being made toward humanlike prostheses and other movement designs to
simulate humanoid body structure and motion, but Mori foresaw a high potential for humans to
experience unease as the humanlike entity approaches a too-near-to-human appearance and
movement. To illustrate this idea Mori used the example of a prosthetic hand. At first glance an
onlooker may assume that the hand is a normal biological human appendage, but upon grasping
the prosthetic, in a handshake, for example, the perceiver may suddenly be overcome with a
feeling of creepiness, as Mori explains (100). “Imagine a craftsman,” Mori offers another
illustration, “awakened suddenly in the dead of night. He searches downstairs for something
among a crowd of mannequins in his workshop. If the mannequins started to move, it would be
like a horror story” (Mori 100).
Mori wasn’t far off when he predicted how well such a graphic theme could elicit fear in
an audience. This scene described by Mori to illustrate the uncanny valley in action is strangely
similar to the opening episode of the rebooted Doctor Who in 2005. The (Ninth) Doctor
(Christopher Eccleston) and his newly met companion, Rose (Billie Piper), are pursued by killer
mannequins. Indeed, this is not an unusual phenomenon in popular culture. Internet Movie
Database user “wortkunst” shares a list of “Scary dolls, puppets, dummies, mannequins, toys,
and marionettes,” which includes ninety-six different film and television instances of horrifying
life-like dolls.
Another example of the uncanny associated with movement is in the form of zombies.
From the original uncanny valley chart, you can see that Mori placed the zombie in the deepest trench of the valley, indicating that it is the most eerie and uncanny entity: a human body that is known to be dead and yet walks and moves. Although zombies are not directly connected to man-made entities, Mori and others have
connected this fear of zombies with the importance of self-preservation. Misselhorn extends this
fear of the undead to explain our feelings of empathy toward inanimate objects. She believes that
the feeling of the uncanny associated with robots is associated with the instinct to avoid illness:
“[certain] characteristics are universally regarded as ugly or disturbing, for instance, sickly eyes,
bad skin, extreme asymmetry, and poor grooming. Those features are considered to be signs of
illness, neurological conditions or mental dysfunction, and, therefore, supposed to lead to
repulsive reactions” (349). When interacting with a humanlike robot, Misselhorn describes how
humans may unconsciously fear the potential illness that could be transmitted. Another way of
thinking of this uncanny feeling is that it is a manifestation of “our fear of the nonliving alive”
(Geller 17).34
Although movement and appearance are important when trying to grasp the uncanny,
roboticists propose other factors to take into consideration. Within movement, several
subcategories of movement should be considered. For example, Zia-ul-Haque, et al. explain that
“Recently the role of humanlike appearance is claimed to be as important as its behavior for the
robot to be recognized as a social identity and to elicit more natural response from human [sic].
Having a human like face provides the benefit of universally recognized facial expressions, an
understood focal point of interaction etc.” (emphasis added, 2228). For Zia-ul-Haque, et al., eye
movement was crucial for establishing engagement, trust and motivation with the human user
(2229).
34 Geller reminds us that even when the “uncanny” was being formulated, psychologists were considering the
dissonance related to living vs. death. Freud, in describing the aesthetics of the uncanny suggested that the uncanny
was in fact connected to “death and dead bodies, to return of the dead, and to spirits and ghosts” (Geller 11).
Perhaps another angle for investigating the uncanny would be to consider the death drive... but that is an inquiry for
another time.
Another team of robotics designers, Christoph Bartneck and Ishiguro’s team, has spent
many hours in research and development trying to “disprove” Mori’s uncanny valley theory,
arguing that “[the popularity of Mori’s theory] may be based on the explanatory escape route it
offers” (“Robotic Doppelgänger” 275). Apparently, this “escape route” has prevented humanoid
robotics from developing to its fullest. Zia-ul-Haque’s research team notes that “The fear of falling
into the uncanny valley… has restricted the developers of humanoid robots to avoid achieving
this height of design [sic]” (2230). For Zia-ul-Haque and Bartneck’s research teams, the
uncanny valley can easily be surmounted by creating robots with socially acceptable movements:
“Movement contains social meanings that may have direct influence on the likeability of a robot.
The robot’s level of anthropomorphism does not only depend on the appearance but also its
behavior. A mechanical-looking robot with appropriate social behavior can be
anthropomorphized for different reasons than a highly human-like android” (Bartneck, et al.
“Robotic Doppelgänger” 275). In other words, the key to climbing out of the uncanny valley (if it exists, which Bartneck and team are skeptical of) is to develop humanlike behavior rather than humanlike appearance. Zia-ul-Haque, et al. describe this effect as follows:
“Human will feel more comfortable [sic], pleasant and supporting with systems which (at least
to some extent) possess ethical beliefs matching that of their own, do not make a decision or
perform an action that is harmful to their moral values, and honor their basic social values and
norms. Thus where interaction with human is desired [sic], the robots are desired to behave as
social machines” (2228).
Here it is important to bring in some further context regarding Bartneck and Ishiguro’s
research team, along with some images of these research robots. Ishiguro, a member of
Bartneck’s research team, is known as the maker of his own “doppelgänger” robot, the Geminoid
HI-1, shown in Figure 6.
Figure 6: Robotic Doppelganger. Android, Geminoid HI-1 (left) with creator, Hiroshi Ishiguro (right).
Surprisingly, or perhaps not, the general American response to Ishiguro’s work could be categorized as right in the valley of the uncanny. For example, John Brownlee, writing for the well-known tech-savvy magazine Wired, describes the Geminoid HI-1 as a “creepy waxen mummy… its dead eyes coldly appraising the succulence of the flesh of the children around it. It is part cyborg, part real doll, part Shigeru Miyamoto, part Dracula. It is horrible. It hates you” (n.p.). This is not a very warm
review of a robot from Wired, a magazine and website known for its open welcome to new
technologies and robotics. While McCloud draws our attention specifically to the use of the eye
shape (similar to a bull’s eye) in the process of identifying humanness, that humanness can also
lead to the uncanny interpretation. In essence, one instinct is to “see human” while the other is to
see dead human.
Part of understanding that perception lies in the power of framing – when encountering a new object or entity, humans are expected to “frame” the entity, or categorize it.
Sometimes this categorization is based on previous experience, as Bartneck and Ishiguro
described. Oyedele, Hong and Minor confirm this theory in the context of studying consumer
perceptions of humanlike robots. They describe the incongruency theory “which suggests that
individuals have a high probability to evaluate a stimulus on the basis of their prior experience
and knowledge. For example, an individual approaching an unfamiliar object for the first time
will attempt to resolve the incongruency between his or her expectation and the present
encounter with the unfamiliar object” (626). Based on previous experience, consumers will have
a set of expectations relating to a similar but new entity. If the expectation, i.e., “it looks like a
doll, therefore it should not move,” is sharply different from the actual experience, i.e., “that doll
behaves more like a human!” then the consumer experiences the uncanny. Kiesler and her team
explain this association as a process of combining concepts usually associated with people, then
transferred to robots: “a lifelike robot that tells a joke might activate exemplars of the nonsocial
category, machines, and of the social category, humorous people. Combining these exemplars
could lead to the experience of an integrated concept, such as cheerful robot” (170).
This act of categorization can also be based on categories of things, not just experience. If
the perceived category does not match the actual category, a level of discomfort arises. Upon
encountering a new entity, it is likely placed in simple categories: living vs. dead, for example.
Misselhorn describes this natural inclination: “If something appears very humanlike – as the
objects in the uncanny valley – it is seen as part of the human species, and emotionally evaluated
by the same standards, although it is known that it is not a human being” (349). Those “frames”
or categories in turn establish emotional expectations which, when violated, cause the uncanny
feeling. Bartneck, et al. performed studies of HRI considering the “framing theory.” For this
team of robot designers, “When we encounter a very machine-like robot we select a ‘machine
frame’ and its human-like features deviate from our expectation and hence attract our attention.
This deviation is usually positive since we tend to like other humans. In contrast, when we
encounter an android, we select our ‘human frame’ and its machine-like features grab our
attention. However, the machine-like features [are seen as] deviations [from the norm,] that are
otherwise found in sick or injured people, which we find disturbing” (“Uncanny Cliff?” 368).
Another element of HRI that can elicit cognitive dissonance is the idea of “aesthetic
unity.” For Oyedele, et al., “the perceived aesthetic unity of a humanlike consumer robot made
with both humanlike and machinelike components may be unfavorable because of a high degree
of incongruity associated with the different components of the humanlike consumer robot” (626).
“The perceived unity mismatch associated with the humanlike consumer robot,” according to
Oyedele, et al., “may create an uncanny response in terms of the individual evaluation of the
humanlike consumer robot” (626). Much like the idea of the cyborg, a mishmash of human and robot parts can be disconcerting. Aesthetic unity is also connected to the health and survival of
the observer. For Hanson, “The explanation why we react with aversion against the android
robots along these lines is that they do not match our aesthetic standards, respectively, that they
show anomalies which make us react as we would to deviant human individuals showing the
‘ugly’ signs of poor health or bad genes” (Misselhorn 349). The idea is that with a mismatch of
aesthetic elements, the natural response is to react negatively.
One way for designers to resolve the valley is to return to Mori’s original argument. In their 2009 contribution to the 18th Institute of Electrical and
Electronics Engineers (IEEE) International Symposium on Robot and Human Interactive
Communication, Christoph Bartneck and his team discuss the common understanding of the
English translation of Mori’s 1970 paper. Even after the English translation was published in
2012, there is still debate over the phrasing of Mori’s words. Of particular interest to Bartneck
and Ishiguro’s team, as well as to me, is the use of the Japanese word that appeared in Mori’s
original work as “shinwa-kan” to refer to the “familiarity” against which a humanlike robot is
plotted on a graph. When the likeness becomes too human-like, the familiarity or “shinwa-kan”
drops off into the uncanny. Ishiguro claims that part of Mori’s description of the uncanny valley
was “lost in translation” as “shinwa-kan” is not as commonly used as the English word
“familiarity.” According to Ishiguro, “The best approach is to look at its components ‘shinwa’
and ‘kan’ separately. The Daijirin Dictionary (second edition) defines ‘shinwa’ as ‘mutually be
friendly’ or ‘having similar mind’ [sic]. ‘Kan’ is being translated as ‘the sense of’” (270).
Although I am not entirely sure what is meant by “mutually be friendly,” I’m quite familiar with
the “sense of having similar mind.”
THE UNCANNY AND ANTHROPOMORPHISM
Even in the 1970s, as robotics development was beginning, Mori not only predicted the infamous “valley” but also believed “it is possible to create a safe level of affinity by deliberately pursuing a nonhuman design” (Mori, emphasis added). Mori wanted to pursue the robotics
design with the goal of aesthetic beauty like that of Buddha statues: “I came to think that there is
something more attractive and amiable than human beings in the further right-hand side of the
valley. It is the face of a Buddha statue as the artistic expression of the human ideal” (Mori, qtd.
in Geller 17). Regardless of gut responses to humanlike robotics, another part of HRI is the
overall process of anthropomorphism. Recall what Waytz, et al. have to say about the
process of anthropomorphism: “[it] is a critical determinant of how people understand and treat
nonhuman agents from gods to gadgets to the stock market, is central to multibillion dollar
industries such as robotics and pet care, and features prominently in public debates ranging from
the treatment of Mother Earth to abortion rights” (58). But if the process of anthropomorphism
is a fundamentally human trait and yet may be causing the uncanny feelings, how can robotics
designers continue to pursue their assumed goal of making humanlike simulacra? For Shimada,
et al. “The uncanny valley must be avoided from the viewpoint of communication robot design”
(374).
Rather than continuing to work toward the hyper-real, animators of 3D animated films
are coming to the agreement that “good” animation doesn’t need to be “realistic.” Geller
describes how the lead character animator for The Polar Express, Kenn McDonald, points to a
“need for movie makers to stylize their characters away from realism to make them effective,
‘much like putting makeup on a flesh-and-blood character’” (12). Or more succinctly,
McDonald explained: “A good way to avoid the uncanny valley is to move a character’s proportions and structure outside the range of ‘human’… The audience subconsciously says, ‘he’s not human; I don’t have to judge him by the same rules as if he were’” (Geller 12-13) (see Figure 7).
Figure 7: Characters from The Polar Express (left). Hiro from Big Hero 6 (right).
The same argument appears in studies by robotics designers. For example, Walters,
et al., in a study of social robots with both anthropomorphic and more mechanical features, report that “Our results do not support the notion that increasing the human-likeness of a
robot will necessarily make it more preferable to interact with” (174). They conclude that even
with more robot-like features, participants experienced more comfort when behavior was social
but appearance was less human. For an example from fiction of a robot that is more mechanical
but behaves in a very humanlike manner, consider Sonny from I, Robot (2004). Sonny is bipedal
and has a transparent skin-like covering. Even his face reveals his robotic parts. Even though
Spooner (Will Smith), the human protagonist, is at first uncomfortable working alongside a robot like Sonny, over the course of the film the two bond. Spooner fights side-by-side with Sonny
against the greater threat of a disembodied AI who turns other robots against the human
population.
In some studies, the humanlike appearance of a robot is found to be unimportant for
anthropomorphism. Glenda Shaw-Garlock describes the research by Turkle, known for her
specialized work in Social Studies of Science and Technology, regarding empathy and robots, to
illustrate that a human form – i.e., arms and legs in a bipedal form and with a human face – is not
necessarily the best approach to successful HRI. “Paro [a robotic baby harp seal] successfully
elicited feelings of admiration, loving behavior, and curiosity,” describes Shaw-Garlock. “But
[Turkle] felt that these interactions raised ‘questions about what kind of authenticity we require
of our technology’” (5). Shaw-Garlock quotes Turkle, “Do we want robots saying things that
they could not possibly ‘mean’? What kinds of relationships do we think are most appropriate for
our children and our elders to have with relational artifacts?’” (5). Shaw-Garlock further reports
that authors Sharkey and Sharkey “raise parallel concerns when they consider the natural human
tendency to anthropomorphize and suggest we question the ethical appropriateness of deceiving
people into believing that a machine is capable of mental states and emotional understanding”
(5). For some, a robot like Paro becomes one of the few connections to social interaction,
something that brings happiness. Reporter Jocelyn Ford for Radiolab explored this phenomenon
in a senior citizens’ home in Japan where Paro is used as “one of the world’s first therapy robots” and the furry robot visits the elderly: “They adored it. They were loving it and it was
loving them – in their minds.” For the elderly, at least in this particular facility that Paro visits,
Ford wondered if it were possible to engineer “compassion and companionship” to ease the
stress of aging. At that moment, it seemed it would, as the women and men smiled and cheered
at their interaction with Paro.
When Mori proposed that “we should begin to build an accurate map of the uncanny
valley so that through robotics research we can begin to understand what makes us human”
(Mori 100), I doubt he expected how far into the culture of robotics design and science fiction
film this valley would extend. We are still asking how well robots reveal the secrets of “what
makes us human.” While robotics designers like Ishiguro may be well on their way to
developing humanoid robots with ideal HRI, they are really only taking interaction and familiarity into consideration; a future with robots may well turn out to look more complicated – like what audiences experience in fiction such as Terminator and Battlestar
Galactica. It is likely that designers, although potentially inspired by science fiction, don’t know
how their inventions will integrate into society. While the HAL Effect brings us closer to
understanding a possible future with disembodied AI, how will a future with androids, fully
synthetic or biological, end up looking? Appearance continues to help define persons as separate
from humans. Much like Mori’s uncanny valley theory, any deviation from the physical
definition of “human” forces dolls into toys, i.e., from human to non-human. But the more complicated part of HRI arises when bodies are concerned.
Norah Campbell, borrowing from Featherstone and Burrows, believes that “Because the
technologized body is ‘replete with utopian, dystopian and heterotopian possibilities,’ an
exploration of cyberbodies, or images that imagine posthuman bodies of the future, take us
beyond speaking about technology in an instrumental way, as a mere tool for getting things done,
and enables us to think about the philosophy of technology” (N. Campbell). Although androids,
AIs, or other artificial entities are not strictly posthuman bodies, their portraits in fiction are
similarly enabling for thinking about the philosophy of technology. I say that they are not
“strictly” posthuman in that they are creations from little to no biologically human parts; their
construction does not spring from human bodies as biological reproduction or alteration. There
is still debate over what constitutes a cyborg versus an android. For the sake of understanding
posthuman bodies, I understand a cyborg as an entity that began as human but has been altered
through cybernetic implants or other artificial enhancements. An android, on the other hand, was
never biologically human. It/he/she was made, constructed, or otherwise emerged (in the case of
a strong AI). With the forces of anthropomorphism at work and feelings of empathy for
anthropomorphized entities, we reach what I call the “nonhuman dilemma.” Yes, we may see
human in many things, but there is much discussion over what should be deserving of rights
generally reserved for humans only – this is the debate over personhood which I begin to explore
in the next chapter.
CHAPTER FOUR: CITIZEN ANDROIDS
“To treat humanlike android others as dispensable objects is to
dehumanize the self and ignore the inherently ethical relationship
among humans in which these androids enter.”
-- Christopher Ramey, Cognitive Science Society (2005).
INTRODUCTION
With the growing momentum in research and development in strong AI and humanoid
robotics – and the anticipated merging of the two – it is time to begin a serious discussion of how
we will treat these novel creations. Will they be simply objects for our pleasure and professional
utilization (i.e., property) or will they be life companions to be given rights and respect (i.e.,
persons)? Thinkers from many fields are only recently coming to realize the importance of
considering the developing relationships between man and artificial beings. Maarten Lamers and
Fons Verbeek, at the third international conference for human-robot personal relationships,
remind us of the growing multidisciplinary interest: “…artificial partners increasingly [are]
likely in any of the many forms imaginable. Increasingly, researchers from scientific fields such
as (social) robotics, human-computer interaction, artificial intelligence, psychology, philosophy,
sociology, and theology are involved in their study” (V).
Any nonhuman entity entering the human-defined sphere of personhood will face
challenges to successful coexistence. For the humanoid AA (Artificial Agent), the challenge is
on three fronts. First, as explored in Chapter Two, there is the challenge of assigning
“intelligence.” If intelligence is not easily accepted, no notions of personhood will ever be
entertained. Next, because the android is designed specifically with notions of
anthropomorphism in mind to supposedly facilitate HRI (Human-Robot Interaction) as discussed
in Chapter Three, the ways in which we “see human” lead to the following two challenges: 1) the
innate threat to our human self that androids embody (and disembody), and 2) the human trait to
demonize anything that seems to fit in a threatening Other category. The android becomes the
subject of a confused human identity and I will explore that confusion in fiction. For Annette
Kuhn, in her introduction to Alien Zone: Cultural Theory and Contemporary Science Fiction
Cinema, “science fiction cinema often assumes a rather confused attitude towards science and
technology. On the one hand it views them as redemptive forces that can lift humanity out of the
muck and mire of its own biological imperfections. On the other, it sees them as potentially
destructive forces, inimical to humanity” (32). Sidney Perkowitz explains the long-standing
fascination with automatons and bionic humans as basic to human nature: “Least noble, perhaps
but understandable is the desire to ease our lives by creating workers to till our fields, operate our
factories, and prepare our meals, tirelessly and without complaint” (5). In this chapter I explore
those redemptive forces in the potential partnership between mankind and artificial entities as
well as our own destructive nature as we dehumanize the nonhuman.
To make effective robot partners, designers and marketers will need to be well aware of both the generally accepted stereotypes associated with the AI that will be projected onto robots and the basic discomfort associated with the uncanny valley. This chapter begins with
popular culture representations of the sub-category of androids, manufactured humans,
specifically the Cylons from Battlestar Galactica to bring to light the racialized and sexualized
stereotypes associated with humanoid embodiment. Innately associated with those stereotypes
are the abuses and violations of any entity falling outside of “normal” categories and into the
other. By queering those artificial bodies in fiction, in particular Cylons Eight (better known as
Sharon) and Six, I aim to draw parallels between theories of dehumanization and androids.
In making such connections, I also draw attention to what I call the “nonhuman
dilemma.” Because future articulations of artificial entities will likely act and appear human in
every way, it behooves us to treat them as moral equals, but we will still face a dilemma. When
confronted with an entity that is known to be nonhuman but appears human in every way, the
dilemma is whether to treat it as human and therefore open the definition of human up to others,
or to hold fast to traditional humanisms. To treat the nonhuman-human as anything but human
would undermine our own moral standing, but it is a natural inclination when encountering
others (human or not) outside the boundaries of “normalcy.” Returning to the Cylons of BSG, I
explore the nonhuman dilemma in relation to Cylon Six (Tricia Helfer) and her imprisonment
aboard the Battlestar Pegasus. Because the audience has built a strong human-like connection
with Six, along with other Cylons, the abuse that Six suffers aboard the Pegasus speaks volumes
about our coming dilemma regarding the treatment of nonhuman others – will they be objects of
mistreatment and therefore violate our own moral code as we abuse them, or will they be
companions to be loved and trusted?
In answer to the question above, the remainder of this chapter explores what I call the
post/human love affair and the uncanny companionship that comes with living among humanlike androids. Although still a fringe culture, the transformation into post/human relationships is
already taking place as men and women take on silicone companions. Here I explore the
subculture of iDollators (people who live and love life-sized dolls). Inspired by The Learning
Channel (TLC) show My Strange Addiction, I follow the public persona and reactions to selfproclaimed doll-lover, Davecat. Apart from living in personal relationships with nonhuman
others, I foresee family transformations as nonhuman others enter the domestic sphere. By using
the tropes of reality TV makeover shows that “save” the family in distress (Nanny 911 and
Supernanny), I propose that a similar makeover occurs in the film Fido. Fido follows the story
of that most uncanny figure, the zombie, who performs each stage of the traditional makeover in
the home: the observation, the intervention, the “work,” and finally the reveal (or happy ending).
It is easy for many to simply disregard the question of what the human self means by
proclaiming “I’m human because I am,” and therefore feeling no threat from nonhuman others.
But as doubles and doppelgängers appear, they amend the concept of the “human.”35 Noreen
Giffney reminds us of Haraway’s definition of human: “Human …requires an extraordinary
congeries of partners. Humans, wherever you track them, are products of situated relationalities
with organisms, tools, much else. We are quite a crowd” (55). “Thus,” for Giffney, “the Human
is historically contingent and culturally marked in its formulations and is neither stable nor
singular in its articulations or resonances” (56). As new articulations of tools and entities such as
androids are introduced to our crowd, we are altered in new ways, and this particular addition speaks volumes about who and what humans have been and who/what we will become. But in that “becoming,” we must ask what the other entities will be considered. For Ferrando,
there is a risk of “turning the robotic difference into a stigma for new forms of racism” (16).
And as we “osmose with the robot ontology, humans have to undergo a radical deconstruction of
the human as a fixed notion, emphasizing instead its dynamic and constantly evolving side, and
celebrating the differences inhabiting the human species itself” (Ferrando 16).
Personhood rights are “recognized” by the State for individuals within the species Homo sapiens sapiens. But as with any language, the categorization and recognition of said categories is always changing. Even in the process of defining human,
35 This simplification of the definition of “human” rings eerily similar to Badmington’s description of Cartesian
dualism: “The human, in short [according to Badmington’s reading of Descartes work], is absolutely distinct from
the inhuman over which it towers in a position of natural supremacy. I think, therefore I cannot possibly be an
automaton” (18).
anthropologists and biologists have argued over the species and subspecies for millennia. For
example, do we define Homo sapiens sapiens by their physical features – such as skull shape,
brain capacity, brain shape, etc.? Or do we define a species based on its behavior – behavior
like tool use, linguistic structure, arbitrary symbol usage, etc.? Classical definitions of humans
are based on a combination of all of the above, but what happens when other entities begin to
meet and/or exceed our expectations for humanness? Cognition, for example, is something computer designers have been trying to duplicate and/or improve upon. Computers can already
perform many humanlike tasks: information storage, concept processing, categorizing, recall,
problem solving, and more.
Even in the films of the ‘80s, Telotte took note of the theme of questioning one’s
humanity simply due to the threat of the duplicate, the robot. In describing The Thing (1982),
Telotte refers to the scene in which the main characters don’t know whether they themselves or their comrades are human. One man turns to the other and asserts “I know I’m
human.” In this moment, Telotte sees “The very need for such an assertion, hints at an
unexpected uncertainty here, even as uneasiness about one’s identity and, more importantly,
about what it is that makes one human” (“Human Artifice” 44). Returning to Telotte, he argues
in Replications that “[films featuring humanoid robots] are significant for investigation…
because of the way they approach the body as a plastic medium and the self as a variable
construct. In them, the body becomes raw material, subject to the experiments …” (17). Not
simply a report of possible futures, in a cautionary manner “[Films like The Thing, Stepford
Wives, and others about the robot] suggest how the human penchant for artifice – that is, for
analyzing, understanding and synthesizing all things, even man himself – seems to promise a
reduction of man to no more than artifice” (Telotte, “Human Artifice” 44). In that reduction of
the human to an artifice is the cogent fear that humans will lose their uniqueness, their agency,
and even their Self. Part of that uniqueness, agency, and Self, as I will show, is tied up with the
relationships we have with others, human and nonhuman.
MANUFACTURED HUMANS
Ridley Scott’s 1982 film Blade Runner opened the door to using film as an exploration
of human-android relations, especially in a way that engages thinking about morality. Kerman
explains that “[Blade Runner] makes clear that the replicants are beings to whom moral
discourse is applicable. In doing so, it asks the kinds of questions which make science fiction so
valuable in thinking about the technological, political and moral directions our society is taking”
(2). Because of their multiplicity, manufactured memories, and collective nature, the android in
fiction threatens the Western concept of Self – that is, the celebration of the individual (human)
identity made up of unique memories and the ability to act with singular agency.
In fiction, we see that manufactured human in figures like the Cylon in the reimagined
Battlestar Galactica, in the Borg from Star Trek, Fabricants from Cloud Atlas, and most
famously the Replicants from Blade Runner. The imagery of the manufactured human is
frightening for several reasons, the first and foremost being how such entities are literally
dehumanized. In David Livingstone Smith’s book, Less Than Human, he recalls psychologist
Nick Haslam’s description of dehumanization which rings eerily of robots: “Haslam suggests
that when people are stripped of their typically human attributes they’re seen as cold and inert –
as inanimate objects lacking warmth, individuality, and agency” (94). Mass produced and
robbed of individual agency, the manufactured human/robot may be one of the greatest offenses
to our sensibilities. Especially in the United States, there is a strong cultural emphasis on the
unique individual, but the android threatens that uniqueness. The android can be mass produced
with perfect efficiency and with increasingly biological capabilities. In the android we see
embodied the fear that we, humans, could be manufactured.
The fear of manufactured humans raises concerns about the threat to the self: for one, the fear that the unique human is no longer individuated. If “human” itself is in question, the
Self would be questioned next. Although seemingly unrelated to artificial intelligence, clones in
SF film often wrestle with what it means to be an individual self. Is it simply a matter of saying
“We’re not programs, we’re people!” as the clone Sam Bell asserts to his robot companion,
GERTY, in the film Moon? But Sam’s assertion rings hollow. For Sam, his individuality ended,
unbeknownst to him, the moment he was cloned. Perhaps he means the human we… But in his
case even his humanness is in question – Sam Bell is one of many clones, manufactured for hard
labor on the Moon. If being a human person includes the Kantian and natural rights assumed to
“emphasize the sanctity of individual ‘persons,’” as Charles Mills suggests (55), then Sam was
robbed of his individuality long ago. But even being a “person” doesn’t guarantee individuality,
freedom or agency either – just as the Sam Bells from Moon and clones for organ harvest in The
Island know all too well. I’ve used clones here to illustrate my point about robots, which may
seem a stretch, as discussed earlier. Aren’t clones examples of entities “created from biological
materials to resemble a human”? If, in the case of Sam and the clones of The Island, their
memories have been tampered with and altered so they believe they live a life their bodies did
not, and they are mechanically performing their given tasks without question, doesn’t that also
make them automatons?
One example of manufactured humans, especially manufactured humans who question
their individual identity, is the Cylons from BSG. The 2004-2009 show, an action/adventure
space drama (also called a “space opera,” not because the characters sing – there is no singing,
just lots of drama) loosely inspired by a 1978 television show of the same name, follows a ragtag group of humans from the planet Caprica who are displaced after their ancient enemy, the
Cylons, attack. Believing that the starship Battlestar Galactica is the only remaining military vessel, the crew rallies what civilian ships are left and sets the goal of
returning to the mythical planet Earth, where it is believed all human life began. Bravely
defending the civilian refugees, the Battlestar Galactica must fight off an assortment of Cylon
troops and starships, many of which are clearly robotic (an homage to the 1978 show), but as the
show progresses, the crew discovers that the Cylons have “evolved” into humans who infiltrate
their ranks, some as sleeper agents who are unaware that they are Cylons.
In a particular story arc Sharon (Grace Park), also called Athena, struggles with a very visceral experience of being dehumanized simply because she is a Cylon (Figure 8).
Figure 8: Opening credits for BSG after it is revealed that there are multiples of Sharon -- "Athena," on the planet of Caprica (left) and "Boomer," aboard Galactica (right). (Screen shot BSG, 2004.)
At this point in the series, the human crew of the Battlestar Galactica has
already encountered the humanoid “Skin Job”36 Cylons and so far accorded them some
personhood, depending on the situation. Even the Cylons explain this personhood in reference to
Sharon: Cylon Six (Tricia Helfer) and another Cylon, Doral (Matthew Bennett), are waiting for a
36 The “Skin Jobs” are those in human form and with seemingly identical biological make up. These models,
including Six and her many copies, are in stark contrast to the “Toasters” who are apparently without consciousness
and (almost always) follow orders from the more developed models. These Toasters are modeled after the Cylons
from the original Battlestar Galactica and are linked more directly to fears of technology in the computer age.
These models are primarily used in the 2007 BSG as simply mindless killers, the boarding party that is expendable.
rendezvous with Sharon on Caprica. Doral notes that Six is calling Sharon “Sharon” rather than
by her number designation, Eight. Six explains that she thinks of her as one of “them” (human)
and explains that “in the scheme of things, we are what we do. She acts like them, thinks like
them, she is one of them.” With this phrase and with Sharon’s love for Karl “Helo” Agathon
(Tahmoh Penikett), Sharon has become an individual apart from the Cylon collective.
Even her “love” for Helo is questioned by viewers because she is a Cylon. Boomer has
been in a long-term relationship with Chief Tyrol (Aaron Douglas) but viewers and fans of the
show debate the nature of that “love” – and by extension, the nature of love that a Cylon
could have. In an interview, David Weddle, one of the show’s writers, argues that there is no
“neat answer” to the question of whether or not Boomer could have ever loved either Chief or
Helo. No matter what, according to Weddle, “Boomer is deeply conflicted. I think the process of
having false memories planted in her… [has] left her severely disturbed” (Ryan n.p.). For the
show’s writer it seems the fact that Sharon’s memories are fragmented leads to an unstable
personality.
The audience has already seen at least two copies of Sharon, Boomer and Athena, in
season one, and in the two-part episode “Kobol’s Last Gleaming,” Boomer comes literally face-to-face with her Cylon nature as a duplicate. Throughout the episode, Boomer has been struggling
with whether or not she is a Cylon as she performs acts of sabotage on the Battlestar against her
conscious wishes. While the audience is aware that she is just one of many copies of Sharon,
she, and the members of Galactica, are unaware.37 She even goes as far as attempting suicide –
unsure whether she is actually a Cylon or simply crazy. An opportunity arises for Boomer to
prove “who” she really is and her loyalty to the human fleet as she offers to destroy a Cylon
37 At this time in the show only one person from the Galactica crew is aware that she is a Cylon. Helo discovers
Sharon on Caprica when he knows the “other” Sharon is back on Galactica.
base-star. Upon her arrival inside the base-star, Boomer is welcomed “home” by dozens of
copies of herself. They beckon her to join them, whispering her name, and remove her helmet,
as if to shed her of her human identity. In resistance to them, Boomer begins to chant the things
she believes she knows about herself: “I’m not a Cylon. I’m Sharon Valerii. I was born on
Troy…” Her chanting is similar to a war prisoners repeating his or her name and rank rather
than answer questions from interrogation. Even though they let her go, leaving a bomb behind,
they tell her they love her and know that “you can’t fight destiny, Sharon. It catches up to you.”
And for Boomer it does. Even after her long personal struggle regarding her identity, and her
attempt to (literally) destroy her Cylon self and confirm her individual human identity, she
cannot fight her destiny/programming. Upon returning to Galactica, she is congratulated by
Commander Adama (Edward James Olmos) and, as a shocking cliff-hanger ending to the
episode, Boomer shoots him in the torso.
Boomer’s destruction of the base-star is just one example of how violent the fight against
multiplicity, embodied in the Cylons, is portrayed in BSG. With the surprise arrival of
another surviving military Battlestar, Pegasus, the theme of individuation is again complicated as
Cain (Michelle Forbes), captain of Pegasus, reveals her plan to destroy the Cylon Resurrection
ship. In particular, the imagery associated with Cylons and the Resurrection ship help clarify the
collective as a threat to be destroyed, even if they appear human-like. While the previous Cylons
have been perceived the majority of the time either as the mindless Raiders and Centurions or as
a few clusters of the humanoid models, the Resurrection Ship represents the epitome of the
collective and becomes the singular threat which both Battlestars can agree to destroy. Indeed,
when we see the Resurrection Ship up close we see row after row of bodies of Number Six. To
further emphasize the point that the Resurrection ship represents a fear of the collective, as it is
being blown to bits, naked bodies, all the same, fly lifeless through space.
Apart from their multiple bodies, the Resurrection Ship also holds the key to truly
keeping the Cylons nonhuman: it is the mechanism that allows them life again after death.
Rather than dying in its body as a human would, a Cylon would just “download” into a new body aboard the Resurrection Ship, keeping all the memories of the previous life. Having first established that Cylons cannot be individuated, we can see why Smith’s use of Herbert C. Kelman in Less Than Human is appropriate:
To accord a person identity is to perceive him as an individual, independent and
distinguishable from others, capable of making choices, and entitled to live his
own life according to his own goals. To accord a person community is to
perceive him – along with oneself – as part of an interconnected network of
individuals, who care for each other, who recognize each other’s individuality,
and who respect each other’s rights. These two features together constitute the
basis for individual worth. (sic. 86)
Having equated the Cylons with a collective, the humans destroy the Cylons’ ultimate threat as the Battlestar Galactica continues on its way.
While bodies can be reproduced and indistinguishable from original biological humans,38
it is not only the body that is a plastic medium; the mind is as well. Having a malleable memory means
that the Self must be questioned. The Island, for example, features a compound full of clones, all of whom have implanted memories – memories that make them believe they had a childhood and
38 While trying not to privilege an essential humanness, this is the best way I can think to differentiate humans who
are conceived and born through “natural means.” Of course, the “natural” process of human birth is becoming more
and more technologized. Consider hormone therapy or “test tube babies.” It has not yet been determined where
“original human” ends and the “technologized human” begins.
a life outside the facility. Similar to the Replicants in Blade Runner, and in Cyrus R. K. Patell’s
words, “because memories have been implanted in their brains, they cannot be sure which of the
words and actions they remember having said or done are truly their own” (27). Replicants, like
the clones in The Island, were made in man’s image and indeed are biologically human in almost
every way except for their superhuman strength and mental processing powers. “Instead of
reinforcing the border between humans and replicants,” as Kevin R. McNamara believes Philip
K. Dick’s inspirational novel did, “Blade Runner projects a world in which technologies of
image and memory production render human experience and memory ultimately
indistinguishable from the experience of, and the memories created for, the replicants” (423).39
In many cases in fiction, that ability to be reprogrammed calls up our human anxieties
relating to memory.
With both threats to the individual body and to the ability to maintain individual
memories, androids embody the anxiety of individuality even further in a very metaphysical
way: their existence represents the threat of being robbed of free will and agency. Even without
being manufactured, androids are also perfect duplicates of entities without independence or
agency – in many cases they are made with “laws” and programs that cannot be violated. For
example, to use the Borg from Star Trek, Mia Consalvo reminds us that “the term ‘individual
Borg’ is actually a misnomer, as drones are not individuals at all, instead merely functioning
matter, available for upgrade, modification or recycling. Flesh and technology coalesce in a
posthuman figure that is denied individuality” (192). The existence of the Borg is not isolated to
the Borg ships. Indeed, the main story arc between seasons three and four features the
39 Interestingly and unintentionally, this description fits almost identically with what Webb described as the “more
traditional popular view of the robot.” Webb describes the robot as “a machine-man that possesses super-human
capabilities and functions in a technological sphere somewhere between people and machines” (5).
assimilation of Captain Picard (Patrick Stewart). After his rescue and the successful removal of
his Borg implants, Picard describes his experience: "They took everything I was; they used me to
kill and to destroy and I couldn't stop them, I should have been able to stop them. I tried, I tried
so hard. But I wasn't strong enough, I wasn't good enough, I should have been able to stop
them.” For Consalvo, this particular speech is an expression of his “dis- or in-ability to fight
back [which] reveals many of the implicit assumptions about individualism in contemporary
Western society” (193-194).
I will concede that the Borg may not be considered manufactured humans. In fact, within
the Star Trek universe, the Borg are presented as a parasitic entity, with a shared singular
consciousness. They travel around the galaxy assimilating the technology of other alien races in
order to perfect what they believe to be a superior blending of biology and technology. Using
my definition of cyborg in the introduction, the Borg are more likely to fit in that category;
however, for the sake of identifying the anxieties related to technology and technologized bodies,
the Borg are appropriate – especially with their ability to rob us of our individuality. Kevin
Decker identifies the Borg in two ways that are appropriate here: (1) “they assault our vaunted
sense of individuality [because] they have a single-minded desire not only to conquer all
opposition, but to assimilate the best of all worlds into their collective;” and (2) they are
unnatural…“by their very nature [the Borg], transgress boundaries of ‘human’ versus ‘machine’” (131).
When considering whether or not an entity has individual agency, free will often enters
the discussion. As we have seen in earlier discussion about defining artificial entities in a way
that assigns “agency” rather than “intelligence,” free will appears as a cornerstone in defining
agency. Kenneth Himma chooses to tackle this issue by accounting for the standard definitions
of agency, consciousness, and free will. He states that “the relevant mental states might be free
or they might not be free. Volitions, belief-desire pairs, and intentions might be mechanistically
caused and hence determined…. Agency is therefore a more basic notion than the compound
concepts of free agency, rational agency, and moral agency – although it may turn out that one
must be rational or free to count as an agent” (20). Himma believes in a sort of human-equivalence based on behavior when considering artificial entities and the problem of other
minds: “If something walks, talks, and behaves enough like me, I might not be justified in
thinking that it has a mind, but I surely have an obligation, if our ordinary reactions regarding
other people are correct, to treat them as if they are moral agents” (28). In other words, while the
Borg may behave in their surroundings with individual directives, they are clearly not walking,
talking or behaving enough like a human to be considered such. They have been robbed of the
most individual label “I” and forced to adhere to a shared consciousness of “we.”40
For some, simply being an agent in the world does not qualify one for ethical treatment
or judgment. For example, Moor points out that “Some might argue that machine ethics
obviously exists because humans are machines and humans have ethics. Others could argue that
machine ethics obviously doesn’t exist because ethics is simply emotional expression and
machines can’t have emotions” (18). If the ethics of the machines are transmitted from humans
to their robots, then something like Asimov’s Laws of Robotics would likely apply. Beginning
as early as 1942, Isaac Asimov explored robot-ethics through several short stories and novels.
These “Three Laws of Robotics” were then picked up in popular culture and explored in other
fictional pieces, while also often being referred to in robotics development. Although Asimov
40 For further exploration of individual Borg versus the collective, Star Trek: The Next Generation, season five
features the episode “I, Borg” in which a single Borg is discovered separated from the Collective.
explored variants throughout his work, the three primary laws are as follows from his
foundational text I, Robot (1950):
1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
While at first the Three Laws may seem straightforward, Asimov explored the possible
contradictions that could arise from this kind of logic, especially when applied to machine
intelligence.
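To make that precedence logic concrete, the following is a minimal sketch of the Three Laws rendered as ordered constraints. It is my own illustration in Python, not Asimov's formulation or any real robotics framework, and every field, function name, and scenario in it is hypothetical:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    injures_human: bool = False        # the action itself would injure a human
    prevents_human_harm: bool = False  # the action would save a human from harm
    obeys_human_order: bool = True     # the action complies with standing human orders
    preserves_self: bool = True        # the action keeps the robot intact

def first_violated_law(action: Action, alternatives: list) -> Optional[int]:
    """Return the highest-priority Law the action violates, or None if it violates none."""
    # First Law: may not injure a human, or through inaction allow a human to come to harm.
    if action.injures_human:
        return 1
    if any(a.prevents_human_harm for a in alternatives) and not action.prevents_human_harm:
        return 1  # choosing this action is the "inaction" that allows harm
    # Second Law: must obey human orders, unless that conflicts with the First Law.
    if not action.obeys_human_order:
        return 2
    # Third Law: must protect its own existence, subordinate to the first two Laws.
    if not action.preserves_self:
        return 3
    return None

# Hypothetical scenario: a human orders the robot to stand aside, but only a
# self-destructive rescue would prevent harm to another human.
stand_aside = Action("stand aside as ordered")
rescue = Action("attempt the rescue", prevents_human_harm=True,
                obeys_human_order=False, preserves_self=False)
for act in (stand_aside, rescue):
    print(act.name, "->", first_violated_law(act, [stand_aside, rescue]))

Run as written, the sketch reports that standing aside violates the First Law while the rescue violates only the Second and Third, so the ordering compels the rescue – and it also shows how easily a situation can be constructed in which every available option violates some Law, which is precisely the kind of contradiction Asimov's stories dramatize.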
Others, like James Gips, argue that simplifying ethics for robots down to something as
“simple” as Asimov’s Three Laws of Robotics is “not suitable for our magnificent (free-ranging)
robots. These laws are for slaves. We want our robots to behave more like equals, more like
ethical people” (243). Whether or not we actually want our robots to “behave more like ethical
people” is certainly up for debate, but I do take Gips’ point that when given laws or directives to
follow (or programming), the artificial entity has no individual agency.
THE NONHUMAN DILEMMA
While the metaphysical concepts of free will and agency can be explored endlessly, I
want to return us to the actual creation of androids and how that will affect our understanding of
personhood. Consider, for example, the creations of roboticists like Ishiguro. Ishiguro’s Geminoid looks human… but do we then take into consideration the construction of the android and say “well, it’s made of an alloy frame and covered in silicone skin, therefore a nonhuman
other of some sort?” I foresee two possible debates on the horizon. One is what I call the
“nonhuman dilemma.” When faced with an entity that appears human in every way, the
nonhuman dilemma is a question of ethics: might it be more ethically sound to treat that entity as
human? Below I offer a reading of the reimagined Battlestar Galactica series to begin a
dialogue about the frameworks of racial construction that set the humanoid robots (Cylons in this
case) apart from the humans. When the two beings are indistinguishable, and indeed the Cylons
sometimes do not even know they are nonhuman, audiences are faced with a unique challenge to
their feelings of moral responsibility. This dilemma can be better understood by applying a framework of racial construction to these human-like synthetic beings. The nonhuman dilemma has great power to assign moral status
to artificial entities. For Smith, “Our intuitive moral psychology seems to conform to the
following principle: We grant moral standing to creatures to the extent that we believe that their
essence resembles our own” (223). In the case of the nonhuman dilemma, if we believe a
nonhuman’s essence is like our own, we are more likely to extend personhood to it/them.
Just as using a gender label can be a natural “mental shortcut,” so too is the label of
personhood when it comes to physical form. In the case of a more complete assignment of
personhood, if it is true that small shifts in aesthetic features can plunge an entity into the
uncanny, then body appearance as a whole can make all the difference. Hanson, in his article
about anthropomorphism, reminds us of how important physical differences are: “In Mary
Shelley’s [classic] novel Frankenstein, the synthetic creature suffers the rejection of humans
simply because the creature looks terrible…. If only his designer had been more attentive to his
appearance….” Hanson goes on to emphasize the tragedy of poor design: “Imagine if Victor
Frankenstein had provided his creature with nice skin, a warmly expressive smile, and big
attractive eyes; we may expect that the story would have been much less tragic” (n.p.).
In fact, it is the hope of robotics designers that their humanoid robots will be warmly
welcomed by consumers because of their human appearance. Ishiguro, designer of his own
doppelgänger, reminds us of the uncanny valley, and he believes the theory extends to
behavior as well. Ishiguro’s argument is that “familiarity increases for well-balanced appearance
and behavior. We refer to this as the synergy effect. For example, a robot should have robot-like
behavior and a human should have humanlike behaviors” (113). Hence, to best interact with another entity, humans prefer very human-like entities in both appearance and
behavior.
But in that interaction, roboticists like Shanyang Zhao predict huge changes to our known
society: “The rise of a synthetic social world where human individuals and humanoid social
robots co-mingle calls for a new conceptualization of society” (414). In that new society with
robots, Kerstin Dautenhahn sees the robot companion as one that “(i) makes itself 'useful', i.e. is
able to carry out a variety of tasks in order to assist humans, e.g. in a domestic home
environment, and (ii) behaves socially, i.e. possesses social skills in order to be able to interact
with people in a socially acceptable manner” (685). Robotics designers are not the only ones who share this vision for robotics and AI design; futurists are convinced that both strong AI and synthetic humanoids are not far off. Kurzweil argues: “My prediction is that tomorrow’s machines
will become indistinguishable from biological humans, and they will share in the spiritual value
we ascribe to consciousness. This is not a disparagement of people; rather it is an elevation of
our understanding of (some) future machines” (“How Infinite” 55).
If these roboticists and futurists see a future with androids that will have humanlike
behavior and skills that will alter our ideas of society, while simultaneously appearing human,
then we reach a new dilemma. With the forces of anthropomorphism at work and feelings of
empathy for anthropomorphized entities, we reach the “nonhuman dilemma.” Yes, we may see the human in many things, but there is much debate over what deserves the rights generally reserved for humans alone. Consider how we use personhood and the current debate over who or what is deserving of personhood rights. A human embryo, for example, is
genetically human, but does not yet fit the criteria for personhood rights. If awarded the title of
“person,” it would then be illegal to abort an unborn fetus. In fact, this debate has been extended
to nonhumans, including other species and potentially even non-terrestrials. From David
Livingston Smith’s book, we know that to be human does not protect us from dehumanization,
but we have also seen that to be considered a person (and be given rights of personhood), one
may not have to meet the criteria of human.41 The fact remains that we are on the verge of having very human-like entities among us that may demand the validity of personhood. Smith’s definition rings of speciesism, but if we incorporate the racial theories of Albert Memmi and Charles Mills, the portrayals of the Cylons in Battlestar Galactica meet the major criteria for dehumanization. By appealing to racial categorization, the writers of Battlestar Galactica have successfully conjured the nonhuman dilemma for viewers and given us a glimpse into two potential futures with human-like synthetic beings: one with hypersexualized, beguiling fembots and one with abused companions.
Helfer/Six and Fembots
41 Here I am thinking of philosophers Michael Tooley, Peter Singer, Tom Regan, and several others who argue that personhood rights should be extended to include nonhuman animals that meet certain criteria, including dolphins and great apes. This argument also includes legal scholars who fight for animal rights and a non-speciesist approach to law.
Patricia Melzer addresses the question regarding androids and human relationships, and
her argument offers some useful insight into the upcoming discussion about Six. Melzer uses the
character Call (Winona Ryder) from Alien: Resurrection (1997) as an example of an android in a
gendered world in terms of “gender passing” and the de-humanized treatment she receives after
being “outed” as robotic. While passing as human, Call is “coded as a desirable woman,” Melzer
explains (123). But once she is outed, “[j]ust as racial and gender passing are threatening to the social order, technological passing undermines hierarchies and denaturalizes categories by disclosing them as constructed… Once she is known to be an android, she is recategorized as un-human” and thus no longer desirable (124). The argument that the technological being is a threat to the natural, and therefore undesirable, may hold for Call and also for Six, but it becomes problematized as the audience becomes involved.
Figure 9: Six (Tricia Helfer) flanked on either side by Cylon Centurions. (Screen shot BSG, 2003.)
With the debut episode of the show (a two-hour “preshow” episode), audiences know of only one humanoid model Cylon: a tall, blonde, and hypersexualized Six, who is almost always in a red dress. In fact, she is the first humanoid Cylon we see as she enters the neutral-zoned space station and murders the human holding his post there in case the Cylons ever returned to Caprica (Figure 9). She is portrayed as a sexual predator who tricks an unwitting Gaius Baltar into downloading viral codes that ultimately wreak havoc on the
defense network of Caprica, allowing the Cylons to do the most possible damage to the
defenseless world. This version of Six has no official name other than the fan-given “Caprica
Six” because, as an individual, she primarily existed on Caprica. Before the invasion begins,
Caprica Six “comes out” as a Cylon to Baltar. “Now you’re telling me you’re a machine?”
Baltar asks, skeptical. Six defends herself by asserting, “I’m a woman,” apparently emphasizing her sexuality above her species. Still unsure of categories, Baltar barrels ahead:
“You’re a machine – a synthetic woman, a robot.” The boundaries of species for Baltar come
down to a machine/woman hybrid… who just told him that, with his help, she was able to gain
access to the defense network and leave Caprica helpless.
The fact that a hypersexualized robot woman beguiled a man into sex and then ultimate
damnation comes as no surprise to the science fiction viewers. Robot women, or gynoids, have
appeared in cinema for decades. Of particular interest is the way that these fembots are
implicitly connected with the enthralling ability to manipulate men, ultimately robbing men of
their free will. Telotte, in Replications, reminds us that even in the first film portrayal of a robot,
the threat was not just of the technology itself but of the hypnotic power of Maria in Fritz Lang’s
1927 Metropolis:
This most important of early robot figures is, tellingly, feminized and presented as
a seductive creature, an artificial body whose primary function is to deprive men
of their self-awareness, to lure both men and women to a kind of forgetfulness of
their human responsibilities, and to bring them all under the sway of those
technological powers that have given her birth. (16-17, emphasis added)
Not only is the threat that we will become enslaved by our own creations, but that we can also
become them by being stripped of agency.
“Whether they’re blonde, brunette, a redhead or bald, a fembot is a beautiful thing to
behold. Quick, calculating glances coupled with the strength to bully several large men are just a
couple reasons why we love the fembots of film so much,” declares the Manolith Team at
Manolith.com, a “Men's Lifestyle website” that calls itself the “best place to get your
testosterone on!” They set the stage for understanding “sexy women androids” in popular
culture. In the United States, we see them everywhere in popular film and increasingly in the
technosciences, bringing real, tangible form to the longtime fantasy of the “all services” ideal
woman. These “services” include everything from, as Bitch Magazine points out, serving beer
through a handy refrigerator-womb (as seen in a Heineken commercial), to helping a man shave
in the shower (in a Gillette commercial) (Sarkeesian).
Using “cyborg feminism” to deconstruct Six and the actress who plays her,
Tricia Helfer, reveals more about the nonhuman dilemma and boundary crossing – in this case,
from the safety of the screen to audience appropriation. Six entered popular culture in a time of
trans-ition with Americans facing “trans” identities. Here I will show that one “future
articulation,” to use Butler’s language from Undoing Gender, of a trans identity is in front of us
now. The gendering of the human species is implicated in that discussion, and so, therefore, are
the rights and expectations afforded to those gendered post/human bodies. If we accept
Haraway’s assertion in the Cyborg Manifesto that “we are all chimeras, theorized and fabricated
hybrids of machine and organism; in short, we are cyborgs” (150), then Helfer, and her merged
identity with Six, is one such subject. Robertson explains that “how robot-makers gender their
humanoids is a tangible manifestation of their tacit understanding of femininity in relation to
masculinity, and vice versa” (4).42 But in robotics, the choice of gendering is always in the
hands of the maker. Robertson refers to Suzanne Kessler and Wendy McKenna who introduced
the concept of “cultural genitals,” meaning that what we show on the outside (not necessarily
physical genitals) are what identify us as male or female. For roboticists, the simple decision of
placing lipstick on the lips of an android helps facilitate HRI, if in a gendered fashion. With this gendering act in place at the creation of an artificial entity (unless it is gender neutral), the way he/she is treated will be mediated by the cultural standards already in place. In the
case of Six and Helfer, that role is multifaceted.
On the one hand, Helfer/Six is a Cylon in BSG: her body appears human but is synthetic.
On the other, as a human, she is continually defined by her role as Six by the audience that constructed the world-wide-web version of Tricia Helfer. I contend that Helfer, merged with her role as Six, is a queer body – culturally mediated and ultimately trans-human. The language used on blogs and web articles reveals a slippage into the new trans-identity assigned to “Helfer/Six” or
“Six/Helfer” (which I will use interchangeably as is often done in cyberspace). Body images
presented in the media to please the viewer have always had a cyborgian quality. From the
power of photo-manipulation to the body-manipulation undergone by actors and actresses (this is
by no means isolated to the female body), the visual has always been askew from reality. To
help further illustrate the extent to which humanity is becoming more and more cyborgian, the
images and representations of Tricia Helfer are explored within the cyberscape of internet
publications and the blogosphere, revealing a new trans-being. Helfer is no longer simply a
42 An interesting side note is that Robertson clearly accepts gender construction as a social act, rather than reminding readers of the extensive work of Judith Butler and other feminist theorists. Perhaps a reminder of past gender theory would strengthen her argument, but it may not be necessary for the general audience.
woman/model/actress from Canada,43 but rather can be described as posthuman. However, as a
trans/human or post/human woman, Helfer’s body is not simply a transcription of a human body
augmented or alienated by technology, but rather incorporates the differences of the gendered
body – appropriated by the male-driven media standard of the “statuesque, leggy blonde,” Helfer
signifies the contested female body, reinscribed by technology.
During the time that the show aired, it received a variety of criticism and praise
regarding the debate about whether the show was feminist or not. Elle magazine claimed that
BSG “might be the most feminist show on TV!” and, around that same time, Hugh Hart for
Wired writes, “BSG has … conjured a gender-blind universe filled with female characters of
genuine substance” in his article “Strong Women Steer Battlestar Galactica’s Final Voyage”
(“Strong Women”). These articles and other blog posts prompted Juliet Lapidos for Slate.com to
write “Chauvinist Pigs in Space: Why Battlestar Galactica is not so frakking feminist after all”
in which she argues that “beneath these attention-grabbing markers of gender parity, there's
plenty to make a feminist squirm” (n.p.). The drama on the blogosphere continued when Brad
East for Slant responded with his article entitled “Battlestar Galactica: Frakking feminist – So
say we all!”44 Already, the characters within the show were being discussed as gendered,
political bodies.
The debates over the feminist nature of BSG surround the primary human and nonhuman
roles of President Laura Roslin, Kara “Starbuck” Thrace (Katee Sackhoff), Sharon
“Athena”/“Boomer” Agathon, and the many other “strong” characters. Although feminist
43 This is not to say that any of the above qualifications (woman, model, or actress) are “simple” identities. Indeed, the identity of an actress and model is already far from any normal or even natural way of life, arguably not far from another species of cyborg, but that is a discussion for another time.
44 “Frakking” here is not a reference to a technique used to extract natural gas from the ground. In the context of BSG, “Frakking” is used throughout in place of another swear word. Whether this was done to make the world of BSG feel different culturally, like the fact that papers are not square but rather have the corners clipped, or if the writers of the show were simply avoiding language censorship, is unclear.
messages in media are often simply reduced to the label of “strong” characters, it is never clear
what this supposed strength means. Included in the list, but less discussed by the “critical”
blogosphere, is the “science fiction icon”, dubbed by UGO Entertainment as “sultry Cylon
Number Six” (“Six Appeal”). Whether BSG is promoting feminist claims or not, the struggle
between the human and the post-human, which is often feminized, becomes the main concern as
Helfer/Six’s identity moves beyond the “safe” and contained boundaries of the television show.
Helfer’s human identity becomes appropriated and hybridized into a cyborgian body. No longer
entirely human, Helfer is described by media consumers as Six.
Perhaps a reason Six is one of the least discussed of the group of actively “strong”
women is that she and her many copies are unable to be reduced to the simple binary of
“Feminist or not.” It is easy to argue that Six is an anti-feminist image because she is highly
sexualized and is the seductive femme fatale who seduces Gaius Baltar, enabling her to gain
access to the global defense network of the human home colony of Caprica, ultimately leading to
her assumed destruction. Six appears to be the stereotypical dangerous threat of technology and
female manipulation, much like Maria in Metropolis. But Six becomes more complex as she
exists in the mind of Baltar as the male fantasy of the sexy, seductive, and yet manipulative woman, driving
him mad with questions and answers about faith and God. Helfer, herself, argues that even
though her character could often be seen as simply “eye candy”, she clarifies that “Reducing Six
to just a sex object would have been a little bit one note, and that wasn’t what it was about” (Hart
“Strong Women”). What, then, is Six all about? How can she be understood as both a human
actress, who is lusted after by the SF geek audience, and as the many versions of Cylon Six, both of which are lusted after as a single entity?
Helfer/Six is not simply a woman augmented by the technology around her; rather, the
technology, the Cylon, has become her and vice versa. On the one hand, she has been
appropriated by the blogosphere of readers, writers and the cyber community as a body inscribed
by her sex appeal – a veritable cyber Barbie doll to dress up, choosing the appropriate costume
for the persona as laid out by the show, and playing her role as a Cylon. In this way, she is the
perfect Cyborg. On the other hand, Helfer can be read within the context of the BSG narrative.
Most readings of BSG accept Helfer as quite separate from her role as a Cylon. In this case, I
argue in opposition to most readings of BSG, and often of science fiction as a whole. Helfer (the real-world human as inscribed by the blogosphere) and Six (the Cylon Helfer portrays on television) exist together and in conflict on the screen. Six has been created not just by a script but by the body of Helfer, the star chosen as the ideal woman for the role. In fact, starting with
the identity of Helfer, I argue that Helfer’s body has become interchangeable with that of her
fictional character, Six.
Although Tricia Helfer was awarded the 1992 “Supermodel of the World” by the Ford
modeling agency, she did not hit the celebrity blogosphere until her appearance in Battlestar Galactica when it was re-imagined for television in 2003. Since her role as Number Six, Helfer has
gained popularity for her “jaw-dropping good looks and innate sexiness” (“Tricia Helfer:
Biography” TV Guide) but also for the mystery associated with her. Even Wikipedia (the
generally accepted source for all “truth” on the internet45) chooses to include this excerpt from an
interview by Adam Vary for Entertainment Weekly with director Michael Rymer: “It wasn't just
45 This is not to say that what Wikipedia offers is the Truth, but rather the culturally and socially mediated truth, considering the fact that Wikipedia is “peer-reviewed” (that is, read and reviewed by the members of the online community) and edited by the populace.
the way she looked; she just has this vibe about her. Nobody gets how hard that role is, to bring
the depth, the vulnerability and the mystery to essentially a robot chick” (n.p.).
Besides being eternally sexy, Helfer/Six is unchanging for her viewers, both because she is always “uploaded” into identical bodies, each existing simultaneously, and because she is unaltered by traditional female changes, like pregnancy or menopause. For example, it is essential to remember that Cylons cannot reproduce and, indeed, this problem becomes the focus of the major narrative struggle of the season arc. With the sterility of the Cylons in mind, many of the “patriarchal anxieties” Graham describes as surrounding the female body are alleviated, or at least transformed into a new anxiety: the prospect of the childless woman who steals babies and snaps their necks. The Cylon is unchanging and potentially immortal, and this trait is
presumably shared among all “skin job” Cylons. Rather than presenting us with a “permeable
body” that falls into the category of “monster”, which Graham explains as “contribut[ing] to the
anxiety of the patriarchal mindset” (52), the stable Cylon body presents less of a threat. Or at
least less of a threat to the bachelors who don’t feel the threat of procreation as strongly. For a
man who is not seeking a mate for procreation, but rather a sexual partner (sometimes only part-time), a sterile Cylon is ideal.
But Six is a living, breathing human being, therefore subject to the potential threat of
procreation during sexual fantasies, contributing to anxiety, right? But she is also an actress,
understood to be human off screen and therefore should be decoded as two separate entities: Six
on one side, and Helfer on the other, correct? When exploring the vast volume of blog-obsessed,
image-mongering publications on the internet, it becomes clear that Helfer no longer fits the
category of “human woman.” Even in the process of casting the role of Six, Helfer was chosen
for her “sexiness”, not her talent. In Adam Vary’s interview with the creators of the show, David
Eick (producer) and Michael Rymer (director) explain the casting of Helfer as follows:
EICK: I would say the biggest casting drama was Number Six. Tricia Helfer had this thing that you couldn't put your finger on, but she hadn't done anything. She had played a dead body in an episode of CSI.
RYMER: Tricia has this effect on men. You just noticed this giddiness that would infect the men. It wasn't just the way she looked; she just has this vibe about her. … So I said, “Look, I can't make another girl sexy, but I can help her act.” (10)
Since the show became popular, even simple news articles remember the mysterious “robot
chick” and slip from Helfer to Six interchangeably. JustJared.com, sharing their interview for Buzznet and reporting on a release of BSG, interviews the “Sexy Cylon, Tricia,” who appears on the cover of Maxim, a men’s lifestyle magazine, with Grace Park/Sharon; apparently, in her words,
“there will certainly be a lot more explicit action on the DVD. And I refuse to have a body
double, so you will see a little bit of skin from the Six character” (sic, “Grace Park & Tricia
Helfer Cover”). It appears important for the audience members to advertise the fact that
Helfer/Six will be getting more naked. In another example, TV Squad, for AOL online media,
announces that Helfer will be appearing in the show Lie to Me; the opening line reads “Watch
out, Dr. Lightman, Number Six is coming. That’s right, Battlestar Galactica’s star Tricia Helfer
is coming to Fox’s Lie to Me” (Harnick). Note here that even Dr. Lightman, played by Tim
Roth, is “collapsed” into his screen identity for popular reference. The difference here is that
Roth is connected with a well-respected and talented deception expert while Helfer is “coming”
(with ominous tones).
As with most celebrity women, her procreative preferences are not among the lists of her
“bountiful [human] traits.” Among the most popular “hits” on Google, you are more likely to
find Tricia Helfer linked with keywords like “height,” “weight,” “sexy pics,” and “hot.” There is
little to no mention of Helfer as a woman beyond her modeling and acting career. In fact, the
majority of websites discussing Helfer automatically describe her as “the alluring cyborg in
Battlestar Galactica” (“Tricia Helfer: Biography” Who2.com). Even the title pages for Maxim
and Playboy inevitably connect Helfer directly with Six with phrases like “The Battlestar Babe is
Frakking hot!” One particular blog entitled “Hot Bitch of the Day” goes so far as to simply
reduce her photo shoot in Playboy to this: “it didn’t answer the burning question of whether
Cylon wax or leave a landing strip. Oh well, at least we see frakking Cylon boobs! [sic]”
(Jimmy). As you can see, Helfer’s image is no longer one of an autonomous and singular
woman; she has become the new cyborg. No longer is being cyborg simply limited to the
technology that we use and augment our lives with, but now the boundary of transhumanism
extends to identities built on the internet, between the “safe” websites of electronic pleasure –
there is no physical threat beyond that requiring a hard drive wipe or anti-virus protection.
Sharon Sharp discusses what she calls the “collapse” of the identity of the fictional cyborg into the human woman through her reading of the 1976 show The Bionic Woman, an account very similar to my understanding of Helfer/Six. Examining the blending of identities between Lindsay Wagner, a human woman with a “Star” identity, and the Bionic Woman, Jaime Sommers, on the show, Sharp explains that there is a distinct lack of feminist discussion of the
show itself and yet, “the collapse of Wagner’s star image with the bionic woman performed in
these accounts opened up the possibilities for an implicit feminist reading of The Bionic Woman”
(512). In the case of BSG, there is no lack of feminist discourse throughout the show, nor in the
criticism of the show; however, the lack appears in the public appearances of Helfer as
Six. While Wagner is often featured in articles “[discussing] her star image in terms that closely
parallel feminist concerns of women’s rights and equality, particularly in the work place” (Sharp
512), Helfer is often framed as discussing the troubles with always being equated with her Cylon
identity. For example, in an article in Wired, Helfer is described in terms that first highlight her
Cylon-self and then make her safe within gender-normative ways: “Her character's intensity – and Six's propensity for dying horribly brutal deaths, only to be reincarnated, Cylon-style – might intimidate fanboys who spot Helfer at sci-fi conventions. But she's really just an easy-going Canadian farm girl made good, far from the angry and erotic part she brings to life on the
show” (Hart “Death and Fanboys”). It is important to note that this article was written by the
same man who wrote about the “Strong women steering Galactica’s final voyage,” but he
contradicts that view in this earlier article. In the 2008 interview, parts of the discussion with
Helfer that Hart chooses to feature in the article include not her “strength” as a woman, but rather the fact that she enjoyed playing the Cylon Six who was treated horribly: “I
really liked Gina. She'd been gang-raped, beaten, tortured and starved, so she was a lot darker
and a complete departure from what I had been playing. In that respect, it was fun to get into
what was almost a whole new character. That's much more interesting than just playing the
pretty face, which would get really boring” (Hart “Death and Fanboys").
In stark contrast to Wagner’s pro-feminist behavior, Helfer is featured as the woman
who enjoys abuse and torture, the ultimate male fantasy, as explained by the media surrounding
Helfer/Six. To clarify this idea of the fantasy in relation to cyborg women, Annalee Newitz, for
PopSci.com explains that “to some, fembots represent the perfect male fantasy: They’re sexy and
submissive and have more techie features than the Xbox 360. But they also have a dangerous
side that can reduce walls to rubble and make an army retreat. Perhaps the fembot’s allure
resides in her ability to walk the line between total obedience and unfathomable power” (n.p.).
The bloggers keep no secrets when it comes to declaring Six/Helfer as part of their sexual
fantasies. For example, about her photo shoot for Playboy magazine, a feature in UGO
Entertainment about Helfer explains that, “Fans were treated to what could be considered a
fantasy come true when the world got their first glimpse of Tricia Helfer's Playboy pictures in
the February 2007 edition of the famed men's magazine” (“Six Appeal: Tricia Helfer”).
Blogging viewers were not the only ones fascinated with fembots, and the self-proclaimed feminists of Bitch Magazine are well aware of this trend. Anita Sarkeesian, for Bitch
Media online, says “The fembot fantasy is an expression of total control, especially over women”
(n.p.). She argues that advertising has taken the parts of science fiction that she loves the most
(“imagining alternative societies and futuristic technology”) and has turned it into a “tool of
subordination and oppression” (n.p.). There has been a long-standing tradition of feminists viewing anything man-made as another “tool of subordination and oppression”, but in Helfer/Six
we see something more complicated. True, she is not entirely human and she is clearly lusted
after by audiences, as reported by the celeb-news and blogs. However, is it wise to view this
new incarnation as oppressed, needing liberation? It is important to notice that in many of these
versions, Helfer/Six is described as “powerful” and “dangerous.” In CurrentTV, Helfer describes
her experience with Playboy. In an interview, she is introduced to the readers as “Number Six,
Clothes zero,” but she explains the fun she had while on set: “I got to choose the photographer
and I always wanted to work with [the photographer] and I had photo approval and my husband
is 100 per cent behind it” (sic. “Tricia Helfer Playboy Pictorial”). Even though she appears to
qualify her statement with the fact that her husband was accepting of her choice, she says, “I
didn’t do it for other people, I did it for myself” (n.p.).
Tricia Helfer/Cylon Six has been constructed outside of the bounds of the television show
by the people that consume her, consume her image. In this consumption and re-writing of
Helfer/Six a new being is formed – something both woman and synthetic, created for pleasure (in
that Helfer’s image as Six has been circulated for the “boner sequence” of many men), yet
expressive of alternative desires (appearing to own some of Six’s more destructive wants). In
Six/Helfer we see the combination of woman and machine, two very erotic images. Claudia
Springer explains that “artistic renderings of technology since the early twentieth century have
often expressed techno-erotic impulses” (3). Springer goes on to say that “mechanical objects
have been imbued with male or female sexual characteristics for centuries; consequently,
representations of machines long have been used to express ideas about sexual identity and
gender roles” (9). In exploring Helfer/Six and how she appears in cyberspace and in BSG, I have
shown how her body occupies the boundaries of cyberspace: machine and human, Cylon and
Helfer, erotic-mechanical and unnatural, defiant of death and yet capable of destruction, and
even resurrection, the ultimate post/human body. In the construction of Helfer/Six are
expectations of gendered appearance and behavior which are important to be aware of as we
approach a future with humanoid robots. Robertson, writing about the decisions that robotics
designers already make when designing robots, explains that “gender for them constitutes
common-sense knowledge, or a cognitive style through which they experience the social world
as a factual object” (4). But in these creations, we see a multiplication, a duplication of the
existing standards, at the hands of roboticists: “The practice of attributing gender to robots not
only is a manifestation of roboticists’ [own] tacit, common-sense knowledge, or habitus, but also
an application of this knowledge to create and sustain, or to leave self-evident, the facticity of
their social world” (Robertson 4).
From exploring the metaphysical, purely visual existence of Tricia Helfer on the internet
and blogs, it is clear that yes, cyborgs exist and indeed are among us. Helfer appears online as
no longer entirely human; her identity is inexorably linked with being a Cylon. She is the “sexy
cyborg” that represents sexual desire for technology that is unharnessed and mysterious while
still safe from procreative repercussions and relationship details. Why worry about children and
the real consequences of wrinkles and aging when your lover is a robot à la Cherry 2000? Step aside, Maria of Fritz Lang’s Metropolis and her doppelgänger; for better or for worse, the real
cyborgs are among us and redefining what the gendered world of androids and human bodies
will look like.
Race and Robot Doppelgängers
The fact that the Cylon is not human is an obvious point, emphasized for the audience at the beginning of every episode with the words “Cylons were created by man…they evolved… they rebelled…there are many copies…,” but it is a point that the characters of the show call into question. Indeed, until “outed,” the Cylon is for all intents and purposes human, and this makes it all the more complicated to navigate our feelings for them. By the end of
season 2.5, audiences are aware that the “bad guy” Cylons, the destroyers of their home world
Caprica, are hiding in their ranks in human form, so the drama is high. Even though only a few
have been revealed (either by assassination attempts or by their multiplicity), most people in the
remaining fleet (about 50,000 human survivors) label the Cylons as “bad” even without fully understanding them, very much as Memmi describes racial stereotypes: “Racism seeks to
render definitive the stereotype that it attaches to the other. Not only does the other belong to a
group of which all members are rotten, but they will be that way forever. Thus, all is in order for
eternity. The bad guys are the way they are definitively, and the good guys as well” (114).
Within the civilian fleet, led by Commander Adama and President Laura Roslin (Mary
McDonnell), humans have dealt with their interactions with the Cylons in their own way: keep
them prisoner until we figure something out, but allow them to keep their person status (giving
them the traditional comforts afforded a human prisoner).
Aside from the general treatment of “outed” Cylons in Galactica’s fleet, with the
discovery of the lost Battlestar, Pegasus, the crew of Galactica is forced to make some
uncomfortable decisions about how to treat the Cylons. After a joyous reunion of the two crews,
it quickly becomes clear that the Pegasus, under the command of Admiral Cain, follows very
different protocol when it comes to Cylon treatment. Over the course of the episode “Pegasus,”
the audience is confronted with brutal and disturbing treatment of Cylon Six, challenging
viewers to consider their own view of the nonhuman dilemma. In fact, by returning to Smith’s
definitions of dehumanization/depersonization and Memmi and Mills’ description of racism, the
unfolding of these two episodes illustrates clear lines between acceptable and morally abhorrent
behavior toward a nonhuman Other. Because of the emotional ties built from the nonhuman
dilemma through the first season, the audience is confronted with extremely “depersonizing” (to
take a non-species specific spin on “dehumanization”) treatment that ends up being rejected by
the crew of Galactica. In particular, Cian and her officers’ treatment of Cylon Six far exceed
normal treatment of human prisoners, even in a time of war.46
46 For the sake of brevity, here I speak of the Pegasus and her crew as presented in the Battlestar Galactica television series and the series alone. I will not include the back story of Pegasus as presented in the special television miniseries Razor (2007).
Before the arrival of Pegasus, a few important human-Cylon relations are formed, giving
the “Skin Job” Cylons a very racially-charged identity, one which needs to be understood before
the arrival of the Pegasus can have its strongest impact.47 At first (especially in the first season)
Adama readily accepts Cylons as evil. He sends his pilots to kill the machine-like Cylon ships
and, upon the discovery of a Skin Job named Leoben (Callum Keith Rennie) in the fleet, he has
the Cylon interrogated roughly, exclaiming, “First of all, it’s not a him, it’s an it, and second of all
it can’t be trusted.” Later he voices his reluctance for the Cylons that look human, especially
knowing that some of his crew have fallen in love with Cylons and maintained those feelings
even after they have been revealed as nonhuman.
President Laura Roslin also struggles with establishing boundaries in the first season, but
later helps to humanize the Cylons. One example of Roslin’s first distinction between how Cylons are treated and how they might be treated ethically as humans is seen in the episode “Flesh and
Bone” in Season 1 as main character Kara “Starbuck” Thrace tortures Leoben for information
about a warhead supposedly hidden in the fleet. Interrupting the rough interrogation, Roslin asks
firmly “What the hell is going on here?” to which Kara responds, “It’s a machine, sir. There is
no limit to the tactics I can use.” Disappointed that Kara has no answers to the location of the
warhead Roslin says “You don’t know? You’ve spent the last eight hours torturing this
man…this machine…whatever it is…and you don’t have a single piece of information for us?”
Even though she put a stop to the torture, Roslin ends up throwing Leoben out the airlock into
space, explaining to Kara, “He puts insidious ideas in our minds, more lethal than any warhead.
47 Although the derogatory phrase “Skin Job” isn’t used until later in the series, other derogatory words are used to describe the Cylons and humans who have relations with Cylons. Even the singular use of Skin Job is important for its reference to Blade Runner. For authors like Marleen Barr, the use of derogatory language in Blade Runner is an unavoidable connection to racism. The humans of Battlestar choose their own language to reference the “old school” look of the Cylons: “Toaster” is the most commonly used, but “Cylon” is often connected with “Fracking”, the BSG-world equivalent of “fucking.” The discussion of language use has its own time and place, but I want to spend more time on other elements.
He creates fear. But you’re right, he’s a machine and you don’t keep a deadly machine around
when it kills your people and threatens your future. You get rid of it.” Her uncertainty of what
to call and how to react to the Cylons has changed by the second season when the newly arrived
copy of Sharon “Athena” has developed a romantic relationship with Helo. Roslin explains that
the pregnant Cylon Sharon should be treated gently even when imprisoned: “She thinks she’s in
love. Maybe it’s software, but the important thing is that she thinks she’s in love with
Agathon… and the baby” (“Home: Part 1”). By repeating the fact that Sharon thinks, Roslin easily invokes in the audience the notion that “to think is to be,” affirming the humanness that the crew wrestles with. But this tenuous balance is about to be disrupted with the arrival of a Battlestar previously thought lost.
Before seeing the Cylon prisoner, the audience suspects that the Pegasus has a different
mentality regarding the treatment of the enemy. For example, the first sign that things are run
differently on the Pegasus is noticed by Kara as she sees a “scorecard” of Cylon kills painted on
the side of the Viper ships. The Captain of the Viper pilots claims that keeping count
“encourages morale,” but Kara and others are skeptical. Rather than seeing something that
encourages morale, Kara thinks it’s an act of bragging that takes the conflict to another level
beyond self-defense. There are other suggestions that Cain’s ship is run with a different, much
more military approach, one that absorbs civilians rather than protects them.
Treatment of humans in the fleet aside, the introduction of Cylon Six as a prisoner on the
Pegasus quickly draws lines for audiences and the crew of Galactica. We first see this new
version of Six, or Gina Inviere as she is known among the Pegasus crew, as a prisoner when Baltar enters the cell to examine her. However, to emphasize the separation
between Six as we have known her (the hypersexualized fembot) versus Six as prisoner, Baltar
does not enter the cell alone; he is accompanied by his imagined version of Six (or Head Six), a
female Cylon with which he had sexual relations on Caprica.48 Before we see Six directly, Baltar
and Head Six walk into the cell block and from the perspective of inside the cell, behind glass,
we see the reflection of a person lying on her side, but we also see Head Six as a tall, attractive
blonde in her signature red dress. Emphasizing their reactions, the camera tracks Baltar and
Head Six as they gape, horrified at the form of a broken person lying, nearly naked, on the floor.
“Oh my God,” Head Six gasps and shudders, “Gaius, it’s me.” With that we are shown Six on the
floor of the cell: dark hair a mess, chained with a collar around her neck to the floor in a barren
cell, she is covered in cuts and bruises, wearing nothing but a filthy sack-like shirt, leaving her
thighs bare. Compared to how viewers are used to seeing Six, we are invited to gasp and gag
along with Baltar and Head Six as they enter her cell. Here the audience feels with Baltar and
Head Six how Six has been demoted to a nonhuman category. As Smith describes this kind of
treatment:
Demoting a population to subhuman status excludes them from the universe of
moral obligation. Whatever responsibilities we have toward nonhuman animals,
they are not the same as those we have toward members of our own species. So,
if human-looking creatures are not really people, then we don’t have to treat them
as people. They can be used instrumentally, with complete disregard for their
human worth – they can be killed, tortured, raped, experimented upon, and even
eaten. (159)
48 For viewers of the show, this version of Six is fraught with controversy. For some viewers she is called “Chip Six,” referring to the theory that she is actually a projection of an existing chip in Baltar’s brain – one that transmits to the Cylon fleet and tampers with Baltar’s head. Other viewers call her “Head Six” and maintain that she is purely a figment of his imagination without any outside influence from the Cylon fleet. Regardless of whether or not she is a projection of Cylon technology, Baltar loves and lusts after this imagined Six and even follows her advice. For this paper I simply use “Head Six” for clarity and the simple fact that regardless of what she is, she is in his head.
As Head Six and Baltar begin to examine the seemingly comatose Six, Head Six echoes words
similar to Smith’s: “She must have been abused…tortured.” When Baltar examines Six clinically rather than sympathetically, Head Six gets upset: “Can’t you stop being a scientist for one moment
and look at the abused woman lying there in front of you?” Later, as Baltar describes the state of
Six to Cain, he explains that her comatose state is due to psychological damage: “It’s quite clearly
traumatized, which would suggest that its current condition is psychological in nature… [This]
shows that the Cylon consciousness is just as susceptible to the same pressures as the human
psyche. It can be manipulated in the same fashion.” Even though Baltar calls Six an “it” to
placate Cain, his sympathies are clearly with Six and all her models.
If this portrayal of Six weren’t enough to invoke the nonhuman dilemma, the “Cylon
Interrogator” and the behavior of the Pegasus crewmembers solidify the difference between
morally abhorrent and morally acceptable behavior. With the capture and imprisonment of Six
on the Pegasus, Cain and her officers adopt a certain code of behavior that sets Cylons apart as
the threatening Other to be abused, neglected, and depersonized. Apparently this works well for
Cain and confirms Memmi’s argument that “The derogatory stance, the act of denigrating
another, permits people to close ranks with each other; the identification of an external threat,
whether real or imagined, restores fraternity” (63).
Apart from the neglect of imprisonment suffered by Six on the Pegasus, viewers also
learn that the Cylon “Interrogator” from Pegasus allows and even encourages the sexual abuse of
their Cylon prisoners. Having already developed an emotional connection with the Cylon
Sharon on Caprica and on Kobol just a few episodes earlier, the audience is likely susceptible to
feelings of sympathy for Sharon/Athena. While socializing with the Galactica hangar deck crew, members of the Pegasus begin to speak freely about how Cylons are treated on their ship. At
first their conversation seems like a bit of misogynistic bragging as the Pegasus crewmen joke about
the fact that Galactica has a Cylon prisoner as well: “I heard you guys got yourselves a Cylon.
Heard she’s a hot one too!” and “Gotta’ get me some of that Cylon stuff.” As Chief Tyrol and
Helo49 overhear the Pegasus crewmen, their anger rises and the audience is encouraged to side
with Tyrol and Helo in defending Sharon’s honor as the Pegasus crewmen’s taunts become more
vicious: “Sensitive? You got a soft spot for the little robot girl, do you?”
Even the music intensifies, building the sympathy for audience members, as the camera
cuts to Sharon’s cell and to the Cylon Interrogator, Thorne, entering her cell. With another cut,
we return to Tyrol and Helo with the Pegasus crewmen as they taunt:
“Remember when Thorne put that ‘Please Disturb’ sign on the brig?”
“I got in line twice.”
“I remember she was just lying there with that blank look on her face, like ‘uhhhhh.’”
Even Cally (Nicki Clyne), a Galactica crewwoman who shot and killed Sharon/Boomer after
Boomer had attempted to assassinate Adama, asks the Pegasus crewmen to stop as they continue
their crass remarks and gestures mimicking sexual intercourse. The camera gives a reverse angle
shot between Sharon/Athena, who is being pushed around by Thorne, and then cuts back to the
Pegasus crewmen who are saying they want a chance at Galactica’s Cylon too, but Thorne said
he would have to “break her in first.” Horrified at the realization of what is about to happen to
Sharon, the audience easily feels that Helo and Tyrol’s violent intervention is justified.
The Pegasus crewmembers further solidify their abhorrent behavior toward the Cylons over the course of the two episodes following “Pegasus.” In the words of a Pegasus
49 Remember here that Helo has been romantically involved with Sharon/Athena on Caprica and we know she is pregnant with his baby. It is also important to note here that Tyrol and Sharon/“Boomer” were together for the first season of the show and he clearly has feelings for Sharon regardless of which model. Both men have strong feelings for Sharon and the audiences’ sympathies for her, either Athena or Boomer, are strong.
crew member, “You can’t rape a machine!” These words sound much like the way Smith
describes the rationalization of extermination or dehumanization through metaphor. Smith uses
the example of the Nazis seeing their victims as subhuman animals and thus “excluded from the
system of moral rights and obligations that bind humankind together. [For example,] It’s wrong
to kill a person, but permissible to exterminate a rat” (15). Disgusted by such language,
members of Galactica do not condone the behavior and indeed it leads to a near-war between
Galactica and Pegasus. Later, Adama and Dr. Cottle (ship’s doctor aboard Galactica, played by
Donnelly Rhodes) explain to Sharon (and by extension the audience) their opinions about the
way Sharon was treated. “What happened to you…” Adama begins. “…was unforgivable,” Dr. Cottle finishes. Adama continues, “…happened aboard my ship, on my watch, and it’s my responsibility. So I just want you to know that I personally apologize” (“Resurrection Ship: Parts 1 & 2”).
I am not alone in drawing parallels between treatment of artificial entities and racial
theory. Machine intelligence at its original conception was understood as abnormal,
marginalized and misunderstood. Not only is the very possibility of machine intelligence still up
for debate, but Alan Turing, oft considered the “father of artificial intelligence,” predicted the queerness of AI. David Leavitt, author of The Man Who Knew Too Much: Alan Turing and the Invention of the Computer, explains in a Radiolab podcast feature about Turing’s life that Turing believed
that “the machines [thinking machines] were more likely to be victims. Victims of prejudice.
Victims of injustice” (“Turing Problem”). Turing, according to Leavitt, believed that thinking
machines, regardless of how they were embodied, were doomed to be victims of people who
believed machines could never think like humans because they could never feel the way humans
do. Although Turing argued for the theory of other minds and functionalism, meaning that
essentially if it seems intelligent then one should assume it is intelligent, he had many naysayers.
Geoffrey Jefferson, for example, in Leavitt’s words was saying to machines, “you don’t think
because I say you don’t think.” And England was saying to Turing, “you can’t be what you are
[gay] and we’re going to change you,” so Turing was sympathetic to the prejudice he believed
thinking machines would receive in the future (he assumed that thinking machines would be
made) (“Turing Problem”). For Turing, the marginalization that he went through on a very
physical level (namely a chemical castration as a “cure” for homosexuality) would also be felt by
future thinking machines.
By facilitating the audiences’ identification with the nonhuman as person-like, we arrive
at the nonhuman dilemma and are conflicted with feelings of moral responsibility. As Brooks
muses, “One of the great attractions of robots is that they can be our slaves… But what if our
robots we build have feelings? What if we start empathizing with them?” (155). Brooks
continues with a warning: “Although we will continue to make unemotional, unconscious and
un-empathetic robots … those that we make more intelligent, that we give emotions to, and that
we empathize with, will be a problem. We had better be careful just what we build, because we
might end up liking them and then we will be morally responsible for their well-being. Sort of
like children” (151). BSG echoes this warning as Caprica Six warns Baltar of the oncoming
attack in the first episode: “Humanity’s children are returning home.” Through all of this
confusion about what it means to be human and who/what counts as human, we should recall
Adama’s words after the destruction of the Resurrection ship: “It is not enough to survive. One
has to be worthy of surviving.” How then will we prove ourselves worthy of surviving?
THE POST/HUMAN LOVE AFFAIR
Christopher H. Ramey introduces his interest in the actual creation of AI and androids
with his 2005 article for Cognitive Science Society: “We must not forget our selves while making
others in our own image. That is, to deny a role for androids in our lives, through deeming them
(in principle) humanlike is to threaten the very manner in which we understand what it is to be a
human being for the sake of others” (n.p.). Though his argument is trenched in the philosophical
concept of other minds, his main argument, boils down to the idea, in my words, that “if it looks
human, seems human and in every way that matters appears to be human, then we should treat it
as human to maintain our own moral engagement with the rest of the world.” Another
philosopher who would be in this camp with Ramey is Anatol Rapoport, who writes about the battle over a “special essence” of humanity. Simons describes this
perspective: “…man has usually been seen as something more than a mechanistic artifact, as a
creature embracing spiritual or metaphysical elements in addition to the indubitably physical
frame” (1). For Rapoport, the “vitalists” are forever retreating from this idea by asking what
more the android can do, but for him, “All these questions could be dispensed with by a trivial
answer: If you make an automaton that would be like a human being in all respects, it would be a
human being” (45; emphasis added). Indeed, Rapoport dismisses the idea of a “special essence”
in a way that I believe should be used more often in discussing a posthuman future. He is
recalling the Russian philosopher Mikhail Vasilevich Lomonossof, “who once and for all
demolished the Aristotelian concept of ‘essence’ by declaring that substances are what they are
because of their properties, not because of some ‘essence’ that alone defines their identity”
(Rapoport 45). Some futurists take a point like Ramey and Rapoport’s very seriously and argue
that we are approaching a time when we will have to consider the place of robots in society, especially because, by their presence, they will transform our families.
This transformation is taking place in new ways, especially as posthuman families become the new threat to normative family relations. The posthuman is appearing in many ways, from the obvious robots and Roombas (the automated vacuum cleaner) to the more
mundane cell phones and social networking tools online. For theorists like Haraway, the
posthuman condition was closer than anyone suspected with the publication of The Cyborg
Manifesto in 1991. Twenty years later, the posthuman is becoming part of the everyday
family. Robots and androids often fit into the narrative of the posthuman family as a non-human
presence and their interaction triggers many types of “makeovers,” bad and good.
Robots have already been thought of as part of the family, at least insofar as futurists and others discuss robots as if they already are part of the family. Discussing future possibilities with androids, Moravec readily refers to these future androids as our children.
In his book Robot, Moravec describes these intelligent robots as “our progeny … Like biological
children of previous generations, they will embody humanity’s best chance for a long-term
future. It behooves us to give them every advantage and to bow out when we can no longer
contribute” (13). In a Time magazine report on Rodney Brooks’ 1996 robotics project, “Cog,”
reporter Julian Dibbell describes the way “Brooks gushes like a first-time parent about the things
his baby [Cog] can do” (56). He even concludes that Brooks is “as ambitious for his progeny as
any father is for his child” (56).
As robotic companions are moving into the mainstream market, some writers and
roboticists voice concern about the impacts such integration will have. While not thinking
about robots as our children, Van Oost and Reed express their apprehension about personal
relationships with robots: “Not only are there hopes that social robots might befriend and care for
people but also fears are given voice… The first fear is on deception. The main argument is that
robot companions deceive vulnerable people, such as mentally impaired elders and toddlers, by
faking human behaviors, emotions and relations” (van Oost and Reed 13). The idea is that when
“vulnerable people” put trust in artificial entities, their emotions cannot be reciprocated and thus are devalued. Their second concern is that of substitution, in other words, the fear that humans will spend more time with robots than with other humans. Van Oost and Reed explain that this fear
comes from the “[belief] that robots will replace humans in care situations, and thus deprive
care-receivers from human contact, empathy, caring and the like” (18). Isolation from social
behavior through technological replacement is also discussed in Michael Cobb’s Single:
Arguments for the Uncoupled. Looking at the cover of Cobb’s book, we see the image of a man in bed with what appears to be a crash-test dummy or a mannequin, but the cover is missing
something. Cobb’s book is not about couples with non-humans, but the anxiety of anti-social
behavior illustrated in his book extends to human/non-human relationships.
Uncanny Companionship
While Van Oost and Reed come from a perspective of robotics development and Cobb is
writing specifically about single humans, I extend their discussion to consider an episode of the
reality show My Strange Addiction (2011) from TLC. If, as Cobb’s introduction suggests, one
can read the “figurative language of singleness as it traverses the literary objects, film,
television… [etc.]” (33) as a way to understand the effects of the couple, a reality-television (or
realiTV) show like My Strange Addiction, even its title alone, suggests that the people depicted
are abnormal and in need of “saving” from their addiction – and in this case an addiction to a
strange relationship. In this particular episode (“Married to a Doll/Picking My Scabs”), the
audience is presented with two “addicts”: one struggles with her compulsion to pick scabs and
the other is a man in his mid-30s, Davecat, who is “married” to his RealDoll. Both cases present
the audience with words of “professional” wisdom in the form of psychologists, counselors and
doctors, who all proclaim these individuals to be social deviants and, more importantly, at risk
for anti-social behavior. To add to the horrifyingly “abnormal” behavior of the people portrayed
in the show, the opening begins with the disclaimer: “This program depicts addictive behaviors
that are dangerous and risky in nature. Viewers should not attempt." Thus, Davecat is framed by the "expert" of the show as engaging in a "risky" and "dangerous" relationship that should not be attempted. According to the resident psychologist, Dr. Margaret Jordan, Davecat is "suffering from a schizoid personality disorder."
Davecat's plastic partner is named Sidore Kuroneko, a synthetic gynoid designed by her creators as a sex object but in this case dressed up and posed to act as a real woman and life
partner. Scrutinized with the camera in his home (and indeed we are invited to spy on their abnormal relationship), Davecat's interactions with Sidore take on an ominous nature as the soundtrack adds overtones of dramatic piano music, the musical theme used in each episode to underscore the strangeness of the person's behavior. He explains to the camera how his relationship with Sidore works: "The whole interaction on a day-to-day basis, along the lines of getting her dressed, you know, brushing her hair, things of that nature, it brings us closer together as a couple" (Figure 10).

Figure 10: Davecat and Sidore cuddle on the couch. (Screen shot, My Strange Addiction, 2011.)
While it is clear that Davecat is very aware of how his interactions model a
human/human relationship, it is also spelled out for the viewers as the scenes of Davecat’s
descriptions often cut away to a psychiatrist who explains in medical (yet pedestrian) terms how
and why this relationship is so abnormal. Dr. John Zajecka (filmed in his office, complete with
the “Dr.” title as a subtitle) speaks directly to the camera as he describes that a “fantasy
relationship” or daydream is a common, “often soothing” behavior, but one that can become
problematic when it becomes the “predominant way that they live their life.” Indeed, Davecat is
cast for the viewers as lonely. Dr. Zajecka explains: “When people are lonely they often look
toward alternative means to find relationships – that could be through a pet, sometimes just going
online, but there are really no substitutes for true human companionship.” Of course the doctor
is referring to normative human/human relationships (likely hetero), but why does Davecat's relationship with Sidore need to be cast as a substitute? Why, then, the automatic assumption that Davecat is alone? Cobb muses on Laura Kipnis's words regarding love in Against Love:
“Saying ‘no’ to love isn’t just heresy it’s tragedy: for our sort the failure to achieve what is most
essentially human. And not just tragic, but abnormal” (18).
In Davecat’s case, however, he isn’t really saying ‘no’ to love, just saying ‘no’ to the
tradition of human/human love. Indeed, contrary to Dr. Zajecka's diagnosis (aided by the production team's editing and post-production), the audience could see a happy couple. While it
is true that this relationship does not meet one of the major criteria for “coupledom” (i.e., there is
only one person involved), one can read Davecat and Sidore as behaving as a “normal” domestic
couple would: he sleeps with her, goes shopping for her, tells her about his day, he even shows
her pictures of his trips, because obviously (as Davecat readily points out) she can’t go with him.
Recalling Eve Sedgwick's discussion regarding "pairings" and the essential "two sides of the conversation," Cobb's introduction makes me wonder if a relationship with a silent non-human other is, in effect, the epitome of a conversation of one. And can that conversation of
one be a sustaining and fulfilling relationship? One thing that is (conveniently) avoided by such
a relationship is that there is no “fatality of the couple” for only one partner is necessary for the
relationship to endure. Even Davecat tells viewers “synthetic love lasts forever.” But if it is
true, as Patell muses, that “Low tech is the mark of the true individualist” (27), then our growing
dependence on technology is robbing us of that individuality, regardless of how much we are
promised that such things are made “just for us.”
Davecat and Sidore’s relationship is easily relegated to the sphere of the private. They
cannot reproduce. Their lives together could easily fall out of the eye of the public without
having been on My Strange Addiction. However, there are other human to non-human
relationships that threaten the very intimate sphere of the home. Human-Robot Interaction is not just on a one-on-one level. There are predictions that robots will be in homes to assist with care of the elderly and the raising of children. And with that prediction, we must remember that "family,"
in the words of Roddey Reid, “has never been a private matter” (193). For Reid, “the normative
family household and the bodies of its members have never constituted either a sanctuary of
values and relations at a safe remove from the outside of a depth of feeling and sentiment in
opposition to the public sphere and the bodies of social others” (193). In this context, the
synthetic other is part of the non-stop public performance, the social norms and expectations
projected onto it, and its human companions.
Davecat’s relationship with his doll, or RealDoll, can be explored through the lens of
queer theory. While queer in that they invoke the uncanny and allow for transbiological companionship, Davecat and Sidore are nonqueer by their very gendered construction. One
could argue that, like Haraway's image of the cyborg, the RealDoll provides an example of a
melding between natural and artificial and thus offers a post-gendered world that Chilcoat
describes as a world “in which the constraints of gender, like those of sex, are loosened, and
other identities are freed to emerge" (158). However appealing a post-gendered world may seem (and one which futurists like Kurzweil probably have in mind), this first step into uncanny relationships is already fraught with gendered boundaries by the very construction of such entities.
These gendered boundaries are not accidental. In fact, according to Chilcoat, this return to biologically based divides is to be expected: "though science and information technologies
have advanced the notions that humans can move beyond their biology in deciding what to make
of themselves, it is equally true that these same technologies inspire an anxious need to retain
biology (or at least an anachronistic notion of biology) as the rationale for heteronormative
relations” (166). In other words, much like the threat of the posthuman future that suggests a
loss of humanism, the gendered robot simply reinforces the gendered boundaries that Haraway’s
cyborg promises to erase.
This reliance on the gender dyad in robotics is not limited to contemporary robotics
development. In her interrogation of Japanese popular culture featuring cyborgs and androids,
Sharalyn Orbaugh finds that “contemporary Japanese cyborg narratives are still very much
concerned with the binary oppositions of sex and gender, and the sexuality presumed to
accompany them” (448). Anne Balsamo confirms a similar trend in American culture: “As is
often the case when seemingly stable boundaries are displaced by technological innovation
(human/artificial, life/death, nature/culture), other boundaries are more vigilantly guarded.
Indeed, the gendered boundary between male and female is one border that remains heavily
guarded despite new technologized ways to rewrite the physical body in the flesh” (9).
In rewriting the boundaries and exploring new territories of relationships, we face a new
future of pleasure with non-humans. For some, this relationship already exists. The 2007
documentary Guys and Dolls, or Love Me, Love My Doll, directed by Nick Holt, interviews other
real-world iDollators from around the world. One man, Everard, defends his relationship with his dolls: "There are worse things in life than living with dolls… Like living alone. What would I do if I didn't have my dolls?" Even testimonials from actual RealDoll buyers echo Everard's
words. “John” from Massachusetts writes on the RealDoll testimonial page: “Jenny's [his
RealDoll] presence here has had a dramatically positive effect on me psychologically and
emotionally. A far more positive effect than I had ever expected” (n.p.). Claudia Springer
suggests that this pleasure could be considered a new beginning, “one in which technology will
become part of an egalitarian social configuration and inequalities will be rejected as
anachronisms from a bygone age that was merely human” (161). In the meantime, hopefully we
can continue to enjoy being “merely human.”
The Post/Human Family Transformation
Part of being "merely human" is living a life within a family unit. If, as Reid claims, the family is a public performance, then traditional family construction will be altered by androids.
Returning to the Terminator films for an introduction to how the android will alter the family, in Terminator 2: Judgment Day, the android has been successfully anthropomorphized and integrated into the family through its detachment from Skynet and continued HRI with young John Connor. Audience sympathy is built by watching John Connor bond with the
machine over the course of the film. By the end of the film the Terminator has essentially
become a father for the otherwise fatherless young John.
Director James Cameron consciously wrote the father/son imagery into the script. For
example, the script describes one of the first moments of bonding between John and the
Terminator: “Terminator, with John in front of him on the Harley, roars down the empty street.
John cranes his neck around to get a look at the person/thing he is riding with. The image is
strangely reminiscent of father/son, out for an evening ride." Just in case the audience hasn't picked up on the subtle imagery of the scene, John's mother, Sarah Connor, delivers a voiceover while watching John and the Terminator hang out together:
Watching John with the machine, it was suddenly so clear. The Terminator
would never stop, it would never leave him... it would always be there. And it
would never hurt him, never shout at him or get drunk and hit him, or say it
couldn't spend time with him because it was too busy. And it would die to protect
him. Of all the would-be fathers who came and went over the years, this thing,
this machine, was the only one who measured up. In an insane world, it was the
sanest choice.
Not simply a surrogate father figure, the Terminator is a better father than Sarah could have imagined. Having risen out of the uncanny valley of the first film, the Terminator now brings tears to John's eyes when he realizes his friend is going to be melted down.
For another example of how androids will assimilate into the family, I turn to a zombie film and draw parallels to the reality television genre of the "family in crisis."
Although usually without zombies, realiTV has a sub-genre that focuses specifically on the
remaking of the family in crisis, for as Ron Becker explains, “Fox and ABC ostensibly brought
‘help’ to the nation’s parents in the guise of two nearly identical and thoroughly formulaic reality
series in which no-nonsense British nannies help ‘transform’ a different ‘family in crisis’ every
week” in the form of shows like Supernanny and Nanny 911 (175). Even without a specific
“nanny” to steal the show, Fido’s zombie is particularly appropriate for the posthuman
intervention into the family as we shall see. In order to better understand the new dimensions of
this posthuman involvement in families, fiction is one place to start. Indeed, while many science
fiction films feature the horrifying possibilities of the destructive forces of posthuman bodies,
such as robots and zombies, the 2006 film Fido presents a story of positive posthuman
intervention.
Besides being so very uncanny, as per Mori’s charting of the uncanny, zombies also fit
into the category of a posthuman body. What is particularly appropriate about zombies in the
case of the posthuman is that they literally are “post” “human” – they were once fully alive
humans, but in the story of this film, were reanimated after death due to radiation from space.
This is a common trope of zombie-related science fiction films and television shows. How the zombies come to be varies from film to film, but the films often have similar "creation stories." Unknown radiation is often blamed, as in the genre-defining Night of the Living Dead (1968) and Dawn of the Dead (1979) by George Romero. Other times the zombies come to be through the tampering of scientists, as seen in the Resident Evil franchise, inspired by the 1996 video game and followed by films from 2002 to the most recent release in 2012.
in fiction, the stories generally follow a small group of survivors as they try to stay alive against
all odds. The zombies are almost always portrayed as “mindless beasts,” either a single zombie
who jumps out of nowhere to startle the protagonists, or as a massive horde that claws and bites.
Zombies are also applicable here to the imagery of androids in that, even though they may not
have AI to guide them within the world, their "lack of smarts" makes them just as much a threat to humans as an android with AI. Just as an sAI threatens to beat humans in all intellectual ways, a zombie aims to destroy our intellect by literally consuming brains.
While many of the zombie films have maintained the tropes of the horror genre, which include portraying zombies as always evil and unredeemable, some filmmakers branch into horror-comedy. Shaun of the Dead (2004), while not the first horror-comedy with zombies, introduced
a posthuman relationship that could have easily inspired Fido. By the end of Shaun of the Dead,
Shaun (Simon Pegg) and his friends have successfully fought off the hordes of zombies and life
has basically returned to normal, except for the fact that Shaun’s best friend, Ed (Nick Frost) has
turned into a zombie. Rather than kill Ed, as the traditional zombie film would lead us to expect, Shaun keeps Ed tied up in his backyard, where he feeds and cares for him like he would a dog.
Fido picks up where this posthuman story of companionship left off. Even though it’s a
different fictional world, the opening of Fido establishes this film as a non-traditional zombie
horror and, simultaneously, a family comedy about a makeover. Indeed, with a cliché classroom
set in a grade school of an idealistic 1950s town, Fido establishes the dark humor in the first few
minutes with an informative/news video that plays in full screen as if the audience is viewing a
news reel from World War II. This time however, the war is described as the Zombie Wars and
the company responsible for containing the zombie threat, ZomCon: “Thanks to ZomCon, we
can all become productive members of society, even after we die!” With the combination of
nostalgic imagery, 1950-style evocative dark humor, and the Lassie-like companionship of a
zombie, we see a unique family intervention unravel on screen. The average family makeover,
as we are familiar with today from reality makeover shows such as Supernanny, consists of four
major elements: the observation, the intervention, the “work,” and finally the reveal (or happy
ending). Each of these phases is placed carefully within the context of the "average family"
household, but because Fido is a story of a zombie, a potentially terrifying posthuman body,
some ground rules of genre must be established before the makeover can begin.50 To maintain the film's functionality within the makeover tradition, an assortment of appropriate normative codes is established for the audience in order to place this ridiculous story within a context that can be related to the average family.
Much like Derek Kompare's reading of Ozzy Osbourne's family in The Osbournes (2002-2005), a reality show dedicated to following and "documenting" their everyday lives, Fido grounds its aesthetic and character codes in a normative, yet off-kilter "reality." These
normative codes are crucial for understanding the larger meaning in Fido, for as Kompare
explains, “In tracing these familiar codes in these otherwise unique texts, we can understand how
the categories of the ‘ordinary’ and the ‘extraordinary’ are deployed in the pursuit of textual
coherence and cultural significance" (101-102). When creating a story about family, the most commonly referenced ideal images are those of the 1950s home, making it perfect for eliciting a sympathetic response from the audience.
Stephanie Coontz confirms this in her book The Way We Never Were: “Our most powerful
visions of traditional families derive from images that are still delivered to our homes in
countless reruns of 1950s television sit-coms” (23) and indeed, “Such [traditional, idealized]
visions of past family life exert a powerful emotional pull on most Americans, and with good
reason given the fragility of many modern commitments" (8). With this "emotional pull" and nostalgia in mind, here enters Fido and the Robinson family with all the trappings of the perfect town. Aside from the informational video in the opening, the film utilizes traditional imagery and icons such as riding bicycles, milk and paper delivery men (only this time they are domestic zombies), apple pies, women in floral dresses, and men in work suits.

50 These "ground rules" that work to normalize this bizarre film both as a family makeover and as a cult horror film do not work for all viewers. Despite the film's several awards, some viewers were quite turned off by it and gave negative reviews. For example, Mark Rahner for the Seattle Times wrote, "It's a one-gag movie that starts off clever and cute, but wears thin after half an hour." But perhaps it is that "one gag" that keeps the story cohesive.
With the ambiance of Leave It to Beaver appropriately established, the makeover can
begin. The story of Fido the zombie’s arrival and ultimate assimilation into the Robinson family
speaks volumes about the potential impact of other non-humans on the American family. Rather than wreaking havoc and tearing the bonds of the family apart (as in most zombie films and other science fiction about the non-human or posthuman Other), Fido illustrates how non-humans can actually offer a "positive" family makeover. Ultimately, Fido breaks the desire to match the ideal but
instead builds a new, happy family within the context of an alternative family framework (that is,
a family that does not fit the normative definition of the nuclear family – mother, father, and
child).
The first phase of the makeover, the observation, is conducted by us as audience members as the film introduces the characters and establishes that there is indeed a problem that needs to be solved. Although in realiTV family makeovers like Nanny 911 or Supernanny the observation is conducted by the authority who intervenes, Fido is in no way an "authority" but rather an oblivious instigator for the intervention. In the
context of the posthuman family, this form of intervention is appropriate because many of the
forces that intercede on the family are non-human and neutral to the family dynamics. Despite these differences, the audience has a similar understanding of the situation; it may be entirely fictional, but the feeling is the same: there's "trouble in paradise." Just as the audience sits quietly beyond the fourth wall, the Nanny takes account of the things that put the family in "desperate need of saving."
In makeover shows like Nanny 911 the intervention comes in the form of a deliberate
intrusion that places the family, especially the parents, on the defensive, but in the case of Fido,
the intervention for the Robinson family is a bit more subtle. Because Fido (Billy Connolly) is a
zombie and unable to verbally articulate his desires or even tell the family that there is work to
be done in order to make the family functional, the intervention occurs in a different way –
mostly at the hands of the matriarch, Helen Robinson (Carrie-Anne Moss). Before Fido enters
the film at all, Helen stages what appears at first to be a classic intervention scene: complete with
fancy dinner, red dress, and "Bill's favorite," a three-olive martini, all ready for his return from
work. The audience is introduced to both patriarch Bill Robinson (Dylan Baker) and Fido at the
same time, placing them at odds for the role of father figure in the household, but with the first
interaction between Timmy (Kesun Loder) and his father, it is clear that Bill is not the father of
choice. Although Timmy, the son of the family, is clearly hoping for his father's affection, the audience is privy to a different feeling as Bill asks his son how he's doing with an obligatory air and then brushes Timmy off (literally, with a gesture of his hand) before even hearing the response. As the camera cuts back to Timmy to emphasize his feelings of dejection, the audience feels an intervention is necessary: Cue the zombie!
As with every intervention, it is difficult for a family to accept that it needs help and also
that the help that is offered is what is needed. For example, in episode eight of season one,
Nanny 911, Deborah, mother of the Fink family, tells the camera that “My husband made the call
to Nanny Deb. I did not… I personally do not believe she’s doing right by me: coming in here
and acting this way [telling me how to treat my children]” (“The Fink Family”). Just as Deborah
resists the help of Nanny Deb, Bill is not at all welcoming to the presence of Fido: when Fido
lumbers in with the roast, Bill's face changes from a thankful smile to a horrified grimace as
Helen says, “Isn’t it wonderful? Now we’re not the only ones on the street without one.” Helen
is excited about the arrival of Fido, mostly to show their normalcy as a family, but also to help
fill out the family.
The real intervention in Fido begins as Bill literally walks out the door. The audience
sees Bill, golf clubs over his shoulder, tip-toeing toward the front door as Timmy watches
television. Caught, Timmy asks if they are going to practice today, but Bill carefully dodges the
question with, “Oh was that today? ...But, I’ve already got the driving range booked.” Then he
ducks out the door. Prompted by his mother not to play baseball by himself (because, as she tells
him, “It makes you look lonely, dear”) Timmy takes the zombie to the park. While there, the
real intervention begins, as Timmy is bullied by some of the ZomCon Scouts from school (a lot
like Boy Scouts, only more like cronies for ZomCon). The two boys who push Timmy around
get a terrible fright as Timmy's zombie chases them away. As the mean boys are running and crying for their mothers, Timmy thanks his zombie and decides to name him "Fido," and with that, the bond between Timmy and his zombie is sealed (Figure 11).

Figure 11: Timmy and Fido enjoy a day at the park. (Screen shot, Fido, 2007.)
The first major theme of the work of the makeover in the case of the posthuman family is
when the human begins to see the non/posthuman as more human than Other. In Fido's case this occurs early on, soon after the zombie is dubbed Fido. While bonding for the first time, Timmy and Fido get into a bit of trouble, but the conflict brings them closer. Apparently, Fido's
inhibitor collar is not perfect and it occasionally turns off, forcing Fido to revert to his natural
mindless-brain-eating-zombie state. Unfortunately, Fido eats one of the neighbors, Mrs.
Henderson (as is only appropriate in a zombie movie), but Timmy believes it wasn’t Fido’s fault
– “He couldn’t stop himself!” As Timmy tries to cover up the murder of Mrs. Henderson, he has
to strip the uniform off Fido and wash him down with a hose. Before the audience even sees the
wound, Timmy points it out, speaking to himself but also for the audience: “Heart attack, eh?
My Grandpa had a heart attack…" Even though his zombie-ness is obvious and unavoidable in this moment (he has sickly grey skin and visibly rotting flesh), his "Y"-shaped and stapled
incision on his chest becomes a reminder for both Timmy and the audience that Fido was indeed
once human. This very physical part of his body is just one of the reminders that Fido is a bit
more human than first assumed by his zombie status. Of course, he still is a "flesh-eating maniac," as Bill calls him at one point, but Fido exhibits other human behavior in the form of, in Mr. Theopolis' (Tim Blake Nelson) words, "old habits [that die] hard." Fido smokes, and even seems to enjoy the cigarette once it is put in his mouth. He also gazes at Helen with adoring eyes that do not go unnoticed. Just these few human tendencies help to establish for both the
viewers and the family members that Fido is more than simply a zombie, a crucial step in the
makeover process in the posthuman family.
The “work” of a makeover in the posthuman family has a tradition of two extra phases:
the “risk” and the “save” which appear as the plot of Fido develops. This classic plot
development was established by Isaac Asimov in his well-known robot novels. Asimov’s 1939
short story "Robbie" is the perfect example of an early posthuman family transformation story. Robbie is a robot purchased by her father as a playmate for a child named Gloria, and the story closely parallels Timmy and Fido's relationship, with "the risk" and "the save." Robbie the robot and Fido have a lot in common when it comes to their non/posthuman status. Perkowitz writes that how the mother reacts to Robbie illustrates many of the expected reactions to a nonhuman intervention in the family, raising the following questions: "how far humans would trust artificial
beings to make sensitive judgments, and is it really good for Gloria to play with a machine rather
than with other children?” (33). After all, as Gloria’s mother protests, “It has no soul, and no one
knows what it may be thinking!” Even though the gender roles are swapped in this case, Bill has
the same reaction to Fido, cringing at the sight of him and making it perfectly clear that his
family will never become zombies. In fact, he has worked hard and saved his money so that they can all have "head coffins," single box-coffins for the head alone that guarantee one will not be reanimated, further underscoring that Bill doesn't tolerate posthumans.
Once the tension between a family member and the post/nonhuman is established and the makeover initiated and undertaken, the two essential plot elements, the risk and the save, are ready to proceed. In "Robbie," just as in Fido, the non/posthuman is returned to the "factory" from which it came. Devastated by the loss of a genuine friend (for by this time the
relationship between the human and non/posthuman has been tested and proven to be strong), the
children in both stories take a risk and break the rules to be close to their friend. For Gloria and
Robbie the risk and the save happen almost simultaneously: once Gloria sees Robbie on the other
side of the factory floor, she runs out to Robbie and is nearly run over by heavy machinery – the
"risk." In just that same moment Robbie throws himself under the machine to save her, but at the cost of his robot body. Robbie having proved his worth, Gloria's mother consents to allow Gloria to be reunited with her reassembled friend.
In the case of Fido, the risk and save happen in slightly different ways but are still there
to maintain the framework of the posthuman bonding and makeover. Timmy and Fido have built
a mutual friendship over the course of the film, but the real test of Fido's human status, and thus one of the "saves," comes after the ZomCon Boy Scouts suspect that Fido and Timmy were
responsible for Mrs. Henderson’s death. Having tied Timmy to a tree to stage a rescue with
themselves as heroes, the boys turn off Fido's “domestication collar” (the clever invention that
“contains the zombie’s desire for human flesh… making the zombie as gentle as a household
pet") and try to turn Fido on Timmy. Rather than attacking and eating Timmy, Fido staggers to the boy who tied Timmy up. After feasting on the mean little rascal (who clearly deserved what was coming – it is, after all, a zombie film), Fido makes his approach toward Timmy, and as the terrifying music plays and the camera tilts, making Fido all the more ominous, he then tries to untie Timmy from the tree. With the crisis averted, Timmy tells Fido to "Go get help, boy!" firmly establishing Fido as the new Lassie figure, no longer the threat.
With Fido exhibiting equal parts "human-ness" and "man's best friend" qualities, the stage is appropriately set for the save from Timmy. After Fido is shipped to ZomCon for murdering Mrs. Henderson, it is Timmy's turn to take a risk and rescue his friend. This is when an interesting twist occurs that further maintains the "cult" genre of the film. Throughout, the audience has understood Bill as the anti-father who is incapable of love for his wife or child, but in the end he is given a chance, however futile, to prove himself. Aware that Timmy has snuck into the ZomCon
compound to rescue Fido, Bill and Helen drive toward the compound and Bill finally voices for
the audience what we have known to be true: “I suppose you think he got so attached to the
damn thing [Fido] because I’m a bad father…” As this realization sinks in, it becomes clear that
Bill is preparing to do whatever he can to save his boy. He jumps out of the car, shotgun in hand,
and insists on going in after his son despite Helen’s pleas for him to stay behind. To further seal
his fate, Bill reminds Helen that he wants a head coffin. Having found Fido and faced his fears,
Bill appears ready to sacrifice himself for the sake of his family.
With Timmy having risked everything for his friend, the audience knows that Fido needs to return the favor in order to finalize the makeover and put the family back together. In order to emphasize
the danger of the threat, the difference between a “normal” all-human family and the chaos of the
undead, Mr. Bottoms, head of ZomCon security, pushes Timmy into the “Wild Zone.” This
Wild Zone has been established since the beginning of the film as the place where the zombies
have taken over, where the posthumans have won, and which is clearly the danger. With a dotted line, ZomCon's informational video marks the cities and towns as safe zones, with the Wild Zone clearly marked with a skull and crossbones. Now that the audience clearly sees the Wild Zone as an ominous threat to Timmy, Mr. Bottoms further clarifies: "out there is chaos. In here is safety!" The delineation between the dangerous posthuman and the safe world in this case is as clear as the metal fence surrounding the perfect nostalgic time capsule. With the posthuman clearly established as not only a physical but also a geographic threat to which one can be sent, Fido and Timmy's father appear to save the day. This save is
particularly poignant when Timmy’s father is shot and killed in the struggle – the symbolic
father figure who is clearly unable to hold the father role effectively is replaced by the
posthuman Fido, even with his domestication collar off. The save is complete and the reveal is
ready.
For Brenda Weber, “One of the makeover’s more critical premises is that it does not
reconstruct, it reveals. …the makeover does not create selfhood but rather it locates and salvages
that which is already present, but weak" (7). Over the course of the film the audience has watched and participated in both Timmy's and his mother's makeovers, triggered by the arrival of the posthuman zombie. Having replaced an insufficient father with a more caring father figure and found confidence in his belief that zombies are also human, as opposed to simply a threat to be eliminated as Mr. Bottoms always argued, Timmy finds a comfortable place in a family. Fido closes in the makeover tradition, similar to how Becker explains closure in the
Nanny shows: “With the end of [an episode like Supernanny or Nanny 911] comes a positive
resolution as scenes of family chaos are replaced with images of family bliss—happy, well-adjusted children and their parents playing in the backyard, going on a picnic, or reading bedtime
stories" (184). In fact, Fido ends with an unabashedly clichéd backyard scene of Timmy playing baseball with Fido, who wears a Hawaiian shirt and is actually able to catch the ball.
Once Bill is out of the picture and the newly formed family is basking in the sun in the backyard,
Fido is established as the most fitting patriarch for the family as he and Helen share a loving
smile. Then, to show that the posthuman can indeed fit well in the human family, Fido leans over
the pram and coos gently at the stirring baby, tenderly caressing the infant’s cheek. With the end
shot, the posthuman family photo is complete with Fido grinning widely, Hawaiian shirt on,
cigarette in hand and baby nearby.
For Becker “America’s families are always having to be made over – made to conform to
an idea of family life that is always shaped by political forces and never inclusive of the diversity
of human experience" (176), but Fido challenges that assumption, proving that the makeover, in particular a posthuman makeover, can have a positive outcome. If, as Reid explains, "Family, like gender, doesn't simply exist; it must be a non-stop public performance" (193), then family
performance can include alternative individuals, including the post- and non-human. By
completing the makeover cycle of observation, intervention, work and then reveal, a fictional
story of the non/posthuman family makeover can show a successful family renovation. Fully
aware of the negativity toward disruptions in the human family, Fido voices these fears through
Mr. Bottoms with phrases like “When people get attached to their zombies it only spells trouble.
Trouble for me, trouble for you. Trouble for Mrs. Henderson…” and “Because you made friends
with a zombie, a lot of nice people in this neighborhood got killed.” But despite these warnings,
the more adjusted and happy family reveals itself in the end, having embraced their posthuman
family member.
If we can accept that humans anthropomorphize entities that appear human without calling their construction into question, it would also be wise to theorize whether and how humans may interact with entities with similar, or at least organic, physical structure. So far, scientists have not ventured far enough into genetics, cloning, or biological creation to provide many real examples from which to draw for discussion. But fiction offers several examples of biological and
bio/synthetic combinations. Perhaps we should also be taking a cue from van Oost and Reed
who remind us that not only will very humanlike robots be invented, they will also be an intricate
part of our social world: “Social robots as companions will exist, and gain meaning, in such
dynamic networks, and hence it is important that we understand them as such” (van Oost and
Reed 16). We are far from answers but writers like Shaw-Garlock see hope in the inquiry at
hand: “What seems clear is that in the context of robo-nannies, therapeutic seals, military bots,
and domestic and surgical robotics there is need for methodological and theoretical perspectives that enable us to think through techno-cultural hybrid configurations of people and machines. In short, what is needed is a 'theory of machines'" (6).
To conclude, I return to science fiction and one of its most influential texts, Star Trek. Captain Picard reminds viewers at the end of "The Measure of a Man" that "Starfleet was founded to seek out new life [indicating the character Data]. Well, there he sits." His conclusion
returns us to the fact that we, as inventors and artists of this unique species of hominid, are actively creating new life and new selves. Although our notions of a "special self" are
clearly shaken by the coming artificial entities that will potentially think, act, and appear like
humans in most, if not every, way, we do need to consider a framework of personhood that
incorporates our coming companions.
CHAPTER FIVE: CONCLUSION
“The product of the human brain has escaped the control of human
hands. This is the comedy of science."
-- Karel Čapek (1923).
It all started with Teddy Ruxpin, a teddy bear that seemed to come to life to tell me
stories and wanted to be my friend. A child of the ‘80s, I was doing my own exploration of AI
and robotics. Sherry Turkle would have been proud.
“Hi I’m Teddy Ruxpin and I want to be your friend,” the stuffed bear said to me, his eyes
blinking and jaw moving with the sound of spinning servos. I easily ignored the mechanical
whir and jerky motions of my newfound friend. "Yes, Teddy, I can be your friend!" I happily exclaimed, taking off the layer of fur, uncovering the
mechanics underneath. Even knowing that Teddy wasn’t a “really real bear,” my curiosity about
how he worked was not diminished.
Not long after Teddy, I was introduced to another figure that captured my imagination –
Data from Star Trek: The Next Generation. I remember being fascinated by how he was treated
as human but not human. He was the Tin Man who wanted a heart. His non-human status raised questions about whether or not he was alive. An entire episode was even dedicated to the question of whether Data should be considered a person or property. But his non-humanness didn't prevent him from having very human-like relationships with the members of the starship Enterprise. Data was somehow different from the other alien entities aboard the starship. He was human but not quite; his presence was a constant reminder to be skeptical of how
“life” and “person” are defined.
Over twenty years later… Teddy’s batteries died long ago, never to be replaced. Star
Trek: The Next Generation on Blu-Ray is an honored fixture on my movie shelves. Even though
technology has advanced to the point where my old Teddy Ruxpin seems archaic and Star Trek has become a household name with renewed attention from the recent reboots, I seek a link
between them – between the robot that I loved like a friend and the fiction of the Tin Man
searching for his heart.51 The growing anxiety surrounding the potential for post/human bodies
begs for further discussion and I hope that my exploration here will begin a dialogue that
includes science fiction.
Science fiction and science fact are rapidly becoming indistinguishable as technological
advancements are inspired by fiction and then invented by entrepreneurs. This leaves the
everyman at a troubling crossroads, and so I choose the body of the android as my "evocative
object.” The android, by definition, literally embodies two major components that determine our
human-ness – the body and the mind. Ahead of us lies a future with new creations and new
technologies that may seem human in most, if not every, way. Not only does the android look
human, it may also be constructed with biological materials, raising questions about new species,
new entities, outside our established hierarchy of biological systems. Beyond the surface, based
on the goal of robotics design outlined by MacDorman and Ishiguro, the android will house a human-like or beyond-human-like intellect so that it may better interact with humans.
This inclusion of sAI further challenges our understanding of knowledge, consciousness, and the
self.
51 This is a curious question that continues to nag at me: Why the heart? In many "tin man"-type fictions, from The Wizard of Oz to the story of Boq in the musical Wicked, there always needs to be some human organ, a biological part of a human, that somehow imbues the tin man with "life." In some cases it's the heart and in others it's the brain. It seems that as our understanding of where our thinking-part-of-life comes from changes, the fiction has also shifted to use the brain as the biological manifestation of giving something life, rather than the heart, which used to represent where the passions and life force came from. And yet, with androids, the thinking part appears in the narrative as AI. Has strong AI become the modern equivalent of "having a heart"?
As we have seen, the target of robotics and AI developers has been to meet and/or exceed
the expectations from science fiction: to have strong artificial intelligence in man-made synthetic
entities that resemble humans. Without asking why, engineers and programmers continue to
work on the how and humanity finds itself ever closer to having strong AI in human-like robotic
bodies. Chapter Two explored the overtones in popular culture related to current incarnations of
AI (Deep Blue and Watson) and this showed that there is a growing unease over the supposed
threats of AI. Fully embodied articulation and cognitive independence (in other words, the
ability for a program, embodied in a robotic chassis, to move and function in the environment
without direction from a human user) is not far off and, with growing concerns (at least in the
public) about what such a future will look like, “tests” and measures for AI are being considered.
Of course such tests and thought experiments have been around for centuries, but they are
becoming more directed toward defining AI. The concept of “intelligence” is very much
disputed in philosophy and in popular culture. Considering that dispute gets us closer to
understanding AI. Glymour, Ford and Hayes tell us that even if we accept that the being on the other side of the game is able to imitate intelligence, imitation is not the same as being intelligent, and they invoke Searle's "Chinese Room" argument to make the point (14). In essence, just because something is able to produce answers that seem smart does not mean that the entity itself is smart. Much like Turing's test, a question would be posed to someone/something anonymous and an answer would be produced in the form of an "intelligent" and comprehensible response, but still there would be no way to know if the mind on the other side understood the response produced. We
already face such beings today: think of the search engine in a web browser, a program like Google. You enter a query, hit "search," and an answer appears; in fact, a number of answers appear. There is no way to know if Google is intelligent, but we assume it is not because of the
interface used. Russell and Norvig suggest that the task of making "consciousness" in AI is something that we are not equipped to take on, "nor one whose success we would be able to determine" (1033).
While there are several "tests," like the Turing Test, to help identify and explain emerging artificial intelligences, there are still others who ask questions that aren't measurable by tests. For example, to avoid the tricky nature of the Turing Test or other functionalist-based measurements of AI, Ford and Hayes explain that "most contemporary AI researchers explicitly reject the goal of the Turing test. Instead they are concerned with exploring the computational
machinery of intelligence itself” (28). Remember that for thinkers like Rapoport, who examine
the vitalist perspective of human thought (the idea that humans have something vital that allows
us to think, that machines or computers could simply never have), the concerns of the vitalists
would always be rebutted with a trivial answer: “if you made an automaton that would be like a
human being in all respects, it would be a human being.” Just as with AI, Ford and Hayes
express Alan Turing’s goal of embracing a functionalist philosophy: “not to describe the
difference between thinking people and unthinking machines but to remove it” (35).
In that removal of boundaries, questions about what it means to be a person as opposed to "merely human" still arise. But in "being human" we also "see human" everywhere. Chapter Three took that discussion of AI and embodied it. While
anthropomorphism is a powerful force, it is also a double-edged sword. On the one hand,
robotics developers believe that anthropomorphism can assist in human-robot interaction, leading to growing personal relationships with robots. On the other hand, if a robot looks too human, it
approaches the potential uncanny divide, engendering feelings of unease and distrust. Robot
marketers are well aware of the power of anthropomorphism and work to make their creations
more likeable and, therefore, more profitable. Something that marketers will not be able to avoid is the fact that the android is not human, but is a product packaged to be human-like in every other way. Part of our greatest fear of the android, one that has yet to be explored to its fullest, is the fact that the android is more durable, longer lasting, and smarter than humans. Human bodies are fragile and our minds are subject to the aging process. It seems we are always prisoners of our bodies, but the android is free to live on as long as its processors and parts are upgraded.
A robot doesn't need to be human-like to endear us to its humanness. Stoate reminds
us that “‘People’ may be human, but they do not need to be human to be people. People arise in
an ecology of co-constitution; their specificity emerges only from their affecting (and effecting)
other such people. People do not pre-exist their relations with other people, rather they are
defined by them" (209-210). In other words, the definition of "person," or one who is eligible for rights and protections under the law, is always changing. Persons do not "pre-exist" but are rather constructed through social relationships and shared understandings of the entities that
surround us. Not just those that are disembodied like an AI, but those that will be embodied and
in our homes.
But these appearances are not necessarily sites of interrogation of the heteronormative standard in relationships. If, at first, the queer non/human relationship with humans appears to trouble coupledom, as explored in Chapter Four, the fictional narratives about queer non/human relationships reify the heteronormative prescription for life – indeed with potentially violent consequences. It might at first appear that a relationship that ruptures a border like biology would
be one of the queerest relationships. However, upon examining two media cases of this relationship that crosses the uncanny divide, it becomes clear that such uncanny companionships have so far primarily reified the heteronormative tradition. For Halberstam, the
queer relationships between humans and nonhumans are not queer at all, but rather maintain the
status quo. Halberstam writes, "The porous boundary between the biological and the cultural
is quickly traversed without any sense of rupture whatsoever, and the biological, the animal and
the nonhuman are simply recruited for the continuing reinforcement of the human, the
heteronormative and the familial” (266). Part of this reinforcement comes from the “makeover”
stories that have gained attention in the past decades: “America’s families are always having to
be made over – made to conform to an idea of family life that is always shaped by political
forces and never inclusive of the diversity of human experience” (Becker 176).
As explored in Chapter Four, even a movie about zombies can help illustrate a potential family makeover that the non-human can bring about. In Fido we see a story that transforms the family makeover from the normative into the abnormal, making room for the non-human, the strange posthuman Other, and thus showing that the makeover can potentially result in new acceptance of alternative families and individual acceptance of unique selfhood.
Similarly, Davecat's relationship with his RealDoll Sidore expresses a coming change in personal relationships. His expression of a desire to be treated as a "decent 'normal'" individual echoes the kind of plea from queer communities, people "Who simply have different preferences for their partners." However, upon further investigation, relationships with RealDolls, including Davecat's, return us to Halberstam's argument that these non/human relationships "[continue] reinforcement of the human, the heteronormative and the familial" (266). The fact that
RealDolls are anatomically correct and, by and large, exaggerated versions of "sexy women" is only part of the heteronormative, un-queer formation of such relationships. These dolls, manufactured by Abyss Studios, are mostly purchased by white men (Davecat seems to be an exception to the rule), which not only reaffirms the hetero-standard but also can be seen as confirming what David
Eng describes as "queer liberalism." By purchasing their "love dolls," these men are affirming their citizenship with money while also performing "acceptable" practices of desire: the synthetic partner is female, passive, and literally "made to obey." Is this another stereotype that will need to be overcome as androids come to market? "…in popular culture and historically, fembots are the ultimate, misogynistic, male fantasy. They're a mythic construct used to serve male desires. These fembots are sex slaves and sex objects, ready and willing to serve and clean up after these men" (Sarkeesian). The reification of heteronormative ideals opens up
possibilities for further research.
At the same time as leaps are being made in robotics development, AI programmers are
also working toward human-like behavior and the simulation of emotion. I believe it's not productive to project the "whens," "how-soons," and "hows." Rather, it's important to consider the "what-ifs" and build a framework for defining sentience and personhood in a non-species-specific way. As these developments continue, we must be aware of the stereotypes
from popular culture and the tropes from science fiction surrounding the android, or else we run the risk of becoming our own cliché from a robot-apocalypse horror. Market forces are clearly
pushing robotics in an embodied and gendered direction. According to Brian Cooney, "Because artificially intelligent machines with agent capabilities will increase the productivity of many enterprises they will be selected by the market forces for development and reproduction" (xxii), thus leading us to a future when robots will become like our children, as Moravec predicts.
Indeed, studies in HRI show that humans are more comfortable interacting with embodied
robots, and as long as the uncanny valley is accounted for, I find it highly likely that we will
soon have fully human-like synthetic androids.52 Rapoport considers that we might someday
“construct an automaton with a will of its own, a creative imagination of its own, that is, an
ability to imagine things that we could not have imagined” (46). But Rapoport argues, as I do,
that “the crucial question… is not whether we could do such a thing but whether we would want
to” (46). He goes on to point out that “Some will say yes; others no. If both kinds of people
engage in a serious, really honest self-searching dialogue, they may learn a good deal about both
themselves and others” (46).
Looking at what the theory of fiction surrounding androids offers us, and keeping the
truth of robotics and AI development in focus, where do we stand today? Is it time to “welcome
our robot overlords”? Or perhaps we must remind ourselves of what Judith Butler tells us in
Undoing Gender: “[it is necessary to keep] our very notion of the ‘human’ open to a future
articulation” because that openness is “essential to the project of a critical international human
rights discourse and politics" (222). Our selves and the boundaries of the self are becoming more and more porous with technology. John Law, as cited by Elaine Graham, explains: "the very
dividing line between those objects that we choose to call people and those we call machines is
variable, negotiable, and tells us as much about the rights, duties, responsibilities and failings of
people as it does about those of machines” (Law, cited in Graham, 20).
Whether it's ultra-smart computers or aliens that we encounter in the future, ethics and law have so far been reactive – concerned only with the here and now. But we need a further-reaching frame of mind. It would be reckless to believe that in a posthuman future, humans will
still be the essential measurement against which all other morals should be judged.
52 I cannot speak with as much certainty about biological androids, although there have been significant developments in 3D organ printing and cloning.
While I consider possible futures and am optimistic about the human ability to find
kindness and compassion for unfamiliar others, I am not a transhumanist, or someone who
believes in improving the human condition (body, self, and society) through technological
enhancement. There are many views surrounding how to reach those goals from biological and
genomic enhancement to cybernetic implants and exploration into artificial life. In general, their
thinking is good, in that they are looking forward to a posthuman future in which all problems
will be solved with science, but I think that a healthy skepticism of the promises of technology is
necessary for the best possible futures to come to fruition. That said, I do not propose a halt to
all AI or robotics development. In fact, that would simply push research to the fringes and out of
the reach of legal and ethical oversight. The critical posthuman perspective is useful for the
continuing discussion.
While the transhumanist cause is fighting for a literal posthuman future, critical posthumanist theory considers how our minds, bodies, and society are already becoming posthuman through technological interventions now taking place. Whatever worldview or epistemological/metaphysical view one endorses, the transhumanists are correct about one thing – we are rapidly approaching a time when our bodies and selves will be inextricably tied to
technology to enhance our existence. Some posthumanists (like Haraway and Miccoli) argue
that we are all already posthuman in the ways that technology has metaphorically become
enfolded into our existence. But in that enfolding, the human has still been part of our
"specialness." I argue that in our becoming literally posthuman, i.e., through implants and cybernetic or biological enhancement, our humanness should open outward to encompass other potential entities who meet guidelines for sentience. To do otherwise would surely lead to
a continuing human essentialist perspective that champions bigoted and oppressive beliefs.
Perhaps some of the discussions of the posthuman will come in forms that the android only peripherally addresses, but that still consider the world in a perilous balance between human and posthuman. For example, the story of X-Men introduced popular culture to posthuman
bodies that literally evolved out of biological humans. Through the filmic story-arc of X-Men
(starting in 2000, to the most recent X-Men: Days of Future Past, 2014), the two main characters,
Professor X/Charles Xavier (Patrick Stewart and James McAvoy) and Magneto/Eric Lehnsherr
(Ian McKellen and Michael Fassbender, respectively), go from friends, bonded by their shared difference as mutants, to ultimate enemies with deadly mutant powers. The historical definition of “human” is literally under attack as the mutants (or posthumans), known as Homo superior,
fight for control of the planet. “The war” is no longer between nations (the Soviet Union versus
America, as portrayed in X-Men: First Class) but rather is between ideals about the future of
humanity. On one side stands Professor X, who believes mutants can live side-by-side with
Homo sapiens, and on the other stands Magneto, who believes that mutants are the future. For
Magneto, “Peace was never an option… We are the future, Charles, not them!” The X-Men films
challenge audiences to pick a side: Is humanity, as it has been, worth saving and protecting, or
has the historical “human” outstayed its welcome?
Many of the discussions surrounding our posthuman futures have featured the android as
merely a passing figure, a side effect of the continuing development of robotics and AI, but I
believe the android should be (re)figured or (re)introduced into the discussion of critical
posthumanism. By using the android as an evocative object that explores the porous boundaries of body, self, and society, the discussion can begin outside the human self and then reflect back upon ourselves. We should take that time now rather than be left behind.
In early 2014 The Guardian published an article about the shift in power within the
mega-corporation Google. Ray Kurzweil had been hired as Google’s Artificial Intelligence guru and Director of Engineering. With this handoff of power and an undisclosed amount of money, Kurzweil immediately began buying up robotics and AI developers, supposedly in the hope of making (at least one of) his long-time predictions come true. In particular is his prediction that “by 2029, computers will be able to do all the things that humans do. Only better” (Cadwalladr 2). Although Carole Cadwalladr’s article teeters between (1) making fun of Kurzweil’s seemingly outrageous worldview and (2) sounding an alarm, the
overall message is that change is coming regardless of whether or not we are ready for it.
Aside from the attention The Guardian gives to developing AI and the world of
transhumanism, other news sources are jumping on the bandwagon of both alarmist and anti-AI
arguments or at least attempting to raise awareness. Watson stirred the pot in 2011 with his
performance on Jeopardy!, but concern is growing with the September 2, 2014 release of Nick
Bostrom’s book Superintelligence: Paths, Dangers, Strategies. Once it hit the shelves, the book gained mass popularity after Elon Musk, head of the electric car company Tesla and a science icon, tweeted his concern about AI: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes” (Augenbraun). In fact,
with that fear in mind, Wired reported that shortly after sharing that Tweet, Musk committed $10
million, “to support research aimed at keeping AI beneficial for humanity” (Alba). With Musk’s
very public commentary on his concern about the development of AI, many scholarly and popular journals have been addressing the question as well. For example, The Chronicle of Higher Education asks the question in its title: “Is Artificial Intelligence a Threat?” (Chen). The article sparked over a hundred comments and ultimately left readers with the
feeling that technology should not continue to develop without critical thought. Whether change comes in the form of the singularity or in some other nonlinear shift that we cannot predict, I believe science fiction offers predictions all of us can understand.
After this exploration, I return to Himma’s argument, in my words: “if it looks human,
seems human and in every way that matters appears to be human, then we should treat it as
human to maintain our own moral engagement with the rest of the world.” It is true that, in
Relke’s words, “We have no choice but to take our humanism with us into the posthuman, for
there is no Archimedean point outside our human selves from which we can proceed directly to
the posthuman” (131). But in honoring the most crucial part of our humanness, our ability to love and care for others, it is imperative, for the sake of the coming world of humanoid robots and strong Artificial Intelligence, that we borrow from the pages of fiction to see all potential futures, the good and the bad. Hopefully, through our everyday interactions with nonhuman others, we will see ourselves.
If theory about the posthuman has preoccupied itself by thinking reflexively about the
human self and intrusions upon it through technology, then the android offers a different subject
for examination. In other words, if thinking about how we think has so far been focused on
human thinking, then it is cyclical: as humans, we cannot step outside ourselves to think about thinking; it remains an interior process, something that happens “up here,” with a gesture to the head, indicating the brain. With a different metaphorical reference, the android (one that is in the process of literal development), we can describe thinking and feeling outside the human body. Using Android Theory, we can liberate the discussion of
cognition from a human-centric conception to one that is transbiological, getting us closer to a
concept of the self that can survive in a posthuman world.
WORKS CITED
2001: A Space Odyssey. Dir. Stanley Kubrick. Perf. Douglas Rain and Keir Dullea. Metro-Goldwyn-Mayer, 1968. Warner Home Video, 2007. Blu-ray.
Achenbach, Joel. “No-Brainer: In the Kasparov-Deep Blue Chess Battle, Only the Human Has His Wits About Him.” The Washington Post 10 May 1997, Final Edition: C, 1.2. ProQuest. Web. 15 Mar. 2015.
Ackerman, Diane. “Living, Thinking Houses.” The New York Times 4 Aug. 2012 (Opinion
Pages). Web. 2 Mar. 2015.
AK-. “I, For One, Welcome our New Insect Overlords.” Know Your Meme. Updated by Brad,
2013. Cheezburger Network. Web. 25 Mar. 2015.
Alba, Davey. “Elon Musk Donates $10M to Keep AI From Turning Evil.” Wired. 15 Jan.
2015. Web. 30 Jan. 2015.
Arquilla, John. “What Deep Blue Taught Kasparov – and Us.” Christian Science Monitor
89.120 (16 May 1997): n.p. Newspaper Source Plus. Web. 15 Mar. 2015.
Augenbraun, Eliene. “Elon Musk: Artificial Intelligence may be ‘more dangerous than nukes.’”
CBS News.com. 4 Aug. 2014. Web. 6 Aug. 2014.
“Author, Author.” Star Trek: Voyager S7, E20. Dir. David Livingston. Perf. Robert Picardo
and Kate Mulgrew. Paramount Television, 18 Apr. 2001. Netflix. Web. 15 Mar. 2015.
Badmington, Neil. “Theorizing Posthumanism.” Cultural Critique 53. Winter (2003): 10-27.
JSTOR. Web. 10 Dec. 2014.
Baker, Stephen. Final Jeopardy: Man vs. the Machine and the Quest to Know Everything. New
York: Houghton Mifflin Harcourt. 2011. Print.
Balsamo, Anne. Technologies of the Gendered Body: Reading Cyborg Women. Durham: Duke
University Press. 1996. Print.
Barr, Marleen. “Metahuman ‘Kipple’ or, Do Male Movie Makers Dream of Electric Women?
Speciesism and Sexism in Blade Runner.” Retrofitting Blade Runner: Issues in Ridley
Scott’s Blade Runner and Philip K. Dick’s Do Androids Dream of Electric Sheep? Ed. Judith B. Kerman. Bowling Green, OH: Bowling Green State University Popular Press, 1991. 25-31. Print.
Bartneck, Christoph, et al. “Is The Uncanny Valley An Uncanny Cliff?” 16th IEEE
International Conference on Robot & Human Interactive Communication in Jeju, Korea.
26-29 Aug. 2007. 368-373. IEEE Xplore. Web. 12 May 2013.
--. “My Robotic Doppelgänger – A Critical Look at the Uncanny Valley.” 18th IEEE
International Symposium on Robot and Human Interactive Communication in Toyama,
Japan. 27 Sept. – 2 Oct 2009. 269-276. IEEE Xplore Web. 12 May 2013.
Battlestar Galactica: The Complete Series. Dir. Michael Rymer. Writ. Glen Larson and Ronald
Moore. Perf. Edward James Olmos, Tricia Helfer, Grace Park. Universal Studios, 2010.
Blu-ray.
Becker, Ron. “‘Help is on the Way’: Supernanny, Nanny 911, and the Neoliberal Politics of the
Family.” The Great American Makeover: Television, History, Nation. Ed. Dana Heller.
New York, NY: Palgrave MacMillian, 2006. 175-192. Print.
Belsie, Laurent. “Chess Match Tests Limits of Computer ‘Intelligence.’” Christian Science
Monitor 89.110 (2 May 1997): n.p. Newspaper Source Plus. Web. 15 Mar. 2015.
Bendle, Mervyn F. “Teleportation, Cyborgs and the Posthuman Ideology.” Social Semiotics
12.1 (2002): 45-62. EBSCOhost. Web. 1 Mar. 2015.
Benford, Gregory and Elisabeth Malartre. Beyond Human: Living with Robots and Cyborgs.
New York: Forge, 2007. Print.
Bolton, Christopher. “From Wooden Cyborgs to Celluloid Souls: Mechanical Bodies in Anime
and Japanese Puppet Theatre.” Positions: East Asia Cultures Critique 10.3 (2002): 729-771. Project Muse. Web. 25 Oct. 2010.
Borenstein, Seth, and Jordan Robertson. “IBM ‘Watson’ Wins: Jeopardy Computer Beats Ken
Jennings, Brad Rutter” (sic). Huffingtonpost, Tech. 17 Feb. 2011.
TheHuffingtonPost.com, Inc. Web. 22 Apr. 2011.
Brabham, Daren C. “Our Robots, Ourselves.” Flow TV. 25 Mar. 2011. FlowTV.org.
Web. 14 Apr. 2011.
Braidotti, Rosi. The Posthuman. Cambridge, UK: Polity Press. 2013. Orbis Cascade Ebook.
Web. 16 Feb. 2015.
Brain Games. “Watch This!” Creators Jerry Kolber and Bill Margol. Nar. Neil Patrick Harris.
National Geographic Television, 2011 - Current. Netflix. Web. 5 Mar. 2015.
Bringsjord, Selmer. “Chess is Too Easy.” MIT’s Technology Review 101.2 (1998): 23-29.
EBSCOhost. Web. 15 Mar. 2015.
Brooks, Rodney A. Flesh and Machines: How Robots Will Change Us. New York: Pantheon
Books, 2002. Print.
Brownlee, John. “Professor Ishiguro’s Creepy Robot Doppelganger.” Wired. 26 Apr. 2007.
Web. 15 Feb 2013.
Butler, Judith. Undoing Gender. New York: Routledge, 2004. Print.
Cadwalladr, Carole. “Are the Robots about to Rise? Google’s new director of engineering
thinks so…” The Guardian 22 Feb. 2014. Web. 12 Sept. 2014.
Campbell, Murray, A. Joseph Hoane, Jr. and Fen-hsiung Hsu. “Deep Blue.” Artificial
Intelligence 134 (2002): 57-83. Elsevier Science Direct. Web. 12 Mar. 2015.
Campbell, Norah. “Future Sex: Cyborg Bodies and the Politics of Meaning.” Advertising &
Society Review 11.1. Project Muse. Web. 25 Oct. 2010.
Chalmers, David. “The Singularity: A Philosophical Analysis.” Journal of Consciousness
Studies 17: 7-65. 2010. Web. 21 Aug. 2014.
“Chat with Ramona.” KurzweilAI.net. Web. 1 Mar. 2015.
Chen, Angela. “Is Artificial Intelligence a Threat?” The Chronicle of Higher Education. 11
Sept. 2014. Web. 14 Sept. 2014.
Chilcoat, Michelle. “Brain Sex, Cyberpunk Cinema, Feminism, and the Dis/location of
Heterosexuality.” Feminist Formations 16.2 (2004) Indiana UP. 156-176. Project
Muse. 26 Oct. 2012. Web.
Cipra, Barry. “Will a Computer Checkmate a Chess Champion at Last?” Science 271.5249
(1996): 599. JSTOR. Web. 15 Mar. 2015.
Cleverbot.com. Rollo Carpenter, 2014. Web. 1 Mar. 2015.
Clocksin, William F. “Artificial Intelligence and the Future.” Philosophical Transactions:
Mathematical, Physical and Engineering Sciences, 361.1809 (2003). 1721-1748. Web.
14 Nov. 2008.
Cobb, Michael. Single: Arguments for the Uncoupled. New York: New York University Press,
2012. Print.
Consalvo, Mia. “Borg Babes, Drones, and the Collective: Reading Gender and the Body in Star
Trek.” Women’s Studies in Communication 27.2 (2004): 176-203. Communication &
Mass Media Complete. Web. 20 May 2013.
Cook, Gareth. “Watson, the Computer Jeopardy! Champion and the Future of Artificial
Intelligence.” Scientific American. 1 Mar. 2011. Scientific American. Web. 14 Apr. 2011.
Coontz, Stephanie. The Way We Never Were: American Families and the Nostalgia Trap. New
York, NY: BasicBooks, 1992. Print.
Dator, Jim. “Futures of Identity, Racism, and Diversity.” Journal of Futures Studies 8.3 (Feb.
2004): 47-54. Tamkang University Press. Web. 26 Nov. 2011.
Dautenhahn, Kerstin. “Socially Intelligent Robots: Dimensions of Human-Robot Interaction.”
Philosophical Transactions: Biological Sciences 362.1480 (2007): 679-704. JSTOR.
Web. 4 May 2011.
Davecat. “Any Doll-related news, Davecat? (June 09).” Shouting to Hear the Echoes: Contra
naturam, now and forever. Deafening Silence Plus, 5 June 2009. Web. 21 Nov. 2012.
Decker, Kevin S. “Inhuman Nature, or What’s It Like to Be a Borg?” Star Trek and
Philosophy: The Wrath of Kant. Popular Culture and Philosophy 35. Eds. Jason T. Eberl
and Kevin S. Decker. Peru, IL: Open Court, 2008. Print.
“The Deep QA Project: FAQs.” IBM. Web. 15 Mar. 2015.
“Deep Blue.” IBM Icons of Progress. IBM. Web. 15 Mar. 2015.
“Deep Blue Intelligence – And Ours.” St. Louis Post-Dispatch 15 May 1997, Five Star Lift Edition: B,
6.1. ProQuest. Web. 15 Mar. 2015.
DeFren, Allison. “Technofetishism and the Uncanny Desires of A.S.F.R. (alt.sex.fetish.robots).”
Science Fiction Studies 36.3 (2009): 404-440. JSTOR. Web. 23 Oct. 2012.
Deyle, Travis. “TED Talks about Robots and Robotics (Part 1).” Hizook: Robotics News for
Academics and Professionals 16 Jan. 2012. Travis. Web. 10 Mar. 2015.
Dibbell, Julian. “The Race to Build Intelligent Machines.” Time 147.13 (1996): 56-59.
EBSCOhost. Web. 12 Mar. 2015.
Doctor Who. “Rose.” Season 1, episode 1. BBC Television 17 Mar. 2006. Netflix. Web. 28
May 2013.
East, Brad. “Battlestar Galactica: Frakking Feminist – So say we all!” Slant Magazine. Slant
Magazine, 20 Mar. 2009. Web. 16 Oct. 2010.
Eberl, Jason T. and Kevin S. Decker, eds. Star Trek and Philosophy: The Wrath of Kant.
Popular Culture and Philosophy 35. Peru, IL: Open Court, 2008. Print.
Eng, David. The Feeling of Kinship: Queer Liberalism and the Racialization of Intimacy.
Durham: Duke UP, 2010. Print.
Epley, Nicholas, Adam Waytz, and John T. Cacioppo. “On Seeing Human: A Three-Factor
Theory of Anthropomorphism.” Psychological Review 114.4 (2007): 864-886. The
American Psychological Association. ProQuest. Web. 7 Dec. 2012.
“Fair Haven.” Star Trek: Voyager S6, E11. Paramount Television, 12 Jan. 2000. Netflix. Web.
15 Feb. 2015.
Ferrando, Francesca. “Is the post-human a post-woman? Cyborgs, robots, artificial intelligence
and the futures of gender: a case study.” European Journal of Futures Research 2.43
(2014): 1-17. Springer. Web. 10 Mar. 2015.
Fido. Dir. Andrew Currie. Perf. Billy Connolly, Carrie-Anne Moss and Dylan Baker.
Lionsgate, 2007 (American release). DVD.
“The Fink Family.” Nanny 911. Perf. Deborah Carroll. Granada Entertainment 14 Mar. 2005.
Web. 10 Apr. 2011.
Floridi, Luciano and J. W. Sanders. “Artificial Evil and the Foundation of Computer Ethics.”
Ethics and Information Technology 3.1(2001): 55-66. SpringerLink. Web. 26 June
2014.
Foremski, Tom. “Deep Blue’s Human Game: Technology.” Financial Times 19 May 1997,
USA Edition: 14. ProQuest. Web. 15 Mar. 2015
Ford, Jocelyn. “Fountains of Youth.” Radiolab. Robert Krulwich and Jad Abumrad. 30 June
2014. Web. 23 Feb. 2015.
Ford, Kenneth and Patrick Hayes. “On Conceptual Wings.” Thinking about Android
Epistemology. Cambridge, Mass: MIT Press, 2006. Print.
Geller, Tom. “Overcoming the Uncanny Valley.” IEEE Computer Graphics and Applications
28.4 (2008): 11-17. IEEE Xplore. Web. 10 May 2013.
Ghost in the Shell. Dir. Mamoru Oshii. Production I.G, 1995. 24th Anniversary, Anchor Bay,
2014. Blu-ray.
Ghost in the Shell: Arise. Dir. Kazuchika Kise. Funimation and Manga Entertainment, 2013.
Netflix. Web. 13 Mar. 2015.
Ghost in the Machine. Dir. Rachel Talalay. Perf. Karen Allen and Chris Mulkey. 20th Century
Fox. 1993. DVD.
“Ghost Pain.” Ghost in the Shell: Arise S1, E1. Dir. Kazuchika Kise. Funimation and Manga
Entertainment, 21 Oct. 2014 (English release). Netflix. Web. 13 Mar. 2015.
“Ghosts & Shells.” Ghost in the Shell Wiki. Wikia Entertainment. Web. 9 June 2014.
Gimbel, Steven. “Get with the Program: Kasparov, Deep Blue, and Accusations of
Unsportsthinglike Conduct.” Journal of Applied Philosophy 15.2 (1998): 145-154.
Wiley Online Library. Web. 15 Mar. 2015.
Giffney, Noreen. “Queer Apocal(o)ptic/ism: The Death Drive and the human.” Queering the
Non/Human. Eds. Noreen Giffney and Myra J. Hird. Cornwall: Ashgate, 2008. 55-78.
Print.
Gips, James. “Towards the Ethical Robot.” Android Epistemology. Eds. Kenneth M. Ford, Clark
Glymour and Patrick J. Hayes. Menlo Park, CA: American Association for Artificial
Intelligence, 1995. Print.
Goertzel, Ben, Seth Baum and Ted Goertzel. “How Long Till Human-Level AI?” h+
(Humanity +). 5 Feb. 2010. H+ Magazine. Web. 15 Nov. 2012.
“Grace Park & Tricia Helfer Cover Maxim Magazine.” Just Jared. Celebuzz, 14 Oct. 2009.
Web.
Graham, Elaine. Representations of the Post/human: Monsters, Aliens and Others in Popular
Culture. Great Britain: Manchester University Press, 2002. Print.
Greenfield, Rebecca. “How Star Wars Influenced Jibo, The First Robot for Families.” Fast
Company. 21 July. 2014. Web. 10 Feb. 2015.
Gustin, Sam. “Watson Supercomputer Terminates Humans in first Jeopardy Round.”
Wired.com. 15 Feb. 2011. Condé Nast Digital. Web. 20 Apr. 2011.
Guys and Dolls (or Love Me, Love My Doll). Dir. Nick Holt. Featuring Mark Strong, narrator.
BBC. Top Documentary Films. 2007. Web. 27 Nov. 2012.
Gwaltney, Marilyn. “Androids as a Device for Reflection on Personhood.” Retrofitting Blade
Runner: Issues in Ridley Scott’s Blade Runner and Philip K. Dick’s Do Androids Dream
of Electric Sheep? Ed. Judith Kerman. Bowling Green, OH: Bowling Green State
University Popular Press, 1991. 32-39. Print.
Halberstam, Judith. The Queer Art of Failure. Durham and London: Duke University Press, 2011.
Print.
Hanson, David. “Expanding the Aesthetic Possibilities for Humanoid Robots.” 2005: n.p.
Android Science. Web. 20 May 2013.
Haraway, Donna. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the
Late Twentieth Century.” Simians, Cyborgs and Women: The Reinvention of Nature.
New York: Routledge, 1991. Print.
--. “Foreword: Companion Species, Mis-recognition, and Queer Worlding.” Queering the
Non/Human. Eds. Noreen Giffney and Myra J. Hird. Queer Interventions. Padstow,
Cornwall: Ashgate, 2008. xxiii-xxvi. Print.
Harnick, Chris. “Tricia Helfer to Guest Star on ‘Lie to me’.” TV Squad. AOL, 3 Sept. 2010.
Web. 2 Dec. 2010.
Hart, Hugh. “Death and Fanboys: Q&A With Galactica’s Sexy Cylon.” Wired.com. Condé
Nast Digital, 3 Apr. 2008. Web. 16 Oct. 2010.
--. “Strong Women Steer Battlestar Galactica’s Final Voyage.” Wired.com. Condé Nast
Digital, 15 Jan. 2009. Web. 16 Oct. 2010.
Harris, Elizabeth A. “On Jeopardy! Watson Rallies, then Slips.” The New York Times, Arts
Beat. 14 Feb. 2011. The New York Times Company. Web. 22 Apr. 2011.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature,
and Informatics. Chicago: University of Chicago Press, 1999. Print.
Hedberg, Sara. “Smart Games: Beyond the Deep Blue Horizon.” IEEE Expert 12.4 (1997): 15-18. IEEE Xplore Journals. Web. 12 Mar. 2015.
Herbrechter, Stefan and Ivan Callus. “What is a Posthumanist Reading?” Angelaki 13.1 (2008):
95-111. EBSCOhost. Web. 1 Mar. 2015.
Himma, Kenneth Einar. “Artificial Agency, Consciousness and the Criteria for Moral Agency:
What properties must an artificial agent have to be a moral agent?” Ethics and
Information Technology. 11 (2009): 19-29. LexisNexis Academic. Web. 3 May 2012.
Hoffman, Donald D. “Computer Consciousness.” The Encyclopedia of Perception. B.
Goldstein, Ed. UC Irvine (2009): 283-285. Web. 3 May 2012.
“Home: Part 1.” Battlestar Galactica S2, E6. Dir. Jeff Woolnough. Perf. Katee Sackhoff and
Mary McDonnell. NBC Universal Television 26 Aug. 2005. Universal Studios, 2010.
Blu-ray.
Huyssen, Andreas. “The Vamp and the Machine: Technology and Sexuality in Fritz Lang’s
Metropolis.” New German Critique 24/25 (1981/82): 221-237. JSTOR. Web. 12 Mar.
2007.
“iRobot: Our History.” iRobot Corporation, 2015. Web. 12 Mar. 2015.
Ishiguro, Hiroshi. “Android Science: Conscious and Subconscious Recognition.” Connection
Science 18.4 (2006): 319-332. Academic Search Complete. Web. 17 Jan. 2013.
Ishiguro, Hiroshi and Minoru Asada. “Humanoid and Android Science.” Ed. Silvia Coradeschi.
“Human-Inspired Robots.” IEEE Intelligent Systems 21.4 (2006): 74-85. Print.
The Island. Dir. Michael Bay. Perf. Ewan McGregor, Scarlett Johansson and Steve Buscemi.
Dream Works and Warner Bros., 2005. DVD.
Jacob, Pierre. “Intentionality.” Stanford Encyclopedia of Philosophy, 7 Aug. 2003. Revised 15
Oct. 2014. Stanford Encyclopedia of Philosophy. Web. 20 Mar. 2015.
Jimmy. “Tricia Helfer in Playboy.” Wrestle With Jimmy’s Hot Bitch of the Day. 16 Jan. 2007.
Web. 26 Nov. 2010.
“Kasparov vs. the Monster.” Christian Science Monitor 88.56 (15 Feb. 1996): n.p. Newspaper
Source Plus. Web. 15 Mar. 2015.
Kerman, Judith B., ed. Retrofitting Blade Runner: Issues in Ridley Scott’s Blade Runner and
Philip K. Dick’s Do Androids Dream of Electric Sheep? Bowling Green: Bowling Green
State University Popular Press, 1991. Print.
Kiesler, Sara, Aaron Powers, Susan R. Fussell and Cristen Torrey. “Anthropomorphic
Interactions with a Robot and Robot-Like Agent.” Social Cognition 26.2 (2008): 169-181. Web. 11 Nov. 2012.
Kind, Amy. “Qualia.” Internet Encyclopedia of Philosophy. Web. 12 Mar 2015.
Klass, Morton. “The Artificial Alien: Transformations of the Robot in Science Fiction.” Annals
of American Academy of Political and Social Science 470 (Nov. 1983): 171-179.
Klima, Ivan. Karel Čapek: Life and Work. North Haven: Catbird Press, 2001. Print.
“Kobol’s Last Gleaming: Part 1.” Battlestar Galactica S1, E12. Dir. Michael Rymer. Perf.
Grace Park and Edward James Olmos. NBC Universal Television 25 Mar. 2005.
Universal Studios, 2010. Blu-ray.
Kompare, Derek. “Extraordinarily Ordinary: The Osbournes as ‘An American Family.’” Reality
TV: Remaking Television Culture. 2nd ed. Susan Murray and Laurie Ouellette, eds. New
York, NY: New York University Press, 2009. 100-119. Print.
Krauthammer, Charles. “Psyched Out by Deep Blue.” The Washington Post 16 May 1997,
Final Edition: A, 25.6. ProQuest. Web. 15 Mar. 2015.
Kroker, Arthur. Exits to the Posthuman Future. Cambridge, UK: Polity Press. 2014. Orbis
Cascade Ebook. Web. 16 Feb. 2015.
Kuhn, Annette. Alien Zone: Cultural Theory and Contemporary Science Fiction Cinema. New
York: Verso, 1990. Print.
Kurzweil, Ray. “How Infinite in Faculty.” Discover Magazine. Nov. 2012. 54-56. Print.
--. How to Create a Mind. New York: Penguin. 2012. Print.
Kuusela, Antti. “Wittgenstein and what’s Inside the Terminator’s Head.” Terminator and
Philosophy: I’ll Be Back, Therefore I Am. Eds. William Irwin, Richard Brown, and Kevin S. Decker. Hoboken, NJ: John Wiley & Sons, 2009. 266-277. Web. 15 Mar.
2012.
Lamers, Maarten H. and Fons J. Verbeek, eds. Preface. Human-Robot Personal Relationships.
3rd International Conference, HRPR 2010, Leiden, Netherlands. June 23-24, 2010.
Leiden, Netherlands: Springer, 2011. Print.
Lapidos, Juliet. “Chauvinist Pigs in Space: Why Battlestar Galactica is not so Frakking
Feminist After All.” Slate Magazine. Washington Post, 5 Mar. 2009. Web. 16 Oct.
2010.
Le Guin, Ursula K. Dancing at the Edge of the World: Thoughts on Words, Women, Places.
1989. New York: Grove Press, 1997. Print.
Leonard, Elisabeth Anne, ed. Into Darkness Peering: Race and Color in the Fantastic.
Contributions to the Study of Science Fiction and Fantasy, Number 74. Westport:
Greenwood Press, 1997. Print.
Libin, A. V. and E. V. Libin. “Person-Robot Interactions from the Robopsychologist’s Point of
View: The Robotic Psychology and Robotherapy Approach.” Proceedings of the IEEE,
92.11, Nov. 2004. Web. 2 June 2013.
“Life Support.” Star Trek: Deep Space Nine S3, E13. Dir. Reza Badiyi. Perf. Alexander
Siddig, Nana Visitor and Philip Anglim. Paramount Television, 30 Jan. 1995. Netflix.
Web. 15 Mar. 2015.
Lynch, Dan and Bert Herzog. “Forum: A Time for Celebration.” Communications of the ACM
39.2 (1996): 11-12. ACM Digital Library. Web. 12 Mar. 2015.
McCarthy, John. “What is Artificial Intelligence?” Stanford Formal Reasoning Group. 11 Dec.
2007. Web. 12 Apr. 2011.
McDermott, Drew. “Yes, Computers Can Think.” New York Times 14 May 1997: A21.
ProQuest. Web. 15 Mar. 2015.
MacDorman, Karl and Hiroshi Ishiguro. “The Uncanny Advantage of using Androids in
Cognitive and Social Science Research.” Interaction Studies 7.3 (2006): 297-337.
EBSCOhost. Web. 11 Nov. 2011.
--. “Toward Social Mechanisms of Android Science.” Interaction Studies 7.2 (2006): 289-296.
EBSCOhost. Web. 11 Nov. 2011.
The Manolith Team. “15 Hottest Fembots of All Time.” Manolith. Tsavo Network, 3 Nov.
2009. Web. 16 Oct. 2010.
Markoff, John. “The Coming Superbrain.” New York Times Online. 23 May 2009. Web. 10
Mar. 2013.
--. “Scientists Worry Machines May Outsmart Man.” New York Times Online. 25 July. 2009.
Web. 10 Mar. 2013.
“Married to a Doll/Picking My Scabs.” My Strange Addiction. S1, E8. Prod. Jason Bolicki.
TLC, 26 Jan. 2011. Netflix. Web. 18 Oct. 2012.
McCloud, Scott. Understanding Comics: The Invisible Art. New York, NY: HarperCollins,
1993. Print.
McNamara, Kevin R. “Blade Runner’s Post-Individual Worldspace.” Contemporary Literature
38.3 (1997): 422-446. JSTOR; The University of Wisconsin Press, Journals Division.
Web. 6 Feb. 2013.
“The Measure of a Man.” Star Trek: The Next Generation S2, E9. Writ. Melinda M. Snodgrass.
Dir. Robert Scheerer. Perf. Patrick Stewart, Brent Spiner and Amanda McBroom.
Paramount Studios, 11 Feb. 1989. Paramount, 2012. Blu-ray.
Melzer, Patricia. Alien Constructions: Science Fiction and Feminist Thought. Austin:
University of Texas Press, 2006. Print.
Memmi, Albert. Racism. 1982. Trans. Steve Martinot. Minneapolis, MN: University of
Minnesota Press, 2000. Print.
Miccoli, Anthony. Posthuman Suffering and the Technological Embrace. Lanham, MD:
Lexington Books, 2010. Print.
Mills, Charles. The Racial Contract. Ithaca, NY: Cornell University Press, 1997. Print.
Milutis, Joe. “Pixelated Drama.” Afterimage 25.3 (1997): 6-7. EBSCOhost. Web. 15 Mar.
2015.
“Mind over Matter.” New York Times 10 May 1997. ProQuest. Web. 15 Mar. 2015.
Misselhorn, Catrin. “Empathy with Inanimate Objects and the Uncanny Valley.” Minds &
Machines 19.3 (2009): 345-359. Academic Search Complete. Web. 10 May 2013.
“Mission.” Humanity+. Humanity Plus, Inc., n.d. Web. 10 Mar. 2015.
Moon. Dir. Duncan Jones. Perf. Sam Rockwell and Kevin Spacey. Stage 6 Films and Sony
Pictures Classics, 2009. DVD.
Moor, James H. “The Nature, Importance, and Difficulty of Machine Ethics.” IEEE Intelligent
Systems 21.4 (2006): 18-21. IEEE Xplore Journals. Web. 26 June 2014.
Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA:
Harvard UP, 1990. Print.
--. Robot: Mere Machine to Transcendent Mind. Oxford UP, 1998. Print.
--. “Watson and the future of AI.” Kurzweil Accelerating Intelligence. 31 Jan. 2011.
KurzweilAINetwork. 15 Mar. 2011. Web.
More, Max. “The Philosophy of Transhumanism.” The Transhumanist Reader. Ed. Max More.
Oxford, UK: Wiley-Blackwell, 2013. Print.
More, Max and Natasha Vita-More, eds. The Transhumanist Reader: Classical and
Contemporary Essays on the Science, Technology, and Philosophy of the Human Future.
West Sussex, UK: John Wiley & Sons, 2013. Print.
Mori, Masahiro. “The Uncanny Valley.” Energy 7.4 (1970): 33-35 (Japanese). Translated by
Karl F. MacDorman and Norri Kageki. IEEE Robotics & Automation Magazine 19.2
(2012): 98-100. IEEE Xplore. Web. 15 Feb. 2013.
“Mori Uncanny Valley.” Wikimedia Commons. 1 May 2007. Web. 20 Feb. 2015.
Newitz, Annalee. “The Fembot Mystique.” PopSci. 10 Aug. 2006. Web. 10 Nov. 2010.
“Number Six (Battlestar Galactica).” Wikipedia. Wikipedia, 31 Oct. 2010. Web. 14 Nov. 2010.
Orbaugh, Sharalyn. “Sex and the Single Cyborg: Japanese Popular Culture Experiments in
Subjectivity.” Science Fiction Studies 29.3 (Nov. 2002): 436-452. JSTOR. Web. 16 Aug. 2012.
Oyedele, Adesegun, Soonkwan Hong and Michael S. Minor. “Contextual Factors in the
Appearance of Consumer Robots: Exploratory Assessment of Perceived Anxiety toward
Humanlike Consumer Robots.” CyberPsychology & Behavior 10.5 (2007): 624- 632.
Academic Search Complete. Web. 10 Mar. 2013.
“Pegasus.” Battlestar Galactica S2, E10. Dir. Michael Rymer. Perf. Tricia Helfer and James
Callis. NBC Universal Television 23 Sept. 2005. Universal Studios, 2010. Blu-ray.
Patell, Cyrus R. K. “Screen Memory: Androids and Other Artificial Persons.” Harvard Review
3 (1993): 23-29. JSTOR; Harvard College Library. Web. 16 Aug. 2010.
Perkowitz, Sidney. Digital People: From Bionic Humans to Androids. Washington, DC: Joseph
Henry Press, 2004. Print.
Peterson, Ivars. “Computer Triumphs over Human Champion.” Science News 151.20 (1997):
n.p. EBSCOhost. Web. 12 Mar. 2015.
Plug and Pray. Dir. Jens Schanze. Featuring Ray Kurzweil and Joseph Weizenbaum. Mascha
Film, 2010. Hulu. 12 Feb. 2014.
Pordzik, Ralph. “The Posthuman Future of Man: Anthropocentrism and the Other of
Technology in Anglo-American Science Fiction.” Utopian Studies 23.1 (2012): 142-161.
EBSCOhost. Web. 6 Mar. 2015.
“The Professor’s Doppelgänger Robot.” Next Nature, 6 Mar. 2007. Post by Dr. Natural. Web.
15 Mar. 2015.
Ramey, Christopher H. “‘For the sake of others’: The ‘personal’ ethics of human-android
interaction.” Cognitive Science Society: Toward Social Mechanisms of Android Science:
A CogSci 2005 Workshop, Stresa, Italy. 26 July. 2005. Web. 5 June 2013.
Rahner, Mark. “Fido is a dog of another kind.” The Seattle Times. 6 July. 2007. Seattle Times
Company. Web. 10 Apr. 2011.
Rapoport, Anatol. “The Vitalists’ Last Stand.” Thinking about Android Epistemology. Eds.
Kenneth M. Ford, Clark Glymour, and Patrick Hayes. Menlo Park, CA: American
Association for Artificial Intelligence, 2006. Print.
RealDoll. “Testimonials.” Abyss Creations. Web. 30 Oct. 2012.
Reid, Roddey. “Death of the Family, or, Keeping Human Beings Human.” Posthuman Bodies.
Halberstam, Judith and Ira Livingston, eds. Bloomington and Indianapolis: Indiana
University Press, 1995. 177-199. NetLibrary. Web. 20 Apr. 2011.
Relke, Diana M.A. Drones, Clones, and Alpha Babes: Retrofitting Star Trek’s Humanism, Post-9/11. Alberta, Canada: U of Calgary Press, 2006. Print.
“Resurrection Ship: Parts 1 & 2.” Battlestar Galactica S2, E11-12. Dir. Jeff Woolnough. Perf.
Edward James Olmos, Katee Sackhoff and Tricia Helfer. NBC Universal Television 6
Jan. 2006 and 13 Jan. 2006. Universal Studios, 2010. Blu-ray.
Robertson, Jennifer. “Gendering Humanoid Robots: Robo-Sexism in Japan.” Body & Society
16.1 (2010): 1-36. Sage Publications. Web. 6 July. 2010.
RoboCop. Dir. José Padilha. Perf. Joel Kinnaman and Gary Oldman. Strike Entertainment, 30
Jan. 2014. Netflix. Web. 4 Mar. 2015.
Roden, David. “Deconstruction and Excision in Philosophical Posthumanism.” Journal of
Evolution and Technology 21.1 (2010): 27-36. JetPress. Web. 8 Nov. 2010.
Rupert, Robert D. Cognitive Systems and the Extended Mind. New York, NY: Oxford UP,
2009. Print.
Russell, Stuart J. and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall,
2010. Print.
Ryan, Maureen. “Play it again, Starbuck: Talking to Weddle and Thompson about ‘Someone to
Watch Over Me.’” Chicago Tribune 28 Feb. 2009. Web. 15 Jan. 2015.
S., John. “How Does Watson Know What Watson Knows?” No Pun Intended. 15 Feb. 2011.
WordPress. Web. 23 Mar. 2011.
Sarkeesian, Anita. “Mad World: Fembots, Advertising and the Male Fantasy.” Bitch Media. 20
Apr. 2010. Web. 6 July. 2010.
Schick, Theodore. “‘Your Big Chance to Get Away from It All’: Life, Death and Immortality.”
Star Trek and Philosophy: The Wrath of Kant. Popular Culture and Philosophy 35. Eds.
Eberl, Jason T. and Kevin S. Decker, eds. Peru, IL: Open Court, 2008. Print.
“Sentience.” Memory Alpha. Wikia Entertainment, n.d. Ed. Renegade54. Web. 10 Nov. 2010.
Serial Experiments Lain. Dir. Ryūtarō Nakamura. Triangle Staff and Pioneer LDC, 1998.
Funimation, 2014. Blu-ray.
Sevy, Steven. “Big Blue’s Hand of God.” Newsweek 129.20 (1997): 72. EBSCOhost. Web. 12
Mar. 2015.
Sharp, Sharon. “Fembot Feminism: The Cyborg Body and Feminist Discourse in The Bionic
Woman.” Women’s Studies 36 (2007): 507-523. EBSCOhost. Web. 18 Oct. 2010.
Shaw-Garlock, Glenda. “Loving Machines: Theorizing Human and Sociable-Technology
Interaction.” Lecture Notes of the Institute for Computer Sciences, Social Informatics
and Telecommunications Engineering 59 (2011): 1-10. Springer Link. Web. 13 Nov.
2012.
“Shit That Siri Says.” Tumblr. N.d. Web. 15 Mar. 2015.
Shook, Carrie. “Banks that chat, and other irrelevancies.” Forbes 161.8 (1998): 224-225.
EBSCOhost. Web. 21 Mar. 2015.
Simons, Geoff. Is Man a Robot? Great Britain: John Wiley and Sons, 1986. Print.
“Six Appeal: Tricia Helfer.” UGO Entertainment. UGO Entertainment, n.d. Web. 18 Oct.
Smith, David Livingstone. Less Than Human: Why We Demean, Enslave, and Exterminate
Others. New York: St. Martin’s Press, 2011. Print.
Springer, Claudia. Electronic Eros: Bodies and Desire in the Postindustrial Age. Austin:
University of Texas Press, 1996. Print.
Stoate, Robin. “‘We’re Not Programmed, We’re People’: Figuring the Caring Computer.”
Feminist Theory 12.2 (2012): 197-211. Sage Journals Online. Web. 1 June 2013.
Stone, Allucquère Rosanne. “Will the Real Body Please Stand Up?” Cyberspace: First Steps.
ed. Michael Benedikt. Cambridge: MIT Press (1991): 81-118. Sodacity.net. Web. 12
Feb. 2015.
Telotte, J.P. “Human Artifice and the Science Fiction Film.” Film Quarterly 36.3 (1983): 44-51. JSTOR. Web. 12 Dec. 2007.
--. Replications: A Robotic History of the Science Fiction Film. Chicago: University of Illinois
Press, 1995. Print.
Temple, James. “Boston Researcher Cynthia Breazeal Is Ready to Bring Robots Into the Home.
Are You?” Re/Code. Part 9 of the Boston Special Series. CNBC, 2015. Web. 12 Mar.
2015.
Terminator Anthology. Dirs. James Cameron, Jonathan Mostow, McG. Perf. Arnold
Schwarzenegger, Linda Hamilton, Michael Biehn, Edward Furlong, Robert Patrick.
Warner Home Video, 2013. Blu-ray.
Thacker, Eugene. “Data Made Flesh: Biotechnology and the Discourse of the Posthuman.”
Cultural Critique 53 (2003): 72-97. Project Muse. Web. 4 Mar. 2015.
Tooley, Michael. “Abortion and Infanticide.” Philosophy and Public Affairs 2.1 (1972): 37-65.
Web. 14 Nov. 2010.
Transcendence. Dir. Wally Pfister. Perf. Johnny Depp and Rebecca Hall. Alcon Entertainment,
2014. Amazon Prime Instant Video. Web. 7 Feb. 2015.
“Transhumanist FAQ.” Humanity+. Web. 6 Dec. 2010.
“Tricia Helfer: Biography.” TV Guide. TV Guide, n.d. Web. 4 Nov. 2010.
“Tricia Helfer Biography.” Who2? Who2, n.d. Web. 27 Nov. 2010.
“Tricia Helfer Playboy Pictorial.” Current TV. Celebuzz, 26 Apr. 2010. Web. 2 Dec. 2010.
Tucker, Ken. “Jeopardy! Review: Ken Jennings, Brad Rutter vs. Watson the Computer: Round
1 Goes to…” Entertainment Weekly. Entertainment Weekly and Time, 14 Feb. 2011.
Web. 15 Mar. 2011.
Turing, Alan M. “Computing Machinery and Intelligence.” Mind 59.236 (1950): 433-460. Web. 10 Dec.
2010.
“The Turing Problem.” Radiolab. Robert Krulwich and Jad Abumrad. Interviewed David
Leavitt. 19 Mar. 2012.
Turkle, Sherry. The Second Self: Computers and the Human Spirit. 1984. Cambridge: MIT
Press, 2005. Print.
--. “What Makes an Object Evocative?” Evocative Objects: Things We Think With. Ed. Sherry
Turkle. Cambridge: MIT Press, 2007. Print.
Van Oost, Ellen and Darren Reed. “Towards a Sociological Understanding of Robots as
Companions.” Lecture Notes of the Institute for Computer Sciences, Social Informatics
and Telecommunications Engineering 59 (2011): 11-18. Springer. Web. 13 Nov. 2012.
Vary, Adam. “The Beginning of the End: A ‘Battlestar Galactica’ Oral History.” Entertainment
Weekly. Entertainment Weekly and CNN, 12 Mar. 2009. Web. 12 Nov. 2010.
Waytz, Adam, Nicholas Epley, and John T. Cacioppo. “Social Cognition Unbound: Insights Into
Anthropomorphism and Dehumanization.” Current Directions in Psychological Science
19.58 (2010): 58-62. Sage Publications Online. Web. 7 Dec. 2012.
“Watson and Jeopardy!” IBM’s FAQs. IBM Research. Web. 14 Apr. 2011.
Webb, Michael. “The Robots Are Here! The Robots Are Here!” Design Quarterly 121 (1983):
4-21. Walker Art Center; JSTOR. Web. 11 Jan. 2010.
Weber, Brenda R. Makeover TV: Selfhood, Citizenship, and Celebrity. Durham and London:
Duke University Press, 2009. Print.
Weber, Bruce. “A Mean Chess-Playing Computer Tears at the Meaning of Thought.” New York
Times 19 Feb. 1996: A1-B6. ProQuest. Web. 15 Mar. 2015.
--. “What Deep Blue Learned in Chess School.” New York Times 18 May 1997: 1. ProQuest.
Web. 15 Mar. 2015.
“What Are Little Girls Made Of?” Star Trek (TOS). S1, E9. Dir. James Goldstone. Perf.
William Shatner, Leonard Nimoy and Michael Strong. Desilu Productions and
Paramount Television, 20 Oct. 1966. Netflix. Web. 18 Feb. 2014.
Wilson, Eric C. “Moviegoing and Golem-making: The Case of Blade Runner.” Journal of Film
and Video 57.3 (2005): 31-43. JSTOR. Web. 1 Mar. 2007.
Wolfe, Cary. What is Posthumanism? Minneapolis, MN: University of Minnesota Press, 2010.
Posthumanities Series, 8. Print.
Woollaston, Victoria. “We’ll be uploading our entire MINDS to computers by 2045…”
DailyMail.com. 19 June 2013. Web. 10 Mar. 2015.
Wortkunst. “Scary dolls, puppets, dummies, mannequins, toys, and marionettes.” Internet
Movie Database (IMDb) 02 July. 2012. Last updated Nov. 2014. Web. 22 Mar. 2015.
X-Men. Dir. Bryan Singer. Perf. Patrick Stewart and Ian McKellen. Twentieth Century Fox,
2000. Twentieth Century Fox, 2009. DVD.
X-Men: Days of Future Past. Dir. Bryan Singer. Perf. James McAvoy and Michael Fassbender.
Twentieth Century Fox, 2014. Blu-Ray.
X-Men: First Class. Dir. Matthew Vaughn. Perf. James McAvoy and Michael Fassbender.
Twentieth Century Fox, 2011. DVD.
Zajonc, Robert B. “Social Facilitation.” Science 149.3681 (1965): 269-274. American
Association for the Advancement of Science; JSTOR. Web. 24 June 2014.
Zhao, Shanyang. “Humanoid Social Robots as a Medium of Communication.” New Media
Society 8.3 (2006): 401-419. Sage Journals Online. Web. 30 May 2014.
Zia-ul-Haque, Qazi S. M., Zhiliang Wang and Nasir Rehman Jadoon. “Investigating the
Uncanny Valley and Human Desires for Interactive Robots.” Proceedings of the 2007
IEEE International Conference on Robotics and Biomimetics in Sanya, China. 15-18
Dec. 2007. 2228-2233. IEEE Xplore. Web. 12 May 2013.