ROBOTICS IN FUTURE WARFARE
Do we need robot morality?
WHAT IS INTELLIGENCE?
1. Pragmatic definition of intelligence: "an intelligent system is a system with the ability to act appropriately (or make an appropriate choice or decision) in an uncertain environment."
   – An appropriate action (or choice) is that which maximizes the probability of successfully achieving the mission goals (or the purpose of the system)
2. Intelligence need not be at the human level
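To make the pragmatic definition concrete, here is a minimal decision-theoretic sketch in Python; the actions, environment states, and probabilities are invented for illustration, not taken from the slides.

# A minimal sketch of the pragmatic definition above: an "intelligent"
# system picks the action that maximizes the probability of achieving
# the mission goal under an uncertain environment. All actions, states,
# and probabilities below are hypothetical.

ACTIONS = ["advance", "hold", "retreat"]

# P(mission success | action, environment state) -- invented numbers.
SUCCESS_PROB = {
    ("advance", "clear"): 0.9, ("advance", "hostile"): 0.2,
    ("hold",    "clear"): 0.5, ("hold",    "hostile"): 0.6,
    ("retreat", "clear"): 0.3, ("retreat", "hostile"): 0.7,
}

# The system's belief over the uncertain environment state.
BELIEF = {"clear": 0.4, "hostile": 0.6}

def expected_success(action):
    """Probability of mission success, averaged over the belief."""
    return sum(p * SUCCESS_PROB[(action, state)] for state, p in BELIEF.items())

def appropriate_action():
    """The 'appropriate' choice in the slide's sense."""
    return max(ACTIONS, key=expected_success)

print(appropriate_action())  # -> "hold" (expected success 0.56)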
Human-Robot Interaction:
• interaction
• intelligence
• morality
• consciousness?
Robot Morality is a relatively new research area which is becoming very popular because of military and assistive robotics.
WHY ROBOT MORALITY?
• Robots are becoming technically extremely sophisticated.
• These robots live in human environments and can physically harm humans.
Military unmanned vehicles are robots
• The emerging robot is a machine with sensors, processors, and effectors, able to perceive the environment, have situational awareness, make appropriate decisions, and act upon the environment
• Various sensors: active and passive optical and ladar vision, acoustic, ultrasonic, RF, microwave, touch, etc.
• Various effectors: propellers, wheels, tracks, legs, hybrids
• Space, air, ground, water
Ethical concerns: Robot behavior
• How do we want our intelligent systems to behave?
• How can we ensure they do so?
• Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction,
allow a human being to come to harm.
2. A robot must obey orders given it by human beings except
where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such
protection does not conflict with the First or Second Law.
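As a hedged illustration only: the Three Laws are often read computationally as a strictly ordered filter over candidate actions. A minimal sketch in Python; every predicate on the hypothetical `world` object is an invented stand-in for the perception and prediction machinery the Laws presuppose.

# A sketch of one common computational reading of Asimov's Three Laws:
# a lexicographically ordered filter over candidate actions. All
# predicates on `world` are hypothetical stand-ins.

def permitted(action, world):
    # First Law: never injure a human (or allow harm through inaction).
    if world.harms_human(action):
        return False
    # Second Law: obey human orders, unless obeying conflicts with Law 1.
    if world.disobeys_order(action) and not world.obeying_would_harm_human(action):
        return False
    # Third Law: protect own existence, unless that conflicts with Laws 1-2.
    if world.endangers_self(action) and world.safer_alternative_exists(action):
        return False
    return True

def choose(candidate_actions, world):
    """Return the first candidate the Three Laws permit, or None."""
    return next((a for a in candidate_actions if permitted(a, world)), None)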
Ethical concerns: Human behavior
1. Is it morally justified to create intelligent systems with these constraints?
   – As a secondary question, would it be possible to do so?
2. Should intelligent systems have free will? Can we prevent them from having free will?
3. Will intelligent systems have consciousness? (Strong AI)
   – If they do, will it drive them insane to be constrained by artificial ethics placed on them by humans?
4. If intelligent systems develop their own ethics and morality, will we like what they come up with?
Department of Defense (DOD) PATH TOWARD AUTONOMY
A POTPOURRI OF MILITARY ROBOTS
• Many taxonomies have been used for robotic air, ground, and water vehicles: based on size, endurance, mission, user, C3 link, propulsion, mobility, altitude, level of autonomy, etc.
All autonomous future military robots will need morality, and so will household and assistive robots.
WHICH TECHNOLOGIES ARE RELATED TO ROBOT MORALITY?
• Various control system architectures:
  • deliberative,
  • reactive,
  • hybrid
• Various command, control, and communications systems:
  • cable,
  • fiber optic,
  • RF,
  • laser,
  • acoustic
• Various human/machine interfaces:
  • displays,
  • telepresence,
  • virtual reality
• Various theories of intelligence and autonomy:
  • evolutionary,
  • probabilistic,
  • learning,
  • developmental,
  • cognitive
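Of the control architectures listed above, the hybrid style is commonly described as a fast reactive layer that can veto or override a slower deliberative planner (e.g. for safety). A minimal sketch, with all class and method names invented for illustration:

# A minimal sketch of the "hybrid" control architecture named above:
# a slow deliberative planner proposes actions, and a fast reactive
# layer can override them. All names here are hypothetical.

class DeliberativePlanner:
    def next_action(self, state):
        # Placeholder for planning toward the mission goal.
        return "move_toward_goal"

class ReactiveLayer:
    def override(self, state, proposed):
        # Placeholder reflex: stop immediately if an obstacle is close.
        if state.get("obstacle_distance_m", 10.0) < 0.5:
            return "emergency_stop"
        return proposed

class HybridController:
    def __init__(self):
        self.planner = DeliberativePlanner()
        self.reflexes = ReactiveLayer()

    def step(self, state):
        proposed = self.planner.next_action(state)
        return self.reflexes.override(state, proposed)

controller = HybridController()
print(controller.step({"obstacle_distance_m": 0.2}))  # -> "emergency_stop"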
Can we build morality without
intelligence?
Morality for non-military robots that deal
directly with humans.
The Tokyo University of Science: Saya
Robots that look human
• "Robots that look human tend to be a big hit
with young children and the elderly,"
– Hiroshi Kobayashi, Tokyo University of Science
professor and Saya's developer, said yesterday.
• "Children even start crying when they are
scolded."
13
Human-Robot Interaction with human-like humanoid robots
• "Simply turning our grandparents over to teams of robots abrogates our society's responsibility to each other, and encourages a loss of touch with reality for this already mentally and physically challenged population,"
  – Kobayashi said.
Can robots replace humans?
• Noel Sharkey, robotics expert and professor at the University of Sheffield, believes robots can serve as an educational aid in inspiring interest in science, but they can't replace humans.
Robot to help people?
http://news.xinhuanet.com/english/2009-03/12/content_10995694.htm
• Kobayashi says Saya is just meant to help people and warns against getting hopes up too high for its possibilities.
• "The robot has no intelligence. It has no ability to learn. It has no identity," he said. "It is just a tool."
Receptionist robots
MechaDroyd Typ C3
Business Design, Japan
What kind of morality do we expect from:
- Robot for the disabled?
- Receptionist robot?
- Robot housemaid?
- Robot guide?
Human-Robot Interaction: Robots for the elderly in Japan
Jobs for robots
http://uk.reuters.com/article/idUKT27506220080408
• TOKYO (Reuters) - Robots could fill the jobs of 3.5 million people in graying Japan by 2025, a thinktank says, helping to avert worker shortages as the country's population shrinks.
Robots to fill jobs in Japan
• Japan faces a 16 percent slide in the size of its workforce by 2030 while the number of elderly will mushroom, the government estimates, raising worries about who will do the work in a country unused to, and unwilling to contemplate, large-scale immigration.
Robots to fill jobs in Japan
• The thinktank, the Machine Industry Memorial Foundation, says robots could help fill the gaps, ranging from microsized capsules that detect lesions to high-tech vacuum cleaners.
Robots to fill jobs in Japan
• Rather than each robot replacing one person, the foundation said in a report that robots could make time for people to focus on "more important things."
What is more important than work?
• What kind of "more important things"?
• This is an ethical question.
Using robots that monitor the health of older people in Japan
"Japan could save 2.1 trillion yen ($21 billion) of elderly insurance payments in 2025 by using robots that monitor the health of older people, so they don't have to rely on human nursing care," the foundation said in its report.
Plans for robot nursing in Japan
• What are the consequences for relying on
robot nursing?
• This is an ethical question.
Assistive Robots
• Caregivers would save more than an hour a day if robots:
1. helped look after children,
2. helped older people,
3. did some housework,
4. read books out loud,
5. helped bathe the elderly.
How will children and the elderly respond?
1. How will children and the elderly react to robots taking "care" of them?
2. This is an ethical question.
Seniors in Japan
– "Seniors are pushing back their retirement until
they are 65 years old,
– day care centers are being built so that more
women can work during the day,
– and there is a move to increase the quota of
foreign laborers.
– But none of these can beat the shrinking
workforce,"
• said Takao Kobayashi, who worked on the
study.
Seniors in Japan
"Robots are important because they could help
in some ways to alleviate such shortage of the
labor force."
Seniors in Japan
• How far will they alleviate such shortage of
the labor force?
• And with what consequences?
• This is an ethical question.
Seniors in Japan
• Kobayashi said change was still needed for robots to make a big impact on the workforce.
• "There's the expensive price tag, the functions of the
robots still need to improve, and then there are the
mindsets of people," he said.
• "People need to have the will to use the robots."
Seniors in Japan
The "mindsets of people": This is THE ethical question!
Entertainment
robots
First robots in Entertainment
• Neologism derived from the Czech noun "robota", meaning "labor"
• Contrary to popular opinion, not originated by (but first popularized by) Karel Capek, the author of RUR
• Originated by Josef Capek, Karel's older brother (a painter and writer)
• "Robot" first appeared in Karel Capek's play RUR, published in 1920
• Some claim that "robot" was first used in Josef Capek's short story Opilec (The Drunkard), published in the collection Lelio in 1917, but the word used in Opilec is "automat"
• Robots revolt against their human masters – a cautionary lesson now as then
WHAT IS A ROBOT?
• Many taxonomies
• Control taxonomy
  • Pre-programmed (automatons)
  • Remotely-controlled (telerobots)
  • Supervised autonomous
  • Autonomous
• Operational medium taxonomy
  • Space
  • Air
  • Ground
  • Sea
  • Hybrid
• Functional taxonomy
  • Military
  • Industrial
  • Household
  • Commercial
• Etc.
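As an aside, the taxonomies above are orthogonal axes, so one natural encoding is a record with one enum per axis. A sketch in Python; the enum members come from the slide, while the Robot record and the example vehicle are invented for illustration:

# One enum per taxonomy axis from the slide; the Robot dataclass and
# the example instance are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto

class Control(Enum):
    PRE_PROGRAMMED = auto()        # automatons
    REMOTELY_CONTROLLED = auto()   # telerobots
    SUPERVISED_AUTONOMOUS = auto()
    AUTONOMOUS = auto()

class Medium(Enum):
    SPACE = auto()
    AIR = auto()
    GROUND = auto()
    SEA = auto()
    HYBRID = auto()

class Function(Enum):
    MILITARY = auto()
    INDUSTRIAL = auto()
    HOUSEHOLD = auto()
    COMMERCIAL = auto()

@dataclass
class Robot:
    control: Control
    medium: Medium
    function: Function

# Example: a supervised-autonomous military air vehicle.
uav = Robot(Control.SUPERVISED_AUTONOMOUS, Medium.AIR, Function.MILITARY)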
Entertainment
http://www.thepartypups.co/
Sony: Aibo
Football
RoboCup
"Love robots" in Japan
http://jankcl.wordpress.com/2007/08/12/lovecom-18/
EMA (Eternal Maiden Actualization) in Japan
http://www.fun-on.com/technology_robot_girlfriend.php
What kind of intelligence and morality would you expect from an ideal entertainment robot?
Why Ethics of Robots?
Why Ethics of Robots?
1. Robots behave according to rules we program
2. We are responsible for their behavior
3. But as they are "autonomous" they can "decide" what to do or not do in a specific situation
4. This is the human/robot moral dilemma
Ethics of Robots: West and East
Roughly speaking:
1. Europe: Deontology (Autonomy, Human Dignity, Privacy, Anthropocentrism): Scepticism with regard to robots
2. USA (and the Anglo-Saxon tradition): Utilitarian Ethics: will robots make "us" happier?
3. Eastern Tradition (Buddhism): Robots as one more partner in the global interaction of things
Ethics & Robots: West and East
• Morality and Ethics:
1. Ethics as critical reflection (or problematization)
of morality
2. Ethics is the science of morals as robotics is the
science of robots
Concrete moral traditions
• Different ontic or concrete historical moral traditions, for instance:
1. In Japan:
   1. Seken (traditional Japanese morality),
   2. Shakai (imported Western morality),
   3. Ikai (old animistic tradition)
2. In the "Far West":
   1. Ethics of the Good (Plato, Aristotle),
   2. Christian Ethics,
   3. Utilitarian Ethics,
   4. Deontological Ethics (Kant)
Ethics & Robots: Ontological Dimensions
• Ontological dimension: Being or (Buddhist)
– Nothingness as the space of open possibilities that allow
us to critizise ontic moralities
• Always related to basic moods (like sadness,
happiness, astonishment, …)
– through which the uniqueness of the world and human
existence is experienced (differently in different cultures)
Asimo's evolution
http://www.rob.cs.tu-bs.de/teaching/courses/seminar/Laufen_Mensch_vs_Roboter/
If the robot looks like a human, do we have different expectations?
Would you "kill" a robot car?
Would you "kill" a robot insect that would react with squeaky noises and escape in panic?
Would you "kill" a robot biped that would react by begging you to save its life?
Why Ethics of Robots?
Why Ethics of Robots?
• Ethics is thinking about human rules of good/bad behavior:
1. Towards each other
2. Towards non-human living beings
3. Towards the environment
4. Towards artificial products
5. Towards other societies or nations
6. Towards God or gods (culture-dependent)
AA versus AC versus AE versus AI?
• Artificial Agency (AA)
• Artificial Consciousness (AC)
• Artificial Ethics (AE)
• Artificial Intelligence (AI)
… our interaction with them;
… and our ethical relation to them.
ARTIFICIAL CONSCIOUSNESS
Artificial X
• One kind of definition-schema:
• Creating machines which perform in ways which require X when humans perform in those ways…
  – (or which justify the attribution of X?)
• 'Outward' performance, versus psychological reality 'within'?
X = Intelligence, Life, Morality, etc.
Artificial Consciousness
• Artificial Consciousness (AC):
  • creating machines which perform in ways which require consciousness when humans perform in those ways (?)
• Where is the psychological reality of consciousness in this?
  • 'functional' versus 'phenomenal' consciousness?
Shallow and deep AC research
• Shallow AC – developing functional replications of
consciousness in artificial agents
– Without any claim to inherent psychological reality
• Deep AC – developing psychologically real
(‘phenomenal’) consciousness
Continuum or divide?
• Continuum or divide? (discrete or analog?)
  – Is deep AC realizable using current computationally-based technologies (or does it require biological replications)?
  – Will it require Quantum Computing or biology-like computing?
• Thin versus thick phenomenality
  – (See S. Torrance, 'Two Concepts of Machine Phenomenality', to be submitted, JCS)
Real versus simulated AC: an ethically significant boundary?
1. Psychologically real versus just simulated artificial consciousness…
   -> This appears to mark an ethically significant boundary
   • (perhaps unlike the comparable boundary in AI?)
• Not to deny that debates like the Chinese Room have aroused strong passions over many years…
• Working in the area of AC (unlike working in AI?) puts special ethical responsibilities on the shoulders of researchers
Techno-ethics
• This takes us into the area of techno-ethics –
  – Reflection on the ethical responsibilities of those who are involved in technological R&D (including the technologies of artificial agents (AI, robotics, MC, etc.))
• Broadly, techno-ethics can be defined as:
  – Reflection on how we, as developers and users of technologies, ought to use such technologies to best meet our existing ethical ends, within existing ethical frameworks
• Much of the ethics of artificial agent research comes under the general techno-ethics umbrella
From techno-ethics to artificial ethics
• What’s special about the artificial agent research is that
the artificial agents so produced may count (in various
senses) as ethical agents in their own right
– This may involve a revision of our existing ethical conceptions
in various ways
– Particularly when we are engaged in research in
(progressively deeper) artificial consciousness
• Bearing this in mind, we need to distinguish between
techno-ethics and artificial ethics
– (The latter may overlap with the former)
Techno-ethics – our responsibility for our creations
Artificial ethics – what ethics we will build into future robots
ARTIFICIAL ETHICS
Towards artificial ethics (AE)
• A key puzzle in AE
  – Perhaps ethical reality (or real ethical status) goes together with psychological reality?
Can a robot be ethical if he is not psychologically similar to you?
Shallow and deep AE
• Shallow AE –
  1. Developing ways in which the artificial agents we produce can conform to, or simulate, the ethical constraints we believe desirable (see the sketch after this slide)
  2. (Perhaps a sub-field of techno-ethics?)
  [You do not want your robot to hurt humans (or other robots?)]
• Deep AE –
  – Creating beings with inherent ethical status?
  • Rights of robots, rights of human "owners" of robots?
  • Responsibilities of robots, responsibilities of humans towards robots?
• The boundaries between shallow and deep AE may be perceived as fuzzy
  – And may be intrinsically fuzzy…
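A hedged sketch of what shallow AE might look like in practice: the robot possesses no ethics of its own; it merely screens candidate actions through designer-imposed constraints. All function names and fields below are hypothetical:

# "Shallow AE" as described above: designer-imposed constraints filter
# the robot's candidate actions before execution. Everything here is
# an invented example, not a real robotics API.

def no_harm_to_humans(action):
    return not action.get("risks_human_injury", False)

def no_harm_to_robots(action):
    return not action.get("risks_robot_damage", False)

CONSTRAINTS = [no_harm_to_humans, no_harm_to_robots]

def ethically_filtered(candidate_actions):
    """Keep only candidate actions that every imposed constraint permits."""
    return [a for a in candidate_actions if all(c(a) for c in CONSTRAINTS)]

print(ethically_filtered([
    {"name": "hand_over_cup", "risks_human_injury": False},
    {"name": "swing_arm_fast", "risks_human_injury": True},
]))  # -> only "hand_over_cup" survives the filter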
Proliferation of new technologies in the world
• A reason for taking this issue seriously:
  – AA, AC, etc. as potential mass-technologies
• Tendency for successful technologies to proliferate across the globe
  – What if AC becomes a widely adopted technology?
• This should raise questions both:
  – of a techno-ethical kind;
  – and of a kind specific to AE
1. Everybody would like to have a robot slave.
2. Every educated/rich Roman had a slave.
3. Every professor in the 19th century had a maid.
Instrumentality
• Instrumental versus intrinsic stance
  – Normally we take our technologies as our tools or instruments
• Instrumental/intrinsic division in relation to psychological reality of consciousness?
• As we progress towards deep AC there could be a blurring of the boundaries between the two…
  – (already seen in a small way with emerging 'caring' attitudes of humans towards 'people-friendly' robots)
• This is one illustration of the move from 'conventional' techno-ethics to artificial ethics
Instrumental – the robot is just a device.
Intrinsic – if an old lady has a robot that she loves, her children cannot just throw the old robot into the garbage can.
Artificial Ethics (AE)
• AE could be defined as
– The activity of creating systems which perform in ways which
imply (or confer) the possession of ethical status when humans
perform in those ways. (?)
• The emphasis on performance could be questioned
• What is the relation between AE and Artificial
Consciousness (AC)?
• What is ethical (moral) status?
Two key elements of moral status of a robot
1. Can the robot harm the community?
2. Can the community harm the robot?
[Diagram: X (one moral agent) is a member of the community (the totality of moral agents)]
Two key elements of X's moral status (in the eyes of Y)
• (a) X's being the recipient or target of moral concern by Y (moral consumption) [Y -> X]
• (b) X's being the source of moral concern towards Y (moral production) [X -> Y]
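These two relations can be rendered as a tiny data model; a sketch in Python, with the Agent structure and the declared "concern" edges invented purely for illustration:

# The slide's two relations: X is a moral consumer relative to Y if Y
# directs moral concern at X, and a moral producer if X directs moral
# concern at Y. The "concern" edges are just declared data.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    concern_for: set = field(default_factory=set)  # names this agent morally considers

def moral_consumer(x, y):   # [Y -> X]
    return x.name in y.concern_for

def moral_producer(x, y):   # [X -> Y]
    return y.name in x.concern_for

human = Agent("human", concern_for={"robot"})
robot = Agent("robot", concern_for=set())

print(moral_consumer(robot, human))  # True: the human has concern for the robot
print(moral_producer(robot, human))  # False: the robot has none toward the human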
Ethical status in the absence of consciousness
1. Trying to refine our conception of the relation between AC and AE
2. What difference does consciousness make to artificial agency?
3. In order to shed light on this question we need to investigate
   – the putative ethical status of artificial agents (AAs) when (psychologically real) consciousness is acknowledged to be ABSENT.
A retired general has a super-intelligent robot that does not look like a human and is not psychologically humanoid. Can he dismantle the robot to pieces for fun? Can he shoot at it, since he paid for it?
Our ethical interaction with nonconscious artificial agents…
• ?? Could non-conscious artificial agents have
genuine moral status …
• (a) As moral consumers?
– (having moral claims on us)
• (b) As moral producers?
– (having moral responsibilities towards us (and
themselves))
A dog or horse that kills a human is ordered by the law to be killed. Should a robot that kills a human be killed?
A Strong View of AE
• 'Psychologically real' consciousness is necessary for AAs to be considered BOTH
  (a) as genuine moral consumers
  AND
  (b) as genuine moral producers
• AND there are strong constraints on what counts as 'psychologically real' consciousness.
• So, on the 'strong' view, non-conscious AAs will have no real ethical status
The MIT "strong AI researchers" will now be in trouble; explain why.
• One way to weaken the strong view:
– by accepting weaker criteria for what counts as
‘psychologically real’ consciousness –
– e.g. by saying ‘Of course you need consciousness
for ethical status, but soon robots, etc. will be
conscious in a psychologically real sense.’
A weaker view of AE
• Psychologically real consciousness is NOT
necessary for an Artificial Agent (AA) to be
considered
– (a) as a genuine moral producer
• (i.e. as having genuine moral responsibilities)
• But it may be necessary for an AA to be considered
– (b) as a genuine moral consumer
• (i.e. as having genuine moral claims on the moral community)
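For contrast, the strong and the weaker views can be stated compactly in predicate-logic form; the symbols C (psychologically real consciousness), MC (genuine moral consumer), and MP (genuine moral producer) are introduced here purely for illustration:

% Compact predicate-logic rendering of the two views (symbols invented here).

% Strong view: consciousness is necessary for BOTH roles.
\forall x\, \bigl[ \mathit{MC}(x) \lor \mathit{MP}(x) \;\rightarrow\; C(x) \bigr]

% Weaker view: consciousness may be necessary for moral consumption,
% but is not necessary for moral production.
\forall x\, \bigl[ \mathit{MC}(x) \rightarrow C(x) \bigr]
\quad\text{while}\quad
\neg\, \forall x\, \bigl[ \mathit{MP}(x) \rightarrow C(x) \bigr]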
A version of the weaker view
A version of the weaker view is to be found in:
1. Floridi, L. and Sanders, J. 2004. On the Morality of Artificial Agents, Minds and Machines, 14(3): 349-379.
Floridi & Sanders: Some (quite 'weak'* kinds of) artificial agents may be considered as having a genuine kind of moral 'accountability'
• even if not moral 'responsibility' in a full-blooded sense
  – * i.e. this kind of moral status may attach to such agents quite independently of their status as conscious agents
Examining the strong view
• See Steve Torrance, "Ethics and Consciousness in Artificial Agents", Artificial Intelligence and Society
• Being a fully morally responsible agent requires
  1. empathetic intelligence or rationality;
  2. moral emotions or sensibilities
• These seem to require the presence of psychologically real consciousness
• BUT….
Shallow artificial ethics: a paradox
• Paradox:
  – Even if not conscious, we will expect artificial agents to behave 'responsibly' –
  – To perform 'outwardly' to ethical standards of conduct
• This creates an urgent and very challenging programme of research for now… developing appropriate 'shallow' ethical simulations…
1. How can you make a robot responsible for its actions if it has no real morality?
2. If it has real morality, you cannot kill it.
Who is responsible: robot or the designer?
• Locus of responsibility
• Where would the locus of responsibility of such systems lie?
  – For example, when they 'break down', give wrong advice, etc.?
• On current consensus: with designers and operators, rather than with the AA itself.
• If only with human designers/users, then such 'moral' AAs don't seem to have genuine moral status – even as moral producers?
  BUT…
1. Is Alan responsible if his robot insults the US President during a visit?
2. Is the robot responsible?
3. Is PSU responsible?
4. Perkowski?
Moral implications of increasing cognitive superiority of AAs
• We'll communicate with artificial agents (AAs) in richer and subtler ways
• We may look to AAs for 'moral' advice and support
• We may defer to their normative decisions
  – E.g. when a multiplicity of factors requires cognitive powers superior to humans'
• Automated 'moral pilot' systems?
Busy parents and professionals will rely on a robot to give moral advice to their children.
Whom do we blame for bad behavior of the children?
What if the child loves the robot more than Mommy?
Roman children often loved their Greek slave teachers more than their parents.
Non-conscious AAs as moral producers
• None of these properties seem to require consciousness
• So the strong view seems to be in doubt?
• Perhaps non-conscious AAs can be genuine moral producers
Killing a slave or "low-class" people was accepted in the past.
• The question of 'When can we trust a moral judgment given by a machine?'
  – See the answer in: Blay Whitby, "Computing Machinery and Morality", submitted, AI and Society
So…
• Non-conscious artificial agents perhaps could be 'genuine' moral producers
  – At least in limited sorts of ways
• In contrast, in the paper 'Ethics and Consciousness in Artificial Agents' the author believes:
  • Having the capacity for genuinely morally responsible judgment and action requires a kind of empathic rationality
  • And it's difficult to see how such empathic rationality could exist in a being which didn't have psychologically real consciousness
• In any case, it will be a hard and complex job to ensure that the "robots designed for morality" will simulate moral production in an ethically acceptable way.
Non-conscious AAs as moral consumers
Non-conscious AAs as moral consumers
• What about non-conscious AAs as moral
consumers?
– (i.e. as candidates for our moral concern)?
– Our moral responsibility for a robot?
• Could it ever be rational for us to consider
ourselves as having genuine moral obligations
towards non-conscious AAs?
Consciousness and moral consumption
• At first sight – being a ‘true’ moral
consumer seems to require being
able to consciously experience pain,
distress, need, satisfaction, joy,
sorrow, etc.
– i.e. psychologically real consciousness
• Otherwise, why waste resources?
• Can we dispose of robots at our will when convenient?
Example of our responsibility for a robot: The case of property ownership
• AAs may come to have interests which we may be legally (and morally?) obliged to respect
• Andrew Martin – a robot in Bicentennial Man
  – Andrew acquires (through courts) legal entitlement to own property in his own 'person'
Bicentennial Man
• Household android is
acquired by Martin family
– christened Andrew
• His decorative products
– exquisitely crafted from
driftwood –
become highly prized
collectors' items
Bicentennial Man (cont)
• Andrew, arguably, has legal
rights to his property;
• It would be morally wrong for us not to
respect them (e.g. to steal from him)
• His rights to maintain his property
– (and our obligation not to infringe those rights)
… does not depend on our attributing
consciousness to him …
Bicentennial Man (cont)
A case of robot moral (not just legal) rights?
• Andrew, arguably, has moral, not just legal
rights to his property;
• Would it not be morally wrong for us not
to respect his legal rights?
– (morally wrong, e.g., to steal from him?)
Bicentennial Man (cont)
Does it matter if he is non-conscious?
• Arguably, Andrew’s moral rights to
maintain his property
– (and our moral obligation to not infringe those
rights)
… do not depend on our attributing
consciousness to him …
Bicentennial Man (cont)
• On the legal status of artificial agents, see
– David Calverley, “Imagining a Non-Biological Machine
as a Legal Person”,
• Submitted, Artificial Intelligence and Society
• For further related discussion of Asimov’s
Bicentennial Man, see
– Susan Leigh Anderson, “Asimov’s “Three Laws of
Robotics” and Machine Metaethics”
SuperIntelligent Robots?
Can developing Super-Intelligent Robots affect the whole human civilization and the fate of the Universe?
Hugo De Garis
The question is not whether we will design intelligent robots; the question is whether we should design gods who will supersede our intelligence and consciousness.
Artilects, Artilect wars?
TECHNOLOGY FORECASTING
• First order impacts: linear extrapolation – faster, better, cheaper
• Second and third order impacts: non-linear, more difficult to forecast
• Analogy: The automobile in 1909
  • Faster, better, cheaper than horse and buggy (but initially does not completely surpass previous technology)
  • Then industrial changes: rise of automotive industry, oil industry, road & bridge construction, etc.
Even having no intelligence and consciousness, new technologies like cars, TV, or computers affect our lives morally and intellectually.
Influence of cars on our lives!
• Then cars affected social changes:
  • clothing,
  • rise of suburbs,
  • family structure (teenage drivers, dating),
  • increasing wealth and personal mobility
• Then cars affected geopolitical changes:
  • oil cartels,
  • foreign policy,
  • religious and tribal conflict,
  • wars,
  • environmental degradation and global warming
Conclusions
1. We need to distinguish between shallow and deep AC and AE
2. We need to distinguish techno-ethics from artificial ethics
(especially strong AE)
3. There seems to be a link between an artificial agent’s status
as a conscious being and its status as an ethical being
4. A strong view of AC says that genuine ethical status in
artificial agents (both as ethical consumers and ethical
producers) requires psychologically real consciousness in such
agents.
Conclusions, continued
5. Questions can be raised about the strong view (automated ethical advisors; property ownership)
6. There are many important ways in which a kind of (shallow) ethics has to be developed for present-day and future non-conscious agents.
7. But in an ultimate, 'deep' sense, perhaps AC and AE go together closely
   – (see the paper 'Ethics and Consciousness in Artificial Agents', which defends the strong view much more robustly, as the 'organic' view.)
Sources of slides
Robert Finkelstein
Steve Torrance, Middlesex University, UK
Rafael Capurro (ラファエル・カプーロ)
http://www.capurro.de/home-jp.html
Steinbeis Transfer Institut – Information Ethics (STI-IE)
http://sti-ie.de
Cybernics
University of Tsukuba, Japan
http://www.cybernics.tsukuba.ac.jp/index.html
September 30, 2009
This is an expanded version of a talk given at a
conference of the ETHICBOTS project in
Naples, Oct 17-18, 2006.
See S. Torrance, 'The Ethical Status of Artificial Agents – With and Without Consciousness' (extended abstract), in G. Tamburrini and E. Datteri (eds), Ethics of Human Interaction with Robotic, Bionic and AI Systems: Concepts and Policies, Napoli: Istituto Italiano per gli Studi Filosofici, 2006.
See also S. Torrance, ‘Ethics and Consciousness in Artificial
Agents’, submitted to Artificial Intelligence and Society