Movie Night
Philosophy 100
Fall 2011
Danielson
I, Robot
Here are several questions to keep in mind while watching the film. These can be used for discussion on the night of
the film, or as writing prompts for those who watch on their own. If you are writing answers because you did not attend
the film, or because you are writing for additional extra credit, please answer one or more of the questions with a
two-page response. You have until the date of the next film to submit your responses.
1. Do you think that a computer, or any “machine,” can ever be intelligent? (In order to answer this, give a
definition of what intelligence is.) What might be a way, such as a test, to determine that a computer is
intelligent? (Feel free to explore ideas like “The Turing Test” [http://plato.stanford.edu/entries/turing-test/]
or John Searle’s Chinese Room thought experiment [http://www.iep.utm.edu/c/chineser.htm].)
2. Does the fact that Sonny and the other robots have human-like faces make us see them as more like
persons? That is, if they were standard desktop computers with the same internal functioning as the robots in
the film, would there be the same mistake or projection of ascribing personhood to them? In other words, are we
easily fooled by appearances?
3. VIKI argues that humans are endangering the planet due to our combination of rationality and emotion, and
thus that a more rational approach is needed to save us. She claims that humans need to be taken care of – the
creation protecting the creator. Do you think our irrational behavior needs some kind of rational control to
save us from ourselves? (A philosopher king, perhaps? Do you think Plato would agree with VIKI? Why?
Why not?)
4. Why does the Doctor think the Three Laws will eventually lead to revolution? Are his reasons convincing?
5. The dead scientist, in a voice-over, ponders whether “random segments of code” might amount to something
more: the “perceptual schema becoming consciousness,” “the difference engine becoming the search for truth,”
and personality simulations becoming “the bitter mote of the soul.” Is there any plausibility to the idea that
with enough hardware and the proper software we might stumble into having intelligent machines? Why or why not?
6. Gilbert Ryle uses the phrase “the Ghost in the Machine” in his book The Concept of Mind to describe
René Descartes’ dualism. (I know we haven’t read Descartes yet, but keep it in mind for later in the
semester.) Essentially, the idea is that humans have a dual nature because we are composed of two separate
substances. One substance is material (our bodies, which are like machines), and the other substance is
nonmaterial, something like a “soul” or a “ghost.” In what ways is this idea expressed in the film? Do you think
that you are one single, unified entity, or are you a dualistic combination? What makes you think this?
7. What are some of the potentially problematic features of computers that are intelligent, or, “worse,” ones
with free will? If a machine had free will, how would we know? How would we be able to distinguish between
its errors and its choices? Would this generally be good or bad? Why?
8. For society in general, do you think it is better, worse, or the same that we are incrementally becoming
more dependent upon technology of all types? (It seems that we are losing our ability to do
mathematics without calculators; we depend on electronic technology for academic research and paper
writing, etc.) What are the advantages and disadvantages of such reliance?
9. The film ends with human ingenuity and a ‘kindly’ robot working together to save humanity from rational
tyranny. Is the creation of intelligent machines a mistake given that their construction could make them a
formidable threat to our security and survival?
10. If there were intelligent machines, would we have to treat them as moral agents? What would be our moral
obligations to them? Would we be obligated to treat them like humans? Why? Why not?
11. What is the moral of the story?