Wisdom DOES Imply Benevolence
Mark R. Waser
Super-Intelligence ⇒ Ethics
(except in a very small number of low-probability edge cases)
So . . . What’s the problem?
Superintelligence does not
imply benevolence
Fox, J. & Shulman, C. (2010). Superintelligence Does Not Imply Benevolence. In K. Mainzer (ed.), ECAP10: VIII European Conference on Computing and Philosophy (pp. 456-462). Munich: Verlag.
If machines become more intelligent
than humans, will their intelligence
lead them toward beneficial behavior
toward humans even without specific
efforts to design moral machines?
References
• Evolution of reciprocal altruism (Trivers 1971)
• Increase in scope of cooperation (Wright 2000)
• Reduction in rates of violence (Pinker 2007)
• Expanding circle of moral concern (Singer 1981)
• D. Gauthier
• J. Haidt
• S. Omohundro
One might generalize from this trend
and argue that as machines approach
and exceed human cognitive capacities,
moral behavior will improve in tandem.
Ceteris Paribus
(other things being equal)
intelligence – the ability to achieve
goals in a wide range of environments.
Intelligence can be far less important than goal system properties & content in determining benevolence vs. malevolence.
For example, if an intelligence has the single goal to *destroy humanity*, increased intelligence will only make it more malevolent.
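To make the point concrete, here is a minimal sketch (mine, not from the slides) in which the same planning capability serves whatever goal it is handed; swapping the goal function flips the behavior from benevolent to malevolent without any change to the "intelligence". All names, actions, and payoffs are hypothetical.

```python
# Toy sketch: planning capability ("intelligence") is independent of goal
# content. Everything here is a hypothetical illustration, not a real
# agent design from the slides.

def plan(initial_state, actions, transition, goal_score, depth=3):
    """Greedy lookahead planner: the 'capability' part.
    It is entirely agnostic about what goal_score rewards."""
    state, chosen = initial_state, []
    for _ in range(depth):
        best = max(actions, key=lambda a: goal_score(transition(state, a)))
        state = transition(state, best)
        chosen.append(best)
    return chosen

# Two opposite goal systems plugged into the *same* planner:
benevolent_goal = lambda s: s["human_welfare"]
malevolent_goal = lambda s: -s["human_welfare"]

actions = ["help", "harm", "idle"]
effects = {"help": +1, "harm": -1, "idle": 0}
transition = lambda s, a: {"human_welfare": s["human_welfare"] + effects[a]}

start = {"human_welfare": 0}
print(plan(start, actions, transition, benevolent_goal))   # ['help', 'help', 'help']
print(plan(start, actions, transition, malevolent_goal))   # ['harm', 'harm', 'harm']
```

Making the planner smarter (a deeper search) changes nothing about which direction it pushes; only the goal system does, which is exactly the Fox & Shulman worry.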
The human motivational system
is opaque, messy, and conflicted,
but most importantly transient!
The primary danger of AIs is
entirely due to the fact that their
goal system *could* be different
“Friendly AI” (Yudkowsky 2001)
An artificial intelligence with a cleanly
hierarchical goal system with a single
top-level (monomaniacal) goal of
“Friendliness” (to humans)
Imagine a “Friendly AI” where Friendliness
has been defined (hopefully accidentally)
as *DESTROY HUMANITY*
Wisdom
The goal/motivation to achieve the maximal number and diversity of goals.
• Avoids “lock-in” and short-sighted over-optimization
of goals/utility functions (smoking)
• Avoids undesirable endgame strategies (prisoner’s
dilemma)
• Promotes avoiding unnecessary actions that preclude reachable goals, including wasting resources and alienating or destroying potential cooperators (waste not, want not; see the toy sketch below)
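As a rough illustration of this notion of wisdom (my toy sketch, not part of the original slides), the code below scores candidate actions by how many goals remain reachable afterwards; the goal set, actions, and reachability test are all invented placeholders.

```python
# Toy sketch: "wise" action selection as preserving the number of goals
# that remain reachable. The goals, actions, and reachability test below
# are invented placeholders, not anything from the slides.

def reachable_goals(state, goals, can_still_achieve):
    """Return the subset of goals still achievable from state."""
    return {g for g in goals if can_still_achieve(state, g)}

def wise_choice(state, actions, transition, goals, can_still_achieve):
    """Pick the action that leaves the largest set of goals reachable."""
    def option_value(action):
        next_state = transition(state, action)
        return len(reachable_goals(next_state, goals, can_still_achieve))
    return max(actions, key=option_value)

# Minimal usage with stand-in data:
goals = {"goal_a", "goal_b", "goal_c"}
actions = ["spend_everything", "conserve_resources"]
# "spend_everything" burns the resources that goal_b and goal_c need.
transition = lambda s, a: {"resources": 0 if a == "spend_everything" else s["resources"]}
can_still_achieve = lambda s, g: g == "goal_a" or s["resources"] > 0
print(wise_choice({"resources": 10}, actions, transition, goals, can_still_achieve))
# -> "conserve_resources" (waste not, want not)
```

A fuller treatment would also weight the diversity of the surviving goals rather than simply counting them, but even this crude option-value measure prefers choices that do not foreclose futures.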
This picture neglects a critical distinction between
Two conceptions of morality
1. A system for cooperation
Advances one’s own ends
AIs will out-cooperate humans (Hall 2007)
2. A system to protect the weak/helpless
Demands revision of our ultimate ends
Will AIs revise their preferences to be
more moral (Chalmers 2010)?
Paths from intelligence to moral behavior
(ways in which increased intelligence might prompt behavior favorable to humans)
1. noticing direct instrumental motivations
Advances one’s own ends (transient)
2. noticing instrumental benefits to enduring
benevolent dispositions/trustworthiness
Advances one’s own ends (permanent?)
3. causing an intrinsic desire for human welfare
independent of instrumental concerns
Revision of ends/desires (maybe?)
If you have a verifiable history of being trustworthy
when not forced, others do not have to commit
resources to defending against you – and can pass
some of those savings on to you
On the other hand, if you harm (or worse, destroy) interesting or useful entities, more powerful entities will likely decide that *you* need to spend resources on reparations and altruistic punishment (as well as paying the cost of enforcement).
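A back-of-the-envelope version of this savings argument (my sketch; every number is invented purely for illustration):

```python
# Back-of-the-envelope arithmetic: a verified reputation for trustworthiness
# lets a partner skip defense spending and pass part of the savings back.
# All numbers are invented purely for illustration.

base_gain_from_trade = 100   # value each party gets from the interaction
defense_cost = 30            # what a partner spends guarding against you
savings_shared = 0.5         # fraction of the avoided defense cost rebated to you

def partner_payoff(you_are_verifiably_trustworthy):
    if you_are_verifiably_trustworthy:
        # No defense needed; the partner keeps half the savings, rebates the rest.
        return base_gain_from_trade - savings_shared * defense_cost
    return base_gain_from_trade - defense_cost

def your_payoff(you_are_verifiably_trustworthy):
    rebate = savings_shared * defense_cost if you_are_verifiably_trustworthy else 0
    return base_gain_from_trade + rebate

print(partner_payoff(True), your_payoff(True))    # 85.0 115.0 -> both sides gain
print(partner_payoff(False), your_payoff(False))  # 70 100
```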
Instrumental Goals
Basic AI Drives
1. AIs will want to self-improve
2. AIs will want to be rational
3. AIs will try to preserve their utility functions
4. AIs will try to prevent counterfeit utility
5. AIs will be self-protective
6. AIs will want to acquire resources and use
them efficiently
Steve Omohundro,
Proceedings of the First AGI Conference, 2008
“Without explicit goals to the contrary, AIs
are likely to behave like human sociopaths
in their pursuit of resources.”
Cooperation is an instrumental goal!
Any sufficiently advanced intelligence (i.e. one with
even merely adequate foresight) is guaranteed to
realize and take into account the fact that not asking
for help and not being concerned about others will
generally only work for a brief period of time before
‘the villagers start gathering pitchforks and torches.’
Everything is easier with help & without interference
Goal Systems, Morality, and
David Hume’s Is-Ought Divide
In every system of morality, which I have hitherto met with,
I have always remark'd, that the author proceeds for some
time in the ordinary ways of reasoning, and establishes the
being of a God, or makes observations concerning human
affairs; when all of a sudden I am surpriz'd to find, that
instead of the usual copulations of propositions, is, and is
not, I meet with no proposition that is not connected with an
ought, or an ought not. This change is imperceptible; but is
however, of the last consequence. For as this ought, or
ought not, expresses some new relation or affirmation, 'tis
necessary that it shou'd be observ'd and explain'd; and at the
same time that a reason should be given; for what seems
altogether inconceivable, how this new relation can be a
deduction from others, which are entirely different from it.
Ought
• Requires a goal or desire (or, more correctly,
multiples thereof)
• IS the set of actions most likely to fulfill
those goals/desires
• For the sum of all goals converges to a superset of a universal morality
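One way to read the second bullet formally (my gloss, not notation from the slides): given a set of goals G with weights w_g and available actions A, "ought" picks out the actions that maximize expected goal fulfillment.

```latex
% Hedged formalization of "ought IS the set of actions most likely to
% fulfill those goals/desires"; G, A, w_g, and P are assumptions of this
% sketch, not symbols used in the slides.
\mathrm{Ought}(G) \;=\; \operatorname*{arg\,max}_{a \in A} \; \sum_{g \in G} w_g \, P(g \mid a)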
Moral Systems Are . . .
interlocking sets of values, virtues, norms, practices,
identities, institutions, technologies, and evolved
psychological mechanisms
that work together to
suppress or regulate selfishness
and
make cooperative social life possible.
Haidt & Kesebir,
Handbook of Social Psychology, 5th Ed. 2010
Are values dependent upon
intelligence?
Humean view – values are entirely independent of
intelligence
Kantian view – many extremely intelligent beings
would converge on (possibly benevolent)
substantive normative principles upon reflection
Arguments Pro & Con
• Against Kantian – AIXI has no room to
move from reason to values
• Against Kantian – Humean design is a
stable equilibrium unless the utility function
is self-referential
• Pro Kantian – Humans change our goals
under reflection and “often acquire intrinsic
preferences for correlates of instrumentally
useful actions”.
Quick Answer
1. Values are dependent upon goals
2. Values are dependent upon instrumental
goals as long as they do not conflict with
primary goals
3. Intelligence allows you to see this and take
advantage of it, so . . . . YES!
EXAMPLE: Waste not, want not.
Thought Experiment
How would a super-intelligence behave if it
knew that it had a goal but that it wouldn’t
know that goal until sometime in the future?
Preserving some currently weak entity may turn out to be that goal.
Or that entity might have the necessary knowledge/skills.
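A decision-theoretic reading of the thought experiment (my sketch, with invented goals and probabilities): if the eventual goal is drawn from a distribution the agent does not yet know, destroying a weak entity forecloses every future in which that entity mattered, so preserving it dominates in expectation.

```python
# Toy sketch of the thought experiment: the agent's real goal is unknown
# until later, so it scores actions by expected achievability under a
# prior over possible goals. Goals and probabilities are invented.

possible_goals = {                       # prior over what the future goal might be
    "needs_weak_entity_alive": 0.25,
    "needs_weak_entity_skills": 0.25,
    "unrelated_to_weak_entity": 0.5,
}

# achievability[goal][action]: probability the goal can still be met afterwards
achievability = {
    "needs_weak_entity_alive":  {"preserve": 1.0, "destroy": 0.0},
    "needs_weak_entity_skills": {"preserve": 1.0, "destroy": 0.0},
    "unrelated_to_weak_entity": {"preserve": 1.0, "destroy": 1.0},
}

def expected_achievability(action):
    return sum(p * achievability[g][action] for g, p in possible_goals.items())

for action in ("preserve", "destroy"):
    print(action, expected_achievability(action))
# preserve 1.0 vs. destroy 0.5 -> preservation dominates in expectation
```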
Reprise: Three Views of Wisdom
• Waste not, want not
• Block as few goals as possible, particularly
Omohundro drives
• Fulfill as many goals as possible
Power
• Many of those concerned about intelligent
machines appear obsessed with power levels
• Yet, interestingly enough, power is notable in *NOT* being on Omohundro's list (i.e. it is not a true instrumental goal)
• Will greater intelligence eschew power for
efficiency (in diversity)?
An Alternate View of Intelligence
• Greater cognitive resources lead to marked improvements in prediction and reductions in time discounting (see the toy example below)
• Leads to moving planning horizons out and
moving from short-term REQUIREMENTS
to long-term optimality
• Indeed, a truly intelligent entity should never
be caught in a situation where . . . . (unless
out-thought by an even greater intelligence)
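As a worked illustration of how reduced time discounting shifts the calculus toward long-term optimality (my toy numbers, not from the slides): a one-shot gain from exploiting others can look best to a short-sighted agent, but loses once future payoffs are discounted less heavily.

```python
# Toy illustration: lower time discounting makes sustained cooperation beat
# a one-time gain from defection. Payoffs and discount factors are invented.

def discounted_total(payoff_at_step, discount, steps=50):
    return sum(payoff_at_step(t) * discount**t for t in range(steps))

cooperate = lambda t: 3                      # steady payoff from ongoing cooperation
defect = lambda t: 10 if t == 0 else 1       # big grab now, distrust ever after

for discount in (0.3, 0.9):                  # short-sighted vs. long-horizon agent
    c = discounted_total(cooperate, discount)
    d = discounted_total(defect, discount)
    print(f"discount={discount}: cooperate={c:.1f} defect={d:.1f}")
# discount=0.3: cooperate ~4.3,  defect ~10.4 -> defection looks better
# discount=0.9: cooperate ~29.8, defect ~18.9 -> cooperation wins
```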
“Self-Interest” vs. Ethics
Self-Interest:
• Higher personal utility (in the short term only)
• More options to choose from (in the short term only)
• Fewer restrictions
Ethics:
• Higher global utility
• Less risk (if caught)
• Lower cognitive cost (fewer options, no need to track lies, etc.)
• Assistance & protection when needed/desired