Machine Ethics and the Progress of Artificial Intelligence
Micah Ainsley Brown, HND, MBCS
Centiment
[email protected]
Shane Pase, PhD
Fielding University
Matthew Price, PhD
Fielding University
Tunisha Singleton, PhD
Fielding University
Gareth Greenidge, Dip DigM
Centiment
[email protected]
Abstract
This paper discusses the current state of artificial moral agents and some of the economic, moral and
technological challenges associated with the fast-moving pace of AI: what it means for the workforce, as
well as questions of military use, sovereignty and progress around the development and use of artificial
moral agents.
Also considered at length are the legal implications of the lack, in many countries, of any legal structure
around artificial moral agents and other artificial intelligence tools.
The paper is an opinion piece more than an academic paper; the writing voice is therefore that of the
author rather than academic vernacular.
Introduction
Human experience is what informs decisions: how a person in a management position communicates and
executes on what needs to be done is influenced by life experiences. Most people actually prefer to
exercise their humanity and the ability to genuinely help the people they work with. I personally choose
this over pure function and profiteering, treating people like numbers. Research has found that when you
treat people well at what appears to be the expense of profit, you actually do better as a business; in other
words, the immediate cost of treating people well is worth it, because the long-term return on a human
being who is respected is high.
For the first time since the pioneering era of Alan Turing, the Colossus and ENIAC machines, Konrad
Zuse, Arthur Samuel and the creation of Margaret Masterman's semantic nets for machine translation, we
have the right combination of factors for a quantum leap in the capabilities of artificial intelligence, one
that makes the production of an artificial moral agent possible.
What I would like to discuss here is the woeful inability of governments to create laws that govern this
technology to date and, more importantly, the one thing that international powers seem to have no
problem doing: weaponizing it.
My sentiments at the beginning of this article show my ability, as a human being who has been in
positions to make decisions, to empathize and sympathize with other human beings. In the next two to
three years we will see some parts of what I do replaced with AI, and some of the myriad small decisions
I make as a manager replaced with the cascaded mandates of a higher human adjudicator, executed
through layers of machine logic that dictate the treatment of humans in various forms: as users,
employees, partners and other groups of people.
We already see enough soullessness in corporate America when people are in charge and people execute
policy and process.
Now, imagine if all empathy disappears in the automated execution of decisions made by people focused
on profit.
Sovereignty, Legality and Moral Agents
What about military capabilities?
Aleppo, Nazi Germany, Mussolini, Iraq, North Korea.
Maybe even soon, depending on the electoral college, America.
People follow populist leaders blindly, and soldiers follow orders because of ingrained training and
sometimes because their interests are directly tied to the temporary direction of the despot; there is even a
term for it: superior orders.
However, the followers of dictators are still human, they choose, they protest, sometimes they resist.
Now imagine that the next despot had at his disposal an automated army.
An unstoppable untiring force that follows orders without question or protest, no matter how terrible,
inhumane, even self-destructive and counter-intuitive.
You don't have to imagine it, it already exists.
There is one final bulwark before we get to an apocalypse - the people who build these tools, us, the AI
and computer scientists of the world.
The great news is we are still human, and will be for a while.
We are trying to stop this.
What do you do to regulate this field? What is the field? What are the ins and outs?
Welcome to machine ethics 101.
I could write a white paper about this, it will be part of my PhD dissertation, but for the sake of ease I will
simplify it as much as I can.
At a very high level, there is roboethics, which describes how we should behave as we design artificial
intelligence. Then there is machine ethics, the behavior of the artificial moral agents (AMAs) themselves.
Roboethics
The rights of a machine as humans design them: robot rights are the moral obligations of society towards
its machines, similar to human rights or animal rights. These include the right to life and liberty, freedom
of thought and expression, and equality before the law. The issue has been considered by the Institute for
the Future and by the U.K. Department of Trade and Industry.
Roboethics concerns the moral behavior of humans as they design, construct, use and treat artificially
intelligent beings.
Issues that matter in relation to roboethics:
The things that matter in this field to us as human beings are privacy, dignity and the need for
transparency regarding the current level of advancement of machines that can think and the motives of
the humans who create them.
The things that matter to machines that are sentient (or anywhere near it) are the same as basic human and
civil rights (a bigger debate); the main concern is that, as these machines are created, those rights are
affected by the behavior of the machines towards humans, both their creators and those who interact with
them.
Artificial machines can invade our privacy. Laws need to be created to stop that from happening and to
govern the penalties to such machines and their creators if it occurs, penalties that are qualitatively
different from those in current privacy law.
Our dignity is under threat if powerful corporations and governments replace jobs that require human
empathy with machines that do not empathize, without consideration of the human effects, both economic
and psychological.
In order for the public to understand how to interact with such entities, laws must be created requiring
governments and other public (and private) entities to keep the public up to date on the development of
such machines, in a way similar to how financial disclosure laws currently operate, and specifying which
development areas are off limits.
These laws must be tempered with the rights of the machines themselves if they are anywhere near
sentient as we create them.
Machine Ethics
Machine ethics concerns the moral behavior of artificial moral agents (AMAs).
The high-level description of an artificial moral agent is that it is an intelligent piece of software that does
things we could do ourselves, or that interacts with humans in some way, shape or form.
The real definition is much more detailed.
This field is huge, but for the sake of this article I will focus on what you probably care about: how does
this affect you? Why should you care?
Well, the answer is that it is affecting you right now, and you should really care, especially if you have an
iPhone, use the internet, have a bank account or, generally, do anything at all.
You see, through companies like Akamai and Limelight, and of course the previously top-secret PRISM
program, your data is already being used to feed private and public entities that could take us toward the
singularity. Even if that does not come about in our lifetimes, steps towards it almost certainly will, and if
these laws do not exist, the effect on your daily life will be devastating.
Indulge me for a second. What if Siri accidentally posted your most private and sensitive pictures to
Facebook, or Echo started reproducing your most private conversations to other people, publicly, at
inappropriate times?
Under current US law, you could sue Amazon or Apple for invasion of privacy. However, the guilty
entities are AMAs, not human beings: they are not employees of Amazon or Apple and, more importantly,
they are your property from the point of purchase, and doli incapax (incapable of wrongdoing in law).
The extremely effective defense of these companies would be that these entities behaved that way
because that is how they were trained to behave, by you, their legal owner. This argument would
effectively reverse the case and place the actus reus (the guilty act) with you, against yourself. Moreover,
if these devices were offline at the time of the offense or malfunction, what is the ad quod damnum (the
damages claimed)? Who pays it? You? To yourself?
This is the tip of the iceberg. Because of the brain drain imposed by reductions in university budgets, a
lack of funding going to minority-owned companies and a general lack of understanding of computer
science among the public at large, people do not even know that these are the issues. There will only be
understanding when terrible consequences take place.
This is where we transition from generalized hypothetical ethics to murder and war.
The scenarios I am about to describe are such a concern that the international AI community has asked for
a blanket ban on autonomous weapons.
But don't take my word for it; the community's open letter calling for the ban is publicly available.
In short, without the laws I articulate in this article, and without the ban being asked for, international
superpowers are free to produce whatever terrible weapons they wish, with no restriction, in the dark, and
kill us all.
If you call this saber rattling craziness, I ask you, was Hiroshima? Was Nagasaki?
Nuclear experimentation by governments was rampant until the Nuclear Non-Proliferation Treaty. Only
then did real-world laws finally begin to catch up with the technology, by which point it was too late:
thousands of people had died, from the weapons themselves and from the side effects of a lack of
understanding of the underlying technology.
This is the next global arms race, except this time there is an additional variable: the time it takes for a
machine to become sentient. If that is misunderstood and misused from a military perspective in the same
way as previous technologies, it could result in a disastrous conflict in the short term.
Progress Towards Moral Agents
You see, the sentience quotient equivalence level has already been broken at a near-human level (at least
from a semantic-processing perspective) by IBM Watson. If this technology is applied to weapons
processing systems with mission-critical server resources, the device in question could very well achieve
pseudo-sentience without the knowledge of the system managers and cause a disaster very quickly.
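For readers unfamiliar with the term: the sentience quotient, as formalized by Robert Freitas, is the base-10 logarithm of a system's information-processing rate (in bits per second) divided by its processing mass (in kilograms), with the human brain usually quoted at roughly +13 on this scale. A minimal sketch of the calculation follows; the brain figures used here are illustrative assumptions, not settled measurements:

```python
import math

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    """Freitas's sentience quotient: log10 of processing rate per unit mass."""
    return math.log10(bits_per_second / mass_kg)

# Illustrative figures only: a ~1.4 kg human brain processing ~1e13 bits/s
# lands near the often-quoted human value of about +13.
human_sq = sentience_quotient(1e13, 1.4)
print(round(human_sq, 2))
```

Because the scale is logarithmic, a system must multiply its processing rate per kilogram tenfold to gain a single point, which is why comparisons between machines and brains on this scale are so coarse.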
Let me make this simple: if a "skynet" is run by people with bad agendas who have no idea what they are
doing, it may not launch nuclear weapons (even the clunkiest of fail-safe systems should prevent that,
hopefully), but it may start to kick voters off electoral rolls, suspend payments to government employees,
reassign military assets or redirect drones incorrectly (or deliberately); the list goes on.
Those are just the mild consequences of supervised artificial intelligence systems. The community is
worried about autonomous weapons; they are already here.
So if there is anything I want you to do because of this article, it is to wake up.
Read up on this stuff, it matters.
Conclusion
Clearly the development of AMAs is just around the corner. It is essential that governments across the
globe begin to develop legal structures before the development of these entities takes place, and AI as a
technology needs to be used carefully, with regard for its power.