Exploring the Limitations on Cognition in
Artificial Intelligence
(Image from ATCA, London. 2013)
The thing molded will not say to the molder, "Why did you make me
like this," will it? Or does not the potter have a right over the clay?
Book of Romans 9:21
Brad Myers
Abstract
Cognition is a complex system that involves acquiring knowledge through
reasoning, meta-reasoning, experience and understanding. In this report I
seek to understand some of its implications for advancements in Artificial
Intelligence, which will also be referred to as AI. By analyzing previous
research, schools of thought and physics, I seek to determine what
limitations on current AI technology keep it from attaining human-like
cognition. Through this process it has been found that the many
advancements in AI share similar shortcomings: (1) an inability to process
ambiguity; (2) an inability to learn what cannot be observed or expressed;
(3) an inability to make correct decisions regardless of their rationality or
coherence with probabilities; and (4) the same cognitive limitations that
constrain their creators' (humans') cognition. Although it does not seem
that AI will be able to possess a cognitive capacity that surpasses that of
humans, it would be wise to proceed with caution when researching and
implementing new technological systems in the field of AI.
Introduction
As improvements continue to be made in technology surrounding the field of Artificial
Intelligence, many people find themselves asking the question, "Will the created
overpower the creator?" At the heart of this question lies another: "Does the created
have the ability to overpower the creator?" For this research I have focused on the
latter question in order to draw conclusions about its implications for the former.
In considering whether or not Artificial Intelligence can overpower, or surpass,
humans, we will look at the issue of cognition: the ability to think and to connect
multiple thoughts into higher-order processing.
For the parameters of this paper I will set four main foundations in place as a basis for
research and conclusions. The first is that machines utilizing Artificial Intelligence
must possess a cognitive capacity that equals or exceeds that of the human race in
order to threaten human dominance. The second is that cognition is dependent upon,
and bound by, the laws of physics. The third is the philosophical notion that a thing is
not fully capable of understanding itself. The final foundation is that in order for
Artificial Intelligence to surpass humans in cognitive capacity and ability, humans
must first create it.
In the following sections we will explore the limitations that exist on the cognitive
abilities of Artificial Intelligence technologies, in order to argue that the human race is
not in danger of being overpowered or surpassed, in a cognitive sense, by machines of
our own creation.
Computation and Recognition
The computational abilities of computers are often compared to the capacity of the
human brain; popular media frequently claim that the computational power of the
average computer will surpass that of the human brain by the year 2025. These
predictions are often based upon Moore's Law, which states that the number of
transistors in an integrated circuit will double roughly every two years. Gordon E.
Moore, co-founder of Intel, introduced this observation in 1965, and it has proven to
be an accurate prediction. Yet while Moore's Law has held, and points to the raw
computing capacity of machines one day outweighing that of humans, it does not
account for higher-order cognitive abilities.
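Moore's projection is simple exponential growth, and a short sketch makes the arithmetic concrete. The baseline here (the 1971 Intel 4004 at roughly 2,300 transistors) is my own illustrative assumption, not a figure from the sources cited in this paper:

```python
def transistors(year, base_year=1971, base_count=2300):
    """Project transistor counts under Moore's Law: a doubling every
    two years from an assumed baseline (Intel 4004, ~2,300, 1971)."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011, 2025):
    print(year, round(transistors(year)))
```

Whatever the exact baseline, the point above stands: exponential growth in transistor counts says nothing by itself about ambiguity, common sense, or any other higher-order ability.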
Donald Norman (1997), then VP of research at Apple and Professor Emeritus at the
University of California, San Diego, points out that cognition relies on the notion of
ambiguity. Norman argues that ambiguity is what differentiates AI's ability to compute
large sums of information from the human brain's ability to interpret it, and he notes
that machines have not yet been able to process ambiguity in any study done thus far.
Norman also points out that ambiguity is crucial for the survival of a species.
If ambiguity is necessary for cognition and vital to a species' survival, then it is logical
to conclude that even computers that use transistors to imitate the neural networks of
the human mind, as we will discuss later, would be hard pressed to correctly interpret
enough situations to actually learn and process their surroundings effectively.
Researchers have also identified another quality that AI lacks but would need in order
to have cognition and surpass human abilities: common sense. In an article published
in Communications of the ACM, Ernest Davis and Gary Marcus point out that, despite
the many advancements we have seen in AI, we have not seen anything that remotely
resembles the common sense that humans, theoretically, possess (Davis, 2015).
The area that seems to be making the most progress towards giving AI machines
cognitive abilities is mathematical algorithms operating over transistors, the
components that process and retrieve information in technological systems. The idea is
that through such algorithms scientists can replicate pathways similar to the neural
networks of the human brain.
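As a rough sketch of what "replicating a neural pathway in an algorithm" means, here is a minimal artificial neuron in Python. The weights, bias and inputs are arbitrary illustrative values, not parameters from any system discussed in this paper:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through
    a sigmoid activation, loosely analogous to a neuron firing."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Chain two neurons into a tiny two-layer "pathway".
hidden = neuron([0.5, 0.9], [0.4, -0.6], 0.1)
output = neuron([hidden], [1.2], -0.3)
```

Even millions of such units only compute; nothing in the arithmetic itself interprets ambiguity, which is exactly the gap Norman identifies.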
However, researchers studying computational rationality have found that there are
limits to what algorithms can contribute to AI's cognitive abilities (Gershman, 2015).
They also posit that AI cannot make rational, or appropriate, decisions with its
bounded computing abilities.
A lack of common sense, the limits of what algorithms can contribute, and an inability
to process ambiguity all point to the same conclusion: computation does not equal
cognition, and an AI that can compute more information than the human brain is not
thereby able to process it in the ways the brain does.
Imitation and Cognition
Some research has shown that robots, or machines utilizing artificial intelligence, have
displayed human-like qualities such as hand waving and other replications of human
movement. MIT researchers Cynthia Breazeal and Brian Scassellati posit that although
imitation is a sophisticated form of social learning, robotic (AI) learning does not
demonstrate social characteristics and moves in only one direction, from demonstrator
to learner; there is no exchange of information between demonstrator and AI
(Breazeal). Breazeal and Scassellati do, however, state in their research that "Imitation
could one day play a role in understanding the social cognition of robots as they begin
to co-exist with people." This statement is presumptuous, resting as it does on the
inference that imitation is equal to cognition.
To demonstrate that imitation does not necessarily equal cognition, we are drawn into
the realm of linguistics. The Turing test, which is supposed to determine whether or
not a machine can exhibit intelligent behavior, relies on observations of the machine's
ability to use language and communicate with a demonstrator. Below is an illustration
of what the Turing test looks like in application.
This test is meant to show that, through the machine's ability to recall what it has
learned (as we will discuss in more detail later) and to imitate human language to
convey a message, it possesses intelligence, which for our purposes is interchangeable
with cognition.
Notice that at the bottom left-hand side of the image it states that researchers differ as
to whether the test actually determines intelligence or thinking. One such researcher is
David Premack (2004), who concludes that although imitation can be a representation
of intelligence, it is not the sole factor in the development of intelligence. In other
words, AI would have to prove much more than simply having a robot that can talk to
humans in order to be considered intelligent, or to have complete cognition.
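The gap between imitating language and understanding it can be seen in even a few lines of code. The following ELIZA-style responder is a hypothetical sketch of my own, unrelated to any system cited above; it "converses" by surface pattern-matching alone, with no model of meaning behind the words:

```python
import re

# Canned transformation rules: surface patterns only, no understanding.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "What makes you feel {0}?"),
    (r".*", "Please tell me more."),
]

def respond(utterance):
    """Return a canned reply by matching the first applicable pattern."""
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I am afraid of machines"))
# → Why do you say you are afraid of machines?
```

A judge chatting briefly with such a program might credit it with intelligence, which is precisely why passing an imitation game is a weak proxy for cognition.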
Another area in which we can draw the imitation connection is scientists' claim that AI
can actually learn for itself, as mentioned above. At universities across the country,
and the world, there have been attempts to create robots or computer systems that can
think or learn as humans do.
One such example is from the Robot and Cognition Lab at the University of Maryland,
where researchers have developed an academy to help robots learn. The problem is
that the robots have only ever been able to learn by watching what humans do and
imitating them (Knight, 2015); they have not been able to create any new gestures or
patterns of their own. If they cannot use their learning to create new items, thoughts or
gestures, then they will constantly have to be fed new information in order for AI to
advance. This indicates to me not only that AI lacks the ability to truly educate a robot
or to give it cognition, but also that its constant need for new information means it
could not thrive or continue without human intervention, and therefore would not be
able to surpass human ingenuity. It has also been noted by scholars that AI is more
efficient and effective when applied to specific tasks, like those in smartphones, rather
than to broader tasks like learning or using language (Falk, 2012).
After considering these differences of opinion among researchers, I conclude that
imitation in AI is not reflective of intelligence or of complete cognitive ability.
However, I would also posit that it does not disprove the possibility of cognition being
present in AI.
AI and the Decision-Making Process
Making decisions is one of the most important, and perhaps one of the most
complicated, things that a person does on a regular basis. An entire field of study has
emerged to learn more about decision-making and its implications for people's lives,
and I believe its findings bear on our question as well. Research on decision-making is
multi-disciplinary and can give us insights into AI cognition from the fields of
psychology and economics.
First we look, within the fields of economics and sociology, at what is called Rational
Choice Theory, which is briefly described in this YouTube clip (2012). The theory
explains that economies, and the predictions made by experts, are based on what
would happen if people behaved rationally. The problem is that this often doesn't
happen: for many different reasons, people behave irrationally. Let us first take a look
at what rationality is.
A decision is said to be rational if it adheres to the laws of probability and maximizes
utility while minimizing risk (Hastie, 2010). The advanced computational abilities of
AI would certainly allow for better and faster analysis of a situation's probabilities.
Why, then, do humans often make irrational choices, and why do those choices
sometimes work out for the better? The second question we may never be able to
answer, but the answer to why humans make irrational choices lies in what has been
termed Bounded Rationality.
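Hastie's definition of a rational decision can be written down directly: pick the option whose probability-weighted utility is highest. A minimal sketch follows; the gamble and its numbers are invented purely for illustration:

```python
def expected_utility(outcomes):
    """Expected utility: sum of utilities weighted by their probabilities."""
    return sum(p * u for p, u in outcomes)

# Each option maps to a list of (probability, utility) outcome pairs.
options = {
    "sure_thing": [(1.0, 50)],
    "gamble": [(0.5, 120), (0.5, 0)],
}

best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # → gamble (expected utility 60 beats 50)
```

Note what this rule demands: complete outcome lists and exact probabilities for every option. Real decision-makers rarely have either, which is where Bounded Rationality enters.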
Bounded Rationality holds that people are limited in how rational they can be by a
lack of information, limited cognitive abilities and a lack of time (Hastie, 2010). It is
here, within the definition of Bounded Rationality, that I think the strongest case can
be made for the limitations on AI cognition. Looking at the three limitations on
rationality in turn may help us formulate the argument.
Starting with the lack of time, we see an unavoidable parallel: whatever advancements
happen in AI, the abilities of the machines would be limited by time just as humans
are, which would limit AI's ability to be rational. This also bears on the inability to
process ambiguity discussed previously. If we as humans are limited in our ability to
process ambiguity, which irrevocably exists in this universe, then the limitations on AI
would be greater than those on the human mind, at least until a time came when AI
could process ambiguities.
The definition of Bounded Rationality also states that humans are cognitively limited,
which is unfortunately true. This is where one of our four foundations from the
beginning fits in. If something is not capable of fully understanding itself, and humans
have limited cognitive abilities, then human scientists and researchers cannot fully
understand the human brain or its cognitive abilities. If we cannot fully understand
how the brain works, then we cannot re-create it, and as we saw in the imitation
section of this paper, machines can only process and perform in the ways we tell them
to. Thus our limitations will, at least for the foreseeable future, remain AI's limitations
as well.
The third limitation in Bounded Rationality is fairly self-explanatory: if we lack
information to give to AI machines, then they will lack that information when making
decisions. In other words, all facets of Bounded Rationality would limit the cognitive
and processing abilities of AI, which would at the very least provide a level playing
field in the game of Man versus Machine.
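Herbert Simon's classic response to these three limits, not discussed in the sources above but standard in the bounded-rationality literature, is satisficing: committing to the first option that is good enough rather than exhaustively searching for the best. A sketch with invented job-offer numbers:

```python
def satisfice(options, utility, aspiration):
    """Satisficing: scan options in the order encountered and commit
    to the first whose utility meets the aspiration level."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # nothing was good enough; lower the aspiration and retry

offers = [("job_a", 48_000), ("job_b", 61_000), ("job_c", 75_000)]
pick = satisfice(offers, utility=lambda o: o[1], aspiration=60_000)
print(pick[0])  # → job_b: good enough, even though job_c pays more
```

The strategy deliberately ignores job_c; under limited time and information, stopping early is the rational thing for a bounded agent to do.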
Another thing to consider is that AI lacks heuristics and meta-reasoning capabilities
(Gershman, 2015). Heuristics sacrifice probabilities and logic for faster methods such
as recognition (when one option is recognized and the other is not) or availability
(when more information is available on one option, making it seem like the correct
choice). Although these heuristics would be illogical if we were using rationality as a
basis for comparison, they somehow lead to correct decisions in many cases (Hastie,
2010).
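The recognition heuristic just described can be stated in a few lines. The city-size example is the standard textbook illustration, assumed here rather than drawn from the sources above:

```python
def recognition_heuristic(option_a, option_b, recognized):
    """If exactly one of two options is recognized, choose it;
    otherwise the heuristic does not apply."""
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known != b_known:
        return option_a if a_known else option_b
    return None  # both or neither recognized: fall back to another strategy

# Which city is larger? Recognition alone often answers correctly.
print(recognition_heuristic("Munich", "Bielefeld", recognized={"Munich"}))
# → Munich
```

Nothing in this rule consults probabilities, yet it exploits the fact that recognition itself carries information, which is why it works so often in practice.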
The final circumstance surrounding the decision-making abilities of AI that we will
consider is the ability, or lack thereof, of AI to work out the meta-reasoning behind
actions and tasks (Gershman, 2015). In simple terms, meta-reasoning is the ability of
humans to think about their own thinking: to take an introspective stance that analyzes
why we do what we do. An example that applies to our discussion of AI would be
someone hitting another person in a playful fashion, and that person smiling in return.
An AI would be unable to compute the reasoning behind not only the thinking but the
action as well, and might therefore trigger a defensive reaction, because the computer
would think it was under attack rather than being flirted with, as may be the case in
human interaction. AI lacks the ability to perceive its surroundings, which implies not
only that it is not yet capable of reasoning for itself but also that it cannot think about
its own thinking.
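One way to make "thinking about thinking" concrete, borrowed from the computational-rationality framing the paper cites (Gershman, 2015) but sketched here with invented numbers, is a deliberation loop that weighs the expected gain of more thinking against the cost of doing it:

```python
def meta_reason(estimates, refinements, cost_per_step):
    """Toy meta-reasoning: apply successive refinements to the utility
    estimates, but only while a refinement's expected gain exceeds the
    cost of the extra thinking; then commit to the best option."""
    for expected_gain, refined in refinements:
        if expected_gain <= cost_per_step:
            break  # further deliberation is not worth its cost
        estimates = refined
    return max(estimates, key=estimates.get)

# Hypothetical choice between two routes, with two refinement steps.
rough = {"route_a": 5.0, "route_b": 4.0}
steps = [(2.0, {"route_a": 5.0, "route_b": 6.0}),   # worth the effort
         (0.1, {"route_a": 5.5, "route_b": 6.0})]   # not worth it
print(meta_reason(rough, steps, cost_per_step=0.5))  # → route_b
```

The loop reasons about its own reasoning budget, which is the capability the paper argues current AI systems lack.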
All in all, the terms of Bounded Rationality, combined with AI's inability to
meta-reason, use heuristics or interpret ambiguity, would, I believe, lead AI machines
to make catastrophic decisions if they were one day able to make decisions on their
own.
Physical Laws of Thought
There are many scientists who believe that the day of AI systems making decisions on
their own will never come. One such scientist, Zygmunt Pizlo, posits that AI systems
will never be able to outperform humans, or even have true cognition, because of
limitations in the laws of physics (Zygmunt Pizlo, personal communication,
September 24, 2015). Three laws of physics that guide thought are symmetry, action
principles and conservation laws. For the parameters of this research we will focus on
symmetry, although Pizlo gives accounts of all three areas in his research.
Although I do not claim to completely understand the laws of physics, Pizlo affirms
that without symmetry, perception would not be possible. He also states that circuits,
which AI depends on for its processing capabilities, are not inherently symmetrical in
what they can do. This leads me to conclude that AI does not currently have, nor is it
likely to gain, the ability to perceive its surroundings.
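To give one concrete, if greatly simplified, sense of what symmetry buys perception, the check below tests a binary image for left-right mirror symmetry, one of the regularities vision is said to exploit. The grid is an invented toy example, not data from Pizlo's work:

```python
def is_mirror_symmetric(grid):
    """True if a binary image (a list of rows) reads the same
    left-to-right as right-to-left in every row."""
    return all(row == row[::-1] for row in grid)

face_like = [[0, 1, 0, 1, 0],
             [0, 0, 1, 0, 0],
             [1, 0, 0, 0, 1]]
print(is_mirror_symmetric(face_like))  # → True
```

A perceiver that can assume such regularities can recover far more structure from limited input than one that treats every pixel as independent.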
According to psychologist Michela Tacca, perception and cognition are highly
inter-related and dependent upon one another (Tacca, 2011). Imagine trying to think
about something you could not perceive: although not impossible, it would be very
difficult. Even when humans do think about things they cannot perceive, I would posit
that their thoughts are likely to be incorrect owing to the lack of information (refer
back to Bounded Rationality) that perception would otherwise provide, such as sight,
sound, smell and environmental factors.
What we can draw from this is that because AI lacks symmetry, it lacks perception;
and because it lacks perception, it lacks accurate cognition, which relies upon
perception.
Conclusions
The unknowns surrounding the topic of Artificial Intelligence make answering the
question of whether or not AI could surpass human cognitive abilities a challenging
task. Although much of the evidence included within this paper points to the
conclusion that no technology could ever give AI systems the ability to think, I am
reluctant to draw that conclusion. I hesitate because, as historical accounts show,
people often scoff at or underestimate the potential of new technological advances.
Not wanting to be among those who doubt, I will concede that advancements in
technology may one day give AI the ability to demonstrate several aspects of
cognition, such as meta-reasoning and decision-making.
I do posit that future advancements in AI systems involving cognition will continue to
be limited by the same factors that limit human cognition. I also believe that,
considering the laws of physics, even if a technology were to re-create an exact replica
of the neural capabilities of the brain, it would not be able to surpass the functionality
of the human brain, because it would be bound by the same laws of physics that the
human brain is. This would give human ingenuity a chance to overcome any threats
the advancements might pose. Although I do not believe that advancements in AI will
produce viable cognition or surpass the human ability to think, not just compute, I
would issue a strong word of caution.
Through research on technological systems, people like the author Tim Elmore (2013)
and Tim Healy, a professor at Santa Clara University, point to the unintended
consequences of technological systems: the inventor often does not foresee all of the
implications, both positive and negative, that an advancement will have on our society
(Healy). Alongside the fact that we as humans have demonstrated destructive patterns
in the past (Handy, 1992), and are not immune from returning to those patterns in the
future, it appears that even technological advances that do not seem to be a threat
could one day lead to negative results. This being considered, I would urge caution in
how we move forward with developments in AI, so as not to allow those among us
who would seek the technology for destructive purposes to utilize it. Although a
complete understanding of the future consequences of technological systems often
cannot be known, it is important that we as a society implement technological
advancements in AI in an orderly fashion, in ways that can be controlled, and
destroyed, if necessary.
Resources
Breazeal, C., & Scassellati, B. (2002). Robots that imitate humans. Trends in Cognitive Sciences. Retrieved from
http://web.media.mit.edu/~cynthiab/Papers/Breazeal-TICS02.pdf
Davis, E., & Marcus, G. (2015). Commonsense reasoning and commonsense knowledge in artificial
intelligence. Communications of the ACM, 58, 92-103.
Elmore, T. (2013, March). Artificial maturity: Unintended consequences of growing up with technology.
K Magazine, 55-57.
Falk, D. (2012, August 21). How long before robots can think like us? The Telegraph. Retrieved from
http://www.telegraph.co.uk/news/science/science-news/9489002/How-long-before-robots-can-think-like-us.html
Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging
paradigm for intelligence in brains, minds, and machines. Science, 349, 273-278. Retrieved from
http://web.mit.edu/sjgershm/www/GershmanHorvitzTenenbaum15.pdf
Handy, C. B. (1992). The age of unreason. London: Century Business.
Hastie, R., & Dawes, R. M. (2010). Rational choice in an uncertain world. Thousand Oaks, CA: SAGE
Publications.
Healy, T. (n.d.). The unanticipated consequences of technology. Retrieved from
http://www.scu.edu/ethics/publications/submitted/healy/consequences.html
Knight, W. (2015). Robot see, robot do: How robots can learn new tasks by observing. MIT Technology
Review. Retrieved from
http://www.technologyreview.com/news/541871/robot-see-robot-do-how-robots-can-learn-new-tasks-by-observing/
Norman, D. A. (1997). Melding mind and machine. MIT Technology Review, 100, 29-31.
Premack, D. (2004). Is language the key to human intelligence? Science, 303, 318-319.
Rational Man [Video clip]. (2012). Retrieved from https://www.youtube.com/watch?v=JaKMimJPxyA
Tacca, M. C. (2011). Commonalities between perception and cognition. Frontiers in Psychology, 2, 358.
http://doi.org/10.3389/fpsyg.2011.00358
The Turing Test [Image]. Retrieved from
http://www.turnerfenton.com/Students/lessons/ITGS/5_hw/AI/2_TuringTest.htm