The Future of Computing – the Implications for
Society of Technology Forecasting and the Kurzweil
Singularity
P. S. Excell and R. A. Earnshaw
Glyndwr University, Wales, UK
{p.excell, r.earnshaw}@glyndwr.ac.uk
Abstract—Increases in the power and capability of collaboration
and information exchange point to a trend towards artificial
intelligence, at least of a form capable of designing and
assembling technological devices, sometime in the present
century. The exponential growth of the world’s information and
the Internet of Things will enable intelligent processing devices to
deduce context. This combination of processing power and
contextual knowledge appears to be well within the ability of
current technology and, further, appears to have the ability to
deliver the level of machine intelligence and skill that would lead
to the phenomenon of “The Singularity”, in which machines
could become “cleverer” than human beings. This poses a major
challenge for human beings to determine what their role would
be after this event and how they would control the machines,
including prevention of malevolent control. These developments
in technology have significant implications for society whether or
not they cause the large scale impact predicted by proponents of
the singularity concept, such as Raymond Kurzweil.
Keywords—Moore’s law; singularity; artificial intelligence; post-silicon technologies; paradigm shift; futurology; Raymond Kurzweil; societal implications
I. MOORE’S LAW
Moore’s Law states that the density of processing components
on an integrated circuit doubles every 1.5-2 years, or less [1].
In approximate terms, this can be said to correlate with the
growth of overall processing power for computers and a
similar rate of growth of power has been observed in
telecommunications. Although a general guide rather than a
fundamental law, it has proved remarkably consistent since the
implementation of the first semiconductor integrated circuit in
1960 (Fig.1). However, because of the steadily increasing
number of functional components within a circuit and the
limited space for them, the question has been raised as to
whether there is a physical limit to the growth of computing
power. Futurists have a variety of views about this limit
depending on which aspect is regarded as the fundamental
constraint.
Is it the limit to ever-finer grained
photolithography (a process used in the microfabrication of
chips), the speed of light, the quantum scale, the gravitational
constant, or the Boltzmann constant? Whatever the current limit may prove to be, alternative technologies are already being considered as possible successors when the integrated circuit based on silicon has run its course.

Fig. 1. Plot of CPU transistor counts against dates of introduction (source: GFDL (http://www.gnu.org/copyleft/fdl.html), via Wikimedia Commons).
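The doubling rule plotted in Fig. 1 can be stated compactly (a sketch of the empirical rule rather than a physical law; N_0 is the component count at a reference year t_0, and the doubling period T is a fit to the data):

```latex
N(t) = N_0 \cdot 2^{(t - t_0)/T}, \qquad T \approx 1.5\text{--}2~\text{years}
```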
Thus for the immediate future, Moore’s Law will ensure that computational power will continue to increase at current rates, bringing more speed and capacity to handle more sophisticated applications and end-user requirements. Devices are becoming increasingly “intelligent” (in simplistic terms, setting aside arguments about deeper aspects) and are able to monitor data and their environment. Automobiles can contain up to 100 microprocessors to monitor the various functions of a car; new cars carry 200 pounds of electronics with over a mile of wiring [26]. On a wider front, the Internet of Things is able to connect together embedded devices that can provide a wide variety of data and sensor information. Gartner [3] estimates that there will be 26 billion devices on the Internet by 2020. Such a network of autonomous smart devices will enable a whole range of operations and applications to be carried out without direct intervention by the user.

Perhaps more significantly, this network of sensors will have the capacity to feed contextual information to the distributed “intelligent” processing system, and it can be argued that it is the lack of contextual information that has been holding up the progress of computational devices. This view can be deduced from the general observation that the processing power of modern computers is arguably becoming equivalent to that of the human brain, and hence the processing power of networks of computers in the Internet substantially exceeds that of the human brain [21]; yet computing devices struggle to justify the label of “intelligent”, and certainly the question of the nature and replication of consciousness remains unanswered and apparently beyond technological systems at present. In addition, the comparison between modern computers and the human brain is open to challenge. Cognitive neuroscientists are seeking to understand the mental processes underlying cognition. A comparison can be made of the speed of performing a given task by a human and a machine. However, it becomes more difficult to compare speeds when each is asked to interpret a more complex task where aspects of the task are not fully specified.
Initial utilisation of pre-digital media systems, followed by progressive reliance on social media, appears to follow the “law of sharing”, an equivalent of Moore’s Law in the context of social media. The law of sharing states that the average amount of shared information doubles every year [4]. Material is probably mainly shared “because it can be”, and hence sharing is linked to the effect of Moore’s Law on storage, a point often overlooked.
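Because the sharing law doubles yearly while Moore’s Law doubles roughly every two years, the gap compounds quickly; a back-of-envelope sketch (the unit starting value is an arbitrary placeholder):

```python
# Compound growth under the two doubling laws discussed above.
def growth_factor(years: float, doubling_period: float) -> float:
    """Multiplicative growth after `years` given a doubling period in years."""
    return 2 ** (years / doubling_period)

moore = growth_factor(10, 2.0)    # Moore's Law over a decade: 2**5 = 32x
sharing = growth_factor(10, 1.0)  # law of sharing over a decade: 2**10 = 1024x
print(moore, sharing, sharing / moore)  # -> 32.0 1024.0 32.0
```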
This increasing power and capability presents a scenario
where machines could become cleverer than humans. What
are the implications of this? What are the potential problems
and what are the likely effects upon society?
II. COMPUTING TECHNOLOGY POST-SILICON
The smallest transistors in production are currently around 14
nanometers in size. Reducing this introduces a range of
problems that are difficult to solve, although progress towards
7nm is being made [22]. However, it is envisaged that a
technology to replace silicon will be needed at some stage if
Moore’s Law is to continue. Possible alternative technologies
include optical computing, quantum computing, DNA
computing, germanium, carbon nanotubes, and neuromorphic
computing.
A. Optical computing
Optical computing uses photons for computation, offering potentially higher bandwidth than current technology. However, there is current uncertainty over whether optical computers would be better overall than silicon once the full range of performance criteria is taken into account: especially size, but also speed, power consumption, and cost.
B. Quantum computing
Quantum computing makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. It is expected to improve computational power for particular tasks such as prime factoring, database searching, cryptography, and simulation. Various approaches are being developed, but it is not yet clear which will have the best chances of success [5]. There has been recent experimental verification that quantum computation can be performed successfully [6]. The significance of quantum computing may be gauged by the recent interest in the area shown by Google, IBM, Microsoft and major research laboratories [7].
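Superposition and entanglement can be made concrete with a minimal state-vector simulation (a sketch in NumPy that simulates the mathematics only; it is not how quantum hardware operates):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1.0, 0.0])                   # the |0> state

# Superposition: H|0> = (|0> + |1>)/sqrt(2), equal amplitudes.
superposed = H @ zero
print(superposed)                             # [0.707 0.707]

# Entanglement: CNOT applied to (H|0>) (x) |0> yields a Bell state,
# (|00> + |11>)/sqrt(2): the two qubits are perfectly correlated.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
bell = CNOT @ np.kron(superposed, zero)
print(bell)                                   # [0.707 0 0 0.707]
```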
C. DNA computing
A further possibility is to use DNA as a carrier of information with which to perform arithmetic and logic operations, working at a molecular scale. Shapiro and Ran [8] have demonstrated that DNA molecules can be programmed to execute any dynamic process of chemical kinetics; they can also implement an algorithm for achieving consensus between multiple agents. There is also the possibility of using nucleotides, and their pairing properties in DNA double helices, as the alphabet and basic rules of a programming language. Thus hardware and software can be represented by DNA, providing a direct interface for the digital control of nanoscale physical or biological systems. Such a system can also use many different molecules simultaneously and therefore run computing operations in parallel.
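The Watson-Crick pairing rule that such schemes exploit as a computational primitive is easy to state in code (a toy illustration only; real DNA computing operates through chemical kinetics, not string manipulation):

```python
# Complementary base pairing: A-T and C-G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the strand that would hybridise with the input."""
    return "".join(PAIR[base] for base in strand)

assert complement("ACGT") == "TGCA"
```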
D. Germanium
A new design for germanium nFETs that improves their performance significantly has been reported by Bourzac [9] and has rekindled interest in this technology.
E. Nanotubes
In theory, carbon nanotubes could be substantially more conductive than copper. They are also semiconducting, and thus have the potential to replace silicon at the nanometre scale [10].
F. Neuromorphic computing
Neuromorphic computing seeks to utilise neural systems to
process information. Neuromorphic engineering is a new
interdisciplinary subject that takes inspiration from the
biological and natural sciences to design artificial neural
systems, such as vision systems, head-eye systems, auditory
processors, and autonomous robots, whose structure and
properties are based on those of biological nervous systems.
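As a flavour of the kind of unit such systems build on, a minimal leaky integrate-and-fire neuron can be sketched as follows (an illustrative toy with arbitrary leak and threshold values; real neuromorphic hardware implements such dynamics in analogue or digital circuitry):

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate a leaking potential, spike on threshold."""
    v = 0.0
    for stimulus in inputs:
        v = v * leak + stimulus       # integrate the input, with leakage
        if v >= threshold:
            yield 1                   # emit a spike
            v = 0.0                   # reset the potential after spiking
        else:
            yield 0

print(list(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9])))  # -> [0, 0, 1, 0, 0]
```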
III. THE KURZWEIL SINGULARITY
The idea of technology developing beyond a point where the consequences would be difficult to predict was first articulated by Vinge in 1993, who named it the Technological Singularity [23]. Kurzweil [2] and others have proposed that the exponential growth in processing power observed in Moore’s Law will continue, even if silicon is replaced by another
technology. They argue, further, that this is a trend that dates
back to a time long before the development of electronics.
From these postulates they predict that the pace of change will
eventually become so rapid that it will be beyond the ability of
human beings to understand or control it. This results in an
expectation of a critical transition point at which humans will
have to cede control of the technology to the machines (n.b.
this is a substantial simplification of the predictions). The
proponents apply the name “The Singularity” to this point, although the term is, strictly speaking, mathematically anomalous, since an exponential curve never reaches a singularity; nonetheless it is not unreasonable as a way of focusing human thinking.
The Singularity proposal is the subject of much discussion. However, if it were to occur, it could result in significant changes to technology, and also to society because of its dependence on technology. Kurzweil predicts that during the Singularity “human life will be irreversibly transformed” and that humans will transcend the “limitations of our biological bodies and brain” [2]. He looks beyond the Singularity to indicate that “the intelligence that will emerge will continue to represent the human civilization”, and that “future machines will be human, even if they are not biological” [2].
The predictions of those who support the concept suggest that
the approximate date for the occurrence of the Singularity is
around the middle of the present century and hence there is a
credible argument that today’s students should at least be
aware of it and debating it, even if there is a body of thought
that rejects the prediction.
Fig. 2. Evidence suggesting progression analogous to Moore’s Law predates the development of integrated electronics [2]. Courtesy of Ray Kurzweil and Kurzweil Technologies, Inc. (en:Image:PPTMooresLawai.jpg) [CC BY 1.0 (http://creativecommons.org/licenses/by/1.0)], via Wikimedia Commons.
IV. THE DEBATE ON THE SINGULARITY
There is continuing debate on the issue of whether machines
can effectively become superior in intelligence to the humans
who created the programs to give them the intelligence in the
first place. At one level, the issue centres on the Turing Test which, broadly expressed, poses the question of whether a machine can think, or can respond to various kinds of behavioural tests for the presence of mind, or the attributes of mind.
It is possible to reverse-extrapolate the trend in Moore’s Law
to before the development of integrated electronics, suggesting
that it is a trend that has been inherent at least since the
Industrial Revolution (Fig. 2) [2]. Furthermore, a more radical
analysis (but using data from sources of repute) has suggested
that the trend can be traced back to before the creation of
human beings, before the appearance of life, before the
creation of the Earth (Fig. 3) [2], although it should be noted
that this graph is log-log (power law) rather than exponential
and it plots a somewhat different parameter, the time to the
next significant event.
Fig. 3. Evidence suggesting progression somewhat analogous to Moore’s Law predates the evolution of human beings [2]. By Tkgd2007 [CC BY 1.0 (http://creativecommons.org/licenses/by/1.0)], via Wikimedia Commons.
The Singularity proposal is the subject of much discussion by computer scientists, biologists, neuroscientists, and philosophers, and it is contested: arguments are presented both for and against. It is generally agreed that the speed-up of electronic circuitry can be expected to continue at a rate paralleling Moore’s Law, whatever technology is utilized. In theory, the operating cycle of the human brain is limited by the rate at which neurons fire (about 200 Hz), the speed at which they communicate information (about 100 m/s), and physical size. Computers, on the other hand, already operate at gigahertz clock rates and communicate information at close to the speed of light, and there is no comparable limit on their size. This has the potential to raise the speed at which machines are able to invent options for the future beyond the speed at which humans can do so, which could have the effect of telescoping the future into the present.
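On the raw figures quoted above, the gap spans several orders of magnitude; a back-of-envelope sketch (the 3 GHz clock is an assumed representative value, and, as the next paragraph notes, raw speed says nothing about intelligence):

```python
# Crude speed ratios between brain and machine, using the figures above.
neuron_rate_hz = 200       # approximate peak neuron firing rate
axon_speed_m_s = 100       # nerve-signal propagation speed
clock_rate_hz = 3e9        # an assumed ~3 GHz processor clock
signal_speed_m_s = 3e8     # electronic signalling, near the speed of light

print(f"cycle-rate ratio:   {clock_rate_hz / neuron_rate_hz:.1e}")      # ~1.5e7
print(f"signal-speed ratio: {signal_speed_m_s / axon_speed_m_s:.1e}")   # ~3.0e6
```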
What is contested is whether the intelligence of the machine increases at the same rate as its speed. This in turn focuses attention on the definition of intelligence, and on what functions can be performed by humans that machines may be unable to perform, irrespective of their speed or capacity. An additional point at issue is whether the kind of intelligence that humans manifest is specific to the way they are constructed biologically and the way neurons fire in the brain. If it is human-specific, a computer may be able to perform analogous tasks but may not reach the level of ‘being able to think for itself’, as defined by humans. The fact that consciousness can be argued to be a human faculty beyond intelligence gives weight to the argument that humans are different; but can this be so if the brain is just a chemical computer [27]?
Clearly, if a machine is to run an AI program that is capable of learning and extending itself, it will have to be given some initial goals and objectives. The question arises whether these goals can include human values, so that any future actions performed by the machine will be consistent with them. Is it possible for these actions to be performed in future contexts not currently envisaged? Is it possible to ensure, in advance, perfectly safe operation of the machine in all circumstances? This is a control problem, and its specification needs to be addressed in detail before such systems are deployed.
The nature of this computational environment can be argued to
have some philosophical implications. Descartes regarded
reason as the primary source and test of knowledge, in
opposition to empiricism where knowledge comes primarily
from sensory experience.
Descartes stated:
“Even though some machines might do some things
as well as we do them, or perhaps even better, they
would inevitably fail in others, which would reveal
that they are acting not from understanding, but only
from the disposition of their organs. For whereas
reason is a universal instrument, which can be used in
all kinds of situations, these organs need some
particular action; hence it is for all practical purposes
impossible for a machine to have enough different
organs to make it act in all the contingencies of life in
the way in which our reason makes us act”.
(Translation by Robert Stoothoff) [11]
This suggests that no machine could respond in the way adult humans do in an arbitrary variety of situations. However, machines can clearly be programmed to learn constructively from their environment (i.e. by receiving inputs from it) as humans do, and can also be programmed to do more than simple tasks such as pattern matching (so-called weak AI). The current Internet of Things is an environment in which various kinds of objects, sensors and computers combine to provide a hardware and software framework that can operate autonomously to a greater or lesser degree. Proponents of strong AI, on the other hand, believe that human intelligence can be replicated by machine. How far this will work out in practice has yet to be determined (Fig. 4).
Fig. 4. Countdown to Singularity (courtesy of M. Mackay, The future of AI).
Cochrane [12] has modelled the expected growth of machine intelligence and compared it with biological equivalents (Fig. 5). According to this model, AI fails to close the gap.
Fig. 5. Modelling of growth of AI (courtesy of P. Cochrane).
The expected growth in supercomputer power to 2020 is shown in Figs. 6 and 7. Note that Hruska (Fig. 7) takes a different view of the equivalence between computers and the human brain from that of Cochrane [21]. While such disagreements may look significant, they are actually trivial within the wider sweep of geological time, or even of human history.
Fig. 6. Expected growth in supercomputer power (courtesy of R. Kurzweil).

Fig. 7. Towards Exascale (courtesy of J. Hruska [24]).

Google’s recent acquisition of DeepMind, an artificial intelligence company specialising in machine learning, advanced algorithms, and systems neuroscience, is an indication of the increasing interest in this aspect of automated learning. DeepMind had already designed a system capable of playing computer games and learning from the experience. Thus the objective is to create computer systems that are able to think more like humans, particularly in reinforcement and deep learning [13]. This follows on from Google’s acquisition of Boston Dynamics, an engineering and robotics design company operating with a wide range of computer intelligence and simulation systems [14].
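The reinforcement-learning principle behind such game-playing systems can be illustrated with a minimal tabular Q-learning sketch (a toy line-world for illustration only, not DeepMind’s setup, which combined this principle with deep neural networks [13]):

```python
import random

# Toy world: states 0..4 on a line; reward 1 for reaching state 4.
N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(500):                    # learn from repeated episodes
    s = 0
    while s != GOAL:
        if random.random() < epsilon:   # explore occasionally
            a = random.choice(ACTIONS)
        else:                           # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy moves right toward the goal from every state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])  # [1, 1, 1, 1]
```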
V. SCIENCE AND PREDICTION
Science and technology have advanced by means of
experimentation (including learning from prototypes), the
construction of theories, and the testing of theories by further
experiments and data analysis. Where such theories are found
to be inadequate in the light of further data, they are revised or
replaced. Falsifiability, as defined by Popper [15], expresses the inherent testability of any scientific hypothesis. Thus scientific progress is characterised by being able to make predictions based on the theories established to date. However, scientific progress by means of iterative refinement has been challenged in those areas where significant progress has been made by means of unexpected discoveries: it has been argued that such discoveries constitute a paradigm shift [16-19]. In particular, Kuhn’s concept of scientific revolution [16] has some similarities with, and may provide some insights into, the effect of the putative technological singularity.
This raises the issue of whether the decisions of any future
computational environment are predictable, in the sense that
they were envisaged by the creator of the original program, or
whether the outputs in certain circumstances could be
completely different from those expected.
A. An Illustrative Exemplar
Early scenarios combining image-based modelling and immersive projection displays sought potential applications in the office of the future (Fig. 8). The aim was to bring collaborators who were in different physical locations into the same virtual space in real time for the purpose of research investigations. Although a simple example in itself, its power lies in combining a variety of technologies with the human imagination needed to exploit them.
Fig. 8. Conceptual Office of the Future (courtesy of Prof. Henry Fuchs, University of North Carolina [25]).
In applications where many channels of information have to be viewed by a number of people and critical decisions have to be made in real time, it is clearly essential that the information displayed be unambiguous. The system also needs to provide opportunities for interaction, to enable the collaborators to play out alternative future scenarios in order to make optimum decisions in the present. A key question is whether such a system with collaborating components would be able, after training, to make optimum decisions based on new data without human intervention. This would constitute a significant paradigm shift.
B. Technological Forecasting, Futurology and Education
Futurology is still a very inexact area of study, although it is an inescapable fact that all human beings and all businesses have to form a view on probable pathways into the future, in order to avoid squandering resources on technologies (in particular) with limited viability. Several tools and methodologies already exist, but a new one proposed for investigation here is “retrospective nowcasting”, by analogy with techniques that have been explored in space weather studies [20]. This would document previous predictions and their correctness as of the predicted date of occurrence: it would, of course, focus on predictions whose target dates now lie in the past. This is proposed as a viable research project for the immediate future.
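A minimal sketch of how such a retrospective-nowcasting study might be organised (the data structure and example entries are hypothetical illustrations, not results):

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str
    made_in: int       # year the prediction was published
    target: int        # year by which it was predicted to occur
    came_true: bool    # assessed as of the target date

# Hypothetical entries for illustration only.
archive = [
    Prediction("widespread videophone use", 1970, 1995, False),
    Prediction("pocket-sized computers", 1975, 2000, True),
]

matured = [p for p in archive if p.target <= 2015]
hit_rate = sum(p.came_true for p in matured) / len(matured)
print(f"hit rate of matured predictions: {hit_rate:.0%}")  # -> 50%
```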
Some universities teach themes such as “Future and Emerging Technologies”, and these should prioritise teaching about Moore’s Law and the Kurzweil Singularity. A key objective should be to encourage students to “think big” and to believe that they have a chance of becoming major influencers in the world of tomorrow. The traditional pathway (in the UK) for such persons of influence has very often been through a degree programme in Politics, Philosophy and Economics (PPE, or “Modern Greats”), which gives a general picture of influential ideas and of ways to become leaders and opinion formers in society. This suggests that there is now an opportunity to update this format into something that could be styled “New Modern Greats”, consisting of Politics, Futurology and Economics (PFE). It is also vitally important to offer this kind of “leadership-oriented” programme to a much wider demographic of students, so that a broad pool of talent can be accessed and encouraged to think in terms of becoming leaders, steering this exciting and challenging epoch in technology.

VI. CONCLUSIONS
Computers are continuing to increase in power according to
Moore’s Law and networking is increasing in speed and
capacity. This in turn increases the power and capability of
collaboration and information exchange and points to a trend
towards artificial intelligence, at least of a form capable of
designing and assembling technological devices, sometime in
the present century. This trend is reinforced by reverse extrapolations which suggest that it has been inherent for far longer than the industrial era.
The world’s information is being compiled at a rate of approximately 3 exabytes per day, and this is expected to grow exponentially with the growth of the Internet of Things. The combination of data from the “things” with the archived information will greatly help intelligent processing devices to deduce context, the lack of which can be argued to be a factor that has held back artificial intelligence in the past.
This combination of processing power and contextual
knowledge appears to be well within the ability of current
technology and, further, appears to have the ability to deliver
the level of machine intelligence and skill that would lead to
the phenomenon of “the Singularity”, in which machines, in
crude terms, would become “cleverer” than human beings, at
least in basic non-emotional functions. This poses a major
challenge for human beings to determine what their role would
be after this event and how they would control the machines,
including prevention of malevolent control. There is much
that is speculative in these concepts, but since the predictions
suggest that they could occur within the present century, it is
appropriate and moral to alert young people of student age to
the possibility so that they can focus their thoughts on the
handling of such a phenomenon.
Futurology and technology forecasting are both inexact
sciences. However, it has been noted that the Internet
provides an accelerating effect on traditional processes. One
year of Internet time has been estimated to be equivalent to
seven years of calendar time. Thus considerations about
possible future developments need to be taken seriously and
treated with increasing priority.
This acceleration of developments presents both an opportunity and a threat. It is an opportunity to set in place a range of research projects that analyse the current situation in more detail. It is a threat because the rate of change could move faster than humans can cope with.
These developments in technology have significant implications for society whether or not they cause the large-scale impact predicted by Kurzweil. In order to evaluate how machines will operate in the future, a detailed examination has to be made of the goals that can be specified for them. This in turn requires a proposition for the human values that those goals should embody, and an assessment of the extent to which it is possible to embed these in the AI programs of future machines. In addition, it is essential to consider how safety and machine ethics can be ensured in the future operation of machines.
REFERENCES
[1] G. E. Moore, “Cramming more components onto integrated circuits”, Electronics, pp. 114–117, April 19, 1965.

[2] R. Kurzweil, “The Singularity is Near”, Penguin Books, 2005.

[3] Gartner, “Gartner Says the Internet of Things Installed Base Will Grow to 26 Billion Units By 2020”, 2013. http://www.gartner.com/newsroom/id/2636073

[4] P. Boutin, “The Law of Online Sharing”, MIT Technology Review, 2011. http://www.technologyreview.com/review/426438/the-law-of-online-sharing/

[5] T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe and J. L. O’Brien, “Quantum computers”, Nature, 464, pp. 45–53, 2010. http://www.nature.com/nature/journal/v464/n7285/abs/nature08812.html

[6] S. Barz, J. F. Fitzsimons, E. Kashefi and P. Walther, “Experimental verification of quantum computation”, Nature Physics, 9, pp. 727–731, 2013. http://www.nature.com/nphys/journal/v9/n11/abs/nphys2763.html

[7] E. Gibney, “Physics: Quantum computer quest”, Nature, 3 December 2014. http://www.nature.com/news/physics-quantum-computer-quest-1.16457

[8] E. Shapiro and T. Ran, “DNA computing: Molecules reach consensus”, Nature Nanotechnology, 8, pp. 703–705, 2013. http://www.nature.com/nnano/journal/v8/n10/full/nnano.2013.202.html ; http://www.dna.caltech.edu/Papers/two-domain-CRN-to-DNA-2013-news-views.pdf

[9] K. Bourzac, “New Chip Points the Way Beyond Silicon”, MIT Technology Review, 19 December 2014. http://www.technologyreview.com/news/533586/new-chip-points-the-way-beyond-silicon/

[10] G. Duncan, “Life after Silicon: How Nanotubes Will Power Future Gadgets”, Digital Trends, 2012. http://www.digitaltrends.com/mobile/carbon-nanotubes-could-power-the-next-generation-of-processors/

[11] R. Descartes, “Discourse on Method and Meditations on First Philosophy”, Hackett Publishing Co., 1998.

[12] P. Cochrane, “Why AI fails to outsmart us”, TechRepublic CIO Insights. http://www.techrepublic.com/blog/cio-insights/peter-cochranes-blog-why-ai-fails-to-outsmart-us/

[13] H. Devlin, “Google develops computer program capable of learning tasks independently”, The Guardian, 25 February 2015. http://www.theguardian.com/technology/2015/feb/25/google-develops-computer-program-capable-of-learning-tasks-independently

[14] S. Gibbs, “What is Boston Dynamics and why does Google want robots?”, The Guardian, 17 December 2013. http://www.theguardian.com/technology/2013/dec/17/google-boston-dynamics-robots-atlas-bigdog-cheetah

[15] K. Popper, “The Logic of Scientific Discovery”, Routledge, 2002 (originally published 1959).

[16] T. S. Kuhn, “The Structure of Scientific Revolutions”, University of Chicago Press, 1996 (originally published 1962).

[17] K. Cook, R. A. Earnshaw and J. Stasko, “The discovery of the unexpected”, IEEE Computer Graphics and Applications, pp. 15–19, 2007.

[18] J. Dill, R. A. Earnshaw, D. J. Kasik, J. A. Vince and P. C. Wong (Eds), “Expanding the Frontiers of Visual Analytics and Visualization”, Springer, London, 519 pp., ISBN 978-1-4471-2803-9, 2012.

[19] R. A. Earnshaw, R. A. Guedj, A. van Dam and J. A. Vince (Eds), “Frontiers of Human-Centered Computing, Online Communities and Virtual Environments”, Springer-Verlag, 482 pp., ISBN 1-85233-238-7, 2001.

[20] J. W. Freeman, “Storms in Space”, Cambridge University Press, 2001.

[21] P. Cochrane, “When will the net become intelligent?”, TechRepublic CIO Insights, 2007. http://www.techrepublic.com/blog/cio-insights/peter-cochranes-blog-when-will-the-net-become-intelligent/

[22] J. Hruska, “Intel forges ahead to 7nm – without the use of EUV lasers”, ExtremeTech, 25 September 2014. http://www.extremetech.com/computing/190845-intel-forges-ahead-to-7nm-without-the-use-of-euv-lasers

[23] V. Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era”, in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis (Ed.), NASA Publication CP-10129, pp. 11–22, 1993. https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

[24] J. Hruska, “Supercomputing director bets $2,000 that we won’t have exascale computing by 2020”, ExtremeTech, 17 May 2013. http://www.extremetech.com/computing/155941-supercomputing-director-bets-2000-that-we-wont-have-exascale-computing-by-2020

[25] H. Fuchs et al., “The Office of the Future”, University of North Carolina, 2000. http://web.media.mit.edu/~raskar/UNC/Office/

[26] J. Turley, “Motoring with microprocessors”, Embedded.com. http://www.embedded.com/electronics-blogs/significant-bits/4024611/Motoring-with-microprocessors

[27] R. Penrose, “The Emperor’s New Mind”, Oxford University Press, 1989.
Author Biographies
Prof Peter Excell
Peter Excell is Deputy Vice-Chancellor and Professor of
Communications at Glyndwr University. His interests cover computing, electronics, and the creative industries, with the strong spirit of interdisciplinarity needed for the digital knowledge economy. He gained his BSc in Engineering
Science at the University of Reading and PhD in Electronic
Engineering at the University of Bradford. His work on future
mobile communications devices is being carried out in
conjunction with colleagues from wider discipline areas,
analysing human communications in a holistic way and
developing new ways of using mobile multimedia devices. He
has published over 400 papers. He is a Fellow of the British
Computer Society, the Institution of Engineering &
Technology and of the Higher Education Academy, a
Chartered IT Professional and Chartered Engineer. He is a
member of the UK and Ireland committee of the IEEE Society
on Social Implications of Technology.
http://www.glyndwr.ac.uk/en/StaffProfiles/PeterExcell/
Prof Rae Earnshaw
Rae Earnshaw is Professor of Creative Industries at Glyndwr
University. He gained his PhD at the University of Leeds. He
was Dean of the School of Informatics at the University of
Bradford (1999-2007) and Pro Vice-Chancellor (Strategic
Systems Development) (2004-09). He has been a Visiting
Professor at Illinois Institute of Technology, George
Washington University, USA, and Northwestern Polytechnical
University, China. He is a member of ACM, IEEE, CGS, and
a Fellow of the British Computer Society and the Institute of
Physics, and a recipient of the Silver Core Award from the
International Federation for Information Processing, Austria.
He has authored and edited 36 books on computer graphics,
visualization, multimedia, art, design, and digital media, and
published over 200 papers in these areas.
http://sites.google.com/site/raearnshaw/