RSC Advances
REVIEW
Towards artificial neurons and synapses: a materials
point of view
Cite this: DOI: 10.1039/c2ra22507g
Doo Seok Jeong,*a Inho Kim,a Martin Zieglerb and Hermann Kohlstedtb
Received 12th October 2012
Accepted 24th November 2012
DOI: 10.1039/c2ra22507g
www.rsc.org/advances
We overview several efforts to emulate the functionalities of the basic building blocks of a mammal's brain, i.e. neurons and synapses, by means of non-biological inorganic systems. These efforts have been made to realize ambitious goals such as artificial inorganic brains on silicon wafers, i.e. neuromorphic systems, and neuroprosthetic systems that take part in real brain functionalities by interfacing with real brains. In terms of the keywords 'threshold', 'analogue', 'plasticity', and 'elasticity', which describe the behaviour of neurons and synapses, this review deals with various functional systems exhibiting these key behaviours, with particular emphasis on nanoionic systems.
1. Introduction
It is very interesting to compare mammals’ brains with
computers’ central processing units (CPUs) since they, to
some extent, appear to play similar roles, e.g. calculation,
memory. Such comparisons have shown that mammals’ brains
carry out much more sophisticated functions than CPUs, for
instance, 'learning'. What is more interesting is the effort of non-neuroscientists to build non-biological systems functioning similarly to brains, at least to a limited extent.1,2 This kind of work is often called neuromorphic engineering, a term first coined by Carver Mead at Caltech.1
a Electronic Materials Research Centre, Korea Institute of Science and Technology, Hwarangno 14-gil 5, Seongbuk-gu, Seoul, 136-791, Republic of Korea. E-mail: [email protected]; Fax: +82-29585509; Tel: +82-29585490
b Nanoelektronik, Technische Fakultät, Christian-Albrechts-Universität zu Kiel, D-24143 Kiel, Germany
Doo Seok Jeong is a senior scientist at the Korea Institute of Science and Technology (KIST), South Korea. He received his BE and ME in materials science from Seoul National University in 2002 and 2005, respectively, and his PhD in materials science from RWTH Aachen University, Germany, in 2008. Since 2008, he has worked at KIST. His research interests are the development of artificial synapses and neurons by means of nanoionic systems and the understanding of nanoionic behaviour.
The main purpose of such efforts is to achieve non-biological systems acting as basic building blocks of a brain,
i.e. neurons and synapses. The methodologies are approximately three-fold: i) circuit engineering using conventional
passive and active circuit elements; ii) employing new physical
concepts such as ferroelectrics, phase-change in higher
chalcogenides, valence-change in transition metal oxides; iii)
introducing organic systems that are compatible with nerve
cells and partly take over nerve cells’ functionalities.
The first methodology is the conventional meaning of
neuromorphic engineering. The main goal of this methodology is to realize analogue-type circuits working as neurons or
synapses by employing conventional digital circuit elements,
e.g. passive elements: resistor, ordinary capacitor,3 diode, and
active element: ordinary transistor.1,3–12
Inho Kim is a senior researcher at the Electronic Materials Research Center in the Korea Institute of Science and Technology (KIST), South Korea. He received his PhD in materials science and engineering at Arizona State University, US, in 2010. His research interests are the design and fabrication of nanostructures for light trapping in photovoltaics, and plasmonic-based chemical sensors.
The second one is inspired by non-volatile memory technologies, e.g. ferroelectric memory, phase-change memory, ferromagnetic memory, and resistive switching memory. They
have been considered as emerging memories in boolean-type computers, where one bit takes either of the binary values '0'
and ‘1’.13 The aforementioned memories form two distinctive
physical states based on their physical concepts, which
correspond to the binary numbers. Although these physical
concepts were initially introduced for boolean-type memories,
it has afterwards been attempted to achieve multi-level cells using the same physical concepts and, even further, analogue-type memories that can ideally store unlimited information in a single memory bit. In fact, mammals' brains are typical examples of analogue memory. Therefore, it is rather natural
that the non-volatile memory technologies have extended their
scope up to neuromorphic engineering.
We differentiate this approach from the first one since, in
the second methodology, attempts are made to realize
‘analogue-type passive elements’ based on different physical
concepts. Thus, a single passive element is able to work as a
neuron or a synapse, whereas, in case of the first methodology,
an electronic circuit composed of digital circuit elements plays
the same role. The second methodology therefore enables the
architecture of neuromorphic systems to be simpler than that
employing the first methodology. However, both methodologies have the same goal: realization of analogue-type circuits.
The third methodology differs from the above-mentioned
methodologies in its purpose. This is for neuroprosthesis, i.e.
the implantation of organic systems working as artificial
neurons or synapses into brains, while the first two are for neuromorphic systems, i.e. the achievement of computers that work in a brain-like manner. Nevertheless, since neuroprosthetic systems are also non-biological but work as artificial nerve cells, it is reasonable for the terminology 'neuromorphic engineering' to include the neuroprosthesis category to some extent. In this category, several works have been performed, such as building organic systems (presynaptic neurons) that release and deliver neurotransmitters to 'real' postsynaptic neurons interfacing with the organic systems. In this way, signals
from the artificial presynaptic neurons can be transmitted to
the postsynaptic neurons, so that the signals can be
biologically encoded.14–17
When one attempts to mimic particular systems (here,
nerve cells), one has to first figure out the systems and find out
crucial keywords that the systems and those mimicking them
can share. To begin with, we will briefly overview the
behaviour of nerve cells including neurons and synapses in
terms of their ‘elasticity’, ‘plasticity’, and ‘threshold’ in section
3. Note that this review covers the second methodology, i.e. ii).
Then, several suggested physical concepts that share the
above-mentioned keywords of nerve cell behaviour will be
reviewed. Up to now, research on artificial nerve cells has mainly focused on artificial synapses rather than neurons, so that artificial synapses will be the main concern of this review.
However, we will briefly address artificial neuron issues as
well.
From the physical chemistry point of view, it can be seen that nerve cell behaviour is often based on ion migration due to drift and diffusion, and on electrochemical reactions, i.e. redox reactions. Interestingly, these mechanisms are thought to be responsible for resistive switching phenomena observed in
transition metal oxides as well. Particular emphasis will be on
nanoionics-based artificial synapses.
Martin Ziegler received his PhD degree in experimental physics from the Christian-Albrechts University (CAU) Kiel, Germany, in 2009. He worked on transport properties in single atomic contacts and tunnel junctions. In 2010, he joined the research group of Prof. Hermann Kohlstedt at the Technical Faculty of the CAU Kiel. His current research interests include the development of memristive devices and their integration in neuromorphic circuits.
Hermann Kohlstedt is a professor of Nanoelectronics at the Technical Faculty of the Christian-Albrechts University (CAU) Kiel, Germany. Prior to his appointment at CAU, he led a research group from 1991 to 2009 at the Forschungszentrum Jülich. He received his PhD in physics from the Kassel University in 1989 on superconducting tunnel junctions for heterodyne receivers. Kohlstedt's representative work includes superconducting, magnetic and ferroelectric tunnel junctions in the framework of transport properties and thin film analysis. Since 2009 he has been working in the field of memristive devices for neuromorphic circuits.
3 Ordinary capacitors mean conventional ones utilizing ordinary dielectric materials, i.e. paraelectrics. This term should be clarified to avoid any confusion since most inorganic systems reviewed in this article are formed in the capacitor structure as well.
2. Definition of important terms
Several important terms are defined in this section.
Analogue system
A system representing 'continuous' rather than discrete information. Thus, an unlimited amount of information can be realized in such a system. For instance, the hands of a clock, rotating continuously, provide the correct time of day an unlimited number of times a day.
Digital system
A system representing discrete information. For instance, the hands of a clock, rotating discontinuously, give the correct time of day a limited number of times a day. In general, the second hand rotates discontinuously with a time interval of 1 s, so that such a clock is able to provide the correct time of day 3600 times per hour.
Boolean system
A system representing discrete information based on binary numbers, i.e. '0' and '1'. Boolean systems therefore belong to the digital-system category. For instance, current memory and logic devices are based on binary-number memory and calculation.
Action potential
A voltage spike travelling along a neuron's membrane, which triggers signal transmission between neighbouring neurons. Action potential firing behaviour is explained in detail in section 3.
Chemical synapse
A cleft between two neighbouring and interacting neurons, where signal transmission from one to the other takes place in a chemical manner via neurotransmitters (chemical messengers). The flow of neurotransmitters is 'unidirectional' in the cleft, so that one can define a 'presynaptic neuron' as a neuron releasing neurotransmitters and a 'postsynaptic neuron' as a neuron receiving the neurotransmitters. Chemical synapses are active elements, so that they provide gain of the transmitted signals, i.e. amplification.
The other type of synapse is the electrical synapse. In this case, signals are directly transmitted across the synapse by drift of ions without chemical reactions. In general, the signal flow between neighbouring neurons is 'bidirectional', so that bidirectional signal transmission is possible, unlike the chemical synapse. A neuron sending signals is defined as a presynaptic neuron and the other as a postsynaptic neuron. Thus, the names of the two neurons depend on the direction of signal transmission. Moreover, the electrical synapse is passive, i.e. it provides no gain. It is known that this type of synapse is not involved in memory and learning. Thus, we take into account only chemical synapses in this review.
Elasticity
Property exhibiting a restoring (recovery) force against an external force. When an external force is imposed on a system exhibiting elasticity, the force prevails against the restoring force. However, as soon as the external force disappears, the system recovers its original state due to the restoring force.
Plasticity
Property exhibiting unrecoverable changes in a system's state, arising from an external force that is large enough to deform the system.
3. Microscopic and macroscopic behaviour of neurons and synapses
Neurons transmit electrical signals, i.e. action potentials,
through them and, by means of action potentials, they are
able to communicate with each other. In 1939, Hodgkin and
Huxley successfully recorded action potential evolution in a
500 μm giant axon.18 A neuron is enclosed by a lipid membrane
that demarcates intracellular and extracellular media. The cell
membrane causes discrete distribution of several ions in the
intracellular and extracellular media, which are mainly Na+, K+, Ca2+, and Cl−. For instance, the concentrations of Na+ and Cl− ions are much higher in the extracellular medium, whereas
K+ ions are more concentrated in the intracellular medium.
The discrete distribution of these ions causes an electromotive force across the membrane that is described by the Nernst potential, i.e. the chemical potential gradient of the ions.
Therefore, one can compare the cell membrane with a
membrane-based battery where the evolution of electromotive
force is also realized by chemical potential gradient through
the membrane. When the cell membrane is at its resting state,
i.e. no external potential application, the membrane stays
polarized with a particular resting potential (ca. −60 mV, but it varies somewhat depending on several factors).19 The resting potential
arises from the aforementioned discrete distribution of the
ions through the membrane.
Although a cell membrane demarcates intracellular and
extracellular media, the ions can still be exchanged between
the media through ion channels and ion pumps in the
membrane.20 Electrical energy in the membrane is evolved by
ion pumps that lead to the aforementioned difference in ion
concentration between the intracellular and extracellular
media. Therefore, the nerve cell acts as a self-maintained or
self-charged battery, whereas, in case of a membrane-based
battery, chemical potential difference between the media
separated by the membrane is maintained by an external
power source. This leads to the spontaneous polarization of
the nerve cells at the resting state.
However, when external electric stimuli are above a certain
‘threshold’, a polarized nerve cell at the resting state becomes
depolarized, i.e. the membrane potential becomes greater than −60 mV or even positive. This depolarization is termed action
potential firing.18 What leads to the depolarization is a change
in ion concentration difference between the separated media,
which is attributed to voltage-gated ion channels.21–24 That is,
the over-threshold stimuli open the ion channels, so that ions
are redistributed to lower the chemical potential gradient.
Nevertheless, the depolarized state cannot last for a long
time because Na+/K+ ion pumps recover the ion concentration
distribution of the resting state, which are known as sodium-
potassium adenosine triphosphatase (Na+/K+-ATPase).25 That
is, these ion pumps recharge the cell membrane by converting
chemical energy into electrical energy.26 Therefore, the cell
membrane state change is ‘elastic’, i.e. permanent state
changes do not take place. At the same time, the cell
membrane transmits electric impulses towards a particular
direction, so that action potential transmission proceeds. One
can compare the action potential transmission with dominoes.
Hence, the cell membrane acts as a lossless and active cable
for action potential transmission. In summary, neurons or cell
membranes exhibit a state change that is 'elastic' when
external stimuli are above a certain ‘threshold’. The dynamic
action potential generation process is well described by the
simple equivalent circuit model suggested by Huxley and
Hodgkin.27
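To make the 'threshold' and 'elasticity' of the membrane concrete, the sketch below uses a leaky integrate-and-fire neuron rather than the full Hodgkin–Huxley equivalent circuit of Ref. 27; it is only an illustration under assumed parameter values (resting potential, threshold, time constant), not the authors' model. The membrane potential relaxes elastically back to rest under sub-threshold stimuli and fires when driven above threshold.

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch illustrating 'threshold' and 'elasticity'.
# Parameter values are illustrative assumptions, not taken from the review.
V_REST, V_THRESH, V_SPIKE = -60.0, -50.0, 20.0   # mV
TAU_M, DT = 10.0, 0.1                            # membrane time constant, time step (ms)

def simulate(stimulus_mV):
    """Integrate a stimulus train; return the membrane potential trace."""
    v, trace = V_REST, []
    for s in stimulus_mV:
        # elastic relaxation towards the resting potential plus the external stimulus
        v += DT / TAU_M * (V_REST - v + s)
        if v >= V_THRESH:          # over-threshold: fire an action potential
            trace.append(V_SPIKE)  # brief depolarization ('firing')
            v = V_REST             # ion pumps restore the resting state
        else:
            trace.append(v)
    return np.array(trace)

weak   = simulate(np.full(1000, 5.0))    # sub-threshold: potential relaxes back, no spike
strong = simulate(np.full(1000, 15.0))   # supra-threshold: repetitive firing
print(weak.max(), strong.max())          # roughly -55 mV versus +20 mV
```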
It is believed that activity-dependent synaptic plasticity is responsible for memory and learning.28 The change of synaptic
weight arises from interaction between neighbouring neurons
by means of action potential transmission, as Hebb put it: "neurons that fire together wire together".29 The change of
synaptic weight exhibits ‘plasticity’, i.e. the change can last for
a long time. The synaptic weight change can be either positive
or negative, which are termed potentiation and depression, respectively. Depending on how long a synaptic weight change lasts, these changes can be classified as short-term potentiation (STP) or depression (STD) and long-term potentiation (LTP) or depression (LTD). LTP is thought to be responsible for long-term memory, which can last for many days.
LTP is thought to take place under the particular condition
that both presynaptic and postsynaptic neurons are activated.
Note that, in this section, we deal with only chemical synapses.
When a presynaptic neuron is activated, neurotransmitters are
released from the presynaptic neuron. Among several kinds of
neurotransmitters, glutamate is known to play the most
important role in LTP. Released glutamate neurotransmitters
bind to two main receptor subtypes on the postsynaptic side, namely the N-methyl-D-aspartate receptor (NMDAR) and the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR). The AMPAR has ion channels for the monovalent cations Na+ and K+, while the NMDAR has ion channels for both monovalent and divalent cations, Na+, K+, and Ca2+.
The ion channel of the NMDAR is voltage-gated, so that the
depolarization of the postsynapse side leads to its opening.
Consequently, the intracellular concentration of Ca2+
increases.28,30 The increase of Ca2+ concentration also arises
from Ca2+ inwards flux through L-type voltage-gated calcium
channels (VGCCs).31–33 It is well understood that Ca2+ plays a
key role in LTP.28,34 Schematics of this process are depicted in
Fig. 1. In LTP, the dominant ion channel type, either NMDAR or VGCC, appears to be determined by the stimulation type.32
It should be noted that there is a ‘threshold’ Ca2+
concentration for LTP induction, i.e. an increase in Ca2+
concentration by the above-mentioned process should reach
the threshold to result in LTP, otherwise the basal level Ca2+
concentration is recovered by the reverse of the above-mentioned process, exhibiting STP rather than LTP.34
Fig. 1 Schematics of potentiation procedures of a chemical synapse. The presynaptic activation releases glutamate neurotransmitters (Glu) and they bind to NMDAR and AMPAR. If the postsynaptic side is polarized, i.e. in the resting state, only the AMPAR-related ion channel contributes to the inwards diffusion of monovalent cations, e.g. Na+, since the NMDAR-related ion channel is voltage-gated. When the postsynaptic side is depolarized, i.e. activated, the voltage-gated channel is open so that divalent cations, e.g. Ca2+, are able to diffuse inwards. Reprinted with permission from Ref. 28. Copyright 1999, American Association for the Advancement of Science.
The processes for the change of postsynaptic Ca2+ concentration
fulfil detailed balance, i.e. they are paired processes.35 In the
intracellular medium, Ca2+ ions bind to calmodulin, forming
Ca2+/calmodulin (CaM). The key component in LTP induction,
calcium-calmodulin-dependent protein kinase II (CaMKII),
can undergo autophosphorylation on Threonine 286 when
CaM or Ca2+ concentration is above the threshold.34,36,37 As a
result, the activity of CaMKII is no longer dependent on the Ca2+ (or CaM) concentration.36,37 Consequently, the autophosphorylated CaMKII can keep its activity even after the Ca2+ concentration recovers its basal level, implying LTP. Note that
a third reaction (independent of the paired reactions), i.e.
CaMKII autophosphorylation, should be involved in the long-term change, otherwise the aforementioned detailed balance
perhaps leads to immediate relaxation of the excited state.38
This aspect can also be found in cation-migration-based
artificial synapse systems. Detailed explanation on these
systems will be given in section 4.4.1.
Several macroscopic descriptions have attempted to
account for the synaptic plasticity induced by presynaptic
and postsynaptic activation. These mathematical equations
are empirical, i.e. they are not based on the aforementioned
synaptic behaviour at the molecular level. As briefly mentioned
earlier, the Hebb rule elucidates the synaptic plasticity driven
by interaction between neighbouring neurons by means of
action potential through the synapse. For a single synapse
whose weight (w) changes depending on the presynaptic
activity (upre) and the postsynaptic activity (upost), the Hebb
rule can be expressed as a time-dependent synaptic weight
change equation as follows:
τ_w dw/dt = u_pre u_post    (1)
where τ_w is a time constant of the synaptic weight change. In the case of many simultaneous presynaptic activities from different neurons, both the synaptic weight w and the presynaptic activity u_pre should be vector quantities (1 × N or N × 1 matrices), where N is the number of presynaptic inputs.
Because synaptic activity cannot be negative, the right-hand side of eqn (1) is either positive or zero, i.e. the synaptic weight never decreases. Hence, the Hebb rule has a limitation in implementing the important 'threshold' for LTP induction, as well as LTD. Moreover, the Hebb rule in eqn (1) allows unlimited growth of the synaptic weight, which takes place as long as there is activity on both the presynaptic and postsynaptic sides.
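A minimal numerical sketch of eqn (1) makes these two limitations explicit. The time step, time constant and activity values below are arbitrary illustrative assumptions, not values from the literature.

```python
# Forward-Euler sketch of the Hebb rule, eqn (1): tau_w * dw/dt = u_pre * u_post.
# Units, time step and activity values are arbitrary illustrative assumptions.
TAU_W, DT = 100.0, 1.0

def hebb_step(w, u_pre, u_post):
    return w + DT / TAU_W * (u_pre * u_post)

w = 0.1
for step in range(1000):
    w = hebb_step(w, u_pre=1.0, u_post=1.0)
print(w)   # grows linearly with time and never decreases (no threshold, no saturation)
```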
In fact, by experiments, it was revealed that there is a
threshold value of external stimulation for LTP induction, as
can be seen in Fig. 2. Dudek and Bear measured excitatory
postsynaptic potential (EPSP) in CA1 of the hippocampus of
adult rats while varying the frequency of external current pulses.39 An increase in the EPSP slope, i.e. the EPSP change against time, from its baseline value denotes NMDAR-related channel opening, and thus the inwards diffusion of cations, e.g. Ca2+ and Na+, which play an important role in LTP induction as
discussed earlier. However, unless the frequency is higher
than a particular threshold, depression, rather than potentiation, takes place. Moreover, an unlimited increase of the synaptic weight is not observed.
Later, Bienenstock, Cooper, and Munro suggested an
empirical equation taking into account threshold postsynaptic
activity for LTP induction, which is referred to as the BCM
rule. The equation is given by
Fig. 2 (a) Dependence of EPSP slope change from the baseline EPSP slope,
induced by conditioning current pulse stimulation of various frequencies. These
measurements were performed on CA1 of the hippocampus of rats. The positive
EPSP changes imply the depolarization of the postsynaptic cell resulting from
NMDAR ion channel opening, i.e. potentiation, whereas the negative changes
imply depression. Time-dependent changes in the EPSP slope normalized by its
baseline slope with three different conditioning stimulation frequencies, (b) 3,
(c) 10, and (d) 50 Hz. Conditioning stimulation start point was set to be time
zero in (b), (c), and (d). Reproduced with permission from Ref. 39.
τ_w dw/dt = u_post u_pre (u_post − θ_post)    (2)
where θ_post denotes the threshold postsynaptic activity for LTP induction.40 Unless the postsynaptic activity is larger than the threshold, the right-hand side becomes negative, implying a decrease in the synaptic weight. Moreover, the BCM rule indicates that the threshold itself varies with the postsynaptic activity over time, as described by the following equation:
τ_θ dθ_post/dt = u_post² − θ_post    (3)
where τ_θ denotes a time constant of the threshold change. In eqn (3), one can notice that the threshold increases with the postsynaptic activity. This increase in the threshold prevents uncontrolled growth of the synaptic weight, i.e. the weight becomes saturated even under continuous stimulation (see eqn (2)), which is more realistic. This aspect is missing in the Hebb rule in eqn (1).
Oja modified the Hebb rule to avoid uncontrolled growth of
synaptic weight with time and suggested the Oja rule, which is
given by
τ_w d|w|²/dt = 2 u_post² (1 − α|w|²)    (4)
where α is a positive constant.41 Note that there are several more
equations attempting to account for time-dependent
(dynamic) synaptic plasticity with a particular threshold and
continuous change in synaptic weight, such as the covariance
rule.42 These aforementioned macroscopic models account for
encoding information in neural networks by means of
neighbouring neurons’ activation.
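The sketch below shows how eqns (2)–(4) remove the unlimited growth of eqn (1). It is our own illustration: the constants are assumed, the postsynaptic activity in the BCM part is modelled as a linear neuron (u_post = w·u_pre), which is an added assumption not stated in the text, and eqn (4) is used in the reconstructed form given above.

```python
# Forward-Euler sketches of the BCM rule (eqns (2) and (3)) and the Oja rule
# (eqn (4) as reconstructed above). All constants are illustrative assumptions.
DT = 1.0
TAU_W, TAU_THETA, ALPHA = 1000.0, 10.0, 1.0

# BCM: assume a linear postsynaptic neuron (u_post = w * u_pre) and a sliding threshold.
w, theta, u_pre = 0.3, 0.0, 1.0
for _ in range(20000):
    u_post = w * u_pre
    w     += DT / TAU_W * u_post * u_pre * (u_post - theta)   # eqn (2)
    theta += DT / TAU_THETA * (u_post**2 - theta)             # eqn (3)
print(round(w, 3), round(theta, 3))   # weight saturates near 1 instead of diverging

# Oja: the squared weight norm relaxes towards 1/alpha under sustained activity.
norm_sq, u_post = 0.01, 1.0
for _ in range(20000):
    norm_sq += DT / TAU_W * 2.0 * u_post**2 * (1.0 - ALPHA * norm_sq)  # eqn (4)
print(round(norm_sq, 3))              # -> 1/alpha, i.e. a bounded synaptic weight
```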
The mechanism for synaptic plasticity shown in Fig. 1 also needs to be analyzed in the time frame of the process. The reason is that the activation of presynaptic and postsynaptic neurons does not necessarily lead to plasticity. If the relative timing of presynaptic and postsynaptic spikes falls outside a particular timing window, no plasticity takes place despite the activation of both neurons. That is, the spike timing is an important factor for plasticity, i.e. the spike timing is encoded in neural networks.43 This relationship is referred to as spike-timing-dependent plasticity (STDP).43 When a presynaptic (postsynaptic) spike is generated ahead of a postsynaptic (presynaptic) spike and the time lag is less than approximately 20 ms, the synapse undergoes potentiation (depression), as shown in Fig. 3.43 Note that STDP is observed in neighbouring neurons in a subthreshold connection, i.e. the presynaptic activation is too weak to trigger action potential firing of the postsynaptic neuron.43 As a matter of
fact, STDP is one of the behaviours that have been most often
presented by researchers working on artificial synaptic
systems to demonstrate their devices’ synapse-like behaviour
in various input application schemes. The input application
schemes depend on each system’s physical concept. The
details of these schemes will be discussed in the following
sections.
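A commonly used phenomenological description of an STDP window like the one in Fig. 3 is a pair of exponentials in the spike-time difference. The sketch below is such a generic description, not the specific parametrization of Ref. 43; the amplitudes and the roughly 20 ms time constants are assumptions chosen only to illustrate the asymmetric window.

```python
import math

# Generic exponential STDP window: dt = t_post - t_pre (ms).
# Amplitudes and time constants are illustrative assumptions (~20 ms window).
A_PLUS, A_MINUS = 1.0, 0.5
TAU_PLUS, TAU_MINUS = 20.0, 20.0

def stdp_weight_change(dt_ms):
    """Relative synaptic weight change for a single pre/post spike pair."""
    if dt_ms > 0:    # pre before post -> potentiation
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    if dt_ms < 0:    # post before pre -> depression
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

for dt in (-40, -10, -1, 1, 10, 40):
    print(dt, round(stdp_weight_change(dt), 3))
```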
Fig. 3 STDP of hippocampal neurons. The positive (negative) change in the EPSC amplitude means potentiation (depression). A positive Δt denotes presynaptic activation ahead of postsynaptic activation, whereas a negative Δt denotes the opposite case. Reproduced with permission from Ref. 43. Copyright 1998, The Society for Neuroscience.
4. Realization of artificial synapses using
inorganic systems
4.1 Ferroelectricity
Ferroelectric materials exhibit two distinctive states of their
spontaneous polarization, i.e. ferroelectric up (+P) and down (−P). These two states are defined by thermodynamics, that is, both states ideally correspond to free-energy-minimizing configurations.
By changing the direction of applied electric field, i.e.
polarity alternation, ferroelectric polarization can be reversed.
That is, ferroelectric switching is ‘bipolar-type’.
However, there is a 'threshold' electric field for polarization reversal, which is referred to as the coercive electric field (Ec); only electric fields higher than the coercive electric field can lead to spontaneous polarization reversal.
The basic device structure utilizing ferroelectrics is a passive metal–ferroelectric–metal (MFM) stack, i.e. a capacitor. This type of ferroelectric device has been investigated for more than two decades for ferroelectric random access memory (FRAM) applications.44 However, since these capacitor-type MFM devices exhibit only binary states, i.e. '0' and '1' corresponding to +P and −P, this type of device does not appear suitable for mimicking neurons and synapses, which exhibit analogue-type state changes. However, a
ferroelectric medium is often composed of multiple domains rather than a single domain.45 In an MFM device, the ferroelectric region between the electrodes therefore includes multiple domains, and its remnant ferroelectric polarization is determined by the average polarization value. That is, by controlling the domain configuration, 'various polarization states' lying between +P and −P can basically be achieved. In this case, the remnant polarization, i.e. capacitance, works as the synaptic weight.
Another parameter related to ferroelectric switching is the dc electric current, which is strongly coupled with the ferroelectric switching and is able to work as a synaptic weight. This current–ferroelectric coupling can be observed in particular device types such as the ferroelectric tunnelling junction (FTJ).46–49 The
electric potential profile of the tunnel barrier is altered depending on the ferroelectric polarization direction, and thus the tunnelling matrix element changes upon ferroelectric switching.46,47,50 By means of ferroelectric domains, 'various average resistance states' appear to be realizable, which lie between the two distinctive states corresponding to +P and −P. Fig. 4
identifies a strong correlation between ferroelectric domain
configuration and FTJ’s resistance, measured on a Au/Co/
BaTiO3(2 nm)/La0.67Sr0.33MnO3 FTJ.49 Depending on the
programming voltage pulse height, various resistance states could be achieved, as shown in Fig. 4(a). In Fig. 4(b), it can also be seen that the ferroelectric domain configuration is controllable, and thus so is the FTJ's resistance. Therefore, this type of FTJ can fulfil the requirements of an artificial synapse. Note that since ferroelectric switching is bipolar-type, so is the resistance change of the FTJ.
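The correlation in Fig. 4(b) can be rationalized with a simple two-channel picture in which up- and down-polarized domains conduct in parallel in proportion to their area fractions. The sketch below is only a schematic model under that assumption; the two limiting resistance values are arbitrary and are not taken from Ref. 49.

```python
# Schematic parallel-conduction picture of an FTJ with mixed ferroelectric domains.
# R_UP and R_DOWN are arbitrary illustrative values for the fully polarized states.
R_UP, R_DOWN = 3e5, 3e8   # ohms (low- and high-resistance extremes)

def ftj_resistance(down_fraction):
    """Effective resistance when a fraction of the barrier area is down-polarized.

    Each domain population is treated as an independent conduction channel,
    so the conductances add in proportion to the domain areas.
    """
    g = (1.0 - down_fraction) / R_UP + down_fraction / R_DOWN
    return 1.0 / g

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f, f"{ftj_resistance(f):.2e}")   # analogue resistance states between +P and -P
```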
Metal–ferroelectric–semiconductor field-effect transistors (MFSFETs) have been considered as a ferroelectric device structure able to achieve analogue-type changes in the device's resistance state depending on the ferroelectric switching.51–53 The
idea is that the ferroelectric layer serving as a gate insulator
undergoes the change of its ferroelectric domain configuration
under the electric field applied to the gate (similarly to the
change shown in Fig. 4(b)), and this change in the domain
configuration leads to the modulation of the source-to-drain
resistance. That is, the source-to-drain resistance works as
synaptic weight. Of course, the domain configuration change
takes place by applying an electric field higher than the
coercive electric field serving as ‘threshold’. In this MFSFET
Fig. 4 (a) Resistance versus voltage-pulse-height loops measured on a Au/Co/
BaTiO3(2 nm)/La0.67Sr0.33MnO3 FTJ. The arrows denote the measurement
sequence. The width of the programming voltage pulses was 20 ns. The read-out voltage was set to 0.1 V. (b) Resistance versus the percentage of down domains, implying a strong correlation between the resistance and
the ferroelectric effect. The domain configuration was measured using a PFM.
The red symbols and the blue symbols denote down-to-up and up-to-down
switching behaviours, respectively. Reproduced with permission from Ref. 49.
Copyright 2012, Nature Publishing Group.
structure, the plasticity, i.e. potentiation and depression, of
synaptic weight, is achieved by alternating the polarity of
applied voltage, i.e. the plasticity is bipolar-type. Each polarity is responsible for one type of plasticity, potentiation or depression.
As mentioned earlier, STDP appears to be a synaptic
behaviour that researchers working on artificial synaptic
systems most often present to demonstrate their devices’
synapse-like behaviour. Recently, Nishitani et al. have demonstrated the STDP behaviour of a Pb(Zr,Ti)O3-based MFSFET device.53 They claimed that three-terminal devices have the advantage that 'signal processing' and 'learning' can be done simultaneously, as simultaneous 'reading' and 'programming' are possible in three-terminal digital devices.
Using a multiplexer (switch) attached to the gate electrode,
selective gate voltage (VGS) application could be achieved as
shown in Fig. 5. The multiplexer connected a presynaptic
voltage (VPRE) to the gate electrode (VGS = VPRE) when a postsynaptic voltage (VPOST) > 0.1 V, otherwise VGS = 0. Because VPRE has a voltage gradient with respect to time (see
Fig. 5(a)), the timing of VPRE and VPOST assigns different
voltage pulses to VGS as can be seen in Fig. 5(c). Consequently,
the source-to-drain resistance is altered and, as mentioned
earlier, VGS polarity alternation is required for the depression-to-potentiation transition.
Employing the resistance, rather than the capacitance, of a ferroelectric as the synaptic weight has the advantage that it is much freer from scaling issues at small dimensions, as are resistance-based non-volatile memory devices in digital memory technologies.54 Nevertheless, there have been several trials
attempting to utilize ferroelectric capacitors rather than
resistors in artificial synaptic devices.55,56 Considering that the ferroelectric domains in a ferroelectric medium do not allow a sudden change in the average polarization at a single Ec, one can write
various polarization states, i.e. analogue polarization, by using
different electric field windows.55,56 Depending on the
polarization state, the voltage across the ferroelectric capacitor changes, and this voltage, referred to as the weight voltage, can be utilized as the synaptic weight. For the aforementioned two types of ferroelectric-based synapses, the synaptic weight is supposed to converge to one of the two poles, i.e. the +P and −P states, unless other mechanisms are involved in the plasticity, such as dielectric breakdown due to localized electrochemical effects.57–59 Therefore, these systems can avoid the uncontrolled growth of synaptic weight.
4.2 Phase-change of higher chalcogenides
Fig. 5 (a) VPRE and (b) VPOST applied to a multiplexer connected to the gate
electrode of a Pb(Zr,Ti)O3-based MFSFET. The multiplexer transmits the voltage
pulses shown in (c) to the gate, meaning that the timing of VPRE and VPOST
determines VGS. (d) The source-to-drain current change of the MFSFET with
respect to the timing of VPRE and VPOST. Reproduced with permission from Ref.
53. Copyright 2012, American Institute of Physics.
Higher chalcogenides, i.e. compounds containing chalcogens other than oxygen, are well known to exhibit microstructure-controlled resistance states. That is, disordered (amorphous)
and ordered (crystalline) states lead to high and low resistance
states, respectively. Moreover, reversible transitions between
these states can be achieved by means of Joule heat resulting
from current flow through the chalcogenide. Moreover, these
transitions are non-volatile, i.e. written information can be
retained for a long time. Phase-change random access
memories (PRAMs) are representative devices utilizing two
distinctive resistance states, i.e. amorphous and crystalline
phases, for binary information storage.60 Note that these two
states differ from those of ferroelectrics in terms of thermodynamic stability, i.e. unlike ferroelectrics the two distinctive
states are defined by kinetics rather than thermodynamics.13
What matters in employing the phase-change behaviour in
artificial synaptic devices is the realization of various
(analogue) resistance states lying between the two extremes, i.e. the amorphous and crystalline phases. It appears that intermediate states can be obtained by controlling the programming inputs, which consequently lead to partially crystalline phases, so that the average resistance states lie in between.61 For this
purpose, nucleation-dominated phase-change materials such
as GeTe-based alloys are preferable to growth-dominated ones
such as Ge2Sb2Te5.62–64
The resistance change of phase-change chalcogenides is 'unipolar-type', i.e. voltage polarity alternation is not required for either a resistance increase or decrease. This aspect is attributed to the key component in the phase-change mechanism, namely Joule heat, which is barely affected by the current flow direction. Therefore, the plasticity induction scheme for these materials differs from that for ferroelectric-based artificial synaptic devices, where a voltage polarity
alternation scheme should be employed. Instead, voltage pulse
width and height alternation is required for plasticity
induction. For reset switching, i.e. crystalline-to-amorphous
phase transition (low-to-high resistance transition), voltage
pulses of large height and short width are required because
the transition is driven by a melt-quench process, i.e. large
heat with a short duration time is necessary.65 For set
switching, i.e. amorphous-to-crystalline phase transition,
voltage pulses of lower height but larger width are required
because one should heat up the material slowly to crystallize it
and also the phase transition is time-consuming.65 These set
and reset switching voltages work as a ‘threshold’ for the
plasticity.
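The set/reset pulse requirements described above can be summarized in a small selection routine. The sketch below only encodes the qualitative rule stated in the text (tall, short pulses for reset; lower, longer pulses for set, with no polarity change); the voltage and duration values are assumptions for illustration and are not taken from Ref. 65.

```python
# Qualitative pulse-selection rule for a phase-change (unipolar) synapse.
# Voltage amplitudes and widths are illustrative assumptions only.
RESET_PULSE = {"height_V": 3.0, "width_ns": 50}     # melt-quench -> amorphous (depression)
SET_PULSE   = {"height_V": 1.2, "width_ns": 500}    # slow crystallization (potentiation)

def choose_pulse(weight_change):
    """Return the programming pulse for a requested weight change.

    Positive changes (potentiation) use the lower, longer set pulse;
    negative changes (depression) use the taller, shorter reset pulse.
    The polarity never changes because the switching relies on Joule heating.
    """
    return SET_PULSE if weight_change > 0 else RESET_PULSE

print(choose_pulse(+0.1))   # set-type pulse
print(choose_pulse(-0.1))   # reset-type pulse
```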
By using the aforementioned scheme, Kuzum et al. have
successfully demonstrated the STDP of Ge2Sb2Te5 (GST), as
can be seen in Fig. 6.66 Basically, the STDP scheme that they
employed is similar to the scheme that Nishitani et al. used as
shown in Fig. 5 in the sense that a square voltage pulse
(postsynaptic spike), superimposed on a well-designed voltage pulse train, triggers plasticity. The timing of pre- and post-spikes thus assigns various voltage pulse heights to the GST layer, so that the plasticity becomes dependent on the timing.
In phase-change-based artificial synaptic devices, it seems
that there is a difficulty in gradual LTD induction, i.e. gradual
amorphization of crystalline chalcogenides.64,67 Bichler et al.
have suggested an architecture using a crystallization process to implement both LTP and LTD induction.67 A building block of this architecture is called a 2-PCM synapse, which consists of a pair of phase-change memory cells, termed the LTP and LTD devices. A positive change in the current through the LTP device, which is attributed to crystallization, contributes a higher current to the postsynaptic complementary metal-oxide-semiconductor (CMOS) neuron.
However, the same positive change in the current through
the LTD device negatively contributes to the current to the
Fig. 6 STDP of a Ge2Sb2Te5 based device using the pre- and post-spike timing
scheme shown in (b), (c), and (d). Resistance-change of the phase-change
material is unipolar. Reproduced with permission from Ref. 66. Copyright 2011,
American Chemical Society.
postsynaptic CMOS neuron because of an inverter in serial
connection with the LTD device.64,67
So far, there have been no attempts to realize the 'elasticity' of the resistance change of higher chalcogenide materials for application in artificial neurons. In general, an elastic resistance change can arise from a mono-stable resistance change. In this sense, the well-known mono-stable resistance-change behaviour of higher chalcogenides, which is referred to as threshold switching, may be suitable for realizing artificial neurons.68–70
4.3 Ferromagnetism
In the late 1990s, spin-transfer torque switching was discovered, forming a basis for a new type of magnetic
memory.71–74 Recently, spin-transfer-torque-induced magnetization switching in magnetic tunnel junctions (MTJs) and the consequent analogue memory-type resistance change have been theoretically investigated by Wang et al.75 Moreover, this resistance change of MTJs is of a 'bipolar-type'. That is, MTJs may behave as analogue-type memory devices, which are popularly referred to as memristors. Besides this theoretical prediction,
Krzysteczko et al. have experimentally achieved MgO-based
MTJs working similarly to memristors.76,77 Both parallel and
antiparallel states were found to allow non-volatile resistive
switching, resulting in a resistance difference between the high and low resistance states of a few per cent.76 The resistance change is gradual and is thought to be induced by the cumulative applied flux, i.e. the integral of the applied voltage
over time. These aspects are consistent with the general
memristor concept.78–82 The memristor concept may be
suitable to emulate STDP behaviour as theoretically identified
by Zamarreño et al.83 Note that the mechanism for the resistive
switching was estimated to be similar to that for resistive
switching in transition metal oxides (as will be discussed in
the next section) instead of magnetic effects.
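The statement that the resistance change follows the cumulative applied flux can be written as a minimal flux-controlled memristor update. The sketch below is a generic illustration of that concept only, not the model of Ref. 76 or 77; the linear mapping from the state variable to the resistance and all numerical values are assumptions.

```python
# Minimal flux-controlled memristor sketch: the internal state follows the
# time integral of the applied voltage (the flux), clipped to [0, 1].
# The linear state-to-resistance mapping and all values are illustrative assumptions.
R_LOW, R_HIGH = 1e3, 1e4   # ohms
K, DT = 0.5, 1e-3          # state change per volt-second; time step (s)

def step(state, voltage):
    state += K * voltage * DT            # d(state)/dt proportional to applied voltage
    return min(max(state, 0.0), 1.0)     # bounded state variable

def resistance(state):
    return R_HIGH - state * (R_HIGH - R_LOW)

state = 0.0
for _ in range(2000):                    # positive bias gradually lowers the resistance
    state = step(state, +1.0)
print(f"{resistance(state):.0f} ohm")
for _ in range(1000):                    # reversed polarity partially restores it (bipolar)
    state = step(state, -1.0)
print(f"{resistance(state):.0f} ohm")
```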
The aforementioned memory effect forms a basis for
artificial synapses. It was found that the resistive switching
requires an applied voltage higher than a particular ‘threshold’
for plasticity.
Similar to ferroelectric-based synapses (but unlike phase-change-based ones), alternation of the programming voltage polarity is required for the transition between potentiation and depression. The STDP behaviour of MgO-based MTJs was demonstrated by means of the timing of two sawtooth spikes.77 The timing of the two sawtooth spikes leads to different voltage maxima. The voltage maximum increases with decreasing timing Δt when Δt > 0, whereas the voltage minimum decreases with decreasing timing Δt when Δt < 0 (see Fig. 7(a)).
The same MTJ was found to emulate the most important
functionality of neurons, i.e. action potential firing.77 As
discussed in section 3, action potential firing is caused by
an elastic, rather than plastic, state change, so that the plasticity discussed up to now cannot be utilized for action
potential firing. Spin-transfer torque switching has been found
to lead to not only parallel and antiparallel magnetic states but
also several intermediate states. These intermediate states can
be switched back to their original states, and this back-switching is popularly referred to as back-hopping.77,84–86 This
back-hopping takes place in MTJs with low magnetic thermal
Fig. 7 (a) STDP behaviour of a MgO-based MTJ using sawtooth spikes,
measured at room temperature. (b) Action potential firing of the same MTJ at a
temperature of 130 K. For this measurement, spin-transfer torque induced
magnetization and subsequent back-hopping effects were utilized. Reproduced
with permission from Ref. 77. Copyright 2012, Wiley-VCH.
activation energy, i.e. low thermal energy barrier, so that it
perhaps results from thermally assisted switching.84,85 This
back-hopping enables the MTJs to exhibit the ‘elasticity’ of
state, i.e. resistance, change. Fig. 7(b) shows a resistance
oscillation induced by the back-hopping effect.77 They have
observed the back-hopping effect only in the case of parallel-to-antiparallel switching.77 This implies that action potential firing and its transmission take place in one direction only. However, Min et al. have found the same back-hopping effect for both cases, i.e. parallel-to-antiparallel and antiparallel-to-parallel switching, although the back-hopping probability for parallel-to-antiparallel switching prevails over the other case.85
4.4 Nanoionics
Nanoionics-based memories have been considered as emerging non-volatile memories in digital computers, and depending on their working principles they are termed differently.13,87,88 In terms of the working principle of nanoionics-based memories, one can classify them as anion- and cation-migration-induced memories. For the former and the latter, non-volatile switching between at least two distinctive resistance states is triggered by the migration of anions and cations, respectively, and related redox reactions.87 Note that both types are referred to as valence-change memories since redox reactions are involved in the switching.88 Depending on the operation scheme, the memories are classified as unipolar- and bipolar-type. In the former case, the resistive switching
operation is barely affected by the polarity of programming
voltage as long as the voltage is higher than a ‘threshold’
voltage, whereas, in the latter case, voltage polarity alternation
is of importance for switching operation.13
Up to now, a number of materials, e.g. binary transition
metal oxides, perovskite-type complex transition metal oxides,
have been found to exhibit their ‘intrinsic’ resistive switching
behaviours.13 These oxide materials most likely belong to the
anion-migration-induced memory class. In the case of cation-migration-induced memory, the switching cell consists of
(diffusive metal electrode, e.g. Cu and Ag)/(solid electrolyte)/
(inert metal electrode, e.g. Pt and Au) stacks. The switching is
believed to be attributed to redox reactions of Cu atoms/ions,
their migration, and the evolution of a second phase, i.e. Cu
metal precipitation.88–90 Concerning the solid electrolyte,
various materials are available, for instance, SiOx, GeSe,
TaOx.90
The switching material and/or the switching operation conditions determine the resistive switching type, unipolar or bipolar. Even the same material, e.g. TiO2, can show both unipolar and bipolar behaviours depending on the maximum current allowed to flow through the cell during operation, i.e. the compliance current.91 However, it appears that bipolar-type
resistance-change is perhaps preferable to unipolar-type for
artificial synapse application. In bipolar-type switching operation, electric field direction is of great importance unlike
unipolar-type switching.13 That is, the electric field direction
can be encoded in the bipolar-switching-based artificial
synapse, which would be ignored in case of unipolar-type
switching. For instance, implementing STDP behaviour
requires distinguishing presynaptic and postsynaptic action
potentials having opposite transmission directions. Moreover,
unipolar switching is, in general, known to show very abrupt
set (high-to-low resistance transition) and reset (low-to-high
resistance transition) switching, so that there is perhaps a
difficulty in achieving analogue-type resistance states.
4.4.1 Short- and long-term plasticity. Cation- and anion-migration-induced memories have often been employed to emulate neural functionalities, where the memory's resistance/conductance serves as the synaptic weight.92–97 For instance, long-term plasticity has been observed in various nanoionic systems such as cation-migration-induced memories, e.g. Cu/SiO2/Ge0.3Se0.7/Pt,92 Ag/Ag : Si/Si/W,94 Ag/Ag2S/Pt,93 anion-migration-induced memories, e.g. Pt/Cu2O/W,98 Al/TiOx/W,99 TiN/HfOx/AlOx/Pt,100 InGaZnO,101 and cation-anion-migration-induced memories, e.g. Ti/RbAg4I5/2-methoxy-5-(2'-ethylhexyloxy)-p-phenylene vinylene (MEH-PPV)/Si.102 Not only two-dimensional material systems but also self-assembled Ag nanowires have been found to show long-term plasticity.103
If one goes through several related papers, it can be noticed
that long-term plasticity/memory as well as short-term
plasticity/memory is of concern.93,95–97 Ohno et al. have reported the STP of Ag/Ag2S/Pt atomic switches under a voltage pulse train (80 mV height, 0.5 s width, and 0.0488 Hz frequency), as can be seen in Fig. 8(a).93 However, when the frequency of the voltage pulse train was increased to 0.4 Hz, the
Fig. 8 Conductance change of a Ag/Ag2S/Pt atomic switch on applied voltage
pulse trains. (a) STP of the atomic switch, resulting from the voltage pulse train
shown below (80 mV pulse height, 0.5 s pulse width, and 0.0488 Hz frequency).
(b) LTP of the atomic switch, resulting from the voltage pulse train shown below
(80 mV pulse height, 0.5 s pulse width, and 0.4 Hz frequency). Reproduced with
permission from Ref. 93. Copyright 2011, Nature Publishing Group.
atomic switch showed LTP after six voltage pulses (see
Fig. 8(b)). This aspect demonstrates that there is a particular
‘threshold’ input frequency for LTP induction. As a matter of
fact, this is what takes place in mammals’ brains and, as
mentioned earlier, several theories, e.g. the BCM rule and the
Oja rule, eqn (2) and (4), respectively, were suggested to
explain a threshold for LTP induction.
Moreover, what Fig. 8(b) implies is that repeating action
potential application to the artificial synapse induces LTP if
the action potential frequency is high enough. Otherwise,
memory retention decay, implying forgetting, merely takes
place. With an increasing number of action potential pulses, the memory retention time gradually increases. For the Ag/Ag2S/Pt atomic switch, the memory retention versus time relationships were analyzed using the power law, y = b × t^(−m), where
the exponent (−m) is taken as a parameter indicating the memory retention time, i.e. a decrease (increase) in m means an increase (decrease) in retention time. In fact, this power law was suggested for the quantitative analysis of forgetting events by Rubin and Wenzel.104 It should be kept in mind that the power-law-based forgetting mechanism needs to be understood at the level of complex neural networks rather than at the individual synapse level.
Similar experimental observations were reported on Pd/WOx/W artificial synapses, belonging to the anion-migration-induced memory class, by Chang et al.97 With an increasing number of action potential pulses, the current decay time, i.e. the memory retention time, and the absolute current level, i.e. the conductance, increase, as can be seen in Fig. 9. For the memory-retention-curve fitting, they used a stretched-exponential function, Q(t) = I0 exp[−(t/τ)^β], where I0, τ, and β denote a pre-exponential factor, a relaxation time constant, and a stretch index, respectively.105
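Both retention models mentioned above can be fitted to a measured decay curve with a standard least-squares routine. The sketch below generates a synthetic decay curve (not data from Ref. 93 or 97) and fits the power law and the stretched exponential to it, so the extracted b, m, I0, τ and β are only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the two retention models discussed above to a synthetic decay curve.
# The synthetic data and all starting values are illustrative assumptions.
def power_law(t, b, m):                # y = b * t^(-m), Rubin-Wenzel-type forgetting curve
    return b * t**(-m)

def stretched_exp(t, i0, tau, beta):   # Q(t) = I0 * exp[-(t/tau)^beta]
    return i0 * np.exp(-(t / tau)**beta)

rng = np.random.default_rng(0)
t = np.linspace(1.0, 100.0, 200)                       # time after the last pulse (a.u.)
data = stretched_exp(t, 1.0, 30.0, 0.5) + rng.normal(0.0, 0.01, t.size)

(b, m), _ = curve_fit(power_law, t, data, p0=(1.0, 0.5))
(i0, tau, beta), _ = curve_fit(stretched_exp, t, data, p0=(1.0, 10.0, 1.0))

print(f"power law:             b = {b:.2f}, m = {m:.2f}")   # smaller m -> longer retention
print(f"stretched exponential: I0 = {i0:.2f}, tau = {tau:.1f}, beta = {beta:.2f}")
```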
The memory retention decay phenomena are perhaps
related to ionic relaxation. Here, ionic relaxation can include
any ionic processes induced by a restoring force. Considering
the atomic switch that Ohno et al. used, the Ag electrode serves
as a Ag+ source when the Ag electrode is the anode. The redox
reaction involving the Ag electrode is as follows:
Ag ⇌ Ag+ + e−    (5)
Fig. 9 (a) Memory retention of a Pd/WOx/W-based artificial synapse after a voltage pulse train (voltage pulse height: 1.3 V, each pulse width: 0.4 ms, and frequency: ca. 16.5 Hz). N means the number of voltage pulses in each voltage train. (b) Memory-retention-curve fitting was performed using the equation Q(t) = I0 exp[−(t/τ)^β]. The important fitting parameters, τ and I0, were plotted with respect to the number of stimulations. The increase of τ and I0 means an increase of the retention time and the synaptic weight, and thus the graph denotes the appearance of LTP with an increasing number of voltage pulses. (c) A suggested mechanism for the STP-to-LTP transition. In this work, the lateral diffusion of oxygen vacancies was regarded as a reason for the memory retention decay. The number of oxygen vacancies increases with the voltage pulse number, and the increased number of oxygen vacancies hinders their lateral diffusion. Reproduced with permission from Ref. 97. Copyright 2011, American Chemical Society.
At the anode, the above reaction proceeds in the forward
direction (oxidation) only if the ‘overpotential’ at the anode is
positive. Otherwise, the reaction direction becomes the other
way around (reduction). During the forward reaction, the
concentration of Ag+ increases, and thus the Nernst potential,
i.e. chemical potential, is evolved. This Nernst potential leads
to restoring force, i.e. force for the backward reaction, so that if
the applied potential (to be precise, the potential across the
Helmholtz layer) is high enough to cancel out the Nernst potential, i.e. the overpotential > 0, the oxidation reaction
proceeds. However, as soon as the applied potential is
removed, the backward reaction proceeds due to the evolved
Nernst potential. Therefore, the relaxation process takes place.
Eventually, the Nernst potential recovers its equilibrium value
that is determined by the standard potential of the reaction.
These coupled, i.e. paired, reactions are accounted for by the
detailed balance.35 The above-mentioned theory perhaps
explains the memory loss kinetics.
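The restoring-force argument can be made concrete with the Nernst equation, E = (RT/zF) ln(c/c0), for the reaction in eqn (5). The sketch below evaluates how the potential driving the backward (reduction) reaction grows as the local Ag+ concentration rises above an assumed reference value; the concentration ratios are illustrative only.

```python
import math

# Nernst potential for the Ag <-> Ag+ + e- couple: E = (R*T / (z*F)) * ln(c / c_ref).
# The local-to-reference Ag+ concentration ratios are illustrative assumptions.
R, T, F, Z = 8.314, 298.0, 96485.0, 1        # J/(mol K), K, C/mol, electrons transferred

def nernst_mV(c_ratio):
    """Potential shift (mV) caused by a local Ag+ concentration c = c_ratio * c_ref."""
    return 1e3 * (R * T) / (Z * F) * math.log(c_ratio)

for ratio in (1, 10, 100, 1000):
    # the larger the accumulated Ag+ concentration, the larger the back-reaction driving force
    print(ratio, round(nernst_mV(ratio), 1), "mV")
```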
Now, the question is how LTP takes place if the aforementioned ionic relaxation is at work. The Nernst potential
is a function of the concentration of Ag+ ions, and thus an
increase in Ag+ concentration leads to an increase in the
Nernst potential. If the system has a Ag+ ion sink, the increase
of the Nernst potential can be avoided so that the restoring
force can be largely reduced. Fortunately, we are aware of the
evolution of Ag filaments, i.e. Ag precipitation, in Ag-based
cation-migration-induced memory. The Ag filaments can work
as Ag+ sinks, following the reaction in eqn (5) in the backward
direction. In fact, the filaments are thought to grow from the
cathode, so that the filaments' front ends serve as a moving
cathode. That is, the reduction reaction can proceed at the
surface of the moving cathode. Therefore, it is estimated that a
third reaction (independent of the paired reactions), decreasing the restoring force, and thus retarding the backward
reaction, is required to be involved in long-term state change,
i.e. LTP.
As discussed in section 3, the LTP of a synapse involves a
third reaction, i.e. CaMKII autophosphorylation, that enables
the synaptic activity to be retained even after Ca2+ concentration drops down to its basal level. This aspect is indeed very
similar to the above-mentioned electrochemical behaviour.
The Ca2+ concentration increase, due to presynaptic and
postsynaptic activation, corresponds to the increase of Ag+
ions due to the oxidation reaction. The CaMKII autophosphorylation corresponds to Ag precipitation. The decrease of Ca2+ ions while the synapse recovers its resting state is comparable with the Ag+ ion decrease due to the restoring
force resulting from the Nernst potential evolved during the
oxidation process.
Compared with cation-migration-induced memory, anion-migration-induced memory appears to be more complex in terms of the mechanisms for its resistance change. Nevertheless, there is a consensus that this memory type also involves redox reactions.
The key redox reaction is perhaps different depending on the point defect structure, e.g. hypo-stoichiometry or hyper-stoichiometry.13 Fig. 10(a) and (b) are schematics of the migration of point defects in hyper- and hypo-stoichiometric oxides, respectively. The major defect structures of the hypo- and hyper-stoichiometric oxides are assumed to be oxygen vacancies and cation vacancies, respectively. For both systems, the anode is regarded as an oxygen vacancy source, so that the oxygen vacancies introduced at the anode migrate towards the cathode, mainly due to the electrostatic force, i.e. drift. In this case, the redox reaction responsible for the resistance change is as follows:
O_O^x ⇌ V_O^•• + 2e′ + ½O2(g)    (6)
This reaction is expressed in the Kröger–Vink notation, where O_O^x and V_O^•• denote an oxygen ion on an oxygen site and an oxygen vacancy, respectively.106 Oxygen vacancies serve as electron donors, i.e. self-doping, so that, in hypo-stoichiometric oxides, introducing oxygen vacancies via eqn (6) leads to an increase in conductivity.87,107 The reaction in eqn (6) appears to be more complicated than eqn (5) since eqn (6) involves two different phases, i.e. solid and gas phases, and the oxygen gas activity of the reservoir affects the reaction.107–109
Fig. 10 Schematic of point defect generation at the anode and the migration due to the applied electrostatic force for (a) hyper-stoichiometric NiO and (b) hypo-stoichiometric TiO2. The point defects are described by the Kröger–Vink notation. For instance, V_Ni″ and V_O^•• are a Ni vacancy having a charge of −2 and an oxygen vacancy having a charge of +2, respectively. Reproduced with permission from Ref. 13. Copyright 2012, Institute of Physics.
Sinks in anion-migration-induced memories also play a crucial role in the evolution of the restoring force. In this system, lower oxide phases, which have higher conductivities than the stoichiometric higher oxides, perhaps serve as oxygen vacancy sinks.13 For instance, TiO2, VO2, and WO3 likely form Magnéli phases when reduced. In particular, transmission electron microscopy revealed that reduced TiO2 includes TinO2n−1 Magnéli phases.110–112 For oxides in which lower oxide phases are not available, e.g. NiO, metal precipitation is likely preferred instead. Therefore, with the evolution
of these lower oxides or metals, which finally form conducting
filaments, the restoring force can largely be reduced, and LTP
eventually appears.
4.4.2 Non-associative and associative learning. Up to now,
we have dealt with the synaptic weight change induced by a single input applied to the artificial synapse. Such a synaptic weight change constitutes a course of learning. In biological systems, implicit memory is
known to alter the function and structure of synaptic
connections, where LTP is believed to be the important
precondition for learning and memory processes. The class
of implicit memory underlies non-associative learning (habituation and sensitisation) and associative learning (classical
conditioning) and is automatic in quality.113,114 The basic
concept of non-associative learning is shown in Fig. 11.92
Fig. 11 (a) Schematic of neural circuitry for non-associative learning. (b)
Electrical circuit layout mimicking the neural circuitry shown in (a). R1 and RM
denote a voltage divider and an artificial synapse, respectively. Vin and Vout denote the input and output voltages, respectively. Reproduced with permission from Ref.
92. Copyright 2012, Wiley-VCH.
Depending on the synaptic weight change, a signal (stimulus)
can be transmitted from the presynaptic neuron (labelled as
sensory neuron in Fig. 11(a)) to the postsynaptic neuron
(labelled as motor neuron in Fig. 11(a)).
The implementation of the aforementioned artificial synapses in non-associative learning circuits based on voltage sensing in fact requires voltage dividers. Potentiation and depression are often defined such that the former means a resistance decrease and the latter a resistance increase; however, this definition is not necessarily correct. Fig. 11(b) shows a voltage-sensing-based circuit emulating non-associative learning.92 Potentiation (depression) corresponds to an increase (decrease) in the output voltage Vout. When the artificial synapse (RM) is in its high resistance state, Vout is higher than when it is in its low resistance state, so that an increase in RM denotes potentiation. However, this depends on the circuit configuration. If RM is placed ahead of the voltage divider (R1), an increase (decrease) in RM causes a decrease (increase) in the output voltage, implying depression (potentiation).
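The configuration dependence described above can be made concrete with the standard voltage-divider relation. The short Python sketch below is our own illustration (the component values are arbitrary), showing that the same change in RM reads as potentiation in one arrangement and as depression in the other:

```python
# Voltage-divider readout of an artificial synapse RM in series with a resistor R1.
# Case A: Vout taken across RM -> Vout = Vin * RM / (R1 + RM)
# Case B: RM placed ahead of the divider, Vout taken across R1
#         -> Vout = Vin * R1 / (R1 + RM)

def vout_across_rm(vin, r1, rm):
    return vin * rm / (r1 + rm)

def vout_across_r1(vin, r1, rm):
    return vin * r1 / (r1 + rm)

VIN, R1 = 1.0, 1e4                    # arbitrary illustrative values
for rm in (1e3, 1e5):                 # low and high resistance states of RM
    print(f"RM = {rm:8.0f} ohm | across RM: {vout_across_rm(VIN, R1, rm):.3f} V"
          f" | across R1: {vout_across_r1(VIN, R1, rm):.3f} V")

# Across RM, a higher RM gives a higher Vout (potentiation);
# across R1, the same change lowers Vout (depression).
```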
Besides non-associative learning emulation, an associative learning process involving more than one input has been
successfully emulated using the cation-migration-induced
memory Pt/SiO2/Ge0.3Se0.7/Cu as an artificial synapse.92 In
the experiments, two input voltages, conditional and unconditional inputs, were simultaneously applied to the electronic
circuit shown in Fig. 12(a). The two inputs were superimposed
by the adder in the circuit. To demonstrate the learning
behaviour of the circuit, a comparator was additionally used in
the associative learning circuit. When the circuit learned a
certain input pattern, the output voltage of the associative
learning circuit was changed. Note that this part of the circuit
of course digitized the system’s output. However, one should
bear in mind that this system is a simplified circuit utilized to
detect the success of the learning process.
It is well-known that the artificial synapse has a threshold
voltage for set (high-to-low resistance state transition) and
reset switching (low-to-high resistance state transition). The
input voltages were carefully chosen so that their sum exceeds the threshold voltage for potentiation, i.e. reset switching, while each individual voltage remains below the threshold.

Fig. 12 (a) Electrical circuit mimicking the neural circuitry for associative learning, shown in (b). RM is a Cu/SiO2/Ge0.3Se0.7/Pt cell. (c) The top graph is a conditional stimulus profile with respect to time, the middle one an unconditional stimulus profile, and the bottom one the response, i.e. Vout, to the superimposed conditional and unconditional stimuli. After two unconditional voltage pulses, a response to every conditional voltage pulse appears, implying associative learning. Reproduced with permission from Ref. 92. Copyright 2012, Wiley-VCH.

Therefore, only when the conditional and unconditional stimuli overlap constructively in the time frame does potentiation proceed. Consequently, after the potentiation, the conditional stimulus alone can cause an output voltage response, meaning that the synapse learnt by means of the association of the conditional and unconditional stimuli.
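The essential logic of this demonstration can be condensed into a few lines. The sketch below is a strongly simplified abstraction of the circuit in Fig. 12 (our own, not the authors' implementation): the adder superimposes the two stimuli, the artificial synapse is treated as a binary weight that is potentiated only when the summed voltage exceeds its switching threshold, and the comparator digitises the output. All voltage levels are illustrative placeholders.

```python
# Simplified abstraction of the associative-learning circuit of Fig. 12.
# Each stimulus alone stays below the switching threshold of the synapse;
# only their superposition potentiates it, after which the conditional
# stimulus alone produces a (digitised) output response.

V_SWITCH = 1.5                 # switching (potentiation) threshold of the synapse
V_COMP = 0.8                   # comparator threshold for the output
V_COND = V_UNCOND = 1.0        # individual pulse amplitudes (each < V_SWITCH)

def run(pulses):
    """pulses: list of (conditional, unconditional) booleans per time step."""
    potentiated = False
    for t, (cond, uncond) in enumerate(pulses):
        v_sum = V_COND * cond + V_UNCOND * uncond      # adder output
        if v_sum > V_SWITCH:                           # constructive overlap
            potentiated = True                         # long-term weight change
        v_out = V_COND * cond if potentiated else 0.0  # response via the synapse
        print(f"t={t}: cond={cond}, uncond={uncond}, "
              f"output={'HIGH' if v_out > V_COMP else 'low'}")

# Conditional pulse alone, two paired pulses, then conditional pulse alone again.
run([(True, False), (True, True), (True, True), (True, False)])
```

Before the paired pulses the conditional stimulus elicits no output; afterwards it does, which is the signature of associative learning reported in ref. 92.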
This work is worth comparing with the associative learning emulation by Pershin and Di Ventra.8 They designed an electronic synapse circuit composed of an analogue-to-digital converter and a microcontroller and demonstrated a two-input associative learning process.
5. Concluding remarks and outlook
Several functional materials that can possibly be utilized in artificial synaptic devices were overviewed in this review. Of course, the materials introduced so far cannot cover all functionalities of neurons and synapses; rather, they can emulate some limited, but important, functionalities such as long- and short-term plasticity of synapses. For this purpose, an understanding of the fundamental physiology of neurons and synapses, at least of the functionalities that one attempts to emulate, is of great importance. The aforementioned four keywords, i.e.
‘threshold’, ‘analogue’, ‘plasticity’, and ‘elasticity’, are likely to describe the behaviour of neurons and synapses. There may be further physical concepts embodying these keywords that are not covered by this review.
The main focus of this review is on the overview of several
materials systems that are thought to fulfil the keywords
(threshold, analogue, and plasticity) of synaptic behaviour and
those (threshold, analogue, and elasticity) of neuronal
behaviour. Therefore, the application-wise overview of artificial synapses is not dealt with in this review. Ha and
Ramanathan have recently reported a good review on adaptive
oxide electronics, which includes higher level functionalities
of synaptic circuits and possible applications.115
Emulation of higher level functionalities of complex neural networks appears to be a matter of understanding the neural networks themselves. That is, ‘mimickers’ should know which functionalities they want to capture and how these functionalities work. At present, unfortunately, mammals’ brains have not revealed their nature completely, leaving many open questions. It may not, therefore, be an easy task to achieve the ambitious goal of neuromorphic engineering (the realization of computers that work like brains). An interdisciplinary approach spanning, e.g., neuroscience, physiology, materials science, physics, chemistry, and computer science is most likely helpful in the realization of artificial neural networks, as well as in the understanding of ‘real’ neural networks. This is why we (neither neuroscientists nor physiologists) dedicate one section of this review to the physiology of neurons and synapses.
Acknowledgements
D. S. J. would like to acknowledge research grants from the
Korea Institute of Science and Technology (grant no 2V02410).
We thank Angelica Foggetti at Christian-Albrechts-Universität
zu Kiel for careful reading and correcting of the manuscript.
References
1 C. Mead, Analog VLSI and Neural Systems, Addison-Wesley,
Reading, MA, 1989.
2 R. Ananthanarayanan, S. K. Esser, H. D. Simon and D.
S. Modha, Cognitive Computation: Proc. Conf. High Perf.
Computing Networking, Storage and Analysis, Portland,
Oregon, 2009.
3 C. Mead, Proc. IEEE, 1990, 78, 1629.
4 H. D. Crane, Proc. IEEE, 1962, 50, 2048.
5 H. C. Card and W. R. Moore, Neural Networks, 1990, 3,
333.
6 M. Mahowald and R. Douglas, Nature, 1991, 354, 515.
7 G. Indiveri, E. Chicca and R. Douglas, Cogn. Comput.,
2009, 1, 119.
8 Y. V. Pershin and M. Di Ventra, Neural Networks, 2010, 23,
881.
9 S. Ramakrishnan, P. E. Hasler and C. Gordon, IEEE Trans.
Biomed. Circuits Syst., 2011, 5, 244.
10 G. Rachmuth, H. Z. Shouval, M. F. Bear and C.-S. Poon,
Proc. Natl. Acad. Sci. U. S. A., 2011, 108, E1266.
11 G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. van
Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu,
P. Dudek, P. Häfliger, S. Renaud, J. Schemmel,
G. Cauwenberghs, J. Arthur, K. Hynna, F. Folowosele,
S. Saïghi, T. Serrano-Gotarredona, J. Wijekoon, Y. Wang
and K. Boahen, Front. Neurosci., 2011, 5, 1.
12 C.-S. Poon and K. Zhou, Front. Neurosci., 2011, 5.
13 D. S. Jeong, R. Thomas, R. S. Katiyar, J. F. Scott,
H. Kohlstedt, A. Petraru and C. S. Hwang, Rep. Prog.
Phys., 2012, 75, 076502.
14 J. Isaksson, P. Kjall, D. Nilsson, N. Robinson, M. Berggren
and A. Richter-Dahlfors, Nat. Mater., 2007, 6, 673.
15 K. Tybrandt, K. C. Larsson, S. Kurup, D. T. Simon, P. Kjäll,
J. Isaksson, M. Sandberg, E. W. H. Jager, A. Richter-Dahlfors and M. Berggren, Adv. Mater., 2009, 21, 4442.
16 M. C. Peterman, J. Noolandi, M. S. Blumenkranz and H.
A. Fishman, Proc. Natl. Acad. Sci. U. S. A., 2004, 101, 9951.
17 M. C. Peterman, N. Z. Mehenti, K. V. Bilbao, C. J. Lee,
T. Leng, J. Noolandi, S. F. Bent, M. S. Blumenkranz and H.
A. Fishman, Artif. Organs, 2003, 27, 975.
18 A. L. Hodgkin and A. F. Huxley, Nature, 1939, 144, 710.
19 D. E. Goldman, J. Gen. Physiol., 1943, 27, 37.
20 D. C. Gadsby, Nat. Rev. Mol. Cell Biol., 2009, 10, 344.
21 W. S. Agnew, S. R. Levinson, J. S. Brabson and M.
A. Raftery, Proc. Natl. Acad. Sci. U. S. A., 1978, 75, 2606.
22 T. M. Tempel, D. M. Papazian, T. L. Schwarz, Y. N. Jan and
L. Y. Jan, Science, 1987, 237, 770.
23 M. Noda, S. Shimizu, T. Tanabe, T. Takai, T. Kayano,
T. Ikeda, H. Takahashi, H. Nakayama, Y. Kanaoka,
N. Minamino, K. Kangawa, H. Matsuo, M. A. Raftery,
T. Hirose, S. Inayama, H. Hayashida, T. Miyata and
S. Numa, Nature, 1984, 312, 121.
24 F. Bezanilla, IEEE Trans. NanoBiosci., 2005, 4, 34.
25 T. Shinoda, H. Ogawa, F. Cornelius and C. Toyoshima,
Nature, 2009, 459, 446.
26 J. Astrup, P. M. Sørensen and H. R. Sørensen, Stroke,
1981, 12, 726.
27 A. L. Hodgkin and A. F. Huxley, J. Physiol., 1952, 117, 500.
28 R. C. Malenka and R. A. Nicoll, Science, 1999, 285, 1870.
29 D. O. Hebb, The Organization of Behavior, Wiley & Sons, New York, 1949.
30 I. Cavus and T. Teyler, J. Neurophysiol., 1996, 76, 3038.
31 K. Wankerl, D. Weise, R. Gentner, J.-J. Rumpf and
J. Classen, J. Neurosci., 2010, 30, 6197.
32 E. P. Bauer, G. E. Schafe and J. E. LeDoux, J. Neurosci.,
2002, 22, 5239.
33 M. T. Rogan and J. E. LeDoux, Neuron, 1995, 15, 127.
34 J. Lisman, Proc. Natl. Acad. Sci. U. S. A., 1989, 86, 9574.
35 B. H. Mahan, J. Chem. Educ., 1975, 52, 299.
36 J. Lisman, Trends Neurosci., 1994, 17, 406.
37 R. C. Malenka, R. A. Nicoll and R. Malinow, Science, 1997,
276, 2001.
38 R. C. Malenka and R. A. Nicoll, Trends Neurosci., 1993, 16,
521.
39 S. M. Dudek and M. F. Bear, Proc. Natl. Acad. Sci. U. S. A.,
1992, 89, 4363.
40 E. Bienenstock, L. Cooper and P. Munro, J. Neurosci.,
1982, 2, 32.
41 E. Oja, J. Math. Biol., 1982, 15, 267.
42 P. Dayan and L. F. Abbott, Theoretical Neuroscience, The
MIT Press, London, 2001.
43 G.-Q. Bi and M.-M. Poo, J. Neurosci., 1998, 18, 10464.
44 J. F. Scott and C. A. P. D. Araujo, Science, 1989, 246, 1400.
45 G. Catalan, J. Seidel, R. Ramesh and J. F. Scott, Rev. Mod.
Phys., 2012, 84, 119.
46 E. Y. Tsymbal and H. Kohlstedt, Science, 2006, 313, 181.
47 H. Kohlstedt, N. A. Pertsev, J. R. Contreras and R. Waser,
Phys. Rev. B, 2005, 72, 125341.
48 A. Chanthbouala, A. Crassous, V. Garcia, K. Bouzehouane,
S. Fusil, X. Moya, J. Allibe, B. Dlubak, J. Grollier, S. Xavier,
C. Deranlot, A. Moshar, R. Proksch, N. D. Mathur,
M. Bibes and A. Barthelemy, Nat. Nanotechnol., 2012, 7,
101.
49 A. Chanthbouala, V. Garcia, R. O. Cherifi,
K. Bouzehouane, S. Fusil, X. Moya, S. Xavier,
H. Yamada, C. Deranlot, N. D. Mathur, M. Bibes,
A. Barthélémy and J. Grollier, Nat. Mater., 2012, 11, 860.
50 A. Gruverman, D. Wu, H. Lu, Y. Wang, H. W. Jang, C.
M. Folkman, M. Y. Zhuravlev, D. Felker, M. Rzchowski, C.
B. Eom and E. Y. Tsymbal, Nano Lett., 2009, 9, 3539.
51 H. Ishiwara, Jpn. J. Appl. Phys., 1993, 32, 442.
52 S.-M. Yoon, E. Tokumitsu and H. Ishiwara, IEEE Electron
Device Lett., 1999, 20, 229.
53 Y. Nishitani, Y. Kaneko, M. Ueda, T. Morie and E. Fujii, J.
Appl. Phys., 2012, 111, 124108.
54 R. Waser, Nanoelectronics and information technology,
Wiley-VCH, Weinheim, 2003.
55 L. T. Clark, R. O. Grondin and S. K. Dey, Computers and
Communications, 1992. Conference Proceedings, Eleventh
Annual International Phoenix Conference, 1992.
56 L. T. Clark, S. K. Dey and R. O. Grondin, Ferroelectrics,
1991, 116, 205.
57 X. J. Lou, M. Zhang, S. A. T. Redfern and J. F. Scott, Phys.
Rev. Lett., 2006, 97, 177601.
58 X. J. Lou, C. X. Yang, T. A. Tang, Y. Y. Lin, M. Zhang and J.
F. Scott, Appl. Phys. Lett., 2007, 90, 262908.
59 H. Kohlstedt, A. Petraru, K. Szot, A. Rudiger, P. Meuffels,
H. Haselier, R. Waser and V. Nagarajan, Appl. Phys. Lett.,
2008, 92, 062907.
60 M. Wuttig and N. Yamada, Nat. Mater., 2007, 6, 824.
61 K. Nakayama, M. Takata, T. Kasai, A. Kitagawa and
J. Akita, J. Phys. D: Appl. Phys., 2007, 40, 5061.
62 V. Weidenhof, N. Pirch, I. Friedrich, S. Ziegler and
M. Wuttig, J. Appl. Phys., 2000, 88, 657.
63 J. H. Coombs, A. P. J. M. Jongenelis, W. V. Es-Spiekman
and B. A. J. Jacobs, J. Appl. Phys., 1995, 78, 4918.
64 M. Suri, O. Bichler, D. Querlioz, B. Traore, O. Cueto,
L. Perniola, V. Sousa, D. Vuillaume, C. Gamrat and
B. DeSalvo, J. Appl. Phys., 2012, 112, 054904.
65 M. Wuttig and N. Yamada, Nat. Mater., 2007, 6, 824.
66 D. Kuzum, R. G. D. Jeyasingh, B. Lee and H. S. P. Wong,
Nano Lett., 2011, 12, 2179.
67 O. Bichler, M. Suri, D. Querlioz, D. Vuillaume, B. DeSalvo
and C. Gamrat, IEEE Trans. Electron Devices, 2012, 59,
2206.
68 D. Adler, H. K. Henisch and S. N. Mott, Rev. Mod. Phys.,
1978, 50, 209.
69 D. S. Jeong, H. Lim, G.-H. Park, C. S. Hwang, S. Lee and B.-K. Cheong, J. Appl. Phys., 2012, 111, 102807.
70 J. H. Lee, G. H. Kim, Y. B. Ahn, J. W. Park, S. W. Ryu, C.
S. Hwang and H. J. Kim, Appl. Phys. Lett., 2012, 100,
123505.
71 L. Berger, Phys. Rev. B, 1996, 54, 9353.
72 J. C. Slonczewski, J. Magn. Magn. Mater., 1996, 159, L1.
73 M. Tsoi, A. G. M. Jansen, J. Bass, W. C. Chiang, M. Seck,
V. Tsoi and P. Wyder, Phys. Rev. Lett., 1998, 80, 4281.
74 E. B. Myers, D. C. Ralph, J. A. Katine, R. N. Louie and R.
A. Buhrman, Science, 1999, 285, 867.
75 X. Wang, Y. Chen, H. Xi, H. Li and D. Dimitrov, IEEE
Electron Device Lett., 2009, 30, 294.
76 P. Krzysteczko, G. Reiss and A. Thomas, Appl. Phys. Lett.,
2009, 95, 112508.
77 P. Krzysteczko, J. Münchenberger, M. Schäfers, G. Reiss
and A. Thomas, Adv. Mater., 2012, 24, 762.
78 L. O. Chua and S. M. Kang, Proc. IEEE, 1976, 64, 209.
79 L. O. Chua, IEEE Trans. Circuit Theory, 1971, 18, 507.
80 D. B. Strukov, G. S. Snider, D. R. Stewart and R.
S. Williams, Nature, 2008, 453, 80.
81 J. J. Yang, M. D. Pickett, X. Li, D. A. A. Ohlberg, D.
R. Stewart and R. S. Williams, Nat. Nanotechnol., 2008, 3,
429.
82 Y. Pershin and M. Di Ventra, Adv. Phys., 2011, 60, 145.
83 B. Linares-Barranco, T. Serrano-Gotarredona, L.
A. Camuñas-Mesa, J. A. Perez-Carrasco, C. Zamarreño-Ramos and T. Masquelier, Front. Neurosci., 2011, 5.
84 J. Z. Sun, M. C. Gaidis, G. Hu, E. J. O’Sullivan, S. L. Brown,
J. J. Nowak, P. L. Trouilloud and D. C. Worledge, J. Appl.
Phys., 2009, 105, 07D109.
85 T. Min, J. Z. Sun, R. Beach, D. Tang and P. Wang, J. Appl.
Phys., 2009, 105, 07D126.
86 S.-C. Oh, S.-Y. Park, A. Manchon, M. Chshiev, J.-H. Han,
H.-W. Lee, J.-E. Lee, K.-T. Nam, Y. Jo, Y.-C. Kong, B. Dieny
and K.-J. Lee, Nat. Phys., 2009, 5, 898.
87 R. Waser and M. Aono, Nat. Mater., 2007, 6, 833.
88 R. Waser, R. Dittmann, G. Staikov and K. Szot, Adv. Mater.,
2009, 21, 2632.
89 W. Lu, D. S. Jeong, M. Kozicki and R. Waser, MRS Bull.,
2012, 37, 124.
90 I. Valov, R. Waser, J. R. Jameson and M. N. Kozicki,
Nanotechnology, 2011, 22, 254003.
91 D. S. Jeong, H. Schroeder and R. Waser, Electrochem. Solid-State Lett., 2007, 10, G51.
92 M. Ziegler, R. Soni, T. Patelczyk, M. Ignatov, T. Bartsch,
P. Meuffels and H. Kohlstedt, Adv. Funct. Mater., 2012, 22,
2744.
93 T. Ohno, T. Hasegawa, T. Tsuruoka, K. Terabe, J.
K. Gimzewski and M. Aono, Nat. Mater., 2011, 10, 591.
94 S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder
and W. Lu, Nano Lett., 2010, 10, 1297.
95 T. Ohno, T. Hasegawa, A. Nayak, T. Tsuruoka, J.
K. Gimzewski and M. Aono, Appl. Phys. Lett., 2011, 99,
203108.
96 T. Chang, S.-H. Jo, K.-H. Kim, P. Sheridan, S. Gaba and
W. Lu, Appl. Phys. A, 2011, 102, 857.
97 T. Chang, S.-H. Jo and W. Lu, ACS Nano, 2011, 5, 7669.
98 S.-J. Choi, G.-B. Kim, K. Lee, K.-H. Kim, W.-Y. Yang,
S. Cho, H.-J. Bae, D.-S. Seo, S.-I. Kim and K.-J. Lee, Appl.
Phys. A, 2011, 102, 1019.
99 K. Seo, I. Kim, S. Jung, M. Jo, S. Park, J. Park, J. Shim, K.
P. Biju, J. Kong, K. Lee, B. Lee and H. Hwang,
Nanotechnology, 2011, 22, 254023.
100 S. Yu, Y. Wu, R. Jeyasingh, D. Kuzum and H. S. P. Wong,
IEEE Trans. Electron Devices, 2011, 58, 2729.
101 Z. Q. Wang, H. Y. Xu, X. H. Li, H. Yu, Y. C. Liu and X.
J. Zhu, Adv. Funct. Mater., 2012, 22, 2758.
102 Q. Lai, L. Zhang, Z. Li, W. F. Stickle, R. S. Williams and
Y. Chen, Adv. Mater., 2010, 22, 2448.
103 A. V. Avizienis, H. O. Sillin, C. Martin-Olmos, H. H. Shieh,
M. Aono, A. Z. Stieg and J. K. Gimzewski, PLoS One, 2012,
7, e42772.
104 D. C. Rubin and A. E. Wenzel, Psychol. Rev., 1996, 103,
734.
105 J. C. Phillips, Rep. Prog. Phys., 1996, 59, 1133.
106 F. A. Kröger and H. J. Vink, Solid State Phys., 1956, 3, 307.
107 D. S. Jeong, H. Schroeder, U. Breuer and R. Waser, J. Appl.
Phys., 2008, 104, 123716.
108 K. Szot, W. Speier, G. Bihlmayer and R. Waser, Nat.
Mater., 2006, 5, 312.
109 K. Szot, M. Rogala, W. Speier, Z. Klusek, A. Beshmehn and
R. Waser, Nanotechnology, 2011, 22, 254001.
110 D.-H. Kwon, K. M. Kim, J. H. Jang, J. M. Jeon, M. H. Lee, G.
H. Kim, X.-S. Li, G.-S. Park, B. Lee, S. Han, M. Kim and C.
S. Hwang, Nat. Nanotechnol., 2010, 5, 148.
111 G. H. Kim, J. H. Lee, J. Y. Seok, S. J. Song, J. H. Yoon, K.
J. Yoon, M. H. Lee, K. M. Kim, H. D. Lee, S. W. Ryu, T.
J. Park and C. S. Hwang, Appl. Phys. Lett., 2011, 98, 262901.
112 J. P. Strachan, M. D. Pickett, J. J. Yang, S. Aloni, A. L. David
Kilcoyne, G. Medeiros-Ribeiro and R. Stanley Williams,
Adv. Mater., 2010, 22, 3573.
113 E. R. Kandel, Cellular Basis of Behavior: An Introduction to
Behavioral Neurobiology, W. H. Freeman and Company,
San Francisco, 1976.
114 E. R. Kandel, In Search of Memory: The Emergence of a New Science of Mind, W. W. Norton & Company, New York, 2007.
115 S. D. Ha and S. Ramanathan, J. Appl. Phys., 2011, 110,
071101.