Effects of Correlated Input on Development of
Structure in an Activity Dependent Network
Alexander J. H. Fedorec
CoMPLEX, University College London
Supervisors: Dr. Simon Farmer, Dr. Luc Berthouze
March 6, 2014
Abstract
The complexity of the brain is much discussed both within the scientific community and amongst the public. The network of neurons that makes up the brain is connected in a manner that is important to its correct functioning. The development of these connections, from a disordered, unconnected group of neurons to a modularised, "small-world" network, is activity-dependent. Data from the EEGs of preterm babies show that the activity present in the developing brain exhibits long-range temporal correlations. We build a neuronal network model to assess whether the form of the activity input into a developing network affects the topology of the developed network. Our results disagree with a recent study that showed a difference between a network generated with random process input and one with LRTC input. We leave the door open for future investigation with a model such as ours.
Chapter 1
Introduction
The brain is a complex system, often referred to as the most complex system
in the universe, which allows us to comprehend the world around us. How
the brain carries out the complex and varied tasks that it is capable of
remains an area shrouded in uncertainty. However, it is believed that the
form of the brain, the layout of its constituent parts, is intrinsically linked
to its function.
The developing brain The developed brain consists of large numbers of neurons, clustered into modules which carry out different functions (Chialvo 2004). The main process that occurs in the early stages of development is connectivity formation (Kostovic & Rakic 1990). Neurites grow out from the neuron, forming connections with the neurites of other neurons. This growth and connection is dependent on the level of activity within the neuron and the activity of neighbouring neurons. This activity-dependent connection formation was theorised by Hebb and can be summarised with the statement: "cells that fire together, wire together".
The developed brain Investigations into the connectivity of the brains
of a number of species have shown that the network topology is “neither
entirely random nor entirely regular” (Sporns & Kötter 2004). These neuronal networks are characterised by high levels of clustering and a small
average path length (Sporns & Kötter 2004). Such networks are known as
small-world networks and are seen, not just in the brain, but in many areas
including the western United States electrical power grid and the network
of collaborations of feature film actors (Watts & Strogatz 1998). As shown
in figure 1.1, the topology of a small-world network falls between a regular
network, in which nodes only form connections with their neighbours, and
a random network in which nodes form connections that are not related to
path length.
Figure 1.1: a) One can create a random network by rewiring a regular network in which each node is connected to its nearest neighbours (in this case its four nearest neighbours). The rewiring is done by proceeding around the ring and, at each node, with probability p, reconnecting the edge to its nearest neighbour to a node chosen uniformly at random. This is repeated for the second nearest neighbour of each node and so on until each edge in the original network has been considered. b) The normalised clustering coefficient, C, and mean path length, L, plotted against the rewiring probability. With a small rewiring probability, some nodes in the resulting network will have long-range connections, which shortens the mean path length while maintaining a high level of clustering, i.e. a small-world network. Figures from Watts & Strogatz (1998).
What drives cortical development? In the early stages of development of the cerebral cortex, an “axonal scaffold” is formed by the subplate,
bridging between the thalamus and the developing cortical plate (McConnell
et al. 1989). The subplate is a transient structure which disappears after the
first postnatal week (Price et al. 1997), though some cells may remain (McConnell et al. 1989). It is thought that the activity passed into the cortical
plate by the subplate influences the development of the neuronal network of
the cortex (Dupont et al. 2006). The subplate activity is able to provide this
input due to its position as a thalamic intermediary (Dupont et al. 2006).
Activity-dependent development Most research conducted in the area of connectivity development in the brain, a network in which development is activity-dependent, has focused on changes caused by suppressing or removing activity (Tolner et al. 2012, Dupont et al. 2006). These studies show reduced cortical patterning and weaker thalamocortical connectivity. A less well explored question is whether the form of the activity driving the development matters for the networks developed. The EEG activity measured from brains is discontinuous: a series of bursts of activity punctuating a constant low level of oscillation, see figure 1.2. Recent studies of EEG data in preterm babies have shown that the activity displays long-range temporal correlations (LRTCs) (Hartley et al. 2012).
Figure 1.2: The EEG of a preterm baby shows low level noise interspersed with bursts of activity.
Long-range temporal correlations A process with long-range temporal correlations is one in which the autocorrelations decay slowly, typically with a power-law-like decay (Craigmile 2003). This is as opposed to a short-range process, in which the coupling of values decays rapidly the further apart they are in time (or in space). One can establish whether a process shows LRTCs by estimating its Hurst coefficient, H, with "1/2 < H < 1 corresponding to long-term dependence" (Davies & Harte 1987). Although it is beyond the scope of this report, it should be noted that the Hurst coefficient should be estimated using more than one method, as there are biases inherent in each technique (Dieker 2004).
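For illustration, one classical estimator, rescaled-range (R/S) analysis, can be sketched as follows. This is our own minimal Python implementation (the function name `hurst_rs` and its parameters are illustrative, not one of the estimators used in the studies cited), and it inherits the small-sample biases noted above.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of series x by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range of the deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # slope of log(R/S) against log(window size) estimates H
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(10_000)
print(hurst_rs(white))   # near 0.5 for an uncorrelated process
```

In practice this would be cross-checked against a second estimator, such as detrended fluctuation analysis, precisely because of the bias issue raised by Dieker (2004).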
LRTCs and neuronal development Several papers have been written regarding the use of Poisson processes as an input to neuronal models (Liu et al. 2003, Brown et al. 1999). To our knowledge, the only research conducted to determine what impact a long-memory process has on connectivity development has been undertaken by Hartley (2014). As well as looking at the effect of an LRTC process on network topology, Hartley examines whether the dynamics of developed networks exhibit LRTCs. The results showed that small-world, modular and random networks can all produce dynamics with long-range correlations. In a study of preterm baby EEG data, Hartley et al. (2012) revealed that, despite changes in connectivity during the developmental period, EEGs show discontinuous activity with the same Hurst exponent.
Chapter 2
Replication of Activity-Dependent Neuronal Network Model
In order to explore the effects of an LRTC process on the clustering motifs formed in a neuronal network, we use a model from Van Ooyen & Van Pelt (1996). This model of activity-dependent neurite outgrowth allows us to input various processes and analyse the network of connections that forms. In their model, two properties of the neurons interact with each other. One is the slow process of neuritic growth, leading to the formation of connections with "overlapping" neurons. The second is the fast dynamics of electrical activity.
Intracellular calcium is intrinsically linked to neurite growth (Kater &
Mills 1991). There is an optimum level of intracellular calcium, above and
below which neurite outgrowth is inhibited and even reversed (Kater et al.
1988). Levels of intracellular calcium are altered through several mechanisms, such as membrane depolarisation, and have been implicated in controlling the morphology of neurons (Kater et al. 1989) as well as the formation of patterns in neuronal circuitry (Lipton & Kater 1989). As such, the
model uses membrane potential, and the mean firing rate associated with
that potential, to modulate the growth or retraction of the “neuritic field”.
The membrane potential, X_i, of a cell i is described by the following modification of the shunting model (Van Ooyen & Van Pelt 1996):

$$\frac{dX_i}{dT} = -X_i + (1 - X_i)\sum_{j=1}^{N} W_{ij}\, F(X_j) \qquad (2.1)$$

where N is the total number of cells in the network, T is the membrane time constant, W_ij is the connection strength between neurons i and j, and

$$F(X) = \frac{1}{1 + e^{(\theta - X)\alpha}} \qquad (2.2)$$
where F(X) is the mean firing rate and α and θ determine the steepness and the firing threshold respectively. The connection strength of two neurons is proportional to the area of overlap of their neuritic fields:

$$W_{ij} = A_{ij}\, S \qquad (2.3)$$

where A_ij is the area of overlap, which we consider as analogous to the total number of synapses formed between i and j, and S is a constant representing the average strength of the synapses. Van Ooyen & Van Pelt (1996) suggest that calculating the actual area of overlap between neuritic fields is not necessary to capture the essential behaviour and that simpler functions can be used.
To describe the growth of a neuron, which is dependent on its electrical activity, we take the change in the radius of the neuron's neuritic field to be:

$$\frac{dR_i}{dT} = \rho\, G(F(X_i)) \qquad (2.4)$$

where R_i is the radius of the neuritic field of neuron i and ρ is a constant determining the rate of growth. The function G can be any function that fulfils the following criteria:

$$G(u) \begin{cases} > 0 & \text{for } u < \epsilon \\ < 0 & \text{for } u > \epsilon \\ = 0 & \text{for } u = \epsilon \end{cases} \qquad (2.5)$$

This captures the property of neurons, described earlier, that there is a level of intracellular calcium, ε, beyond which growth will stop and further beyond which the neuron will retract. We use a function for G suggested in Van Ooyen & Van Pelt (1996):

$$G(F(X_i)) = 1 - \frac{2}{1 + e^{(\epsilon - F(X_i))/\beta}} \qquad (2.6)$$
where β is a constant determining the steepness of the function.
We used MatLab’s built in Runge-Kutta ordinary differential equation
solver to produce solutions to the model described above. Where given, we
used the parameter values set by Van Ooyen & Van Pelt (1996). Though the
model set out by Van Ooyen & Van Pelt (1996) does not state that the plane
which the neurons inhabit is toroidal, we have made it so. This allows us,
when producing solutions where all neurons are identical other than in their
position on a grid, to use only a few neurons and avoid boundary conditions.
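As an illustration, the coupled dynamics of equations 2.1, 2.2, 2.4 and 2.6 can be sketched with SciPy's Runge-Kutta solver. This is a Python sketch, not the MATLAB implementation used here: the parameter values, the line arrangement of neurons and the simple overlap proxy for A_ij are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not necessarily those of Van Ooyen & Van Pelt 1996)
ALPHA, THETA = 50.0, 0.5    # steepness and threshold of the firing-rate function
EPS, BETA = 0.6, 0.1        # outgrowth threshold epsilon and steepness of G
S, RHO = 0.5, 1e-4          # synaptic strength and growth rate

def firing_rate(x):                       # eq. 2.2
    return 1.0 / (1.0 + np.exp((THETA - x) * ALPHA))

def growth(f):                            # eq. 2.6
    return 1.0 - 2.0 / (1.0 + np.exp((EPS - f) / BETA))

def overlap(r, dist):
    # Simple proxy for the overlap area A_ij, in the spirit of the
    # "simpler functions" suggested by Van Ooyen & Van Pelt (1996)
    a = np.maximum(r[:, None] + r[None, :] - dist, 0.0)
    np.fill_diagonal(a, 0.0)              # no self-connections
    return a

def rhs(t, y, dist, n):
    x, r = y[:n], y[n:]
    w = S * overlap(r, dist)                           # eq. 2.3
    dx = -x + (1.0 - x) * (w @ firing_rate(x))         # eq. 2.1
    dr = RHO * growth(firing_rate(x))                  # eq. 2.4
    return np.concatenate([dx, dr])

# Four neurons on a line, neuritic fields initially just touching
n = 4
pos = np.arange(n, dtype=float)
dist = np.abs(pos[:, None] - pos[None, :])
y0 = np.concatenate([np.zeros(n), np.full(n, 0.5)])   # potentials, radii
sol = solve_ivp(rhs, (0, 1000), y0, args=(dist, n), method='RK45')
print(sol.y[:n, -1])   # final membrane potentials
```

With F(X_i) below ε, G is positive and the radii grow until overlap (and hence synaptic drive) builds up, which is the slow/fast interaction the model is built around.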
The neuritic growth/retraction threshold In order to look at the impact of changing ε, we placed 16 neurons on a grid, equally spaced and with their neuritic fields touching but not overlapping. Figure 2.1 shows the mean membrane potential and mean radius of the neurons over 30,000 time-steps. Increasing ε increases the membrane potential threshold at which the neuritic field of a neuron begins to retract. By increasing this value we are able to alter the steady state membrane potential. A value too low leads to persistent oscillations and a network that never settles.
The synaptic strength S We now explore the impact of changing the value of the synaptic strength parameter, S, in equation 2.3. A value of ε = 0.6 is used for these simulations as it produces a steady state, but one that can, potentially, be pushed away from by small perturbations. Increasing S means that the overlap of neuritic fields has a greater impact on the change in a neuron's membrane potential, X. Our simulations, shown in figure 2.2, exhibit behaviours that one would expect. As S increases, the membrane potential of a neuron is affected to a greater extent by the activity of its neighbours. This means that, for larger values of S, a small overlap with neighbours is able to generate a change in membrane potential that for lower values of S would require a larger overlap. As a result, the maximum radius of the neuritic fields is reduced as S increases. Further, the stability of the steady state is reduced, as the impact of small changes in overlap leads to larger changes in membrane potential. For later simulations we use S = 0.5, as this balances the need to allow neuritic fields to grow and form multiple connections against generating a network so stable that external inputs are unable to produce any change in the system.
Figure 2.1 [panels a-f; columns: mean membrane potential, mean neuritic field radius, mean R vs. mean X]: Time, along the x-axis of the first two columns, runs from 0 to 3 × 10^4. The third column displays mean neuritic field radius along the x-axis against mean membrane potential on the y-axis. Sixteen identical neurons on a toroidal grid with initial neuritic fields touching but not overlapping. The synaptic strength S = 0.5. a) With ε = 0.05 the radii of the neurons grow slowly, meaning a slow growth in neuritic field overlap and as such a slow increase in membrane potential. As ε grows [b) ε = 0.2, c) ε = 0.4] the rate of increase in radii increases. d) When ε = 0.6, following an overshoot, a steady state is reached. This also occurs when e) ε = 0.8, but when f) ε = 0.97, the relaxation of the neuritic fields is so slow that a steady state is not reached before the simulation elapses.
Figure 2.2 [panels a-d; columns: mean membrane potential, mean neuritic field radius, mean R vs. mean X]: As the synaptic strength, S, increases [a) S = 0.2, b) S = 0.5, c) S = 0.8, d) S = 1] the maximum radii that the neurons reach are reduced. Further, the stability of the steady state decreases with increasing S, as a slight increase in neuritic field overlap has a greater impact on membrane potential change.
Chapter 3
The Impact of External Input on Neurite Growth
The only stimulation supplied to the neurons, as described in the model so far, comes from interactions with the neuritic fields of other neurons. We wish to determine the impact of external stimulation on the network of neurons. In order to do so we add an input to equation 2.1:

$$\frac{dX_i}{dT} = -X_i + (1 - X_i)\left(I_i + \sum_{j=1}^{N} W_{ij}\, F(X_j)\right) \qquad (3.1)$$

where I_i is an input to neuron i. Van Ooyen & Van Pelt (1996) describe the impact of a constant input; however, we are interested in discontinuous inputs.
Figure 3.1 shows the decreasing importance of the neuritic fields of other
neurons as the strength of a constant input into a neuron increases. With a
very high input strength the neuritic fields are barely able to grow, limiting
the potential connectivity of the network. This is obviously not useful when
considering connection formation and shows the necessity of restricting input
strength.
3.1 Discontinuous Input
Figure 3.1 [panels a-d; columns: mean membrane potential, mean neuritic field radius, mean R vs. mean X]: With a constant input to each neuron the membrane potential is no longer solely affected by overlap with neighbours. As the strength of the input increases, the influence of the neighbours decreases. The neurons have ε = 0.6 and the synaptic strength S = 0.5.

Electrical activity in the brain does not take the form of a constant input of the kind used in figure 3.1. Nor is it a continuous oscillation. EEGs show low level noise interspersed with bursts of activity (André et al. 2010). Anderson et al. (1985) carried out analysis on EEG readouts from preterm babies. Their results show that the average duration of a burst of activity is 4-5 seconds. This value doesn't show variation with conceptional age over the range tested. Inter-burst intervals show a decrease both in average and longest period as conceptional age increases, with values of 7-12 seconds and 20-48 seconds respectively. More recently, a study by Hartley et al. (2012)
quotes a figure of 251.7 ± 55.1 events per hour, or an average inter-event interval of 14 seconds. They show no variation due to conceptional age but do note that babies with cerebral haemorrhages have a "significantly lower event rate" of 110.1 ± 38.9 events per hour. The EEG recordings used in the article from Hartley et al. (2012) were taken over a longer period (median duration 21.6 hours) than in the Anderson et al. (1985) study (26.5 minutes), though the former had only 11 subjects compared to the latter's 33.
3.1.1 Poisson Process
Algorithms to generate Poisson processes are well documented. We use an algorithm described by Pasupathy (2011). In order to show the effect of such a process on the model as described, we generated a Poisson process with a mean rate of 0.07 events per second, corresponding to the figure of 251.7 ± 55.1 events per hour measured by Hartley et al. (2012). The duration of each event was taken to be 5 seconds, as per the findings of Anderson et al. (1985). Figure 3.2 shows the effect that such a process has on the same network of neurons on a grid that we have used previously. As the strength of the input increases, its impact on the activity of a neuron overtakes that of the neuron's neighbours. This leads to an increase in the rate of change of membrane potential and neuritic field radius. If there is a steady state for a particular set of neuron parameters, the input can push the neuron away from it. Again we see that when the strength of the input is high, the maximum radii of the neurons are limited. As such, we decided to increase the value of ε that we use for our simulations to ε = 0.8. This should allow the network to avoid falling into an endlessly oscillatory state.
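A burst train of this kind can be built by drawing exponential inter-event intervals at the target mean rate. The following Python helper, `poisson_bursts`, is our own illustrative sketch, not the algorithm of Pasupathy (2011); the unit-time sampling of the signal is also an assumption.

```python
import numpy as np

def poisson_bursts(rate, duration, t_end, rng):
    """Onset times from a Poisson process with the given mean rate, each
    event lasting `duration` time units; returns the onsets and a 0/1
    input signal sampled once per unit time."""
    onsets = []
    t = rng.exponential(1.0 / rate)          # exponential inter-event intervals
    while t < t_end:
        onsets.append(t)
        t += rng.exponential(1.0 / rate)
    signal = np.zeros(int(t_end))
    for on in onsets:
        signal[int(on):int(on + duration)] = 1.0   # event is "on" for `duration`
    return np.array(onsets), signal

rng = np.random.default_rng(1)
onsets, signal = poisson_bursts(rate=0.07, duration=5, t_end=100_000, rng=rng)
print(len(onsets) / 100_000)   # empirical rate, close to 0.07
```

The resulting `signal` can then be scaled by an input strength and used as I_i in equation 3.1.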
Random spatial positioning Now that we have established the ability to input a process into a network of neurons, we can begin to look at the effect of such processes on the formation of connections. While on a grid, each neuron can form connections with its neighbours in the North, South, East and West positions and may, if the value of the synaptic strength is suitably low, form connections beyond. This is obviously very restrictive to the characteristics of the network that can form. As such, the initial positions of the neurons were randomised¹, still on a toroidal plane with the same density of neurons as before. First we tested whether the number of neurons in the network made a noticeable difference to network dynamics. Figure 3.3 shows that changing from 49 to 100 neurons leads to a slight dampening in the rate of increase in mean membrane potential but the "steady" level of the membrane potential and mean radii reached were the

¹The seed used when generating the positions was the same, unless stated, in order to avoid the outcomes differing due to differences in initial set-up rather than differences due to variables we control.
Figure 3.2 [panels a-d; columns: mean membrane potential, mean neuritic field radius, mean R vs. mean X]: A Poisson process input with mean rate of 0.07 and duration of events equal to 5. The neurons have ε = 0.6 and the synaptic strength S = 0.5 and are equally spaced on a toroidal plane. Increasing the strength of the input [a) 0, b) 0.1, c) 0.2 and d) 0.5] destabilises the network due to the ability of the input to push the membrane potential of the neurons over its ε value, thus leading to neuritic field retraction. As the strength of input increases, the neurons' membrane potentials are increasingly linked to the fluctuations of the input.
Figure 3.3 [panels: a) ε = 0.8, N = 49; b) ε = 0.8, N = 100; c) ε = 0.7, N = 100]: As the number of neurons in the network is increased from a) 49 to b) 100, the dynamics remain rather similar other than a slight dampening in the rate of increase in mean membrane potential (mean X). A bigger difference is clearly seen in the change of ε from b) 0.8 to c) 0.7. As we saw in previous sections, reducing ε reduces the stability of a steady state for membrane potential.
same. One can also see that by lowering the value of ε from 0.8 to 0.7, the input is able to perturb the dynamics of the network enough to cause a dramatic drop in mean membrane potential and an associated change in neuritic field growth. In later simulations we run with both ε = 0.8 and ε = 0.7 in order to ascertain whether periods of instability are necessary to generate different network outcomes from different inputs. Another important result from setting ε equal to 0.7 is that it shows we can't assume a system will remain in a steady state just because it has been in one for a certain period. If we had cut off the simulation at time-step 20,000, the ε = 0.8 and ε = 0.7 runs would have looked much the same.
3.1.2 LRTC Process
We have shown how a Poisson process as an input to a neuronal network can affect the dynamics of that network. What we are really interested in, however, is whether an LRTC process produces different network characteristics to the Poisson process. Our LRTC process is a fractional autoregressive integrated moving-average (FARIMA) process. This works by allowing the degree of differencing to take fractional values, with a differencing value between 0 and 1/2 producing long-range dependence (Hosking 1981). As stated previously, an LRTC process can be described by its Hurst exponent. Hartley et al. (2012) found that the Hurst exponent of EEGs in preterm babies was ∼ 0.6 − 0.7. We created LRTC processes with the same inter-event intervals as the Poisson processes used in the previous section and used them as the input to neuronal networks with the same initial conditions as used in figure 3.3. The results, in figure 3.4, show similar dynamics to those produced using a Poisson process input.

Figure 3.4 [panels: a) ε = 0.8, N = 49; b) ε = 0.8, N = 100; c) ε = 0.7, N = 100]: With an LRTC process as input, the neuronal dynamics appear very similar to those produced by a Poisson process with identical strength, mean inter-event interval and event duration.
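A FARIMA(0, d, 0) series of the kind described above can be generated from its MA(∞) representation, truncated at a finite number of coefficients. The sketch below is our own illustrative Python implementation (the function name, truncation scheme and the choice d = 0.2, giving H ≈ 0.7 as per the Hartley et al. (2012) range, are assumptions, not the report's actual generator).

```python
import numpy as np

def farima_0d0(n, d, rng):
    """FARIMA(0, d, 0) series via its MA(infinity) representation, truncated
    at n coefficients; 0 < d < 1/2 gives long-range dependence (H ~ d + 1/2)."""
    # psi_k = psi_{k-1} * (k - 1 + d) / k, with psi_0 = 1
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(2 * n)
    # x_t = sum_k psi_k * eps_{t-k}; take the fully-initialised portion
    x = np.convolve(eps, psi, mode='full')[n:2 * n]
    return x

rng = np.random.default_rng(2)
x = farima_0d0(10_000, d=0.2, rng=rng)   # H ~ 0.7, cf. Hartley et al. (2012)
print(np.corrcoef(x[:-1], x[1:])[0, 1])  # lag-1 autocorrelation, positive for d > 0
```

Thresholding or rank-mapping such a series onto inter-event intervals is one possible way to obtain an event train with long memory while matching the Poisson train's mean rate.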
3.2 Network Topology
Although the neuronal dynamics of the networks with Poisson and LRTC process input look similar, this tells us little regarding the topology of the respective networks. In order to assess whether LRTC process input creates networks with different topologies to memoryless processes, we look at the clustering coefficient and mean path length of the networks. Figure 3.5 shows how these two statistics vary as the networks develop with the two different processes as input. The results show very similar trajectories for clustering coefficient and mean path length. This would seem to indicate that the networks' connectivities are very similar and that the long-memory process doesn't generate a different form of network to that of a memoryless process.
However, in order to make a genuine comparison between the two simulations, the clustering coefficient and mean path length must be normalised. From each of the connectivity matrices at the final step of the networks generated, we created 100 randomised networks. This was done by randomly shuffling the values in the upper triangle of the connectivity matrix and then reflecting along the diagonal to produce a symmetric matrix². This method retains the number of connections and the weight of connections within the network. The mean clustering coefficient and mean path length of these random networks allow us to normalise the mean clustering coefficient and mean path length from the simulated networks (Bassett & Bullmore 2006).

²Our connection matrix is symmetric because A_ij = A_ji, see equation 2.3.

Figure 3.5 [panels a, b]: The mean clustering coefficient and mean path length with LRTC process input (red) are very similar to those of the network with Poisson process input (blue). The only noticeable differences are an earlier second spike in clustering coefficient with an LRTC process with ε = 0.7 (a), whereas the Poisson process with the same network set-up takes longer to effect a second spike in clustering coefficient, though a third spike follows soon after. Further, with ε = 0.8 (b) the drops in clustering coefficient with an LRTC input lag behind the same network with a Poisson input.

Table 3.1 shows these normalised values for the last point in the time series. This also shows that the networks generated by both inputs share similar topologies. However, we want to see if the topologies remain as synchronised throughout the time series as figure 3.5 would suggest. Due to time constraints we were unable to normalise the clustering coefficient and path length at each data point, but we were able to do so at a less precise level (1 normalised point for every 100 data points for LRTC and 1 for every 500 for Poisson). Figure 3.6 shows the normalised values. Although the clustering coefficient for LRTC input has a greater level of variation on short time-scales, the Poisson process follows the same trend as the LRTC process. For the mean path length the similarity is much clearer.
            Mean Clustering Coefficient          Mean Path Length
            measured   random   normalised       measured   random   normalised
Poisson     0.2431     0.0264   9.2083           5.9710     3.6081   1.6549
LRTC        0.2420     0.0247   9.7976           6.0478     3.6524   1.6558

Table 3.1: The mean clustering coefficient and mean path length of the networks of 100 neurons, with ε = 0.8. These are taken from the connectivity matrix at the last recorded time step.
We can calculate the small-world index from the normalised values shown in table 3.1. For the LRTC network the small-world index is 9.7976/1.6558 = 5.9171 and for the Poisson network it is 9.2083/1.6549 = 5.5643. A value of 1 would show that the network has similar characteristics to a random network (Bassett & Bullmore 2006). Both of these networks have a value greater than one, showing that they both display small-world network characteristics. In fact, one can see from figure 3.6 that the networks show small-world properties throughout the time series.
Figure 3.6 [panels a, b]: The a) normalised mean clustering coefficient and b) normalised mean path length with LRTC process input (red) and Poisson process input (blue), with ε = 0.8. The LRTC line is calculated to a greater precision (1 point at every 100th output point; not every 100th unit time-step, since with a Runge-Kutta method output points are not equally spaced and do not correspond to integer time-steps) compared to the Poisson line (1 point every 500th output point). One can see that although there is a greater degree of variation in the LRTC line, the Poisson process does follow the same overall trend in clustering coefficient. The similarity is much more noticeable for the mean path lengths. It should be noted that, at all points in the time series, both networks display small-world properties, i.e. clustering coefficient / path length > 1.
Chapter 4
Discussion
We replicated a model of neuronal dynamics described by Van Ooyen & Van Pelt (1996) and extended it to allow for discontinuous input. Difficulties arose while trying to run simulations with this model due to the difficulty of integrating discontinuous functions in MATLAB. We adopted a 'hybrid' method, using the Runge-Kutta method between events. This allowed us to take advantage of the efficiency savings that Runge-Kutta offers while also being certain that events weren't missed. Though this method was faster than simply reducing the size of integration time-steps to a suitably small number, the computation still took 1-3 hours with 100 neurons. As is evidenced by the graphs in figures 3.3 and 3.4 in which ε = 0.7, extending the simulation time is necessary in order to capture behaviours that we are likely missing. These graphs also bring into question whether the membrane potential and radii with ε = 0.8 are as stable as we assume.
The methods for calculating clustering coefficients and mean path lengths were taken from the Brain Connectivity Toolbox (Rubinov & Sporns 2010). Bolaños et al. (2013) point out certain errors in a number of methods for calculating clustering coefficients in weighted directed networks, mentioning Rubinov & Sporns (2010) in their paper. A new method is proposed there and should be used in future work when calculating network topology characteristics.
The graphs in figure 3.3 and figure 3.4 in which ε = 0.7 show activity occurring at the end of the simulation. It is necessary to run the simulations for longer in order to ascertain whether the membrane potential finds a steady state or whether oscillations continue. In addition to running for longer, it is also necessary to use more neurons. The large amount of variation in the normalised clustering coefficient graph in figure 3.6 is likely due to the disproportionate effect that the disconnection or reconnection of a small number of neurons can have on a network with a relatively small number of neurons in it.
Our results currently show no difference in the connectivity of neuronal networks stimulated with LRTC input and those stimulated with Poisson process input. This goes against the findings of Hartley (2014), who showed a difference both in the way connectivity evolved and in the final topology. The discrepancy could be due to several factors but is most likely due to the parametrisation of the model. We made no effort to develop neurons that reflected the “physiology” of those that Hartley (2014) modelled, and the strength and relative frequency of the input processes were not reflective of the processes used by Hartley (2014) either. Since there was no difference in the network between the two inputs, it may be that the impact of the inputs on the network was not great enough. We chose parameters for the network in order to generate one that was reasonably stable. Although we attempted to determine the effect of a slightly less stable neuronal set-up, this simulation was not run for long enough to assess the effect of the change. In future work it would be of interest to determine whether increasing the strength of the input on both a relatively stable and a relatively unstable network produces greater differentiation between the two processes. Another alteration worth exploring would be to change the frequency of the input. The values we used, 0.07 for the mean rate and 5 for the event duration, were taken from the literature. However, in the literature these values were expressed in real time, whereas in our simulation they were expressed in units of simulation time, and we did not assess how a unit of simulation time relates to a second. This relation can be altered by varying ρ, the growth rate, in equation 2.4 (Van Ooyen & Van Pelt 1996).
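For concreteness, a homogeneous Poisson event input with these parameters can be sketched as follows. This is an illustrative stand-alone construction in unit simulation time, not the model's actual implementation: the function name and the binary-train representation of "input on/off" are our own assumptions.

```python
import random

def poisson_event_train(mean_rate=0.07, event_duration=5, t_max=10_000, seed=42):
    """Binary input train driven by a homogeneous Poisson process.
    Event onsets have exponentially distributed inter-arrival times with
    rate `mean_rate` (events per unit simulation time); each event holds
    the input high for `event_duration` time steps. Overlapping events
    simply keep the input high."""
    rng = random.Random(seed)
    train = [0] * t_max
    t = rng.expovariate(mean_rate)
    while t < t_max:
        start = int(t)
        for i in range(start, min(start + event_duration, t_max)):
            train[i] = 1
        t += rng.expovariate(mean_rate)
    return train

train = poisson_event_train()
print(sum(train) / len(train))  # fraction of time the input is active
```

With a mean rate of 0.07 and duration 5, the input is active roughly 1 − e^(−0.35) ≈ 30% of the time; rescaling the simulation clock (e.g. via ρ) would change what these numbers mean in seconds without changing the train itself.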
Chapter 5
Conclusion
The brain is a complex network of neurons and their connections. The topology of this neuronal network is important for the proper functioning of the brain, and connectivity development in the brain is thought to be activity-dependent. Recent EEG data from preterm babies have shown that activity in the developing brain exhibits long-range temporal correlations (Hartley et al. 2012). The question of whether the form of the activity input to the developing brain has an impact on the developed network topology has only recently been addressed by Hartley (2014). We built an activity-dependent neuronal network model to determine whether the topology of a network generated under the influence of an LRTC process differs from that formed under a Poisson process. Our results do not show any difference, either in the evolution of the network or in the final network. We have identified areas that we were not able to address due to time constraints but that may produce results reflecting the findings of Hartley (2014).
Bibliography
Anderson, C. M., Torres, F. & Faoro, A. (1985), ‘The EEG of the early premature’, Electroencephalography and Clinical Neurophysiology 60(2), 95–105.
André, M., Lamblin, M.-D., d’Allest, A.-M., Curzi-Dascalova, L., Moussalli-Salefranque, F., Nguyen The Tich, S., Vecchierini-Blineau, M.-F., Wallois, F., Walls-Esquivel, E. & Plouin, P. (2010), ‘Electroencephalography in premature and full-term infants. Developmental features and glossary’, Neurophysiologie Clinique/Clinical Neurophysiology 40(2), 59–124.
Bassett, D. S. & Bullmore, E. (2006), ‘Small-world brain networks’, The Neuroscientist 12(6), 512–523.
Bolaños, M., Bernat, E. M., He, B. & Aviyente, S. (2013), ‘A weighted small world network measure for assessing functional connectivity’, Journal of Neuroscience Methods 212(1), 133–142.
Brown, D., Feng, J. & Feerick, S. (1999), ‘Variability of firing of Hodgkin-Huxley and FitzHugh-Nagumo neurons with stochastic synaptic input’, Physical Review Letters 82(23), 4731.
Chialvo, D. R. (2004), ‘Critical brain networks’, Physica A: Statistical Mechanics and its Applications 340.
URL: http://dx.doi.org/10.1016/j.physa.2004.05.064
Craigmile, P. F. (2003), ‘Simulating a class of stationary Gaussian processes using the Davies–Harte algorithm, with application to long memory processes’, Journal of Time Series Analysis 24(5), 505–511.
Davies, R. B. & Harte, D. (1987), ‘Tests for Hurst effect’, Biometrika 74(1), 95–101.
Dieker, T. (2004), ‘Simulation of fractional Brownian motion’, MSc thesis, University of Twente, Amsterdam, The Netherlands.
Dupont, E., Hanganu, I. L., Kilb, W., Hirsch, S. & Luhmann, H. J. (2006), ‘Rapid developmental switch in the mechanisms driving early cortical columnar networks’, Nature 439(7072), 79–83.
Hartley, C. (2014), ‘Temporal dynamics of early brain activity explored using EEG and computational models’, PhD thesis, University College London, London, UK.
Hartley, C., Berthouze, L., Mathieson, S. R., Boylan, G. B., Rennie, J. M., Marlow, N. & Farmer, S. F. (2012), ‘Long-range temporal correlations in the EEG bursts of human preterm babies’, PLoS ONE 7(2), e31543.
Hosking, J. R. (1981), ‘Fractional differencing’, Biometrika 68(1), 165–176.
Kater, S. B., Mattson, M. P., Cohan, C. & Connor, J. (1988), ‘Calcium regulation of the neuronal growth cone’, Trends in Neurosciences 11(7), 315–321.
Kater, S., Guthrie, P. & Mills, L. (1989), ‘Integration by the neuronal growth cone: a continuum from neuroplasticity to neuropathology’, Progress in Brain Research 86, 117–128.
Kater, S. & Mills, L. (1991), ‘Regulation of growth cone behavior by calcium’, The Journal of Neuroscience 11(4), 891–899.
Kostovic, I. & Rakic, P. (1990), ‘Developmental history of the transient subplate zone in the visual and somatosensory cortex of the macaque monkey and human brain’, Journal of Comparative Neurology 297(3), 441–470.
Lipton, S. A. & Kater, S. B. (1989), ‘Neurotransmitter regulation of neuronal outgrowth, plasticity and survival’, Trends in Neurosciences 12(7), 265–270.
Liu, F., Feng, J. & Wang, W. (2003), ‘Impact of Poisson synaptic inputs with a changing rate on weak-signal processing’, EPL (Europhysics Letters) 64(1), 131.
McConnell, S. K., Ghosh, A. & Shatz, C. J. (1989), ‘Subplate neurons pioneer the first axon pathway from the cerebral cortex’, Science 245(4921), 978–982.
Pasupathy, R. (2011), ‘Generating homogeneous Poisson processes’, Wiley Encyclopedia of Operations Research and Management Science.
URL: http://dx.doi.org/10.1002/9780470400531.eorms0356
Price, D. J., Aslam, S., Tasker, L. & Gillies, K. (1997), ‘Fates of the earliest generated cells in the developing murine neocortex’, Journal of Comparative Neurology 377(3), 414–422.
Rubinov, M. & Sporns, O. (2010), ‘Complex network measures of brain connectivity: uses and interpretations’, NeuroImage 52(3), 1059–1069.
Sporns, O. & Kötter, R. (2004), ‘Motifs in brain networks’, PLoS Biology 2(11), e369.
Tolner, E. A., Sheikh, A., Yukin, A. Y., Kaila, K. & Kanold, P. O. (2012), ‘Subplate neurons promote spindle bursts and thalamocortical patterning in the neonatal rat somatosensory cortex’, The Journal of Neuroscience 32(2), 692–702.
Van Ooyen, A. & Van Pelt, J. (1996), ‘Complex periodic behaviour in a neural network model with activity-dependent neurite outgrowth’, Journal of Theoretical Biology 179(3), 229–242.
Watts, D. & Strogatz, S. (1998), ‘Collective dynamics of ‘small-world’ networks’, Nature 393(6684), 440–442.
URL: http://dx.doi.org/10.1038/30918