Project themes in computational brain modelling and brain-like computing
Supervisor, contact person: Pawel Herman ([email protected])
Department of Computational Biology (CB)
Please bear in mind that project details and specific research questions within the proposed themes are
discussed individually with students depending on their interests. There is also a lot of flexibility in
defining the scope and size of these projects. Project ideas at the cross-sections of the following
themes can also be proposed. Students will have an opportunity to learn to use dedicated simulation
software (with a possibility to rely on Python interface) or exploit their programming competence to
build their own computational tools for theoretical or applied research.
Projects are organized in three main themes, each of which describes a set of proposed topics. The lists of
topics and some project ideas are not meant to be limiting in any sense and can therefore be easily
expanded by students’ own ideas.
Theme 1: Computational brain, neural network simulations and brain-like computing
algorithms
The general focus here is on developing, studying and/or applying connectionist (network-based)
brain models. The proposed topics range from simulating detailed spiking neural networks to
investigating and validating more abstract brain-like computing architectures. Projects can be
formulated to either address theoretical questions or test the networks’ functionality in
applications.
1.1 Simulations and analysis of neural network models with emphasis on attractor memory networks
General theme:
A range of theoretical concepts of brain computation has been proposed in computational
neuroscience. Among the connectionist (network-based) approaches to modelling brain function, the
attractor theory of neural computation has recently received particular attention. The functionality of
attractor networks has been found helpful in explaining various perceptual and memory phenomena.
Consequently, these models can be considered fundamental components of a systems-level approach
to modelling brain function within the framework of a network-of-networks architecture.
Implementations of attractor memory models can range from more biologically plausible networks of
spiking neurons to more abstract networks of units with continuous rate-based input/output.
More biophysically detailed models with spiking neurons and synapses provide an opportunity to study
rich neural dynamics in close relation to biological data, and specifically, recordings from the brain
tissue. This way both dynamical and functional aspects of fascinating cortical phenomena can be
studied. Such spiking neural network models are usually developed using dedicated simulation software,
e.g. NEST, NEURON or GENESIS.
More abstract networks relying on rate-based units (i.e. with non-spiking real-valued input/output) on
the other hand allow for constructing larger systems with the aim of exploring functional aspects of the
simulated attractor memory system. In this context, both generic theoretical investigations into
computational capabilities of memory (learning, memory capacity etc.) as well as specific applications in
pattern recognition, whether in a biological or non-biological data mining context, can be pursued.
Within this theme other computational theories of the brain, e.g. liquid state machines, can also be
studied. In this regard, computational or dynamical aspects as well as application-oriented questions
may be explored. Students can make use of existing software simulators or develop their own
implementations of network models.
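To make the attractor idea concrete at its most abstract level, a minimal Hopfield-style associative memory can be sketched in plain Python. This is a toy sketch for intuition only (binary units, Hebbian outer-product learning); the spiking models discussed above require dedicated simulators such as NEST.

```python
def train_hopfield(patterns):
    """Hebbian outer-product learning for a binary (+/-1) Hopfield network."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    """Synchronous sign updates until the state settles into an attractor."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state
```

Storing a pattern and cueing the network with a corrupted copy lets the dynamics relax back to the stored attractor state, which is the memory-retrieval behaviour the projects above would study in far richer models.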
Project ideas
a) Studying the effect of different connectivity patterns, network architectures and their
dimensionality on the dynamics and function of the attractor model.
b) Investigating the sensitivity of the model to the level of biological detail being accounted for
(discussion on the required level of complexity and the relevance of biological constraints).
c) Exploring population-level (e.g. simple mean-field approximation) approaches to describing the
neural dynamics exhibited by a modular attractor network.
1.2 Brain-inspired or brain-like computing algorithms – theoretical developments and applications
General theme:
Development of brain models to study neural phenomena, as broadly discussed in topic 1.1 above, often
leads to better understanding of the nature and purpose of neural computations. From a broader
perspective, these computations can be seen as an inspiring model for novel approaches to generic
information processing. The good reputation of neural network architectures in this regard is largely due to
the impressive capabilities of information processing in the brain, which robustly handles large volumes
of noisy multi-modal data received in continuous streams. Consequently, brain-like computing has long
been considered as a particularly appealing concept in a broad field of information science. With the
increasing availability of powerful computing platforms and intensive development of brain models as
well as a growing body of knowledge about computational mechanisms underlying brain function, there
is a surge of interest in adapting these functional aspects to devise algorithms for more generic
applications in the field of data mining, pattern recognition etc. These efforts are urgently needed and
particularly relevant to real-world problems involving so-called big data, for example in exploratory
analysis of large volumes of high-dimensional neuroimaging data for research or clinical purposes.
Project ideas
a) Adapting selected brain-like computing paradigms for large-scale data mining, e.g. to perform
exploratory search for patterns in brain imaging data (medical diagnostics, see also Theme 3).
b) Devising new brain network inspired approaches to generically process temporal or sequential
data and/or comparing to the existing state-of-the art attempts.
c) General evaluation and validation of brain-like computing algorithms on speech recognition,
computer vision or other challenging real-world problems.
d) Testing robustness (sensitivity analysis, noise handling capabilities, computational speed) and
benchmarking brain-like computing methods against more conventional machine (/statistical)
learning techniques on a selected set of benchmark problems.
e) Devising network hierarchical architectures to model behavioral phenomena like prediction,
expectation and filtering (at a reasonable level of abstraction).
1.3. Bayesian learning in spiking neural network models
General theme: The theoretical framework of Bayesian statistics is commonly considered an
intuitively attractive model for representing and processing uncertain information in the brain. It has
received a lot of attention in computational studies of learning and inference mechanisms underlying
brain function. Since the Bayesian machinery for capturing probabilistic information in distributed neural
networks corresponds to a commonly accepted and biologically inspired Hebbian idea of synaptic
processes taking place in the connections between cells, there have been numerous attempts to adapt
Bayesian inference as an unsupervised learning principle. In this context it is particularly challenging to
translate Bayesian algorithms from abstract theoretical formulations to biologically plausible
computations in spiking neural network models. In the following projects a student will support the
ongoing research efforts in the lab, where our own Bayesian learning scheme (Bayesian Confidence
Propagating Neural Network, commonly referred to as BCPNN) has been developed.
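In its simplest rate-based reading, the Bayesian-Hebbian idea behind BCPNN ties the weight between two units to the log ratio of their joint and marginal activation probabilities, w_ij = log(p_ij / (p_i p_j)), with a bias b_i = log p_i. The sketch below estimates these quantities from binary activity samples; the function name and the small regularising eps are illustrative assumptions, not the lab's actual implementation.

```python
import math

def bcpnn_weights(samples, eps=1e-6):
    """Estimate Bayesian-Hebbian (BCPNN-style) weights w_ij = log(p_ij / (p_i * p_j))
    and biases b_i = log(p_i) from binary activity samples (lists of 0/1)."""
    n = len(samples[0])
    m = len(samples)
    # marginal activation probabilities
    p = [sum(s[i] for s in samples) / m for i in range(n)]
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # joint activation probability of units i and j
            pij = sum(s[i] * s[j] for s in samples) / m
            w[i][j] = math.log((pij + eps) / (p[i] * p[j] + eps))
    bias = [math.log(p[i] + eps) for i in range(n)]
    return w, bias
```

Units that tend to be co-active acquire positive weights and anti-correlated units negative ones, which is the Hebbian flavour of the rule; translating such counting-based estimates into plausible spike-driven synaptic dynamics is exactly the challenge described above.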
Project ideas
a) Benchmarking synaptic Bayesian-Hebbian learning rules in a spiking sparse activity cortical
associative memory and/or in popular pattern recognition/machine learning tasks.
b) Simulation and analysis of spiking neural network models pre-trained with a Bayesian learning
algorithm; studying implications of Bayesian learning on the network dynamics and function.
1.4 Visualisation in large-scale neural modelling
General theme:
Visualisation is one of the most neglected aspects of the rapidly developing field of computational
biology. Only recently can we observe an emerging trend for combining neural simulation frameworks
with visualisation software. Still, there is a plethora of challenging problems that need to be urgently
addressed (high-dimensional data, pre-processing, integration with simulation software, 3-D
visualisation of ongoing brain model activity, demands on purely visual aspects, an interactive
environment) to render visualisation a practical tool in computational studies. This is envisaged to
facilitate computational modelling and assist in demonstrating scientific findings.
Project ideas
a) Visualisation of existing data produced by models (different types of high-dimensional
spatiotemporal data are available).
b) Conceptual integration with the simulation environment to help with data pre-processing (or post-processing) and to facilitate an interactive mode for the user.
c) Review of the state-of-the-art methodology and a motivated choice of a tool for the
computational problem at hand.
1.5. Investigations into parallel implementations and simulations of brain network models on graphics
processing unit (GPU) clusters
General theme:
Simulations of large-scale brain models have gained growing importance in neuroscience mostly due to
the better availability of comprehensive sets of relevant experimental data and, certainly, due to
continuously increasing computational power. In the broad field of scientific computing the latter factor
is particularly appreciated as it allows researchers to expand the complexity and size of their models.
The majority of brain models are nowadays deployed on supercomputers. However, their availability is
rather limited and they are commonly dedicated to large-scale simulations. Recently, graphics
processing units (GPUs) have attracted attention as cheaper and more widely accessible simulation
platforms, particularly for prototyping and evaluating models at smaller scales. Developments of GPU
environments for neural simulations are still at early stages, especially when compared to
supercomputer platforms. This opens up a lot of interesting research opportunities and the proposed
projects within this theme could serve as suitable starting points. Students choosing this set of projects
should have some prior experience with CUDA or OpenCL.
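To make the parallelisation opportunity concrete: in a leaky integrate-and-fire (LIF) network the membrane update of each neuron within a time step is independent of the others, so it maps naturally onto one GPU thread per neuron. The plain-Python sketch below (Euler integration, illustrative parameter values) shows the per-neuron update that a CUDA or OpenCL kernel would execute in parallel.

```python
def lif_step(v, i_ext, dt=0.1, tau=10.0, v_rest=-70.0,
             v_thresh=-50.0, v_reset=-70.0, r=1.0):
    """One Euler step of leaky integrate-and-fire dynamics for a population.
    Each neuron's update reads only its own state, which is what makes the
    loop body a natural GPU kernel (one thread per neuron)."""
    spikes = []
    out = []
    for n, (vn, i) in enumerate(zip(v, i_ext)):
        vn += dt / tau * (v_rest - vn + r * i)  # leaky integration
        if vn >= v_thresh:                      # threshold crossing
            spikes.append(n)
            vn = v_reset                        # reset after spike
        out.append(vn)
    return out, spikes
```

A GPU port would replace the Python loop with a kernel launch over the neuron index; propagating spikes between time steps is where inter-thread communication and synchronisation actually become necessary, and is the interesting part of the proposed projects.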
Project ideas
a) Testing of existing GPU software for neural simulation. Potential development and a
comparative analysis of parallel implementations of simple neural models (spiking or more
abstract neural networks).
b) Investigations into parallel simulations of simple brain models (distributed spiking or rate-based
models, basic mean-field models etc.) at different scales deployed on GPU clusters using
different programming interfaces (OpenCL and CUDA).
Theme 2: Machine learning aspects of brain-inspired learning systems
The projects proposed under this theme are concerned with Machine Learning (ML) and
computational tools that benefit from brain inspirations but at the same time are not
necessarily even considered biomimetic. Unlike the projects proposed under Theme 1, the
focus here is rather on the relevance of separate ideas borrowed from brain research (or more
loosely related to computational neuroscience), such as various architectures in artificial neural
networks including deep hierarchies, self-organization, local learning in distributed systems, a
wide range of unsupervised and reinforcement learning approaches etc., to ML problems.
Potential projects can explore the usefulness of ML algorithms inspired by brain computations
in specific applications (spatio-temporal pattern recognition, time series prediction, inference in
noisy environments under uncertainty, novelty detection, control of agents in computer games
etc.) or study the nature and robustness of these biomimetic contributions to ML.
2.1 Exploring computational capabilities and properties of deep neural networks.
2.2 Exploiting self-organization and/or competitive learning principles in network computations.
2.3 Studying the potential of unsupervised or semi-supervised learning methods in problems with
limited, unbalanced, noisy and uncertain data.
2.4 Designing/testing algorithms to adaptively control behaviour of virtual agents, e.g. in computer
games, possibly with reinforcement learning methodology (or relating to the state-of-the-art
methods).
More specific project ideas within these broadly formulated and highly exploratory research themes will
be added at a later time. For now, please contact Pawel Herman for details.
Theme 3: Computational approaches to real-world problems in large-scale data mining,
pattern recognition, operational research etc.
Projects listed below describe a wide range of applied tasks within broad areas of data mining,
pattern recognition, time series prediction and operations research. They all require methods
capable of learning from noisy (sometimes nonstationary) data or heuristic approaches to NP-complete problems, hence offering an opportunity to identify interesting research questions
beyond serving only a specific application. The list of application areas can be easily extended at
students’ initiative and projects will certainly have to eventually be made more concrete. The
ambition is to propose, evaluate and examine approaches to specific computationally
demanding problems with available data sets for prototyping and benchmarking. In some
application areas there is an opportunity to work on real-world data sets exploited in previous
data science research projects.
3.1 Brain signal pattern recognition for future applications
General theme:
Pattern recognition and machine learning have significantly advanced the field of biological data analysis,
leading to the development of effective diagnostic tools and supporting research
efforts. The contribution of novel pattern recognition methods has been particularly appreciated in
brain data mining as this new approach allows for exploratory search for spatio-temporal patterns in
large quantities of high-dimensional nonstationary recordings of brain activity. The emerging trend is to
combine machine learning techniques with brain-inspired computing algorithms to address increasingly
demanding objectives of brain signal analysis in novel applications.
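As a deliberately minimal illustration of the classification stage of such a pipeline, the sketch below applies a nearest-centroid classifier to feature vectors that are assumed to have been extracted from the signal already (e.g. per-channel band powers). Real EEG work would add filtering, artefact rejection and considerably more capable classifiers; this only shows the shape of the problem.

```python
import math

def centroid(vectors):
    """Mean feature vector of one class of training trials."""
    n = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(n)]

def nearest_centroid(x, centroids):
    """Assign a feature vector to the class with the closest centroid
    (Euclidean distance); `centroids` maps label -> mean vector."""
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))
```

Given labelled training features, each class is summarised by its mean vector and new trials are assigned to the closest mean; swapping this stage for a brain-inspired classifier is one way to frame the projects below.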
Project ideas
a) Develop your own approach, or build upon existing approaches, to a specific brain signal
pattern recognition problem, e.g. electroencephalographic (EEG) signal classification for a brain-computer interface (BCI), automated sleep scoring based on physiological signals including EEG,
drowsiness or cognitive load detection, or EEG-based epileptic seizure prediction (identifying
precursors in high-dimensional brain signal recordings).
b) Alternatively, select and compare a few existing state-of-the-art methods. Focus on selected
aspects of a brain signal pattern recognition problem of your choice (signal handling, pattern
extraction, classification and interpretation of brain signal correlates).
3.2 Medical diagnostics
General theme:
Computer-aided diagnosis has been extensively validated in various medical domains, ranging from
biomedical image or signal analysis to expert systems facilitating the process of decision making in
clinical settings. Although the usefulness of computational approaches to medical diagnostics is beyond
any doubt, there is still a lot of room for improvement to enhance the sensitivity and specificity of
algorithms. The diagnostic problems are particularly challenging given the complexity as well as diversity
of disease symptoms and pathological manifestations. In the computational domain, a diagnostic
problem can often be formulated as a classification or inference task in the presence of multiple sources
of uncertain or noisy information. This pattern recognition framework lies at the heart of medical
diagnostics projects proposed here.
Project ideas
a) Define a diagnostic problem within the medical domain and examine the suitability of machine
learning, connectionist (artificial network-based), statistical or soft computing methods to your
problem.
b) Survey the state-of-the-art in computational tools supporting classification of disease symptoms
and comparatively examine the diagnostic performance of some of them on a wide range of
available benchmark data sets. Define a measure for diagnostic performance. Discuss most
recent trends in the field and address some of the urgent challenges for computer-assisted
diagnostics in medicine.
3.3 University timetabling problem or other challenging problems in scheduling
General theme:
Planning is one of the key aspects of our private and professional life. Whereas planning our own daily
activities is manageable, scheduling in large multi-agent systems with considerable amounts of
resources to be allocated in time and space, subject to a multitude of constraints, is a truly daunting task. In
consequence, scheduling and timetabling, as prime representatives of hard combinatorial problems, have
increasingly been addressed algorithmically with the use of the computational power of today's
computers. This computer-assisted practice in setting up timetables for courses, students and lecturers
has also gained a lot of interest at universities around the world and constitutes an active research field.
Within this topic, students can address a scheduling problem of their own choice or they can use
available university timetabling benchmark data and tailor it to the project's needs. An important aspect
of such a project would be to select or compare different algorithms for combinatorial optimisation, and
define a multi-criterion optimisation objective. It could be an opportunity to test computational
intelligence and machine learning methodology.
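As a concrete starting point for the combinatorial side, the sketch below applies simulated annealing, one of many candidate metaheuristics, to a miniature timetabling instance in which clashing events must not share a time slot. The function names, cooling schedule and the single clash-count objective are illustrative assumptions; a real project would use a multi-criterion objective as discussed above.

```python
import math
import random

def conflicts(assignment, clashes):
    """Count pairs of clashing events scheduled into the same time slot."""
    return sum(1 for a, b in clashes if assignment[a] == assignment[b])

def anneal(events, n_slots, clashes, steps=5000, t0=2.0, seed=0):
    """Simulated annealing: move one event to a random slot, accept uphill
    moves with probability exp(-delta / T) under geometric cooling."""
    rng = random.Random(seed)
    assign = {e: rng.randrange(n_slots) for e in events}
    cost = conflicts(assign, clashes)
    best, best_cost = dict(assign), cost
    t = t0
    for _ in range(steps):
        e = rng.choice(events)
        old_slot = assign[e]
        assign[e] = rng.randrange(n_slots)   # propose a random move
        new_cost = conflicts(assign, clashes)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                  # accept (possibly uphill) move
            if cost < best_cost:
                best, best_cost = dict(assign), cost
        else:
            assign[e] = old_slot             # reject and revert
        t *= 0.999                           # geometric cooling
    return best, best_cost
```

On real university timetabling benchmarks the same skeleton would be compared against other metaheuristics, with the objective extended to soft constraints such as room capacities and student preferences.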
3.4 Intelligent control
General theme:
There is a clear trend for smarter machines, able to collect data, learn, recognize objects, draw
conclusions and perform behaviours, to emerge in our daily life. Advanced intelligent control systems
affect many aspects of human activities and can be found in a wide range of industries, e.g. healthcare,
automotive, rail, energy, finance, urbanization and consumer electronics among others. By adapting and
emulating certain aspects of biological intelligence this new generation of control approaches makes it
possible for us to address newly emerging challenges and needs, build large-scale applications and
integrate systems, implement complex solutions and meet growing demand for safety, security and
energy efficiency.
Project ideas
a) Select a real-world control problem (traffic control, energy management, helicopter or ship
steering, industrial plant control, financial decision support and many others) and propose a
new approach using machine learning and soft computing methodology (computational
intelligence) that enhances functionality, automatisation and robustness when compared to
classical solutions.
b) Demonstrate functional (and other) benefits of “computationally intelligent” control approaches
in relation to the classical methodology in a range of low-scale control problems (benchmarks).
Discuss a suitable framework of comparison and potential criteria.
c) Consider a robotic application with all constraints associated with autonomous agents and real-world environments (which can be emulated in software). Propose “computationally intelligent”
methods to enable your agent to robustly perform complex tasks (learn from the environment,
evolve over time, find solutions to new emerging problems, adapt to new conditions etc.).
3.5 Optimisation and parameter search in computational modelling
General theme:
A model's parameters have a decisive effect on its behaviour and dynamics. At the same time, the
search for parameters is the most tedious component of computational modelling. Neural simulations are no
exception; on the contrary, since they account for nonlinear and stochastic effects in brain data,
their parameters need to be carefully tuned to obtain a desirable functional and/or dynamical outcome. This
optimisation procedure is commonly carried out manually on a trial-and-error basis. It is thus desirable
to automatise this tedious process by providing an effective parameter search and optimisation scheme.
One of the key challenges to address is the computational efficiency of the implemented method and the
definition of a cost function based on the existing "manual" evaluation criteria. Tests in the project will
be performed with the use of existing neural models or a low-scale simulation demo will be developed.
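A minimal automated alternative to manual trial-and-error tuning is random search over a bounded parameter space against a scalar cost function. The sketch below is a generic baseline (the parameter names and the quadratic toy cost used in testing are invented for illustration); more refined schemes, e.g. evolutionary or Bayesian optimisation, could be built on the same interface.

```python
import random

def random_search(cost_fn, bounds, n_trials=200, seed=1):
    """Random search: sample each parameter uniformly within its bounds,
    evaluate the cost function, keep the lowest-cost parameter set.
    `bounds` maps parameter name -> (low, high)."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        c = cost_fn(params)
        if c < best_cost:
            best_params, best_cost = params, c
    return best_params, best_cost
```

In a real project the cost function would encode the discrepancy between simulated and desired network behaviour (e.g. target firing rates or attractor dwell times), formalising the "manual" evaluation criteria mentioned above.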