Brain-Computer Interfaces: a technical approach to supporting
privacy
Kirsten Wahlstrom (1,2), N. Ben Fairweather (2), Helen Ashman (1)
(1) School of Computer and Information Science, University of South Australia, South Australia 5095, Australia
(2) Centre for Computing and Social Responsibility, De Montfort University, Leicester LE1 9BH, UK
Abstract
Brain-Computer Interfaces (BCIs) are an emerging technology with implications for privacy.
However, so far there have been no technical approaches to supporting the privacy of BCI
users reported in the literature. This paper presents an initial conceptual model for such an
approach. The initial conceptual model has three foundations.
Firstly, BCI technologies are reviewed and technical components relevant to interoperability
are identified. Secondly, privacy is conceptualised as a measurable requirement predicated
upon enculturation, personal preference and context. Finally, the European Union’s privacy
directives are reviewed to clarify legal context and requirements. As the suggested conceptual
model is the first of its kind, analysis and critique are invaluable and are fostered through
three discussion themes. The paper concludes with suggestions for further research.
1. Introduction
Brain-Computer Interfaces (BCIs) provide a communication pathway between a brain and an
external electronic device. Warwick’s self-experiments [Warwick and Gasson, 2004]
demonstrated the technical feasibility of extending the peripheral nervous system, via the
Internet, to other people and to external devices. Also, there have been recent advances in
interpreting spontaneous neural activity [Coffey et al, 2010] and in concurrent interpretation
of more than one intentional neural activity [Allison et al, 2010, Leeb et al, 2011].
Furthermore, there has been research into BCIs that stimulate perceptions in people with
acquired blindness [Schmidt et al, 1996].
To the best of our knowledge, a BCI that stimulates human perceptions via the Internet
remains undeveloped, although the popular press reports US Department of Defence research
projects investigating ‘synthetic telepathy’ [Drummond, 2009, Shachtman, 2008] and there
have been experiments in stimulating motor intent with animal models [London et al, 2008,
Mavoori et al, 2005, Talwar et al, 2002]. A future BCI (fBCI) integrating these technical
features would concurrently interpret both intentional and spontaneous neural activity, and it
would stimulate perceptions. This would facilitate communication, via the Internet, between
humans and also between humans and external devices. In addition to these research
advances, BCIs interpreting intentional neural activity via electroencephalography (EEG) are
available to consumers [Emotiv Systems, 2011, Intendix, 2011].
In a society, privacy is limited because participation necessitates communication, which
results in some observation. When a person has the minimum possible privacy, they are under
continuous observation and perhaps even scrutiny. Observation and scrutiny restrict
autonomy and when autonomy is restricted, behaviours normalise and freedom and identity
are compromised. Thus, the pursuit of freedom through autonomy requires that privacy be
available. Given the research advances outlined above and the commercial availability of
BCIs, an ethical and legal obligation to support the privacy of BCI users exists. Should an
fBCI be commercially viable, the obligation will be pressing. This paper offers an initial
conceptual model for refinement towards a technical approach to meeting this obligation.
In related work, autonomy and identity are the focus of an argument for monitoring the
development of BCI technologies from an ethical perspective [Lucivero and Tamburrini,
2008]. Additionally, an experiment using a simulated BCI to examine participants’ responses
to stimuli concluded that “... one can be fooled into believing that one had an intention that
one did not in fact have” [Lynn et al, 2010], a finding that suggests significant implications
for autonomy. Furthermore, the human brain’s plasticity renders users of BCIs vulnerable to
long-term restrictions of autonomy [Salvini et al, 2008]. Privacy has been specifically
identified as an ethical issue relevant to BCIs [Wolpaw et al, 2006]. However, the only
detailed discussion of privacy and BCIs [Denning et al, 2009] conflates privacy with data
security and therefore conceptualises privacy as being susceptible to malicious attacks,
whereas it is also at risk of unintentional, and even well-meaning, transgressions.
In this paper, we propose an initial conceptual model that might be developed into one for a
technical approach to supporting privacy in BCIs. In order to facilitate interoperability, the
initial conceptual model is premised upon an understanding of the technical components of
BCIs. It is also premised upon a conceptualisation of privacy as a perception which differs
from person to person and which changes according to circumstances. Finally, in order to
foster uptake, the initial conceptual model aims to enable compliance with the European
Union’s various directives on privacy (see section 3).
The paper puts forward the initial conceptual model, which is the first of its kind and may
therefore be insufficient. Thus, a secondary contribution is an opportunity for scrutinising the
initial conceptual model via stimulation of discussion and critique. These contributions may
inform the design of a prototype for future implementation. If so, the prototype may be
implemented and tested to measure the extent to which it is usable and the extent to which it
enables compliance with the European Union’s privacy directives.
The rest of this paper is organised as follows. Section two describes relevant technical
components of BCIs and section three establishes a conceptualisation of privacy and reviews
the European Union’s privacy directives. Together, these two sections provide a technical,
conceptual and regulatory background for section four, which describes the initial conceptual
model. Section five poses questions to stimulate discussion and critique. Section six
concludes the paper by identifying options for future research projects.
2. BCI technology
The cerebral cortex provides sensory and motor functioning, reasoning, planning and
language [Nijholt et al, 2008]. BCIs identify and measure the electrical activity associated
with activating specific neural pathways in the cerebral cortex [Berger et al, 2007].
Measurements of activity are then applied to the control of external devices, bypassing the
peripheral nervous system [Hochberg et al, 2006]. Although there have been advances in
interpreting spontaneous neural activity [Coffey et al, 2010], a brain generates a profusion of
concurrent neural activity and separating a specific intention from neural ‘noise’ is difficult
[Curran and Stokes, 2003]. Therefore, most BCIs require users to target neural activity at
specific outcomes rather than sending and receiving information via the peripheral nervous
system. Learning to direct thoughts in a way that can be understood by a BCI can take months
and machine learning has been applied to relieve the burden of this task [Müller et al, 2007].
2.1 Brain imaging
Thought occurs when neurons in the brain send electrical signals. When sending or receiving
an electrical signal, neurons require an increased supply of oxygen and glucose [New
Scientist, 2011]. Therefore, an increase in blood flow occurs. Brain imaging technologies
detect and depict increases in blood flow or electrical activity in order to illustrate brain
functions.
However, brain imaging does not enable observation of meaning; it depicts the type of neural
activity only (examples include motor intent and visual perception) [Nijholt et al, 2008].
Thus, when brain imaging is applied in a BCI, semantic interpretation is provided in the
BCI’s engineering.
2.2 Machine learning
BCIs incorporating a machine learning component to map a user’s neural signals to their
intentions have to be trained to recognise and classify a specific neural signal [Krusienski et
al, 2011]. For example, consider a scenario in which Alice has purchased a new BCI to use
with her mobile phone (not an unlikely scenario: BCIs have already been applied to the control
of mobile phones [Campbell et al, 2010]). Alice must train the BCI to: identify each unique
pattern of neural activity that corresponds to each person in her mobile phone’s address book;
classify a specific neural event as representative of a specific person; identify unique patterns
of neural activity that correspond to the ‘call’ and ‘hang up’ intentions; and, lastly, to map the
‘call’ and ‘hang up’ intentional neural activities to the correlating functions provided by the
mobile phone.
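To make the training and mapping steps concrete, the following sketch trains a linear classifier, a common choice in EEG-based BCIs [Müller et al, 2007], on labelled feature vectors and then classifies a new epoch. The intent labels, the feature dimensionality and the random stand-in data are illustrative assumptions; a deployed BCI would extract its features from recorded EEG.

    # Illustrative sketch only: a linear classifier maps labelled EEG
    # feature vectors to intents. Labels, dimensionality and the random
    # stand-in data are assumptions, not recorded neural data.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    intents = ["contact_charlie", "contact_dave", "call", "hang_up"]

    # One hypothetical 8-dimensional feature vector per training epoch,
    # labelled with the intent Alice rehearsed while it was recorded.
    X_train = rng.normal(size=(200, 8))      # stand-in for EEG features
    y_train = rng.choice(intents, size=200)  # stand-in for session labels

    ground_truth = LinearDiscriminantAnalysis().fit(X_train, y_train)

    # Operationally, each new epoch is classified into an intent, which
    # the BCI's engineering then maps to a mobile phone function.
    new_epoch = rng.normal(size=(1, 8))
    print(ground_truth.predict(new_epoch)[0])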
In BCIs, machine learning uses pattern recognition to identify and classify real-time neural
activity. In pattern recognition, a ground truth function classifies previously unknown data
[Müller et al, 2007]. In order to approach optimal performance, real-time data must display
recognisable features. With respect to BCIs, the real-time data represents a human’s neural
activity and therefore recognisable features in the real-time data may be obscured by noise.
While there is some scope for instructing a human participant in using a BCI, by virtue of the
brain’s capacity for multi-tasking, the deliberate production of a neural signal displaying
recognisable features is intellectually taxing. Outcomes vary from person to person, with
some people never achieving sufficient signal intensity [Curran and Stokes, 2003].
3. Privacy
When using technologies, people try to create and maintain privacy to assert freedom, identity
and autonomy. The creation and maintenance of privacy can be achieved by declining to
participate, or by using anonymity, pseudonymity or misinformation [Fuster, 2010, Lenhart
and Madden, 2007]. When people opt out, adopt anonymity or pseudonymity, or engage in
misinformation, the effectiveness of any technology reliant upon accurate and representative
data is compromised.
3.1 Individual people
Privacy expectations are shaped in three ways. Firstly, privacy emerges from a society’s
communication practices [Westin, 2003]. For example, in some cultures, a house offers an
opportunity to withdraw from the community, whereas in others, a community shares housing
in an ad hoc manner. Thus, the extent to which a person expects privacy in a specific context
emerges from their enculturation. Secondly, in addition to enculturation, privacy expectations
are informed by personal preferences [Gavison, 1980]. In certain contexts, a person living in a
culture of shared housing may require more privacy than others. Therefore, while cultural
norms are influential, privacy expectations are diverse. Finally, a person’s expectation of
privacy is dependent on changes in their immediate context [Solove, 2006].
For example, Bob expects complete privacy, even uninterrupted solitude, in his morning
shower and conversely, very little in a busy shopping mall. His expectation of privacy differs
according to context and a change to that context may cause a change in his privacy
expectation. If he is alone during his morning shower, there is no difference between his
privacy expectation and his perception of privacy; however, if someone were to enter the
bathroom unexpectedly, the difference between Bob’s privacy expectation and his privacy
perception would grow and a requirement for more privacy would be catalysed.
Thus, a person’s privacy requirement, $p_r$, can be defined in part as the difference between
their privacy expectation, $p_e$, and their immediate perception of privacy, $p_p$:

$p_r = p_e - p_p$   (Equation 1)

Thus, when $p_r = 0$, privacy equilibrium appears to exist. When $p_r < 0$, privacy expectation is
less than privacy perception and there is more privacy available than the person believes they
need. Finally, when $p_r > 0$, privacy expectation is greater than privacy perception and there is
a requirement for more privacy.
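As a worked illustration of equation (1), the sketch below computes $p_r$ and maps its sign to the three states just described. The numeric scale for expectation and perception is an assumption made for illustration; the definition itself fixes only the difference.

    # Minimal sketch of equation (1). The 0..1 scale for expectation and
    # perception is an assumption made for illustration only.
    def privacy_requirement(p_expected: float, p_perceived: float) -> float:
        return p_expected - p_perceived

    def aspiration_state(p_r: float) -> str:
        if p_r > 0:
            return "more privacy"          # expectation exceeds perception
        if p_r < 0:
            return "less privacy is OK"    # more privacy than believed needed
        return "no change in privacy"      # privacy equilibrium

    # Bob alone in the shower: expectation and perception coincide.
    print(aspiration_state(privacy_requirement(1.0, 1.0)))  # no change in privacy
    # Someone enters unexpectedly: perception drops, a requirement emerges.
    print(aspiration_state(privacy_requirement(1.0, 0.2)))  # more privacy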
3.2 Regulating privacy

As privacy is a necessary enabler of important freedoms, it must logically be made
available to citizens of those nations upholding human rights and pursuing emancipation. The
conceptualisation of privacy as emerging from enculturation and as unique for each person
and their immediate context is well understood, long-standing, and widely applied by law and
policy makers. It forms the basis for legislative and other regulatory approaches such as the
Australian Privacy Act, the European Union’s privacy directives and the OECD’s guidelines.
These legal obligations and further ethical obligations [Floridi, 2006] mandate support for
privacy with respect to technologies.
In order to foster uptake of a future privacy-enhancing technology, possibly based upon the
initial conceptual model, this project aims inter alia to enable compliance with the European
Union’s Privacy Directives and therefore an overview of them is necessary. The European
Union (EU) has published four directives relevant to data privacy, which are colloquially
known as the data protection directive [European Parliament and the Council of the European
Union, 1995], the e-privacy directive [European Parliament and the Council of the European
Union, 2002], the data retention directive [European Parliament and the Council of the
European Union, 2006] and the cookie (or citizen’s rights) directive [European Parliament
and the Council of the European Union, 2009].
Briefly, the data protection directive “... requires Member States to protect the rights and
freedoms of natural persons with regard to the processing of personal data, and in particular
their right to privacy, in order to ensure the free flow of personal data in the Community”
[European Parliament and the Council of the European Union, 1995]. The e-privacy directive
“... translates the principles set out in [the data protection directive] into specific rules for the
electronic communications sector” [European Parliament and the Council of the European
Union, 2002]. The data retention directive amends the e-privacy directive, addressing “... the
retention of data generated or processed in connection with the provision of publicly available
electronic communications services or of public communications networks” [European
Parliament and the Council of the European Union, 2006]. Finally, the cookie directive also
amends the e-privacy directive, requiring consent for cookies installed on users’ devices
[European Parliament and the Council of the European Union, 2009]. To summarise, the
cookie and data retention directives amend the e-privacy directive, which is an interpretation
of the data protection directive. Thus, the data protection directive is the legal foundation for
supporting data privacy in the EU.
The data protection directive aims to enable unimpeded flow of data between EU member
states. It applies to the processing of personal data, that is, data describing natural people. The
directive makes no distinction between whether data is processed manually or automatically,
except that it requires manual data processing (that is, human intervention) when legally
binding decisions are being made.
The directive ensures that processing of personal data meets three conditions: transparency,
legitimacy of purpose and proportionality [European Parliament and the Council of the
European Union, 1995]. Transparency means that a person is explicitly informed of the
specific purpose when their personal data is processed. Legitimacy of purpose means that the
purposes for which personal data are processed are legitimately related to the business needs
of the data controller. Proportionality means that personal data must be processed only to an
extent compatible with the explicitly stated purpose.
Finally, the data protection directive restricts the transfer of data to settings that provide
comparable levels of privacy protection. Thus, its breadth of influence extends beyond the
EU’s member nations, requiring those wishing to process data about EU citizens to set up
regimes to protect that data. The EU’s approach to data privacy is consequently the most
comprehensive attempt to support privacy to date.
4. Initial conceptual model
The initial model has two main conceptual goals: to be interoperable with BCI technologies,
and to support privacy (which is unique for each person and changes according to
circumstances). As privacy requirements differ from person to person and over time, a
conceptual model that neglects flexibility cannot efficiently serve a wide user base. Therefore,
BCIs incorporating a machine learning component to map a user’s neural signals to their
privacy requirements are of relevance to this project.
If a BCI’s pattern recognition component can detect a person’s neural activity, then it
logically might be able to detect their privacy requirement. The privacy requirement can then
be applied to any information being shared. For example, consider a scenario in which Bob is
using a BCI to interact with his mobile phone. He is calling Charlie but does not want the call
to be logged in the mobile phone’s storage. First, he thinks of Charlie and the mobile phone
retrieves Charlie’s number. Then Bob thinks of not logging the call and the mobile phone
saves this privacy requirement in its working memory. Finally, Bob thinks ‘call’ and the
mobile phone places the call without logging it and clears its working memory.
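The scenario can be sketched as follows. The MobilePhone class, its method names and the 'do_not_log' flag are hypothetical, serving only to show that the privacy requirement lives in working memory and does not persist beyond the action it qualifies.

    # Hypothetical sketch: a transient privacy requirement held in working
    # memory and cleared once the call has been placed. All names are
    # illustrative assumptions, not an existing API.
    class MobilePhone:
        def __init__(self) -> None:
            self.address_book = {"Charlie": "+44 7700 900000"}
            self.call_log = []          # persistent storage
            self.working_memory = {}    # cleared after each action

        def lookup(self, name: str) -> str:
            return self.address_book[name]      # 'think of Charlie'

        def set_privacy(self, requirement: str) -> None:
            self.working_memory["privacy"] = requirement

        def call(self, number: str) -> None:
            if self.working_memory.get("privacy") != "do_not_log":
                self.call_log.append(number)
            print(f"calling {number}")
            self.working_memory.clear()         # requirement does not persist

    phone = MobilePhone()
    number = phone.lookup("Charlie")    # interpreted 'Charlie' intent
    phone.set_privacy("do_not_log")     # interpreted privacy intent
    phone.call(number)                  # call placed; call_log stays empty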
This scenario has only two outcomes: log the call or don’t log the call. It also relies on a
conscious decision to indicate privacy is required. However, privacy requirements are more
diverse than this. The definition for privacy requirements (see equation (1) above) enables a
diversity of privacy requirements and can be used to inform the initial conceptual model.
Equation (1) accounts for a person seeking no change in, less, or more privacy. For example,
Alice has a BCI which she plans to use for enabling data privacy. In a training phase
conducted in controlled circumstances, the BCI’s pattern recognition component establishes a
ground truth function to identity her neural activity corresponding to the three privacy
aspiration states defined by equation (1). Then, in an operational setting, the BCI’s pattern
recognition component classifies Alice’s real-time neural activity via the ground truth
function. Alice’s real-time, context-specific desire for no change in, less, or more privacy can
then be applied to communications with external devices.
However, in practice, it may be that a person rarely actively seeks less privacy. Therefore, in
an implementation rather than specifying a requirement for less privacy, an approval of a
privacy reduction may be more appropriate. Figure 1 provides an overview of this conceptual
approach; Figure 2 illustrates the initial conceptual model’s training phase in which a person
deliberately thinks about no change in privacy, approving a privacy reduction and more
privacy in turn; Figure 3 illustrates the initial conceptual model’s operational use in which a
person spontaneously thinks about no change in privacy, approving a privacy reduction or
more privacy, according to circumstances.
To continue the previous example, if Alice requires more privacy, it can be provided; and if
she continues to require more privacy, more can be provided; eventually she will require no
change in privacy. Thus, the importance of accurately detecting the ‘no change’ neural state is
clear: it is the stopping condition and it enables the extent to which Alice requires privacy in
that given context to be measured by the BCI.
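A minimal sketch of this stopping condition follows, with the ground truth function simulated by a scripted sequence of classifications; in the simulated session, Alice requires two increments of additional privacy before the 'no change' state stops the loop and yields the measurement.

    # Sketch of the stopping condition: privacy is incremented until the
    # 'no change' state is detected. The scripted sequence stands in for
    # real classifications by the ground truth function.
    from itertools import chain, repeat

    classifications = chain(["more privacy", "more privacy"],
                            repeat("no change in privacy"))

    privacy_level = 0
    while next(classifications) == "more privacy":
        privacy_level += 1      # provide one further increment of privacy

    print(f"measured privacy requirement: {privacy_level} increments")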
However, the human brain has a high degree of plasticity and a person’s neural patterns
corresponding to ‘no change in’, ‘less’ or ‘more’ privacy provision may not remain constant
over time. In order to adapt to small but constant changes in neural activities, the BCI’s
pattern recognition component can be configured to intermittently recalibrate its ground truth
function. This requires that Alice receive feedback from the BCI and that she confirm or deny
its interpretation of her privacy aspirations. A BCI providing haptic feedback was found to
better enable attentiveness and accuracy when compared to a BCI providing visual feedback
[Cincotti et al, 2008]. A similar approach may be useful in providing feedback for Alice’s
recalibration of the BCI’s ground truth function.
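The sketch below illustrates such intermittent recalibration under assumptions not fixed by the model: a linear classifier stands in for the ground truth function, the recalibration interval is arbitrary, and the confirm/deny dialogue (which might be haptic, as discussed above) is trivially simulated.

    # Sketch of intermittent recalibration under plasticity. A linear
    # classifier stands in for the ground truth function; the interval
    # and the always-confirming user are illustrative assumptions.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    states = ["more privacy", "no change in privacy", "less privacy is OK"]

    # Ground truth function from the initial, controlled training phase.
    X = list(rng.normal(size=(90, 8)))    # stand-in training features
    y = list(np.repeat(states, 30))       # 30 labelled epochs per state
    clf = LinearDiscriminantAnalysis().fit(X, y)

    RECALIBRATION_INTERVAL = 50           # assumed: confirm every 50th epoch

    for epoch_count in range(1, 201):
        features = rng.normal(size=8)     # stand-in for a live EEG epoch
        predicted = clf.predict([features])[0]
        if epoch_count % RECALIBRATION_INTERVAL == 0:
            user_confirms = True          # stand-in for haptic confirm/deny
            if user_confirms:             # keep only confirmed examples
                X.append(features)
                y.append(predicted)
                clf = clf.fit(X, y)       # refit to track neural drift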
The initial conceptual model appears to achieve its main goals. The model satisfies the
condition that it be interoperable with BCI technologies because it leverages a BCI’s
pre-existing pattern recognition component. As the initial conceptual model requires that the
ground truth function be generated for each person using the BCI, it enables privacy
requirements to differ from person to person. Furthermore, as the model requires that the
ground truth function be intermittently recalibrated, it also supports privacy requirements that
change over time. In addition, the initial conceptual model meets the transparency,
legitimacy of purpose and proportionality conditions of the EU’s data protection directive as
it responds to the BCI user’s immediate privacy requirements, rather than the data collection
objectives of the data controller.
5. Discussion and critique
The initial conceptual model presented above is the first of its kind and may provide a
foundation for future research, informing the design and implementation of a privacy-enhancing
technology (PET) for BCIs. Thus, analysis and critique of the model are essential and welcome,
and are fostered here through a preliminary identification of some of the model’s questions for
theory and for legal and regulatory frameworks, and of its operational and technical problems.
We welcome suggestions of further questions and problems in each of these areas.
Figure 1: Overview of initial conceptual model (training produces a ground truth function, which is then applied in operational use).

Figure 2: Training phase. In a controlled environment, machine learning derives the ground truth function from the ‘more privacy’, ‘no change in privacy’ and ‘less privacy is OK’ neural states.

Figure 3: Operational use. In an uncontrolled environment, the ground truth function classifies neural activity into a privacy requirement: ‘more privacy’, ‘no change in privacy’ or ‘less privacy is OK’.
5.1 Theory
The initial conceptual model is interesting from the perspectives of neuroethics and
transhumanism. Neuroethics encompasses two themes: arguments that focus on the ethical
issues emerging from neuroscience and its products, and arguments emerging from the ways
in which advances in neuroscience enable reconsideration of long-standing philosophical
problems [Levy, 2008]. Transhumanism is a related area of investigation. Transhumanists
theorise the impacts of leveraging technologies in human evolution, envisaging the effects of
enhancing human intelligence and physical and psychological capacities [Agar, 2007].
BCIs have emerged from a range of research disciplines, one of which is neuroscience. While
enabling at least some elements of autonomy in terms of privacy, the initial conceptual model
might facilitate the uptake of BCI technologies. Thus, it poses a wide range of neuroethical
problems. For example, BCIs restrict autonomy to the extent that BCI users may not always
be able to determine whether an intention originates with themselves [Lynn et al, 2010], and
loss of autonomy is linked to loss of freedom; do the initial conceptual model’s benefits
therefore outweigh any such loss? Is such a utilitarian analysis of the initial conceptual model
appropriate? Should a prototype of the conceptual model be implemented after it has been
further developed? If so, to what extent should it test for loss of autonomy and identity?
With respect to transhumanism, BCIs extend the capabilities of the human nervous system
beyond its biological limits. Should the eventual conceptual model enable BCIs to be used in
humanly ways? If so, to what extent should it support humanly ways of using BCIs?
There may be other ethical questions arising from these avenues of enquiry. We welcome
suggestions of what they might be.
5.2 Legal and regulatory frameworks
One of the initial conceptual model’s goals was to foster compliance with the EU’s privacy
directives. However, the directives were not examined in detail; instead, the three conditions
of the data protection directive sufficed to establish the legal context. Does the initial conceptual
model overlook relevant features in the EU’s privacy directives? If so, how may it be
amended?
5.3 Operational and technical problems
The initial conceptual model is susceptible to noisy data, user fatigue and an unwieldy
semantic burden. It might be that these are such substantial problems as to prevent practical
implementation, and it may be that other technical problems remain to be identified.
As noted above, machine learning would be applied to transfer the learning curve from the
user to the BCI. However, in the initial conceptual model, the ground truth function is derived
under controlled conditions. As operational conditions involve interference and noisy data,
the ground truth function may produce unreliable inferences. Such inferences may lead to
inaccuracies in the extent to which privacy is provided.
A person using an implementation developed from a refinement of the conceptual model may
have to concentrate in order to produce a sufficiently clear signal, which may cause fatigue.
Once fatigued, the person’s capability of producing subsequent signals will be reduced.
Assuming these problems are overcome, and a person’s privacy requirement can be
accurately identified, there remains the issue of how best to support their privacy. The initial
conceptual model has a semantic capability limited to three indicators of privacy
requirements, yet it may be used with a BCI performing any communicative task. If the BCI
has been designed with a rich semantics of its domain of use, applying a suitable privacy
enhancing technology (for example, encryption) is a trivial task. Otherwise, the burden of
deploying an appropriate privacy enhancing technology may have to rest with the user.
6. Conclusion
This paper contributes an initial conceptual model for a technical approach to supporting the
privacy of BCI users. The model offers interoperability with existing BCI technology. Also, it
is premised upon a view of privacy as unique for each person and changing over time. Lastly,
the model could foster compliance with the transparency, legitimacy of purpose and
proportionality conditions of the EU’s data protection directive. A preliminary analysis and
critique of the model has been attempted, but many questions remain and feedback is
welcome.
Future research will apply the critique stimulated by this paper to the initial conceptual model.
Later, the developed and refined (or re-designed) conceptual model may be used to inform the
design of a PET for BCIs. If a design is feasible and sufficient, a prototype may be
implemented and tested for usability. Finally, such a prototype can be tested against the three
goals which shaped the development of the initial conceptual model. These research avenues
would enable timely discussion and investigation of other, currently unforeseen, technical
approaches to supporting the privacy of BCI users.
7. References
Agar, N. (2007), Whereto transhumanism? The literature reaches a critical mass, Hastings
Center Report, 37, 3, 12-17.
Allison, B. Z., Brunner, C., Kaiser, V., Muller-Putz, G. R., Neuper, C. and Pfurtscheller, G.
(2010), Toward a hybrid brain-computer interface based on imagined movement and
visual attention, Journal of Neural Engineering, 7, 2, 026007.
Berger, T., Chapin, J., Gerhardt, G., McFarland, D., Principe, J., Soussou, W., Taylor, D. and
Tresco, P. (2007), International assessment of research and development in brain-computer interfaces, World Technology Evaluation Center, Inc.
Campbell, A., Choudhury, T., Hu, S., Lu, H., Mukerjee, M., Rabbi, M. and Raizada, R.
(2010), Neurophone: Brain-mobile phone interface using a wireless EEG headset,
MobiHeld 2010, 3-8.
Cincotti, F., Mattia, D., Aloise, F., Bufalari, S., Schalk, G., Oriolo, G., Cherubini, A.,
Marciani, M. and Babiloni, F. (2008), Non-invasive brain-computer interface system:
Towards its application as assistive technology, Brain Research Bulletin, 75, 6, 796-803.
Coffey, E., Brouwer, A.-M., Wilschut, E. and van Erp, J. (2010), Brain-machine interfaces in
space: Using spontaneous rather than intentionally generated brain signals, Acta
Astronautica, 67, 1-2, 1-11.
Curran, E. and Stokes, M. (2003), Learning to control brain activity: A review of the
production and control of EEG components for driving brain-computer interface
(BCI) systems, Brain and Cognition, 51, 3, 326-336.
Denning, T., Matsuoka, Y. and Kohno, T. (2009), Neurosecurity: Security and privacy for
neural devices, Neurosurgical FOCUS, 27, 1, E7.
Drummond, K. (2009), Pentagon preps soldier telepathy push, Wired, May 14.
Emotiv Systems (2011), Emotiv - brain computer interface technology, online at
http://emotiv.com accessed 09.05.2011.
European Parliament and the Council of the European Union (1995), Directive 95/46/EC,
Official Journal of the European Union, L 281, 0031 - 0050.
European Parliament and the Council of the European Union (2002), Directive 2002/58/EC,
Official Journal of the European Union, L 201, 0037 - 0047.
European Parliament and the Council of the European Union (2006), Directive 2006/24/EC,
Official Journal of the European Union, L 105, 0054 - 0063.
European Parliament and the Council of the European Union (2009), Directive 2009/136/EC,
Official Journal of the European Union, L 337, 0011 - 0036.
Floridi, L. (2006), Four challenges for a theory of informational privacy, Ethics and
Information Technology, 8, 3, 109-119.
Fuster, G. (2010), Inaccuracy as a privacy-enhancing tool, Ethics and Information
Technology, 12, 1, 87-95.
Gavison, R. (1980), Privacy and the limits of law, The Yale Law Journal, 89, 3, 421-471.
Hochberg, L., Serruya, M., Friehs, G., Mukand, J., Saleh, M., Caplan, A., Branner, A., Chen,
D., Penn, R. and Donoghue, J. (2006), Neuronal ensemble control of prosthetic
devices by a human with tetraplegia, Nature, 442, 7099, 164-171.
Intendix (2011), Personal EEG-based spelling system, online at http://www.intendix.com
accessed 09.05.2011.
Krusienski, D., Grosse-Wentrup, M., Galán, F., Coyle, D., Miller, K., Forney, E. and
Anderson, C. (2011), Critical issues in state-of-the-art brain-computer interface signal
processing, Journal of Neural Engineering, 8, 2, 025002.
Leeb, R., Sagha, H., Chavarriaga, R. and del R. Millán, J. (2011), A hybrid brain-computer
interface based on the fusion of electroencephalographic and electromyographic
activities, Journal of Neural Engineering, 8, 2, 025011.
Lenhart, A. and Madden, M. (2007), Teens, privacy and online social networks: How teens
manage their online identities and personal information in the age of myspace, Pew
Internet and American Life Project.
Levy, N. (2008), Introducing neuroethics, Neuroethics, 1, 1, 1-8.
London, B., Jordan, L., Jackson, C. and Miller, L. (2008), Electrical stimulation of the
proprioceptive cortex (area 3a) used to instruct a behaving monkey, IEEE
Transactions on Neural Systems and Rehabilitation Engineering, 16, 1, 32-36.
Lucivero, F. and Tamburrini, G. (2008), Ethical monitoring of brain-machine interfaces, AI &
Society, 22, 3, 449-460.
Lynn, M., Berger, C., Riddle, T. and Morsella, E. (2010), Mind control? Creating illusory
intentions through a phony brain-computer interface, Consciousness and Cognition,
19, 4, 1007-1012.
Mavoori, J., Jackson, A., Diorio, C. and Fetz, E. (2005), An autonomous implantable
computer for neural recording and stimulation in unrestrained primates, Journal of
Neuroscience Methods, 148, 1, 71-77.
Müller, K., Krauledat, M., Dornhege, G., Curio, G. and Blankertz, B. (2007), Machine
learning and applications for brain-computer interfacing, in Human Interface and the
Management of Information: Methods, Techniques and Tools in Information Design, eds
Smith, M. and Salvendy, G., Springer Berlin / Heidelberg.
New Scientist (2011), The human brain, online at http://www.newscientist.com/topic/brain
accessed 30.05.2011.
Nijholt, A., Tan, D., Allison, B., del R. Millán, J. and Graimann, B. (2008), Brain-computer
interfaces for HCI and games, CHI '08 extended abstracts on Human factors in
computing systems, 3925-3928.
Salvini, P., Datteri, E., Laschi, C. and Dario, P. (2008), Scientific models and ethical issues in
hybrid bionic systems research, AI & Society, 22, 3, 431-448.
Schmidt, E. M., Bak, M. J., Hambrecht, F. T., Kufta, C. V., O'Rourke, D. K. and
Vallabhanath, P. (1996), Feasibility of a visual prosthesis for the blind based on
intracortical micro stimulation of the visual cortex, Brain, 119, 507-522.
Shachtman, N. (2008), Army funds 'synthetic telepathy' research, Wired, August 18.
Solove, D. (2006), A taxonomy of privacy, University of Pennsylvania Law Review, 154, 3,
477-560.
Talwar, S. K., Xu, S., Hawley, E. S., Weiss, S. A., Moxon, K. A. and Chapin, J. K. (2002),
Rat navigation guided by remote control, Nature, 417, 6884, 37-38.
Warwick, K. and Gasson, M. (2004), Extending the human nervous system through internet
implants - experimentation and impact, IEEE International Conference on Systems,
Man and Cybernetics, 2, 2046-2052.
Westin, A. F. (2003), Social and political dimensions of privacy, Journal of Social Issues, 59,
2, 453.
Wolpaw, J. R., Loeb, G. E., Allison, B. Z., Donchin, E., Do Nascimento, O. F., Heetderks, W.
J., Nijboer, F., Shain, W. G. and Turner, J. N. (2006), BCI meeting 2005 - workshop
on signals and recording methods, IEEE Transactions on Neural Systems and
Rehabilitation Engineering, 14, 2, 138-141.