Brain Computer Interfaces

Traditional Brain Computer Interfaces (BCIs) are typically designed to respond to specific patterns detected in spatiotemporal electroencephalogram (EEG) signals measured non-invasively from the scalp; see [Wolpaw et al. (2002)] for a review. The EEG signal originates from the electrical activity of thousands of neurons in the brain, but can be viewed as a mixture of five distinct frequency bands of note: Alpha, Beta, Theta, Delta and Mu waves. This is an important property, since each band is associated with activity in different conscious or unconscious states.


Figure : Basic block diagram of a BCI system incorporating signal detection, processing and deployment [Thorpe et al. 2005]

In current implementations, BCI-based systems generally allow the user to select characters or icons from a screen by either moving a cursor or halting a scrolling list of options. However, the problem of generating a suitable EEG control signal and subsequently detecting it is non-trivial. With theory and methods drawn from a vast array of disciplines, this area of research has developed a substantial nomenclature. One method is to correlate specific movements, or thoughts of movements, with particular patterns in the EEG signal [Bozorgzadeh et al. (2000) & Lusted and Knapp (1996)]. Another is to train the user to control the amplitude of their Beta or Mu waves (associated with an alert state of mind and the motor cortex respectively) in order to control the movement of a cursor [Wolpaw et al. (1991) & Kubler (1999)] or other external device [Millan et al. (2004)]. However, in addition to being contaminated by noise sources such as eye blinks and facial movements, the EEG signal is highly attenuated by the skull and surrounding tissue, making on-line decoding of surface-detected EEG a complex problem. For examples of such techniques see [Sykacek et al. (2004), Nicolaou and Nasuto (2003), James and Gibson (2003), Penny et al. (2000) & Jung et al. (2000)]. Issues are further compounded since scalp electrodes are notoriously difficult to position and attach accurately [Pfurtscheller et al. (2003)].
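
To make the idea of an amplitude-based control signal concrete, the following sketch (not taken from any of the systems cited above) shows how Mu-band power might be estimated from a single-channel EEG window and mapped to a cursor velocity. The sampling rate, band limits and gain are illustrative assumptions only.

import numpy as np
from scipy.signal import welch

FS = 256           # assumed sampling rate in Hz
MU_BAND = (8, 12)  # approximate Mu-rhythm frequency range in Hz

def mu_band_power(eeg_window):
    """Estimate Mu-band power for a single EEG channel using Welch's method."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return float(np.trapz(psd[mask], freqs[mask]))

def cursor_velocity(eeg_window, baseline, gain=1.0):
    """Map the deviation of Mu power from a resting baseline to a cursor speed."""
    return gain * (mu_band_power(eeg_window) - baseline)

# Example with synthetic data standing in for a one-second EEG window.
rng = np.random.default_rng(0)
window = rng.normal(size=FS)        # white-noise placeholder for real EEG
baseline = mu_band_power(window)    # would normally come from a calibration phase
print(cursor_velocity(window, baseline))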

BCIs for Identification

Although clearly still in its infancy, non-invasive BCI technology has been highlighted as having several potential applications, both in terms of digital identity and specifically in AmI environments. For example, although fingerprinting has been hailed as a unique way to identify a person, the axiom on which this is based has recently come under more detailed scientific scrutiny. The discovery that the individuality of fingerprints is questionable [Pankanti et al. (2000)] has prompted further research into novel biometric systems for identification (see FIDIS D6.1 for more information on current biometric techniques). EEG has to date produced convincing results [Ravi et al. (2005)], which suggest that an individual is uniquely identifiable using brain patterns, with an accuracy of up to 96.6%, a figure which rivals some more established biometric systems.

Such research capitalises on the innate behaviour of the brain when presented with visual images that have previously been seen, or are being consciously sought after; i.e. the brain produces a unique signature of activity in the sub-conscious when an image is recognised. The appeal of this technique for identification is that it would be hard to replicate, i.e. forge, this biometric. However, other researchers have indicated the possibility of broadening the scope of such technology to allow the characteristics of any thought process to be uniquely identified. This has several advantages over ‘classic’ biometrics in that the user is able to change the thought (known as a ‘pass-thought’ [Thorpe et al. 2005]) in the same way that a password or PIN can be changed, whereas biometrics such as fingerprints cannot easily be modified. Additionally, the available entropy (i.e. the set of possible inputs) is notably large, since such a thought could be anything from a simple word in any language to a personal memory.
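
As a purely hypothetical illustration of how pass-thought verification could be structured, the sketch below enrols an EEG feature template from several recordings and later accepts or rejects a fresh recording by cosine similarity. The feature extraction, channel counts and threshold are invented for illustration and do not correspond to any published system.

import numpy as np

THRESHOLD = 0.9  # assumed similarity threshold for accepting a pass-thought

def extract_features(eeg):
    """Placeholder feature extraction: mean power per EEG channel."""
    return np.mean(np.asarray(eeg) ** 2, axis=1)

def enrol(recordings):
    """Average the feature vectors of several recordings of the chosen pass-thought."""
    return np.mean([extract_features(r) for r in recordings], axis=0)

def verify(template, eeg):
    """Accept the claimant if the fresh recording is close enough to the enrolled template."""
    features = extract_features(eeg)
    similarity = np.dot(template, features) / (np.linalg.norm(template) * np.linalg.norm(features))
    return similarity >= THRESHOLD

# Example with random data standing in for 8-channel, 256-sample recordings.
rng = np.random.default_rng(1)
template = enrol([rng.normal(size=(8, 256)) for _ in range(5)])
print(verify(template, rng.normal(size=(8, 256))))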

The existing solutions for detecting raw brain activity are still cumbersome; however, remote brain-activity sensors are becoming a reality. Optical sensors, for example, have been developed which use light to infer neural activity near the outer layers of the cortex by measuring changes in reflection caused by changes in blood-oxygenation levels. Such devices do not physically make contact with the head.

 

Security & Privacy

With any emerging security technology, one question is paramount: ‘How reliable is it?’ Two technical measures can be used to quantify an answer: the false positive, i.e. wrongly identifying someone as an enrolled user of the system, and the false negative, i.e. being unable to confirm the identity of a valid user. Whether BCIs are ultimately able to provide the correct trade-off between correct and incorrect identification remains to be seen. However, this technology has clear advantages when it comes to the two most pertinent security issues relating to biometrics: the lack of secrecy of biometric data (for example, fingerprints are routinely left on objects during everyday activities) and non-replaceability (i.e. once a fingerprint has been compromised it cannot be changed). Additionally, whereas typical biometrics also suffer from failure to enrol (e.g. fingerprints may be worn away or digits missing), it is thought that such techniques would not be as susceptible.
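
The trade-off between the two error types is conventionally expressed as a false accept rate and a false reject rate at a chosen decision threshold. The minimal sketch below shows the calculation; the scores and threshold are invented for illustration.

import numpy as np

def error_rates(genuine_scores, impostor_scores, threshold):
    """Return (false accept rate, false reject rate) at a given decision threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = float(np.mean(impostor >= threshold))  # false positives: impostors accepted
    frr = float(np.mean(genuine < threshold))    # false negatives: valid users rejected
    return far, frr

# Illustrative scores only: raising the threshold lowers the FAR but raises the FRR.
print(error_rates([0.92, 0.81, 0.95, 0.67], [0.30, 0.58, 0.74], threshold=0.7))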

Beyond the inherent error-prone nature of biometric technology, there is the real possibility of deliberate attack in an attempt to compromise security. There is essentially a series of vulnerabilities common to any biometric system, regardless of the type of biometric being utilised.

 


Figure : A block diagram of a generic biometrics system, with eight potential attack points highlighted. Adapted from [Uludag et al. 2004]

 

The figure above highlights the eight significant points within a biometric system which are open to potential attack [Uludag et al. (2004)]. The extent to which these can be exploited has a direct relation to the overall security of the system (see FIDIS D3.2 for more detail). The eight points of attack are:

  1. Utilising an imitation biometric. 

  2. Although more complex, replaying previously submitted biometric data to cause a false positive identification.

  3. By attacking the system at the feature module point, it is possible, albeit unlikely, for an attacker to force the system to produce values unrelated to the sensor input which subsequently generate a false positive result. 

  4. Replacing the system generated feature values with known valid ones will result in unauthorised access. 

  5. If the matcher can be forced into generating an incorrectly high matching score, then a false positive will result. 

  6. The template matching component is particularly vulnerable since incorrect data stored here (through error, collusion or attack) is open to abuse at any time. Data may be added, edited, removed or replaced so that an invalid user is authenticated. Database security is the key here to reducing vulnerability since unsecured templates can be reverse-engineered and synthetic data added.  

  7. By intercepting the transmission of the template data, and replacing the original templates with false data, a false positive can be generated. 

  8. By attacking at the decision end of the system, the binary result ‘Yes / No’ can be modified to falsify the result.

 

Attack (1) is perhaps the most intuitive, yet it is minimised through the use of BCI-based biometrics. The remaining attack techniques require a more intimate understanding of the specific authentication system and typically some degree of access to its inner workings. However, all component parts of the authentication system represent a potentially exploitable issue.  
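
To relate the numbered attack points back to the components of the generic pipeline in the figure, the schematic sketch below marks each point at the processing stage where it applies. The functions are toy placeholders invented for illustration, not a real authentication implementation.

import numpy as np

THRESHOLD = 0.8  # assumed decision threshold

def extract_features(sample):
    return sample / np.linalg.norm(sample)    # toy feature extraction

def match(features, template):
    return float(np.dot(features, template))  # toy similarity score

def authenticate(sensor_reading, template_db, user_id):
    sample = np.asarray(sensor_reading, dtype=float)  # (1) imitation biometric, (2) replayed data
    features = extract_features(sample)               # (3) feature module forced to emit bogus values
                                                      # (4) feature values swapped for known valid ones
    template = template_db[user_id]                   # (6) stored template tampered with
                                                      # (7) template intercepted and replaced in transit
    score = match(features, template)                 # (5) matcher forced to return a high score
    return score >= THRESHOLD                         # (8) the final Yes/No decision overridden

template_db = {"alice": extract_features(np.array([1.0, 2.0, 3.0]))}
print(authenticate([1.1, 2.0, 2.9], template_db, "alice"))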

The ethical questions regarding such ‘mind-reading’ technology are clear, although the extent to which it is exploitable is currently scientifically unproven. 

BCIs for Profiling

In order to create a comprehensive profile of an individual, it is necessary to gain as much information as possible regarding their interactions, thoughts and feelings. If one can metaphorically “get into someone’s head”, then one has ‘insider’ knowledge of the individual’s feelings, reactions and emotional state. Whilst the concept of observing someone’s thoughts in the way one watches a television is intriguing, the scientific evidence currently does not support its feasibility. However, the fact that bodily responses are under neural control suggests that they may be able to provide windows into underlying psychological processes. For this reason there is great interest in the use of non-invasive BCIs for affective computing (see section ).

A new field of research termed Augmented Cognition has emerged over the last few years, where the basic premise is to address cognitive bottlenecks (e.g. limitations in attention, memory, learning, comprehension, and decision making) via technologies that assess the user’s cognitive status in real time. A system employing such novel concepts monitors the state of the user through behavioural, psycho-physiological and/or neurophysiological data acquired in real time, and then adapts or augments the computational interface to significantly improve their performance on the task at hand [Schmorrow et al. (2004)]. In essence, such systems tend to reduce the amount of information the user is being exposed to if indications reveal that the user is being overloaded. Unsurprisingly, the applications in question are typically military-related, e.g. reducing screen information to essential material if a pilot is becoming stressed. However, the advantage is that the technology being developed is designed to be robust and unobtrusive (in theory), and it has clear application in AmI environments.
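
A minimal sketch of the closed loop described above is given below, assuming a toy cognitive-load estimate derived from two normalised physiological indicators and a simple display policy. The indicators, threshold and policy are assumptions for illustration, not taken from the cited work.

OVERLOAD_THRESHOLD = 0.7  # assumed normalised load above which the user counts as overloaded

def estimate_load(heart_rate, eeg_theta_power):
    """Toy cognitive-load estimate combining two normalised physiological indicators."""
    return 0.5 * heart_rate + 0.5 * eeg_theta_power

def select_items(items, load):
    """Show only the highest-priority items when the estimated load is high."""
    if load > OVERLOAD_THRESHOLD:
        return items[:3]  # essential material only
    return items

# Example: a stressed user (high estimated load) sees a reduced set of alerts.
alerts = ["engine", "fuel", "altitude", "weather", "mail", "chat"]
print(select_items(alerts, estimate_load(0.9, 0.8)))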

ICT Implants

While the field of Information and Communication Technologies (ICT) has expanded significantly over the past few years, few areas have created as much controversy as that of ICT implants. The recent developments in basic engineering technologies have meant that the integration of silicon with biology has already become a reality, and has prompted much research into the ethics of these technologies (see e.g. [EGE (2005)]).  

 

Medical devices 

Part of the drive behind the development of ICT implant devices is medical, i.e. restoring lost human abilities. Whilst society in general has come to accept artificial mechanical body parts such as artificial hips and heart valves, debate now rages about devices based around computer technology. There is a fair range of such ‘restorative’ devices already in clinical use, the pacemaker being one of the better known. However, of greater interest is the development of technologies which are able to interact with us on a neural level. By far the most ubiquitous sensory neural prosthesis is the cochlear implant [Zeng (2004)]. Where destruction of cochlear hair cells and the related degeneration of auditory nerve fibres have resulted in sensorineural hearing loss [McCabe (1979)], the prosthesis is designed to elicit, via a linear array of electrodes implanted in the deaf patient’s cochlea, patterns of nerve activity that mimic those of a normal ear across a range of frequencies. Current devices enable around 20 percent of those implanted to communicate without lip reading, and the vast majority to communicate fluently when the sound is combined with lip reading. Their modest success is related to the ratio of stimulation channels to active sensor channels in a fully functional ear: recent devices have up to 24 channels, while the human ear utilises upwards of 30,000 fibres on the auditory nerve. 

With the limitations of the cochlear implant in mind, the retinal implant [Rizzo et al. (2001)] is certainly substantially more ambitious. While degenerative processes such as retinitis pigmentosa selectively affect the photodetectors of the retina, the fibres of the optic nerve remain functional, so with direct stimulation of the nerve it has been possible for the recipient to perceive simple shapes and letters [Liu et al. (2003)]. However, the difficulties of restoring full sight are several orders of magnitude greater than those of the cochlear implant, simply because the retina contains millions of photodetectors that need to be artificially replicated. An alternative methodology is to bypass the optic nerve altogether and use cortical surface or intracortical stimulation to generate phosphenes [Dobelle (2000)]. However, progress in this area has been hampered by our lack of understanding of brain functionality, and to date it has produced results no better than systems which simply utilise other functioning sensory receptors [Meijer (1992)] and thus potentially capitalise on cross-modal neural plasticity [Shimojo and Shams (2001) & Cohen et al. (1997)].

Electrical stimulation of the CNS has also proved useful in the treatment of other medical conditions. Earlier work by Delgado to artificially induce schizophrenia has subsequently been brought into question; however, more recently, direct stimulation of areas within the brain has proven successful in treating the tremor, rigidity and bradykinesia symptoms of Parkinson’s disease by manipulating basal ganglia activity [Gasson et al. (2005b)]. Work on rats [Talwar et al. (2002)] has also demonstrated how direct brain stimulation can be used to guide them through a maze problem, essentially by reinforcement: stimuli are evoked in the cortical whisker areas to suggest the presence of an object, and the medial forebrain bundle (thought to be responsible for both the sense of motivation and the sense of reward) is stimulated when the rat moves accordingly.

The ability of electrical neural stimulation to drive behaviour and modify brain function without the recipient’s cognitive intervention is clear. However, it can also be used to replace a natural percept; for example, Romo et al. [Romo et al. (2000)] demonstrated that local electrical microstimulation of the somatic sensory cortex can substitute for skin vibration in perceptual tasks that require frequency discrimination.

In the devices discussed above, the interaction is directly from the device to the human, i.e. information is internalised. To fulfil the requirements of seamless interaction, the dataflow needs to be bi-directional, and thus externalising technologies are required. The invasive alternative to surface EEG recordings (see section ) is to record neural activity from the cortex, either by placing electrodes inside the skull [Kennedy et al. (2004)] or by implanting electrodes into the brain; see [Donoghue (2002)] for a review. This invasive procedure has already given interesting insights into the neurophysiological functionality of the brain, with work on rats and primates [Chapin (1999) & Wessberg (2000)] supporting the hypothesis that the direction and speed of an intended movement are predicted in the motor cortex by the activity of populations of neurons.
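
The classic formalisation of this hypothesis is the population vector: each neuron is associated with a preferred movement direction, and the intended direction is estimated as the firing-rate-weighted sum of those preferred directions. The sketch below illustrates the idea with fabricated preferred directions and firing rates; it is not a reconstruction of the cited experiments.

import numpy as np

def population_vector(preferred_dirs, rates):
    """Weight each neuron's preferred direction (a unit vector) by its firing rate."""
    weighted = (np.asarray(rates)[:, None] * np.asarray(preferred_dirs)).sum(axis=0)
    return weighted / np.linalg.norm(weighted)

angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)            # eight toy neurons
preferred = np.column_stack([np.cos(angles), np.sin(angles)])    # their preferred directions
rates = np.array([5.0, 20.0, 40.0, 20.0, 5.0, 2.0, 1.0, 2.0])    # firing rates peaked near 90 degrees
print(population_vector(preferred, rates))                       # approximately a unit vector pointing 'up'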

 

Discussion 

In the short term, the use of medical ICT implants raises several identity-related security and privacy concerns. Although for the most part existing applications involve a uni-directional connection with the nervous system, the devices themselves are often capable of bi-directional communication with the outside world. This communication is typically via some wireless means, to avoid the infection risks associated with percutaneous connections. By remotely accessing an implanted device it is usually possible to control the device, adjust its settings, read back stored data, and in some cases even gain access to ‘live’ biological information. An example of this is the more established pacemaker technology, whereby data relating to the performance of the heart, the activity of the device and so on is logged internally for subsequent patient management. Access to this data is obviously vulnerable to typical attack methodologies. 

More long term, the technologies discussed above may prove to be the basis of future commercial technologies which allow us improved interaction in more AmI related environments. The ability to form direct bi-directional links with the human brain certainly opens up the potential for many new application areas. Scientists predict that within the next twenty years neural interfaces will be designed that will not only increase the dynamic range of senses, but will also enhance memory and enable “cyberthink” - invisible communication with others and technology [Gee (2004)]. Already the foundations of this are being investigated, with direct nervous system to nervous system, augmented sensory function, and ‘internet ready’ implants being demonstrated (see e.g. [Gasson et al. (2005), Warwick et al. (2003, 2004a, 2004b, 2005)]).

 

Non-medical applications 

In less invasive procedures, human implantation of RFID devices has also been proposed for a variety of applications. In 1998, Professor Kevin Warwick of the Department of Cybernetics at the University of Reading, UK, became one of the first people to have such a device implanted. By being able to track and uniquely identify him, the departmental building was able to build a profile of his behaviour and customise the environment to his preferences, including adjusting light levels, starting his computer, and even brewing the coffee on his arrival. 
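
As a hypothetical illustration of this kind of building automation, the sketch below maps an RFID reader event to a stored user profile and derives the actions to trigger. The tag identifier, profile fields and actions are invented for illustration and are not drawn from the original installation.

PROFILES = {
    "TAG-0042": {"lights": 70, "start_pc": True, "brew_coffee": True},
}

def on_tag_read(tag_id):
    """Return the actions triggered when a known tag is read at the building entrance."""
    profile = PROFILES.get(tag_id)
    if profile is None:
        return []  # unknown tag: no customisation
    actions = ["set lights to {}%".format(profile["lights"])]
    if profile["start_pc"]:
        actions.append("start workstation")
    if profile["brew_coffee"]:
        actions.append("brew coffee")
    return actions

print(on_tag_read("TAG-0042"))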

 


Figure : Prof. Kevin Warwick has a 2cm long identifying implant (shown enlarged, right) surgically inserted into his arm

 

In other applications, some four years later, implanted identifying tags have been commercialised to essentially replace ‘medic alert’ bracelets and to relay medical details when linked with an online medical database. Other implanted devices have been used to allow the individual access to secure areas, and even to identify clubbers such that payment for drinks can be automatically debited from their account.

 

 
