Fundamental rights and emerging technologies

Simple Sensors

As described, sensors, and particularly remote sensors, can have an important impact on privacy and data protection where the persons concerned are not at all aware that information relating to them is being remotely perceived. In addition, the mere presence of sensors in public places, even if noticed, does not reveal important facts: who controls the sensors, which information about passers-by is ‘sensed’, by whom and for which purposes the data are used, and in particular whether the data are stored or immediately ‘forgotten’ once they are no longer needed for the particular purpose (articles 7 and 8).
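
To make these questions concrete, consider the following sketch (a purely hypothetical illustration of our own; all names and fields are assumptions, not drawn from any deployed system) of the minimal metadata a sensor reading would have to carry before a passer-by could answer them:

    # Hypothetical sketch: the metadata a 'transparent' sensor event would
    # need to carry so that the questions of articles 7 and 8 can be answered.
    from dataclasses import dataclass

    @dataclass
    class SensorEvent:
        controller: str   # who operates the sensor (usually unknown to the data subject)
        observed: str     # which information about the person is 'sensed'
        purpose: str      # for which purpose the data are used
        retain: bool      # stored, or 'forgotten' once no longer needed?

    def process(event: SensorEvent, storage: list) -> None:
        # Retain the reading only if the declared purpose requires storage;
        # otherwise it is discarded immediately ('forgotten').
        if event.retain:
            storage.append(event)

    log: list = []
    process(SensorEvent("city of X", "gait pattern", "crowd counting", retain=False), log)
    print(len(log))  # 0 - the reading was not stored

In practice, none of these fields is visible to the person being sensed, which is precisely the transparency deficit described above.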

UNESCO indicates that information derived from such sensors is very useful but often appropriated, even though it belongs to the public domain. Where information is obtained from sensors placed in the public domain, such as marketplaces, or from earth images captured by satellite, it should be readily available to anybody. It remains ambiguous how the considerable benefits of public-domain sensor information will be shared (article 11 - freedom of information) [Gutwirth (1993)].

How will the freedom of speech, and particularly the freedom of expression, respond to sensors that are programmed to detect abnormal, deviant behaviour and then to react correspondingly (starting to record, triggering an alarm, etc.), e.g. when someone has a strange voice, walks in the opposite direction, shouts across the street to greet a friend, or staggers? How does the freedom of expression, in a sensor-world that is almost committed to objectifying the environment as far as possible, relate to the freedom of speech, including body language (article 11 - freedom of expression)?
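
How such reaction on deviance works technically can be sketched in a few lines (a deliberately simplified, hypothetical example; the threshold and readings are our own assumptions): a reading that deviates statistically from the recent average triggers the response, regardless of whether the behaviour is perfectly lawful expression.

    # Hypothetical sketch of a sensor 'programmed to detect deviant behaviour':
    # any reading far from the running average triggers recording or an alarm.
    def monitor(readings, threshold=3.0):
        history = []
        for value in readings:
            if len(history) >= 10:
                mean = sum(history) / len(history)
                spread = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
                if spread > 0 and abs(value - mean) / spread > threshold:
                    yield ("start recording / trigger alarm", value)  # the 'reaction'
            history.append(value)

    # A shout of greeting ('120') against ordinary street noise:
    for action in monitor([60, 62, 59, 61, 60, 63, 58, 60, 61, 62, 120]):
        print(action)

Note that the code has no concept of intent or lawfulness: statistical abnormality alone is what it flags.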

Article 21 of the Charter explicitly prohibits “any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, (…), property, birth, disability, age or sexual orientation”. How will the technical possibility - indeed, the very functional design of sensors - to ‘sense’ sex, race, colour, genetic features, language, disability, age or other features, and subsequently to respond differently on the basis of such characteristics, affect the fundamental principle of non-discrimination? How, then, must discrimination be understood in a society where computers make automated decisions on the basis of such characteristics?
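
The risk is easiest to see in code. In the following entirely hypothetical sketch of our own, an automated decision branches directly on a characteristic listed in article 21; the differential treatment is not a side effect but the literal control flow of the program:

    # Entirely hypothetical sketch: an automated decision keyed to a
    # characteristic enumerated in article 21 of the Charter.
    PROTECTED = {"sex", "race", "colour", "genetic_features", "language",
                 "disability", "age", "sexual_orientation"}

    def respond(sensed: dict) -> str:
        # A door, advertisement or price that reacts differently per person:
        if sensed.get("age", 0) > 65:      # decision keyed to a protected trait
            return "alternative behaviour"
        return "default behaviour"

    def grounds_used(inputs: set) -> set:
        # Which protected grounds does the decision logic actually read?
        return inputs & PROTECTED

    print(respond({"age": 70}))            # 'alternative behaviour'
    print(grounds_used({"age", "gait"}))   # {'age'}

Whether such branching amounts to prohibited discrimination or to legitimate differentiation (assisting the elderly, say) is exactly the legal question raised above.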

 

Radio Frequency Identification

RFID is already being extensively discussed in law, science & technology, and society, particularly with regard to privacy (article 7). But its impact goes much further. What impact does a massive deployment of consumer products carrying mere passive RFID tags have on fundamental rights and freedoms, when such tags allow only the unique identification and location of those products? Problems of data protection arise first: what information is on the tag, when is the tag read, by whom and for which purposes, and to which other information is the information on the tag linked? Is it, incidentally, necessary that the information on an RFID tag fall under data protection law before one can enjoy the right to be informed about which information can be read from a product one possesses (article 8)?
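
What distinguishes an RFID tag from a barcode, and why it enables unique identification and linkage, can be sketched as follows (a simplified, hypothetical illustration; the field layout is our own assumption and real EPC schemes differ in detail):

    # Hypothetical sketch: a 96-bit EPC-style code names not just the product
    # type (as a barcode does) but the individual item, and every read event
    # can be logged and linked.
    from datetime import datetime

    def make_epc(manufacturer: int, product: int, serial: int) -> int:
        return (manufacturer << 60) | (product << 36) | serial  # one code per item

    read_log = []

    def reader_sees(epc: int, location: str) -> None:
        # The carrier of the item learns nothing; the reader learns which
        # exact item passed by, where and when - and can link the reads.
        read_log.append((epc, location, datetime.now()))

    epc = make_epc(manufacturer=1234, product=42, serial=7)
    reader_sees(epc, "shop entrance")
    reader_sees(epc, "metro station")
    print(len(read_log), "linkable sightings of the same individual item")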

There will be many questions with regard to consumer protection (article 38), such as whether companies may require that RFID tags remain attached to products as a condition of the product guarantee, thereby circumventing a possible right of the consumer to remove the tag once the product is bought.

What about environmental protection, if billions of products - many of which are meant to be thrown away after consumption - are equipped with small pieces of silicon (article 37)?

Are the right to security of the person, as granted by article 6 of the Charter, the right to a healthy work environment (article 31 para. 1) and the “high level of human health protection” (article 35) safeguarded when it is not a hundred percent certain whether radio waves - in the massive quantities that will surround us in an RFID-enabled AmI world - are harmless or harmful to us?

 

Brain-computer interfaces (neural signal processing)

Article 8 of the Charter provides for a fundamental right to the protection of personal data, stating that personal data may only be processed “on the basis of the consent of the person concerned or some other legitimate basis laid down by law”. Brain-computer interfacing (BCI) allows the processing of neural signals, and it is assumed that neural signals may indicate - even represent - thoughts. The Guardian reported in 2007 that “a team of world-leading neuroscientists has developed a powerful technique that allows them to look deep inside a person’s brain and read their intentions before they act”.
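
How such ‘reading of intentions’ is typically framed technically can be conveyed with a toy sketch (a deliberate simplification of our own, assuming nothing about the actual technique reported): the measured neural signal is classified against patterns recorded earlier, while the subject rehearsed known intentions.

    # Toy sketch (our own simplification): 'reading an intention' framed as
    # nearest-pattern classification of a neural feature vector. Real BCI
    # pipelines are far more complex.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Feature vectors recorded while the subject rehearsed known intentions:
    learned = {"add the numbers": [0.9, 0.1, 0.2],
               "subtract the numbers": [0.1, 0.8, 0.7]}

    def read_intention(signal):
        # The machine's 'guess' at a thought, before the subject acts:
        return min(learned, key=lambda label: distance(learned[label], signal))

    print(read_intention([0.85, 0.15, 0.25]))  # -> 'add the numbers'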

Under what conditions can neural signals be considered creative and specific enough to invoke intellectual-property rights (article 17 para. 2)? Can certain thoughts, when registered by a brain-computer interface, constitute a work that falls under copyright law? Or can a sequence of neuro-signals, for example one that invokes a happy, yellow feeling in the brain, be patented by the ‘thinker’ of those signals?

In our assumption that brain-computer interfacing evolves in such a way that neural signals can be detected and that these ‘data’ give various kinds of ‘information’, a wide range of questions concerning the acceptability of brain ‘reading’ arises. One question is whether processing of neural signals (personal data) without consent of the data subject (thus on the basis of another legitimate basis) can be lawful in any situation. If yes, what situation would that be and under which conditions? For example, can employers (such as schools afraid of hiring paedophiles, or intelligence services screening personnel for infiltrators), insurance providers, or the police (lie detection) ever be allowed to compulsorily process brain signals? Can any individual - convicted or not - ever be obliged to have his cortex connected to a machine without consent? These questions are not only relevant in light of Article 8 (data protection), but also – and more so – in light of Articles 7 (personal life), 3 para. 1 (physical and mental integrity), 1 (human dignity), and 10 (the freedom of thought, conscience and religion).

The freedom of thought, conscience and religion, human dignity, and mental integrity are not only at stake through brain ‘reading’ but also, even more poignantly, through the reverse process, brain ‘writing’, i.e., inputting neuro-signals into the brain to trigger certain behaviour, thoughts or feelings. Should law enforcement be allowed to send a ‘stop leg motion’ signal to the brain of a convict who tries to escape while on probationary leave? This would be an extreme infringement of the human rights mentioned. More complex is the question whether someone could voluntarily agree to have his brain altered through neuro-signal processing. This is readily accepted for patients with Parkinson’s disease, since neuro-stimulation can suppress a substantial part of its symptoms. But how do we feel about persons having their brains altered in the manner of A Clockwork Orange, to eradicate paedophilic or aggressive inclinations? The question arises to what extent this can really be voluntary (e.g., if someone faces a choice between five years’ imprisonment and brain treatment) and whether consent is ever given in an informed way (it being very hard to oversee the consequences). Also, given that this type of brain stimulation may well change the identity of the person (there are indications of this with current treatments of Parkinson patients), should society allow it to happen at all, given the primacy of human dignity (article 1) and the fact that the sense of self (ipse identity) resides in the brain?

If neural signal processing generates data that can be considered to constitute certain thoughts, can someone ever infringe criminal law purely on the basis of thoughts? Can someone ever be convicted merely for ‘thinking’ something, if such thinking can be traced by neural signal processing? Currently, criminal law only penalises activities, even if more and more acts are penalised that are not criminal in themselves but prepare the way for crimes: conspiracy, preparatory activities, misuse of devices (article 6 Cybercrime Convention), and jihad recruitment are examples. A next step, if neurotechnology allows the discovery of intentions within the brain, could be to penalise intending to commit a crime. Also, just as current law penalises certain types of offensive material (child pornography, racist statements, and in certain countries holocaust denial), one might consider whether criminal law should in future be extended to include ‘criminal thoughts’. Should thinking of raping a child, or thinking that the holocaust never happened, be punishable? And if so, should the remedy be to ‘implant’ in the brain the conviction that the holocaust actually did happen? In short, should criminal law be extended with punishable thoughts, with all its implications for human dignity, freedom of thought, and mental integrity? This, we think, is one of the most important questions for the 22nd century.

 

ICT Implants

Not all ICT implants will a priori endanger human rights, but ethical issues do surface when ICT implants can be read out remotely, or when such implants are used for goals that escape consent or control by the person concerned.

Article 8 provides for a fundamental right to the protection of personal data, stating that personal data may only be processed “on the basis of the consent of the person concerned or some other legitimate basis laid down by law”. Body-computer interfacing, and particularly implants, allow the processing of body traits, and it is assumed that body signals may indicate - even predict - emotions and behaviour. The Observer reported in 2002 that the British Government was considering implanting tags in convicted paedophiles that could monitor heart rate and blood pressure, “alerting staff to the possibility that another attack was imminent”. Assuming that body-computer interfacing through implants evolves to the point that concrete information can be derived from body signals, the question is whether any court interpreting this article may conclude that processing of body signals (personal data) without the consent of the data subject (thus on the basis of another legitimate basis) can be lawful in any situation. If yes, in what situation would that be, and under which circumstances (article 8)?
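
The monitoring logic described in the report is technically trivial, which makes the legal problem all the sharper. In the hypothetical sketch below (the thresholds are our own arbitrary assumptions), the alert fires on physiological arousal alone, not on any act:

    # Hypothetical sketch of the alerting logic described in the Observer
    # report: an implant streams heart rate and blood pressure, and a simple
    # threshold rule 'alerts staff'. Note that the rule fires on arousal,
    # not on any unlawful act.
    def imminent_attack_alert(heart_rate: int, systolic_bp: int) -> bool:
        return heart_rate > 140 and systolic_bp > 160  # arbitrary thresholds

    for heart_rate, systolic_bp in [(72, 120), (150, 170)]:
        if imminent_attack_alert(heart_rate, systolic_bp):
            print("ALERT: staff notified on the basis of body signals alone")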

This particularly relates to the presumption of innocence and the right of defence in a fair trial: what remains of the presumption of innocence if technology intervenes a priori, before an unlawful act is committed? What if I am preventively but automatically arrested when an intelligent camera detects my aggressive behaviour? The question arising from Philip K. Dick’s story filmed as ‘Minority Report’ - where a police department called ‘pre-crime’ apprehends (presumed would-be) criminals based on foreknowledge provided by ‘pre-cogs’ before the crime is even committed - suddenly seems not irrelevant at all (articles 47 and 48).

To what extent can having an ICT chip implanted become an obligation, de facto or de jure? What if participation in the information society necessarily supposes the implantation of an ICT device to gain access to the necessities of life, including food and shelter [Radoykov et al. (2007)]? What if merely entering a city or enjoying emergency health care depends on having an implanted chip carrying, respectively, an identity number or a DNA fingerprint?

If the implantation of a tag is legally or even contractually imposed as a condition of access to a product or service (particularly where there is a de facto monopoly, such as an emergency service), is this not an infringement of the right to human dignity (article 1 - people may find it horrible to have silicon in their body), the right to physical and mental integrity (article 3 - with regard to bodily and brain implants respectively), the prohibition of degrading treatment (article 4 - leading to a distinction between humans and cyborgs, if cyborgs are considered of a higher grade because they factually and even legally enjoy more rights), the right to liberty (article 6 - is someone free when he must wear an unremovable chip?), the right to privacy (article 7), the right to protection of personal data (article 8), the freedom of thought, conscience and religion (article 10 - brain implants may impede the freedom of thought, and unlawful thoughts may be detected), the freedom of expression (article 11 - the ICT implant determines an expression that is not controllable by the subject), the rights of the child (article 24 - depending on the age at which the chip is implanted and whether the parents can decide for the child) and the rights of the elderly (article 25 - depending on whether children can decide for their parents)?

Many questions with regard to implants have something in common. ICT implants make it possible to store and process information inside, outside, towards and coming from the human body (and brain). The question is whether it will still make a difference whether data are present (stored or processed) inside or outside the body. If law enforcement agencies today already have access to data processed in electronic communications, not only between people (phone calls) but also between people and objects (websites, browsing information), will they have the right to access the same data when they are stored inside the human body? If law enforcement agencies have access to the hard disk of someone who browsed child pornography websites, will they also have access to that hard disk when it is implanted in the person? Should an ICT implant be considered to fall under the right to property (article 17 para. 1) or under the right to physical integrity (article 3 para. 1)? If the merger between body and ICT takes place in the sense that the internal body and/or brain are connected to external computers and processors, does the difference between ‘public’ and ‘private’ - which is crucial for privacy - still make sense?

The European Group on Ethics in Science and New Technologies (EGE) has already produced an Opinion for the European Commission [EGE (2005)], entitled “Ethical Aspects of ICT Implants in the Human Body”, in which questions such as the above are addressed (with special emphasis on the precautionary principle).

 

Peer-to-Peer network architectures

Article 12 provides for the freedom of assembly and of association. Can the use of peer-to-peer technology be considered ‘assembly’, and can a prohibition on using a peer-to-peer network or system ever be considered an infringement of the right to freedom of assembly? How will this right to assemble and associate apply to virtual assemblies and virtual associations? Does the Data Retention Directive not infringe this freedom?
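
Why a peer-to-peer network resembles an assembly is easier to see in a schematic sketch (our own illustration, implying no specific protocol): every participant connects directly to other participants, and the ‘assembly’ exists only as the union of those links, with no central operator who could be ordered to dissolve it.

    # Schematic sketch (no specific protocol implied): in a peer-to-peer
    # overlay each node holds only its own list of neighbours; a message
    # spreads hop by hop through direct, participant-to-participant links.
    peers = {"alice": {"bob", "carol"},
             "bob":   {"alice", "dave"},
             "carol": {"alice"},
             "dave":  {"bob"}}

    def flood(start: str) -> set:
        seen, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            for neighbour in peers[node] - seen:
                seen.add(neighbour)
                frontier.append(neighbour)
        return seen

    print(flood("alice"))  # reaches every peer without any central server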

The same applies with regard to articles 10 and 11 (freedom of thought, conscience, religion, expression and information). How can the freedom of thought, expression and information persist when such activities take place in cyberspace, a place where every activity seems to be monitored and an increasing amount of data is stored? Will there be places like churches or bathrooms on the Internet, where someone can enjoy the same freedom as within the walls of his own room?

 

Second Life and virtual worlds

Many questions can be asked in relation to the emergence of virtual worlds such as Second Life. For example: are avatar data personal data of the person behind the avatar (article 8)? Does the right to physical and mental integrity extend to avatar integrity (article 3)? Who holds intellectual-property rights over the avatar and the virtual things created by the avatar (article 17 para. 2)? How is the freedom of assembly and association given shape in Second Life (article 12)? Is expropriation of an avatar an inhuman or degrading treatment of the person behind the avatar (articles 4 and 19)? And is expropriation of an object (such as an expensive island or dragon sword) in Second Life a violation of the right to property (article 17 para. 1)? What will be the impact on people’s behaviour and sense of identity of events such as news reports of the Belgian police starting official investigations (including police patrols) in Second Life following the virtual rape of a female avatar?

 

 
