In FIDIS deliverable 7.7 we investigated the relationship between RFID systems, Ambient Intelligence (AmI) and profiling from a technological, legal and sociological perspective. Together with, for example, sensor technologies, RFID is clearly one of the enabling technologies of AmI: it creates wireless embedded networks that constitute multi-agent systems (MASs), which form the technological infrastructure for AmI.
From a technological perspective we found that the ‘always on’ nature of the RFID tags mainly in use today introduces new types of problems, especially in combination with the invisibility of the tags. Once a ubiquitous reader infrastructure (fixed and mobile readers) with seamless network connectivity is in place, it will allow real-time automated data collection and processing. Taking into account that at this moment large parts of RFID systems are easily accessible, one can foresee a host of security and privacy problems, as listed in this chapter.
As regards the legal perspective (and as also discussed above in Section ), D7.7 also detected a major problem in relation to the applicability of the Data Protection Directive 95/46/EC, because it is as yet unclear whether data collected from a person without identifying that person in terms of name, address etc. qualify as personal data. As Dötzer points out, there is a substantial difference, relevant here, between ‘re-recognition’ and ‘identification’. He defines re-recognition as ‘keeping identifiers and relating them to other received identifiers’, and identification as ‘correlating the identifier with a real-world identity’. If a person is wearing an RFID-tagged pair of shoes, a shop may store this data as an identifier, which is re-recognised every time this person enters the shop (as noted also in the comment to scenario S4). As long as the shop does not link this identifier to the real-world identity of this person (name, address etc.), this person is not identifiable in the traditional sense of the term. That could mean that the data on the tag are not personal data in the sense of the Directive, unless and until the data are linked in a way that makes the person identifiable (cf. section ). In section 3.3 of this document a very broad view is taken of what constitutes personal data, encompassing both re-recognition and identification.
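Dötzer’s distinction can be made concrete in a few lines of code. The following is a minimal illustrative sketch (not taken from D7.7; the class, tag UID and customer name are invented): the shop re-recognises a tag identifier on every visit, and only the separate, optional linking step turns that identifier into an identified person.

```python
# Illustrative sketch of 're-recognition' vs 'identification' (per Doetzer).
# All names and values are invented for illustration.

class ShopReader:
    """Logs visits per RFID tag UID without knowing who carries the tag."""

    def __init__(self):
        self.visits = {}      # tag UID -> visit count (re-recognition)
        self.identities = {}  # tag UID -> real-world identity, if ever linked

    def observe(self, tag_uid):
        # Re-recognition: the identifier is stored and matched against
        # previously received identifiers; no name or address is attached.
        self.visits[tag_uid] = self.visits.get(tag_uid, 0) + 1
        return self.visits[tag_uid]

    def link_identity(self, tag_uid, name):
        # Identification: the identifier is correlated with a real-world
        # identity (e.g. via a loyalty-card payment). Only at this point
        # are the stored data unambiguously personal data in the
        # traditional sense.
        self.identities[tag_uid] = name


reader = ShopReader()
reader.observe("E200-3412")   # first visit: new identifier stored
reader.observe("E200-3412")   # second visit: re-recognised
assert "E200-3412" not in reader.identities  # re-recognised, not identified
reader.link_identity("E200-3412", "A. Customer")
```

The point of the sketch is that the data structure for re-recognition is complete without the second table; identification is an additional act of linkage, which is exactly where the Directive’s notion of identifiability becomes contested.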
It is important to note that even if the Directive were to apply to such re-recognition, in practice this will not have many consequences, because (1) the invisibility of RFID tags implies that people will not be aware of leaking personal data or of data being remotely collected; (2) it seems practically impossible to require their consent or to inform them of the processing of the data, as this would require real-time machine-to-machine (M2M) communication between the RFID system and the PDA of the person concerned, plus the availability of a human-machine interface (HMI) on the PDA that makes the process comprehensible; and (3) this still does not imply access to profiles, except where art. 15 of the Directive is applicable.
This last point, which was already raised in the legal section of FIDIS deliverable 7.3 (on profiling and AmI), concerns the fact that data protection legislation is mostly focused on the protection of personal data, while the more serious threats to privacy and several other constitutional values (equality and fair treatment, due process) derive from the application of (group or personalised) profiles. The legal status of profiles generated by means of data mining is as yet unclear. Compared to data (which are either noise or information, depending on the perspective, purpose and context of the user), profiles are a kind of knowledge, because they denote relevant patterns discovered in databases. Profiling is what has been called knowledge discovery in databases (KDD, cf. ; and FIDIS deliverable 7.2). Precisely because profiles constitute knowledge, not mere data, their application can have a serious impact on citizens, while they are mostly unaware of being categorised. However, this type of knowledge may be protected as a trade secret or as part of a database that is protected by means of an intellectual property right. Only when arts. 15 and 12 of the Directive apply may some form of protection against profiling be available. Art. 15 reads:
Article 15
Automated individual decisions
1. Member States shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.
2. Subject to the other Articles of this Directive, Member States shall provide that a person may be subjected to a decision of the kind referred to in paragraph 1 if that decision:
(a) is taken in the course of the entering into or performance of a contract, provided the request for the entering into or the performance of the contract, lodged by the data subject, has been satisfied or that there are suitable measures to safeguard his legitimate interests, such as arrangements allowing him to put his point of view; or
(b) is authorized by a law which also lays down measures to safeguard the data subject’s legitimate interests.
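The distinction drawn above between data and profiles-as-knowledge can be illustrated with a deliberately crude sketch (the records, group labels and items are invented, not taken from any FIDIS deliverable): a pattern is mined from one set of records and then applied to a new individual who contributed no data at all, which is why profiling escapes a purely data-centred protection regime.

```python
# Illustrative sketch: a 'profile' as knowledge mined from data (KDD).
# All records and categories are invented for illustration.
from collections import Counter

# Hypothetical transaction records (who bought what).
records = [
    {"age_group": "18-25", "item": "energy drink"},
    {"age_group": "18-25", "item": "energy drink"},
    {"age_group": "18-25", "item": "salad"},
    {"age_group": "60+",   "item": "salad"},
]

def mine_group_profile(records, group):
    """Most frequent purchase within a group: a (crude) group profile."""
    items = Counter(r["item"] for r in records if r["age_group"] == group)
    return items.most_common(1)[0][0]

profile = mine_group_profile(records, "18-25")
# Applying the profile: a new customer in the 18-25 group may be treated
# as an energy-drink buyer without any data about her individual conduct;
# the mined knowledge, not her own data, drives the decision.
```

The sketch shows why the data subject is "mostly unaware of being categorised": the profile is derived from other people's data, so access rights to one's own data never reach it.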
In the case of AmI, decisions will have to be automated to allow smooth, real-time adaptation of the environment to inferred preferences. Insofar as this implies transactions and contracts with legal consequences, or significantly affects a person, he has the right not to be subject to such ‘decisions’. However, as Bygrave has noted, this does not necessarily imply that such decisions are unlawful. It merely implies that a person can exercise his right and demand that such decisions are not taken in an automated fashion. If people do not exercise the right, the processing of data is lawful. For a user of a smart environment that runs on autonomic and proactive computing it makes no sense to start exercising this right. In that case, art. 12 at least provides a transparency right:
Article 12
Right of access
Member States shall guarantee every data subject the right to obtain from the controller:
- knowledge of the logic involved in any automatic processing of data concerning him at least in the case of the automated decisions referred to in Article 15 (1);
However, the issue raised by Bygrave applies here as well. If a person chooses not to exercise the right, this logic need not be disclosed. We add that to exercise the right a person would have to be capable of understanding algorithmic processing of data, which is not a standard part of our education. Again, practically speaking, in the case of the automated processing of data derived from RFID systems, due to the invisibility and the ‘always-on’ nature of the tags, most people will not be aware at which point their data are being collected and they will probably lack the time and the resources to exercise a right to knowledge of the logic involved.
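What "knowledge of the logic involved" could mean in practice can be sketched as follows. This is an invented illustration (the rule names, weights and threshold are assumptions, not drawn from any real system): an automated decision is produced together with a machine-readable trace of which rules fired, which is the kind of output a transparency tool could translate into understandable information for the data subject.

```python
# Illustrative sketch: making 'the logic involved' in an automated
# decision (cf. Art. 12) inspectable. Rules and weights are invented.

RULES = [
    ("income below 20000", lambda d: d["income"] < 20000, -2),
    ("prior default",      lambda d: d["prior_default"],  -3),
    ("age under 21",       lambda d: d["age"] < 21,       -1),
]

def decide(data, threshold=-2):
    """Automated credit decision that returns its own explanation."""
    fired = [(name, weight) for name, test, weight in RULES if test(data)]
    score = sum(weight for _, weight in fired)
    decision = "refuse" if score <= threshold else "grant"
    # The trace of fired rules is what a transparency-enhancing tool
    # could disclose to the data subject, instead of a bare outcome.
    return decision, fired


decision, trace = decide({"income": 15000, "prior_default": False, "age": 30})
```

The design point is that the explanation is computed alongside the decision rather than reconstructed afterwards; only then can a right of access to "the logic involved" be serviced without reverse-engineering the system.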
This has led the authors of D7.7 to conclude that to preserve and enhance constitutionally protected values like privacy, fairness, equality and due process we may need to develop transparency enhancing tools (TETs), integrating legal rights of access to data as well as profiles with the technological devices that can actually provide such access and translate the findings into understandable information. This should at least decrease the asymmetry between profilers and profiled, following a principle of minimisation of knowledge asymmetry instead of focusing all efforts on data minimisation (the main objective of PETs). The tasks of Ambient Law (AmL) would thus be (1) to integrate the written norms of data protection into the technological infrastructures that are being put in place at this very moment and (2) to create a legal right of access to profiles that may impact one’s life, while also integrating this right into the technological infrastructure. Integrating norms or values into the design of technological devices or infrastructures has been the core focus of constructive technology assessment and of the work of scholars and policy advisors like Flanagan, Howe et al. on value sensitive design.