
 

Transparency of the processing of personal data

The requirement of transparency of data processing relates to the possibility of managing the history of one’s personal data and of accessing the personal data processed by data controllers, both to be made possible via machine-to-machine (M2M) communication. 

European data-protection legislation, with its focus on the protection of personal data, provides a series of rights for individual citizens and obligations for data controllers that allow citizens to access their personal data and to learn how these data are processed. Notably, data controllers have an obligation to inform data subjects about which data are being processed, the identity of the data controller, the purpose of the processing, and any third parties to whom data are sold or transferred. Even today it is nearly impossible for citizens to check compliance with such obligations; in an AmI environment, the processing of data will become ever more invisible. Without intelligent agents doing the job for us, keeping up with data exchanges seems an illusion. The problem thus seems to be that even though the rights and obligations exist, their realisation is practically impossible, especially in an AmI environment. We refer to the legal analyses of FIDIS deliverables 7.3 and 7.7 for similar conclusions. 

As discussed in sections 5.2 and 5.3, several technological tools have been developed to exercise the right to transparency of data processing, notably P3P and history management, while DRM and Trusted Computing could add alternative ways to track personal data and the way they are being processed. Such tools would have to be embedded in a personal device, like a PDA, which enables users to interact with the environment in an automated way while keeping substantial control over data flows, either by preventing unauthorised data processing (ex ante; a minimal sketch of this check follows the list below) or by making data processing transparent to allow for redress procedures (ex post). However, two major problems arise:

  1. these transparency-enhancing technologies are not fully developed yet, let alone widely used; 

  2. individual citizens cannot be sure that these tools actually achieve the transparency they claim to provide.  
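
To make the ex ante side of these tools concrete, the following is a minimal sketch of how a PDA might match a user’s pre-installed privacy preferences against a service’s declared data-handling policy before releasing any data. It is loosely inspired by P3P-style policy matching; all names (ServicePolicy, PrivacyPreferences, permits_disclosure) are our own illustrative assumptions, not an existing API.

    # Hedged sketch: ex-ante policy matching on a PDA, loosely inspired by P3P.
    from dataclasses import dataclass

    @dataclass
    class ServicePolicy:
        """A service's declared data-handling policy, as received M2M."""
        controller: str          # identity of the data controller
        purposes: set            # declared processing purposes
        third_parties: set      # parties the data may be transferred to
        retention_days: int      # declared retention period

    @dataclass
    class PrivacyPreferences:
        """The user's pre-installed preferences, enforced by the PDA."""
        allowed_purposes: set
        blocked_third_parties: set
        max_retention_days: int

    def permits_disclosure(prefs: PrivacyPreferences, policy: ServicePolicy) -> bool:
        """Ex-ante check: release data only if the declared policy stays
        within the user's preferences on purpose, transfer, and retention."""
        if not policy.purposes <= prefs.allowed_purposes:
            return False
        if policy.third_parties & prefs.blocked_third_parties:
            return False
        return policy.retention_days <= prefs.max_retention_days

    prefs = PrivacyPreferences({"service-delivery"}, {"ad-broker"}, 30)
    policy = ServicePolicy("cafe.example", {"service-delivery", "marketing"}, set(), 14)
    print(permits_disclosure(prefs, policy))   # False: 'marketing' exceeds the preferences

Note that such a check only verifies the declared policy; whether the service actually behaves accordingly is precisely the transparency problem raised above.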

Purpose specification and use limitation

The transparency discussed in the previous section should enable one’s PDA to check – M2M – which purposes are specified, and whether the principle of use limitation has been complied with in light of these purposes. 
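
To illustrate what such an M2M check might look like, the following sketch assumes (counterfactually, as yet) that service providers report their processing operations in a machine-readable form; the event format is entirely hypothetical.

    # Hedged sketch: ex-post use-limitation check over reported processing events.
    # The purposes specified at collection time, per dataset:
    specified_purposes = {"loyalty-card": {"billing", "service-delivery"}}

    # Processing events as they might be reported M2M (hypothetical format):
    reported_events = [
        {"dataset": "loyalty-card", "operation": "aggregate", "purpose": "billing"},
        {"dataset": "loyalty-card", "operation": "sell", "purpose": "marketing"},
    ]

    def use_limitation_violations(events, specified):
        """Flag events whose declared purpose was never specified
        to the data subject for that dataset."""
        return [e for e in events
                if e["purpose"] not in specified.get(e["dataset"], set())]

    for event in use_limitation_violations(reported_events, specified_purposes):
        print("possible violation:", event)   # flags the 'sell' event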

We refer to the legal and technological analysis summarised in the previous section to conclude that even if the legal obligation to specify the purpose and to restrict data processing to the purpose specified is in force (cf. section 4.3.1.2), it is nearly impossible to check whether, and to what extent, a given service provider complies with these obligations (see especially section 5.3). 

We add that tools for history management that keep track of personal data after their disclosure, as discussed in section 5.2.2.2, are relatively new. It is unclear to what extent such tools can keep up with unauthorised selling of data to third parties and to parties further down the chain. It is also unclear what happens if the data are anonymised and used for group profiling.  
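
The following sketch indicates how limited such post-disclosure tracking is: the history only contains transfers of which the PDA happens to receive notice, so unauthorised sales or anonymised reuse would simply never appear in it. The notice mechanism itself is an assumption.

    # Hedged sketch: an append-only disclosure history built from transfer notices.
    from datetime import datetime, timezone

    class DisclosureHistory:
        def __init__(self):
            self._log = []   # append-only: (timestamp, holder, recipient)

        def record_transfer(self, holder: str, recipient: str) -> None:
            """Called when the PDA receives notice of an onward transfer."""
            self._log.append((datetime.now(timezone.utc), holder, recipient))

        def holders(self) -> set:
            """Every party known to have received the data, directly or onward."""
            known = set()
            for _, holder, recipient in self._log:
                known.update((holder, recipient))
            return known

    history = DisclosureHistory()
    history.record_transfer("shop.example", "shipping.example")
    history.record_transfer("shipping.example", "analytics.example")
    print(history.holders())   # only transfers that were actually notified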

A device that communicates M2M with the AmI environment to check whether data are in fact processed in accordance with the specified purpose has not yet been developed (see again section 5.3). 

Consent

The requirement of consent has been described as enabling one’s machine-proxy (the PDA that serves as a proxy when negotiating consent) to negotiate, for example, the supply and processing of personal data according to one’s personal preferences, while taking into account the mandatory aspects of data-protection legislation. 

The legal chapter argues that the realisation of informed and explicit consent is highly problematic (cf. section 4.3.1.2): individual citizens have no idea about the consequences of the processing of their data (raising doubts as to the meaning of ‘informed’ in this context), while consent is often requested automatically whenever one wants to use a specific service (raising doubts as to the meaning of ‘explicit’ in this context). Repeated requests for consent would in fact challenge a core feature of the vision of AmI: its emphasis on invisible, ubiquitous computing. 

The technological chapter addresses the issue of consent indirectly, by referring to identity management systems (IMS) that focus on user control (cf. section 5.2). Such an IMS would consist of a PDA that follows previously installed privacy preferences, executing a user’s privacy policy. However, if this device has no information as to how the data will match the group profiles that may be applied, it cannot provide any kind of informed consent. Such an IMS seems to build on a static conception of privacy, while an AmI environment would require real-time adaptation of a person’s privacy policy to anticipate potential adverse effects. Perhaps Nissenbaum’s concept of ‘contextual integrity’ could provide a more responsive mechanism to negotiate consent (cf. section 3.5.2). The IPTS concept of digital territories could make abstract notions like consent operational: the ‘bubble’ determines which consent is given, maintains a person’s dynamic borders by means of machine-readable markers, and is capable of building bridges between contexts (cf. section 3.5.3).  
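
A rough sketch of how context-sensitive consent might differ from a static policy follows; the contexts, data categories, and rule table are illustrative assumptions in the spirit of contextual integrity, not a specification of either proposal.

    # Hedged sketch: context-sensitive consent instead of one static policy.
    consent_rules = {
        # (context, data_category) -> consent granted automatically?
        ("medical", "health-data"): True,
        ("retail", "health-data"): False,
        ("retail", "payment-data"): True,
    }

    def negotiate_consent(context: str, data_category: str) -> str:
        """Grant or refuse according to context-bound rules; fall back to
        asking the user, so consent stays explicit for unanticipated flows."""
        decision = consent_rules.get((context, data_category))
        if decision is True:
            return "grant"
        if decision is False:
            return "refuse"
        return "ask-user"

    print(negotiate_consent("retail", "health-data"))    # refuse: wrong context
    print(negotiate_consent("street", "location-data"))  # ask-user: no rule yet

The point of the sketch is that the same data category receives a different answer in different contexts, which a single static privacy policy cannot express.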

Data quality and participation

This has been described as the capability of one’s machine-proxy to match data stored in databases with one’s accurate personal data, and its capability to require adjustments if data are not, or are no longer, correct.

Without transparency, data quality and participation are empty concepts (see the section on transparency above). A PDA should be able to track and trace one’s personal data and be capable of contesting data it considers either incorrect or processed in violation of mandatory data-protection legislation.
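
A minimal sketch of the participation side follows, assuming the stored record can be retrieved M2M (which presupposes the transparency discussed above); the field names and the request format are illustrative.

    # Hedged sketch: generate rectification requests from a record comparison.
    def rectification_requests(stored: dict, authoritative: dict) -> list:
        """Compare a controller's stored record with the user's own
        authoritative data and list every field needing correction."""
        requests = []
        for attr, correct_value in authoritative.items():
            if stored.get(attr) != correct_value:
                requests.append({
                    "field": attr,
                    "stored": stored.get(attr),
                    "correct": correct_value,
                })
        return requests

    stored = {"address": "Oude Gracht 1", "phone": "555-0100"}
    authoritative = {"address": "Nieuwe Gracht 2", "phone": "555-0100"}
    print(rectification_requests(stored, authoritative))
    # -> [{'field': 'address', 'stored': 'Oude Gracht 1', 'correct': 'Nieuwe Gracht 2'}]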

Accountability of the data controller

The requirement of accountability of the data controller implies that at all times, one’s machine-proxy should be capable of identifying the data controller who reads, collects, stores, or otherwise processes data, including all other parties that have access to these data. This could be done pseudonymously, as long as the data controller remains identifiable in case of data-protection violations.

We again refer to the section on transparency above, because to hold someone accountable, one must be able to identify the culprit; otherwise, the legal right may be in force but is hardly enforceable. A PDA should refuse access to any kind of personal data if it cannot verify the identity of the data controller. Again, this could happen pseudonymously, provided the user can rely on a foolproof procedure for lifting the pseudonym in case anything goes wrong later on. In the case of AmI, this implies that untraceable data controllers should have no access to personal data whatsoever. In view of the public nature of the need to identify data controllers, it should not be left to individual choice whether a data controller can be traced: such identification should be mandatory.
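
The rule that untraceable controllers get no data at all could be expressed as follows; both verification calls are placeholders for whatever credential and escrow infrastructure would have to exist.

    # Hedged sketch: release data only to identifiable or escrowed-pseudonymous controllers.
    from typing import Optional

    def verified_identity(controller_id: str) -> bool:
        # Placeholder: check a credential against a trust infrastructure.
        return controller_id in {"cafe.example", "transit.example"}

    def escrow_can_lift(pseudonym: str) -> bool:
        # Placeholder: confirm a trusted third party can lift the pseudonym
        # in case of a data-protection violation.
        return pseudonym.startswith("escrowed:")

    def may_release_data(controller_id: Optional[str], pseudonym: Optional[str]) -> bool:
        if controller_id is not None and verified_identity(controller_id):
            return True
        if pseudonym is not None and escrow_can_lift(pseudonym):
            return True
        return False   # untraceable controller: no access whatsoever

    print(may_release_data(None, "escrowed:7f3a"))   # True: pseudonym can be lifted
    print(may_release_data(None, "anon:42"))         # False: untraceable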

Transparency of (group) profiles used to categorise individuals

Because applying (group) profiles to individuals can have significant consequences for the individual, it should be made transparent that a profile is being applied, and also which (type of) profile. Simply put, this means that one’s machine-proxy should be able to monitor the environment’s anticipating actions (which are based on profiles), so that its user can decide on the spot whether she wants to:

  1. meet the environment’s action (for example, go into a café if the environment suggests she will find it enjoyable); 

  2. change her behaviour (for example, walk away from a shop window, so that the environment no longer assumes she is shopping); or 

  3. complain because the profiling is based on unjust criteria (for example, press a button on a coffee machine that refuses to serve her caffeinated coffee, to check on the machine’s profiling statement whether the refusal was based on her shaking hands – which in her case signals a genetic disorder rather than overagitation – and, if so, file a complaint through her PDA to the coffee machine’s operator). 

Of course, AmI users should not have to make such decisions all of the time: their PDA should learn in which cases they go along with the environment and in which cases they want to be informed of profiling in order to make a case-specific decision.
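
The learning behaviour just described might look, very schematically, like the following; it assumes a profiling notice the environment does not currently send, and the ‘learning’ is a deliberately naive tally of past decisions.

    # Hedged sketch: a PDA-side handler for (hypothetical) profiling notices.
    from collections import Counter

    class ProfileNoticeHandler:
        def __init__(self):
            self._history = Counter()   # (profile_type, decision) -> count

        def handle(self, profile_type: str, ask_user) -> str:
            """Follow a clear habit automatically; otherwise inform the
            user and record her case-specific decision."""
            accepted = self._history[(profile_type, "accept")]
            contested = self._history[(profile_type, "contest")]
            if accepted >= 3 and contested == 0:
                return "accept"   # an established habit: go along silently
            decision = ask_user(profile_type)   # 'accept' or 'contest'
            self._history[(profile_type, decision)] += 1
            return decision

    handler = ProfileNoticeHandler()
    print(handler.handle("shopper-profile", lambda p: "contest"))   # asks the user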

The present legal and technological framework does not provide for anything remotely like this. The focus is on hiding and keeping track of the personal data of a data subject, not on tracking the group profiles that are being inferred from other people’s data. As has been extensively argued in FIDIS deliverable 7.7, profiling is the enabling technology of AmI, and its impact on the lives of individual citizens will be far more profound than the impact of data collection per se. Profiling, or pattern recognition, will increase the knowledge asymmetry between data controllers and data subjects as long as data subjects have no means to access and understand the profiles. At this moment, we lack both the legal right to access profiles and to effectively contest their application, and the technological means to achieve such access; moreover, to be able to contest the application of profiles, we need to develop some kind of understanding of the meaning of the patterns they present.

 
