Assessment of PETs and TETs: overview, effectiveness and lacunae

Introduction

AmI space, as described in FIDIS deliverables D7.3 and D7.7, implies real-time monitoring, proactive computing, and autonomic adaptation of the environment. To facilitate this, traditional AmI scenarios rely on large, centrally available stores of personal data, which retain all available sensor data and mine them dynamically to support the adaptation of the environment. In most cases this happens unrecognised and unobserved by the data subject and under the control of one or more third parties. In these scenarios, data maximisation is an important quality element: the more data (relevant for a proper adaptation) the backend systems can mine, the better the quality of the AmI systems can be. 
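
By way of illustration only, the following minimal Python sketch (all class and attribute names are invented and not taken from the deliverable) mirrors the data-maximisation pattern just described: every sensor reading is retained in a central store and can later be mined for any purpose.

```python
# Hypothetical sketch of the data-maximisation pattern in an AmI backend:
# every sensor reading is kept centrally and mined on demand.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class SensorEvent:
    subject_id: str   # links the reading to an identifiable person
    sensor: str       # e.g. "location", "temperature", "heart_rate"
    value: float
    timestamp: float


@dataclass
class CentralStore:
    events: list[SensorEvent] = field(default_factory=list)

    def ingest(self, event: SensorEvent) -> None:
        # Nothing is filtered or deleted: storage is cheap, so all data are kept.
        self.events.append(event)

    def mine(self, predicate: Callable[[SensorEvent], bool]) -> list[SensorEvent]:
        # Any later purpose can query the full history, regardless of the
        # purpose for which the data were originally collected.
        return [e for e in self.events if predicate(e)]


store = CentralStore()
store.ingest(SensorEvent("alice", "location", 52.52, timestamp=1700000000.0))
# A different purpose reuses the same data without the subject noticing:
movements = store.mine(lambda e: e.subject_id == "alice" and e.sensor == "location")
```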

As already outlined by Čas in 2005 (Čas 2005) and analysed in the previous chapter, these scenarios fundamentally run against the principles of data protection, especially the requirement of reliable legal grounds, the data minimisation principle, the transparency principle, and in many cases the purpose-binding principle as well. Čas argues that as data storage becomes very cheap, there is no economic incentive to delete data. He further lists a number of examples in which data collected and stored for one purpose in the AmI environment may well be reused for a different purpose very cost-efficiently. He concludes that the technical functional principles of AmI, together with the economic driving forces, will lead to an increasing information asymmetry between data subjects and operators of AmI environments.  

One of the curious characteristics of Ambient Law, as discussed in chapter 2, is that the separation between the written law and its enforcement, which is a hallmark of modern law, can no longer be taken for granted. If legal rules are inscribed or embedded in computer code, their inscription may rule out violation of the rules (which is why safeguarding the contestability of such rules is one of the important issues in the design of AmL). Over the past years, a number of concepts for Privacy Enhancing Technologies (PETs) and Transparency Enhancing Technologies (TETs) have been developed or discussed as potential future solutions in the context of AmI. They might develop into interesting examples of AmL, in so far as their implementation becomes part of the legislators’ ways of enacting privacy regulation. Taking the definitions used in FIDIS deliverable D7.7, we have:

PETs (privacy enhancing technologies) are defined as “a coherent system of ICT measures that protects privacy […] by eliminating or reducing personal data or by preventing unnecessary and/or undesired processing of personal data; all without losing the functionality of the data system.” (Borking 1996, translation taken from Borking, Raab 2001).

TETs (transparency enhancing technologies) anticipate profiles that may be applied to a particular data subject. This concerns personalised profiles as well as distributive or non-distributive group profiles, possibly constructed out of anonymous data. The point is to give the data subject some idea of the selection mechanisms (the application of profiles) that may be applied, allowing adequate anticipation. To achieve this, the data subject needs access – in addition to his own personal data and a profiling / reporting tool – to additional external data sources, allowing some insight into the activities of the data controller. Based on this additional information, the data subject could perform a kind of counterprofiling.

In contrast to PETs, where the principle of data minimisation is a key element, TETs take into consideration a very data-intensive environment in which data collection is not necessarily related to the individual as such. The more that is known about the actual (or even possible) data processing and can be used by TETs to anticipate profiles, the more accurate their results can be. TETs are thus based on data maximisation, which covers not only all data released (if available), but also information on methods and profiling algorithms, data from other sources utilised by data controllers, and possibly information on security breaches if this leads to further findings. These data could also be parameterised by probabilities if the information is uncertain. 
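
As a purely illustrative sketch, assuming invented attribute names and probability values, the following Python fragment (the function anticipate_profile is hypothetical) shows how a TET might anticipate whether a group profile would apply to a data subject, weighting uncertain inputs by the probability that the data controller actually holds them.

```python
# Hypothetical sketch: a TET that anticipates whether a group profile is
# likely to be applied to a data subject, weighting uncertain inputs by
# the probability that they are actually available to the data controller.

def anticipate_profile(evidence: dict[str, tuple[bool, float]],
                       required_attributes: list[str]) -> float:
    """Estimate the probability that the profile applies: `evidence` maps an
    attribute name to (matches_profile, probability_the_controller_holds_it)."""
    likelihood = 1.0
    for attr in required_attributes:
        matches, p_available = evidence.get(attr, (False, 0.0))
        likelihood *= p_available if matches else 0.0
    return likelihood


# Data released by the subject (certain) and data the controller may have
# obtained elsewhere (uncertain, parameterised by a probability):
evidence = {
    "age_group_30_40": (True, 1.0),    # released by the subject himself
    "frequent_traveller": (True, 0.6)  # assumed purchase from a data broker
}
print(anticipate_profile(evidence, ["age_group_30_40", "frequent_traveller"]))
# -> 0.6: the profile would apply if the controller indeed holds the second attribute
```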

We see already from the quoted definitions that the terms are related because both PETs and TETs can help data subjects to maintain their privacy. Transparency as such – in the meaning of “clear visibility” – is an underlying principle both for PETs and for TETs:  

  1. Transparency of data processing usually is a prerequisite for effective protection of privacy; however, it is not a sufficient condition: mere transparency does not guarantee privacy compliance.  

  2. Transparency of the data subject’s own personal data should be provided by PETs to the data subject itself, to increase its knowledge of the data processing and to empower it to choose between desired and undesired processing of personal data; the same holds for TETs.  

There is an important difference, however: PETs should protect the data subject’s personally identifiable information against unauthorised access, e.g., by using confidentiality mechanisms. These confidentiality mechanisms can operate on the content level (e.g., encryption, access control) or on the communication network level (e.g., anonymising/pseudonymising techniques or other mechanisms to prevent linkability). Such mechanisms clearly do not belong to the transparency techniques but rather to their counterpart: the opacity techniques. 
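
To give a concrete illustration of a content-level opacity mechanism, the following minimal Python sketch derives a keyed pseudonym from an identifier; the key handling and all names are illustrative assumptions, not taken from the deliverable. Without the secret key, records remain linkable to the pseudonym but not to the real identity.

```python
# Minimal sketch of a content-level opacity mechanism: replacing an
# identifier with a keyed pseudonym before data leave the subject's domain.
import hmac
import hashlib


def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from an identifier using HMAC-SHA256."""
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()


key = b"example-domain-specific-secret"  # held only by the pseudonymising party
record = {"subject": pseudonymise("alice@example.org", key), "room_temp": 21.5}
# The backend can still aggregate readings per pseudonym, but cannot recover
# the e-mail address without the key.
```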

According to the definition of TETs, the data subject is likewise in focus: it should obtain information on the potential profiles concerning itself. It is debatable how much “counterprofiling” is possible in reality and how it can be balanced against other potential interests, e.g., those of other data subjects. To really mimic the profiling done by a data controller (or the profiling a hacker could do), i.e., to perform effective counterprofiling, it may be necessary to make use of personal data of other data subjects as well. For example, if a data subject has statistical twins whose detailed information is known, this knowledge may be extrapolated to the data subject itself, even though it originally did not provide that much data. Even where anonymised rather than personally identifiable data are used, the profiles may contain so much information that the possibility of re-establishing the link to the related data subject cannot be ruled out (cf. Hansen et al. 2007). Thus, there may be TETs which – with the good intent of mimicking the original (or potential) profiling done by a data controller – perform privacy-invasive processes. 
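
The statistical-twin problem can be made concrete with a small, purely hypothetical Python sketch (the function extrapolate_from_twins and all data are invented): an attribute the data subject never disclosed is inferred from subjects with similar known attributes, which shows why such counterprofiling can itself be privacy-invasive.

```python
# Hypothetical sketch of the "statistical twin" problem: attributes a data
# subject never disclosed are inferred from subjects with similar known
# attributes.  All names and values are invented for illustration.

def extrapolate_from_twins(subject: dict[str, float],
                           others: list[dict[str, float]],
                           target: str, k: int = 2) -> float:
    """Predict `target` for `subject` from its k most similar 'twins'."""
    shared = [a for a in subject if a != target]

    def distance(other: dict[str, float]) -> float:
        return sum((subject[a] - other.get(a, 0.0)) ** 2 for a in shared) ** 0.5

    twins = sorted((o for o in others if target in o), key=distance)[:k]
    return sum(t[target] for t in twins) / len(twins)


# The subject only released age and household size ...
subject = {"age": 34, "household_size": 2}
# ... but detailed data on statistically similar people are available:
others = [
    {"age": 33, "household_size": 2, "annual_income": 48000},
    {"age": 36, "household_size": 2, "annual_income": 52000},
    {"age": 60, "household_size": 1, "annual_income": 30000},
]
print(extrapolate_from_twins(subject, others, "annual_income"))
# -> 50000.0: an estimate of an attribute the subject never provided
```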

A pared-down interpretation of “counterprofiling” would limit its scope to information on the logic of the data processing, possibly complemented by data sources that are not privacy-invasive.

This chapter first elaborates on traditional PETs, covering both their opacity and their transparency properties, and then analyses potential technological concepts for TETs. Both subsections describe the current state of the art, the effectiveness of the respective technologies, and the remaining lacunae. 

 
