Overview of AmI-relevant PETs

Traditionally, PETs are mostly categorised by the technologies used, the area of application, or standardisation efforts. An example is the classification of Identity Management Systems (IMS) introduced by Bauer, Meints, and Hansen (2005: 19ff). In the context of AmI (and thus also AmL), however, relevant PETs can also be categorised using more abstract criteria, e.g., whether they primarily support: 

  1. transparency for the data subject on all relevant data processing concerning him, as far as his personal data are concerned, or 

  2. opacity of data or actions concerning the data subject towards potential observers or data-processing entities.  

In the context of AmI, the most relevant PETs are the following. 

  1. Opacity-enhancing functions and tools concerning data and actions of the data subject, in particular tools and mechanisms limiting the linkability of data to a person, such as:

    1. use of different pseudonyms per context, possibly transferable to other users; 

    2. use of Privacy Preserving Data Mining (PPDM) techniques; 

    3. disabling or management of sensor functionality as desired by the data subject. 

  2. Transparency-enhancing functions and tools for the data subject, such as:  

    1. protocols to visualise and exchange privacy policies;  

    2. history management; 

    3. online functions for users to exercise their right of access to their personal data under data protection legislation, which enable the data subject to see what the data controller knows about him, i.e., enhance transparency, and to take further action if necessary (cf. Hansen 2007);

    4. provision of ad-hoc additional information (e.g., as audio-visual tags or asynchronously via separate channels such as websites, RSS feeds etc.) to data subjects concerning the environment they are in or concerning the reputation of data-processing entities involved. 

  3. Supporting Technologies: 

    1. Cryptography as an important instrument for access control and thus confidentiality (support for opacity). Cryptography is not elaborated further in this chapter. 

    2. Digital Rights Management (DRM) for personal data, supported by additional technologies such as Trusted Computing (TC, see section 5.2.3 for further explanation of DRM and TC). 

  4. Combined approaches: 

    1. Concepts to shift control in AmI environments to the user, e.g., supported by a personal digital assistant (user-controlled identity management). 

These tools will be elaborated in the next sections.  

Opacity-enhancing functions and tools

Opacity-enhancing functions and tools aim at limiting the linkability between personal data and the data subject. This can be achieved in many different ways. Commonly used methods are: 

  1. use of transaction-specific or context-specific pseudonyms, together with additional organisational measures to prevent linkability across the borders of transactions or communication contexts. Technically, this can be supported by credential systems such as “Credentica” and “Idemix” or by sector-specific identifiers as used by the Austrian citizen card;

  2. use of Privacy Preserving Data Mining (PPDM) techniques and methods. Most relevant in this context are the modification of attributes (basic data) and the use of special privacy-preserving data-mining algorithms to allow data or rule hiding (Verykios et al. 2004). 

The use of context-specific pseudonyms is nowadays mature from a technical perspective. However, the AmI approach does not seem to provide relevant economic incentives for their use from the perspective of service providers. The use of pseudonyms and additional organisational measures increases the complexity of AmI systems, while at the same time limiting the amount of data available for processing, and thus potentially limiting the quality of AmI services.  
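To illustrate the basic idea, the following minimal sketch derives stable but mutually unlinkable pseudonyms per context from a user-held master secret, using an HMAC as a pseudorandom function. All names are hypothetical; deployed credential systems such as Idemix achieve unlinkability by cryptographically richer means (e.g., zero-knowledge proofs), and the Austrian citizen card derives its sector-specific identifiers in its own, legally prescribed way.

```python
import hashlib
import hmac

def context_pseudonym(master_secret: bytes, context: str) -> str:
    """Derive a stable, context-specific pseudonym from a user-held secret.

    Pseudonyms for different contexts are unlinkable without the secret,
    because HMAC-SHA256 behaves as a pseudorandom function.
    """
    digest = hmac.new(master_secret, context.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

secret = b"master secret held only by the data subject"  # hypothetical
print(context_pseudonym(secret, "health-insurance"))
print(context_pseudonym(secret, "car-rental"))  # different, unlinkable value
```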

Privacy Preserving Data Mining has been an area of research since 1991. PPDM is used to support the privacy of data subjects or of organisations taking part in the mining of distributed data. The latter application is not relevant in this context.

Though PPDM has been applied especially in the health sector, it is so far not widely used. From the perspective of Oliveira and Zaïane (2004), the most relevant hindering factors are (a) the very specific areas of application of many methods, (b) the lack of integration into data-mining solutions, and (c) the difficulty of measuring both the quality of the mined results and the privacy protection achieved along the way.

The understanding of PPDM seems to be largely technically focused (Meints, Möller 2007). As a result, PPDM is not able to cover all relevant data-protection principles, e.g., as stated in the OECD Guidelines (Oliveira, Zaïane 2004). Far better privacy-preserving results can be achieved when PPDM is used in the context of good-practice standards for data mining, such as CRISP-DM or Knowledge Discovery in Databases (KDD), provided that compliance with data protection is understood as part of the business targets (Meints, Möller 2007).
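As a purely illustrative sketch of the attribute-modification idea mentioned above, the following hypothetical code generalises and perturbs quasi-identifying attributes before mining; it is not one of the specific algorithms surveyed by Verykios et al. (2004).

```python
import random

def generalise_age(age: int, bucket: int = 10) -> str:
    """Replace an exact age with a coarse range (attribute generalisation)."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def perturb(value: float, scale: float = 2.0) -> float:
    """Add random noise so individual values are hidden while aggregate
    statistics over many records remain approximately correct."""
    return value + random.uniform(-scale, scale)

record = {"age": 37, "postcode": "24103", "blood_pressure": 128.0}
anonymised = {
    "age": generalise_age(record["age"]),        # -> "30-39"
    "postcode": record["postcode"][:2] + "***",  # truncate quasi-identifier
    "blood_pressure": perturb(record["blood_pressure"]),
}
print(anonymised)
```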

In the context of RFID, selective and non-selective disabling of sensors has been discussed. Juels, Rivest and Szydlo (2003) have developed so-called blocker tags that perform a denial-of-service attack on any reader in range by simulating a large number of different tags. Based on the same technical functions used for the blocker tag, Rieback, Crispo and Tanenbaum (2005) have developed a device for RFID privacy management. This device allows selective blocking of single RFID tags in range. Blocker tags and the “privacy manager” can be used with certain types of RFID tags only; the remaining types of passive RFID and certain biometrics do not support any kind of user-controlled identity management yet (Bizer et al. 2006: 314).
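The decision logic of such a selective “privacy manager” can be sketched as follows. This is a hypothetical simplification that ignores the radio-protocol level at which the actual device of Rieback, Crispo and Tanenbaum operates; it only shows the user-defined selectivity that distinguishes it from the indiscriminate blocker tag.

```python
# User-defined rules: which tags may answer which readers (all IDs hypothetical).
ALLOWED = {
    "reader:home-fridge": {"tag:milk-carton", "tag:butter"},
    "reader:shop-checkout": {"tag:milk-carton"},
}

def may_respond(reader_id: str, tag_id: str) -> bool:
    """Selective blocking: only explicitly allowed reader/tag pairs go
    through; everything else is suppressed."""
    return tag_id in ALLOWED.get(reader_id, set())

print(may_respond("reader:shop-checkout", "tag:butter"))  # False -> blocked
print(may_respond("reader:home-fridge", "tag:butter"))    # True  -> answered
```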

Spoofing of RFID readers using manipulated or copied RFID tags has also been discussed as an approach to enhance the privacy of data subjects (e.g., by Thompson et al. 2006). This approach has also been used in scenario II (section , scene 2).

Transparency-enhancing functions and tools

Automated privacy policies

Traditionally, privacy policies are used internationally to support transparency of data processing. In the late 1990s, first attempts were made to formalise privacy policies with the aim of making them machine-readable and supporting automated processing. As a first result, the Platform for Privacy Preferences (P3P) protocol was standardised in a first version (V1.0) by the W3 Consortium (W3C) in 2002. Based on P3P, A P3P Preference Exchange Language (APPEL) was planned, but its standardisation is dormant. Instead of APPEL, XPref can also be used together with P3P to exchange privacy preferences (e.g., Kolari et al. 2005).

P3P together with XPref allows operators of web sites to express privacy policies in a formalised language. These policies can be compared with the privacy preferences that users enter in their browsers or in tools integrated into browsers (plug-ins); such tools carry out the comparison and inform users in case of discrepancies.  
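The comparison step can be sketched as follows. Real P3P policies are XML documents and APPEL/XPref rules are considerably richer; the dictionaries and vocabulary below are hypothetical and only illustrate the matching idea.

```python
# Hypothetical, simplified site policy (real P3P policies are XML documents).
site_policy = {
    "purposes": {"service-delivery", "marketing"},
    "retention": "indefinitely",
}

# Hypothetical user preferences, as a browser plug-in might store them.
user_prefs = {
    "purposes": {"service-delivery"},  # marketing not accepted
    "retention": {"no-retention", "legal-requirement"},
}

def find_discrepancies(policy: dict, prefs: dict) -> list:
    """Return a readable list of mismatches between policy and preferences."""
    issues = []
    extra = policy["purposes"] - prefs["purposes"]
    if extra:
        issues.append(f"unaccepted purposes: {sorted(extra)}")
    if policy["retention"] not in prefs["retention"]:
        issues.append(f"unaccepted retention: {policy['retention']}")
    return issues

for issue in find_discrepancies(site_policy, user_prefs):
    print("warning:", issue)  # the tool informs the user of the discrepancy
```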

Another relevant approach is the Enterprise Privacy Authorization Language (EPAL, suggested by IBM in 2003 and based on XML). An overview of languages for formulating, comparing, and negotiating privacy preferences will be given in the FIDIS deliverable D3.8.

These approaches can be understood as an early version of machine-to-machine (M2M) communication between a client (in this case a browser, acting on personal preferences) and a server offering a centralised service (in this case a web server). Human-Machine Interfacing (HMI) has been adapted to deal with the results of the preceding M2M communication. This kind of M2M communication and HMI can be understood as a predecessor of the communication of David’s MyComm device at the hotel (section , scene 1).

History management

An overview of the state of the art in history management and recent research is given by Meints (2006). History management was introduced by Hansen et al. (2003) as an important identity-management function of user-controlled IMS (also called type 3 IMS). The most important prerequisite for the applicability of history management is that the data subject actively discloses his personal data with a device that allows for logging of the transferred data and later analysis.  
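Conceptually, such a device-side log can be as simple as the following sketch; the file name and fields are hypothetical, and actual implementations such as the data journal and the Data Track discussed below are considerably more elaborate.

```python
import json
from datetime import datetime, timezone

HISTORY_FILE = "disclosure_history.jsonl"  # hypothetical local journal

def log_disclosure(recipient: str, attributes: list, purpose: str) -> None:
    """Append one record per disclosure of personal data, on the user's side."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recipient": recipient,
        "attributes": attributes,
        "purpose": purpose,
    }
    with open(HISTORY_FILE, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

def disclosures_to(recipient: str) -> list:
    """Later analysis: which of my data does this data controller hold?"""
    with open(HISTORY_FILE, encoding="utf-8") as fh:
        return [e for e in map(json.loads, fh) if e["recipient"] == recipient]

log_disclosure("hotel.example", ["name", "passport-no"], "check-in")
print(disclosures_to("hotel.example"))
```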

So far, history management has typically been used in the context of web services accessed via a personal computer. Brückner (2003) developed the so-called data journal (later called IJournal), the first usable implementation of history management for Mozilla-based browsers. The IJournal is being further developed as part of MozPETs (Mozilla PETs, Brückner, Voss 2005).  

This concept was significantly improved in the context of the project PRIME – Privacy and Identity Management for Europe (PRIME 2007). Building on the main architecture of the PRIME prototypes as well as on first descriptions of related TETs in FIDIS deliverable D14.2, Hansen et al. (2007) describe methods to increase transparency for users regarding privacy-relevant data processing, so that they can maintain control over their private sphere. These transparency tools are particularly valuable because they inform the user via one integrated interface of a user-controlled identity-management prototype. Basically, five transparency tools are fully or partially implemented in the PRIME prototypes: 

  1. log functionality on the user’s side: the so-called Data Track (cf. FIDIS deliverable D14.2),  

  2. warn functions for signalling possible mismatches with the user’s preferences, 

  3. tutorials and demo tools,  

  4. a security feed to report and react to vulnerabilities, and  

  5. online help functions which enable users to exercise their rights and to keep control over the personal data they have released. 

The PRIME prototypes do not address the AmI world but rather the world of Internet and mobile communication. However, the prototypes’ underlying concept is valid for AmI, too, since comprehensive information on data processing as well as on potential or actual risks has to be provided if users are to be in control of their private sphere. 

Supporting technologies

Supporting technologies are mainly used in the context of combined approaches (see section 5.2.4).  

Digital Rights Management

In the context of transparency and opacity tools, Digital Rights Management (DRM) and Trusted Computing (TC) have been discussed as relevant supporting technologies. Depending on the implementation, both technologies can support transparency as well as opacity, both from the perspective of the data subject and from that of an observer or data controller.  

The term DRM indicates methods that help copyright holders to control the access to digital content (Grimm et al. 2005: 16). Typical technical methods used in the context of DRM are: 

  1. access control for digital data (such as passwords for subscribers of online journals); 

  2. copying-protection mechanisms. 

From a technical perspective, today’s DRM solutions have mostly proved to be ineffective (Bizer et al. 2006: 132), as they can be circumvented, e.g., via analogue data channels (analogue audio tracks or paper printouts of digital documents). Analogue data can be re-digitalised, e.g., using audio-capturing systems or optical character recognition (OCR). 

Currently, new approaches using Trusted Computing to enforce digital rights are an area of research (see, e.g., Reid, Caelli 2005).  

Trusted Computing

In terms of supporting technologies, Trusted Computing is an emerging, powerful tool to enforce multilateral legal policies and thus to support the concept of Ambient Law. 

While traditional security technologies can provide adequate security and trustworthiness for one party (in terms of our example scenarios, either for the provider or for the user), providing fair, non-discriminating security and trustworthiness for all relevant parties needs a technology that supports multilateral security. The concept of multilateral security was introduced by Günter Müller et al. (1999) and aims to meet the security needs of all participants.  

In a nutshell, Trusted Computing technology provides multilateral security in information processing by providing enhanced components that act as a trustee or, in terms of IT security, as a Trusted Third Party (TTP): a (possibly non-human) entity, whom all parties trust, enforces legal obligations. These obligations (“policies”) are either globally specified or attached to every single portion of data. Examples of globally specified policies are data protection laws: they are mandatory and overrule any specific policy. Examples of attached policies are user- or provider-specific requirements (e.g., “here is my driver’s licence information; please use it only for renting a car”). An example of implementing attached policies is “sticky policies” (see section 5.2.4). 

However, the key factor is the ability to resolve conflicting policies. On the one hand, this ability is an important prerequisite for technically enforcing AmL; on the other hand, it is exactly the technically challenging point: to build an efficient machine (or software agent or software component) that is able to make decisions based on a defined set of rules on how to act. In this context, “efficient” means acting autonomously, i.e., without the need for human interaction. 
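A minimal sketch of such a decision component, assuming a hypothetical rule model in which globally specified policies always take precedence over attached ones, might look as follows:

```python
# Globally specified policy (e.g., derived from data protection law):
# mandatory, overrules any policy attached to a single data item.
GLOBAL_POLICY = {"forbidden_purposes": {"profiling-without-consent"}}

def may_process(data_item: dict, requester: str, purpose: str) -> bool:
    """Decide autonomously, without human interaction, whether a request
    complies first with the global policy and then with the attached one."""
    if purpose in GLOBAL_POLICY["forbidden_purposes"]:
        return False  # the global policy overrules everything
    attached = data_item["policy"]
    return (requester in attached["allowed_recipients"]
            and purpose in attached["allowed_purposes"])

licence = {
    "value": "driving licence data",  # hypothetical placeholder
    "policy": {
        "allowed_recipients": {"car-rental"},
        "allowed_purposes": {"renting-a-car"},
    },
}
print(may_process(licence, "car-rental", "renting-a-car"))  # True
print(may_process(licence, "airline", "renting-a-car"))     # False
```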

For a comprehensive summary of Trusted Computing with respect to its history, objectives, technology, and current spread, we refer to FIDIS deliverable D3.9. At this point, we provide a motivating example of how this technology can be used as a supporting technology to implement AmL. 

Imagine the following two scenarios.  

Scenario 1. Joe wants to book his next holidays and has decided to do this in the old-fashioned way by going to a travel agency and negotiating the trip with a friendly travel vendor next to him. Joe wants to travel to a Greek island by air and to have a rental car at his destination. Hence, the travel vendor asks him to provide a set of personal data. Among these, his dietary data is needed for the airplane meals, his driving-licence data for the car rental, and his passport data for the hotel registration. Clearly, Joe does not want all these data to be spread around. So, he agrees with the vendor that, e.g., the dietary information is given only to the airline. He ensures this by adding a paragraph to the travel contract where this is stipulated.

Scenario 2. Now, let us assume that Joe wants to use a fancy new software travel-agency agent to organise his next business trip. A software agent is a piece of smart software that acts autonomously to fulfil a given task without user interaction. It can negotiate with other agents and make decisions, and it can be regarded as an actor in an Ambient Intelligence environment. Joe now has to equip his software agent with all the personal data it might need. At the same time, he defines two rules. Firstly, to each piece of personal information a policy is attached which defines who is permitted to use his data (and possibly in which way): e.g., dietary information is given only to airlines. Secondly, data is given only to entities which technically guarantee the enforcement of the given policies. This guarantee (or “technical assurance”) is realised by employing Trusted Computing components.

Combined approaches

To support the enforcement of privacy policies agreed on by all participants, the use of “sticky policies” has been suggested by Casassa Mont, Pearson, and Bramhall (2003). In addition to policies formulated in formalised languages, the use of Trusted Computing has been suggested to bind the policies irreversibly to the personal data throughout the subsequent data-processing steps and to support policy enforcement in each step. This approach can also be understood as applying digital rights management (DRM) to personal data. 
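How a policy can be made to “stick” to data can be sketched as follows: a keyed hash binds data and policy together so that any tampering is detected at each processing step. This is a deliberate simplification; the proposal of Casassa Mont, Pearson, and Bramhall relies on Trusted Computing and trusted third parties to enforce the policies, not merely to detect their modification.

```python
import hashlib
import hmac
import json

KEY = b"key held by a trusted component"  # hypothetical, e.g., TC-protected

def make_sticky(data: str, policy: dict) -> dict:
    """Bind the policy to the data so neither can be changed unnoticed."""
    blob = json.dumps({"data": data, "policy": policy}, sort_keys=True)
    mac = hmac.new(KEY, blob.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"data": data, "policy": policy, "mac": mac}

def verify_sticky(packet: dict) -> bool:
    """Each processing step re-checks the binding before using the data."""
    blob = json.dumps({"data": packet["data"], "policy": packet["policy"]},
                      sort_keys=True)
    expected = hmac.new(KEY, blob.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["mac"])

packet = make_sticky("dietary: vegetarian", {"use-only-for": "airline-meals"})
packet["policy"]["use-only-for"] = "marketing"  # tampering attempt
print(verify_sticky(packet))                    # False -> refuse to process
```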

In scenario II, AmI environments are partly controlled on the basis of the data subject’s preferences, supported via a device (e.g., the MyComm device as an example of a personal digital assistant). In many cases, the concepts used in these scenarios come close to user-controlled Identity Management Systems (type 3 IMS) as described in FIDIS deliverable D3.1. More realistically, hybrid systems are to be expected. These combine centralised identity management (control by the operator of the AmI environment, implementing type 1 and/or type 2 IMS) with user- or data-subject-controlled identity management (type 3 IMS). In such cases, the user might get the impression of being in control, while in the background data might still be collected and (ab)used (Bizer et al. 2006: 312ff). Therefore, besides some form of user control, transparency-enhancing tools are crucial in an AmI environment.  

 
