You are here: Resources > FIDIS Deliverables > Profiling > D7.9: A Vision of Ambient Law > 


AmI-relevant TET concepts

Whereas the previous sections gave an overview of AmI-relevant PETs, many of which have been realised as products, this section illustrates TETs at the concept level, because implementations are still missing today. 

The main objective of TETs is to anticipate the profiles that may be applied to a particular data subject. These profiles may relate to the data subject individually or to groups of data subjects; they could contain personally identifiable information of the data subject, but they may also be anonymous. In any case, decisions based on profiles may affect individuals, and it is therefore desirable that individuals can learn about these profiles and, if necessary, take action if the (potential) decisions could be harmful. 

A study on the linkage of digital identities (Hansen et al. 2007b) identifies the following roles, tasks and resources in the workflow of enriching data by linking it with other information: 

  1. “the address provider who assigns identifiers or addresses to a person according to an address schema defined by an address schema provider; 

  2. the data collector who monitors and stores information; 

  3. the linker who connects collected data items according to linking algorithms, possibly being provided by another party, the linking algorithm provider; 

  4. the analyzer who analyzes the data by applying analysis algorithms (so-called models), possibly being provided by another party, the analysis algorithm provider (or model provider); 

  5. the decision maker who decides on basis of the information available at that stage; 

  6. the data subject concerned by the decision and its consequences” (Hansen et al. 2007b: 8f). 
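The workflow above can be sketched as a toy pipeline. All identifiers, data items, function names and the scoring "model" below are invented for illustration; they only show how the roles pass data to one another, not how any real profiling system works:

```python
# Toy sketch of the linkage workflow from Hansen et al. (2007b);
# all data, names and the scoring "model" are invented.

def assign_address(person: str) -> str:
    """Address provider: map a person to an identifier (toy schema)."""
    return "id-" + person.lower()

def collect(observations):
    """Data collector: monitor and store information per identifier."""
    store = {}
    for identifier, item in observations:
        store.setdefault(identifier, []).append(item)
    return store

def link(stores):
    """Linker: connect data items from several collectors by identifier."""
    linked = {}
    for store in stores:
        for identifier, items in store.items():
            linked.setdefault(identifier, []).extend(items)
    return linked

def analyse(linked, model):
    """Analyzer: apply an analysis algorithm ('model') to the linked data."""
    return {identifier: model(items) for identifier, items in linked.items()}

def decide(profiles, threshold=2):
    """Decision maker: decide on the basis of the computed profiles."""
    return {identifier: ("flag" if score >= threshold else "ok")
            for identifier, score in profiles.items()}

alice = assign_address("Alice")
shop = collect([(alice, "bought-book"), (alice, "bought-dvd")])
web = collect([(alice, "visited-forum")])
profiles = analyse(link([shop, web]), model=len)  # toy model: activity count
print(decide(profiles))  # {'id-alice': 'flag'}
```

Even this toy version makes visible that the linker and analyzer only work because a common address schema connects records from independent collectors.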

This workflow, which is typical for profiling (combining linkage and analysis), shows that data as well as different algorithms (implemented in software) are needed to link and analyse the available information. If TETs are to accurately mimic the profiling results which are – or can be – used by the data controller, they ideally need access to the same data, the same linking algorithms and the same analysis algorithms, which may take as additional input information from further data sources, possibly containing personal data of other users. These data and algorithms are not necessarily provided at one location; different service providers may be involved, transferring their (intermediate) results to other parties. In many cases these algorithms are regarded as trade secrets and protected against access because the business models of specific service providers rely on them. Substantial computing power (e.g., high-performance computing realised by supercomputers or computer clusters) may be required to build the profiles, which is not always available to individuals using TETs. In addition, the required access to data, algorithms and computing power may require legal action (e.g., contracts to use the required resources) and cause costs.

Provided that TETs get access to the data and the (implemented) algorithms as they are applied by the profiling entities, and that they can work with similar computing power, they will get the same results. Still, this does not automatically predict which decision the decision makers will take, as that decision may be based on further factors probably not fully known to the data subject. 

TETs could only access all the data and algorithms in use if the entities doing the profiling provided this information themselves, e.g., by allowing direct access or by involving third parties, such as a Data Protection Authority, where data and software are made available. Even if this were legally demanded and data controllers really fulfilled such a legal obligation, it would only cover actual and planned profiling. For additional profiling done spontaneously or combining further data sources, potentially without bothering to comply with the law (or outside its jurisdiction), TETs would have to “guess”, i.e., collect all kinds of (public and access-restricted) data and algorithms from various data sources and perform all kinds of data mining on the data. This would probably yield a vast number of different profiles related to the individual. The use of PETs for data minimisation would influence the possible results, but even then profiles could be calculated based on probabilities of linkage, or they would rather be group profiles instead of individual profiles, which could also lead to decisions directly affecting the individual concerned.  
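Such "guessed" linkage based on probabilities could, in its simplest form, score how likely two records from different sources refer to the same person. The following sketch uses an invented, naive measure (the fraction of shared attributes whose values match); real linkage algorithms weight attributes by how identifying they are:

```python
# Naive illustration of probabilistic linkage "guessing": score two
# records from different sources by the fraction of shared attributes
# with matching values. Records and attribute names are invented.

def linkage_probability(rec_a: dict, rec_b: dict) -> float:
    """Fraction of attributes present in both records that match."""
    shared = set(rec_a) & set(rec_b)
    if not shared:
        return 0.0
    matches = sum(1 for key in shared if rec_a[key] == rec_b[key])
    return matches / len(shared)

forum_rec = {"zip": "24118", "year_of_birth": 1980, "gender": "f"}
shop_rec = {"zip": "24118", "year_of_birth": 1981, "gender": "f"}

print(linkage_probability(forum_rec, shop_rec))  # 2 of 3 shared attributes match
```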

TETs would have to work all the time to generate new profiles or update already created ones, because each action, whether by the individual or by other entities with at least some relation to the individual, could influence them. Of course, the use of TETs is itself a piece of information which can be used for profiling. Moreover, the output of TETs may affect the decisions of users: it may help them protect their privacy, but it may also overwhelm them, stop them from doing anything (because it could be bad for the profiles about them) or make them careless because they feel doomed by all the information out there. These reactions can in turn be used for further profiling. 

We conclude from these considerations that comprehensive TETs are hard to realise, and even if implementations were possible, individuals would have to be taught to use them in a way that empowers rather than frightens or depresses them. 

Still, transparency is necessary if people are to make informed decisions about their behaviour in the information society. So there might be components for TETs which do not claim to perfectly anticipate profiles, but which help users at least to some degree (some were already mentioned in section 5.2): 

  1. A history logfile of past actions and data disclosures, kept in the trusted area of an individual. 

  2. Information for data subjects on the data stored about them and on the way their data are processed by data controllers (including data transfers), possibly in more detail than in a typical privacy policy if needed. 

  3. Information on security breaches which may lead or have led to unauthorised access. 

  4. Publicly available collection of known linkages and linkability of data (cf. Hansen et al. 2007b). 

  5. Computation of linkability of attributes considering explicit and implicit information (cf. Berthold, Clauß 2007). 

  6. Publicly available information on known profiling algorithms and implementations; if not provided by the data controllers themselves, these may be reverse-engineered by peers and published. 

  7. Provision of computing power by peers to calculate profiles for authorised users. 

  8. Understandable presentation of all available information. 

  9. Media campaigns to teach individuals what is typically linked, how they can recognise possible profiles and categorisations from the decisions based on them, and how they can react if these decisions are not desired.  
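Item 5, the computation of linkability of attributes (cf. Berthold, Clauß 2007), can be illustrated with a deliberately simple measure: the size of the "anonymity set", i.e., how many persons in a population match a disclosed combination of attribute values. The population and attributes below are invented; the cited work uses more refined, information-theoretic measures:

```python
# Toy linkability estimate: count how many persons in a small invented
# population match a disclosed combination of attributes. The smaller
# the anonymity set, the more linkable the disclosure.

population = [
    {"zip": "24118", "gender": "f", "year": 1980},
    {"zip": "24118", "gender": "f", "year": 1980},
    {"zip": "24118", "gender": "m", "year": 1975},
    {"zip": "10115", "gender": "f", "year": 1980},
]

def anonymity_set_size(disclosed: dict) -> int:
    """Number of persons in the population matching all disclosed values."""
    return sum(1 for person in population
               if all(person.get(k) == v for k, v in disclosed.items()))

print(anonymity_set_size({"zip": "24118"}))                 # 3 persons match
print(anonymity_set_size({"zip": "24118", "gender": "f"}))  # 2 persons match
print(anonymity_set_size({"zip": "10115", "year": 1980}))   # unique: 1 person
```

The example shows why each additionally disclosed attribute shrinks the anonymity set and so increases linkability, which is exactly the information such a TET component would present to the user before a disclosure.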

Whenever information is provided, individuals should be given the means to interpret it, e.g., through technological support and/or the help of third parties or peers they trust. 

