FIDIS Deliverable D14.2: Study on Privacy in Business Processes by Identity Management
Privacy Aspects: 'Ambient Law'


Personalised Profiles and the Need for ‘Ambient Law’

In an ambient intelligent environment, data can be aggregated and processed by means of knowledge discovery in databases (KDD) to detect patterns in the behaviour of clients (Custers, 2004). KDD cannot be equated with queries in a database, as queries depend on prior classification (Zarsky, 2002). Instead, data mining techniques such as KDD produce clusters of previously unknown sets, or association rules not thought of by the data analyst. This implies that such data mining or profiling generates new types of knowledge, which can then be applied and tested on new instances until the patterns are considered stable enough to apply to (potential) clients.
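The difference between a database query and pattern discovery can be illustrated with a minimal sketch (not part of the original deliverable; the transaction data and support threshold are hypothetical): frequent item pairs emerge from the data without the analyst specifying them in advance, whereas a query would presuppose the very categories it retrieves.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Discover item pairs that co-occur in at least min_support
    transactions. The pairs are not defined beforehand by the
    analyst -- they emerge from the data itself."""
    counts = Counter()
    for items in transactions:
        for pair in combinations(sorted(set(items)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical purchase records of anonymous clients.
transactions = [
    {"cigarettes", "coffee", "bread"},
    {"cigarettes", "coffee"},
    {"bread", "milk"},
    {"cigarettes", "coffee", "milk"},
]

print(frequent_pairs(transactions, min_support=3))
# {('cigarettes', 'coffee'): 3}
```

Applied to data recorded in an ambient environment, such discovered associations become the building blocks of the group profiles discussed below.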

The result of this type of processing of personal and other data, recorded and aggregated from an ambient intelligent environment, will be a series of dynamic group profiles, based on real-time monitoring by means of, for example, sensor technologies and RFID systems (Bohn and Coroama, 2005). If, for instance, behavioural biometric profiling becomes a reliable tool for assessing emotional states or intentional stances, this could in fact allow highly personalised profiles, constructed from intelligent combinations of individual profiles and a diversity of group profiles applicable to a particular individual. Such combinations will thus enable custom-made, autonomically applied personalised services.

Effects of Intelligent Personalisation

The application of such combined profiles to an individual person may have a significant impact on the risks and opportunities attributed to that person (Zarsky, 2005). Real-time monitoring and targeted servicing could allow a refined segmentation of the market, providing previously unknown possibilities for price discrimination. This concerns the buying and selling of consumer products such as foodstuffs, cars or even real estate, and services such as hotels and catering, insurance and credit; but, if the profiles are delivered to government agencies, they could also be used for purposes of taxation, fraud detection and crime prevention.

A second effect may be that, as service providers become capable of anticipating a change in our preferences, this not only allows them to cater to new preferences, but also gives them an opportunity to influence our behaviour if these changes are not profitable. For instance, if I am on the verge of quitting smoking, advanced profiling technologies may detect this and alert the tobacco industry. I may be confronted with extra advertising banners for cigarettes at specific times, calculated to have optimal impact, and/or I may be provided with free samples when ordering groceries from a supermarket. Such targeted 'servicing' seems a threat to our personal autonomy, mainly because we are not aware of the way our preferences are being manipulated (Zarsky, 2002). Being autonomous implies having the possibility to reflect on alternative courses of action, which is complicated if our environment influences our choices, as it were, 'behind our back'.

Data Minimisation and the Vision of AmI

The problem with personalisation based on refined combinations of automated group and individual profiles is twofold:  

  1. a person will often not be aware that a profile is being constructed and applied; 

  2. she generally has no access to the profiles that may be applied to her. 

Present data protection legislation is focused on the protection of personal data and has little to offer in terms of making profiles transparent. This is, for instance, because profiles will mostly be inferred from data concerning other people, and thus do not qualify as personal data in terms of Article 2(a) of the Directive (Zarsky, 2002).

One could of course argue that data minimisation will reduce the total amount of data and thus reduce the possibility of inferring adequate profiles. However, to generate adequate personalised services, especially in the case of Ambient Intelligence (AmI), as much data as possible must be collected and processed. Holding back data would most probably make the environment less intelligent, because the process of KDD or pattern recognition would be based on incomplete data. This implies that while techniques such as privacy-preserving data mining (PPDM) may solve some of the privacy problems, their use will also reduce the effectiveness of real-time monitoring and real-time adaptation of networked environments.
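This trade-off can be made concrete with a hedged sketch (not from the deliverable; PPDM covers many techniques, and the attribute, probabilities and sample size here are hypothetical) using randomised response, a classic privacy-preserving method: each client reports a true attribute only with a certain probability, so the analyst can still estimate the population frequency, while any individual record becomes unreliable.

```python
import random

def randomised_response(true_value, p=0.75, rng=random):
    """Report the true boolean attribute with probability p;
    otherwise report a uniformly random answer, giving each
    respondent plausible deniability."""
    if rng.random() < p:
        return true_value
    return rng.random() < 0.5

def estimate_frequency(reports, p=0.75):
    """Unbiased estimate of the true frequency from noisy reports:
    E[observed] = p*f + (1-p)*0.5, so solve for f."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p

rng = random.Random(42)
true_values = [rng.random() < 0.3 for _ in range(10_000)]  # ~30% smokers
reports = [randomised_response(v, rng=rng) for v in true_values]
print(round(estimate_frequency(reports), 2))  # close to 0.30
```

The aggregate estimate remains usable for statistics, but any single noisy report is too unreliable to drive real-time personalisation of one client, which is precisely the effectiveness loss described above.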

Data Maximisation and Personal Autonomy

If an intelligent environment indeed thrives on data maximisation, using profiling techniques to discriminate between noise and information, then we should consider the consequences of realising the vision of Ambient Intelligence. The crucial issue here is control, which can be explained in terms of two requirements from the perspective of clients in a business process:  

  1. access to the autonomically generated profiles that may be applied to them; and

  2. the possibility for (potential) clients to actively adapt their own profiles. 

This is the only way to effectively empower clients to act in ways that prevent undesired categorisation, and to anticipate the potential consequences of legitimate categorisation. Personal autonomy does not mean that one is capable of complete isolation or opacity; rather, it demands that one has the instruments to counter-profile one's environment, including the behaviour of intelligent machines or software programs.
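The two requirements above can be sketched in a minimal, hypothetical form (the profile names, rules and client attributes are illustrative, not taken from the deliverable): if group profiles are represented as inspectable rules, a client can see which profiles would be applied to her (requirement 1) before deciding whether to adapt her own data (requirement 2).

```python
# Hypothetical group profiles expressed as inspectable rules over
# recorded client attributes.
profiles = {
    "likely_quitter": {"cigarette_orders_per_month": lambda n: n < 2},
    "premium_segment": {"avg_basket_eur": lambda v: v > 150},
}

def applicable_profiles(client, profiles):
    """Return the names of all group profiles whose every rule
    matches the client's recorded attributes -- a transparency
    mechanism the client herself could invoke."""
    return [
        name for name, rules in profiles.items()
        if all(attr in client and rule(client[attr])
               for attr, rule in rules.items())
    ]

client = {"cigarette_orders_per_month": 1, "avg_basket_eur": 80}
print(applicable_profiles(client, profiles))  # ['likely_quitter']
```

Real AmI profiles would of course be far more opaque than explicit rules, which is exactly why a legal-technical transparency mechanism of this kind would need to be engineered in deliberately.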

The Need for a Vision of 'Ambient Law'

