Decisions made during the workshop

Title: THE STRUCTURE OF THE REPORT ‘A VISION ON AMBIENT LAW (AmL)’ D7.9

Participants to the Workshop

 

The Structure of the Report ‘A Vision on Ambient Law (AmL)’ D7.9

 

Chapter 1: A Vision of Ambient Law: Conceptual Exploration

(Mireille Hildebrandt, VUB; Bert-Jaap Koops, TILT) 

 

In the deliverable, the concept of Ambient Law will be broader than data-protection legislation: it will focus on embodying legal norms in technology in the context of Ambient Intelligence in general.  

 

Mireille: Draft concept Ambient Law 

‘A technological inscription of legal norms that makes it possible: 

  1. to implement mandatory parts of e.g. Directive 95/46/EC 

  2. to trace – via M2M communication – which personal data are being processed, and how 

  3. to negotiate – via M2M communication – about the exchange and processing of personal data, while staying within the limits of mandatory data protection legislation 

 

In other words: using the technology against which data protection aims to protect in order to achieve effective protection.’ 
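The kind of M2M negotiation described in points 2 and 3 can be illustrated with a toy policy-matching routine. Everything below is a hypothetical sketch, not an existing protocol: the policy fields, the set of forbidden purposes, and the retention maximum are invented for illustration, with mandatory data protection law modelled abstractly as a non-negotiable floor.

```python
# Hypothetical sketch of M2M negotiation over personal-data exchange:
# a user device and a service provider reconcile their data-handling
# policies, with mandatory data protection rules (modelled abstractly
# below) acting as a floor that cannot be negotiated away.

from dataclasses import dataclass

# Illustrative legal floor: purposes that may never be agreed upon,
# and a maximum retention period. Values are invented for this sketch.
FORBIDDEN_PURPOSES = {"covert_profiling"}
MAX_RETENTION_DAYS = 365

@dataclass
class Policy:
    purposes: set        # purposes for which data may be processed
    retention_days: int  # how long the data may be kept

def negotiate(user, provider):
    """Return the agreed Policy, or None if no lawful overlap exists."""
    purposes = (user.purposes & provider.purposes) - FORBIDDEN_PURPOSES
    retention = min(user.retention_days, provider.retention_days,
                    MAX_RETENTION_DAYS)
    if not purposes:
        return None  # no common lawful purpose: no data exchange
    return Policy(purposes=purposes, retention_days=retention)

user = Policy({"personalisation", "billing"}, 30)
provider = Policy({"personalisation", "covert_profiling"}, 730)
agreed = negotiate(user, provider)
print(agreed)  # only 'personalisation' survives, capped at 30 days
```

The point of the sketch is the asymmetry it makes visible: the devices negotiate freely inside the lawful space, but the mandatory floor is enforced in code rather than left to either party's goodwill.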

 

Technological embodiment of legal norms in a constitutional democracy demands specific checks and balances at three different levels: 

  1. the level of legislation (which is both legal and political). At this level, the use of specific technologies to support or enforce legal rules needs democratic legitimisation and needs to fit constitutional demands; 

  2. the level of administration (which is both legal and governmental). At this level, the use of specific technologies to support or enforce legal rules needs to comply with the principles of fair and transparent administration; 

  3. the level of adjudication (which is legal, political, and governmental, because it determines the scope of the law). At this level, the use of specific technologies to support or enforce legal rules must be made contestable. 

Major issues arising in the context of cooperating objects in networked environments that allow real-time autonomic profiling in order to seamlessly adapt the environment to a user’s anticipated preferences include: 

  1. unfair discrimination (unfair because citizens are not aware of who knows what, and who decides on what basis); 

  2. the autonomy trap (refined segmentation allows manipulation whenever the user is unaware of it). 

 

Chapter 2: Scenario I: User Control & Data Minimisation

 

2.1 Development of the scenario 

Scenario I is user-centric: the user is empowered in AmI, carrying a device with which to control the environment, for example by determining which data can be exchanged between user and environment. This may be a ‘privacy-friendly’ scenario, and perhaps a commercial doom scenario. Key concepts are ‘data minimisation’, ‘contextual integrity’, and ‘partial identities’ (pseudonyms).

 

2.2 Assessment of existing legal framework (focus on personal data) 

ICRI: Focus on Access to Personal Data, Consent, Purpose Limitation Principle. Some observations about the (lack of) enforceability and effectiveness. 

 

2.3 Assessment of existing PETs 

ICPP (together with SIRRIX?): anonymisation, pseudonymity, unlinkability, history management, privacy-preserving data mining, trusted computing. Some observations about the reliability of these technologies and their actual application; notes on the (lack of) socio-economic incentives to implement widespread use of these technologies. 
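The pseudonymity and unlinkability concepts listed above can be made concrete with a minimal sketch of context-specific pseudonyms (the ‘partial identities’ of scenario I): the same user identifier yields a different but stable pseudonym per service context, so two services cannot link their records by identifier alone. HMAC-based derivation is one standard technique for this; the user-held secret key and the truncation length are simplifying assumptions of this sketch, not a deployed design.

```python
# Hypothetical sketch of context-specific pseudonyms: a user-held
# secret key turns one identifier into a different stable pseudonym
# per context, supporting unlinkability across service providers.

import hmac
import hashlib

SECRET_KEY = b"user-held secret"  # would live on the user's device

def pseudonym(user_id, context):
    """Stable per-context pseudonym derived from the user identifier."""
    msg = f"{user_id}|{context}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

p_shop = pseudonym("alice", "shop.example")
p_news = pseudonym("alice", "news.example")
print(p_shop != p_news)                              # differs across contexts
print(p_shop == pseudonym("alice", "shop.example"))  # stable within one
```

Note that this only addresses linkability via identifiers; as the observations above suggest, behavioural data can still re-identify users, which is one reason the socio-economic incentives matter as much as the cryptography.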

 

 

 

2.4 How to achieve AmL in this scenario 

VUB, TILT: the question is whether this is an AmI scenario at all, since the intelligence seems to lie with the user rather than with the environment. Depending on the degree of adaptation and anticipation of preferences, it may or may not be called an AmI scenario. If it can be called AmI, AmL will be established through the architecture of user control. 

 

Chapter 3: Scenario II: Provider Control & Data Maximisation

 

3.1 Development of the scenario 

Scenario II is provider-centric: AmI is controlled by the providers of services (and of goods, if goods still exist by then). The environment knows exactly who is where and will interact without the consent, and perhaps without the knowledge, of the user. Data flows freely between users and their devices, service providers, and perhaps third parties as well. This may be a ‘user-friendly’ and commercial Valhalla scenario. Key concepts are ‘data optimisation’, ‘networked environment’, and ‘distributed intelligence’ (the intelligence flows from the interconnectivity). 

 

3.2 Assessment of the existing legal framework 

ICRI: attention to the opposing logics of data minimisation (in data protection legislation) and data maximisation (needed to achieve data optimisation in a scenario of ubiquitous, interoperable, real-time and autonomic adaptation of the environment); analysis of the applicability of the directive on telecommunications and its effectiveness within this scenario; analysis of the data retention directive and the framework decision on data protection in the third pillar (police and judicial cooperation in criminal matters). Attention must be paid to the knowledge that is generated and applied: (how) does the legal framework protect against unfair use of such knowledge, and (how) does it empower citizens (facilitate user control)? 

 

3.3 Assessment of relevant PETs and TETs 

ICPP/SIRRIX: analysis of the opposing logics of data minimisation and data maximisation; exploration of the idea of TETs that provide citizens with knowledge of the knowledge that is used to influence their behaviour; exploration of the issue of M2M communication between user and service provider and the ensuing problems of HMIs. 
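The idea of a TET that gives citizens "knowledge of the knowledge" used to influence them can be sketched as a queryable decision log: the environment records which profile and which data items drove each automated decision, and the user can ask for that record. All names and the data model are invented for illustration; a real TET would also need integrity guarantees for the log itself.

```python
# Hypothetical sketch of a transparency-enhancing tool (TET): each
# automated decision is logged together with the profile applied and
# the data items used, and a user can query the decisions about them.

from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    decision: str          # what the environment did
    profile_applied: str   # which profile triggered it
    data_items_used: list  # which personal data fed the profile

@dataclass
class TransparencyLog:
    records: dict = field(default_factory=dict)  # user_id -> [records]

    def log(self, user_id, record):
        self.records.setdefault(user_id, []).append(record)

    def explain(self, user_id):
        """Return every logged decision concerning this user."""
        return self.records.get(user_id, [])

tet = TransparencyLog()
tet.log("alice", DecisionRecord("show_discount", "frequent_buyer",
                                ["purchase_history"]))
for r in tet.explain("alice"):
    print(r.decision, "<-", r.profile_applied, r.data_items_used)
```

The sketch also hints at the HMI problem noted above: the log is machine-readable, but presenting it so that a citizen can actually contest a decision is a separate design task.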

 

3.4 How to achieve AmL in scenario II 

VUB and TILT: to what extent could M2M negotiation empower users in an AmI environment, and how does this relate to AmL? What other AmL ways are there to check the provider-controlled power to make decisions about citizens and consumers, without losing the AmI potential of this scenario?  

Chapter 4: Scenario III: Distributed Intelligence & Minimisation of Knowledge Asymmetry

 

4.1 Development of the scenario 

Scenario III is a mix: acknowledging that hiding data can make the environment less intelligent, while unlimited access to data can make individual citizens vulnerable to undesirable profiling, this scenario aims to achieve some kind of balance by minimising knowledge asymmetry. 

4.2 Assessment of the legal framework 

ICRI: which legal rights and obligations should be invented or adjusted to allow this scenario to take hold, especially regarding the knowledge (profiles) that is used to influence people? (How) could these rights be effective without risking the intelligence of the environment? 

4.3 Assessment of PETs and TETs 

ICPP: which mix of PETs and TETs could strike the right balance between protection and empowerment of citizens on the one hand and the intelligence of the environment on the other? 

4.4 How to achieve AmL in scenario III? 

VUB and TILT: how can user empowerment, M2M negotiations, flexibility, and citizens’ control be combined with an intelligent environment that needs randomised data to prevent loss of intelligence, with all the ensuing issues of discrimination based on false positives and false negatives?  

 

 

Concluding Chapter

(Mireille Hildebrandt, VUB and Bert-Jaap Koops, TILT) 

 

 

 

 

 

 

fidis-wp7-del7.8.workshop_ambient_law_02.sxw