FIDIS Deliverable D7.9: A Vision of Ambient Law

Scenarios

Since this deliverable is future-oriented, we thought it useful to illustrate the developments sketched here in various scenarios. These scenarios are not expectations or predictions of the future, but stories that illustrate possible developments of the world over a timeframe of a few decades. They may help readers visualise the kinds of world that Ambient Intelligence may create for us. Put differently, the stories may also help policy makers and technology developers envision what kinds of world they are creating in the medium or long term when working on Ambient Intelligence. At this point in time, fundamental choices can – and in our opinion should – be made with respect to the architecture of AmI. One such fundamental choice is the architecture of control: who will be able to push the buttons of AmI environments? The importance of this choice is well illustrated by two extreme scenarios: one with AmI providers in total control, and one with users at the buttons.  

The scenarios were developed during a workshop in January 2007 and further discussed among the authors via email. To explain how the scenarios came into being, we shall now briefly go through Schoemaker’s steps for developing scenarios, bearing in mind that we use the scenarios here as a heuristic for illustrative purposes, not as an actual planning tool (Schoemaker 1995).  

  1. Define the scope (time frame) 

The vision of Ambient Law concerns the vision of Ambient Intelligence, so the time frame is 5-25 years.  

  2. Identify the major stakeholders (who will be affected, who could influence) 

The major stakeholders are citizens, businesses, and government authorities.  

    1. Citizens can influence the development of AmI by rejecting the technologies, by buying them or by using them in ways not anticipated by the developers. 

    2. Businesses and government authorities can influence the realisation of AmI by focusing on trust and usability, which involve concern for privacy and security and intelligent design of human-machine interfaces (HMI).  

Relevant issues in this respect are:  

  1. Trust may depend on user-control (interactive computing).  

  2. Usability may depend on provider-control (hidden complexity and proactive computing).  

  3. Identify basic trends 

Development of enabling technologies, PETs, autonomic computing, and AmI ecosystems, as well as further development of digital rights management (DRM) and trends in data protection and in civil and criminal liability. 

    1. Development of enabling technologies: RFID systems, sensor technologies, monitoring networks, behavioural biometric profiling, nanotechnologies.  

    2. Development of user-controlled Identity Management Systems and Privacy Enhancing Technologies (privacy-preserving data mining, P3P, PKI, Nissenbaum’s contextual-integrity model, etc.). 

    3. Development of autonomic computing and AmI ecosystems. 

    4. Development of law: DRM, data protection and civil and criminal liability, e.g., for harm caused by autonomic computing. 

  4. Identify key uncertainties 

User control, provider control, a balanced combination of user and provider control, as well as developments in data protection. 

  1. Is user control going to be the paradigm (data minimisation, interactive computing)? Will this result in less intelligence in the environment?  

  2. Is provider control going to be the paradigm (data maximisation, proactive computing)? Will this result in ignorance and manipulation of the user? 

  3. Is a fruitful combination possible of user control (interactive), intelligence of the system (proactive), empowering the user by means of minimisation of knowledge asymmetry? 

  4. Will data protection develop further in the direction of a personality right (human-rights perspective) or will commodification of personal data be explored? Will strict liability for harm caused by automatic profiling be considered? 

  5. Construct initial scenario themes 

The central question is: how does the scenario demonstrate the issues Ambient Law is supposed to address? Think of technological embodiment of mandatory data-protection legislation, machine-to-machine (M2M) negotiations about the exchange of data for services, history management, and access to profiles. 
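The idea of an M2M negotiation embodying data-protection rules can be made concrete with a toy sketch: a user agent discloses only the attributes a service strictly needs and the user's policy permits (data minimisation), and logs each disclosure for later inspection (history management). All names here (`UserAgent`, `ServiceOffer`, the attribute labels) are invented for illustration and do not refer to any actual deliverable, standard, or library.

```python
# Hypothetical sketch of an M2M data-for-service negotiation.
# The user agent releases only attributes that are both requested by
# the service and allowed by the user's policy (data minimisation),
# and keeps a history log so the user can see what was disclosed.

from dataclasses import dataclass, field


@dataclass
class ServiceOffer:
    name: str
    required: set   # attributes the service needs to function
    optional: set   # attributes it would like (e.g. for profiling)


@dataclass
class UserAgent:
    policy: set                                   # attributes the user allows
    history: list = field(default_factory=list)   # disclosure log (transparency)

    def negotiate(self, offer: ServiceOffer):
        """Return the attributes to disclose, or None if the offer is refused."""
        if not offer.required <= self.policy:
            return None   # mandatory data exceeds user policy: refuse the service
        disclosed = offer.required | (offer.optional & self.policy)
        self.history.append((offer.name, sorted(disclosed)))
        return disclosed


agent = UserAgent(policy={"location", "age_bracket"})
offer = ServiceOffer("navigation", required={"location"},
                     optional={"age_bracket", "contacts"})
print(sorted(agent.negotiate(offer)))   # ['age_bracket', 'location'] — 'contacts' withheld
```

A refusal path follows the same logic: an offer whose required attributes exceed the policy simply returns `None`, so the "mandatory legislation" is enforced by the architecture rather than by the user's vigilance.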

  1. Scenario I could show that proactive – provider-controlled – computing (1) provides comfort if the technologies work properly, (2) causes irritation if the technologies wrongly anticipate preferences, and (3) allows providers to manipulate user behaviour, because they are aware of preferences without the user knowing this. It could also indicate to what extent the user has access to what happens to her data (who stores, sells, buys, and uses them), and how a lack of such access affects her (new opportunities and risks are attributed on the basis of profiles the user is not aware of). 

  2. Scenario II could show that interactive – user-controlled – computing (1) puts a burden on the user, and (2) makes it difficult for the environment to anticipate inferred preferences. It could also indicate to what extent the user has access to what happens to her data (who stores, sells, buys, and uses them, and how), and how a lack of such access affects her (new opportunities and risks are attributed on the basis of profiles the user is not aware of).  

  3. Scenario III could show that the right balance between interactive and proactive computing depends on who decides when the user shifts to proactive computing (what is the default position and what are the practical consequences?), on what knowledge basis this is done, and how this interferes with the intelligence of the environment and the possibility of manipulating the user. It could also indicate to what extent the user has access to what happens to her data (who stores, sells, buys, and uses them, and how), and how a lack of such access affects her (new opportunities and risks are attributed on the basis of profiles the user is not aware of). 
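The question raised in Scenario III – who sets the default control mode, and with what practical consequences – can be sketched in a few lines. In this toy model the factory default is proactive (silence counts as consent), and only an explicit user action shifts the device to interactive mode; an audit log stands in for the user's access to what was inferred about her. The class and method names (`AmbientDevice`, `act`) are illustrative assumptions, not an existing API.

```python
# Hypothetical sketch: the default control mode decides whether an AmI
# environment acts on inferred preferences (proactive) or asks first
# (interactive). Here the factory default favours the provider, so the
# practical consequence is that silence counts as consent.

class AmbientDevice:
    def __init__(self, proactive_by_default=True):
        # Who chose this default — user or provider — is the key question.
        self.proactive = proactive_by_default
        self.audit_log = []   # lets the user inspect what was inferred and done

    def act(self, inferred_preference, user_confirms=None):
        """Apply an inferred preference, honouring the current control mode."""
        if self.proactive:
            self.audit_log.append(("acted", inferred_preference))
            return f"applied: {inferred_preference}"
        if user_confirms:   # interactive mode: explicit consent is required
            self.audit_log.append(("confirmed", inferred_preference))
            return f"applied: {inferred_preference}"
        self.audit_log.append(("asked", inferred_preference))
        return f"awaiting confirmation: {inferred_preference}"


device = AmbientDevice()            # provider-chosen default: proactive
print(device.act("dim lights"))     # applied without asking

device.proactive = False            # user shifts to interactive mode
print(device.act("play jazz"))      # now the environment must ask first
```

The asymmetry is visible in the two calls: under the provider-chosen default the preference is applied silently, while after the user's shift the same inference only produces a request for confirmation.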

 
