FIDIS Deliverables, D7.9: A Vision of Ambient Law – Provisional vision of Ambient Law (AmL)

 

Law by design

An interesting development within the ethics of technology is the idea of ‘value in design’. It refers to the fact – already discussed above – that technological artefacts are neither good nor bad, but never neutral: they influence the types of behaviour we (can) develop, and this may have moral implications. Because technologies can be designed in different ways, with a different impact on our behaviour, their moral significance varies with the design (Flanagan, Howe et al. 2007). One way to take responsibility for the moral significance of a specific design is to integrate design choices into the legal process, enabling democratic procedure to determine the way technologies steer our behaviour. One could thus understand Ambient Law as a type of ‘law by design’.

This would imply that AmL articulates specific legal norms in the relevant technological devices or infrastructure. If we focus on the norms that aim to protect citizens against violations of their privacy, while empowering them with transparency rights, we could ask the legislator to require that the mandatory rules of data-protection legislation (transparency, use limitation, purpose specification, consent, data quality, participation, accountability of the data controller) be inscribed into the architecture of AmI, making violation of these rules difficult by design. One could even say that whereas modern law (based on printed code) separates the written code from its implementation, this need not be the case with AmL. This could make the law not only more effective but also more equal in its application.

For instance: 

  1. transparency: history management of one’s personal data and access to processed personal data with data controllers should be made possible via M2M communication;

  2. purpose specification & use limitation: such transparency should enable one’s PDA to check (M2M) which purposes are specified, and to check whether the principle of use limitation has been complied with in light of these purposes;

  3. consent: one’s machine-proxy (the PDA that serves as a proxy when negotiating consent) should be capable of negotiating, e.g., the supply and processing of personal data, according to one’s personal preferences, while taking into account the mandatory aspects of data-protection legislation;

  4. data quality & participation: one’s machine-proxy should be capable of matching data stored in databases against one’s accurate personal data, and of requiring adjustments if data are not (or no longer) correct;

  5. accountability of the data controller: at all times, one’s machine-proxy should be capable of identifying the data controller who reads, collects, stores, or otherwise processes data, including all others that have access to these data; this could be done pseudonymously, as long as there is identifiability of the data controller in case of data-protection violations.
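The machine-proxy checks listed above can be illustrated with a minimal sketch. This is not an existing system: the record format, the class names (`ProcessingRecord`, `MachineProxy`) and the `audit` method are hypothetical, assuming only that a data controller exposes an M2M-readable processing history (point 1) against which the proxy can verify use limitation (point 2) and data quality (point 4).

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    """One M2M-readable log entry held by a data controller (hypothetical format)."""
    controller_id: str   # pseudonymous but resolvable identifier (point 5)
    purpose: str         # declared purpose of this processing act (point 2)
    data: dict           # the personal data as stored by the controller

@dataclass
class MachineProxy:
    """A citizen's proxy that audits controllers on the citizen's behalf."""
    specified_purposes: set   # purposes the citizen consented to (point 3)
    reference_data: dict      # the citizen's own, accurate data (point 4)

    def audit(self, records):
        """Return the violations found in a controller's processing history."""
        violations = []
        for rec in records:
            # Use limitation: processing for a purpose that was never specified.
            if rec.purpose not in self.specified_purposes:
                violations.append((rec.controller_id, "use limitation", rec.purpose))
            # Data quality: stored values that no longer match the citizen's data.
            for key, value in rec.data.items():
                if self.reference_data.get(key, value) != value:
                    violations.append((rec.controller_id, "data quality", key))
        return violations
```

A proxy configured with the citizen's consented purposes and current data could then flag, for example, a record processed for an unconsented purpose or one holding an outdated address.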

As argued in FIDIS deliverable 7.7, present data-protection legislation lacks adequate protection against the application of profiles, especially because citizens are not aware of the consequences of such application. This calls for a new right of access to such profiles, empowering citizens to contest the way they are being categorised, irrespective of whether these profiles have been derived from their own or from others’ (personal) data. Such a right has no meaning if the technological infrastructure that enables AmI does not provide the technological means to achieve such access, or if it lacks user-friendly human–machine interfaces that allow citizens to understand the profiles and how they can impact their lives. For this reason, FIDIS deliverable 7.7 argued for TETs: transparency-enhancing (legal and technological) tools. The concept of TETs can of course refer to transparency of the processing of personal data, but in relation to AmI and profiling we emphasise the need for TETs that make transparent even those profiles that do not fall within the scope of the concept of personal data. Both TETs and privacy-enhancing technologies (PETs) could be examples of AmL in so far as they become integrated into the legal framework, combining a right to privacy or transparency with the technological inscription of that right into the AmI infrastructure.

We could paraphrase the above as follows: Ambient Law in fact uses the very technologies that data protection aims to regulate in order to protect against their undesirable consequences. This may sound like a paradox; indeed, it is quite similar to the famous paradox of the ‘Rechtsstaat’: protecting citizens against the state by allowing them to contest actions of the state in a court of law that shares the authority of the state itself.

In Chapter , the state of the art regarding the technological implementation of mandatory parts of data protection is further discussed.

As to transparency rights concerning the application of profiles, major issues arise in the context of cooperating objects in networked environments that allow real-time autonomic profiling in order to adapt the environment seamlessly to a user’s anticipated preferences. The technological articulation of a legal right of access to profiles remains a challenge, as such technologies have not yet been developed. In FIDIS report D7.7, we argued for the need to develop transparency-enhancing tools (TETs), integrating legal and technological tools to prevent: 

  1. unfair discrimination (unfair due to the fact that citizens are not aware of who knows what and who decides on which basis); 

  2. the autonomy trap (refined segmentation allows manipulation whenever the user is not aware). 
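The transparency idea behind such TETs can be sketched in a few lines. The sketch below is purely illustrative and assumes a hypothetical decision log in which every profile-based decision records which controller applied which profile to whom; the function `explain_decision` is not an existing tool, but shows the kind of query a right of access to profiles would have to support ("who knows what, and who decides on which basis").

```python
def explain_decision(decision_log, citizen_id):
    """Return, for one citizen, which profiles were applied, by whom, and with
    what outcome. Each log entry is a hypothetical dict with the keys
    'citizen', 'controller', 'profile' and 'outcome'."""
    return [
        {"controller": entry["controller"],
         "profile": entry["profile"],
         "outcome": entry["outcome"]}
        for entry in decision_log
        if entry["citizen"] == citizen_id
    ]
```

A citizen (or her machine-proxy) could use such a query to discover, for instance, that a webshop applied a pricing profile to her, and then contest that categorisation.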

 
