Abbreviations
DRM Digital Rights Management
DT Digital Territories
ECHR European Convention on Human Rights and Fundamental Freedoms
EU European Union
HMI human machine interface
IMS identity-management system
IPTS Institute for Prospective Technological Studies
M2M machine-to-machine
OJ Official Journal
P3P Platform for Privacy Preferences
PET Privacy-Enhancing Technology
PII personally identifiable information
PPDM Privacy-Preserving Data Mining
TC Trusted Computing
TET Transparency-Enhancing Technology
TTP Trusted Third Party
VLE Virtual Learning Environment
VR Virtual Residence
XML eXtensible Markup Language
XPref a preference language for P3P
Executive Summary
Ambient Intelligence (AmI) is a development of ICT that seamlessly integrates smart devices with a smart environment. If the vision of AmI comes true, we will move to an age in which our entire environment is equipped with tools that ‘think’ for themselves and make ‘smart’ decisions on our behalf. AmI will no doubt emerge somewhere, in some form, in the future, but what it will look like remains an open question. Widely diverging scenarios are possible, each with a very different look and feel for its users.
This report focuses on an essential element of how AmI will turn out: can it incorporate core values of the democratic constitutional state, in particular privacy and non-discrimination? Rather than approaching this question by describing a bleak, Big Brother vision of AmI, this report takes a more constructive and forward-looking perspective, bringing our conception of law a step forward. We use the concept of Ambient Law (AmL) as a key direction of thought for developing Ambient Intelligence. Ambient Law denotes an integration of legal norms with the technologies those norms aim to regulate. It builds on concepts like ‘code as law’ (Lessig) and ‘value-embedded design’ (Nissenbaum), but it is new in claiming that the norms embodied in technology should be constituted as legal norms.
This leads to the research question of this report: can law as embodied in the future Ambient Intelligence architecture – Ambient Law – safeguard the core values of privacy and non-discrimination, while at the same time helping to realise the potential of Ambient Intelligence? This question is answered by analysing Ambient Intelligence and the role of Ambient Law therein from a conceptual, legal, and technical perspective.
Conceptually, modern law depends on the printed script – ‘law in the books’ – and hence the formulation and the enforcement of a rule are usually separated. Ambient Law differs from this in that it could embed legal rules in the technological and organisational architecture of AmI in a way that ensures enforcement, ideally in an uncircumventable way: for example, a smart car that cannot start if it detects a drunk driver. ICT-embedded law does not have to replace text-based, printed law; on the contrary, in an AmI world ‘law in the books’ can be complemented with ‘law in other technologies’ to improve legal protection.
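To make this idea more concrete, the following sketch shows how such a technology-embedded rule might look in code. It is purely illustrative: the sensor interface, the blood-alcohol limit, and the rule encoding are assumptions chosen for the example, not elements of the legal analysis in this report.

```python
# Illustrative sketch only: the breathalyser reading, the 0.5 g/l limit, and
# the rule encoding are assumptions used to make the idea of a technology-
# embedded ('ambient') legal rule concrete; they are not taken from this report.

from dataclasses import dataclass


@dataclass
class EmbeddedRule:
    """A legal norm encoded so that the device itself enforces it."""
    description: str
    limit_g_per_l: float  # hypothetical statutory blood-alcohol limit

    def permits_start(self, measured_g_per_l: float) -> bool:
        # Enforcement coincides with use: the rule is checked at the moment
        # of action, not after a violation has occurred.
        return measured_g_per_l < self.limit_g_per_l


class SmartCar:
    def __init__(self, rule: EmbeddedRule):
        self.rule = rule

    def start(self, breathalyser_reading_g_per_l: float) -> str:
        if self.rule.permits_start(breathalyser_reading_g_per_l):
            return "engine started"
        return f"start refused: {self.rule.description}"


if __name__ == "__main__":
    rule = EmbeddedRule("driving with BAC above the legal limit is prohibited", 0.5)
    car = SmartCar(rule)
    print(car.start(0.2))  # engine started
    print(car.start(0.9))  # start refused: driving with BAC above the legal limit ...
```

The point of the sketch is not the toy rule itself but the collapse of formulation and enforcement into one artefact, which is precisely what raises the legitimacy questions discussed later in this summary.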
The legal analysis shows that core principles of the data-protection framework – notably data minimisation, purpose specification, purpose binding, informed explicit consent, and accountability – are fundamentally at odds with the vision of Ambient Intelligence: AmI can hardly develop at all if the current legal framework is applied to the letter. The development of emerging technologies and of AmI applications and systems should not be hindered from the start by legal provisions based on an increasingly outdated paradigm of preventing data processing. The vision of AmI therefore requires a revision of the existing legal data-protection framework, perhaps along the lines of ‘contextual integrity’ (Nissenbaum).
At the same time, the core values behind privacy and data protection, such as autonomy, self-development, and human dignity, also imply that AmI should not be allowed to develop in a legal vacuum. Where the current guiding principles of the legal framework increasingly fall short, other mechanisms must be established to maintain some form of balance between users (consumers, citizens) and providers (businesses, government). Moreover, autonomy and non-discrimination are also at risk when AmI applications seamlessly and invisibly make customised decisions about people. A first prerequisite for non-discrimination in an AmI world is transparency, which is unlikely to be brought about by legislative measures alone or by self-regulation on the part of AmI providers. One problem with self-regulation is that providers have much to gain from learning as much as possible about their users and far less incentive to protect their users’ privacy.
It is here that the technological perspective can help. Privacy-Enhancing Technologies (PETs) and Transparency-Enhancing Technologies (TETs) have great potential to fill the gaps in legal protection that AmI creates. To enhance transparency, purpose specification, and informed consent, tools like P3P and History Management have been developed, while DRM and Trusted Computing could add alternative ways to track personal data and the way they are processed. Such tools should be embedded in a personal device, like a PDA, which enables users to interact automatically with the AmI environment while keeping substantial control over data flows and while identifying the AmI provider (perhaps in a reversibly pseudonymous way) who processes data and profiles them. A PET- and TET-based PDA enables users to monitor the intelligent environment’s anticipatory actions (which are based on profiles), so that they can decide whether to comply, change their behaviour, or complain if the profiling is based on unjust criteria. Users should not have to make such decisions all of the time, of course: their PDA should learn in which cases they go along with the environment and in which cases they want to be informed of profiling in order to make a case-specific decision, and it should translate the environment’s profiling in such a way that the user understands what has happened.
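As a rough illustration of how such a personal device could automate this matching, the sketch below compares a provider’s declared data-handling policy with the user’s stored preferences and decides whether to release data silently, ask the user, or refuse. The policy fields and the decision rules are simplified assumptions loosely inspired by P3P-style vocabularies; they do not reproduce the P3P specification or any particular tool discussed in this report.

```python
# Illustrative sketch of automated preference matching on a personal device.
# The policy fields and decision rules are simplified assumptions; they are
# inspired by P3P-style vocabularies but do not reproduce the P3P specification.

from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    RELEASE = "release data automatically"
    ASK_USER = "inform the user and ask for a case-specific decision"
    REFUSE = "refuse and log the request"


@dataclass
class ProviderPolicy:
    provider_pseudonym: str   # reversibly pseudonymous identifier of the provider
    data_requested: set       # e.g. {"location", "purchase history"}
    purpose: str              # declared purpose of the processing
    retention_days: int


@dataclass
class UserPreferences:
    freely_released: set = field(default_factory=lambda: {"language", "location"})
    acceptable_purposes: set = field(default_factory=lambda: {"navigation", "payment"})
    max_retention_days: int = 30


def decide(policy: ProviderPolicy, prefs: UserPreferences) -> Decision:
    """Match a provider's declared policy against the user's preferences."""
    if policy.purpose not in prefs.acceptable_purposes:
        return Decision.REFUSE
    if policy.retention_days > prefs.max_retention_days:
        return Decision.ASK_USER
    if not policy.data_requested <= prefs.freely_released:
        return Decision.ASK_USER
    return Decision.RELEASE


if __name__ == "__main__":
    request = ProviderPolicy("provider#7f3a", {"location"}, "navigation", 7)
    print(decide(request, UserPreferences()).value)  # release data automatically
```

In the terms used above, the ‘ask the user’ branch is where the device’s learning component would gradually take over routine decisions, while the pseudonymous provider identifier supports the transparency side by keeping the data controller traceable.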
However, this technological perspective is largely theoretical today. Many PET concepts and tools exist, but they are not broadly used in practice. Moreover, today’s PETs focus on personal data and are not designed to deal with group profiling, and most PETs work in the world of telecommunications rather than in a comprehensive AmI setting encompassing all areas of users’ lives. Much work remains, then, if PETs are to fulfil their promise of protecting users in an AmI environment. This is all the more true for TETs, which have been researched far less than PETs and for which only some components are available.
The conclusion to be drawn from the conceptual, legal, and technical analysis is that AmL can in fact significantly help to safeguard the core values of privacy and non-discrimination without obstructing the development of Ambient Intelligence as such. It acts as a check on AmI providers who may know too much about AmI users or make unjustified decisions about them. To a much greater extent than the present legal framework, the integration of legal rights of opacity and transparency with the technological architecture would enable citizens actually to exercise these rights. Building these core values into the AmI architecture will enhance its social and legal acceptability and will thus further the development of AmI.
This will not be easy, however. It cannot be taken for granted that the embedded rules have sufficient legitimacy, and making technology-embedded rules meet core principles of law is a daunting task indeed. Yet this should not deter us from trying. Otherwise, bleak alternatives arise, illustrated in this report by two scenarios: a provider-centric scenario in which users are manipulated by ‘the system’ without knowledge or redress, and a user-centric scenario in which AmI does not fulfil its potential because the environment remains rather stupid and quite user-unfriendly. We therefore conclude that the vision of Ambient Law should serve as an important roadmap for making the vision of Ambient Intelligence come true, in such a way that the core values of privacy and non-discrimination are safeguarded.
Much is needed to further develop, refine, and implement this vision. We recommend that at least the following issues be taken up for further research:
revising the legal framework of privacy and non-discrimination in light of the advent of Ambient Intelligence;
creating an adequate legal framework for generating and applying profiles;
developing PET- and TET-based human machine interfaces that allow individual citizens to communicate with their environment (a minimal sketch of one possible transparency-enhancing building block follows this list);
developing technological tools and redefining legal rules in such a way that the rules can be digitised and built into the AmI architecture;
developing mechanisms to ensure that technology-embedded rules are legitimate and conform to core legal principles.
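By way of illustration of the transparency-related recommendation above, the following sketch shows one conceivable TET building block: a hash-chained log in which profiling decisions are recorded so that a user or a supervisory authority can later verify that the record has not been altered. The record fields and the overall design are assumptions made for the example, not proposals of this report.

```python
# Illustrative sketch of a transparency-enhancing (TET) building block: a
# hash-chained log of profiling decisions whose integrity can be verified
# afterwards. The record fields are assumptions chosen for illustration.

import hashlib
import json
from dataclasses import dataclass


@dataclass
class LogEntry:
    prev_hash: str
    record: dict

    def entry_hash(self) -> str:
        payload = self.prev_hash + json.dumps(self.record, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


class ProfilingLog:
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1].entry_hash() if self.entries else "genesis"
        self.entries.append(LogEntry(prev, record))

    def verify(self) -> bool:
        """Check that every entry still chains to its predecessor."""
        prev = "genesis"
        for entry in self.entries:
            if entry.prev_hash != prev:
                return False
            prev = entry.entry_hash()
        return True


if __name__ == "__main__":
    log = ProfilingLog()
    log.append({"profile": "frequent traveller", "decision": "price tier B"})
    log.append({"profile": "late payer", "decision": "extra deposit required"})
    print(log.verify())  # True: the chain is intact
```

Such a log addresses only one aspect of transparency – the integrity of the record – and would still need legal rules determining who must keep it, who may inspect it, and what follows from an unjustified profile.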
