
Privacy and the legal-social content of identity

D13.1: Identity and impact of privacy enhancing technologies

This document is a report on technologies that enhance privacy, from the technological point of view. We examined neither policy-based solutions nor the law; rather, we provide a review of the technologies available.



D13.1 Addendum: Identity and impact of privacy enhancing technologies

This document is an addendum to our report on technologies that enhance privacy from the technological point of view, in which we provided a review of the technologies available.


[ Download ] [ Read Online ]

D13.3: Study on ID number policies

The objective of this deliverable is to present a view on the sensible use of identification numbers, especially in the public domain. The question of whether proper use can be achieved by a single global identifier or by multiple identifiers will be answered. In this deliverable several FIDIS partners investigate different aspects of ID numbers, such as the history of the use of identification documents, the legal framework, sociological and theoretical aspects, and the possible use of ID numbers in the technique of profiling. The investigations presented in this report thus provide a sound basis for determining the risks and opportunities in using ID numbers, especially in the area of e-government. Country reports illustrate the choices made between using either a single global identifier or multiple identities. The report shows how the ID number can be put to good use while at the same time not unduly harming the privacy interests of the individual.

[ Download ] [ Read Online ]

D13.4: The privacy legal framework for biometrics

The present report reviews the fundamental rights to privacy and data protection which must be assured to individuals, and Directive 95/46/EC, which provides more detailed rules on how to establish protection in the case of biometric data processing. The present framework does not seem apt to cope
with all issues and problems raised by biometric applications. The limited
recent case law of the European Court of Human Rights and the Court of
Justice sheds some light on some relevant issues, but does not answer all
questions. The report provides an analysis of the use of biometric data and the
applicable current legal framework in six countries. The research demonstrates
that in various countries, position is taken against the central storage of
biometric data because of the various additional risks such storage entails.
Furthermore, some countries stress the risks of using biometric characteristics that leave traces (e.g., fingerprint, face, voice). In general, controllers of biometric applications receive little clear guidance as to how to implement biometric applications. Because of conflicting approaches,
general recommendations are made in this report with regard to the regulation
of central storage of biometric data and various other aspects, including the
need for transparency of biometric systems.


[ Download ]

D13.6 Privacy modelling and identity

This document critically reviews existing approaches (the most common theoretical tools) for modelling relations between items of identity-related information, as well as their applicability to the measurement or quantitative expression of the level of privacy.
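One common family of such quantitative approaches expresses privacy as the adversary's uncertainty, e.g. the Shannon entropy of a probability distribution over candidate identities. The following is a minimal illustrative sketch of that idea (added here for illustration; it is not code from the deliverable):

```python
import math

def anonymity_entropy(probabilities):
    """Shannon entropy (in bits) of the adversary's distribution over
    candidate identities; higher entropy means more uncertainty,
    i.e. more privacy."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def effective_anonymity_set(probabilities):
    """Entropy re-expressed as an equivalent number of equally
    likely suspects (2 ** H)."""
    return 2 ** anonymity_entropy(probabilities)

# Uniform uncertainty over 8 users: H = 3 bits, effective set size 8.
uniform = [1 / 8] * 8
# Skewed case: one user is the prime suspect, so privacy degrades.
skewed = [0.65] + [0.05] * 7

print(anonymity_entropy(uniform))        # 3.0
print(effective_anonymity_set(skewed))   # < 8
```

The point of the second function is that two distributions with the same number of suspects can offer very different protection: the skewed case behaves like a much smaller anonymity set than its eight nominal members.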


[ Download ]

D13.7: Workshop Privacy

The workshop examined and extended the findings of D13.6 (Privacy modelling and identity), and thus provided better insight into what may become a critical ability in the future: expressing the level of protection, or the state, of identity-related information. This will be studied further in D13.8 and D13.9.


[ Download ]

D13.8: Applicability of privacy models

In the present deliverable, we focus on the applicability of privacy models: we review and illustrate the applicability of the models from Deliverable D13.6 using a real-world example. In addition, we show some shortcomings of the approaches presented in D13.6 and include the aspects of combining information and of misinformation, i.e., information which, to some extent and for some reason, cannot be (fully) verified by an adversary. These aspects may potentially be of major influence in the computation of a measure of privacy.


[PDF Download]

D13.9: Estimating Quality of Identities

While deliverable D13.8 illustrated the applicability of models and approaches for measuring privacy by more-or-less declarative means, this deliverable focuses on testing and evaluating them. Because real-world data concerns real people and their personal data, all data used were anonymised. Our main goal is to demonstrate the results achievable in privacy measurement with the data available for scientific research.

[ PDF Download ]

D13.12: Privacy – Self-valuation of communication traffic data

This deliverable describes an experiment conducted within FIDIS on the value people attach to their private information. The goal of the experiment was to evaluate the value people attach to information describing their use of online communication tools. We describe the experiment and the structure of our online questionnaire, and provide a detailed evaluation of the answers received.


D14.1: Workshop on Privacy in Business Processes

This workshop was the kick-off meeting for WP14 and aimed at coordinating the work on deliverables D14.2 and D14.3 and at presenting the corresponding contributions of their participants. One external speaker from the EU project "Privacy and Identity Management for Europe" (PRIME) presented PRIME’s approach to privacy in business processes. The workshop on privacy in business processes was held on September 11th and 12th, 2006, during the FIDIS 1st Research Event in Budapest, Hungary. The agenda, presentations and minutes can be found at

[ Download ] [ Read Online ]

D14.2: Study on Privacy in Business Processes by Identity Management

Privacy is not only a concern of customers. Service providers, too, fear privacy violations as a main hurdle to the acceptance of personalised services. Furthermore, the protection of privacy is in the interest of service providers who take on the customer relationship management activities of several other service providers: they manage customers’ profiles, e.g. in loyalty programmes and eHealth scenarios with electronic patient records, and offer profile aggregation as a service. If it were possible to link the profiles of a customer without such service providers, the latter would not benefit from their aggregation service. Three case studies show privacy threats in business processes with personalised services.

The objective of this study is to identify privacy threats in business processes with personalised services, to suggest process models for modelling privacy-aware business processes, and to derive security requirements for user-centric identity management in order to preserve privacy.

The scenarios and use cases presented in this study are recommended for non-technical readers, whereas the analysis of user-centric identity management protocols and the approaches for identity management extensions are recommended for technical readers.

[ Download ] [ Read Online ]

D14.3: Study on the Suitability of Trusted Computing to support Privacy in Business Processes

The European Directives 95/46/EC and 2002/58/EC demand the consent of users for purpose-based processing of their data. In practice, users give their consent to the privacy statements of service providers if they want to use personalised services. Since current privacy enhancing technologies focus on the disclosure of personal data and not on their usage, users are not able to verify whether service providers follow their privacy statements. It follows that users have to trust service providers to enforce the rules of their privacy statements.

The objective of this deliverable is to investigate whether Trusted Computing is suitable for realising a trust model in which service providers are able to show users that they have enforced the agreed rules. The motive for choosing Trusted Computing is that it provides a tamper-resistant foundation for identifying an information system’s configuration, and thus for identifying whether specific services, e.g. for monitoring the usage of personal data, are used.

Approaches for using Trusted Computing in order to support the enforcement of
privacy policies are presented. This deliverable proposes a modification of the
specification by the Trusted Computing Group and a monitor for observing the
usage of personal data.
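The core Trusted Computing mechanism alluded to here is the chained hash measurement of system components (the TPM’s PCR "extend" operation), which lets a verifier detect any change in the measured configuration. A simplified, hypothetical sketch of that mechanism (not the deliverable’s own design; component names are invented):

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """Simplified PCR extend: new PCR = H(old PCR || H(component)).
    Real TPM 1.2 PCRs work similarly with SHA-1."""
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

def measure_boot(components):
    """Fold all measured components into a single PCR value."""
    pcr = b"\x00" * 20  # PCRs start zeroed at platform reset
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good_boot = [b"bios", b"bootloader", b"kernel", b"usage-monitor"]
expected = measure_boot(good_boot)

# A verifier (here: a user checking a provider's platform) compares the
# reported PCR against a known-good value; any changed, missing, or
# reordered component yields a different hash chain.
tampered = measure_boot([b"bios", b"bootloader", b"kernel"])
print(measure_boot(good_boot) == expected)  # True
print(tampered == expected)                 # False
```

The order-sensitivity of the chain is what allows a user to check that a specific service, such as the proposed usage monitor, was actually loaded.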

[ PDF Download ] [ Read Online ]

D14.4: Workshop on “From Data Economy to Secure Logging as a Step towards Transparency”

This workshop was the kick-off meeting for the WP14 work on privacy evidences as an instrument for enforcing privacy policies after the disclosure of personal data. It aimed at coordinating the work on deliverables D14.5 and D14.6 and at presenting the corresponding contributions of their participants.
This workshop was held on September 11th, 2007, at the FIDIS 2nd Research
Event in Athens, Greece. The agenda, presentations and minutes are available
on the internal FIDIS pages of WP14.

[ PDF Download ] [ Read Online ]

D14.5 Experimental Study on Profiling in Business Processes

The aim of this study is to trace the behaviour of mainly commercial entities with respect to their handling of personal data. Many profiling activities are carried out without a clear legal basis: personal data is passed on without the explicit consent of the individuals concerned. However, we are not aware of a clear empirical analysis providing an understanding of how companies and authorities deal with personal data. The study is proposed as a field study in which personal data is marked (e.g., by slightly modifying names, data, etc.) and given away to commercial companies (e.g., buying portals, club cards, etc.). The study is a mid-term study. Based on the postal and electronic advertisements received, it can be traced which entities have leaked personal data.


D14.6: From Regulating Access Control on Personal Data to Transparency by Secure Logging

Identity management controls the disclosure of personal data by data providers to data consumers. However, data providers obtain no indication as to whether data consumers use personal data according to the agreed privacy policy. Data providers are left with a number of privacy promises or expectations, but get no evidence that data consumers followed the agreed privacy policy. This deliverable proposes a “privacy evidence” by investigating the data usage of data consumers for given data providers. The proposal is based on log views of accesses to personal data, which data providers can check for compliance with privacy policies. Building blocks of a system architecture for “privacy evidences”, their requirements, and approaches for their realisation are presented.
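The secure-logging idea behind such privacy evidences can be illustrated with a hash-chained access log, in which each entry commits to its predecessor, so that later alteration or removal of an access record becomes detectable. A minimal hypothetical sketch (the class and field names are invented for illustration, not taken from the deliverable):

```python
import hashlib
import json

class AccessLog:
    """Tamper-evident log: each entry stores the hash of the previous
    entry, so a data provider auditing its log view can detect gaps
    or edits in the recorded accesses."""

    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis value for the chain

    def record(self, consumer, data_item, purpose):
        entry = {"consumer": consumer, "item": data_item,
                 "purpose": purpose, "prev": self.prev_hash}
        self.prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain; True iff no entry was altered or removed."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return prev == self.prev_hash

log = AccessLog()
log.record("insurer", "patient-record-42", "billing")
log.record("marketing-dept", "patient-record-42", "advertising")
print(log.verify())   # True

log.entries[1]["purpose"] = "billing"   # tamper with a recorded access
print(log.verify())   # False
```

In a deployed system the chain head would additionally have to be signed or anchored outside the data consumer’s control, since the consumer could otherwise simply rewrite the whole chain.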


[ Download ]

D14.7: Analysis of contemporary security techniques with respect to identification in business processes

In this deliverable, we give an overview of widely used identification
and authentication techniques, namely, password-based
authentication, smartcards, digital certificates and biometrics.
Then, we lay down the requirements for identification schemes from
the perspective of enterprises, business collaborators, and legal and
standardization institutions.
Afterwards, we shed light on the different properties of
contemporary identification schemes that are mostly considered for
business processes. Particularly, we focus on scalability and
flexibility, trustworthiness across domains, robustness, anonymity,
usability, efficient identity management, and legal grounds.


[ Download ]

D14.8: Privacy in Business Processes

This deliverable presents approaches for privacy in business processes where personal data is stored at the business partners’ site, such as in applications for eHealth or loyalty programmes. It presents the current approaches to privacy in business processes elaborated within FIDIS. The deliverable describes the threats to privacy in this setting and the underlying trust model. In addition to technical mechanisms, it also presents organisational means to preserve privacy.


D15.3/D13.10: FIDIS International Summer School

Training for PhD students within FIDIS is conducted through structured and timely events held at, or in association with, an academic institution from within the FIDIS network. To this end, the aim of this deliverable was to organise and execute the second FIDIS International Summer School, an event akin to a formal academic conference, with a mixture of keynote lectures by senior researchers and workshop presentations by PhD students. These are designed to tie in with other closely related events so as to expand the knowledge base and, as a by-product, to further publicise the work of FIDIS. This means that, whilst assuring the interests of FIDIS, the event also targeted participants from outside the FIDIS network. In this case it was jointly organised with the IFIP (International Federation for Information Processing) Working Groups 9.2 (Computers and Social Accountability), 9.6/11.7 (IT Misuse and the Law) and 11.6 (Identity Management). To further its value, scope and appeal, it was also combined with WP13’s Privacy Summer School (D13.10) to form an extended, enhanced and enriched event.

The combined event was successfully held on 1st–7th September in Brno, Czech Republic. The proceedings will subsequently be published by the official IFIP publisher, Springer Science and Business Media.


D16.3: Towards requirements for privacy-friendly identity management in eGovernment

This report describes, in a multi-disciplinary way, requirements for privacy-friendly identity management in eGovernment. The cooperation among the large number of disparate entities is compared with so-called ‘circles of trust’, whereby identity and service providers have to agree on procedures and conclude agreements, including on the allocation of their roles and responsibilities within the eGovernment context. The use of authoritative sources, the importance of authorisation management, and the authentication and assurance mechanisms are further identified as basic legal approaches for privacy-friendly IMS. Basic technologies that support the fulfilment of these requirements are presented and discussed.
The deliverable also discusses various advanced technical approaches, which
are valuable for eGovernment, in particular techniques for the management of
identities in networking infrastructures. This includes Private Information
Retrieval, DC networks and MIX networks. The BBox architecture, which may provide a secure logging system under certain conditions, is also given attention.
By way of concluding, an organisational framework for privacy policy handling
is suggested in combination with technical approaches to support privacy policy
handling. Various issues, however, are still open for further research.

[ Download ]

D16.4: Study on the relevance of Trusted Infrastructures for E-Voting

In this deliverable, the implications of Trusted Infrastructures for e-voting are assessed. This is done by defining a trust model for three different e-voting schemes (machine voting, Internet voting, and SMS voting), and by assessing how Trusted Infrastructure concepts (such as Trusted Computing functionalities) are able to address the underlying assumptions which are necessary for a secure, reliable and trustworthy e-voting process. In particular, we emphasize conflicting requirements such as anonymity vs. authentication, and receipt-freeness vs. vote verifiability. These identity-related requirements entail special technical approaches that Trusted Infrastructures can provide. We also shed light on the social and legal implications of deploying a Trusted Infrastructure to support an e-voting system.


[ Download ]

D17.2: New (Id)entities and the Law: Perspectives on Legal Personhood for Non-Humans

New entities in the information society that operate at increasing distance from
the physical persons ‘behind’ them, such as pseudonyms, avatars, and software
agents, challenge the law. This report explores whether such entities – abstract
persons – could be attributed legal rights and/or duties in some contexts, thus
creating entities that are addressable in law themselves rather than the persons
‘behind’ them. Are current legal constructions sufficient to solve potential
conflicts involving new entities, or would it help to create (limited) legal
personhood for these new entities? The report identifies three strategies for the
law to deal with the challenge of new entities: interpreting existing law,
changing the law with specific rules, and changing the legal system by granting
limited or full legal personhood to new entities. It provides a tentative
conclusion and an agenda for further research.


D17.3: Bridging the accountability gap: rights for new entities in the information society?

New entities in the information society, such as pseudonyms, avatars, software agents, and robots, create an ‘accountability gap’ because they operate at increasing distance from their principals. One way of addressing this is to attribute legal rights and/or duties in some contexts to non-humans, thus creating entities that are addressable in law themselves rather than the persons ‘behind’ them. In this article, we review existing literature on rights for non-humans, with a particular focus on emerging entities in the information society. We discuss three strategies for the law to deal with the challenge of these new entities: interpreting and extending existing law, introducing limited legal personhood with strict liability, and granting full legal personhood. To assess these strategies, we distinguish between different types of persons (abstract, legal, and moral) and different types of agency (automatic, autonomic, and autonomous). We conclude that interpretation and extension of the law seems to work well enough with today’s emerging entities, but that sooner or later, attributing limited legal personhood with strict liability is probably a good solution to bridge the accountability gap for autonomic entities; for software agents, this may be sooner rather than later. The technology underlying new entities will, however, have to develop considerably further from facilitating autonomic to facilitating autonomous behaviour, before it becomes legally relevant to attribute ‘posthuman’ rights to new entities.