D2.1: Inventory of Topics and Clusters
Identification processes (and more precisely authentication) can rely on different characteristics of a person, such as:
Something the person is
Something the person does
Something the person knows
Something the person has
Identification can also be done via the use of a third (hopefully) trustworthy party, for example a certification authority.
The first identification mechanism (something the person is) relates to characteristics directly associated with the person. Identifiers based on biometrics are a typical illustration.
The second identification mechanism (something the person does) relates to behaviours that can be associated with the person, and can be considered a particular case of the previous mechanism. Such characteristics are more implicit attributes that can nevertheless be observed, such as behavioural patterns in a digital environment or attitudes in a social context.
The third identification mechanism (something the person knows) relates to information that the person is supposed to know. This includes passwords, PINs and private information (e.g., a mother’s maiden name).
The fourth identification mechanism (something the person has) relates to the possession of an artefact that is used specifically for the identification process. Examples include hardware-based tokens such as smart cards, software tokens such as digital certificates, and keys.
These mechanisms differ in their usability (e.g., access to some biometric characteristics may be difficult) and their reliability (e.g., a key can easily be transferred or stolen).
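The combination of these factors can be sketched in code. The following is a minimal illustration (all names and the challenge-response protocol are our own assumptions, not part of the deliverable) of a two-factor check that combines "something the person knows" (a password, stored as a salted hash) with "something the person has" (a token holding a shared secret):

```python
import hashlib
import hmac
import secrets

# Hypothetical two-factor sketch: "something the person knows" (a password,
# stored as a salted PBKDF2 hash) plus "something the person has" (a token
# proving possession of a shared secret via an HMAC challenge-response).

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 strengthens the stored verifier against brute-force guessing.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def token_response(token_secret: bytes, challenge: bytes) -> bytes:
    # The token answers a fresh challenge without revealing its secret.
    return hmac.new(token_secret, challenge, "sha256").digest()

def authenticate(password: str, salt: bytes, stored_hash: bytes,
                 token_secret: bytes, challenge: bytes, response: bytes) -> bool:
    knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
    has = hmac.compare_digest(token_response(token_secret, challenge), response)
    return knows and has

# Enrolment
salt = secrets.token_bytes(16)
stored = hash_password("correct horse", salt)
token_secret = secrets.token_bytes(32)

# Login attempt
challenge = secrets.token_bytes(16)
response = token_response(token_secret, challenge)
print(authenticate("correct horse", salt, stored, token_secret, challenge, response))  # True
print(authenticate("wrong guess", salt, stored, token_secret, challenge, response))    # False
```

Requiring both factors illustrates why their reliabilities differ: stealing the token alone, or guessing the password alone, is not enough.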
Protecting from identification (and protecting privacy)
As previously indicated, identification is not always desirable (privacy has to be protected), and mechanisms have to be proposed to help people protect themselves from undesired identification. This protection is becoming even more important since advances in technology (such as profiling) can even have an impact on the state of democracy (Hildebrandt, 2005). For instance, profiling could be used via personalisation to manipulate people’s behaviour on a massive scale, reducing freedom of self-determination and personal autonomy, and therefore eroding societal freedom. In a doomsday scenario, personalisation services could put cultural and social diversity at stake: one political or religious message dominates the whole discourse (Hildebrandt and Backhouse, 2005).
Different concepts, means and mechanisms can be used to protect the privacy of a person (Hansen and Pfitzmann, 2004). They principally consist in obfuscating the identification process (hiding the user’s characteristics or traces, and thus making authentication more difficult).
Examples of such concepts and mechanisms include:
unlinkability
unobservability
encryption
anonymity
pseudonymity
Note: FIDIS Del 7.4 (Implications of profiling practices for democracy and rule of law) will focus on the implications of profiling for democracy and the rule of law, integrating issues such as privacy and security but also posing the question: who is profiling whom. This touches on issues such as equality (discrimination; dissymmetry) and transparency (invisibility of data processing; transparency of those that are profiled). The legal framework will be discussed with respect to building in checks and balances: facilitating opacity of individuals and transparency of data controllers/users.
The concept of Unlinkability
“Unlinkability of two or more items (e.g., subjects, messages, events, actions, …) means that within this system, these items are no more and no less related than they are related concerning the a-priori knowledge.” (Hansen and Pfitzmann, 2004)
This definition of unlinkability is general and deals with unlinkability of any sort of “items”. (ISO, 1999) provides another definition that is more focused on the user. It defines the concept as: “[Unlinkability] ensures that a user may make multiple uses of resources or services without others being able to link these uses together. […] Unlinkability requires that users and/or subjects are unable to determine whether the same user caused certain specific operations in the system.”
We can also differentiate between “absolute unlinkability” (“no determination of a link between uses”) and “relative unlinkability” (i.e., “no change of knowledge about a link between uses”).
Unlinkability of an item can in particular be partial, “protecting” only some operations associated with the item. For instance, unlinkability may concern only the link with the originator of the item (such as the author of a message) or with the recipient of the item (the reader).
An example of an unlinkable item would be an anonymous message for which it is not possible to determine the identity of the author.
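The effect can be made concrete with a small sketch (the scenario and identifiers below are our own illustration, not from the deliverable): an observer whose only a-priori knowledge is the identifier attached to each use can link two uses exactly when the identifier is reused, so a fresh random pseudonym per use keeps the uses unlinkable.

```python
import secrets

# Illustrative sketch: the observer sees only the identifier attached to
# each use, and links two uses exactly when the identifier is the same.

def linkable(use_a: dict, use_b: dict) -> bool:
    return use_a["id"] == use_b["id"]

# Persistent identifier: the two uses are trivially linkable.
alice_id = "alice@example.org"
use1 = {"id": alice_id, "action": "read article"}
use2 = {"id": alice_id, "action": "post comment"}
print(linkable(use1, use2))  # True

# Fresh random pseudonym per use: the uses are no more related than any
# two arbitrary uses in the system.
use3 = {"id": secrets.token_hex(16), "action": "read article"}
use4 = {"id": secrets.token_hex(16), "action": "post comment"}
print(linkable(use3, use4))  # False: fresh identifiers collide only with negligible probability
```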
The concept of Unobservability
“Unobservability is the state of IOIs (the items of interest) being indistinguishable from any IOI at all.” (Hansen and Pfitzmann, 2004)
(ISO, 1999) provides the following less general definition: “[Unobservability] ensures that a user may use a resource or service without others, especially third parties, being able to observe that the resource or service is being used. […] Unobservability requires that users and/or subjects cannot determine whether an operation is being performed.”
As seen before, our approach is less user-focused and thus more general. With the communication setting and the attacker model chosen in this text, our definition of unobservability shows the method by which it can be achieved: preventing distinguishability of IOIs. Thus, the ISO definition may be applied to different settings where attackers are prevented from observation by other means, e.g., by encapsulating the area of interest against third parties.
Unobservability is stronger than unlinkability since it protects not only the content of an operation but even its existence. In particular, an unobservable item is unlinkable, since a precondition of linkability is awareness of the item’s existence.
A similar concept is untraceability, which is defined via its antonym: “traceability is the possibility to trace communication between application components and as such acquire private information”. In other words, traceability is the ability to obtain information about the communicating parties by observing the communication context (e.g., through the IP address).
An example of an unobservable item would be a secret message whose existence, and a fortiori its content, other parties cannot detect.
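One classical way to approach unobservability is padding combined with cover traffic, so that every transmission slot looks the same whether or not it carries a real message. The sketch below (slot size and format are our own assumptions) only illustrates the fixed-size property; a real system would additionally encrypt every slot so that content reveals nothing either.

```python
import os

SLOT_SIZE = 64  # every transmission is exactly this many bytes

def pad(message: bytes) -> bytes:
    # Length-prefix the message, then fill with random bytes so that every
    # slot has identical size. (In a real system each slot would also be
    # encrypted, so the random filler and real content are indistinguishable.)
    assert len(message) <= SLOT_SIZE - 1
    return bytes([len(message)]) + message + os.urandom(SLOT_SIZE - 1 - len(message))

def dummy() -> bytes:
    # Cover traffic: a slot carrying no message, identical in size.
    return pad(b"")

real = pad(b"meet at noon")
cover = dummy()
print(len(real) == len(cover) == SLOT_SIZE)  # True
```

An observer who sees only uniform, constantly flowing slots cannot determine whether any real operation is being performed, which is precisely the ISO formulation above.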
Concept and Mechanism: Anonymity
“Anonymity is the state of being not identifiable within a set of subjects, the anonymity set. The anonymity set is the set of all possible subjects. With respect to actors, the anonymity set consists of the subjects who might cause an action. With respect to addressees, the anonymity set consists of the subjects who might be addressed.” (Hansen and Pfitzmann, 2004)
Therefore, a sender may be anonymous only within a set of potential senders, his/her sender anonymity set, which itself may be a subset of all subjects worldwide who may send messages from time to time. The same is true for the recipient, who may be anonymous within a set of potential recipients, which forms his/her recipient anonymity set. The two anonymity sets may be disjoint, may be the same, or may overlap. Anonymity sets may vary over time (and normally decrease, since we can assume that digital systems do not “forget”).
It should be noted that this definition applies to any sort of subject, and not only to users.
(ISO, 1999) however provides a definition that only applies to a user: “[Anonymity] ensures that a user may use a resource or service without disclosing the user’s identity. The requirements for anonymity provide protection of the user identity. Anonymity is not intended to protect the subject identity. […] Anonymity requires that other users or subjects are unable to determine the identity of a user bound to a subject or operation.”
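The strength of anonymity can be quantified from the anonymity set. A common approach in the literature (our illustration below, not part of the deliverable) assigns each potential sender a probability of having caused the action and measures the Shannon entropy of that distribution, giving the effective size of the anonymity set in bits:

```python
import math

# Toy anonymity metric: with a probability attached to each subject in the
# anonymity set, Shannon entropy measures how uncertain an attacker remains
# about who caused the action.

def anonymity_bits(probabilities: list[float]) -> float:
    assert abs(sum(probabilities) - 1.0) < 1e-9
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Uniform suspicion over 8 potential senders: log2(8) = 3 bits.
print(anonymity_bits([1 / 8] * 8))  # 3.0

# An attacker who concentrates suspicion on one likely sender has largely
# destroyed the anonymity, even though the set still has 8 members.
print(anonymity_bits([0.93] + [0.01] * 7))  # well below 3 bits
```

This captures the remark above that anonymity sets normally shrink over time: each observation the attacker accumulates skews the distribution and lowers the entropy.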
Different levels of control of anonymity can be distinguished (Claessens et al., 2003):
Unconditional anonymity (no revocation possible)
User-controlled conditional anonymity
Trustee-controlled conditional anonymity
In some applications (user-controlled conditional anonymity), the user may wish to revoke his or her own anonymity. For example, in the case of a medical database, a patient asking for his medical records should be able to prove his identity.
In other cases (trustee-controlled conditional anonymity), the anonymity may be revocable by third parties under specific conditions (e.g., in the context of fighting criminal activities).
An example of anonymity is a person connecting to a web site or using a peer-to-peer network: the web site, or external actors, do not normally have the ability (excluding, for instance, some spyware techniques) to determine the identity of the person. However, under certain conditions (e.g., in the context of fighting piracy), a third party (a judge) may have the ability to disclose this information by linking the connection records (provided by the person’s Internet Service Provider) with the person’s IP address trace.
Mechanism: The use of Pseudonyms
Pseudonyms are identifiers of subjects, such as senders and recipients (Hansen and Pfitzmann, 2004). The subject to whom the pseudonym refers is the holder of the pseudonym.
“Pseudonym” comes from the Greek “pseudonumon”, meaning “falsely named” (pseudo: false; onuma: name). Thus, it means a name other than the “real name”. As the “real name” (written in ID papers issued by the State) is somewhat arbitrary (it can even be changed during one’s lifetime), we extend the term “pseudonym” to all identifiers, including all names or other bit strings. A pseudonym can be considered a mapping of the identifier “real name” into another name; the “real name” itself may be understood as the pseudonym resulting from the neutral mapping. To avoid the connotation of “pseudo” = false, some authors call pseudonyms (as defined here) simply nyms. Although this is concise, this document adopts the usual wording, i.e. pseudonym, pseudonymity, etc.; the reader may however find nym, nymity, etc. in other texts.
On a fundamental level, pseudonyms are nothing more than another kind of attribute. But whereas, in building IT systems, the designer can keep pseudonyms under his and/or the user’s control, this is practically impossible with attributes in general. It is therefore useful to give this kind of system-controlled attribute a distinct name: pseudonym.
An example of a pseudonym is the name that a player chooses when participating in an online game. Another example is eBay vendors or buyers who want to transact without disclosing their identities.
Pseudonyms represent a particular indirect mechanism that helps isolate (and protect) the identity of the person in the conduct of some activity. Pseudonymity should, however, be distinguished from anonymity in that a pseudonym can have substance and persistence: properties such as reputation can be attached to a pseudonym and exploited by its owner (e.g., for conducting business, developing social relationships or playing).
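The distinction from anonymity can be sketched as follows (a minimal illustration with hypothetical names; real systems such as eBay’s feedback mechanism are far richer): a registry keeps the pseudonym-to-identity mapping private while letting anyone query the reputation attached to the pseudonym.

```python
# Minimal sketch (all names hypothetical): a pseudonym isolates the holder's
# civil identity from an activity, yet can accumulate substance such as
# reputation, which pure anonymity cannot.

class PseudonymRegistry:
    def __init__(self):
        self._holders = {}     # pseudonym -> real identity (kept private)
        self._reputation = {}  # pseudonym -> score (publicly visible)

    def register(self, pseudonym: str, real_identity: str) -> None:
        self._holders[pseudonym] = real_identity
        self._reputation[pseudonym] = 0

    def rate(self, pseudonym: str, delta: int) -> None:
        self._reputation[pseudonym] += delta

    def reputation(self, pseudonym: str) -> int:
        # Anyone may query the reputation without learning who the holder is.
        return self._reputation[pseudonym]

registry = PseudonymRegistry()
registry.register("NightOwl42", "Jane Doe")  # the mapping stays with the registry
registry.rate("NightOwl42", +5)              # e.g., successful online sales
print(registry.reputation("NightOwl42"))     # 5
```

The private mapping also shows where trustee-controlled conditional anonymity would hook in: a trustee holding `_holders` could reveal the identity under defined conditions.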
Mechanism: Encryption
Encryption relates to the protection of the item of information itself, in contrast to unobservability, which relates to the protection of the process of exchanging the item. The circulation of an “encrypted” item may therefore be observable, and the sender and recipient of the item may be visibly linked to each other; however, the content of their exchange remains unknown and private.
For instance, encrypting the content of an email (e.g., using PGP) is a typical example of a mechanism contributing to the protection of a person’s identity via content hiding.
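PGP itself combines public-key and symmetric cryptography; as a self-contained illustration of content hiding we instead use a one-time pad (our choice, not the deliverable’s), where a random key as long as the message makes the ciphertext carry no information about the content, even though the exchange itself remains observable:

```python
import secrets

# One-time pad sketch: XOR with a truly random, single-use key as long as
# the message. The ciphertext is what an observer sees circulating; the
# content is recoverable only with the shared key.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

message = b"the contents of this mail are private"
key = secrets.token_bytes(len(message))  # shared secret, used exactly once

ciphertext = xor_bytes(message, key)    # observable, but uninformative
recovered = xor_bytes(ciphertext, key)  # only the key holder can do this

print(recovered == message)  # True
```

Note what this does and does not hide: an observer still sees that a message of this length passed between the two parties (the link is observable), but learns nothing about its content.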
Other Mechanisms for Managing Identification
The perspective presented has a relatively technical orientation and is used primarily in the context of digital systems. The management of identification and of identity can also rely on more traditional approaches (which have to be adapted to take the context of digital systems into account). For instance, the approach can be legal: laws and rules can be elaborated to regulate and define the limits of what is permitted. It can also be educational: making people aware of identity risks and providing them (educating them) with good practices for improved management of their identity by themselves. This represents a very effective “tool” that is often overlooked (E&Y, 2004).