
D3.3: Study on Mobile Identity Management



Freiburg Privacy Diamond

The Freiburg Privacy Diamond (FPD) is a model that tries to capture the essence of anonymity with regard to the two most important forms of mobility in mobile communications: device mobility and user mobility. It must therefore consider at least four types of entities: the action itself, the device used for the action, the user who performs the action, and the location at which the device and the user are situated.

The FPD (see figure 3-1) describes how these entities are related and how an attacker can use knowledge about these relationships to break anonymity. With this completely interconnected graph it is possible to describe which information can be inferred from other information. The use of the FPD is illustrated in a very simplified fashion by the following example: 

An attacker attempting to disclose the identity of a user tries to reveal the relationship between the user and an action. To do this, he could find out which device was used for this action and then find out who used this device. If the identity of the device used for the action is concealed, e.g. by a mix network (Berthold, Federrath and Köpsell, 2001; Chaum, 1981) or by Crowds (Reiter and Rubin, 1998), this deduction is not possible. It may also happen that both the device and its location are known, e.g. if the user goes to an Internet café; however, there is then no a priori knowledge of who is using the device. This knowledge can only be gained if the user reveals his or her identity directly.
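The attacker's reasoning in this example can be sketched as a transitive closure over known relations. The following is an illustrative sketch, not part of the FPD literature; the observed pairs are hypothetical observations chosen to mirror the example above.

```python
# Relations the attacker has observed, as ordered pairs of entities.
# Hypothetical scenario: the attacker has linked the user to a device,
# and that device to an action.
known = {("user", "device"), ("device", "action")}

def transitive_closure(rel):
    """Repeatedly add (a, c) whenever (a, b) and (b, c) are both known."""
    closure = set(rel)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

# With both links observed, the user-action relation is deducible.
print(("user", "action") in transitive_closure(known))
```

If the device-action link is concealed, e.g. by a mix network, the pair `("device", "action")` is missing from `known` and the closure no longer contains `("user", "action")`: the deduction fails.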


Figure 3-1: Attacker model for mobile users: Freiburg Privacy Diamond 


Model Assumptions

The privacy diamond is used to represent the knowledge of the attacker in the following situation: a user operates a device at a certain location to initiate an action. Four entities are necessary to model the situation: the user, the action, the location and the device. These will be addressed below. Time will only be considered as an implicit parameter. 

Mobile users use a device to perform actions. These actions are considered to be atomic; during an action neither the user, the device of the user nor the location of the user changes. The action is also instantaneous; it is carried out while the user uses the device. To model location information, the world is divided into cells. The size of these cells determines the maximum resolution to which a device or user can be located. 

Users perform actions using a single device from a set of devices. The device is located at the same position as the user. This assumption is realistic as the user has to be in the proximity of the device to operate it. 

The definition of the device includes all software running on it. If software could migrate from device to device, as mobile agents do, this node of the graph would have to be split into several nodes; this situation is not considered here. 

Classes of Anonymity Mechanisms

Intuitively, there are five loop-free paths that can be used to deduce the identity of a user by linking an action to that user: 

  1. user to action directly 

  2. user via location to action 

  3. user via device to action 

  4. user via location and then device to action 

  5. user via device and then location to action 
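
That these are exactly the loop-free (simple) paths follows from the diamond being a complete graph on the four entities. A small sketch, assuming nothing beyond that structure, enumerates them by depth-first search:

```python
# The four entities of the privacy diamond; every pair is connected,
# so the graph is complete and neighbours need no explicit edge list.
nodes = ["user", "device", "location", "action"]

def simple_paths(start, goal, visited=()):
    """Depth-first enumeration of loop-free paths from start to goal."""
    if start == goal:
        yield visited + (goal,)
        return
    for nxt in nodes:
        if nxt != start and nxt not in visited:
            yield from simple_paths(nxt, goal, visited + (start,))

paths = list(simple_paths("user", "action"))
for p in paths:
    print(" -> ".join(p))
print(len(paths), "loop-free paths")  # five, matching the list above
```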

For anonymising systems that are secure against an attacker calculating the transitive closure, all five paths have to be broken. There are four minimal ways of doing this (see figure 3-2), leading to four classes of minimal anonymity mechanisms. Minimal means that it is not possible to re-connect a severed relation in the privacy diamond without allowing the attacker to infer the relation of user to action through transitive closure. 
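The claim that exactly four minimal classes exist can be checked mechanically: treat each of the five paths as the set of edges it uses, and search for minimal edge sets whose removal breaks every path. This sketch is an illustration of that check, not taken from the deliverable:

```python
from itertools import combinations

# Edges of the complete privacy-diamond graph on u(ser), d(evice),
# l(ocation), a(ction); each edge is an unordered pair.
E = [frozenset(p) for p in
     [("u", "a"), ("u", "d"), ("u", "l"),
      ("d", "a"), ("d", "l"), ("l", "a")]]

# The five loop-free user-to-action paths, as the edge sets they use.
paths = [
    {frozenset(("u", "a"))},                                               # direct
    {frozenset(("u", "l")), frozenset(("l", "a"))},                        # via location
    {frozenset(("u", "d")), frozenset(("d", "a"))},                        # via device
    {frozenset(("u", "l")), frozenset(("d", "l")), frozenset(("d", "a"))}, # location, then device
    {frozenset(("u", "d")), frozenset(("d", "l")), frozenset(("l", "a"))}, # device, then location
]

def breaks_all(cut):
    """A cut works iff it removes at least one edge from every path."""
    return all(path & cut for path in paths)

# Collect all edge sets that break every path, then keep only those
# with no smaller breaking subset (minimality).
cuts = [set(c) for r in range(1, len(E) + 1)
        for c in combinations(E, r) if breaks_all(set(c))]
minimal = [c for c in cuts if not any(o < c for o in cuts)]

for m in sorted(minimal, key=len):
    print(sorted("".join(sorted(e)) for e in m))
print(len(minimal), "minimal cuts")  # four, one per class in figure 3-2
```

Every minimal cut necessarily contains the direct user-action edge, and the four solutions correspond to the four diagrams of figure 3-2.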

Anonymising mechanisms in the category described by the privacy diamond in figure 3-2a are those that do not require mobility, e.g. mixes (Berthold, Federrath and Köpsell, 2001; Chaum, 1981) and DC-nets (Chaum, 1988; Waidner and Pfitzmann, 2002). The privacy diamond in figure 3-2b describes anonymising mechanisms that rely on user mobility, such as phone booths or Internet cafés. An anonymising mechanism in category c) relies on broadcasts to and from a specific device. Both categories b) and c) rely on users changing their devices; it is therefore not possible to employ a personal device. Category d) requires terminal mobility, enabling users to use their own devices. It also permits the location of the action to be visible on the network, thus allowing optimisation of routing, etc. RFC 3041 (Narten and Draves, 2002) and location addressing are examples of mechanisms in this class. 

An anonymity mechanism that is not minimal may belong to two classes at once; this can happen when mechanisms of different classes are combined. In figure 3-2, relations that must be obscured by the anonymity mechanism are shown as dotted arrows. 


Figure 3-2: Four minimal possibilities for anonymity mechanisms 


The Freiburg Privacy Diamond models the information processing of an attacker. It can be used to understand the impact of user and terminal mobility on anonymising systems. It can be seen that mobility offers new possibilities for designing anonymity mechanisms. This is important as new anonymising mechanisms are required that observe the constraints imposed by mobile communications systems.

By weighting the relations, the model could be used to analyse anonymity mechanisms, providing a measure of the degree of anonymity and of the confidence that a specific user performed a certain action. For a real attacker, however, using this model to attack anonymity would have a major drawback: since the model does not allow possibilities to be excluded, the information being sought could be lost in the mass of inferences. 

