
D5.2b: ID-related Crime: Towards a Common Ground for Interdisciplinary Research

Social and technical guidelines on preventing ID-related crimes: Authentication technologies


Socio-economic guidelines

The socio-economic analysis in chapter 4 provides us with a starting point to elaborate on socio-economic measures that can be taken to address ID fraud.  

1.    Raise awareness of the social and economic consequences of ID fraud:

a.    Society pays for convenient but weak authentication combined with strong authorisation through losses of money and trust.

b.    But society equally pays for security, through operational costs and loss of convenience, liberty, liberality and freedom.

2.    Use feedback control systems. These systems need a certain time for the balancing process, which should be granted to them. Pushing new authentication technologies onto the market while security and privacy aspects are not discussed and properly resolved will only increase people’s distrust in online transactions, leading to calls for ever more security measures (the cycle of distrust).

3.    The cycle of distrust described above must be slowed down. This could be achieved, for example, by:

a.    Using authentication schemes proportional to the communicational context and purpose; authentication should entail only the information required for the steps to be taken within the authentication / authorisation. In particular, personal identifiers granting strong rights (e.g. the credit card number) should not be used where a simple proof of age, or of the social system and the role taken therein, would suffice. This means creating and using a set of verifiable credentials, such as a (biometric) smart card that can only be obtained at a certain age.

b.    Using, in addition, socially accepted technical methods for authentication. For communication from organisations to their clients that takes place only once a month, a complicated, frequently changed password usually will not improve security (as it probably would in day-to-day communication within an organisation), because clients will write the complex password down somewhere, typically under the keyboard, which defeats the purpose.

c.    Improving the security of authentication processes in accordance with cause-effect correlations. Including biometrics in passports to fight ID fraud and terrorism (such as the 9/11 attacks in the USA) is a bad example in this respect: all the terrorists held valid passports, so biometrics in the passport will not prevent terrorists from crossing the border. The problem was not the passport itself, but the processes of issuing passports and of granting visas. The argument that we need an improved passport does not address the cause of the problem in this case.

d.    Using multiple independent and technically strong authentication systems where higher security in authentication and authorisation is required. In addition, the authorisation provided by each of them should be limited (decoupling of systems with high security demands). Single Sign-On may be convenient, but it ultimately lowers security, as it makes the corresponding identifiers more attractive to identity thieves (Kent and Millett 2001).

Technological guidelines

Social measures can only go some way in improving online security and fighting ID fraud. Awareness can be raised and people can be taught to behave more responsibly with respect to their ID data, but in the end culprits will find and exploit technical weaknesses in systems. Therefore, technical measures also have to be taken.

Current national actions in Europe and some European actions, e.g. the European passport, focus on the establishment of a digital identity document to deter identity fraud by combining the use of biometrics and smart cards. These technologies are discussed further below. Particular attention is paid to the use of privacy-enhancing technologies (PET). The objective is to demonstrate how the problem of identity theft can be tackled without necessarily surrendering privacy in the process.  

On the basis of established methods for risk assessment, such as those described in the IT Baseline Protection Manual, we can list a number of guidelines for authentication systems. The following guidelines are derived from the socio-economic chapter and from the technical paper that was prepared for the FIDIS wp5/wp8 workshop in May 2005. The guidelines can be presented on two levels:

  1. Authentication technologies in general 

  2. More secure authentication 

Authentication in general

Authentication should be kept as general as possible; no data unnecessary for the purpose of authentication should be required or used (data minimisation). A positive example in the offline world is the anonymous purchase of goods paid in cash in a shop, where no personal identification is required. A negative example is the use of a credit card number to verify that a customer is older than 18 years. To meet higher security needs, generally more personal data are required.

Generally, active authentication should be used. Following the principle of informational self-determination, which is coherent with the EU Data Protection Directive and can also be derived from Fair Information Practices, the user should be in control of where, when and for what purpose she is authenticating. Passive or behavioural biometrics in particular should only be used within active authentication procedures.

Technical authentication should be socio-psychologically accepted. In many cases authentication systems are implemented that are not accepted in a given communicational context and are therefore bypassed. One example is a complicated password for an authentication that is rarely used (e.g. once per month). Most users will store such a password in a non-secure way (e.g. under the keyboard) because they are unable to memorise it, so the aim of improving security by using a strong password cannot be reached. In another context, e.g. where the password is used every day in an enterprise by employees, a strong password may be a good and accepted solution.

Do not use irrevocable authentication data (e.g. a key or token); if these are stolen, abuse cannot be stopped easily. Many kinds of biometrics, such as the automatically recognised face, are difficult to revoke: one cannot change one's face easily. Cryptographic keys, in contrast, are comparatively easy to revoke and to exchange for new keys. Of course, the revocation process has to be defined and prepared in advance: it should be clear under which conditions authentication data is to be revoked and how this is to be performed (e.g. authenticated revocation).
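
A prepared revocation process can be sketched along the lines of a certificate revocation list (illustrative Python, hypothetical class name): fingerprints of revoked keys are recorded on a list that every verifier consults before accepting a credential.

```python
import hashlib

class RevocationList:
    """Toy revocation-list sketch: stores fingerprints of revoked keys."""

    def __init__(self) -> None:
        self._revoked: set[str] = set()

    @staticmethod
    def fingerprint(public_key: bytes) -> str:
        # A short, stable identifier for the key being checked
        return hashlib.sha256(public_key).hexdigest()

    def revoke(self, public_key: bytes) -> None:
        self._revoked.add(self.fingerprint(public_key))

    def is_valid(self, public_key: bytes) -> bool:
        return self.fingerprint(public_key) not in self._revoked
```

Who may call `revoke`, and under which conditions, is exactly the policy the text says must be defined in advance (e.g. authenticated revocation).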

Use methods for authentication without internal verification. User name, key and additional information / secrets (such as the date of birth) should not be related to one another by an algorithm. Once the algorithm is known (or inferred from known data combinations), missing parts of the combination can be calculated, or virtual identities can be created.
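
The risk can be demonstrated with a deliberately bad, entirely hypothetical scheme: if an identifier is computed from quasi-public attributes, anyone who learns the algorithm can recompute it for any victim.

```python
import hashlib

def naive_customer_id(name: str, birthdate: str) -> str:
    """BAD PATTERN (for illustration only): the 'secret' identifier is
    algorithmically derived from quasi-public data, so an attacker who
    learns the scheme can recompute it for any person."""
    return hashlib.sha256(f"{name}:{birthdate}".encode()).hexdigest()[:12]
```

Because the function is deterministic, an attacker who knows the name and birthdate reproduces the identifier exactly; identifiers should instead be generated randomly and stored, not derived.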

Do not use authentication systems that are difficult to update. Security requirements generally rise over time, and the introduction of a new authentication system can be expensive, especially when it is used by numerous clients.

More secure authentication

For higher security needs, use two-factor or multi-factor authentication: a combination of knowledge, possession and something you are, all of which have to match the authentication challenge, improves security.

To further improve security the different factors should be verified using independent authentication systems. 

Do not use centralised authentication systems for multiple purposes when higher security is needed. Authentication systems should be bound to a specified purpose. The use of context-specific keys is a method to raise the hurdles for identity thieves. Single Sign On (SSO) in contrast is convenient, but less secure.
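
Context-specific keys can be derived from one master secret so that the compromise of one context does not expose the others (a minimal HKDF-style sketch in Python; the context labels are hypothetical):

```python
import hmac, hashlib

def context_key(master: bytes, context: str) -> bytes:
    """Derive an independent per-purpose key: knowing one derived key
    reveals neither the master secret nor the keys of other contexts."""
    return hmac.new(master, b"ctx:" + context.encode(), hashlib.sha256).digest()

# e.g. banking_key = context_key(master, "banking")
#      webshop_key = context_key(master, "webshop")
```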

Use one-time authentication where possible. One-time credentials cannot be reused and are therefore less vulnerable to identity crimes than reusable identifiers.

Use two-way authentication to improve security where needed. Usually organisations authenticate clients in a one-way process, while the client cannot sufficiently authenticate the organisation (see chapter ). In addition to the one-way authentication, the client should authenticate the organisation, e.g. by using certified signatures for the exchange of messages. Two-way authentication can make certain attacks, such as phishing or man-in-the-middle attacks, much more difficult and thus improves security.
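
Two-way authentication over a pre-shared key can be sketched as a pair of challenge-response exchanges (illustrative Python; in practice certified signatures, as mentioned above, would replace the shared key):

```python
import hmac, hashlib, secrets

def prove(key: bytes, challenge: bytes) -> bytes:
    """Answer a fresh challenge: proves knowledge of the key without sending it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth(client_key: bytes, server_key: bytes) -> bool:
    """Both sides issue a fresh random challenge and must answer the
    other's correctly; a phishing site that does not hold the key
    cannot pass the client's check."""
    client_challenge = secrets.token_bytes(16)   # client -> server
    server_challenge = secrets.token_bytes(16)   # server -> client
    server_answer = prove(server_key, client_challenge)
    client_answer = prove(client_key, server_challenge)
    server_ok = hmac.compare_digest(server_answer, prove(client_key, client_challenge))
    client_ok = hmac.compare_digest(client_answer, prove(server_key, server_challenge))
    return server_ok and client_ok
```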

Digital identities

Generally, partial digital identities identified by digital pseudonyms are the means to model identity within the digital world. This section describes a number of relevant terms related to digital identities from the perspective of preventing identity fraud based on Pfitzmann and Hansen (2006).

Digital identity denotes attribution of properties to a person, which are immediately operationally accessible by technical means. A digital identity is always a partial identity (Pfitzmann and Hansen 2006).

Partial identities are subsets of attributes of a complete identity in the real world. On a technical level, these attributes are data; thus, a pseudonym might be an identifier for a partial identity. Whereas we assume that an identity in the real world uniquely characterizes an individual (without limitation to particular identifiability sets), a partial identity may not. This enables different degrees of anonymity. However, for each partial identity, appropriately small identifiability sets may be found such that the partial identity uniquely characterizes an individual.

The identifier of a digital partial identity can be as simple as an e-mail address in a newsgroup or a mailing list, through which its holder will attain a certain reputation. Digital identity should denote all those personally related data that can be stored and automatically interlinked by a computer-based application.

With regards to preventing identity theft (i.e. regarding accountability), a digital pseudonym needs to have special properties: 

  • It must be unique as ID (at least with very high probability) and  

  • must be suitable to be used to authenticate the holder. 

A digital pseudonym with these properties could be realized as a public key for testing digital signatures, where the holder of the pseudonym can prove holdership by forming a digital signature created using the corresponding private key (Chaum 1981). The most prominent examples of digital pseudonyms are public keys generated by the user himself/herself, e.g. using PGP.
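
The idea can be made concrete with a toy hash-based one-time signature (a Lamport scheme, standard-library Python): the published verification key serves as the pseudonym, and only the holder of the private key can produce a valid signature. Real systems would use PGP or similar, as the text notes; this sketch is purely illustrative.

```python
import hashlib, secrets

def keygen() -> tuple[list, list]:
    """Private key: 256 pairs of random values.
    Public key (the pseudonym): their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(message: bytes) -> list[int]:
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message: bytes, sk: list) -> list:
    """Reveal one preimage per message bit: proves holdership of the
    pseudonym. One-time only -- never reuse sk for a second message."""
    return [sk[i][b] for i, b in enumerate(_bits(message))]

def verify(message: bytes, sig: list, pk: list) -> bool:
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(_bits(message)))
```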

A public key certificate bears a digital signature of a so-called certification authority and provides some assurance of the binding of a public key to another pseudonym, usually held by the same subject. Where the pseudonym is the civil identity (the real name) of a subject, such a certificate is called an identity certificate. An attribute certificate is a digital certificate which contains further information (attributes) and clearly refers to a specific public key certificate. Independently of certificates, attributes may also be used as identifiers of sets of subjects. Normally, attributes refer to sets of subjects, not to one specific subject.

For preventing identity theft, also the following properties of pseudonyms may be of importance: 

  • limitation to a fixed number of pseudonyms per subject (Chaum 1981; Chaum 1985; Chaum 1990); 

  • guaranteed uniqueness (Chaum 1981; Stubblebine and Syverson, 2000); 

  • (controlled) transferability to other subjects; 

  • convertibility, i.e., transferability of attributes of one pseudonym to another (Chaum,  1985, Chaum, 1990);

  • possibility and frequency of pseudonym changeover; 

  • validity (e.g., guaranteed durability and/or expiry date, restriction to a specific application); 

  • possibility of revocation or blocking. 

Limitation to a fixed number of pseudonyms together with guaranteed uniqueness is necessary in cases where a person needs to be non-ambiguously recognizable by her pseudonym. Controlled transferability to other subjects is important for scenarios where delegation of rights is needed; for instance, where there are organisational rules for substitutes, a pseudonym can be transferred to a substitute for a defined time frame. Convertibility, as well as the possibility of pseudonym changeover, is a basic building block of privacy-enhancing identity management (see section for more details). Validity verification and the possibility of revocation or blocking are essential features for preventing identity theft in cases where a (business) relation ends or is compromised (e.g. by a compromised secret key for communication). In such cases it is important to be able to declare a used pseudonym invalid in order to prevent fraudulent usage of that pseudonym by others.

In addition, there may be some properties for specific applications (e.g., addressable pseudonyms serve as a communication address) or due to the participation of third parties (e.g., in order to circulate the pseudonyms, to reveal civil identities in case of abuse, or to cover claims). 

Some of the properties can easily be realized by extending a digital pseudonym by attributes of some kind, e.g., a communication address, and specifying the appropriate semantics. The binding of attributes to a pseudonym can be documented in an attribute certificate produced either by the holder himself/herself or by a certification authority. 

The concepts introduced here, and the properties associated with them, can be used to assess some of the technologies that can limit the risks of ID-related crimes.

The guidelines discussed here are not intended as a checklist that has to be completed in order to build applications that are safe from ID fraud. Implementing all of them would probably even result in a system that is not usable at all. A balance between usability and security will have to be struck; the guidelines presented here can serve as a starting point for finding that balance.


fidis-wp5-del5.2b.ID-related_crime_03.sxw
Denis Royer