
D3.2: A study on PKI and biometrics

Physiological (or Passive) Methods

Iris recognition

Iris recognition was developed in the early 1990s by John Daugman at the University of Cambridge. The method analyses the individual pattern of the iris and is today mainly used for verification purposes (access control), e.g. at various airports in Europe and the USA, and in a pilot project for non-Schengen travellers in Europe [RAI04].

Description of the method 

The complex structure of the iris is not genetically determined, but develops randomly between the 3rd and 8th month of pregnancy in the eye of the foetus. The structure of the iris remains constant throughout life, although in the first years after birth the colour can change through further pigmentation. Today more than 400 different characteristics of these structures can be discriminated [AMB03], of which about 173 are used for iris recognition. The likelihood that two people have identical iris structures is estimated to be 1:10^78 [AMB03].

The starting point of the method is a monochrome camera picture taken under near-infrared (NIR) light at a distance of 0.1 to 1 metre. The resolution of the picture varies from 50 to 140 dpi. Generally, the procedure runs through the steps shown in the figure below [AMB03].

 


Figure 4‑: Procedural steps for iris recognition

 


Figure 4‑: Picture of an iris and graphical representation of an IrisCode

 

The so-called IrisCode (a string of 2048 bits) is calculated using, for example, the Daugman algorithm [DAU04]. The figure above shows an example of an iris picture together with a graphical representation of the calculated IrisCode; the outer and inner borders of the iris are marked with white lines.

The comparison of an IrisCode against a stored template is done mainly using the ‘Hamming Distance’ method [DAU04], [AMB03]. 
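As an illustration of this comparison step, the following minimal sketch (Python with NumPy) computes the normalised Hamming distance between two 2048-bit codes, counting only bits where both occlusion masks are valid; the decision threshold used here is illustrative, not a value prescribed by [DAU04].

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits, counted only where both masks are valid."""
    valid = mask_a & mask_b
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        raise ValueError("no overlapping valid bits to compare")
    disagreements = np.count_nonzero((code_a ^ code_b) & valid)
    return disagreements / n_valid

rng = np.random.default_rng(0)
template = rng.integers(0, 2, 2048, dtype=np.uint8)   # stored IrisCode
probe = template.copy()
probe[rng.choice(2048, 200, replace=False)] ^= 1      # ~10% noisy bits
mask = np.ones(2048, dtype=np.uint8)                  # no occlusion here

hd = hamming_distance(probe, template, mask, mask)
print(f"Hamming distance: {hd:.3f}")                  # ~0.098: same iris
print("match" if hd < 0.32 else "non-match")          # illustrative threshold
```

In practice the comparison is repeated over several cyclic shifts of the code to compensate for head tilt, and the smallest distance is kept.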

Value of the method 

The method is well tested and performs well against the ‘seven pillars’ [LIB05]; only acceptability was determined to be low. This may change as familiarity with the method and its usability improve.

Limitations 

There are a few known limitations, especially in the use of the method. They apply mainly to:

  • People who have no eyes or no irises (the absence of the iris is called aniridia and affects about 1.8 in 100,000 people)

  • Blind people, who have problems aligning their eye with the camera taking the picture

  • People with pronounced nystagmus (tremor of the eye), who have problems getting a proper picture of their iris.

Privacy aspects 

Current research gives evidence that certain diseases, such as glaucoma and iritis, can be diagnosed from the (raw) picture of the iris.

 

Hand Geometry Measurement

Hand geometry has been used for more than 20 years, mainly in the USA, but examples of the use of this method in Europe have been reported as well. The method analyses individual geometric patterns of two fingers or the whole hand and is today mainly used for verification purposes (access control), e.g. at various airports or nuclear power stations in the USA.

 


 

Figure 4‑: Example of a commercially available hand geometry reader

Description of the method 

The method uses the following patterns: the length of the fingers, their width and height, and the bending of the fingers [AMB03]. These structures remain constant, at least for some significant time.

The starting point of the method is a monochrome three-dimensional picture taken with a CCD camera. For that purpose the hand is fixed in the reader using pegs (see the figure above). A picture resolution of 150 dpi is typically used.

Generally, the procedure runs through the steps shown in the figure below:

To calculate the reference value, 25 to 90 reading points are used (see the examples below). The reference value is 9 to 25 bytes long, and various algorithms are used for its calculation. To compare the reference value with a stored template, various methods such as the Euclidean distance, the Hamming distance or neural network methods such as Radial Basis Functions (RBF) or Gaussian mixture models (GMM) are used. Currently the GMM method seems to lead to the best FAR and FRR values.
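A minimal sketch of the Euclidean-distance variant of this comparison follows; the feature values, their number and the acceptance threshold are invented for the example.

```python
import numpy as np

def verify(features, template, threshold):
    """Accept if the Euclidean distance to the stored template is small enough."""
    distance = np.linalg.norm(features - template)
    return distance, distance <= threshold

# Hypothetical reference value: finger lengths and widths in millimetres
template = np.array([78.2, 71.5, 80.1, 62.3, 18.4, 16.9])  # stored reference
reading = np.array([78.6, 71.1, 80.4, 62.0, 18.6, 16.7])   # fresh measurement

distance, accepted = verify(reading, template, threshold=2.5)
print(f"distance={distance:.2f} mm -> {'accept' if accepted else 'reject'}")
```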

 


 

Figure 4‑: Procedural steps for hand recognition

 

 


 

Figure 4‑: Examples of reading points used for hand geometry

Value of the method 

The method is well established, especially in the USA. Current systems reach a FAR between 0.0001% and 0.1% and a FRR between 0.0007% and 1.0%. An equal error rate (EER) of 0.1% has been reported. The method is used within a pilot project carried out by the American Immigration and Naturalisation Service, the Passenger Accelerated Service System (INSPASS). In 2001, 20,000 verifications were carried out every month [AMB03].

Limitations 

The method needs quite bulky and heavy readers and, compared to fingerprinting, is more expensive in terms of infrastructure. Enrolment takes about 30 seconds. The method needs physical contact with a reader, which leads to some discussion of hygienic concerns [AMB03], especially in Asia. The method is vulnerable to changes of the hand geometry caused e.g. by ageing, the wearing of jewellery, injuries or diseases such as arthritis and gout. Owing to the rapid change of the geometry of the hand, the method cannot be applied to children. The uniqueness of the reference value used for the hand geometry is limited, which restricts the method to verification purposes.

Privacy Aspects 

From the raw picture many diseases such as arthritis, gout or Marfan syndrome can be diagnosed. However, reverse calculation from the reference value to the hand geometry seems to be impossible.

 

Behavioural (or Active) Methods

Signature recognition

A person’s signature is regarded as a trusted method for user identity verification and is used in areas such as law enforcement, industry, security control and financial transactions. People are able to recognise their own signature at a glance, whereas the examination of signature authenticity can be regarded as a scientific endeavour. Generally, the verification of handwritten objects, including handwritten signatures, is broadly used. Examples of applications are the confirmation of a document’s authenticity, access to controlled documents, and the investigation of forgery (concerning contracts, cheques, formal agreements, etc.). People use their signature in daily transactions as a transaction-related identity verification means, for instance in banks, at work, when being paid, and as a means of identity verification for official documents. The way a person signs their name is considered a behavioural biometric. An automatic signature verification system verifies a person’s identity by examining the person’s signature and comparing it, as well as the way the person writes it (where the system performs on-line processing of the data), to the person’s enrolled signature samples.

Technical description of the methods

The methods used for automatic verification of signatures depend on whether the input is captured in real time or provided as the result of scanning and digitising source documents. Two types of data are mainly collected from a person’s signature: features concerning the process of signing, such as the time taken to sign, the speed, the acceleration, the pen pressure, the number of times the pen is lifted from the paper and the pen directions (which require additional hardware), and features specific to the signature itself. Even if a person manages to duplicate the visual image of another person’s signature, it is very difficult to duplicate the way that person signs their name. However, taking into account the fact that a person’s signature changes over time, the signature verification system should be able to adjust to such slight changes.

In the case of verifying handwritten signatures provided off-line, information such as the speed, the acceleration and the time taken to write is not available, and thus signature verification becomes even more challenging. Researchers have reported, however, that there exist features in a person’s signature that can be considered invariant, such as the ratio of tall-letter height to small-letter height and the distance between letters. A method proposed by Juan J. Igarza et al. [IGA03] for on-line signature verification uses Hidden Markov Models. They used a tablet which obtained values of pressure, X and Y coordinates, and pen azimuth and inclination for each sample. The signature data is digitised and stored in the database in the form of a matrix. Given the fact that the signing process includes varying factors such as speed, size and rotation, the method includes a pre-processing stage during which time normalisation takes place. The training set is formed using a number of signatures of each person, aiming at collecting representative positive examples. Further details of the use of the Hidden Markov Model are described in annex 3.
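The following sketch outlines this per-user HMM approach under stated assumptions: the hmmlearn library stands in for whatever implementation [IGA03] used, signatures are reduced to (x, y, pressure) sequences, and the threshold rule is illustrative.

```python
import numpy as np
from hmmlearn import hmm  # assumed library; [IGA03] does not name one

def normalise(sig):
    """Simple pre-processing: zero-mean, unit-variance per channel."""
    return (sig - sig.mean(axis=0)) / (sig.std(axis=0) + 1e-9)

def enrol(genuine_signatures, n_states=8):
    """Train one Gaussian HMM on the genuine samples of a single user."""
    seqs = [normalise(s) for s in genuine_signatures]
    lengths = [len(s) for s in seqs]
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
    model.fit(np.vstack(seqs), lengths)
    # Per-sample log-likelihoods of the training set fix a decision threshold.
    scores = [model.score(s) / len(s) for s in seqs]
    return model, min(scores) - 1.0   # illustrative safety margin

def verify(model, threshold, signature):
    """Accept if the normalised log-likelihood clears the threshold."""
    sig = normalise(signature)
    return model.score(sig) / len(sig) >= threshold
```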

A model for off-line signature verification aiming at limiting the number of genuine samples used for training was proposed by Santos et al. [SAN04]. The number of genuine samples is rather limited and usually less than the number of samples required for training by classifiers such as Hidden Markov Models, neural networks and support vector machines in order to capture the variability in the way a person signs their name. The method includes a set of graphometric features and a neural network classifier. Two pattern classes are used: a genuine signature set for each person and a forged signature set, which is divided into simple (signature samples of the same shape as the genuine one), random (genuine signatures of another person, whose signature is not necessarily enrolled in the system) and simulated (convincing imitations of the genuine signature) sets. The genuine signature sample set is used for the production of the training model.

A subset of the features (such as calibre, white spaces, curvature and apparent pressure), which according to graphometric studies are used by experts on suspect documents, was chosen and related to static and pseudo-dynamic computational features such as space occupation (static), pressure area (pseudo-dynamic), stroke curvature (pseudo-dynamic) and stroke regularity (pseudo-dynamic), whereas a grid approach was used for the computation of the features. Thus, for instance, pressure areas are represented through variability in signature trace width and material deposition in a specific area of the trace, whereas the computation is performed through the grey-level average in the cell.

 

 

Figure 4‑: (a) Genuine signature and forgeries (b-d)

 

They use a method proposed by Can to produce a feature vector that discriminates between genuine signatures and forgeries, whereas the Euclidean distance vectors between samples are used as input to the neural network. The comparison stage involves two steps: training and verification. Taking into account the small variability (and thus small distance) between signature samples of the same person, during the first step the distances between samples are computed, and the target output is set to 1 for signatures of the same individual and 0 otherwise. In the second step, the two outputs of the trained neural network indicate whether two signature samples come from the same person or from different individuals. The final decision is based on the combination of all classifiers in a majority vote.
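A compact sketch of this dichotomy idea is given below. scikit-learn’s MLPClassifier stands in for the paper’s neural network, feature extraction from the grid is abstracted into ready-made vectors, and the majority vote here is taken over comparisons against each enrolled reference sample, a simplification of the paper’s combination of classifiers.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for the paper's NN

def distance_vector(features_a, features_b):
    """Element-wise distance between two graphometric feature vectors."""
    return np.abs(features_a - features_b)

def train(pairs_same, pairs_diff):
    """Learn 1 = 'same writer', 0 = 'different writers' from labelled pairs."""
    X = [distance_vector(a, b) for a, b in pairs_same + pairs_diff]
    y = [1] * len(pairs_same) + [0] * len(pairs_diff)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
    clf.fit(np.array(X), np.array(y))
    return clf

def verify(clf, references, questioned):
    """Majority vote over comparisons with each genuine reference sample."""
    votes = [clf.predict([distance_vector(ref, questioned)])[0]
             for ref in references]
    return sum(votes) > len(votes) / 2
```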

 

 

 

Figure 4‑: The off-line signature verification scheme based on the expert’s approach

Value and technical limitations

An on-line signature verification system performs better than an off-line one in terms of true positives and true negatives, owing to both the greater number and the greater difficulty of forging the factors taken into account, such as pen pressure, angle and speed. However, both methods require a substantial number of signature samples from the same person, leading to storage, speed and usability drawbacks. The reported results of the on-line signature verification method implemented by Igarza et al. were rather satisfying: FAR 0% and FRR 31.52%, using 3750 genuine signatures and an equal number of forgeries, whereas for the off-line method the average total error in verification is reported to be ~8%. The latter figure is reported to be mainly due to the small training set; moreover, the system was not prepared to identify simulated forgeries. Generally, however, a signature verification system is vulnerable to the mood swings of a person, a factor affecting the way a person signs their name.

Privacy aspects 

The main threat of a signature verification system lies in “who” has access to the data stored in the database containing the signature samples. Although this issue concerns all biometrics, the seriousness of the threat increases when it comes to signature verification, mainly because signatures are the most widely accepted method of proving identity and are thus used in most transactions, especially financial ones. Furthermore, a system performing signature verification can be used to track an individual’s transactions, and thus to monitor their financial and other behaviour without their knowledge or consent.

 

Key stroke analysis

Most biometric technologies require special hardware to convert analogue measurements of signatures, voices, or patterns of fingerprints and palm prints into digital measurements that computers can read. The use of keystroke dynamics, on the other hand, offers a software-only solution: it relies only on the existing keyboard to produce a digital measurement, binding it to standard user ID and password procedures.

Keystroke dynamics looks at the way in which a person types or pushes keys on a keyboard. The technique works by detecting a person’s typing patterns on a keyboard and comparing them against previously enrolled patterns. This biometric is based on typing characteristics of the individual such as typing error frequency, latencies between keystrokes, duration of keystrokes, inter-keystroke times and keystroke force. The original technology was derived from the idea of identifying a sender of Morse code on a telegraphy key, known as the “fist of the sender”, whereby operators could identify who was transmitting a message by the rhythm, pace and syncopation of the signal taps. During World War II, the Army Signal Corps discovered that an individual’s keying rhythm on a telegraph key was unique. In the early 1980s, the National Science Foundation and the National Bureau of Standards in the United States conducted studies establishing that typing patterns contain unique characteristics that can be identified.

Keystroke dynamics works by monitoring both the timing of keystrokes and the transitions between letters when typing in a (pass)word. Verification is based on the concept that how a person types, in particular their rhythm, is distinctive. Even if intruders guess the correct password, they cannot type it in with the proper rhythm. Someone physically located next to a user could possibly observe the key clicks when the password is typed in; however, once removed from the environment, the impostor’s memory of the rhythm lapses and they are unable to replicate the legitimate biometric template.

The main application of this biometric is in “hardening” password entry: providing greater assurance that a password was typed by the same person who enrolled it, by comparing the pace at which it was typed. The username-password paradigm is common throughout virtually all web services and system login programs, and passwords are also used for file encryption and authentication to private networks. Although the password is the most widely used identity verification method in the computer security domain, it has proven to be a rather vulnerable means of verification and thus a fairly weak mechanism for user authentication. A user’s account is entirely compromised if an adversary discovers the user’s username and password. The use of keystroke dynamics can provide a more secure verification system: in case the typing patterns differ, the keystroke-enhanced login system can prompt a personal question before proceeding with identification.

Keystroke dynamics can also be used for the extraction of profiling information about an individual, which can then be used for the provision of personalised services. Given that this biometric requires nothing more than a keyboard as the interaction device between the human and the system, it is non-intrusive and easily used. 

Technical description of methods

Keystroke dynamics utilises the manner and rhythm in which each individual types in passwords and logon codes to create a biometric template. It measures the keystroke rhythm of a user in order to develop a template that identifies the authorised user. A verification system is able to measure the differences in how people type and use these measurements to distinguish between different people.
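To make the raw material of such measurements concrete, the sketch below extracts the two most common timing features, dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next), from a list of timestamped key events; the event format is an assumption for illustration.

```python
def timing_features(events):
    """events: list of (key, press_time_ms, release_time_ms) in typing order."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]      # next press - this release
              for i in range(len(events) - 1)]
    return dwell, flight

# "cat" typed by one (hypothetical) user
events = [("c", 0, 95), ("a", 180, 260), ("t", 340, 430)]
dwell, flight = timing_features(events)
print("dwell:", dwell)    # [95, 80, 90]
print("flight:", flight)  # [85, 80]
```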

Monrose et al. [MON99] proposed a method for improving password security by combining keystroke biometrics with passwords. In this approach, the legitimate user’s typing patterns are combined with the user’s password to generate a hardened password that is considerably more secure than a conventional password against both internal and external attackers. The analysis is transparent to users while they gain system access via a password-authentication mechanism in the normal way (by entering a user ID and password string), and it is flexible enough to adjust to gradual changes in a user’s typing patterns. The method is based on password “salting”, in which the user’s password is prefixed with a random s-bit number (the “salt”) before hashing the password and comparing it with an enrolled value. In this method, some or all of the salt bits are determined using the user’s typing characteristics. During enrolment, the information stored in the user’s database template (similar to UNIX systems) is not just the password (‘pwd’) or the hardened password (‘hpwd’), but also includes:

 

  • a randomly chosen (typically 160-bit) number (r)

  • an “instruction table” encrypted with pwd, which contains instructions for the generation of hpwd from the feature descriptor, r and pwd. The instruction table is built as follows: first, a binary feature descriptor is created by thresholding the user’s keystroke features (typically 15 in number); then this binary feature descriptor and the random number r are used to create the instruction table using Shamir’s secret sharing scheme.

  • a “history file” encrypted with hpwd.

 

During authentication, the system uses the instruction table and the random number r from the user’s database template, together with the entered password ‘pwd’ and the keystroke features acquired during the authentication process, to compute ‘hpwd’. The latter is then used for the decryption of the encrypted history file. In the case of successful decryption, the user authentication is successful as well, and the random number r and the user’s history file in the template are updated. Otherwise, another instance of hpwd is generated with some error correction, and another authentication attempt is made. If the authentication remains unsuccessful after a fixed number of attempts, the result is authentication failure.
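The thresholding step that turns typing behaviour into descriptor bits can be sketched as follows; the feature values and thresholds are invented, and the secret-sharing machinery that combines the descriptor with pwd into hpwd is omitted.

```python
def feature_descriptor(features, thresholds):
    """One bit per feature: 1 if the measured value is below the threshold."""
    return [1 if f < t else 0 for f, t in zip(features, thresholds)]

# e.g. 15 timing features (ms) measured while the password was typed
measured = [92, 110, 75, 130, 88, 101, 95, 140, 70, 99, 85, 120, 93, 77, 105]
thresholds = [100] * 15          # per-feature thresholds (e.g. population medians)

bits = feature_descriptor(measured, thresholds)
print("".join(map(str, bits)))   # 101010101110110 -> binary feature descriptor
```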

Different technical approaches can be used. Commercial systems tend to rely on methods whose technical operation is easily understood, thus offering “psychological” reliability beyond anything that can be proven by experiments with end-users. These approaches are generally based on classical statistical methods, forming statistical templates of user patterns and performing pattern matching based on well-established metrics. In the research community, the more effective, interesting and more frequently presented approaches rely on methods such as neural networks.

Value and limitations 

The approach proposed by Monrose et al. is reported to be able to adjust to gradual changes in a user’s keystroke biometrics over time while maintaining the same hardened password; it is easy to use and works on actual keystroke data. The expected number of authentication attempts before a user successfully logs into their account is reported to be less than 2 at a reliability of 51.6%, and this number increases slightly when the reliability is improved to 77.1%. The method, however, runs the risk that after a sudden change in the user’s typing behaviour, the user will be unable to log into their account, since the correct hardened password will not be generated. A simple example of such a case is a user typing on a keyboard of a different style from their usual one.

It is important to recognise that keystroke dynamics, when layered on user ID and password, incorporates the statistical probability of password access control. Using a statistical formula, we can represent the probability of a security breach with the use of passwords alone. 

For a password selected randomly from 100 possible PC keyboard characters (upper and lower case) and unknown to an unauthorised user, the probability of guessing the password in a single try is P = (1/100)^n, where n denotes the character length of the password. If the tries are independent, the probability of a security breach within t tries is 1 - (1 - (1/100)^n)^t. Since (1/100)^n essentially equates to zero for any reasonable password length, this implies that without knowledge of the password there is little or no chance of a security breach.
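A quick numerical check of this formula, assuming an 8-character password and a million tries:

```python
def breach_probability(n, t, alphabet=100):
    """Probability of a breach within t independent tries of an n-char password."""
    per_try = (1 / alphabet) ** n           # P = (1/100)^n
    return 1 - (1 - per_try) ** t           # 1 - (1 - (1/100)^n)^t

print(breach_probability(n=8, t=1_000_000))  # ~1e-10: effectively zero
```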

Unfortunately, in the real world this is not the case. Users share their passwords, write them down, select easily guessed passwords and seldom change them unless forced to do so. The vast majority of passwords can be determined far too easily by intruders, especially if written on a post-it note stuck to the computer.

 

User verification based on recognising keystroke dynamics provides a unique level of security for logons and passwords. The technology measures keystrokes to generate the biometric signature. To illustrate the statistics of keystroke dynamics, the figures in the table below can be derived. When the probability of a security breach in the password-only example is multiplied by the factors in the table, one can see that even with full knowledge of someone’s password, the security of a system is still within acceptable limits because of keystroke dynamics. The proper design of a keystroke dynamics security system can provide additional deterrents to an impostor.

 

 

Number of Attempts    Probability of a Security Breach
1                     .00001
3                     .00003
5                     .00005
7                     .00007
9                     .00009
100                   .001

 

Table 4‑: Probability of a security breach using Keystroke dynamics

    

An example of the effectiveness of a keystroke dynamics device can be assessed using the following assumptions:

 

Logon Name = known! 

Password = known! 

Decision point = .01% or .0001 

False rejections = 3% 

Tries before lockout = 6 attempts 

Lockout time = 5 minutes 

 

Every 6 attempts result in a probability of .0006 of successful access, i.e. 6 times in 10,000 tries. The system locks for 5 minutes after every sixth try, or must be released by the system administrator, who would notice the attempts. With a 5-minute lockout, the system would be locked for (10,000/6) × 5 minutes, or approximately 140 hours, before a successful keystroke dynamics match could be expected. This is certainly a deterrent.
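The same arithmetic as a short check, under the assumptions listed above:

```python
# Deterrence arithmetic from the example above (all values are its assumptions)
decision_point = 0.0001        # per-attempt probability of a false accept
tries_per_lockout = 6
lockout_minutes = 5

expected_tries = 1 / decision_point                 # 10,000 tries on average
lockouts = expected_tries / tries_per_lockout       # ~1,667 lockouts
wait_hours = lockouts * lockout_minutes / 60        # ~139 hours of lockout
print(f"{wait_hours:.0f} hours of lockout time before an expected match")
```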

 

The use of passwords is strongly embedded in computer access systems worldwide, so it makes sense that better password management is required. It also opens opportunities for complementary technologies, which can enhance password systems and make them more secure. One such solution is the combination of passwords and biometrics. As biometrics become more accepted within enterprise IT security, keystroke dynamics is being recognised as an unobtrusive, non-invasive and very cost-effective solution. The only hardware requirement for keystroke dynamics is a computer keyboard.

Privacy 

The application focus of keystroke dynamics recognition is the verification of computer users at logon time. At this stage, verification is of course totally warranted and no privacy problems arise.

Keystroke biometrics can be collected from virtually anywhere throughout the world via an Internet connection, without requiring an individual to be at certain locations with access to specialised hardware. This, combined with the fact that recognition of keystroke dynamics can be abused in a way that is extremely difficult to track, leads to a serious threat to an individual’s privacy. A simple example could be a chat program that is able to measure users’ typing patterns and perform user identification regardless of whether the individuals use multiple accounts, nicknames not related to their personal information, different computers at different places, etc. In a similar way, in the case of centralised collection of identification data, a person’s profile could be built by tracking their actions and preferences on the Internet without their permission.

However, some of these malicious applications face many limitations for the time being: the ability of such applications to create a centralised collection of identification data is in question, and in many cases measurements are not feasible; for instance, a web server is not able to make such measurements, since data is sent in packets containing many key-presses. Moreover, the task of recognition itself faces many limitations. Nevertheless, pattern recognition systems perform well in many cases: the recognition of password typing is an established example, and the recognition of free typing is a similar application that has also provided successful experimental results. Still, there exists no conclusive study of the size of the threat, i.e. the possibility and/or difficulty of exploiting typing recognition in the environment faced by malicious software.

 

 
