
D5.2b: ID-related Crime: Towards a Common Ground for Interdisciplinary Research

Methods to manipulate Authentication Procedures

Two scenarios for identity fraud with biometrics

In this section we describe two scenarios for ID fraud using biometrics, written from different technical perspectives. The first focuses on laboratory tests to illustrate some general technical implementations of identity theft (see section ). It covers a situation that is not widespread at present, but one under study in numerous places, since it allows unobtrusive identification and is therefore very suitable for public places and shopping malls. The second takes the perspective of forensic experience with biometric fraud. Together, these scenarios suggest how biometrics, as an identity management system, functions in different fields and can be approached in different ways in order to counter ID fraud.

Scenario 1: Attacking an authentication, identification and tracking system using physical biometrics

Consider a system that performs real-time facial and/or body recognition and/or human tracking for surveillance, entertainment or commercial purposes. The digital identity in such a system, that is, the digital representation of a person, can consist of the digital representation of his face and/or body, a nickname, and his height (if human tracking is performed as well). Given a single uncompressed or compressed colour image, the human detection/localization process aims to automatically and reliably identify and determine all regions in the image that contain a human, regardless of the person's three-dimensional position, orientation and the lighting conditions. The human identification procedure uses human attributes extracted with the aid of face and body modelling, while the human tracking process aims to allow cameras to “follow” the identified human under varying and complicated conditions, such as occluded regions, abrupt motion, indoor/outdoor settings and changing illumination. 

The main specifications of such a system involve: 

  • the functionalities of the cameras (e.g. the cameras should continuously acquire images, they should share a common time code, and their field of view should be known in advance); 

  • the division of the system into modules (human content detection and localization, human content identification, human content tracking, real-time controller, cameras, database) and sub-modules; 

  • the input and the output of each module (exchanged data); 

  • the interaction between the system modules (exchanged control messages). 
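The module decomposition and message exchange above can be sketched as a minimal data flow. This is an illustrative assumption: the deliverable does not specify the actual data formats, so the message types, field names and stub logic below are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical message types exchanged between the modules listed above.

@dataclass
class Frame:
    camera_id: int
    timecode: float                    # shared time code across all cameras
    pixels: bytes                      # raw image data

@dataclass
class Detection:
    frame: Frame
    region: Tuple[int, int, int, int]  # bounding box of a detected human

def detect_humans(frame: Frame) -> List[Detection]:
    """Human content detection/localization module (stub)."""
    return [Detection(frame, (0, 0, 10, 20))]

def identify(det: Detection, database: dict) -> Optional[str]:
    """Human content identification module (stub)."""
    # A real module would extract face/body features from det.region
    # and match them against the enrolled records.
    return next(iter(database), None)

# Real-time controller: route one frame through the pipeline.
db = {"alice": {"height_cm": 170}}
frame = Frame(camera_id=1, timecode=0.0, pixels=b"")
matches = [identify(d, db) for d in detect_humans(frame)]
print(matches)  # -> ['alice']
```

The point of the sketch is the interface boundaries: each module consumes the previous module's output and the controller only routes messages, mirroring the module/sub-module division described above.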

For the sake of the scenario (experiment), such a system can be reduced to a simple indoor testing area (limiting the variation in environmental conditions and background) comprising a terminal and a camera for the registration phase, and a corridor with three cameras. The registration phase includes the acquisition of an image of the person to be tracked, as well as a nickname of their choice and their height (which is used to determine which camera is currently capturing images containing the person to be identified and tracked). After a person is enrolled into the system, the system automatically identifies and tracks them in the area that the cameras cover. 
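The enrolment record described above (image, self-chosen nickname, height) can be sketched as follows; the field names and the registry structure are assumptions, not the project's actual schema:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class EnrolledPerson:
    nickname: str        # chosen by the person at registration
    face_image: bytes    # captured at the registration terminal
    height_cm: float     # later used to select the tracking camera

registry: Dict[str, EnrolledPerson] = {}

def enrol(nickname: str, face_image: bytes, height_cm: float) -> None:
    """Store one enrolment record under the chosen nickname."""
    registry[nickname] = EnrolledPerson(nickname, face_image, height_cm)

enrol("bob", b"<image bytes>", 182.0)
print(registry["bob"].height_cm)  # -> 182.0
```

Note that the height is self-reported at this point, which is exactly the weakness exploited in the first fraud case below.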

 


Figure . Testing area of the scenario: The field of view and the human’s walking direction are depicted.


Figure . Hardware equipment of the scenario.

 

In this scenario, identity theft may take place either during registration to the system or during identification and tracking. The following cases of ID theft can be imagined in this setup. 

One case is the provision of fake height information during registration, which can prevent the system from tracking the person, but not from identifying them, since the height is used to estimate the distance of the identified person from specific cameras and thus to determine which camera should currently track them. This vulnerability can be overcome by having the system calculate the height automatically (through automatic regulation of the height of the registration camera, etc.). 
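Why a fake enrolment height breaks tracking but not identification can be illustrated with a simple pinhole-model sketch: the tracker estimates distance from the person's apparent size using the enrolled height, then hands over to the nearest camera. The focal length, camera layout and numbers below are illustrative assumptions.

```python
from typing import List

FOCAL_PX = 800.0  # assumed focal length of the cameras, in pixels

def estimate_distance_m(enrolled_height_m: float, bbox_height_px: float) -> float:
    """Pinhole model: distance = focal * real height / apparent height."""
    return FOCAL_PX * enrolled_height_m / bbox_height_px

def pick_camera(distance_m: float, camera_positions_m: List[float]) -> int:
    """Hand tracking to the camera covering the estimated distance."""
    return min(range(len(camera_positions_m)),
               key=lambda i: abs(camera_positions_m[i] - distance_m))

cameras = [3.0, 6.0, 9.0]        # coverage distances along the corridor (m)
bbox_px = 240.0                  # apparent height; person is really 6 m away

honest = pick_camera(estimate_distance_m(1.80, cameras=cameras[:] and 1.80 and 1.80, bbox_height_px=bbox_px) if False else estimate_distance_m(1.80, bbox_px), cameras)
faked = pick_camera(estimate_distance_m(1.20, bbox_px), cameras)  # lied at enrolment

print(honest, faked)  # -> 1 0 : the fake height selects the wrong camera
```

With the true height (1.80 m) the estimated distance is 6 m and camera 1 takes over; with the faked height (1.20 m) the estimate shrinks to 4 m and the wrong camera is selected, so tracking fails while identification is unaffected.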

Another case is when a person disguises himself. The system will then have difficulty identifying the subject if specific features of the person are changed (hair, facial colour, etc.). However, since height plays such an important role in tracking people in the given system, if a person tries to disguise himself as another registered person of a different height, the identification process may produce a false positive, but the tracking process will not. If the heights are similar, though, a person can mislead the system in two ways: the system will either (a) be unable to match the person with an image in its database, or (b) produce a false positive by misidentifying the disguised person. Moreover, if the person wears sunglasses or has a beard during the enrolment phase (registration), their facial features are not easily distinguished and identity fraud is thus easier to perform. These faults can be addressed. One solution is to segment both the captured image of the person to be authenticated and the registered image into very small pieces and to emphasize the segments that bear strong similarity. This allows human identification with a remarkable reduction, though not elimination, of false positives and false negatives, even if the face is partly occluded by sunglasses or a hat.
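The segment-and-emphasize idea can be sketched in a toy form: split both images into small pieces, score each pair of corresponding segments, and base the decision on the best-matching segments so that an occluded region (sunglasses, a hat) cannot veto the match on its own. The one-dimensional "images" and the scoring rule are assumptions for illustration only.

```python
def segments(img, size):
    """Split a flat pixel list into fixed-size segments."""
    return [img[i:i + size] for i in range(0, len(img), size)]

def seg_similarity(a, b):
    """Similarity in [0, 1]: 1 minus normalized mean absolute difference."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 - diff / 255.0

def match_score(probe, enrolled, size=4, keep=0.5):
    """Score a probe against an enrolled image, emphasizing the segments
    that bear strong similarity (the top `keep` fraction)."""
    scores = sorted(
        (seg_similarity(p, e)
         for p, e in zip(segments(probe, size), segments(enrolled, size))),
        reverse=True)
    top = scores[:max(1, int(len(scores) * keep))]
    return sum(top) / len(top)

enrolled = [100] * 16
occluded = [100] * 8 + [0] * 8   # lower half hidden by sunglasses/a scarf
print(match_score(occluded, enrolled))  # -> 1.0: the visible half still matches
```

Because only the strongest segments contribute, the occluded half lowers the score far less than a whole-image comparison would; this mirrors the reduction, but not elimination, of false negatives noted above.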

And what happens when only one camera is used for human authentication? A person could fool the system simply by presenting a photograph of the authorized user to the camera that performs authentication. To make the system more robust, more cameras could be used during the authentication phase (acquiring both frontal and side views of the person's face), so that more than one facial image of the person is captured and processed during the enrolment phase, or texture information could be used during face detection; neither measure, however, eliminates the danger of ID fraud. Combining facial recognition with another means of authentication (a PIN, a smart card, etc.) could reduce the success of an ID fraud attempt even further. 
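Combining the face score with a second factor can be sketched as follows; the threshold, the scores and the use of a hashed PIN are illustrative assumptions, not the project's design.

```python
import hashlib
import hmac

FACE_THRESHOLD = 0.8  # assumed minimum face-match score

def pin_ok(entered: str, stored_hash: bytes) -> bool:
    """Constant-time comparison of the entered PIN's hash."""
    digest = hashlib.sha256(entered.encode()).digest()
    return hmac.compare_digest(digest, stored_hash)

def authenticate(face_score: float, entered_pin: str, stored_hash: bytes) -> bool:
    """Require BOTH a sufficient face match and the correct PIN."""
    return face_score >= FACE_THRESHOLD and pin_ok(entered_pin, stored_hash)

stored = hashlib.sha256(b"4711").digest()

# A photograph may yield a high face score, but without the PIN it fails.
print(authenticate(0.95, "0000", stored))  # -> False
print(authenticate(0.95, "4711", stored))  # -> True
```

The conjunction is the point: a spoofed face alone, or a stolen PIN alone, is no longer sufficient, which is why the combination reduces (without eliminating) the success of an ID fraud attempt.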

Another case leading to a false positive, and one that is very difficult to deal with, is that of identical twins. This vulnerability, based on the system's difficulty in distinguishing between the twins, could be exploited by one twin choosing to use the identity of the other.  

Such a system could also be used when the registered people are those who should not be authorized by the system. In such a case, the system is much more vulnerable to ID fraud attack, since it is easier for a person to be not themselves than to be someone else! ID fraud can then take place by growing a beard and wearing sunglasses, by covering part of the face (under the pretence of a recent accident), or through surgery. Given that face recognition is a non-contact authentication technology (images can be captured even from a distance), user authorization can be performed without the person's notice by identifying the registered non-authorized people. Frequent updates of the system's database could improve the system's robustness and reliability, while the authentication process could be further improved by capturing body images as well, so that authentication requires both face and body features to match. In long-term applications, however, frequent updates of the person's records in the system are essential, since human body features change more easily. 
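The inverted "watch-list" mode described above can be sketched in a toy form: the enrolled records are the people who must not be admitted, so a person is authorized exactly when no record matches. Evading the match (beard, sunglasses, surgery) therefore grants access, which is why this mode is easier to defraud. The scalar "templates", names and threshold are illustrative assumptions.

```python
# Toy watch list: nickname -> stored template (here just a scalar feature).
WATCHLIST = {"mallory": 0.9}

def similarity(template: float, probe: float) -> float:
    """Toy similarity in [0, 1] between a stored template and a probe."""
    return 1.0 - abs(template - probe)

def authorized(probe: float, threshold: float = 0.8) -> bool:
    """Admit only if the probe matches NOBODY on the watch list."""
    return all(similarity(t, probe) < threshold for t in WATCHLIST.values())

print(authorized(0.9))  # -> False: matches mallory's record, entry denied
print(authorized(0.3))  # -> True: a disguise shifting the features suffices
```

Contrast this with the positive-authentication case: there an attacker must exceed a threshold against one specific record, while here merely falling below the threshold against every record is enough.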

Of course, the system's actual vulnerabilities and countermeasures depend on its application domain. Entertainment, security and government applications require different levels of reliability and face different likelihoods of attack. For instance, installing this kind of system in an amusement park will probably not run the risk of serious attacks, whereas terrorists may pose a serious threat in trying to undermine the same kind of system in an airport. 

 

fidis-wp5-del5.2b.ID-related_crime_03.sxw (Denis Royer)