D3.2: A study on PKI and biometrics
While cryptographic mechanisms are often and successfully used to secure biometric data, the role of biometrics in enhancing cryptography (e.g. in key management) is not so straightforward. It is necessary to realise that biometric data are in no way secret. Systems that incorrectly assume biometric measurements to be secret, and that base their whole security on the secrecy of biometric data, cannot be considered secure.
First of all, it is relatively easy to obtain a copy of a user's biometric characteristics. People leave fingerprints on everything they touch, and a picture of a user's face can easily be taken without the user's consent. Other biometric characteristics can be obtained similarly; there are only a few exceptions, such as the image of the user's retina, where it is difficult or impossible to obtain the biometric data without the user's consent. Even so, biometric data could, for example, be misused by the administrator of one biometric system against another biometric system in which the user is also enrolled. It is also difficult to find out that biometric data has been disclosed or captured by someone else (in contrast with, e.g., a lost or stolen token).
Even if the disclosure of biometric data (and its misuse) is evident, there is not much to be done, as biometric characteristics cannot be changed. Additionally, users have only a limited number of biometric characteristics (two eyes, ten fingers, etc.) to enrol. It must therefore be pointed out that biometric data cannot be used as capability tokens in the same way as secret keys or passwords. In secret-key or password-based access control schemes, the key or password itself can serve as a capability: knowing it can mean that the user has the right to use a certain application. This does not apply to biometric data.

The advantages of biometric keys ("secret" data derived from biometric characteristics) nevertheless seem very attractive. Such keys could be generated from fresh biometric data only when authorised users are present, and could be deleted immediately after use, eliminating the need for any key management, which is often the weakest point of a cryptographic system. One may therefore be tempted to create so-called "biometric signatures", i.e. signatures based on biometric characteristics, or to base traditional symmetric encryption on a key derived from biometric characteristics.
While biometrics are good at authenticating users, they cannot be used to authenticate messages or computers. There is no point in adding a biometric signature to a message: if a user's fingerprint is not secret, it makes no sense to attach it to every document he has written, since anybody else could do the same. The same holds for any other data derived from biometric measurements, including "biometric keys". It is not possible to derive secret data from public data by a publicly known algorithm, and keeping the derivation algorithm confidential instead contradicts Kerckhoffs' principle.
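A minimal sketch (Python, standard library only; the template value and function name are hypothetical) of this point: a "biometric key" produced from a public template by a publicly known algorithm can be reproduced by anyone who obtains a copy of the template, so it was never secret.

```python
import hashlib

def derive_biometric_key(template: bytes) -> bytes:
    # By Kerckhoffs' principle the derivation algorithm must be assumed public.
    return hashlib.sha256(template).digest()

# The same bytes the user enrols can be lifted by an attacker,
# e.g. from a fingerprint left on a glass.
template = b"minutiae feature vector of the user's right index finger"

user_key = derive_biometric_key(template)
attacker_key = derive_biometric_key(template)
assert user_key == attacker_key  # identical: the "key" was never secret
```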
Other factors against the use of biometrics-derived keys include the small amount of data that is truly 100% invariant over time and the difficulty of extracting such an invariant part from a biometric sample. A single flipped bit in a decryption key will cause the ciphertext to be decrypted completely incorrectly. Error-correcting codes can be deployed here, but they further reduce the amount of usable information. It is therefore not possible to base reliable and secure encryption on measurements that cannot be considered secret, cannot be changed and provide only a small amount of invariant data.
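The bit-flip fragility can be made concrete with a short sketch (Python, standard library only; the SHA-256-based stream cipher is a toy for illustration, not a real cipher): flipping a single key bit, as one unstable bit in a fresh biometric measurement would, makes decryption fail completely.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand a key into a keystream using SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with the keystream (same op decrypts)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = hashlib.sha256(b"idealised, perfectly stable biometric sample").digest()
ciphertext = xor_cipher(key, b"attack at dawn")

# Flip a single bit of the key, as one unstable bit in a fresh measurement would.
flipped_key = bytes([key[0] ^ 0x01]) + key[1:]

print(xor_cipher(key, ciphertext))          # b'attack at dawn'
print(xor_cipher(flipped_key, ciphertext))  # unrelated garbage
```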
The right place where biometrics can help is in securing the private/secret key. Private/secret keys are commonly protected by passwords or PINs; securing the private key with a biometric system might improve its protection. Even then, however, the system will not be a silver bullet. If the signer's computer is attacked by a Trojan horse, for example, the system might sign a forged document no matter whether a password, a PIN or biometrics is used to protect the private key.
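As a rough illustration of what PIN or password protection of a stored key amounts to (a hedged sketch; the parameters and the XOR wrapping are simplifications, not a recommended scheme), the private key is encrypted under a key derived from the PIN, so the PIN becomes the secret that must be captured:

```python
import hashlib, os

pin = b"1234"                          # hypothetical user PIN
salt = os.urandom(16)
# Derive a key-encryption key from the PIN (PBKDF2, standard library).
kek = hashlib.pbkdf2_hmac("sha256", pin, salt, 600_000)

private_key = os.urandom(32)           # stand-in for a real signing key
wrapped = bytes(a ^ b for a, b in zip(private_key, kek))  # toy "encryption"

# Unwrapping needs the PIN again; a biometric check could gate this step, but
# a Trojan horse on the host can still misuse the unwrapped key afterwards.
kek_again = hashlib.pbkdf2_hmac("sha256", pin, salt, 600_000)
unwrapped = bytes(a ^ b for a, b in zip(wrapped, kek_again))
assert unwrapped == private_key
```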
Smartcard-, mobile phone- or PDA-based solutions are therefore more promising. In such a case the device performs all the cryptography as well as the biometric matching; ideally, it would also incorporate the biometric reader. If the biometric measurements are not acquired by the signing device itself, the signing device must verify that the data were really captured by the authentic biometric sensor, otherwise the use of biometrics does not provide real security.
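One way such sensor authentication might look (a hypothetical sketch; the shared-key HMAC challenge-response and all names are assumptions, not a description of any particular product): the device issues a fresh nonce, and the sensor returns the sample together with a MAC binding the sample to that nonce, so injected or replayed samples without the sensor key are rejected.

```python
import hmac, hashlib, os

SENSOR_KEY = os.urandom(32)  # provisioned into sensor and device at manufacture

def sensor_capture(challenge: bytes) -> tuple[bytes, bytes]:
    """The sensor side: capture a sample and authenticate it to the device."""
    sample = b"fresh fingerprint image"  # stand-in for a real capture
    tag = hmac.new(SENSOR_KEY, challenge + sample, hashlib.sha256).digest()
    return sample, tag

# The device side: issue a fresh challenge and verify the returned tag.
challenge = os.urandom(16)
sample, tag = sensor_capture(challenge)
expected = hmac.new(SENSOR_KEY, challenge + sample, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)  # device accepts the sample
```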
During the signing process the signing device signs only a hash value. The hash is calculated by the application that wishes to sign the document, and there is always a possibility that the application shows one document on the screen but uses another document to calculate the hash. This hash value is then submitted to the signing device, which typically has no way to recognise the substitution and signs the supplied hash. The application (e.g. a Trojan horse) can thus use the signed hash to obtain a correctly signed document with completely different content from what the user approved on the screen. General-purpose computers such as PCs are far from immune to such Trojan horses, and since smartcards have no display that would allow the user to see the content of the document being signed, this scenario represents a serious threat to the very popular combination of a PC and a smartcard.
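A hypothetical sketch of this substitution attack (Python; the SigningDevice class and both documents are invented for illustration): the card receives only a 32-byte digest and has no way to tell which document produced it.

```python
import hashlib

class SigningDevice:
    """Stands in for a smartcard: it signs whatever hash it receives."""
    def sign(self, digest: bytes) -> bytes:
        # Placeholder for a real RSA/ECDSA signature with the card's key.
        return b"SIG(" + digest.hex().encode() + b")"

displayed_doc = b"Pay Alice 10 EUR"         # what the user sees and approves
substituted_doc = b"Pay Mallory 10000 EUR"  # what the Trojan actually hashes

card = SigningDevice()
signature = card.sign(hashlib.sha256(substituted_doc).digest())
# The signature verifies against substituted_doc, not displayed_doc; to the
# card, both hashes are just opaque 32-byte values.
```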
Instead of a smartcard, a more capable and/or intelligent device can be used to sign documents. Many current palmtops, mobile phones and PDAs are capable of displaying the document first and signing it only after the user has been verified and has approved the content. Although the variety of multimedia document formats complicates the situation, the use of PDAs, mobile phones and palmtops is a practical alternative to the popular smartcard.
Privacy Aspects
The implementation of biometric technologies is seen by many as an attempt to collect terabytes of data for intelligence purposes, thus creating a global surveillance infrastructure. Given that the effectiveness of biometrics is highly debatable (at least for wide-scale implementations), the privacy concerns become even more intense.
From a privacy standpoint the problems take two different perspectives. On the one hand, we see excessive control by governments, which now hold an overabundance of information about their citizens. The 'Trust me, I am from the government' stance has not even been accompanied by appropriate legislation, and the problems with data protection are many. On the other hand, the systematic and systemic failure of governments to 'control' that information, or its use for intelligence purposes, gives rise to many suspicions. On various occasions such failures have severely jeopardised the privacy of individuals, who may now become even more prone to identity theft or have their personal data exploited in hitherto unexpected ways.
This disproportionate exercise of power will lie at the very core of emergent phenomena, resistance to change and reaction to excessive control. With biometrics the balance is drastically shifted, as those able to exercise power will be few relative to those over whom power is exercised. Citizens' privacy is put at a disadvantage, and too much power now rests with government [DAV94]. People are constantly categorised, profiled and monitored, and their personal data are jeopardised on an unprecedented scale.
Identity theft then becomes just another by-product of such implementations, and it comes at a significant cost. According to the 2003 survey by the US Federal Trade Commission, over 27.3 million Americans had been victims of identity theft within a period of five years, and losses to businesses and financial institutions totalled nearly $48 billion. Numerous other examples point in the same direction; side-effects ranging from economic losses to counter-productivity suggest the need to pause and think before implementing identity management systems.
Another threat to privacy comes from the implementation of biometrics, but in quite a different way. Border control combined with biometrics leads to massive data collection and storage. There are, however, no guarantees that data collected for one purpose (i.e. processing by governments) will not be repurposed, in contradiction of all privacy laws. On the contrary, past experience shows that this has always been the case. Systems are rarely used only for the exact applications their designers had in mind; users, whether individuals or governments, have always been innovative with the information they collect. Especially now that such information can be endlessly replicated at zero cost, replication and repurposing become a necessary evil.

In the UK, for instance, information about 1,500 households can be 'obtained' for £250 when the government sells census information to junk mailers. An interesting 'innovation' comes from Canada, which refuses to make public servants' electronic mail addresses available: its government does not produce electronic directories on CD-ROM, but instead sells the information to private firms, who resell it to the public in different formats, including CD-ROM. Such examples should come as no surprise; governments, too, are interested in finding ways to make a profit. Even the US pillar of defence, the Pentagon, set up a company selling biochemical suits in the Middle East through an internet company at a 90% discount; to everyone's surprise this occurred after 9/11, defied a clear Pentagon policy and was uncovered only after a special investigation by the US Congress [DEM04]. It is all about supply and demand; the only difference is that with data it is much easier to supply. Privacy will suffer precisely because of this: as more and more data become available, centralised and easy to manipulate, privacy will be severely jeopardised.
A different facet of the privacy problems that biometrics will cause comes from a change in the scope of surveillance. Whereas in the past border control was simply that, with the adoption of biometrics its scope becomes totally different. The boundary is reset: besides border control, centralised databases and biometrics allow for the monitoring of movement [ALT03]. This is something new, intensifying privacy concerns and changing the nature of pursuing criminality. The 'innocent until proven guilty' assumption is replaced by the opposite notion of 'guilty until proven innocent', while biometrics reset the boundaries of surveillance with no evidence of wrongdoing (ibid).
The state of privacy law is no less alarming. US data protection law covers only US citizens, so what can one make of how privacy and data protection are treated when US citizens do not, in any case, fall within the scope of the US-VISIT program? Furthermore, there is no legal basis governing the collection, sharing, retention or use of the data. The creation of centralised databases, coupled with this lack of legislation, will create an unchecked surveillance system. Profiling of individuals and their behaviour is only the beginning; identity theft, data sharing and so on will be the unavoidable by-products.
Such attacks on privacy cannot go unheeded. Shielded by a legitimacy stemming from a moral stance against terrorism, nation states have embarked on a domination of the individual. Nation states want to maintain their power, and in an age characterised by increased mobility this means that technology has to be incorporated into the schema of collecting, processing, tracking and categorising information. To protect a carefully constructed heroic image, that of the individual helping in the fight against terrorism and accepting increased scrutiny of identity, governments are arrogating to themselves the right to invade the privacy of their citizens [ANG96]. We have to come to the realisation that new technology has changed many things: politics, economics and society. That change, however, is no simple, tidy transition; it is a severe and total dislocation from the past [ANG00]. A wide-scale implementation of biometrics is an experiment that may incur great costs for society as a whole, the economy and privacy. While the cost of compliance explodes and interoperability problems mount, cutting expenses, following the easy route and doing it fast will be an option that many countries will take. Even space programs, which have traditionally enjoyed proper preparation, management and funding, are sometimes underfunded, leading to disasters. The ambitious 'Beagle 2' Mars exploration project ended in failure: both the European Space Agency and the UK government failed to monitor the project, address weaknesses, commit funds and prevent organisational failures. What makes us believe that biometrics will work out better?
The message is therefore simple. A wide-scale implementation of biometrics will create a highly complex infrastructure. Such an infrastructure will allow for the unrestrained monitoring and profiling of individuals, thus constituting an immense attack on privacy. Exploitation of that infrastructure could bring with it identity theft on a grand scale for various purposes, and an unchecked system of surveillance that will again greatly impact privacy. Attacks on privacy, however, will come at a cost that might not be easily quantifiable. No doubt functionalists who look for answers within technology will have a techno-savvy answer for everything: special computing techniques, hardware controls and cryptographic systems will all be offered up on the altar of securing the implementation of biometrics. What is not always understood is that quite often technology is the problem and not the solution. As long as technology is used by people, there will always be aspects that cannot be 'controlled' in a cause-and-effect sense. What has to be balanced in the case of biometrics is the potential risk to privacy and its impact. This section has argued that there are very serious privacy concerns, with systemic complications, that should be taken into consideration.