
D3.9: Study on the Impact of Trusted Computing on Identity and Identity Management


DRM for personal files

The cryptographic support of a TPM can enable an application such as a word processor to secure its files against unauthorised reading. Files could be encrypted so that they can only be opened on the original PC.
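
A minimal Python sketch of the sealing idea behind such protection, with the TPM simulated in software; all names here are hypothetical and do not correspond to the real TPM API. The file key is bound to a measurement of the platform state, so it can be recovered on the original PC but not on a different one.

  # Illustrative software stand-in for TPM "sealing" of a file key.
  import hashlib, hmac, os

  TPM_ROOT_SECRET = os.urandom(32)      # never leaves the (simulated) chip

  def platform_measurement(components):
      # Hash the boot/software stack, analogous to extending PCRs.
      digest = hashlib.sha256()
      for component in components:
          digest.update(hashlib.sha256(component).digest())
      return digest.digest()

  def seal(file_key, state):
      # Bind the 32-byte file key to the current platform state.
      wrap = hmac.new(TPM_ROOT_SECRET, state, hashlib.sha256).digest()
      return bytes(a ^ b for a, b in zip(file_key, wrap))

  def unseal(blob, state):
      # A real TPM refuses to unseal in a different state; here a
      # wrong state simply yields an unusable key.
      wrap = hmac.new(TPM_ROOT_SECRET, state, hashlib.sha256).digest()
      return bytes(a ^ b for a, b in zip(blob, wrap))

  original = platform_measurement([b"bios", b"bootloader", b"os"])
  file_key = os.urandom(32)
  sealed = seal(file_key, original)
  assert unseal(sealed, original) == file_key            # original PC
  other = platform_measurement([b"bios", b"other-os"])
  assert unseal(sealed, other) != file_key               # different PC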

Problems occur when a user's TPM is destroyed or malfunctions. It will be very hard to restore the data if there is neither an unencrypted backup of the data (itself a security risk) nor a backup of the secret cryptographic keys from inside the TPM. Since even in TPM specification 1.2 not all keys are migratable (although privacy protection authorities have demanded that users shall have complete and exclusive control [129] of their IT systems), users currently run the risk of irrecoverably losing their data when using TPM-supported DRM for personal files.

Problems will also occur when a user buys a new PC and wants to transfer his files, as the two PCs have different TPMs.

DRM for media files

Media content providers want to use DRM mechanisms to protect their products against uncontrolled copying and distribution.  

On conventional PCs, DRM protection of media files usually fails because there are too many holes through which the protected content can slip. For example, virtual sound or video devices can write unencrypted media data to a file that is then no longer protected. Additionally, analogue copies of the content cannot be prevented.

Thus, there is a high interest in turning PCs into tamper-proof devices. A PC with a TPM and an OS supporting its functionality (a Trusted Platform) can act as such a tamper-proof device. Encrypting a media file with a unique public key generated by the TPM shall then ensure that it can only be played on the device whose TPM possesses the corresponding secret key. In the long run, DRM will work properly only if control over the platform is taken away from the user, to protect its mechanisms from being circumvented or broken.
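
A minimal sketch of this binding, assuming the Python "cryptography" library; the TPM-resident key pair is simulated by an ordinary RSA key, and the function names are hypothetical. The content key is wrapped with the device's public key (hybrid encryption), so only the platform holding the matching secret key can recover and play the content.

  from cryptography.fernet import Fernet
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  # On a real Trusted Platform the private half would be generated
  # inside the TPM and never leave it.
  device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  def package_for_device(media, device_public_key):
      # Hybrid encryption: symmetric content key, wrapped for one device.
      content_key = Fernet.generate_key()
      ciphertext = Fernet(content_key).encrypt(media)
      wrapped_key = device_public_key.encrypt(content_key, OAEP)
      return wrapped_key, ciphertext

  def play_on_device(wrapped_key, ciphertext, device_private_key):
      # Only the device holding the secret key can unwrap the content key.
      content_key = device_private_key.decrypt(wrapped_key, OAEP)
      return Fernet(content_key).decrypt(ciphertext)

  wrapped, ct = package_for_device(b"<media stream>", device_key.public_key())
  assert play_on_device(wrapped, ct, device_key) == b"<media stream>"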

But even then there might still be some holes left. Therefore another DRM mechanism tags the unencrypted content with a unique identifier (fingerprinting, watermarking [130]). If an unwanted copy of a media file is found, this allows content providers to trace it back to its owner. Of course, this works only if there is a database that stores who has used which unique public key.
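
A toy sketch of the tracing mechanism (all names hypothetical): each sold copy is tagged with a per-purchase fingerprint, and the sales register is exactly the database whose privacy implications are discussed next. A real watermark would be embedded imperceptibly in the signal rather than appended to the bytes.

  import os

  sales_register = {}  # fingerprint -> purchaser: the privacy-critical database

  def sell_copy(media, purchaser):
      # Tag the copy with a unique, per-purchase fingerprint.
      fingerprint = os.urandom(16).hex()
      sales_register[fingerprint] = purchaser
      return media + b"|fp:" + fingerprint.encode()

  def trace_copy(found_copy):
      # Map a leaked copy back to the original purchaser.
      fingerprint = found_copy.rsplit(b"|fp:", 1)[1].decode()
      return sales_register.get(fingerprint, "unknown")

  copy = sell_copy(b"<audio data>", "alice@example.org")
  assert trace_copy(copy) == "alice@example.org"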

This is a serious privacy issue: to be able to trace back any unwanted copy at any time, the information on who has purchased which media file has to be stored forever. As a result, generating detailed profiles of users becomes possible.

TPM specification V1.2 introduced a protocol for Direct Anonymous Attestation (DAA [131]), but content providers, who want to be able to track the source of an unwanted copy, have no interest in offering anonymity to their customers.

DRM for media files therefore does not benefit home users in any way; as long as DRM concepts aim to identify them, it is a severe threat to their privacy.

DRM for software products

As with DRM for media files, DRM for software products is intended to prevent uncontrolled copying and distribution. Furthermore, on a Trusted Platform, software can protect itself by restricting the right to manipulate its code or environment.

As an example, cheating in games is both annoying and common. Game developers try to protect their products from cheaters, but as they cannot control the environment the game runs in, cheaters usually find ways to manipulate it.

While game developers cannot directly control a Trusted Platform either, they can let the game use the TPM (the OS has to support this) to detect manipulation and react accordingly, as sketched below.
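
A minimal sketch of the detection step, with the TPM's measurement facility simulated by a plain SHA-256 hash (hypothetical names): the game compares a fresh measurement of its loaded code against a reference value and refuses to run on a mismatch. On a real Trusted Platform the measurement would be anchored in a PCR, so a cheater could not simply patch the check itself.

  import hashlib

  def measure(code):
      # Stand-in for a TPM-anchored measurement of the loaded game code.
      return hashlib.sha256(code).hexdigest()

  REFERENCE = measure(b"original game code")

  def integrity_ok(loaded_code):
      # React accordingly: refuse to start, or report the manipulation.
      return measure(loaded_code) == REFERENCE

  assert integrity_ok(b"original game code")
  assert not integrity_ok(b"original game code + cheat patch")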

The same applies to any other software as well: word processors, spreadsheet programs, media players, and even the OS itself. While this implies increased resistance against malware such as viruses, Trojan horses, or spyware, it also means that once a virus or similar has managed to install itself onto a Trusted Platform (e.g. by tricking a user with administrative privileges into clicking “OK” and authorising it against the TPM), it can use the same TC mechanisms to protect itself against manipulation, e.g. removal.

End User License Agreements (EULAs) that a user has to approve during software installation are currently not valid in Europe and many other parts of the world [132]. While users in those locations can currently simply ignore such EULAs, software can use TC to enforce such license regulations [133], ignoring the legal situation.

Software can use TC not only to protect itself against unwanted manipulation, but also against manipulations that are intended by the user. For example, users of a certain OS can currently use programs such as xp-AntiSpy to disable mechanisms that automatically connect to the Internet without notice and uncontrollably transfer data to remote servers [135]. Using TC, such manipulation not intended by the software company could be prevented.

Manufacturers of image editing software could protect their product not only against patches that remove the included Central Bank Counterfeit Deterrence Group (CBCDG) module, which prevents users from manipulating images of banknotes [134], but also against enhancements from independent developers who did not, for example, pay a fee. This would work just like the small chips in some ink cartridges that are meant to protect the printer aftermarket from third-party products [135]. Again, this only works properly if control over the PC is taken away from the user to prevent circumvention.

The ability of manufacturers to protect themselves against competition from third parties can, on a technical level, lead to the same negative monopoly effects that software patents can cause on a legal level.

Anonymity Services

Despite the controversial discussion about possible further intentions or uses of Trusted Computing to enforce digital rights, the same mechanisms can also serve to improve confidence in anonymity services. TC-related anonymity issues in the context of identification are covered in section 8.6; in the following we briefly describe the implications of TC for general anonymity services.

We refer to an anonymity service as a service that allows its users to access a certain resource on the Internet anonymously. This could be surfing the Web anonymously, sending anonymous e-mails or establishing anonymous TCP/IP connections. We assume a client-server network architecture: the anonymity service itself consists of a number of servers, and end-users need client software to use it. Examples of such anonymity services are AN.ON [108], Tor [112] and Mixminion [111], which are based on the theoretical concept of mixes developed by David Chaum [110].
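
The following minimal sketch illustrates the layered encryption at the core of the mix concept; for brevity it uses symmetric Fernet keys from the Python "cryptography" library where a real mix network would use the mixes' public keys. The sender wraps the message once per mix; each mix removes exactly one layer, so no single mix sees both the sender and the final destination.

  from cryptography.fernet import Fernet

  # One key per mix; here three mixes in a fixed cascade.
  mix_keys = [Fernet.generate_key() for _ in range(3)]

  def wrap(message, keys):
      # Sender: encrypt for the last mix first, for the first mix last.
      for key in reversed(keys):
          message = Fernet(key).encrypt(message)
      return message

  onion = wrap(b"GET http://example.org/", mix_keys)

  # Each mix strips exactly one layer and forwards the remainder.
  for key in mix_keys:
      onion = Fernet(key).decrypt(onion)

  assert onion == b"GET http://example.org/"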

The security of such anonymity services depends on the assumption that no k out of the n anonymity servers collude and that an attacker cannot obtain enough secret information (e.g. cryptographic keys) to deanonymise a user.

Remark: If we were willing to believe that Trusted Computing is absolutely secure, we could simply establish one big centralised anonymity server and use link encryption together with dummy traffic on the lines [107]. This would give us perfect anonymity. But history has taught us that such a perfect security technology does not and will not exist. Therefore we have to decide carefully how much, and for what reason or purpose, we trust in Trusted Computing. It seems hard to give a mathematically sound or formal model for this, but generally speaking we see Trusted Computing as a complementary technology that can help to make things easier and more secure; we do not want to rely on it too heavily (as in the “one big anonymity server” scenario).

Given the client-server architecture of the anonymity service, one can distinguish the following use cases for Trusted Computing:

  1. Trusted Computing on the server side 

  2. Trusted Computing on the client side 

  3. Trusted Computing on both sides 

These cases will be discussed in the following sections. 

Trusted Computing on the server side

Trusted Computing on the server side of an anonymity service results in a number of advantages, for the users of the anonymity service as well as for the operators of the servers. Moreover, it is easier to deploy there and therefore feasible, mostly because the number of servers is small compared with the number of clients, and because the servers are dedicated to the anonymity service, so the resources (hardware and software) can be chosen to match the requirements of Trusted Computing exactly. This is more complicated on the client side, where general-purpose machines are typically wanted.

From a user’s point of view, trustworthiness (the “security”) is one of the most important properties of the anonymity service. As stated above, this often implies that he has to be sure that no k out of the n anonymity servers collude and that an attacker cannot obtain enough secret information (e.g. cryptographic keys) to deanonymise him. Hence the user has to select the servers he wants to use carefully.

In order to make a substantiated decision, one needs information about the servers and their operators, e.g. the hardware and software used for the server, who the operator is (a private person, a company, a privacy commissioner), the location of the server, etc. Naturally, this information has to be authentic, which is typically realised by means of certification and digital signatures. The drawback is that this causes a lot of bureaucratic overhead paired with high (monetary) costs. Eventually this leads to deployment problems, as volunteer or non-profit server operators cannot afford it.

Trusted Computing can help to solve this problem by making the trustworthiness of an anonymity server independent of the trustworthiness of its operator. That means that some of the (organisational) trust in the operators shifts to trust in the technology of Trusted Computing, because Trusted Computing ensures confidentiality even against the owner (operator) of the anonymity server. Note that the client side does not need to implement Trusted Computing; it must only be able to act as a verifier in the remote attestation procedure, which merely means that the client can check digital signatures and cryptographic hash values, as sketched below.
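
A minimal sketch of what acting as a verifier entails, assuming the Python "cryptography" library; the attestation identity key is simulated by an ordinary RSA key, and KNOWN_GOOD stands for a published reference measurement of an audited server configuration (both hypothetical). The client needs exactly one signature verification and one hash comparison.

  import hashlib
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH)

  aik = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  KNOWN_GOOD = hashlib.sha256(b"audited anonymity-server stack").digest()

  def quote(state, nonce):
      # Server side: the TPM signs the platform state plus a fresh nonce.
      return aik.sign(state + nonce, PSS, hashes.SHA256())

  def client_verifies(state, nonce, signature, aik_public):
      # Client side: check the signature, then compare the hash value.
      try:
          aik_public.verify(signature, state + nonce, PSS, hashes.SHA256())
      except InvalidSignature:
          return False
      return state == KNOWN_GOOD

  nonce = b"fresh-client-nonce"
  signature = quote(KNOWN_GOOD, nonce)
  assert client_verifies(KNOWN_GOOD, nonce, signature, aik.public_key())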

Additionally, Trusted Computing offers advantages to the server operator as well. Basically, it enhances the security of the server, helping the operator to keep his guarantees regarding the provided anonymity. This is of course of special interest in the case of a commercially offered anonymity service.

If we think of anonymity for the masses, we need highly available, high-performance anonymity servers that offer a very good quality of service. Consequently, the operation of the server hardware will be outsourced to an Internet service provider (ISP) that has the necessary connectivity and uptime guarantees. Now the operator (who is no longer the owner) of the anonymity server has to be sure that the staff of the ISP will not reveal confidential information. Again, Trusted Computing solves this problem by protecting confidential information even against the owner of the machine.

The outsourcing scenario becomes more complicated if we take into account that one will rent not real hardware but virtual hardware. Virtualisation is another emerging technology, used to load multicore server machines as much as possible; typically, renting a “virtual” server is much cheaper than renting a “real” one. Trusted Computing has to be designed to deal with virtualisation properly. That means, for instance, that not only must the virtual machines running on one computer be securely isolated from each other, but it must also be ensured that the controlling hypervisor gains no access to confidential information.

Another advantage for the server operator/owner is that any juridical, political or ethical pressure to deanonymise someone, whether exerted by politicians, law enforcement agencies or criminals, is useless and therefore unlikely to be applied, as the operator is not able to reveal any confidential information even if he wanted to.

As stated above, we see Trusted Computing only as a complementary security technology. Therefore we do not propose changes to the anonymity protocols themselves. One such imaginable change would be to remove nearly all asymmetric cryptographic operations from low-latency anonymity services (like AN.ON and Tor). This could be achieved by establishing only one anonymous communication channel during login to the anonymity service and then using this channel for all subsequent communication, as sketched below. This introduces linkability between otherwise independent communication actions, but that is not a problem as long as the linkability information is kept confidential within the trusted computing platform.
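
A minimal sketch of that imagined change (not something we propose), again assuming the Python "cryptography" library: a single asymmetric operation at login wraps one channel key, and all subsequent messages use only cheap symmetric operations.

  from cryptography.fernet import Fernet
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  # Login: the only asymmetric operation, establishing one channel key.
  channel_key = Fernet.generate_key()
  handshake = server_key.public_key().encrypt(channel_key, OAEP)
  server_channel = Fernet(server_key.decrypt(handshake, OAEP))

  # Every subsequent message on the channel is purely symmetric.
  client_channel = Fernet(channel_key)
  for message in (b"request 1", b"request 2", b"request 3"):
      assert server_channel.decrypt(client_channel.encrypt(message)) == message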

Trusted Computing on the client side

Trusted Computing on the client side can improve the security of the anonymous communication even further. Naturally, TC can be the “secure anchor” needed for any kind of confidential communication. Beyond this general fact, there are additional advantages.

According to [117], “…anonymity is the stronger, the larger the respective anonymity set is…”. Although Trusted Computing cannot directly be used to distinguish or authenticate individuals, it can at least provide reliable information about how many distinct machines are connected to the anonymity service. This puts the user in a better position when estimating the size of the anonymity set as a measure of anonymity, and the anonymity service provider can give more reliable information about the quality of service (in terms of quantity of anonymity) he offers. If one day “authenticated locations” (i.e. location stamps) become part of the Trusted Computing technologies, the calculation of the anonymity set could be refined further.
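
A toy sketch of that estimate, assuming (hypothetically) that each verified attestation report carries a stable per-platform identifier: the size of the anonymity set can then be bounded by counting distinct machines rather than possibly duplicated pseudonyms.

  def anonymity_set_size(attestation_reports):
      # Count distinct attested machines among the connected clients.
      return len({report["platform_id"] for report in attestation_reports})

  reports = [
      {"platform_id": "platform-a", "state": "ok"},
      {"platform_id": "platform-b", "state": "ok"},
      {"platform_id": "platform-a", "state": "ok"},   # same machine twice
  ]
  assert anonymity_set_size(reports) == 2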

Another interesting point about Trusted Computing on the client side is that it could strengthen my own anonymity if the other users have Trusted Computing, even if I have not. As above, this arises from the fact that my anonymity depends on the behaviour (cooperation) of the other users. Technically speaking, this means that every user can check for himself (using remote attestation) whether the others use secure client software, whether they send the agreed amount of dummy traffic, etc.

Trusted Computing on the client side of the other users is also a helpful tool if one wants to implement the techniques described in [108] to hide online/offline periods, making intersection attacks more difficult. The basic idea of [108] is that a user prepares messages that other users send to the anonymity service during his offline periods. One of the main problems is that the user has to be sure that his “proxies” will in fact send the messages he prepared. With the help of Trusted Computing, the user can at least be confident that the machines of the “proxies” will send these messages whenever possible (i.e. whenever they are not switched off or disconnected from the Internet).
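
A toy sketch of the preparation step (hypothetical names): the user pre-packages messages together with their intended sending times, and each proxy submits those that are due. Trusted Computing would supply the missing assurance that exactly this submission loop is running on the proxies' machines.

  def prepare_offline_messages(messages, send_times):
      # User: pre-package messages for later submission by proxies.
      return [{"send_at": t, "payload": m}
              for t, m in zip(send_times, messages)]

  def proxy_loop(jobs, now, submit):
      # Proxy: submit every message that is due; remote attestation
      # would let the user verify that this loop is actually running.
      for job in jobs:
          if job["send_at"] <= now:
              submit(job["payload"])

  sent = []
  jobs = prepare_offline_messages([b"message 1", b"message 2"], [100, 200])
  proxy_loop(jobs, now=150, submit=sent.append)
  assert sent == [b"message 1"]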

Trusted Computing on both sides

So far we have not been able to identify any scenario in which the availability of Trusted Computing on both the client and the server side yields additional benefit beyond simply summing up the properties described in the two previous sections. It therefore remains unclear whether the use of Trusted Computing on both sides will lead to emergent phenomena.
