Application Areas of Trusted Computing on the Server Side

The first seminal publications in the field of Trusted Computing (TC) date back to the early 1970s. It became an “emerging” technology in the past few years because an industry consortium, the Trusted Computing Group (TCG), started to implement the necessary hardware and software parts in order to build a sound and secure Trusted Computing platform. Nevertheless, it is still (emotionally) discussed what exactly Trusted Computing is and whether it offers more benefits for users or for commercial organisations, e.g. in scenarios like Digital Rights Management (DRM).

For now we assume that Trusted Computing comprises at least the following technologies and mechanisms: 

  1. Trusted computing base, which is the minimal set of hardware (e.g. the TPM chip specified by the TCG), firmware and software (e.g. a secure operating system) that is necessary for enforcing a security policy.

  2. Secure I/O, which offers protected paths for all data from input through processing to output. This means, for instance, that the user can be sure that his inputs cannot be intercepted by malicious software such as keyboard loggers.

  3. Sealed memory, which refers to protected memory that prevents other processes (and even unprivileged parts of the operating system) from reading or writing to it.

  4. Sealed storage, a technology which ensures that persistent data can only be read and modified by exactly the same combination of hardware and software which wrote the data.

  5. Authentic booting and (remote) attestation, which allow a user to be sure with which well-defined hardware and software he interacts, and to prove this even to third parties.

  6. Unique digital identities for computers, which means that each Trusted Computing base has a unique digital identity. This enables the owner of a computer to prove that a certain message originated from a computer he owns, or that two messages do (or do not) come from the same computer.
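
The sealed storage mechanism from item 4 can be illustrated with a deliberately simplified sketch (our own toy model, not the real TPM command interface): data is bound to a hash of the platform state at sealing time, and unsealing is refused once that state changes. A real TPM would additionally encrypt the payload with a chip-internal key and enforce the comparison in hardware.

```python
import hashlib

def seal(data: bytes, platform_state: bytes) -> dict:
    # Record a measurement (hash) of the platform that sealed the data.
    # Toy model only: a real TPM keeps keys and comparisons inside the chip.
    return {"measurement": hashlib.sha256(platform_state).digest(),
            "payload": data}

def unseal(blob: dict, platform_state: bytes) -> bytes:
    # Release the data only if the current platform measurement matches
    # the one recorded at sealing time.
    if hashlib.sha256(platform_state).digest() != blob["measurement"]:
        raise PermissionError("platform state changed: unseal refused")
    return blob["payload"]
```

In this model, booting a modified operating system changes the measurement, so previously sealed data becomes inaccessible, which is exactly the binding of data to a hardware/software combination described above.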

An important fact and fundamental principle about Trusted Computing is that it does not mean that the computing environment (hardware and software) can be trusted, but rather that one has to trust it. According to Ross Anderson, “In the US Department of Defense, a ‘trusted system or component’ is defined as ‘one which can break the security policy’.” This simply means that if the trustworthiness assumptions one has about a certain Trusted Computing based ICT system are wrong, then the whole protection offered by this system (in terms of security and privacy) can be broken.

The question immediately arises to what extent one should trust Trusted Computing. If one is willing to trust it absolutely, many (if not all) security- and privacy-related problems can be solved very easily. The reason is that most of the complex and complicated cryptographic mechanisms and protocols exist, or were designed, precisely to work around the untrustworthiness of the computing environment (software and hardware) used by the communication partners and third parties. As an example, FIDIS deliverable D14.2 identified the delegation of rights and secrets to proxies which act on behalf of the customer as one of the fundamental problems (with respect to security and privacy) in multi-stage business processes. Clearly, if these proxies are not trustworthy, they can use the data provided by the customer to contravene the customer’s interests and violate his privacy. Using Trusted Computing on the proxy side could easily solve this problem (under the assumption, as mentioned above, that one is willing to trust the Trusted Computing absolutely). In this case the proxy would be trustworthy (and could be trusted) “by definition”.

On the other hand, the history of security and privacy technologies, as well as of ICT in general, has shown that such absolutely error-free, correctly designed and correctly working systems do not exist and (with high probability) never will. Therefore Trusted Computing should only be seen as a “helping tool” which can be used to enhance the overall security a system provides.


Figure: Overview of the approach to “employment of Trusted Computing on the service side”.

In (Iliev and Smith, 2005) the fundamental property of Trusted Computing is described as follows: “We call the physically protected and trusted components of a server K, … In any given client-server application, we can view K as an extension of the client: from a trust perspective, K acts on the client’s behalf, but physically, K is co-located with the server.” 

Derived from this fundamental property, using Trusted Computing on the service side comprises at least the following three overall goals/approaches:

  1. To prevent security threats by implementing (traditional) security mechanisms in a more trustworthy way or, more generally, by using Trusted Computing to secure the basic operations of the servers used on the service side. This comprises all the well-known technologies offered by Trusted Computing.

As mentioned above, the communication between the involved parties (users and services) has to be confidential and integrity-protected. The (cryptographic) protocols and measures used can benefit from TC and the TPM used on the service side; e.g. the cryptographic keys could be stored under the control of the TPM (using the Sealed Memory and Sealed Storage functionality), making attacks on the confidentiality of the communication much harder.
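
The benefit of keys under TPM control can be sketched as follows (a simplified stand-in of our own, not the actual TPM interface): the key is generated inside the module and is never exported, so host-side malware can request operations on the key but can never copy it.

```python
import hashlib
import hmac
import secrets

class ToyTPM:
    """Simplified model of a TPM-protected key store: the key never
    leaves the object, mimicking hardware-bound key material."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)  # generated inside, never exported

    def mac(self, message: bytes) -> bytes:
        # The host asks the module to authenticate a message...
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        # ...and to check a received authentication tag.
        return hmac.compare_digest(self.mac(message), tag)
```

The design point is that the attack surface shrinks from “steal the key file” to “abuse the running module while it is online”, which is considerably harder.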

In general it seems that this “classical” approach to enhancing security on the service side is the one in the focus of industry and the corresponding business consultancies.

  2. Enable third parties to monitor and audit the activities of the service’s servers and infrastructure in a trustworthy way in order to detect security or privacy breaches. As the necessary log entries might themselves contain sensitive data (e.g. personal data), it needs to be specified who has access to which log entries under which circumstances. Moreover, the log entries should be anonymised or pseudonymised so that any unnecessary sensitive data is removed. The access rights could be specified in a policy, and Trusted Computing could again help to enforce this policy.

Secure logging is in general a hard task. This is especially true if the logs are to be used for external audits, i.e. if a third party should be convinced that the logs were not manipulated. The reason lies in the attacker model for secure logging: the party which produces the log is the attacker with respect to integrity and accountability. Basically, the logging procedure can be divided into two subtasks: creating the log entries and storing them. In order to enhance the security and trustworthiness of the second subtask, it could be outsourced to a third party which does not collude with the party that generates the log entries (i.e. with the attacker). Sealed storage could be used to bind the log data to a trustworthy state; this would also prevent later manipulation of the log.
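
A standard building block for the storage subtask, independent of any particular TPM feature, is a hash chain over the log entries. The sketch below (our own illustration, not taken from the deliverable) lets an auditor who holds only the final chain value detect any later modification, insertion or deletion of stored entries:

```python
import hashlib

class HashChainLog:
    """Append-only log: each chain value authenticates all earlier entries,
    so tampering with any stored entry invalidates the chain."""

    def __init__(self) -> None:
        self.entries = []          # list of (message, chain value at append)
        self.chain = b"\x00" * 32  # well-known genesis value

    def append(self, message: str) -> None:
        # Fold the new entry into the running chain value.
        self.chain = hashlib.sha256(self.chain + message.encode()).digest()
        self.entries.append((message, self.chain))

    def verify(self) -> bool:
        # Recompute the chain from the genesis value and compare it with
        # the values stored alongside each entry.
        chain = b"\x00" * 32
        for message, stored in self.entries:
            chain = hashlib.sha256(chain + message.encode()).digest()
            if chain != stored:
                return False
        return True
```

Handing the current chain value to the non-colluding third party after every append plays the role of the outsourced storage described above: the log producer can no longer rewrite history without being detected.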

But securing the first subtask is much harder: if a malicious party has full control of the whole system, it is impossible to prevent it from manipulating the generation of log entries (e.g. suppressing or changing them).

Here Trusted Computing can help to solve that problem: fundamentally, it takes away control over some parts of the system from the service provider. One possibility is that the party responsible for the audit installs a “trusted log entry generator” (e.g. a log entry generator based on Trusted Computing) within the system of the service provider. If the service provider tries to manipulate this device, it stops logging altogether, giving the auditor evidence that some manipulation happened. Moreover, the trusted log entry generator could be verified by third parties by means of remote attestation. This implies that even the end user (client) could check whether the logging done at the service provider is trustworthy from the user’s point of view.

Note that it is not sufficient that only the log entry generator itself is under the control of Trusted Computing; in addition, any part of the system which would generate events that trigger the log generator to create a new log entry has to be under the control of Trusted Computing as well (otherwise the attacker could simply suppress the generation of those triggering events). If, for instance, personal data is stored on a hard disk and access to this personal data should be logged, then no way of circumventing the triggering of log entry generation must exist (i.e. it must be impossible to remove the hard disk from the system and read it using another, non-monitored machine). In the hard-disk example, Sealed Memory in combination with Sealed Storage and Secure I/O can achieve this.

  3. Enable third parties to check the security status of the service infrastructure and servers. Trusted Computing (and especially its remote attestation feature) would allow any third party to check whether the systems used by the service provider are in a trustworthy state or not. Here “trustworthy” means that the systems are in a state which enables them to enforce the security or privacy policies promised by the service provider.

Although it is possible (in theory) for the end user to perform this check himself, in practice it requires very complex verification procedures (especially in the case of multi-stage business processes). Therefore it seems more feasible that a trusted third party performs the check on behalf of the user. Data protection authorities could be such trusted third parties, and could also issue some kind of seal to the service providers. Of course these checks have to be repeated on a regular basis, but as they could be done largely automatically, the overall effort would remain within reasonable limits.
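
The automated part of such a check might, in essence, compare the measurements reported via remote attestation against a whitelist of known-good components maintained by the auditing authority. The component names below are invented for this sketch; a real verification additionally involves checking the TPM’s signature over the reported values.

```python
import hashlib

def measure(component: bytes) -> str:
    # Hash of a software component, standing in for a PCR measurement.
    return hashlib.sha256(component).hexdigest()

# Hypothetical whitelist of known-good components kept by the authority.
TRUSTED = {measure(b"bios-v2.1"), measure(b"bootloader-v1.4"),
           measure(b"os-kernel-v5.0")}

def attestation_ok(reported: list) -> bool:
    # The server counts as trustworthy only if it reported at least one
    # measurement and every reported measurement is known-good.
    return len(reported) > 0 and all(m in TRUSTED for m in reported)
```

Because such a comparison runs without human involvement, repeating it on a regular basis is cheap, which supports the feasibility argument above.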

Note that with respect to the last two methods the service provider can also take over the role of the “client”: he does not only offer third parties the possibility to detect security breaches or to check the security status of the systems used, but also uses the same offers from other third parties, to which he has delegated parts of the business process, to check whether they comply with their security and privacy policies.

Besides these three basic principles, applying Trusted Computing on the service side can be further divided into:

  1. applying Trusted Computing on the infrastructure side

  2. applying Trusted Computing on the real back-end or other service related servers

Besides the components (servers etc.) which are obviously part of the overall system used by the service provider, there may exist parts of the system for which it is not so obvious that they need to be included in a security analysis. Below we give a few examples of this infrastructural side of the system used by the service provider, and show how this infrastructure (which enables or supports the overall business process) can likewise be secured by the use of Trusted Computing.

On the one hand, RFID-based systems are seen by many analysts as a key enabling technology for optimising logistics and supply chain management. On the other hand, the security weaknesses of today’s RFID systems could introduce a security risk (or even a violation of the security and privacy policies given by the service provider to the user). Imagine, for instance, the typical scenario where a service provider chooses a third party for packaging and shipping. If the RFID tags used for these logistics (e.g. RFID-based address labels instead of barcodes) are insecure (e.g. could be read by any third party without proper access control), this would violate a privacy policy stated by the service provider which promises that no unauthorised third party will ever learn the personal data provided by the customer.

A general overview of security and privacy mechanisms for RFID-based systems is given in the FIDIS deliverable D12.3 “A Holistic Privacy Framework for RFID” (Fischer-Hübner and Hedbom, 2007). Here we therefore concentrate on possible solutions which use Trusted Computing (Molnar, Soppera and Wagner, 2005). The protection goal is the confidentiality of the data transmitted by an RFID tag to the RFID reader, whereas the attacker model assumes that the RFID reader is under the control of the attacker. Trusted Computing (especially the remote attestation and sealed storage mechanisms) may be used within the RFID reader to take away control over some parts of the reader from the attacker.

In this design the RFID reader is split into three parts: the Reader Core, a Policy Engine and a Consumer Agent. The Reader Core provides the basic functionality of the RFID reader and has to be small enough that the integrity measures of Trusted Computing (e.g. secure booting, a secure operating system etc.) remain feasible. The Policy Engine is responsible for enforcing the security policy. This policy is stored within the Policy Engine and determines which tags an RFID reader is permitted to read and which operations are allowed on the read data (e.g. transmission to the back-end system). Moreover, the Policy Engine controls access to all secret keys needed for communicating with RFID tags (e.g. for confidentiality or authentication). The Consumer Agent enables third parties to audit the activities of the RFID reader: it logs all relevant operations (such as reading tags, accessing secret keys and transmitting data) and makes these logs accessible to third parties. These third parties can use remote attestation to verify that the desired Consumer Agent is running on the RFID reader. Moreover, if the Consumer Agent detects that the Reader Core or the Policy Engine has been compromised, it stops the operation of the reader and reports this accordingly.
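
The Policy Engine’s decision logic can be sketched as a simple default-deny lookup. The tag identifiers and operation names below are invented for illustration; Molnar, Soppera and Wagner do not prescribe this exact form.

```python
class PolicyEngine:
    """Toy model of the reader's Policy Engine: decides which tags may be
    read and which operations are allowed on the resulting data."""

    def __init__(self, policy):
        # policy maps a tag-id prefix to the set of permitted operations.
        self.policy = policy

    def allowed(self, tag_id: str, operation: str) -> bool:
        for prefix, operations in self.policy.items():
            if tag_id.startswith(prefix):
                return operation in operations
        return False  # default deny: unknown tags may not be touched

# Example policy: warehouse tags may be read and forwarded to the
# back-end; customer tags may only be read locally.
policy = PolicyEngine({"warehouse:": {"read", "forward-to-backend"},
                       "customer:": {"read"}})
```

Keeping this decision logic (and the keys it guards) inside the attested part of the reader is precisely what prevents the attacker-controlled host from reading tags outside the policy.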

The RFID example shows once more the application of the three basic principles mentioned above: to prevent security threats, to detect security breaches, and to allow third parties to check the security status. Moreover, it underlines that each security- or privacy-relevant part needs to be under the control of Trusted Computing if the overall system the service provider uses to provide his service is to be secure even against its owner (i.e. the service provider).

Another field of application of Trusted Computing on the service side concerns the infrastructure which is used to support or enable the overall business process. Often parts of the infrastructure, or of the processing of the data, are outsourced to third parties. This could comprise “classical” procedures like renting servers from providers, or handing data to a third party which does the processing and returns the desired results. In both cases, bilateral agreements are typically concluded which regulate the various duties and responsibilities (e.g. in terms of security and data protection). In the future, emerging technologies like Grid computing (e.g. for “renting” computing power) or P2P-based systems might be involved as well. These systems are much more decentralised, and it is much harder to give any guarantees regarding security or data protection (Gasson and Warwick, 2007).

Basically one can see this “outsourcing” as a kind of “low-level” delegation, compared to the more “high-level” delegation in multi-stage business processes. Therefore the same general mechanisms as described above can be applied. The service provider could, for instance, use Trusted Computing to ensure that the data processing on a rented server is under his exclusive control and that the provider who owns the hardware has no access to the data. He can use secure logging and auditing in conjunction with remote attestation to verify that the machines involved in a large Grid will not leak any sensitive data.

 
