
D3.9: Study on the Impact of Trusted Computing on Identity and Identity Management

Main concepts of Trusted Computing

 

Integrity Measurement, Verifiable and Secure Booting

 

Integrity measurement is probably the most fundamental mechanism of Trusted Computing. It means calculating a representation of the state of a computer or device and storing it under secured conditions.
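The core operation can be pictured as a hash chain: a register holds a running digest that is "extended" with the hash of every measured component, so that a single value summarises the entire sequence. The following Python sketch is illustrative only; the function names and the choice of SHA-256 are assumptions, not part of any TCG interface.

    import hashlib

    def extend(register: bytes, component: bytes) -> bytes:
        """Fold the hash of a measured component into the running register."""
        return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

    register = bytes(32)                        # the register starts at a known value
    for stage in (b"bootloader", b"kernel", b"init"):
        register = extend(register, stage)      # measure each stage before it runs
    print(register.hex())                       # one value summarising the whole chain

Because the register can only be extended, never set directly, the final value depends on every measured component and on the order in which they were measured.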

The process of measurement while booting is called authenticated booting (or authentic booting). It is a method that securely logs which software is booted on a computing device but does not influence the boot sequence (e.g. by deciding which software may be executed and which may not). After the boot sequence has finished, these logs can be used to check the state of the system. It is important to note that authenticated booting by no means ensures that the computing device is in a "secure" state.

With the help of the log entries it is also possible to report the state of the computing device to remote entities. By analysing the logs, remote entities can assess the trustworthiness of a system. This approach is particularly suitable for secure open platforms, which can be modified in many ways [115].
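A remote verifier can, for instance, replay the reported log and compare the result with the reported register value before inspecting the individual entries. The sketch below repeats the hypothetical extend() operation from the previous sketch; all names are invented for illustration.

    import hashlib

    def extend(register: bytes, component: bytes) -> bytes:
        # same extend operation as in the previous sketch
        return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

    def verify_log(log: list[bytes], reported: bytes) -> bool:
        """Replay the logged measurements and compare with the reported value."""
        register = bytes(32)
        for component in log:
            register = extend(register, component)
        return register == reported    # only then judge the entries themselves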

Authenticated booting is also employed in designs other than the TCG specification. According to Kauer [115], currently available designs for authenticated booting either require new hardware or support untrusted applications insufficiently.

There is another way of booting a system in a trusted manner, called secure booting. Secure booting checks code against a set of security policies before executing it and thereby prevents malicious or unauthorised code from running on the computing device. The security policy comprises identifiers of authorised modules; hash values can be used as identifiers. If an identifier or hash value is not found in the policy's list, the boot process is discontinued. This technology is usually used to ensure locally that the platform is in a secure state, not to prove this to a remote party. As mentioned in [115], secure booting can be used to construct secure closed platforms, which run only a limited set of software. Designs for secure booting have been known for over 15 years [113][114].
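The policy check itself amounts to an allowlist lookup before each module is handed control. The sketch below is a toy illustration under that assumption; POLICY and the module names are invented.

    import hashlib

    # Hypothetical security policy: hash values of the authorised modules.
    POLICY = {hashlib.sha256(m).hexdigest() for m in (b"bootloader", b"kernel")}

    def secure_boot(stages: list[bytes]) -> None:
        for stage in stages:
            if hashlib.sha256(stage).hexdigest() not in POLICY:
                raise SystemExit("unauthorised module - booting discontinued")
            print("executing authorised module")   # stand-in for running the module

    secure_boot([b"bootloader", b"kernel"])        # boots normally
    secure_boot([b"bootloader", b"rootkit"])       # aborts before the rootkit runs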

Trusted booting combines the two boot methods described above. After integrity measurement, the results are sent to a remote verifier that checks them [119]. If the computing platform is in an invalid state, the remote verifier may initiate remediation, i.e. the platform has to be updated. Afterwards the integrity measurement starts again, and the cycle repeats until the platform is in a valid state.
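The measure-verify-remediate cycle can be summarised in a few lines. The sketch below is purely schematic, with stand-in functions for the measurement, verification and update steps described in [119].

    VALID_REPORTS = {"patched-kernel"}            # states the verifier accepts

    def measure(state: str) -> str:
        return state                              # stand-in for integrity measurement

    def remediate(state: str) -> str:
        return "patched-kernel"                   # stand-in for installing updates

    state = "vulnerable-kernel"
    while measure(state) not in VALID_REPORTS:    # the verifier rejects the report ...
        state = remediate(state)                  # ... and initiates remediation
    print("platform accepted in state:", state)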

When a boot sequence can be validated (remotely or locally), it ensures that components of the platform are not emulated, so that "a specific hardware with a specific OS with a specific GUI and a specific application is indeed running in the identified device" (Härtig [114]).

Binding, Sealing, and Attestation

The hardware-integrated root of trust can provide methods to bind data, licenses and user authentication to a specific platform, i.e. to specific hardware and the software executed on it. This is realised by cryptographic operations that are executed, and whose keys are stored, in a protected hardware environment. Within the TCG specification, a unique key, certified by the manufacturer and stored in a protected way inside the microcontroller, is essential for verifying the platform as trusted and can also be used to authenticate a particular user accurately. This key is the starting point for certificates and further keys.

Sealing allows data to be bound to a specific computing device and thus protects sensitive data stored on its hard disk.

According to [118], a major security problem of computers is storing cryptographic keys securely. Keys or passwords that protect private documents are often retained locally on a PC's hard drive, side by side with the encrypted documents themselves. Everyone who gains access to the computing system also gains access to the cryptographic keys and passwords stored there. Keys, however, should be kept secure so that only legitimate users can use them.

The technique of sealed storage is based on a key that is derived in part from the identity of the software requiring the key; the identity of the computing device that executes the software provides the second part. These keys therefore need not be stored on the hard drive but can be recreated whenever they are required.
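One way to picture this two-part derivation is a keyed hash over the software identity, keyed with a device secret that never leaves the protected hardware. The sketch below is an assumption for illustration; HMAC-SHA-256 and the parameter names are not TCG-defined interfaces.

    import hashlib
    import hmac

    def sealing_key(device_secret: bytes, software_identity: bytes) -> bytes:
        """Derive a sealing key from a platform part and a software part."""
        # Neither half is stored with the data: the device secret stays in
        # protected hardware, and the software identity is recomputed (e.g.
        # as the hash of the requesting program) whenever the key is needed.
        return hmac.new(device_secret, software_identity, hashlib.sha256).digest()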

If a program other than the one that initiated the encryption or sealing of the sensitive information tries to decrypt or unseal the data, this fails because the generated key does not equal the original one: the identifiers of the software that seals and the software that unseals the data differ, and consequently so do the generated keys. Similarly, if encrypted data is transferred to another computing device, that device's attempt to decrypt the data will also be unsuccessful. E-mails that you can read on your computer, for example, are unreadable on other computer systems.
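Continuing the hypothetical sealing_key() sketch from above, the failure mode is simply a key mismatch:

    device_secret = b"secret held in protected hardware"    # never leaves the chip
    k_mail  = sealing_key(device_secret, b"hash-of-mail-client")
    k_other = sealing_key(device_secret, b"hash-of-other-program")
    assert k_mail != k_other    # a different program derives a different key
                                # and therefore cannot unseal the data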

Sealed storage cannot prevent confidential data from being copied to another system, but it can prevent others from reading the data on that system.

By using attestation it is possible to check the hardware and software state of a remote platform. For this purpose, the results of integrity measurements, which characterise the software and hardware environment of a computing system, are signed by the hardware component with a key. The signed results can be verified by a remote party without physical presence. Certificates that accredit the used key as trusted are sent together with the signed results. A remote attestation can be conducted directly or through a trusted third party that verifies the remote platform as trusted.
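The essence is a challenge-response signature over the measurement results. The sketch below uses an Ed25519 key from the Python cryptography library purely for illustration; the TCG mechanism relies on different key types and on certificate chains that are omitted here, and all names are hypothetical.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    attestation_key = Ed25519PrivateKey.generate()   # stand-in for the protected hardware key

    def quote(measurements: bytes, nonce: bytes) -> bytes:
        """Sign the measurement results together with the verifier's challenge."""
        return attestation_key.sign(measurements + nonce)

    def verify_quote(public_key, measurements: bytes, nonce: bytes, signature: bytes) -> bool:
        try:
            public_key.verify(signature, measurements + nonce)
            return True                              # the key's trust comes from its certificates
        except InvalidSignature:
            return False

    nonce = b"fresh-random-challenge"                # prevents replaying an old quote
    sig = quote(b"measured-environment", nonce)
    assert verify_quote(attestation_key.public_key(), b"measured-environment", nonce, sig)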

A trusted third party (TTP) checks the keys and certificates of a computing device. If they are valid, the trusted third party issues a certificate attesting that the computing device is trusted.

For anonymous usage, the relation between a certain computing device and a certificate provided by a TTP should be hidden.

Direct anonymous attestation (DAA) is a further attestation technique that works without a third-party verifier. It allows proving that a valid certificate exists without disclosing it, so certificates can be generated anonymously. This technology is based on zero-knowledge group signature schemes (see also [109]). Group signature schemes allow every member of a group to sign messages on behalf of the whole group; this supports the anonymity of the individual group members while still providing a valid signature.

Another method to check that executable code is trustworthy is code signing. Most code-signing implementations provide a digital signature mechanism to attest the identity of the author or producer. A checksum is also attached to the code in order to verify that the code has not been altered. Based on this, a user or a system can decide whether the source is trustworthy and whether the code has been modified.
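A minimal sketch of this signature-plus-checksum scheme, again with an Ed25519 key chosen purely for illustration (real code-signing systems typically use X.509 certificates and other algorithms):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    author_key = Ed25519PrivateKey.generate()        # the producer's signing key
    code = b"print('hello world')"

    # Publisher: attach a checksum of the code and a signature over that checksum.
    checksum = hashlib.sha256(code).digest()
    signature = author_key.sign(checksum)

    # Consumer: recompute the checksum and verify the signature before executing.
    def is_trustworthy(code: bytes, signature: bytes, author_public_key) -> bool:
        try:
            author_public_key.verify(signature, hashlib.sha256(code).digest())
            return True                              # unmodified and from the claimed author
        except InvalidSignature:
            return False

    assert is_trustworthy(code, signature, author_key.public_key())
    assert not is_trustworthy(code + b"# tampered", signature, author_key.public_key())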

Proof-Carrying Code is a technique that guarantees the safe execution of untrusted code. The author of the code adds information (called an annotation) about the security policy that the code fulfils. The receiver compares the security policy of the code with a set of safety rules he has established, and only executes the code if its security policy conforms to his requirements. Any tampering with the code or the annotation can be noticed by the receiver.

 

Process Isolation, Small TCB

In present computing devices it is possible for malicious code to read or alter sensitive parts of programs stored in memory. Process isolation shall increase the security of computing systems by fully isolating sensitive areas of memory from all programs except authorised ones. This prevents programs from reading or writing the memory of other programs. Access to the protected memory of programs can also be denied to unprivileged parts of the operating system. These protected parts of the memory are also called shielded locations or curtained memory. Even an intruder who controls the whole computing system is not able to tamper with the shielded locations.

Process isolation can be achieved in hardware or in software. As mentioned in [118], a software implementation requires rewriting the operating system and device drivers, and if necessary even application software. The software approach needs no new hardware components but requires rewriting a lot of software.

Implementing process isolation in hardware can provide backwards compatibility with existing software, and the amount of software that has to be rewritten is small compared to the software approach.

A further technique to shield software is the sandbox. Software is insulated from the rest of the system by putting it into a virtual sandbox, so that it cannot damage other parts of the system and its effects can be recorded. This method provides a kind of test area in which the user can run unknown or suspect software.

The terms “small Trusted Computing Base” and “small secure platform” describe the idea of making the system core as small and compact as possible. Both imply that a minimal set of hardware, firmware and operating system must be sufficient for fundamental security. All components of a trusted computing base have to be trusted in order to guarantee the reliability of the techniques mentioned in this chapter.

According to Härtig [114], a small secure platform has to meet several requirements:

  1. Flexibility, so that it can be used for different kinds of platforms;

  2. Functionality that is minimised but still sufficient for applications requiring high security;

  3. Programs and users are only allowed to access the information and resources required for their legitimate purpose;

  4. Separation of secure and insecure parts of the platform;

  5. Attestation of its global identifier to remote or local entities;

  6. Support for an accurate evaluation.

 

As Härtig mentions, a secure platform is small and simple when it can be completely controlled by a team of about seven people. Many of today's systems, such as the Linux kernel, are far from achieving this.

There are several approaches to reducing the size of a trusted computing base, such as the Nizza security architecture [121].

 

 

 
