Conclusion and Outlook

Until now, privacy in business processes with personalised services has necessarily rested on customers’ trust in the participating service providers. The process models for privacy-aware business processes considered in this study suggest internal access control on customers’ data and the application of information security management process models for data protection. With respect to undesired profiling, both approaches still require that customers trust their service providers regarding the processing of their data.

As long as personalised services are realised by single-stage business processes and do not require identifying data from their customers, profiling can be prevented by disclosing different partial identities in combination with different pseudonyms. Current user-centric identity management systems support this technically.
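To illustrate this, the following minimal Python sketch (with purely hypothetical names, not tied to any concrete identity management system) shows how a user-centric identity manager could release a different partial identity under a fresh pseudonym for each service provider, so that two providers cannot link the data disclosed to them.

import secrets

class UserCentricIdentityManager:
    """Minimal sketch: one partial identity under a fresh pseudonym per service provider."""

    def __init__(self, attributes):
        self.attributes = attributes        # the full identity, kept on the user's side
        self.disclosures = {}               # service provider -> disclosed partial identity

    def disclose(self, provider, requested_attributes):
        """Release only the requested attribute subset under a provider-specific pseudonym."""
        pseudonym = "pn-" + secrets.token_hex(8)     # fresh pseudonym, unlinkable to other disclosures
        partial_identity = {a: self.attributes[a]
                            for a in requested_attributes if a in self.attributes}
        self.disclosures[provider] = {"pseudonym": pseudonym, "data": partial_identity}
        return pseudonym, partial_identity

# Two single-stage services receive different, unlinkable partial identities.
idm = UserCentricIdentityManager({"age": 30, "city": "Frankfurt", "email": "alice@example.org"})
print(idm.disclose("book_shop", ["city"]))
print(idm.disclose("news_portal", ["age"]))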

As personalised services shift towards an unconscious collection of customers’ data, e.g. in sensor networks, and towards multi-stage business processes in which profiles are managed externally and have to be delegated to service providers, privacy can no longer be preserved by current identity management systems. With regard to re-tracing an unconscious collection of customers’ data, ‘Ambient Law’ proposes the technical integration of data protection legislation into devices, and the ‘Data Track’ mechanism shows one approach to such re-tracing. With regard to undesired profiling in multi-stage business processes, DREISAM proposes protocols for an unlinkable delegation and revocation of access rights on customers’ profiles. The added value regarding profiling is that customers only have to trust the service providers which manage their profiles. Since DREISAM does not preserve privacy if the requested data unambiguously identify the customer, the customer still has to trust that service provider.
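The following sketch is not the DREISAM protocol itself; it merely illustrates, with hypothetical names, the kind of book-keeping an external profile manager needs in order to grant, delegate and revoke access rights on a customer’s profile via random tokens that carry no identifying data about the customer.

import secrets

class ProfileManager:
    """Illustrative only (not DREISAM): token-based, revocable access to an external profile."""

    def __init__(self, profile):
        self.profile = profile
        self.valid_tokens = {}                      # token -> attributes it grants access to

    def grant(self, allowed_attributes):
        """The customer obtains a delegation token for a subset of the profile."""
        token = secrets.token_hex(16)               # random token, carries no customer identity
        self.valid_tokens[token] = allowed_attributes
        return token

    def revoke(self, token):
        """The customer withdraws the delegated access right."""
        self.valid_tokens.pop(token, None)

    def request(self, token):
        """A downstream service provider redeems the token it received from the customer."""
        allowed = self.valid_tokens.get(token)
        if allowed is None:
            raise PermissionError("token unknown or revoked")
        return {a: self.profile[a] for a in allowed if a in self.profile}

pm = ProfileManager({"genre": "jazz", "name": "Alice"})
token = pm.grant(["genre"])        # the customer delegates access to preference data only
print(pm.request(token))           # the provider obtains {'genre': 'jazz'}, nothing identifying
pm.revoke(token)                   # any later request with this token fails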

Further work investigates the verification of service providers, i.e. whether they have processed customers’ data according to the arrangement negotiated between service provider and customer as well as according to the applicable data protection legislation. The aim is to obtain evidence concerning the use of customers’ profiles. In order to technically re-trace the information flow of services, the study D14.3 “Study on the suitability of trusted computing to support privacy in business processes” of the FIDIS workpackage 14 “Privacy in Business Processes” investigates trusted computing as a platform to support the enforcement of privacy policies by service providers. By providing mechanisms for user-controlled access control and audit of private data, users are enabled to decide on the disclosure and use of personal data in business processes with proxies. In Information Filtering scenarios, occurring e.g. in Recommender Systems, private data nevertheless has to be collected and processed in order to provide the desired results, such as individualised content. In this case, the mechanisms described in this study may be used to prevent entities other than the actual service provider from acquiring private data, but they cannot prevent the service provider itself from using the acquired data in a privacy-invasive manner.
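As a simplified illustration of such evidence gathering (hypothetical names, not the mechanisms of D14.3), the following sketch records every access to personal data together with its purpose, so that actual use can later be compared with the purposes the customer agreed to.

from datetime import datetime, timezone

class AuditedDataStore:
    """Hypothetical sketch: every access to personal data leaves an audit record
    that can later be checked against the purposes the customer agreed to."""

    def __init__(self, data, agreed_purposes):
        self._data = data
        self.agreed_purposes = set(agreed_purposes)
        self.audit_log = []                               # evidence of actual use

    def access(self, attribute, purpose):
        self.audit_log.append({
            "attribute": attribute,
            "purpose": purpose,
            "time": datetime.now(timezone.utc).isoformat(),
        })
        return self._data.get(attribute)

    def violations(self):
        """Audit entries whose purpose was never agreed on by the customer."""
        return [e for e in self.audit_log if e["purpose"] not in self.agreed_purposes]

store = AuditedDataStore({"email": "alice@example.org"}, agreed_purposes=["delivery"])
store.access("email", purpose="delivery")
store.access("email", purpose="marketing")     # not covered by the agreement
print(store.violations())                       # evidence of the deviating use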

Feasible solutions for privacy-preserving Information Filtering are based either on additional trusted parties or on trusted computing. In the latter case, the parts of an application that actually process private data have to be realised as trusted software, i.e. they have to be based on a trusted computing platform. In this way it may be ensured that private data is used only in a privacy-preserving manner in order to generate individualised content, such as recommendations or a list of users with similar interests.
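The following simplified sketch (a plain hash comparison in place of a real trusted computing platform or TPM interface, all names hypothetical) illustrates the underlying idea: private data is only handed to a filtering component whose measured code matches a known, trusted measurement.

import hashlib

# Private data is released only to components whose measurement is on the allow-list.
TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"collaborative_filtering_v1").hexdigest(),
}

def measure(component_code: bytes) -> str:
    """Stand-in for platform measurement: hash of the component's code."""
    return hashlib.sha256(component_code).hexdigest()

def release_private_data(component_code: bytes, private_data: dict) -> dict:
    if measure(component_code) not in TRUSTED_MEASUREMENTS:
        raise PermissionError("component is not attested as privacy-preserving")
    return private_data          # on a real platform the data would stay inside the trusted part

print(release_private_data(b"collaborative_filtering_v1", {"ratings": [4, 5, 3]}))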

Typically, a number of different and variable components, such as the actual filtering techniques used for generating recommendations, are employed within an information filtering application. With regard to trust relations, requiring each single component to remotely attest its privacy-preserving character would be rather inflexible. A superior solution therefore provides an environment into which these components may be deployed; in this case, only the environment itself has to remotely attest certain characteristics, independently of the actual components deployed within it.
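A minimal sketch of this design choice, with hypothetical names, might look as follows: the environment is attested once, deployed components may change freely, and raw profiles are only passed to components through the environment’s interface, which returns derived results such as recommendations.

class FilteringEnvironment:
    """Hypothetical sketch of 'attest the environment, not each component': the
    environment's properties are attested once; components deployed into it receive
    private data only through the environment's interface."""

    def __init__(self, attested: bool):
        self.attested = attested
        self.components = {}

    def deploy(self, name, component):
        # Components may be exchanged freely without further attestation.
        self.components[name] = component

    def run(self, name, private_profiles):
        if not self.attested:
            raise PermissionError("environment failed remote attestation")
        # Only the component's derived output (e.g. a recommendation) leaves the environment.
        return self.components[name](private_profiles)

def most_popular_item(profiles):
    """Example filtering component: returns only a derived, non-identifying result."""
    items = {item for p in profiles for item in p["liked"]}
    return max(items, key=lambda item: sum(item in p["liked"] for p in profiles))

env = FilteringEnvironment(attested=True)
env.deploy("recommender", most_popular_item)
print(env.run("recommender", [{"liked": {"a", "b"}}, {"liked": {"b", "c"}}]))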

In FIDIS D14.3 “Study on the suitability of trusted computing to support privacy in business processes”, an approach for privacy-preserving information filtering based on multi-agent system technology (in which agents constitute the components, while agent platforms constitute the environment described above) will be described as an example of the suitability of trusted computing in the context of privacy-preserving information filtering.

 
