FIDIS Deliverable D14.2: Study on Privacy in Business Processes by Identity Management

Case-Study: Privacy Threats of Intelligent Software Agents

There are two main types of privacy threats that are posed by the use of ISAs:

  1. Threats caused by agents acting on behalf of a user (through loss of control over the activities that are executed to obtain the desired results, through software errors inside the agent, through the unwanted disclosure of the user’s personal information, and when an agent encounters a more powerful agent or an agent in disguise); and

  2. Threats caused by the fact that the ISA is acting on behalf of the user and thus producing data traces that may be linked to the user’s identity (traffic flow monitoring, data mining, and even covert attempts to obtain personal information directly from the user’s agent or by entering databases and collecting personal data).

 

User Profiling 

It is this issue of "user profiling" that is at the core of the privacy risk associated with the use of ISAs. Typically, an ISA user profile would contain a user’s name, contact numbers and e-mail addresses.

Beyond this very basic information, the profile could contain a substantial amount of additional information about a user’s likes and dislikes, habits and personal preferences, frequently called telephone numbers, contact information about friends and colleagues, and a list of electronic transactions performed.
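Purely as an illustration (the study does not prescribe a data model), such a profile could be represented with an explicit sensitivity level per field, so that later disclosure decisions can take that level into account. All names and tiers below are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Sensitivity(Enum):
    """Hypothetical sensitivity tiers for profile fields."""
    BASIC = 1          # name, contact numbers, e-mail addresses
    PERSONAL = 2       # likes/dislikes, habits, preferences
    RELATIONAL = 3     # contact information of friends and colleagues
    TRANSACTIONAL = 4  # record of electronic transactions performed

@dataclass
class ProfileField:
    value: object
    sensitivity: Sensitivity

@dataclass
class UserProfile:
    fields: dict = field(default_factory=dict)

    def add(self, name: str, value, sensitivity: Sensitivity) -> None:
        self.fields[name] = ProfileField(value, sensitivity)

    def at_most(self, level: Sensitivity) -> dict:
        """Return only the fields at or below the given sensitivity tier."""
        return {n: f.value for n, f in self.fields.items()
                if f.sensitivity.value <= level.value}
```

Keeping the sensitivity attached to each field is one way to ensure that an agent can later release the basic tier while withholding the more revealing tiers.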

This information must be secured within the ISA, with safeguards matched to the sensitivity of the user profile and its data. However, the security of the data residing within the agent is only one part of the privacy concern.

The more significant concern is the dissemination of information during transactions, and in the general conduct of the agent’s activities on behalf of the user.

As an agent collects, processes, learns, stores and distributes data about its user and the user’s activities, the agent will possess a wide variety of information which should not be divulged unless specifically required for a transaction. In the course of its activities, an agent could be required, or forced, to divulge information about the user that he or she may not wish to share.

The most important issue here is one of openness and transparency. As long as it is clear to the user exactly what information is being requested, what purpose it is needed for, and how it will be used (and stored), the user will be in a position to freely make decisions based on informed consent.

Of even greater concern is the situation where the ISA may not be owned directly by the user but is made available to the user by a service or by an organisation in order to assist in accessing one or more services.

In summary, the user is required to place a certain degree of trust in the agent – that it will perform its functions correctly as requested. However, this trust could well come with a very high price tag, one that the user may have no knowledge or awareness of – the price of his or her privacy.

The challenge is to employ an ISA that independently performs its tasks while fully preserving the privacy of the persons involved, up to the level specified by the persons themselves. For that purpose, the agent should be able to distinguish what information may be exchanged, in what circumstances, and to which party.
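One way to read this requirement is as a user-specified disclosure policy: before releasing any profile item, the agent checks whether the user has permitted that item for that party and that purpose, and denies by default otherwise. The following is a minimal sketch under that reading; the class, rule names and example party are hypothetical, not taken from the study:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DisclosureRule:
    """A user-specified permission: this item may go to this party for this purpose."""
    item: str
    party: str
    purpose: str

class DisclosurePolicy:
    def __init__(self, rules):
        self.rules = set(rules)

    def may_disclose(self, item: str, party: str, purpose: str) -> bool:
        """Deny by default: disclose only if an explicit rule matches."""
        return DisclosureRule(item, party, purpose) in self.rules

# Hypothetical example: the user allows the e-mail address to be sent
# to one merchant, solely for confirming an order.
policy = DisclosurePolicy([
    DisclosureRule("email", "bookshop.example", "order-confirmation"),
])
```

The default-deny check reflects the informed-consent point made earlier: nothing leaves the agent unless the user has explicitly specified the item, the recipient and the purpose.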

 
