D7.2: Descriptive analysis and inventory of profiling practices
4. Purposes and effects of profiling
4.1 Introduction
Technology in itself is neither good nor bad, but its effects are never neutral. This deliverable does not aim at a comprehensive analysis of the possible privacy invasions caused by profiling practices; subsequent deliverables within this work package will address such issues in more detail. Some attention should nevertheless be given to anticipated negative impacts. In this section we first address the major issue of data surveillance raised by Roger Clarke in his 1994 article on dataveillance. After that, the risks and dangers of personalised profiling are discussed. We conclude the section with a summary of the purposes and effects of profiling.
4.2 Dataveillance
Profiling functions, intentionally or unintentionally, as a sophisticated assessment of risks and opportunities. It aims at discovering, for example, potential terrorists; criminals; insurance risks; new customers; potentially fraudulent employees; promising students; productive employees. In the case of automated profiling these assessments all depend on data surveillance, or dataveillance: ‘the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons’.
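This definition can be made concrete with a toy sketch. All names, records and the flagging rule below are hypothetical illustrations, not taken from the deliverable: the point is merely that "systematic use of personal data systems" amounts to correlating a person's appearances across separate databases.

```python
from collections import defaultdict

# Hypothetical records: (person, data_system, action) tuples drawn
# from several separate personal data systems.
records = [
    ("alice", "bank", "large_cash_withdrawal"),
    ("alice", "telecom", "call_to_flagged_number"),
    ("bob", "bank", "salary_deposit"),
    ("alice", "travel", "one_way_ticket"),
]

def dataveillance_flags(records, threshold=3):
    """Flag persons whose actions appear in at least `threshold`
    different data systems: systematic monitoring across personal
    data systems, reduced to a toy counting rule."""
    systems_seen = defaultdict(set)
    for person, system, _action in records:
        systems_seen[person].add(system)
    return [p for p, s in systems_seen.items() if len(s) >= threshold]

print(dataveillance_flags(records))  # ['alice']
```

Note that the flagged person never interacts with the monitoring system directly; the assessment arises purely from merging data collected elsewhere, which is what distinguishes dataveillance from conventional surveillance.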
In a pioneering article of 1994, Clarke discussed personal and mass dataveillance, summing up its social impact in terms of dangers to individuals and to society.
As to personal dataveillance, he points to:
- low data quality decisions;
- lack of subject knowledge of, and consent to, data flows;
- blacklisting and denial of redemption.
As to the dangers to the individual of mass dataveillance, he summed up:
- arbitrariness;
- acontextual data merger;
- complexity and incomprehensibility of data;
- witch hunts;
- ex-ante discrimination and guilt prediction;
- selective advertising;
- inversion of the onus of proof;
- covert operations;
- unknown accusations and accusers;
- denial of due process.
As to the dangers to society of mass dataveillance, he summed up:
- prevailing climate of suspicion;
- adversarial relationships;
- focus of law enforcement on easily detectable and provable offences;
- inequitable application of the law;
- decreased respect for the law and law enforcers;
- reduction in the meaningfulness of individual actions;
- reduction in self-reliance and self-determination;
- stultification of originality;
- increased tendency to opt out of the official level of society;
- weakening of society’s moral fibre and cohesion;
- destabilisation of the strategic balance of power;
- repressive potential for a totalitarian government.
One of the points of this rather loose but penetrating enumeration of the social implications of dataveillance is that tracking and monitoring the behaviour of individuals and groups has an impact beyond privacy and security alone. Due process, the weakening of social cohesion, the destabilisation of checks and balances and total governance are of a different category than supposed trade-offs between privacy and security. In D7.4 (Implications of profiling practices for democracy and rule of law) and in D7.5 (the publication on profiling and its implications for privacy and security), the impact of profiling technologies on established power relationships will be further elaborated through the question of who is actually profiling whom. Profiling makes persons and groups transparent as correlated data subjects. Large organisations (such as the state, transnational commercial enterprises, healthcare institutions and insurance companies) can afford to invest in profiling technologies. This means that while profiling practices make citizens transparent, individual citizens have few means to profile large organisations. It also implies that one of the most important assets of the democratic constitutional state could become part of a trade-off: the idea that the actions of those in power should be transparent, while citizens should be granted a certain opacity in order to enjoy their freedom.
4.3 Implications of personalization, user information and profiling
(Simone van der Hof, TILT)
This section addresses several relevant considerations with respect to personalisation and the use of user information and consumer profiles, i.e. privacy, inclusion and exclusion, and transparency and quality.
4.3.1 Privacy
The other side of implementing more and more sophisticated personal data collection techniques in online personalised service provision is that the risk to the user’s privacy increases. Privacy in general is an important issue in online personalisation, as more collected user information may imply both better (personalised) service provision and privacy intrusion at the same time. Four main privacy issues related to online personalisation can be perceived:
A service-provider approach through which requested user information is not strictly related to the delivery and access of a specific service;
User-data collection using invisible methods, which use spy technologies, such as cookies, web bugs, etc. to trace, track and search user profiles;
Use of personal data for purposes different from those indicated and without the user’s previous and/or informed consent;
Lack of effective user access to personal data collected, e.g. at web sites.
However, online personalisation applications, such as recommendation systems, do not always need personally identifiable information. The only necessary connection between a recommendation system and a user is a consistent pseudonym, so that the user can be recognised when visiting the website. This kind of user protection is not presently implemented on e-commerce sites, where payment and shipping require personal information that can be connected back to the user’s pseudonym. Because of the importance of privacy protection in online personalised service provision, several privacy safeguards can be identified in relationships between organisations and users, including:
Notification by the service provider;
Opt-in (users must give permission before the service provider can use information);
Opt-out (users can remove their permission once given);
Limited access (user information is only used for personalisation of web content);
User customisation (users can adjust the level of personalisation/profiling to their desires);
Security (information used to personalise is only accessible by the user or by the authorised organisation);
Security technology (high level of password/encryption technology is used to safeguard user information).
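The safeguards listed above translate naturally into explicit checks that a service provider could run before using stored user data. The following Python sketch is illustrative only; the field names and the single-purpose rule are hypothetical simplifications of the notification, opt-in, opt-out and limited-access safeguards.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    opted_in: bool           # opt-in: permission given before any use
    opted_out: bool = False  # opt-out: permission later withdrawn
    # limited access: the only purpose this record authorises
    allowed_purpose: str = "personalisation"

def may_use(consent: ConsentRecord, purpose: str) -> bool:
    """A data use is permitted only if the user opted in, has not
    opted out, and the purpose matches the authorised one."""
    return (consent.opted_in
            and not consent.opted_out
            and purpose == consent.allowed_purpose)

c = ConsentRecord(opted_in=True)
print(may_use(c, "personalisation"))   # True
print(may_use(c, "direct_marketing"))  # False: outside the limited purpose
c.opted_out = True
print(may_use(c, "personalisation"))   # False: permission withdrawn
```

In a real system such checks would have to be enforced at every point where the data leave storage, not only at collection time; the sketch shows the decision logic, not the enforcement architecture.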
Personalisation may be a threat to privacy because it provides the companies and organisations using personalisation techniques with a powerful instrument for knowing in detail what an individual wants, who he is, whether his conduct or behaviour shows certain characteristics, and so forth. What aggravates the privacy problem is, first, the potential for further use, and sometimes abuse, of this detailed and rich knowledge about individuals. Connecting and (re)selling data sources has become a highly profitable business, and companies often compromise users’ privacy for profits. What is more, several bankruptcy cases have shown that databases with personal data and consumer profiles are a highly valuable asset. Companies may actually believe that they have ownership rights in the personal data compilations, because the law itself offers indications for such a position. In addition to protection under the regime of trade secrets, businesses that have invested in the collection and compilation of personal data are granted exclusive rights under the European Directive on database protection. Another indication can be found in section 55 of the UK Data Protection Act, which provides for a criminal sanction for stealing personal data from the data controller (i.e. not the data subject).
In the meantime, studies have shown that consumers and citizens are very particular about the type of information they are willing to provide in return for personalised content. Also, they have strong feelings regarding personalisation services that share information about them with other companies: the majority feel that a site that shares their information is invading their privacy. In addition, most consumers hardly understand how personalisation technologies actually work and thus have no opportunity to control the dissemination of their personal or behavioural information. Various personalisation services deploy hidden instruments to track and trace users and thus consumers are unaware that their data and preferences are being collected.
As far as the data that are used for or generated by personalisation services qualify as personal data, legal protection may be available under data-protection legislation. The key European legal regime here is Directive 95/46/EC. The Directive stipulates various fair information practices, sets out among other things the grounds for justified processing of personal data, and accords data subjects several rights, among which the right to object to the use of their personal data for direct marketing purposes (an absolute right to opt out, art. 14(b)). Although this provision does not restrict in advance the processing of personal data for direct marketing purposes, an individual may apply it to control the use of his data.
4.3.2 Inclusion and Exclusion
A consideration closely related to the use of personal data and privacy protection is the inclusion and exclusion of individuals when it comes to certain personalised services. The use of personalisation applications will facilitate the widespread monitoring of what people read, view or listen to. By using personalisation services, their proprietors will potentially have what Philip Agre has referred to as a “God’s-eye view of the world”. To the extent that personalisation applications allow the user to be tracked easily and thoroughly, it is a simple matter to limit the scope of certain facilities to a tightly controlled group of consumers. For example, personalisation services will facilitate the selective provision of access to certain services only to consumers who live in preferred postcodes or have certain levels of income. Also, personalisation services seem well suited as a means for choosing who will be allowed to view or read a particular work and who will not. But personalisation is not only about inclusion or exclusion of certain services. It will also facilitate price discrimination – that is, proprietors of services can ask different consumers to pay different prices.
Is this inclusion or exclusion good or bad? It could be argued that inclusion or exclusion is economically useful, because it will do a better job of getting the right information (commercial as well as public-sector information) to the right persons. Without personalisation techniques, organisations must make wasteful investments in distributing information that may not be appreciated by consumers. Thus, techniques that facilitate inclusion and exclusion may be especially useful for accommodating the changing preferences of consumers and citizens. As such, personalisation is a good way to achieve an efficient market. Personalisation further provides an efficient and effective tool with which companies can monitor who is granted access to certain works and who is not. By using personalisation techniques, content-producers obtain control over the uses of a variety of legally protected works and the techniques will allow providers to manage access rights with respect to particular works. The control facilitated by personalisation techniques will increase the copyright owners’ ability to uphold and enforce their copyrights.
One might even argue that inclusion and exclusion of access to certain services is essentially nothing new and as such there is nothing bad about it. Today, consumers’ and citizens’ behaviour is also predetermined by such matters as their attachment to a group, their cultural or social position or predisposition. Personalisation however provides a new dimension in that it may force individuals into constraining, one-dimensional models, based on the criteria set by technology and by those who own and apply the technology. With commercial personalisation services, the myriad of individual differences may be reduced to one or a few consumption categories, on the basis of which their preferences, character, life-style, and so forth are completely determined.
Also from another perspective, the ability of personalisation techniques to diminish preferences, differences and values seems disturbing. For example, exclusion of access to and the use of information and copyrighted works (music, books, films) puts the values of free speech and information under pressure. Personalisation may even have wider societal and political consequences, when it shapes the overall movement of information and expression within society. Free citizens are the cornerstones of democratic constitutional societies. In a doomsday scenario, personalisation services could put cultural and social diversity at stake: one political or religious message dominates the whole discourse. Where personalisation manipulates behaviour, limits freedom of self-determination and personal autonomy, and erodes societal freedom, it may have serious consequences.
4.3.3 Transparency and Quality
The issues of privacy and inclusion/exclusion show that the use of personal data will more and more occur within, and be structured by, social, economic and institutionalised settings. It therefore appears crucial that the legal, technical and organisational mechanisms that determine the ways in which personalisation services are developed must be structured along the lines of control and visibility. To illustrate this point in relation to privacy: in order for individuals effectively to protect personal data that are used for personalisation purposes, they should be given the instruments to know and understand how their social and economic identities are constructed and influenced. This brings us to a fourth consideration: transparency and quality.
Transparency and quality reveal themselves in different aspects and on different levels of personalised services; however, these concepts are also correlated to a large extent, in the sense that each contributes to the other. First of all, transparency with respect to the personalisation process itself is of importance, including information as to the way the personalisation process works and the different configuration options or features that are included in the service. Moreover, the personalised system has to comply with certain user-interface requirements to maximise its usability for the (average) user or for large user groups. Preferably, users should be able to operate these systems intuitively, meaning they can find their way within the system without too many instructions. The usage of the personalised service – including its security and authentication options – should not be too complicated for users; otherwise users may walk away from the service before having actually tested it. Right from the start, usability should be part of the design process when developing personalised services, in order to forestall implementation difficulties at a later stage.
When the personalised service provides transaction possibilities, users should be informed of the specifics of the transaction process, such as the point where the transaction is concluded, the general and individual prices of the service in order to prevent customer annoyance with respect to price discrimination, general terms and conditions, payment methods, security of transaction and payment processes etc. The information should be presented in such a way that customers are actually, easily and comprehensively notified, although customers generally do not actually have to read the information. This is relevant legally because many laws provide information duties and rules on general terms and conditions that must be observed in order for transactions to be enforceable.
Moreover, the purpose(s) for which personal data and related information (e.g., log-in information, transaction histories, and localisation information) is used within the personalised service or beyond should be transparent to users. Users should also be aware of the way in which their personalised identity is created and used by the personalised service provider (e.g., what methods are used to create identities and in what context(s) are personal data used and viewed). In addition, users should be informed of the way in which personal data can be accessed, reviewed and updated and the security of this process. Furthermore, users should know if and how (e.g., by sending an e-mail to a clearly specified address) they can restrict or object to (commercial) use of their personal and other data. Such information can be provided in a privacy statement or policy on the website of the service provider. Privacy statements should be complete and easy to access and understand. From a quality perspective, it is also important that the security of personal and other data is adequate and that usability of security and more specifically authentication mechanisms is optimised. Usability across different personalised services can, for instance, be addressed by implementing what is called single sign-on authentication mechanisms.
Transparency also demands that users can assess the objectivity, quality and reliability of information provided to them through the personalised process. More than one business or organisation may be involved in providing users with a variety of personalised services and information and, particularly, where there is a lock-in situation in which service providers determine the information to be received by individual users, users should be able to trace the origin of information in order to be (better) able to determine the quality, objectivity and reliability of such information.
Quality of personalised service provision requires that user preferences are closely and adequately matched with the contents of the service, e.g. information. For instance, recommender systems that make recommendations that do not reflect user preferences or tastes will be ignored or rejected by users and will ultimately not be viable. Personalisation should provide users with added value by making the right associations as to their needs; otherwise service providers are likely to be left aside or to forfeit goodwill and reputation.
The quality of personalised service provision is, moreover, dependent on the availability of the service. When the service is (frequently) unavailable because of technical breakdowns, users might lose interest or trust in the service. Personalised service providers must also more generally guarantee adequate security in order to prevent fraud and abuse with respect to the personalised service and (personal) data involved. In order for service providers to test the quality and usability of their personalised services, they can ask for user feedback and involve users in service trials.
4.3.4 Authentication and Identification
Many of the considerations surrounding personalisation come back to issues related to authentication and identification. Personalised services may be equipped with authentication mechanisms that can provide verification of the content of data or transactions, of the connection between data/transactions and identifiers (which identify an individual) or attributes (characteristics associated with the individual), and of the connection between individuals and identifiers. Authentication largely overlaps with the concept of identification, since it is the process that actually verifies claimed identities (is someone who s/he says s/he is?). However, such mechanisms are not restricted to verifying identities or identifiers; in some cases authentication takes place at the system level, when hardware or software (e.g., in the case of DRM, a computer on which licensed media files are used) is authenticated. Within the concept of authentication, user authentication and communication authentication can be distinguished. In most cases, the personalised service will also involve user authentication (also called authorisation), which grants certain permissions to individuals (e.g. updating personal data) and can be based upon identifiers and attributes. Single sign-on authentication is, for instance, interesting from a personalisation perspective, because it allows the integration of several authentication processes, providing different personalised (web and mobile) services with a one-time authentication. Communication authentication, furthermore, concerns the verification of the identity of the origin of communicated information, e.g. messages and websites.
The choice of any mechanism with respect to authentication, identification and/or authorisation depends upon the kind of personalised service (e.g., public versus private, open versus closed network environments) that is provided, the kind of personal data (e.g. sensitive or non-sensitive data) that is involved, and, thus, the level of security that is required. Some systems (e.g., a combination of chip cards and biometrics) may be considered more reliable, yet also more expensive, than others (e.g., based upon username/password protection only). A functional approach to authentication and identification would allow determining the most adequate technology for the purposes of the respective service and the functions necessary to achieve these purposes. For instance, national identity cards may require higher security levels than access to personalised websites, because the assets at stake are more important and the risks are greater. Whereas in the first case biometric technologies are explored and in some instances already used, these technologies are not considered in the second case, which mainly relies on username/password mechanisms (although, e.g., banking and payment options again require higher security levels). In this respect, it is also important to point out that a system is only as secure as its weakest link. Service providers have to make users of personalised services aware, in an accessible way, of the need to be careful with codes and keys, e.g. passwords, and general terms and conditions will likely contain provisions on the liability of careless users. In the case of single sign-on authentication technologies, for instance, user carelessness makes the system even more vulnerable, since a security breach affects many different services simultaneously.
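As a minimal illustration of user authentication in the username/password case mentioned above, the following sketch verifies a claimed identity against a stored salted password hash. It is an assumption-laden toy, not a description of any system discussed in this deliverable; the salting and constant-time comparison are standard practice, included here as comments on the design.

```python
import hashlib
import hmac
import os

# Hypothetical credential store: identifier -> (salt, password hash).
def enrol(store, user_id, password):
    salt = os.urandom(16)  # per-user salt defeats precomputed tables
    store[user_id] = (salt, hashlib.sha256(salt + password.encode()).digest())

def authenticate(store, user_id, password):
    """Verify a claimed identity: is someone who s/he says s/he is?"""
    if user_id not in store:
        return False
    salt, stored = store[user_id]
    candidate = hashlib.sha256(salt + password.encode()).digest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, stored)

store = {}
enrol(store, "user42", "correct horse")
print(authenticate(store, "user42", "correct horse"))  # True
print(authenticate(store, "user42", "wrong"))          # False
```

A single sign-on scheme would place one such credential check in front of many services, which is exactly why the text notes that user carelessness there undermines many services at once.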
Authentication and identification are closely connected with the issue of personal data discussed earlier. Since personalisation in online service provision means individualising online services on the basis of user information, which encompasses personal data, behavioural information, location information and user profiles, it is necessary to link the data on the basis of which personalisation will be performed to an individual person. The identity of individuals for personalisation purposes can comprise different attributes, e.g., personal data such as name, address or e-mail address, which are connected to the individual’s preferences, location and behaviour. Identifiers can be personal when attributes are used that are impossible or difficult to change (e.g., date of birth, fingerprints), but identifiers can also be used in such a way as to allow pseudonymous (trans)actions by individuals. In the latter case, identifiers are merely retraceable to non-personal identifiers, which are linked to certain attributes. Identifying an individual for the purpose of personalised service provision does not, therefore, necessarily mean that the person’s real-life identity (e.g., name, address, appearance) is used to provide the services. In a sense, it is sufficient to know that the service is provided to and individualised for the “right” person, i.e. the person to whom the particular preferences and features on which personalisation is based “belong”, and – if applicable – who pays for it (in time). However, databases with personal data and consumer profiles are valuable assets for businesses (e.g., for marketing and market analysis purposes) and governments (e.g., for the purpose of fraud detection, criminal investigations and national security), so the incentive to restrict the use of data that is retraceable to the actual identity is not very strong.
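The idea of identifiers that are consistent but not retraceable to a real-life identity can be sketched with a keyed hash. The service key, identifier and profile structure below are hypothetical; the point is that the same user always maps to the same pseudonym, while the real identifier cannot be recovered from the pseudonym without the key.

```python
import hashlib
import hmac

SERVICE_KEY = b"per-service secret"  # hypothetical; held only by the provider

def pseudonym(real_identifier: str) -> str:
    """Derive a consistent non-personal identifier via a keyed hash (HMAC).
    Without SERVICE_KEY the mapping cannot be inverted or recomputed."""
    return hmac.new(SERVICE_KEY, real_identifier.encode(),
                    hashlib.sha256).hexdigest()[:16]

# Attributes are stored under the pseudonym, not the real identity.
profiles = {}
profiles[pseudonym("alice@example.com")] = {"genre": "jazz", "language": "nl"}

# A returning visit recomputes the same pseudonym and finds the profile.
print(profiles[pseudonym("alice@example.com")])
```

Using a different key per service would additionally prevent two providers from linking their profiles of the same person, which speaks directly to the linkage incentives the paragraph above describes.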
In the light of all this, control and ownership with respect to personal data and identities of individuals are important and persistent issues in (personalised) online service provision. These aspects can be built into business models, such that individuals (in this case, users of the personalised service) can manage personal data and identities within the system. As an example, privacy-enhancing technologies, such as P3P, allow users to control what personal data are disclosed and under what identity and/or identifiers a particular service provider knows the user. Although still in their infancy from an operational point of view, much is also expected from identity management systems. These systems provide an infrastructure for the use and storage of personal information and authentication mechanisms. The public sector may play an important role in the administration of these systems, because it generates important tools for the identification of individuals (e.g., driver’s licences, passports) that are often used in private-sector (e.g. banking) identification and authentication processes as well. Identity management systems can be based upon pseudonymous identification processes, meaning that personal identifiers are not disclosed in transactions; thus personal data may be more effectively protected, depending on the level of security provided.
Standardisation, and closely related to it interoperability, is a critical factor for security and authentication across web services, including personalised services. The standardisation organisations OASIS and W3C, for instance, work together on web services security and related issues. But there are many other initiatives that influence or aim at standardisation of authentication technologies (e.g., certification services), such as the International Standards Organisation (ISO/EN), the American National Standards Institute (ANSI/ABA/X9), ITU (X.509) and the Liberty Alliance (identity management, including single sign-on).
4.3.5 Conclusion
The development and use of online personalised services raises a number of questions, dilemmas and fundamental issues. Four relevant considerations have been presented in this paper. Other considerations include issues such as differences between private and public initiatives, organisation- versus user-controlled personalisation, and security. All in all, online personalisation is a highly complex development in which numerous context-specific aims, ambitions, business models and conditions may determine the actual development and use of online personalisation services.
4.4 Purposes and effects of profiling: neither good nor bad, but never neutral
4.4.1 Introduction
In this paragraph we aim to provide a summary of the proclaimed purposes and (un)intended effects of profiling, which should allow us to construct some analytical tools for the evaluation of impacts in subsequent deliverables. Purposes are explicit objectives, formulated – in this case – by the data controllers that use profiling technologies. Effects are the visible or invisible, intentional or unintentional consequences for – in this case – the data subjects that are profiled. Following Custers, these effects are described in terms of (1) inclusion and exclusion; (2) prototyping and stigmatisation; (3) providing information and confrontation with previously unknown risks; (4) targeted servicing, customisation and de-individualisation. These effects go beyond the often-discussed trade-off between privacy and security; they rather concern a changing socio-technical infrastructure that will impact the sense of self of those that are being profiled, even if they are not aware of this.
Data Protection legislation emphasises that data should only be used for the purpose for which they have been acquired, but more is at stake. Even if it were possible to control the use of data for legitimate purposes, this legitimate use will often have effects that were not intended but are hard to avoid. In deliverable 7.4 attention will be given to the (in)effectiveness of data protection legislation in this regard.
4.4.2 Selection - exclusion
Group profiles are constructed to enable selection. Whether this selection concerns potential customers, employees, insurance clients, persons suffering from a specific disease, offenders or terrorists, the aim is to limit the ‘group’ of subjects that are the focus of the data user’s attention. Thus customers can be serviced in a more personal way, employees can be contracted who fit the company’s profile, insurance companies can profile the risk of their clients, and so forth. In performing this task, profiling also enables exclusion: customers will not be bothered by advertisements they are not interested in, and those who do not fit the profile of an offender or terrorist will be left in peace. In short, selection aims either at risk assessment enabling risk management or at providing targeted services, including AmI. In a sense, profiling thus allows a refined and smooth functioning of both government bureaucracy and market implementations.
However, this process of selection and exclusion has two drawbacks: (1) selection and exclusion can be either legitimate or illegitimate, which is not the same as legal and illegal, and profiling provides the tools for massively enhanced selection mechanisms for both legitimate and illegitimate exclusions; (2) as group profiles are often non-distributive, meaning that not all individuals in the group share all the characteristics of the profile (see par. 3.2.4), some people will be excluded on false grounds. Levi and Wall conclude that profiling technologies will have a profound impact on access to and participation in the European Information Society, as profiles
‘could possibly be used against individuals without their knowledge, thus shaping their access to facilities, goods and services, also potentially restricting their movement and invading personal space. In fact, this would regulate their access to, and participation in, the European Information Society’.
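The non-distributivity problem can be illustrated with a small sketch. The group profile, attributes and overlap threshold below are all invented for illustration: selection by partial overlap with a profile will match members who do not share all of the profile's characteristics, which is exactly how exclusion on false grounds arises.

```python
# A hypothetical group profile: attributes correlated with the target
# group, but non-distributive -- not every member has every attribute.
profile = {"visits_mosque", "bought_one_way_ticket", "paid_cash"}

members = {
    "p1": {"visits_mosque", "bought_one_way_ticket", "paid_cash"},
    "p2": {"visits_mosque", "paid_cash"},   # lacks one profile attribute
    "p3": {"bought_one_way_ticket"},        # shares only one attribute
}

def matches(attrs, profile, min_overlap=2):
    """Selection by partial overlap: anyone sharing at least
    `min_overlap` attributes is treated as fitting the profile,
    even when the full profile does not apply to them."""
    return len(attrs & profile) >= min_overlap

selected = [m for m, a in members.items() if matches(a, profile)]
print(selected)  # ['p1', 'p2']
```

Here p2 is selected despite not exhibiting every characteristic of the profile; whether that selection is a legitimate inference or an exclusion on false grounds cannot be read off from the data alone.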
4.4.3 Prototyping – stigmatisation
A group profile functions like a prototype, enabling an organisation to classify individuals as groups or categories. This makes it possible to deal with massive numbers of customers, clients, patients, citizens while also targeting them in a semi-customised manner. It enables detection by comparison with the prototype.
However, as prototypes become public knowledge, they may give rise to stigmatisation of people or groups of people. While the data user may be aware of the non-distributivity and/or polythetic nature of the group, the public image may not incorporate such complexities. Having visited a certain mosque may be one of the correlated data of a profile that promises to detect potential terrorists; this may lead to stigmatisation of all those that visit the mosque.
4.4.4 Information – confrontation
Profiling provides organisations with information that they can use to target specific persons as potential clients, customers, patients, offenders or terrorists. When an organisation starts interacting with those it takes to be members of a certain group, this will entail a confrontation. The individual who is faced with the profile may suffer the consequences of finding out things about herself she was not aware of, while for non-distributive groups this confrontation may be based on wrong inference, or communicated in terms of probabilities. A person who thinks of himself as healthy may find out about a disease he (probably) has or will (probably) develop.
4.4.5 Targeted servicing – customisation
To be targeted with only those advertisements one is likely to be interested in, seems an excellent cure for the spam we are presently confronted with. Who would not want to live in a world that recognises our habits, desires, preferences in products and services? The relationship between targeted advertisement and ambient intelligence concerns this possibility to tune a person’s environment in the broad sense to her particular – even unconscious – wishes.
However, the result of this custom-made environment may be a very limited perception of the world, which can lead to dangerous types of ignorance and social fragmentation.
4.4.6 Individual targeting – de-individualisation
As seems obvious, profiling enables individual, customised targeting. Problems may occur when a profile applies to the group but not to the targeted individual (non-distributivity; polythetic group): false positives as well as false negatives often cannot be avoided and can cause illegitimate exclusion.
But, even more interesting perhaps, profiling will produce normalisation processes:
‘profiles will begin to normalise the population from which the norm is drawn. The observing will affect the observed. The system watches what you do; it fits you into a pattern; the pattern is then fed back to you in the form of options set by the patterns; the options reinforce the pattern; the cycle begins again’.
On the one hand, those members of the group that do not fit the entire profile, may be normalised into the profile if constantly approached. This could be called de-individualisation. And worse, when individuals do fall within the profile used to target them, this might result in yet another type of de-individualisation. The limited perception referred to above can cause a loss of diversity, not only between individuals, but also within individuals. If self-identity builds on the permanent confrontation with a diversity of perspectives of others, individual targeting may give rise to the construction of groups that are focused solely on their own profiled characteristics.
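The cycle quoted above can be simulated in a few lines: the system infers a pattern from observed choices, offers only options that fit the pattern, and the constrained choices reinforce the pattern. This is a toy model with an invented catalogue, not an account of any real recommender, but it makes the normalisation dynamic concrete.

```python
import random
from collections import Counter

random.seed(0)
catalogue = ["news", "sport", "music", "film", "science"]
history = list(catalogue)  # the user starts with fully diverse interests

def offered_options(history, k=2):
    """The system fits the user to a pattern (the most frequent past
    choices) and feeds only that pattern back as available options."""
    return [item for item, _ in Counter(history).most_common(k)]

for _ in range(20):
    # The user can only choose among the options the pattern allows,
    # so each choice reinforces the pattern and the cycle begins again.
    history.append(random.choice(offered_options(history)))

print(sorted(set(history[-10:])))  # only the reinforced categories remain
```

After a few iterations the user's observable behaviour collapses onto the categories the system fed back, even though the simulated "user" picks uniformly at random among whatever is offered: the narrowing comes entirely from the feedback loop, not from any change in preferences.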