
D7.7: RFID, Profiling, and AmI

The framework of democracy and rule of law

Title: PROFILING, SELF-IDENTITY AND ‘THE INTERNET OF THINGS’
Constitutional democracy in a tagged world

 

Profiling, self-identity and ‘The Internet of Things’

Profiling and self-identity

The problem with profiling is that it infers knowledge from data, and that even in the case of group profiling on the basis of anonymised data the impact of the profiles can be considerable. In the case of anonymisation, Data Protection legislation is not applicable, which means that a data subject has no means to find out about the profiles that may affect the risks and opportunities he or she is subject to. Profiling allows targeted servicing, fine-tuned price discrimination (e.g. in insurance), or blacklisting of categories of people in the field of criminal investigation and intelligence. The freedom to take part in private and public networks may increasingly be granted or denied on the basis of inferred profiles, while at this moment citizens do not have the technological tools to screen which profiles have been applied to them, even if Data Protection legislation were applicable.
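The mechanism can be made concrete with a minimal, hypothetical sketch: a group profile is built from anonymised records (no identifiers, hence outside the scope of Data Protection legislation), yet it can still be applied to a named individual who merely matches the group attributes. All names, attributes and figures below are invented for illustration.

```python
# Hypothetical sketch: group profiling on anonymised data.
# The records contain no identifiers, only attributes and an observed outcome.
anonymised_records = [
    {"postcode_area": "NW1", "age_band": "18-25", "claims": 3},
    {"postcode_area": "NW1", "age_band": "18-25", "claims": 2},
    {"postcode_area": "NW1", "age_band": "40-60", "claims": 0},
    {"postcode_area": "SE9", "age_band": "18-25", "claims": 0},
]

def build_group_profile(records, keys):
    """Aggregate anonymised records into an average outcome per group."""
    totals = {}
    for r in records:
        group = tuple(r[k] for k in keys)
        n, s = totals.get(group, (0, 0))
        totals[group] = (n + 1, s + r["claims"])
    return {g: s / n for g, (n, s) in totals.items()}

profile = build_group_profile(anonymised_records, ["postcode_area", "age_band"])

# The profile is then applied to a *named* individual who never
# appeared in the anonymised data at all:
applicant = {"name": "J. K.", "postcode_area": "NW1", "age_band": "18-25"}
risk = profile[(applicant["postcode_area"], applicant["age_band"])]
surcharge = risk > 1.0  # price discrimination by group membership alone
```

The applicant is affected purely through group membership; without access to the profile, the data subject has no way to contest the inference.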

 

Autonomic profiling, AmI and self-identity

In an AmI environment the adaptive capacities of the environment depend on adequate profiling. The aim of AmI is to allow seamless and autonomic adjustment to the consumer’s inferred habits and desires. The point is to anticipate these habits or wishes, which means that one does NOT provide a profile on the basis of what one thinks one’s preferences to be, but that one trusts the system to infer them and to adjust the environment accordingly. Evidently, this seems to entail a loss of control, because the preferences are not deliberately articulated by the end-user. Referring to the introduction to this deliverable (section 2.1), autonomic profiling aims to relieve us of deliberate reflection on seemingly trivial logistics, just as our autonomic nervous system does. We do not get upset because a change in heart rate has been decided upon without our consent, and likewise we should not complain that doors open before we reach them and that room temperature is tuned to our liking without deliberate manual input. However, the proliferation of data leaked, collected, stored and aggregated in an AmI environment, and the subsequent proliferation of profiles, may have an impact beyond anticipative logistics.
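The idea of autonomic adjustment can be sketched in a deliberately simplified toy example (hypothetical code, not a description of any actual AmI system): the environment logs the occupant’s past deliberate adjustments and later anticipates them without any manual input.

```python
# Hypothetical sketch of autonomic profiling in an AmI environment:
# past deliberate choices are logged, inferred and then applied
# without the occupant articulating any preference.
from statistics import median

class AdaptiveRoom:
    def __init__(self):
        self.adjustment_log = []   # temperatures the occupant set manually
        self.temperature = 20.0    # default setting

    def manual_set(self, temp):
        """The occupant deliberately overrides the temperature; logged."""
        self.temperature = temp
        self.adjustment_log.append(temp)

    def occupant_enters(self):
        """Autonomic adjustment: anticipate the preference, do not ask."""
        if self.adjustment_log:
            self.temperature = median(self.adjustment_log)

room = AdaptiveRoom()
for t in (21.5, 22.0, 21.0):   # past deliberate choices
    room.manual_set(t)
room.occupant_enters()          # the environment now adapts on its own
```

The occupant never states a preference; the system infers one from logged behaviour, which is precisely where the apparent loss of control arises.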

 

This does not depend only on possible abuse of profiles, such as use by a person or organisation that is not authorised to employ them, or use for an illegitimate objective (one for which consent has not been given, or one that is illegitimate in itself, like ethnic discrimination). More importantly, the focus should be on the extent to which it is (im)possible to detect whether or not profiles are abused, and whether or not a citizen can effectively counter such abuse. For this reason Data Protection legislation is first of all a tool of transparency, aiming to empower citizens to guard their liberty and freedom. To the extent that this transparency is a legal right but a technological impossibility, the self-identity of citizens is at stake.

 

The Internet of Things: The end of constitutional democracy?

In recent years books have appeared announcing the death of privacy. The question mark behind this heading indicates that this section is not a funeral announcement for constitutional democracy. However, widespread introduction of RFID systems* beyond supply chain management may unintentionally provide an infrastructure with totalitarian overtones, which may be difficult to keep in check once implemented. Totalitarianism is equivalent to the death of the private sphere, due to pervasive colonisation by the state. The cliché of Big Brother has been invoked to alert citizens to the loss of their privacy caused by permanent spying by state authorities. Legal scholar Solove indicates that this metaphor should be complemented with another that more aptly discloses the unintentional rather than deliberate, the omnoptical rather than panoptical, and the anonymous rather than personalised enforcement mechanisms that may emerge in the wake of an ‘Internet of Things’. In The Digital Person, Solove (2004) discusses Kafka’s The Trial as a more adequate metaphor for the network of data controllers that hold our data and our profiles, trading them to the highest bidder and thus allowing ever more precise profiling of individual people. The intention of these data controllers is not to target a particular person, nor to control an entire society. Their objective is more modest: to make a profit by tuning their services to the inferred personalised preferences of as many customers as possible. However, the availability of such profiles will suit intelligence and criminal investigation as well as insurance companies and social security or health care institutions. If the ‘Internet of Things’ takes off, any data from one’s past may be correlated at any time, anywhere, to produce a profile that allows the profiler to anticipate our actions, or our mental or physical state. The right to oblivion will be lost. Like Joseph K. in Kafka’s The Trial, we may become aware that anything we do may be used against us at some point in time, and we may begin to normalise our actions in order to prevent future harassment.

As indicated before, the point is not whether government authorities or insurance companies actually abuse profiles to trace and track us for illegitimate purposes, but whether we have the means to detect such abuse and to restrict access to our pastness. In legal terms one could ask to what extent we need a legal right to oblivion, to counter the emergence of detailed ‘dossiers’ (Solove, 2004) containing all the brute facts of our lives. Or, rephrased in terms of narrative self-identity (Ricoeur, 1990; Hildebrandt, 2006): to what extent must a person be able to articulate the story of her life, in which these brute facts are placed in the context of a first-person perspective, having decided what to forget and what to remember? It should be obvious that complete control over one’s narrative identity is both impossible and undesirable, but at the same time it should also be obvious that a balance is needed that allows a person to reconstitute her identity without being judged entirely in terms of her machine-readable past.

 

Solove’s analysis confronts one aspect of the implications for constitutional democracy: fear of being charged with unknown accusations on the basis of (long) past machine-readable events, (physical or mental) states or interactions. Another unintentional consequence for a viable democracy is described by Cass Sunstein (2001) in his Republic.com. This is again not about actual abuse of profiles (which everyone would oppose), but about the impact of profiling on the identity and agency presumed by democratic representation and participation. Sunstein argues that democracy is based on an effective practice of free speech, for which he identifies two distinctive requirements.

  • First, he claims, ‘people should be exposed to materials that they would not have chosen in advance. Unplanned, unanticipated encounters are central to democracy itself’ (Sunstein, 2001:8).  

  • Second, he claims, ‘many or most citizens should have a range of common experiences. Without shared experiences, a heterogeneous society will have a much more difficult time in addressing social problems’ (Sunstein, 2001:9). 

Sunstein discusses profiling in terms of filtering, especially ‘collaborative filtering’ (a type of group profiling) and ‘personalised shopping’ (based on a type of personalised profiling). While he acknowledges the advantages of the ensuing customisation, he also warns of its disturbing consequences: ‘to encourage people to narrow their horizon, or to cater to their existing tastes rather than to form new ones’ (Sunstein, 2001:26). Democracy should foster the unexpected to keep its citizens alert and provide room for change, and democracy should celebrate common experiences even if they do not all fit one’s individual preferences. Like Solove, but from a different perspective, Sunstein describes the potentially totalising effect of an ‘Internet of Things’ that caters to everyone’s inferred preferences. Unlike Solove, he demonstrates that people are being normalised not under the pressure of fear of future incrimination but, on the contrary, thanks to a totalising type of customisation. Before we have time to consider alternative choices, the environment already caters to what it infers from our past interactions. Living in a filtered environment may isolate us and may leave us helpless when confronted with the polluted space of unfiltered encounters. Deliberative democracy needs such unfiltered confrontations to get to the bottom of things before deciding how to proceed.
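The narrowing effect of collaborative filtering that Sunstein describes can be illustrated with a deliberately simplified sketch (hypothetical data and logic, not any real recommender system): a user is matched to the most similar other user, and only items that this like-minded neighbour already rates are ever suggested, so existing tastes are reinforced rather than widened.

```python
# Hypothetical sketch of collaborative filtering as horizon-narrowing:
# items are recommended only from the most similar user's existing choices.
ratings = {
    "alice": {"politics_A": 1, "sports": 1},
    "bob":   {"politics_A": 1, "sports": 1, "politics_A_extra": 1},
    "carol": {"politics_B": 1, "arts": 1},
}

def recommend(user, ratings):
    """Recommend the items liked by the most similar other user."""
    seen = set(ratings[user])
    best, best_overlap = None, -1
    for other, items in ratings.items():
        if other == user:
            continue
        overlap = len(seen & set(items))   # similarity = shared choices
        if overlap > best_overlap:
            best, best_overlap = other, overlap
    # Only items from the like-minded neighbour are ever suggested:
    return sorted(set(ratings[best]) - seen)

recommend("alice", ratings)  # more of the same; 'politics_B' never appears
```

The unplanned, unanticipated encounter Sunstein deems central to democracy is exactly what this matching step filters out: an item liked only by dissimilar users can never reach the recommendation list.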

 

 

fidis-wp7-del7.7.RFID_Profiling_AMI_02.sxw
Denis Royer