
D7.4: Implications of profiling practices on democracy

Title: Playing the ever more dangerous game of societal evolution
Reply: Bert-Jaap Koops


Transparency at the level of government and corporation: joke, yoke, hoax or hope? And ambient intelligence?

Gutwirth and De Hert informatively discuss a breadth of views on transparency as a tool, whose applications range from government security requirements to the monitoring of individuals by corporations aiming to provide customised services. Transparency requires the counterbalance of accountability, but there is also an alternative, which is, simply, opacity.

At this point the current writer, going beyond agreeing with Gutwirth and De Hert and finding Brin and Etzioni dangerously one-sided in their (reported, summarised) views, will have to become ‘racist’ and press the point further. Humans are unarguably a Better and Nobler ‘race’ than corporations and governments, and this fact should be made very explicit; in a pragmatic sense, it is just as critically important as the equality of people to one another. “Without any opacity […] dictatorship comes dangerously within reach” (Gutwirth & De Hert, section ). This refers exclusively to opacity as a right of the flesh-and-bones human being, but that distinction is not clearly made. A corporation has business interests to defend, and a government also, unfortunately, needs some secrecy for very special matters such as its military activities (i.e. for its so-called “national security”), but such interests should by no means be pitted against the foundations of our society in democracy, the rule of law, and so forth, i.e. against the fundamental rights of human beings. Thus, opacity is critical for flesh and bones, while transparency and accountability can, and should be made to, work perfectly well for (other) legal entities.

In the general case, this is an issue for lawyers and policymakers to work out. Advanced profiling technologies are a new opportunity for their users, and the public should impose appropriate rules to govern their use as a condition of accepting them at all. One requirement is absolute: total transparency whenever corporations or governments collect personal data for profiling. Technically, abuse remains possible, and it is hard for an engineer to visualise how such abuse could be detected; this is a major computer-science challenge. As far as government is concerned, the law should assist this effort in every possible way, rather than hinder it.

Of course, one can reasonably expect massive resistance from any powerful entity on which one dares to try to impose real transparency. Unless the broader population takes privacy issues around profiling technologies very seriously indeed, it is unlikely that a government would take the initiative to pass such legislation merely on the recommendations of legal theory and philosophy. (Corporate and governmental transparency as a generalised blessing is even nicer to imagine, and would be wholly desirable, of course, but is quite pointless to muse about.)

It is worthwhile, then, to consider specifically the Ambient Intelligence vision. This is the technology which, for all its wonderful advantages, truly renders those Big Brother nightmares vivid, painful and recurrent. It has not really hit the market yet, and must be strictly regulated from the very start, so as to secure its advantages while banishing its dangers permanently. It poses the most subtle and subversive threat (constant, massive collection of seemingly trivial measurements, and so on). But it is also the technology for which this author can see the most realistic solution. For when, say, a corporation performs data mining on its corporate database, how can one monitor whether the patterns it analyses imply discrimination against citizens? How can the mathematics of the data-mining process, or the decision-making strategy of executives, be controlled? More blatant violations, such as purchasing databases of profiling information from third parties, can be combated, but the fundamental issues will be hard to deal with. Ambient Intelligence, on the other hand, is by definition part of the environment of the client, the citizen, and not fundamentally part of the correspondingly sensitive private environment of the service provider. The client has the right to “know about the collection and storage of personal data, its purpose and the identity of the data controller” and to “erase, rectify, complete or amend” this data after it has been collected. Should people wish to defend themselves, it should be trivial for a lawyer to argue on behalf of the entire population that they simply do not authorise data collection unless, of course, privacy is guaranteed. In other words, the client has the power to fundamentally shape the implementation of Ambient Intelligence, or to discard it if it is inappropriate.


How can we know that a piece of data is being processed according to an acceptable process? The only answer is: by knowing exactly what the software in question is doing. There is one good way to achieve this: the open-source paradigm can solve the privacy issues of Ambient Intelligence completely. Simply require that ALL software processing any sensed data about an individual must be open source. We can then ensure that the laws on legitimate uses of profiling data are being followed, by free inspection of the source code. This universal requirement has an important implication that demands some further elaboration. If all software is guaranteed to be open source, and all applicable law must be complied with, this automatically precludes importing profiling data into a private environment (e.g. a corporate database). All processing must occur within the Ambient Intelligence environment itself, on computational elements that can be checked for conformance to the suggested rule, and all the data must be safe from being copied out of the system. Thus, sophisticated cryptographic and “secure processing” technology (computing with encrypted data without decrypting it) will need to be leveraged.
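To make the notion of “secure processing” concrete, the following is a minimal sketch of the Paillier cryptosystem, a standard example of additively homomorphic encryption: two values can be summed in encrypted form, and only the private-key holder learns the result. The text does not prescribe any particular scheme; this toy implementation, with deliberately tiny primes, is offered purely as an illustration of the idea of computing on data without decrypting it.

```python
import math
import random

# Toy Paillier cryptosystem (illustration only): additively homomorphic,
# so a sum can be computed over ciphertexts without ever decrypting them.
# The tiny primes below are for demonstration; real keys are >= 2048 bits.

def keygen(p=17, q=19):
    n = p * q
    lam = math.lcm(p - 1, q - 1)       # Carmichael function lambda(n)
    mu = pow(lam, -1, n)               # with g = n + 1, mu = lam^-1 mod n
    return (n,), (n, lam, mu)          # (public key, private key)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:         # r must be invertible modulo n
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n     # L(x) = (x - 1) / n
    return L * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 5), encrypt(pub, 7)
c_sum = c1 * c2 % (pub[0] ** 2)        # homomorphic addition of plaintexts
print(decrypt(priv, c_sum))            # 12
```

In an Ambient Intelligence setting, the multiplication of ciphertexts could be carried out by an untrusted environment component, which would thereby contribute to a computation over sensed data without ever seeing the data itself.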
There is no space here to analyse this proposal technically, but some behavioural examples of such a system can be given, which indicate the technical requirements to any appropriately specialised engineer:

a) A client walks into a shop and automatically receives a suggestion of the most appropriate products to consider. The shop itself, and the provider of the suggestion service, never find out which products were suggested to which clients; but the system itself can record whether the proposal was accepted.

b) A company considering the launch of a new product can use data-mining tools to predict the product’s performance based on records of consumer behaviour, but cannot infer any of the actual recorded behaviour measurements. Compare this with limited access rights to a database, which allow a user to query aggregate results while keeping the actual data records secret (even from inference), and extend this existing technology to a distributed-computing context.

c) A developer working on an Ambient Intelligence component maliciously includes illegal code in the software with the intention of doing as much damage as possible. Locally, some damage can be done: for instance, the controller for a sensor could maliciously broadcast unencrypted sensor readings. However, previously encrypted data cannot be compromised; inference of sensitive data is prevented by a system kernel and cannot be achieved without changing the very core of the software (and even this can be prevented with Trusted Computing hardware); intercepting communications between other system components cannot be (usefully) achieved; and, of course, the illegal code will be detected very soon, and the responsible person can be identified with confidence.
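Example (b) above rests on an aggregate-only access interface: a querier may obtain statistics over the records but never the records themselves. The sketch below illustrates one common safeguard, refusing aggregates over very small groups, since a statistic over one or two records effectively discloses them. The class name, the threshold of five, and the record fields are all hypothetical choices for this illustration.

```python
from statistics import mean

# Hypothetical aggregate-only store: callers may request statistics over
# the record set, but individual records are never exposed. Aggregates
# over very small groups are refused, since they would permit inference.

class AggregateOnlyStore:
    MIN_GROUP = 5  # illustrative threshold against small-group inference

    def __init__(self, records):
        self._records = records  # kept private to this object

    def average(self, field, predicate=lambda r: True):
        matched = [r[field] for r in self._records if predicate(r)]
        if len(matched) < self.MIN_GROUP:
            raise PermissionError("group too small: aggregate refused")
        return mean(matched)

store = AggregateOnlyStore([
    {"age": a, "spend": s}
    for a, s in [(25, 40), (31, 55), (29, 48), (42, 70), (37, 62), (51, 90)]
])
print(store.average("spend"))  # mean spend over all six records
# store.average("spend", lambda r: r["age"] > 50)
#   -> raises PermissionError: only one record matches
```

A production system would need stronger defences (e.g. against intersecting overlapping queries), but the behavioural contract is the one example (b) describes: aggregate results out, raw records never.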


Angelos Yannopoulos 

(Institute of Communication and Computer Systems)

