
D2.13: Virtual Persons and Identities


 

The virtual person: emerging legal lacunae?

What happens if machines begin to act, e.g., cause harm or initiate transactions? As in the case of animals, machines are treated as legal objects: they have no power to act in law and cannot be charged with liability, either civil or criminal. If an animal causes harm, the possessor of the animal is usually liable, mostly as a matter of strict liability (this of course depends on the jurisdiction). If a horse wins a race, the legal obligation to provide the prize money is directed towards the owner of the horse, not towards the horse itself. If a dog bites a child, it may be killed pursuant to a court order to that effect; however, this is not considered a punishment but the destruction of a dangerous object.

In this section we will investigate the need for, and the utility of, providing legal subjectivity to “virtual persons”, such as machines, software programs, networked artificial agents, etc. As announced in the introduction, we will draw on Danièle Bourcier’s well-informed article “De l’intelligence artificielle à la personne virtuelle: émergence d’une entité juridique?” (“From artificial intelligence to the virtual person: the emergence of a legal entity?”).

The virtual, the fictional and the artificial

Literature has created fictional characters with a mind of their own, and even if the author seems in charge, literary theory has accepted that readers may interpret a novel in ways the author never anticipated. Some robots have been created to simulate human beings, or at least their brains, creating artificial intelligence or at least aiming for it.

At this moment fictional characters and artificial intelligence are being combined in virtual communities – developing a life of their own in cyberspace – presenting us with a new phenomenon: the virtual person. This virtual person seems to act, directed by the human person who uses it, while being constrained by the character it develops in the virtual world, which is beyond its control.

Profiles, expert systems, virtual persons

Profiling is a statistical technique that allows a software program to find significant correlations in a mass of perhaps seemingly trivial data. We refer to the reports written within Workpackage 7 and the ensuing volume on Profiling the European Citizen for further exploration of the process of profiling. In the end, personalized profiles seem to constitute sophisticated representations of a particular person.

Expert systems are developing into much more than ordinary search engines, as they seem to represent certain cognitive functionalities particular to the human mind. Especially those expert systems that are entrusted with autonomic decision making in fact represent one of the most crucial capacities of human beings: making rational decisions after taking into account the content of a specialized domain of knowledge. One may wonder to what extent such a system in fact replaces the professional whose knowledge it has integrated, and what this means for the issue of legal subjectivity. Who is responsible for the decisions taken by an expert system?

When we move on to what Clarke called the digital persona, and what Bourcier calls the virtual agent, we arrive at the software program that can act in our name, taking trivial queries or decisions out of our hands and providing us with goods and services we are assumed to prefer. Such digital butlers in fact represent us and may conclude contracts in our name and even commit torts that will be attributed to us, because the virtual agent has no legal subjectivity. In the vision of Ambient Intelligence (AmI) we may expect such digital butlers to negotiate with a host of digital agents representing the service providers that “people” the smart environment. What happens if the computer scientists who write the programs for such digital agents resort to autonomic computing, meaning that the software reprograms itself in unpredictable ways in order to repair possible faults and enhance its performance? Who is to be blamed for harm caused, or for contracts that we would never have concluded had we given them our conscious attention? Who is to be legally liable: the author of the program, the service provider who took the risk of involving such an independent machine (and enjoying the profits), or the user of the smart environment who took the risk of being out of control (and suffering the consequences)?

Intermezzo: Legal protection against automatic decision making

Before raising the question whether the virtual person may need a new legal status, Bourcier discusses an important legal protection against automatic decision making. As discussed extensively in FIDIS Deliverable 7.5, Chapter 13, Art. 15 of the Data Protection Directive (95/46/EC) grants individual persons the right to resist the application of decisions that produce legal effects concerning them or otherwise significantly affect them (for instance in the case of insurance, employment, etc.). Some jurisdictions have gone even further and simply forbidden such decisions to be taken by machines.

We should realize that an AmI environment depends on recurrent real-time decisions of precisely this sort, e.g., contracts concluded about services provided, all based on sustained monitoring, refined group profiling, and customization based on sophisticated segmentation of (potential) customers. However, as is often the case, the second paragraph of Art. 15 provides several grounds on which the applicability of the first paragraph is excluded, creating a very ambiguous grey zone. Whether autonomic computing, as inherent in unobtrusive, pervasive and ubiquitous computing, will be illegal or will fall under the exceptions of the second paragraph is as yet unclear. One could argue that once smart technologies become liable for harm caused and are recognized as legal subjects, the need to protect human beings against their automatic decisions may diminish, but it remains to be seen whether such an upgrading of the legal status of virtual agents would in fact solve problems without creating even more serious ones.

 
