Analysis of the legal framework

Data Protection Directive

Applicability

A look at the different scenarios makes clear that Ambient Intelligence is impossible without the processing of personal data. But what exactly is meant by personal data? There is some uncertainty and difference of opinion regarding this concept, which may affect the proper applicability of the existing data-protection framework in different contexts. For this reason, the Article 29 Working Party has recently published an opinion on the concept of personal data.

In the Data Protection Directive, personal data are defined as  

any information relating to an identified or identifiable natural person. An identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity.  

There are some exemptions to the obligations regarding the processing of personal data. Apart from exemptions foreseen in Community law, the exemptions under Article 3 take into account the manner of processing (processing in manual, non-structured form is exempted). The processing of personal data carried out by a natural person in the exercise of activities which are exclusively personal or domestic is also excluded from the Directive. It is not clear whether the Directive applies when all data are stored and mined within the context of the home and the processing is done by intelligent agents (e.g., in domotics applications).

The questions remain what constitutes “any information relating to an identified or identifiable person” and what an “identifiable person” is. Information is a very broad term. Any sort of statement about a person is personal data, although not necessarily true or proven. The concept of personal data includes information available in whatever form: it can be a sound, a smell, or an image. Images of individuals captured by a video surveillance system can be personal data to the extent that the individuals are recognisable.

In general terms, information can be considered to “relate” to an individual when it is about that individual. It is not always clear whether certain information relates to an individual. In some situations, the information conveyed by the data concerns objects rather than individuals. Those objects usually belong to someone or may be the subject of particular influence by or upon individuals, or they may be physically or geographically close to individuals. It is then only indirectly that the information can be considered to relate to those individuals. The Working Party also pointed to the fact that the same piece of information may relate to different individuals at the same time, depending on what element is present with regard to each one.

A person is identified, firstly, when the information is sufficient to immediately make clear who the person is. Name and first name, or the National Registry Number, can be considered as data that directly identify a person. Secondly, a natural person is also identifiable when, although the person has not been identified yet, it is possible to identify him. According to Recital 26 of the Directive, when determining whether a person is identifiable or not, “account should be taken of all means likely reasonably to be used either by the controller or by any other person to identify the said person”. This means that a mere hypothetical possibility to single out the individual is not enough to consider the person identifiable, but a realistic possibility is sufficient.

The notion of identifiability is going to be a crucial one in the field of emerging technologies and AmI environments. The possibilities to link objects and natural persons, which allow the profiling of the latter and enable their tracking and tracing, will multiply. How broadly shall the term identifiability be defined? Is the civil identity of a person necessary in order for him to be identifiable, or would it be enough to be able to declare with high certainty that it is the same person we refer to in different contexts, e.g., a returning customer of a supermarket who can be “identified” via the RFID tag of his watch? Unfortunately, the Opinion of the Working Party did not shed light on such unclear cases, which are already a reality in the field of RFID technology and which will be exponentially multiplied in an AmI environment.

And while every developer of AmI technologies and new applications is calling for a clarification of the term personal data, the answer provided by the Article 29 Working Party seems too relative to provide legal certainty in the present situation. The choice between a purpose-based and a contextualised approach has already been at the core of the legal debate, especially with regard to sensitive personal data. The Working Party in its Opinion combines both approaches, meaning that whether the processing of particular data concerns personal data depends on both the purpose and the context, while even allowing the same data to count as ‘personal data’ with regard to one data controller while not counting as ‘personal data’ with regard to another. However, as AmI produces highly contextualised knowledge and information, whereas purposes may become clear only long after the data have been recorded, the approach of the Working Party does make sense. The point is that written law cannot provide any kind of real-time legal certainty in the case of such a casuistic application. As discussed in section 3.5.2, the concept of contextual integrity may provide new impetus to the protection of personal autonomy and informational self-determination in an AmI environment, but this would require practical, technological operability of the concept, as advocated by the vision of AmL.

The Working Party makes clear that the technological state of the art at the time of the processing, as well as the future technological possibilities during the period for which the data will be processed, have to be considered. Identification may not be possible with all the means likely reasonably to be used today. If the data are intended to be stored for a month and identification is not anticipated to be possible during that lifetime of the information, they should not be considered personal data. However, if they are intended to be kept for 10 years, the controller should consider the possibility of identification that may occur after 9 years, which would make them personal data at that moment. Since technology develops at great speed, in many cases it will not be possible for the controller to “guess” the means that might be used within some years, let alone after 10 or more years. Especially in AmI environments, where more and more technologies are interlinked, it is likely that new possibilities for identifying people will emerge in due time; hence, storing data for a period of several years involves a considerable chance of future identification of the natural persons involved. Should this, however, imply that AmI data processing should already conform to the data-protection principles, i.e., a “just in case” applicability of the data-protection legislation?

One relevant factor for assessing “all the means likely reasonably to be used” to identify the persons will in fact be the purpose pursued by the data controller in the data processing. In cases where the purpose implies the identification of individuals, it can be assumed that the controller or any other person involved will have the means “likely reasonably to be used” to identify the data subject. This can have important implications in an AmI environment. For example, in the context of video surveillance, actual identification will only take place when certain conditions are fulfilled, but because this identification is the purpose of the video surveillance, the whole application will have to be considered as processing of personal data.

There appears to be a division among Member States on whether or not to use a relative approach to the concept of personal data, in the sense that data are considered personal only for someone who can link the data to an identified individual. The laws in some Member States make clear that, for instance, encoded or pseudonymised data are ‘personal’ for someone who has access to both the data and the decoding key, but are not personal for someone without access to the key. The Austrian law refers to such data as ‘indirectly identifiable data’, while other laws, like the German law, add definitions of pseudonymised data. The UK law considers only “data relating to a living individual who can be identified from those data or (…) from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller” as personal data. In other Member States, like Belgium, in principle all data that can be linked to an individual are regarded as ‘personal’, even if the data are processed by someone who cannot make that link. The laws in several other Member States are ambiguous in this respect. Usually, the data-protection authorities tend to agree with the Belgian approach, but they are willing to be flexible with regard to the processing of non-immediately identifiable data. In the case of such processing, whether the laws apply depends on the probability of the data subject being identified, with the nature of the data taken into account. From this, it follows that diverging use is made of Recital 26 of the Directive: some emphasise the term ‘likely reasonably to be used’, and others rather rely on the expression ‘to be used either by the controller or by any other person’.

The opinion of the Working Party also addressed the issue of pseudonymised data. It made clear that retraceably pseudonymised data may be considered as information on individuals who are indirectly identifiable. The use of a reversible pseudonym means that it is possible to trace back to the individual, so that his identity can be discovered, but only under predefined circumstances. In that case, data protection rules apply, but the risks at stake for the individuals with regard to the processing of such indirectly identifiable information will most often be low. That is why the Working Party suggests that these rules can justifiably be more flexibly applied than in cases where information on directly identifiable individuals is processed. With regard to key-coded data in statistical and pharmaceutical research, the explanation of the Working Party is somewhat confusing. However, it seems to come down to the view that if all technical and organisational measures have been taken to assure that the identification of the data subject is not expected or supposed to take place under any circumstance, the Data Protection Directive is not applicable. The Working Party does not want to go as far as the Belgian legislator did, stating that when data somehow can be linked to an individual, they are regarded as ‘personal’ even if the data are processed by someone who cannot make that link.
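To make the notion of retraceable pseudonymisation more concrete, consider the following minimal sketch (a hypothetical Python illustration; the class and its names are invented, not drawn from the Directive or the Opinion). A controller replaces direct identifiers with key-coded pseudonyms; only the holder of the key table can trace a pseudonym back to the individual, and only under predefined circumstances:

    import hmac, hashlib, secrets

    class Pseudonymiser:
        """Replaces direct identifiers with key-coded pseudonyms.
        The holder of the key table can re-identify ("retraceable");
        a recipient of the pseudonymised records alone cannot."""

        def __init__(self):
            self._key = secrets.token_bytes(32)   # held by the trusted key holder only
            self._key_table = {}                  # pseudonym -> original identifier

        def pseudonymise(self, identifier: str) -> str:
            pseudonym = hmac.new(self._key, identifier.encode(),
                                 hashlib.sha256).hexdigest()[:16]
            self._key_table[pseudonym] = identifier
            return pseudonym

        def reidentify(self, pseudonym: str) -> str:
            # Permitted only under predefined circumstances, e.g. a medical emergency.
            return self._key_table[pseudonym]

    coder = Pseudonymiser()
    record = {"subject": coder.pseudonymise("Jan Janssens"), "blood_pressure": "130/85"}
    # A researcher sees only the pseudonym; the key holder can still trace it back.

For the recipient of such records the data are, on the relative approach, not personal; for the key holder they clearly are.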

The Working Party states that whether information can be considered anonymous or not depends on the circumstances. A case-by-case analysis should be carried out, with particular reference to the extent to which the means likely reasonably to be used for identification, as described in Recital 26 of the Directive, are available. This statement, however, does not make clear what has to be done when data are collected anonymously but – as is often the case with profiling technologies – the combination of several anonymous data afterwards leads to identification. Such a vague approach can have a crucial impact on the processing of biometric data, which are in principle used in a pseudonymous or even anonymous way. When biometric data are backtracked to the individual they are linked to, the consequences for the latter can be immense, given that they can be used as a source to reveal a lot of information about him. In cases where it cannot be guaranteed that backtracking to the individual the data refer to is technically ruled out, the biometric data shall be adequately protected.

In an AmI environment, biometric data will often be used. These data may be defined as biological properties, physiological characteristics, behavioural traits or repeatable actions where those features or actions are both unique to that individual and measurable, even if the patterns used in practice to technically measure them involve a certain likelihood. Typical examples of such biometric data are fingerprints, retinal patterns, facial structure and voices, but also hand geometry, vein patterns and some deeply ingrained skill or other behavioural characteristic (such as a handwritten signature, keystroke patterns, gait, or speech). A wide and unrestricted use of biometrics raises concerns with regard to the protection of the privacy of individuals. Though biometric data are not by definition “information relating to a natural person”, in the context of biometric identification the person is generally identifiable, as the biometric data are usually used for identification or authentication of the data subject. It follows, in the Working Party’s opinion, that biometric data fall under the definition of personal data within the meaning of the Directive. Consequently, their processing must take place in accordance with the principles and procedures stipulated in the Directive.
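Because the patterns used to measure biometric features “involve a certain likelihood”, matching is done by comparing feature vectors against a threshold rather than by testing equality. The following sketch illustrates this probabilistic character (the feature values and threshold are invented for illustration):

    import math

    def match_score(template_a, template_b):
        """Cosine similarity between two biometric feature vectors.
        Two captures of the same person never yield identical features,
        so a threshold, not equality, decides whether they 'match'."""
        dot = sum(a * b for a, b in zip(template_a, template_b))
        norm_a = math.sqrt(sum(a * a for a in template_a))
        norm_b = math.sqrt(sum(b * b for b in template_b))
        return dot / (norm_a * norm_b)

    MATCH_THRESHOLD = 0.95   # assumed operating point: trades false accepts
                             # against false rejects

    enrolled = [0.12, 0.80, 0.55, 0.33]   # hypothetical gait or keystroke features
    probe    = [0.10, 0.78, 0.58, 0.31]
    same_person = match_score(enrolled, probe) >= MATCH_THRESHOLD   # True here

The choice of threshold determines how often different persons are wrongly “identified” as the same, which is precisely why identification by biometrics is a matter of likelihood rather than certainty.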

Effectiveness and adequacy

This section is not a full analysis of the Data Protection Directive; instead, it presents the main problems that can arise when the Directive is applied to an AmI environment.

Consent 

Personal data may only be processed in a legitimate way if the data subject has unambiguously given his consent. In the absence of unambiguous consent, processing is allowed if it is necessary for:

        1. the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract, or  

        2. compliance with a legal obligation to which the controller is subject, or  

        3. protecting the vital interests of the data subject, or  

        4. the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller or in a third party to whom the data are disclosed, or  

        5. the legitimate interests pursued by the controller or by the third party or parties to whom the data are disclosed, except where such interests are overridden by the interests for fundamental rights and freedoms of the data subject.

Without prejudice to any other possibilities, the processing of personal data in an AmI environment may need to be justified by the consent of the data subject, which can be defined as “any freely given, specific and informed indication of his wishes”.

The requirement of informed consent presents particularities in AmI. Studies have shown that users of AmI environments face basic difficulties not only in realising the involvement of devices, which in most cases they cannot see or feel, but also in understanding their functionality and consequently their use, as well as the actual collection of data revealing personal information about their preferences, location, etc. As Beckwith clearly states, “reliable, inconspicuous sensing of personal information is problematic because users do not always understand the extent or methods of data collection and thus can not adequately evaluate privacy issues”. In AmI environments, the vast majority of decisions are taken by automated procedures and are based largely on automated profiling, an issue that is dealt with in detail below under 4.3.1.2 (non-discrimination).

Consent in AmI is not given by signing a contract, as is usually the case in conventional transactions that entail processing of personal data. It can be given by pressing a button or clicking an option, but the issue becomes much more complicated when no tactile interface is available and the expression of consent in such a way is thus physically impossible. Can the processing of personal data be based on one of the other grounds mentioned in Article 7 of the Data Protection Directive, so that the consent of the person entering the AmI environment could be considered redundant? Some AmI applications are designed to assist elderly people, for instance, and in this case it can be argued that the processing of the personal data is necessary in order to protect the vital interests of the data subject and that the consent of the latter is therefore not needed.

Furthermore, the processing of personal data can be justified on the ground that the “processing is necessary for the purposes of the legitimate interests pursued by the controller or by the third party or parties to whom data are disclosed, except where such interests are overridden by the interests for fundamental rights and freedoms of the data subject […]”. For the application of this ground, national law must be examined in detail in order to determine its applicability and to define what qualifies as a legitimate interest, as in this case a balance between the interests of the data subject and those of the controller has to be struck. It is interesting to ask whether the pursuit, promotion and marketing of legitimate businesses can be considered a legitimate interest.

The existence of multiple devices in a system and the necessity of consenting to the collection and processing of personal data by all of them can become cumbersome for the user. Profiles stored in a personal assistant, such as in the case of David in scenario II, who uses a family preferences agent coupled to selectable identities, could provide situation-specific consent or dissent. Evidently, consent may not always be necessary, because the grounds referred to above apply. For example, data can be collected to warn passers-by taller than 2.15 metres that a small passage is just around the corner (ground d or f, Article 7), while one can expect service providers to conclude contracts with consumers for long-term and wide-ranging services (ground b, Article 7). However, one could question to what extent the consent given at the time of concluding such contracts can be considered free consent. AmI technologies will immensely enhance the quantitative and qualitative possibilities of monitoring and extend it to areas that are currently out of reach of permanent and unobtrusive surveillance. Customers will most probably not be aware of the scope and impact of the data that will be collected and the types of profiles that can be inferred from them. When individuals wish to enjoy the benefits of such AmI technologies, how can they be informed about the constant collection and processing of their personal data? How can the traditional notion of consent be adjusted to the developing world of AmI technologies? For those parts of the population who do not want to be permanently observed, there is almost no possibility to escape the surveillance infrastructure. Can a “consent free of doubts” to something that is practically unavoidable count as a valid part of an individual or collective agreement at all? And, even if consent is not necessary, the grounds for legitimate processing of personal data will continuously shift, depending on the context, requiring constant alertness of both the user and the provider. This would render the whole concept of unobtrusive proactive anticipation invalid. AmL could provide some answers here, meaning for instance that one carries a PDA that seamlessly negotiates consent while applying mandatory parts of the Directive (checking relevant grounds for legitimate collection of data), as sketched below.
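As a purely hypothetical sketch of such seamless negotiation, a user agent could carry a machine-readable privacy policy and match it against the declared processing ground of each AmI service it encounters. The data structures and ground labels below are invented for illustration, loosely following the grounds of Article 7 of the Directive:

    from dataclasses import dataclass

    # Labels loosely mirroring the grounds of Article 7 (a)-(f).
    LAWFUL_GROUNDS = {"consent", "contract", "legal_obligation",
                      "vital_interests", "public_task", "legitimate_interests"}

    @dataclass
    class ProcessingRequest:
        controller: str        # identity of the data controller
        purpose: str           # declared purpose of the processing
        ground: str            # claimed Article 7 ground
        data_categories: set   # e.g. {"location", "preferences"}

    @dataclass
    class UserPolicy:
        allowed_purposes: set      # purposes the user has pre-approved
        blocked_categories: set    # data the user never wants to share

    def negotiate(request: ProcessingRequest, policy: UserPolicy) -> bool:
        """Return True if the agent may consent on the user's behalf."""
        if request.ground not in LAWFUL_GROUNDS:
            return False               # no recognised legal basis: refuse
        if request.ground != "consent":
            return True                # another Article 7 ground is claimed; log it
        if request.data_categories & policy.blocked_categories:
            return False               # the user never shares these data
        return request.purpose in policy.allowed_purposes

    policy = UserPolicy(allowed_purposes={"restaurant_recommendation"},
                        blocked_categories={"health"})
    request = ProcessingRequest("mall.example", "restaurant_recommendation",
                                "consent", {"location", "preferences"})
    assert negotiate(request, policy)

Such an agent would of course only displace, not solve, the problem of free and informed consent: someone still has to configure the policy, and the claimed grounds would have to be verifiable.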

 

Purpose specification and proportionality 

Personal data must be collected for specified, explicit, and legitimate purposes and not further processed in a way incompatible with those purposes.

The aim of AmI technologies is not to serve single purposes, but to support the users in a variety of more or less foreseeable situations. The purpose of data collection lies entirely in the accumulation of as much information about individual behaviour patterns and preferences as possible. The context in which this knowledge is going to be applied remains necessarily unclear at the time of collecting the data.

It is difficult to comply with the rule that further processing of personal data must be compatible with the purpose specified at the time of data collection if there is no initial purpose. And even if there is one, collected data may serve other applications or purposes that are discovered only later. The creation and use of databases may often create additional benefits, for example in the case of profiling. Apart from numerous technical problems, limiting the transfer and use of data would mean that every attempt to enforce parts of this principle curtails the benefit and usability of Ambient Intelligence infrastructures. Cas points to the fact that benefits will be limited, because an invariable assignment of data to applications limits the adaptability and learning abilities of the system.

It also has to be kept in mind that the purpose-specification principle restricts the possibility to link different forms of processing and databases for profiling objectives. The purpose-specification principle is definitely at odds with the logic of interoperability and availability of personal data. When we look at the different scenarios, it becomes clear that profiling is crucial in an AmI environment, for example for the dating service in scenario I. The purpose-specification principle makes this difficult, perhaps impossible. In the context of profiling, special consideration also needs to be given to Art. 15 (1) of the Data Protection Directive. This article gives the right to every individual

not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc. 

This prohibition seems equally at odds with the logic of adaptive autonomic profiling, as discussed, since most decisions will be taken by machines in a process of machine-to-machine communication. 

Personal data must also be adequate, relevant and not excessive in relation to the purposes for which they are collected or further processed. This is the data-minimisation principle. It is clear that in an AmI environment the amount, but also the quality, of the data that are collected will seriously increase, thus infringing the data-minimisation principle. Even if only part of this huge amount of data is stored or analysed, this principle will be fully turned upside down. The whole concept of AmI is data maximisation, i.e., to collect as much information about individuals as possible in order to be able to offer them customised services. Suggestions have therefore been made to shift the focus away from merely reducing the amount of data collected. Data protection is a tool for empowering the individual in relation to the collection and processing of personal data, but it should not be turned into an insurmountable obstacle to progress and technological development. The Data Retention Directive, which calls for the retention of a large amount of personal data in order to have them available for law-enforcement purposes, reveals the willingness of the European legislator and the Member States to put the data-minimisation principle aside in certain circumstances. Whether a similar approach will be followed in AmI in order to allow the full operation of the relevant systems remains to be seen.

Transparency 

Transparency is an important principle of the Data Protection Directive. The Directive does not forbid the processing of data, but it does require data controllers to inform data subjects about which data are processed, the identity of the one who processes them, and the purpose. For example, when cameras are placed at a workplace, the employees have to be informed about the camera policy. The employer will have to inform the employees about every aspect of the camera surveillance, such as the location of the cameras and whether the images will be stored and for how long.

Nowadays, it is already very difficult to know in full detail who collected which data, to which organisations the data were transferred, and for which purposes they were used, despite the data-protection legislation commanding otherwise. Still, users are most of the time aware that they are providing personal data when, for example, filling in an online or offline form. This will be totally different in an AmI environment, where users will less and less be an active and conscious source of personal data. The desire to provide Ambient Intelligence in an unobtrusive manner requires a framework in which users are permanently observed and their behaviour and actions autonomously interpreted, taking into account location and other contextual information. The results are then fed into a continuous learning process, which forms the basis for autonomous decisions by the AmI system on how and when to use, or to pass on, the collected information. It is clear that the already existing information asymmetry between data subject and data controller will significantly grow.

The transparency principle is specified in the right to be informed. In the case of collection of data not obtained from the data subject himself, the controller must always provide the data subject with at least:

  1. the identity of the controller or his representative, and 

  2. the purposes of the processing for which the data are intended.  

Further information to be provided, if necessary to guarantee fair processing, includes:

  1. the recipients or categories of recipients of the data,  

  2. whether replies to the questions are obligatory or voluntary, as well as the possible consequences of failure to reply, and  

  3. the existence of the right of access to and the right to rectify the data concerning him.

This further information must only be given “in so far as such further information is necessary, having regard to the specific circumstances in which the data are collected, to guarantee fair processing in respect of the data subject”.

It is clear that the practical application of this right in an AmI environment will place a serious burden on both the data controller and the data subject. Because of the large amounts of data to be processed in an AmI world, help or support by intelligent agents to manage such an information stream seems indispensable.
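A minimal sketch of how such agent support could look: each controller publishes the information described above as a machine-readable notice, and the data subject's agent filters the resulting stream. The record format and field names below are invented for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class ProcessingNotice:
        """Machine-readable counterpart of the information a controller
        must provide; the fields mirror the list above."""
        controller: str                          # identity of the controller
        purposes: list                           # purposes of the processing
        recipients: list = field(default_factory=list)
        access_and_rectification: str = ""       # how to exercise these rights

    def digest(notices, watched_purposes):
        """Agent-side filter: surface only the notices whose purposes the
        user has asked to be alerted about, and count the rest."""
        flagged = [n for n in notices if set(n.purposes) & watched_purposes]
        return flagged, len(notices) - len(flagged)

    notices = [
        ProcessingNotice("shop.example", ["loyalty_profiling"], ["adnet.example"]),
        ProcessingNotice("transit.example", ["ticketing"]),
    ]
    flagged, suppressed = digest(notices, watched_purposes={"loyalty_profiling"})
    # The user is alerted to one profiling notice; one routine notice is summarised.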

Non-discrimination 

The majority of decisions in AmI are made on the basis of profiles, either individual or group ones. The use of such profiles for making decisions on services and applications for the users of AmI environments is of dual legal importance. On the one hand, it is to be examined whether such decisions can be considered fully automated decision-making, thus falling under the provisions of Article 15 of the Data Protection Directive; on the other hand, whether such practices infringe existing non-discrimination legislation.

Decision-making procedures in AmI are mainly automated and are based on profiles of the users of the systems. According to Article 15 of the Data Protection Directive, the “Member States shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct etc.”. As Bygrave has clearly summarised, for Article 15 to apply, four conditions have to be satisfied cumulatively:

  1. “a decision must be made; 

  2. The decision concerned must have legal or otherwise significant effects on the person whom the decision targets; 

  3. The decision must be based solely on automated data processing; 

  4. The data processed must be intended to evaluate certain personal aspects of the person who is targeted by the decision.”

Of great importance in AmI is the first of the aforementioned conditions, i.e., to what extent a decision is made in an AmI environment. Bygrave suggests a broad interpretation of the term “decision” that does not necessarily entail the involvement of a human; the “decisions” made in AmI can therefore be considered as such, and Article 15 applies. By accepting that Article 15 applies in AmI environments, the data subject is also granted the right to know the logic involved in the automated processing. However, as Beckwith has stated, “[i]n the case of embedded sensor technologies, it would be practically impossible to teach anyone the system’s full implications”. The actual exercise of this right is still to be examined.
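In the spirit of the AmL vision of making legal norms technologically operable, Bygrave's cumulative conditions can at least be stated as a machine-checkable predicate. The sketch below is illustrative only; establishing each boolean input would in practice require legal judgement:

    def article_15_applies(decision_made: bool,
                           significant_effects: bool,
                           solely_automated: bool,
                           evaluates_personal_aspects: bool) -> bool:
        """The four conditions are cumulative: Article 15 applies
        only if every one of them holds."""
        return (decision_made and significant_effects
                and solely_automated and evaluates_personal_aspects)

    # A profile-based refusal of a service by an AmI system, no human in the loop:
    assert article_15_applies(True, True, True, True)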

The fact that decisions in AmI are based on profiles raises the question whether they infringe existing non-discrimination legislation. Article 14 of the European Convention on Human Rights (the ‘Convention’) and Article 1 of Protocol No. 12 to the Convention (‘Protocol No. 12’) serve as the basis. These articles prohibit discrimination by the State against individuals. This means that “if profiling practices conducted by public bodies representing the state (i.e. the (local) government) would be proven to be discriminatory, these provisions could be invoked by an individual who could require from the government that the discriminatory profiling practice in that specific case is stopped (‘vertical effect’)”. However, there are arguments to sustain that these rights can also be applied in private relationships.

Specific criteria, such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status, as listed in Article 14 ECHR, cannot be used as grounds for discrimination. The wording of Article 14 ECHR, “or other status”, leaves open what other criteria are considered a discriminatory ground. However, the whole nature of AmI is based on differentiating among the users of a system and making decisions based on profiles that are built upon various criteria. Could it be sustained that AmI technologies violate non-discrimination legislation, as they are based on the processing of personal data and the consequent building of profiles?

ePrivacy Directive

Scope of application

Directive 2002/58/EC applies to the processing of personal data in connection with the provision of publicly available electronic communication services in public communication networks in the Community. An electronic communications service is defined as:

a service normally provided for remuneration which consists wholly or mainly in the conveyance of signals on electronic communications networks, including telecommunications services and transmission services in networks used for broadcasting, but exclude services providing, or exercising editorial control over, content transmitted using electronic communications networks and services; it does not include information society services, as defined in Article 1 of Directive 98/34/EC, which do not consist wholly or mainly in the conveyance of signals on electronic communications networks. 

Briefly put, this covers services that consist in the public conveyance of electronic-communication signals. It can be noted that information-society services are explicitly excluded. This means that the ePrivacy Directive contains obligations for transporters of data and not for content providers. This is important to take into account, as the distinction between an electronic-communications service and an information-society service is not always clear. Recital 10 of Directive 2002/21/EC (Framework Directive) in fact refers to electronic mail as an example of a service which constitutes both an electronic-communications service and an information-society service. It is also important that the ePrivacy Directive only applies to publicly available electronic communication services in public communication networks. In European legislation, there is no definition of what ‘public’ exactly means here, so it is not always simple to know when the Directive is applicable. It is clear that when all data are stored and mined within the context of the home and remain within the private network of intelligent agents that are running the house, the ePrivacy Directive will not be applicable. This will be different when an individual makes use of a locator service when visiting New York. In this regard, the Article 29 Working Party has stated that it is regrettable that the provisions of the ePrivacy Directive only apply to public services over public communication networks, because private networks are gaining importance in everyday life; the risks increase accordingly, in particular because such networks are becoming more specific (e.g., monitoring employee behaviour by means of traffic data). In this respect, it is questionable whether the restriction to ‘public’ networks and services can or should be upheld in the future.

The applicability of the ePrivacy Directive in AmI

As already described under 4.3.2.1, the crucial point for the applicability of the ePrivacy Directive is whether personal data are processed in connection with the provision of publicly available electronic communication services in public communication networks. The ePrivacy Directive contains important provisions regarding traffic and location data, mainly Articles 5, 6 and 9, that only apply when the relevant services and networks are public. In AmI, however, having a public network or offering a public service is not an intrinsic element of the system. This point is nevertheless crucial in deciding upon the applicability of the ePrivacy Directive, and especially the application of the provisions regarding location-based services.

In an AmI environment, location-based services can be either public or private services that are offered over either public or private networks. It would seem reasonable that, no matter what technology is used for offering the service and whether it is public or private, the legal framework regulating it should be the same. However, the wording of the ePrivacy Directive leaves no doubt that its provisions do not apply when the service or the network used is not public.

More specifically, Article 9 deals with location-based services and states that location data other than traffic data may only be processed if the data are made anonymous, or with the consent of the users or subscribers of the service, to the extent and for the duration necessary for the provision of a value-added service. Paragraph 2 of this article states that, even when the user or the subscriber has given his consent, he shall have the possibility to refuse the processing of the data, temporarily or permanently, at all times. The processing of the location data shall be necessary for the value-added service and shall be limited to the duration necessary to provide this service. So, with regard to location data other than traffic data, unnecessary processing is prohibited, unless the derogation of Article 15 applies to the situation or in so far as the Data Retention Directive applies. It is obvious that location-based services play a central role in AmI, as a vast number of services are offered based on the processing of location information of the user.
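A minimal sketch of what such an Article 9 check could look like inside a value-added service (an illustration under assumed semantics, not a compliant implementation; all names are invented):

    from datetime import datetime, timedelta

    class LocationDataGate:
        """Processes location data other than traffic data only if they are
        anonymised or consented to, only while the value-added service is
        being provided, and honours temporary or permanent refusal."""

        def __init__(self):
            self.consent = {}         # user -> True once consent is given
            self.refused_until = {}   # user -> datetime, or None = permanent refusal

        def give_consent(self, user):
            self.consent[user] = True

        def refuse(self, user, temporarily_for: timedelta = None):
            self.refused_until[user] = (datetime.now() + temporarily_for
                                        if temporarily_for else None)

        def may_process(self, user, anonymised: bool, service_active: bool) -> bool:
            if anonymised:
                return True                       # anonymous data may be processed
            until = self.refused_until.get(user, datetime.min)
            if until is None or datetime.now() < until:
                return False                      # refusal overrides earlier consent
            return self.consent.get(user, False) and service_active

    gate = LocationDataGate()
    gate.give_consent("david")
    gate.refuse("david", temporarily_for=timedelta(hours=1))
    assert not gate.may_process("david", anonymised=False, service_active=True)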

An illustrative example from scenario II is the service offered to David via his MyComm device, which allows him to find a restaurant close by that matches his taste and preferences. The MyComm device is described as “a 5th-generation mobile device with many useful functions and access to location-based services”. As such location-based services are accessed by MyComm via the public mobile network, they qualify as value-added services in the sense of Article 9 of the ePrivacy Directive, and the special provisions of the Directive apply. But let us consider the following scenario: assume that MyComm, as a 5th-generation mobile device, is equipped with RFID and Bluetooth, which also enable the provision of location-based services. Although these technologies transmit data wirelessly, it is questionable whether they would qualify as using a public communications network and offering a public communications service. Even if, in the latter case, the general provisions of the Data Protection Directive will apply, not leaving the user of the services without any legal protection, the special provisions of the ePrivacy Directive may not apply. We would then have the paradox that services offered from the same device via different technologies fall under different legal provisions. Was such a result intended by the European legislator when enacting the ePrivacy Directive, which was meant to be a technology-neutral piece of legislation?

 

 
