CCG’s recommendations to the TRAI Consultation Paper on Privacy, Security and Ownership of Data in the Telecom Sector – Part III

In this series of blogposts, we discuss CCG’s responses and recommendations to the TRAI (available here), in response to their Consultation Paper on Privacy, Security and Ownership of the Data in the Telecom Sector. We focus on the principles and concerns that should govern the framing of any new data protection regime, whether limited to the telecom sector or otherwise. 

In our previous posts, we discussed the background against which we have provided our responses and recommendations, and the need for a separate regulatory framework for data within the telecom sector, in the context of the jurisdiction and powers of the TRAI.

In this post, we look at the basic data protection principles that we recommend form the basis for any new data protection regulation. Several of these principles are also discussed in the white paper of the Committee of Experts on a Data Protection Framework for India.

Any new data protection regulation, whether applicable across industries and sectors, or applicable only to the telecom sector, should be based on sound principles of privacy and data protection. As discussed in the Consultation Paper, the Report of the Group of Experts on Privacy[1] (GOE Report) identified 9 national privacy principles to be adopted in drafting a privacy law for India. These principles are listed below[2]:

  • Notice: A data controller, which refers to any organization that determines the purposes and means of processing the personal information of users, shall give simple to understand notice of its information practices to all individuals, in clear and concise language, before any personal information is collected from them. Such notices should include disclosures on what personal information is being collected; purpose for collection and its use; whether it will be disclosed to third parties; notification in case of data breach, etc.
  • Choice and consent: A data controller shall give individuals choices (opt-in/opt-out) with regard to providing their personal information, and take individual consent only after providing notice of its information practices.
  • Collection limitation: A data controller shall only collect personal information from data subjects as is necessary for the purposes identified for such collection.
  • Purpose limitation: Personal data collected and processed by data controllers should be adequate and relevant to the purposes for which they are processed.
  • Access and correction: Individuals shall have access to personal information about them held by a data controller and be able to seek correction, amendments, or deletion of such information, where it is inaccurate.
  • Disclosure of Information: A data controller shall only disclose personal information to third parties after providing notice and seeking informed consent from the individual for such disclosure.
  • Security: A data controller shall secure personal information using reasonable security safeguards against loss, unauthorised access or use and destruction.
  • Openness: A data controller shall take all necessary steps to implement practices, procedures, policies and systems in a manner proportional to the scale, scope and sensitivity of the data it collects, in order to ensure compliance with the privacy principles. Information regarding such practices shall be made available to all individuals in an intelligible form, using clear and plain language.
  • Accountability: The data controller shall be accountable for complying with measures which give effect to the privacy principles. Such measures should include mechanisms to implement privacy policies, including training and education, audits, etc.

With the growth of businesses driven by big data, there is now a demand for re-thinking these principles, especially those relating to notice and consent[3].

While notice, consent and the other principles set forth in the GOE Report have formed the basis for data protection laws for many years now, additional principles have been developed in many jurisdictions across the world. In order to ensure that any new regulations in India are up to date and effective, it will be prudent to study such principles and identify the best practices that can then be incorporated into Indian law.

Graham Greenleaf has compared data protection laws across Europe and outside Europe and found that today, second and third generation ‘European Standards’ are being implemented across jurisdictions[4]. These ‘European Standards’, refer to standards that are applicable under European Union (EU) law, in addition to the original principles developed by the Organisation for Economic Co-operation and Development (OECD)[5]. The second generation European Standards that are most commonly seen outside the EU are:

  • Recourse to the courts to enforce data privacy rights (including compensation, and appeals from decisions of DPAs)
  • Destruction or anonymisation of personal data after a period
  • Restricted data exports based on data protection provided by recipient country (‘adequate’), or alternative guarantees
  • Independent Data Protection Authority (DPA)
  • Minimum collection necessary for the purpose (not only ‘limited’)
  • General requirement of ‘fair and lawful processing’ (not only collection)
  • Additional protections for sensitive data in defined categories
  • Right to object to processing on compelling legitimate grounds, including to ‘opt-out’ of direct marketing uses of personal data
  • Additional restrictions on some sensitive processing systems (notification; ‘prior checking’ by DPA.)
  • Limits on automated decision-making (including right to know processing logic)

He also notes that there are several new principles put forward in the EU’s new General Data Protection Regulation[6] (GDPR) itself, and that it remains to be seen which of these will become global standards outside the EU. The most popular of these principles, which he refers to as ‘3rd generation European Standards’, are[7]:

  • Data breach notifications to the DPA for serious breaches
  • Data breach notifications to the data subject (if high risk)
  • Class action suits to be allowed before DPAs or courts by public interest privacy groups
  • Direct liability for processors as well as controllers
  • DPAs to make decisions and issue administrative sanctions, including fines.
  • Opt-in requirements for marketing
  • Mandatory appointment of data protection officers in companies that process sensitive personal data.

We note that there exist other proposed frameworks that aim to regulate data protection and ease the compliance burden on businesses. Such additional frameworks may also be considered while formulating new data protection principles and regulations in India. However, we recommend that the ‘European Standards’ described above, i.e. those set out in the GDPR, be adopted as the base on which any new regulations are built. This would improve India’s chances of being recognised by the EU as providing ‘adequate’ data protection, and improve our trade relations with the EU and other countries that adopt similar standards.

Professor Greenleaf’s studies suggest that the 2nd and 3rd generation European Standards are being adopted by several countries outside the European Union. We note here that the adoption of principles considered best practices across jurisdictions would also increase interoperability for businesses that operate across borders.

While adoption of these practices is likely to raise the cost of compliance, it is also likely to ensure that India remains a very competitive market globally for the outsourcing of services. In the long term, this will benefit Indian industry and the Indian economy. It will also safeguard the privacy rights of Indian citizens in the best possible manner.

[1] Report of the Group of Experts on Privacy, available at http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf

[2] Report of the Group of Experts on Privacy, Chapter 3, as summarised in the TRAI Consultation Paper on Privacy, Security and Ownership of the Data in the Telecom Sector, pages 7-9

[3] TRAI Consultation Paper on Privacy, Security and Ownership of the Data in the Telecom Sector, Page 9; and Rahul Matthan, Beyond Consent: A New Paradigm for Data Protection, available at http://takshashila.org.in/takshashila-policy-research/discussion-document-beyond-consent-new-paradigm-data-protection/ (last visited on November 5, 2017)

[4] Graham Greenleaf, European data privacy standards in laws outside Europe, Privacy Law and Business International Report, Issue 149

[5]OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, available at http://www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm (last visited on November 5, 2017)

[6] General Data Protection Regulation, Regulation (EU) 2016/679

[7] Graham Greenleaf, Presentation on 2nd & 3rd generation data privacy standards implemented in laws outside Europe (to be published and available on request).


CCG’s recommendations to the TRAI Consultation Paper on Privacy, Security and Ownership of Data in the Telecom Sector – Part II

In this series of blogposts, we discuss CCG’s responses and recommendations to the TRAI (available here), in response to their Consultation Paper on Privacy, Security and Ownership of the Data in the Telecom Sector. We focus on the principles and concerns that should govern the framing of any new data protection regime, whether limited to the telecom sector or otherwise.

In our previous blogpost, the first of the series, we discussed the background against which we have provided our responses and recommendations. In this post, we look at whether there is a need for a separate regulatory framework for data within the telecom sector, and the jurisdiction and powers of the TRAI.

We note that the Consultation Paper makes several references to stakeholders / players in the digital / telecommunications eco-system that are not traditional telecommunication service providers. These include online content / application service providers, device manufacturers, and providers of online communication services, operating systems, browsers. The Consultation Paper poses several questions about the regulation of data use and processing by such stakeholders.

In this context, we have examined the role and responsibilities of the TRAI beyond the regulation of traditional telecommunication service providers.

The preamble to the Telecom Regulatory Authority of India Act, 1997 (TRAI Act) states that the law is meant to “provide for the establishment of the Telecom Regulatory Authority of India and the Telecom Disputes Settlement and Appellate Tribunal to regulate the telecommunication services, adjudicate disputes, dispose of appeals and to protect the interests of service providers and consumers of the telecom sector, to promote and ensure orderly growth of the telecom sector and for matters connected therewith or incidental thereto”.

Telecommunication services have been defined to mean “service of any description (including electronic mail, voice mail, data services, audiotex services, videotex services, radio paging and cellular mobile telephone services) which is made available to users by means of any transmission or reception of signs, signals, writing, images and sounds or intelligence of any nature, by wire, radio, visual or other electromagnetic means”[1]. Broadcasting services have been excluded from the definition of telecommunication services[2].

‘Service provider’ means either the government as a service provider, or a licensee[3] – a licensee being any person licensed to provide telecommunication services under the Indian Telegraph Act, 1885[4].

Section 11 of the TRAI Act describes the functions of the TRAI. These functions are divided into two broad areas: (i) making recommendations on certain matters, and (ii) regulatory functions. The regulatory functions largely deal with monitoring compliance with telecom licences, and other functions of service providers.

The TRAI’s powers to make recommendations extend to the following matters:

  • need and timing for introduction of new service provider;
  • terms and conditions of licence to a service provider;
  • revocation of licence for non-compliance of terms and conditions of licence;
  • measures to facilitate competition and promote efficiency in the operation of telecommunication services so as to facilitate growth in such services;
  • technological improvements in the services provided by the service providers;
  • type of equipment to be used by the service providers after inspection of equipment used in the network;
  • measures for the development of telecommunication technology and any other matter relatable to telecommunication industry in general;
  • efficient management of available spectrum.

We note that most of the above matters deal specifically with functions of service providers. However, as mentioned above, telecommunication services do include some services beyond those provided by traditional telecommunication service providers – such as electronic mail and voice mail among others.

In this context, we would argue that the functions and powers of the TRAI do not extend to making recommendations regarding, or regulating, online content and application providers, device manufacturers, or other businesses that do not provide communication services.

At best, the TRAI may derive the power to make recommendations on the questions posed in the Consultation Paper from sub-section (iv) above, which provides the TRAI with the authority to make recommendations on improving the efficiency of telecommunication services.

In our next posts in this series, we will discuss principles that we believe any data protection regulation, irrespective of the sector it applies to, should address. We also note that as Indian businesses grow and adopt new technology, they are increasingly beginning to function across sectors. In this context, we recommend a basic data protection law that applies horizontally across sectors and regions, to cope with these cross-sectoral business models. Where required, additional regulations may be made applicable to the collection and processing of sector-specific sensitive personal data.

[1] Section 2(1)(k) of the Telecom Regulatory Authority of India Act, 1997

[2] Section 2(1)(k) of the Telecom Regulatory Authority of India Act, 1997

[3] Section 2(1)(j) of the Telecom Regulatory Authority of India Act, 1997

[4] Section 2(1)(e) of the Telecom Regulatory Authority of India Act, 1997

‘My Data, My Rules’ – The Right to Data Portability

Nandan Nilekani has recently made news cautioning against ‘data colonization’ by heavyweights such as Facebook and Google. He laments that data, which is otherwise a non-rival, unlimited resource, is not being shared freely, and is being put into silos. Not only does this limit its potential uses, users end up with very little control over their own data. He argues for ‘data democracy’ through a data protection law and particularly, one that gives users greater privacy, control and choice. In specific terms, Nilekani appears to be referring to the ‘right to data portability’, a recently recognized concept in the data protection lexicon.

In the course of using online services, individuals typically provide an assortment of personal data to service providers. The right to data portability allows a user to receive their data back in a format that is conducive to reuse with another service. The purpose of data portability is to promote interoperability between systems and to give the user greater choice and control with respect to their data held by other entities. The aim is also to create a level playing field for newly established service providers that wish to take on incumbents, but are unable to do so because of the significant barriers posed by lock-in and network effects. For instance, Apple Music users could switch to a rival service without losing playlists, play counts, or history; Amazon users could port their purchasing history to a service that provides better recommendations; or eBay sellers could move to a platform they prefer without losing their reputation and ratings. Users could also port to services with more privacy-friendly policies, thereby enabling an environment where services must also compete on such metrics.

The European Union’s General Data Protection Regulation (GDPR) is the first legal recognition of the right to data portability. Art. 20(1) defines the right as follows:

“The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format and have the right to transmit those data to another controller without hindrance from the controller to which the data have been provided”

Pursuant to this right, Art. 20(2) further confers the right to directly transmit personal data from one controller to another, wherever technically feasible.

The first aspect of the right to data portability allows data subjects to receive their personal data for private use. Crucially, the data must be in a format conducive to reuse. For instance, providing copies of emails in PDF format would not be sufficient. The second aspect is the ability to transfer data directly to another controller, without hindrance.
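The format requirement can be illustrated with a minimal sketch (the record fields and function names below are hypothetical, invented purely for illustration, not drawn from the GDPR or any real service): a structured export such as JSON round-trips losslessly into another system and can be parsed programmatically, which is precisely what a PDF of the same records cannot offer.

```python
import json

# A hypothetical user record a service might hold; the fields are illustrative.
user_data = {
    "profile": {"name": "A. Subject", "email": "subject@example.com"},
    "playlists": [
        {"title": "Morning", "tracks": ["track-1", "track-9"]},
    ],
}

def export_portable(data: dict) -> str:
    """Serialise records in a structured, commonly used, machine-readable
    format (JSON) that another controller can parse without hindrance."""
    return json.dumps(data, indent=2)

def import_portable(blob: str) -> dict:
    """A receiving service can reconstruct the records losslessly."""
    return json.loads(blob)

exported = export_portable(user_data)
assert import_portable(exported) == user_data  # the round trip loses nothing
```

Any structured, openly documented format (JSON, CSV, XML) would satisfy the same property; the point is machine-readability, not the particular syntax.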

There are certain prerequisites for the applicability of this right:

a) it applies only to personal data that the data subject ‘provided’ to the controller. This would include data explicitly provided (such as age, or address, etc., through online forms), as well as data generated and collected by the controller on account of the usage of the service. Data derived or inferred by the controller would not be within the scope of this right.

b) the processing must be pursuant to consent or a contract. Personal data processed for a task to be performed in public interest, or in the exercise of official authority is excluded.

c) the processing must be through automated means. Data in paper files would therefore not be portable.

d) the right must not adversely affect the rights and freedoms of others.

The GDPR does not come into force till May 2018, so there remain ambiguities regarding how the right to data portability may come to be implemented. For instance, there is debate about whether ‘observed data’, such as heartbeat tracking by wearables, would be portable. Even so, the right to data portability appears to be a step towards mitigating the influence data giants currently wield.

Data portability is premised on the principle of informational self-determination, which forms the substance of the European data protection framework. This concept was famously articulated in what is known as the Census decision of the German Federal Constitutional Court in 1983. The Court ruled informational self-determination to be a necessary condition for the free development of one’s personality, and an essential element of a democratic society. The petitioners in India’s Aadhaar-PAN case also explicitly argued that informational self-determination is a facet of Art. 21 of the Indian Constitution.

Data portability may also be considered an evolution from previously recognized rights such as the right to access and the right to erasure of personal data, both of which are present in the current Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. TRAI’s recent consultation paper on Privacy, Security and Ownership of Data in the Telecom Sector also refers to data portability as a way to empower users. The right to data portability may be an essential aspect of a robust and modern data protection framework, and India is evidently not averse to taking cues from the EU in this regard. As we (finally) begin to formulate our own data protection law, it may serve us well to evaluate which concepts may be suitably imported.

 

Google Faces Legal Hurdles Under Brazilian Internet Law

By Raissa Campagnaro[1]

The Brazilian Federal Prosecution Ministry has brought civil proceedings against Google for flouting its data protection law. The suit challenges Google’s access to the content of emails exchanged by Gmail users on multiple grounds, including Google’s failure to obtain express consent.

In October, 2016, Brazil’s Federal Prosecutor filed a public civil suit against Google, claiming that the search engine had failed to comply with the country’s internet law, the Internet Bill of Rights. The suit argues that during a previous prosecution investigation, through a civil inquiry, Google had made it public that it scans the content of emails exchanged by Gmail users. According to the Federal Prosecutor, this violates Brazilian data protection standards.

The Internet Bill of Rights establishes data protection principles similar to those set up under the EU Data Protection Directive 95/46/EC. Under this law, any processing of data must be pursuant to express consent. The law specifically requires that the clause seeking consent be prominently displayed and easy to identify amongst other terms of the contract. The law also recognises a right to not have one’s data transferred to third parties without consent and a right to be informed about the specific purposes of the personal data collection, usage, storage, treatment and protection.

When asked about its compliance with the legislation, Google submitted that it analyses email messages to improve the user experience, by filtering messages for unwanted content, spam and malware. It also submitted that the scanning of messages is used to offer products and advertisements to the user, and to classify emails into categories such as ‘social’ and ‘promotions’. Finally, Google contended that the scanning of emails is consented to by the user at the time of signing up, by agreeing to the privacy policy within Gmail’s terms of service.

However, the Federal Prosecution Ministry considers these practices to be ‘profiling’ – a consequence of personal data aggregation that allows the creation of users’ profiles based on their behaviour, online habits and preferences. These can be used to predict their future actions and decisions. Profiling is frequently used for behavioural advertisements in which aggregated personal data is transferred to other ISPs, who use it to direct ads, products and services determined by the person’s past online activity. According to the Federal Prosecutor, this not only violates people’s right to privacy, especially their informational self-determination right, but also interferes with a consumer’s freedom of choice.

Several scholars and researchers have also opposed profiling and behavioural advertising, arguing that it can have severe negative consequences. These include (i) denial of credit or loan concessions; (ii) different health insurance offers based on a person’s medical history or the activities they engage in; and (iii) adaptive pricing, based on a variety of criteria that involve some level of discrimination. This is problematic because online profiles are limited: a person’s life comprises many aspects beyond the online information that is collected and aggregated. As a result, personal data aggregation, processing and analysis can produce an incomplete or incorrect picture of an individual, leading to wrongful interventions in their life. And even if a profile were a complete reflection of a person’s life, the choice to have one’s data collected and used for specified purposes must always remain with the user.

The suit alleges that Google’s practices are not in consonance with the legal requirement of seeking express consent, including through prominent display within a policy. It suggests that Google be required to take specific consent in order to access the content of emails.

The case also challenges the fact that Google’s privacy policy does not allow consumers to withdraw consent, which undermines consumers’ control over their data. Further, it is argued that consent should be sought afresh every time Google changes its privacy policy. The lack of clear and precise information about how data is processed is another issue raised in the case, as it violates the right of Gmail users to information regarding the usage of their data.

To substantiate its case, the Federal Prosecutor is relying on an Italian case in which Google’s data processing activities had been challenged. The ruling was based on Italy’s Data Privacy Code, which establishes data protection guarantees such as i) fair and lawful processing of data; ii) specific, explicit and legitimate purposes and use of data; iii) processing to not be excessive in relation to the purposes for which it is collected or subsequently processed; and iv) that the data must only be kept for the amount of time truly necessary. In addition, the law stipulates that a data subject must receive notice about how their data will be processed, allowing them to make an informed decision. Furthermore, the Italian code also requires consent to be express and documented in writing.

In 2014, the Garante (the Italian data protection authority, hereinafter “the Authority”) held that Google had failed to comply with certain requirements under the Italian legislation. First, the information given by Google about how data processing was carried out was considered insufficient, as it was too general. Second, the consent obtained through the privacy policy agreement was held to be too broad; the Authority held that consent should be given prior to, and be specific to, the data processing in question. Although the decision condemned the company’s practices, it did not establish any guidelines for Google to adopt in this regard.

Through the present suit, the Brazilian Federal Prosecutor seeks (i) suspension of Google’s email content analysis, that is, the scanning of emails of Gmail users who have not given express consent; (ii) an obligation to obtain express consent from users before scanning or analysing the content of emails; and (iii) the possibility of consent withdrawal. The suit seeks an order directing Google to change its privacy policy to ensure that consent is informed and specific to content analysis.

This case demonstrates a new dimension of data protection concerns. Unlike the more common data breach cases, where the damage is usually discovered too late or is too extensive to repair, the Brazilian and Italian cases are good examples of proactive measures taken to minimise future risks. They also underline the importance of a legal framework that uses data protection principles to guarantee consumers’ right to privacy. It appears that these rules are now starting to be enforced more effectively and, in consequence, that the right to privacy can be observed in practice.

[1] Raissa is a law student from Brazil with an interest in internet law and policy. Raissa has been interning with the civil liberties team at CCG for the past month.

“The Right to be Forgotten”: Balancing Personal Privacy with the Public’s right to access Information

Evolution of the right and Global framework

In the Internet age, when access to information is quick and easy, procuring personal information or past records about an individual is no longer a herculean task. The relevance of such information, and the duration for which it should remain available for public access, had hitherto received little debate.

There is growing global debate on a new right called “the right to be forgotten” or “the right of erasure”. This right allows people to request for removal of their personal information/data online after a period of time or if such information/data is no longer relevant. The origin of this right can be traced back to the French jurisprudence on the ‘right to oblivion’ or droit à l’oubli. The rationale behind this right was to allow criminal offenders who have already served their sentence to object to the publication of information regarding their crime and conviction. This was done to ease their process of social integration.

It was along these lines that the 1995 EU Data Protection Directive acknowledged the right to be forgotten. Under the Directive, it was stipulated that the member states should give people the guaranteed right to obtain from the ‘controller’ the rectification, erasure or blocking of data relating to them, the processing of which does not comply with the provisions of the Directive. The term ‘controller’ here refers to a natural or legal person, public authority, agency or any other body which alone or jointly determines the purposes and means of processing personal data.

In May 2014, the Court of Justice of the European Union (‘Court’) recognized the right to be forgotten as a part of the fundamental right to privacy in the Google case. The plaintiff in this case had requested the delinking of search results appearing on Google and the deletion of newspaper articles appearing online with respect to bankruptcy proceedings against him. The Court held that individuals have a right to request search engines to delink information which causes prejudice to them. However, the Court was careful to state that this right is not absolute and can be applied only when the data becomes ‘inadequate, irrelevant, excessive, not kept up to date, or kept for longer than necessary’ with respect to the purpose for which it was collected or processed. Accordingly, the Court directed Google to delink the search results in the instant case. It was further held that the publication of accurate data may be lawful at a given point in time, but in due course, it might become inconsistent with the law.

While the judgment in the Google case is a step in the right direction, it leaves much to be desired. The Court did not set out any guidelines or parameters to filter out information as ‘inadequate’, ‘irrelevant’ or ‘excessive’. It has thrust the onerous task of balancing an individual’s right to privacy against the public’s right to access information upon private search engines like Google. This raises critical questions regarding the suitability of private entities taking decisions which are of constitutional import. Subsequent to this judgment, the EU adopted its data protection reforms, which include the right to be forgotten as an essential right under Article 17 of the General Data Protection Regulation. Article 17 lays down the conditions for the application of the right to be forgotten, and requires entities processing personal data to inform third parties of requests for the erasure of links to any personal data. A detailed discussion of these regulations and their impact on India can be found here.

Challenges in enforcement

There are many legal and technical challenges in the enforcement of the right to be forgotten. The success rate of governments across the world in banning or removing pornographic websites or torrent sites from the Internet has not been great, since there are various ways of circumventing such bans. Further, the blocking or delinking of URLs by search engines does not guarantee that such information has been blocked or deleted from the Internet. There is also no way to ensure that such information is not uploaded again.

To enforce the ruling in the case discussed above, Google has created a mechanism through which an individual can request the takedown or delinking of a specific search result bearing that individual’s name. Google evaluates such requests on various parameters, such as whether the results infringe the individual’s right to privacy, or whether the information is of public interest. In the former case, the individual’s right to be forgotten trumps the public’s right to access information; if the information is of public interest, the public’s right to information prevails over privacy rights. This squarely makes Google the arbiter of whether data is relevant, adequate, and needed to remain available online for public access.

With the growing recognition of the right to be forgotten, the number of requests that search engines receive for taking down or delinking content is only likely to increase, making it extremely difficult and cumbersome to scrutinize such requests manually. According to Google’s Transparency Report, as of 9 October 2016, Google had received 565,412 requests for the removal of URLs. The Report further states that it had already evaluated 1,717,714 URLs since May 2014, and that Google has removed 43.2% of the URLs covered by the requests received. With a substantial increase in the number of requests, search engines may even consider using algorithms to deal with such requests instead of manually weighing privacy rights against the public interest.

Further, search engines are also likely to err on the side of caution and accept such requests rather than face expensive legal challenges across jurisdictions for non-compliance. Individuals may misuse this right, artificially altering the content available online and causing the delinking of pertinent information.

Recent developments in India

The data protection regime and data privacy laws of India are not comprehensive and dynamic enough to respond to technological advances in the modes of collection, transfer and use of personal information. The Information Technology Act, 2000 and the rules framed under it make up the primary legal framework governing this subject. The Delhi High Court is currently hearing a matter (Laksh Vir Singh Yadav vs. Union of India, WP(C) 1021/2016) in which the petitioner has requested the removal of a judgment involving his mother and wife from an online case database. The petitioner claims that the appearance of his name in the judgment is causing him prejudice and affecting his employment opportunities. It will be interesting to see the outcome of this case and how the larger debate between an individual’s right to privacy and the public’s right to access information unfolds.

It is pertinent to note that the Delhi High Court is dealing with the request for removal of a court order which is a public document. This request is unusual and distinct from a request for delinking of search results appearing in search engines like Google since such delinking does not result in the removal of the information itself. Allowing the removal of such judgments from online case databases could result in the expunging of public records. Furthermore, the removal of judgments from online public databases will obstruct public access to case materials shedding light on critical questions of law.

While implementing the right to be forgotten, a very fine balance has to be struck between the right to freedom of speech and expression, public interest and personal privacy. To balance these conflicting rights, the judiciary may consider implementing a system in which personal information such as the names and addresses of litigants is redacted from reportable judgments/orders, especially in personal disputes. The courts have, in the past, refrained from divulging the identities of parties in order to respect their privacy in many rape and medico-legal cases.

With many unanswered questions surrounding this right, India requires a comprehensive data protection regime to regulate the entities collecting and processing personal data and to define the terms of use, storage and deletion of such personal data. This will ensure that such entities are obliged to take due care of the personal data in their possession and will also provide a framework for dealing with requests for removal or erasure of such personal data.

The New Data Protection Regulation and its Impact on India

Written By Joshita Pai

The European Parliament adopted the new rules on data protection on 14 April 2016. The new Regulation replaces the 1995 Data Protection Directive and the 2008 framework decision on cross-border data processing in police and judicial cooperation within the EU. In January 2012, the EU Commission first presented a package of proposals to update and modernize the existing EU legal framework, which the Council subsequently accepted in December 2015. The new data protection package consists of a general regulation on personal data processing in the EU and a directive on data processed by the police and judicial authorities.

Highlights of the Regulation

The Regulation establishes a stronger regime for the protection of personal data by giving users more control in the digital market. It enshrines provisions on the much awaited right to be forgotten in the virtual space,[i] on the need for clear and affirmative consent, and on the right of an individual to be informed. Profiling of an individual through the collection of a person’s data is often presented in the name of customized service and the commercial interest of the company. The new Regulation allows a right to object to profiling unless it is necessary for law enforcement purposes or for scientific research. The Regulation also envisages provisions on data portability, which will enable users to shift from one service provider to another without losing the data accumulated in the use of the former. Aside from vesting a bundle of rights in the hands of users, the Regulation sets out an array of provisions for companies to abide by. The crucial provisions affecting businesses include:

  1. Sanctions of up to 4% of annual turnover on companies that commit data breaches: This provision of the Regulation carries significant weight since its application extends to companies established outside the European Union. Organisations will additionally be required to carry out data protection impact assessments where their plans to process personal data are “likely to result in a high risk for the rights and freedoms of individuals”.
  2. Provision for appointing a data protection officer if the company engages in the processing of sensitive data: For businesses whose “core activities” consist of processing operations that “by virtue of their nature, their scope and/or their purposes, require regular and systematic monitoring of data subjects on a large scale”, or which involve processing sensitive data on a large scale, the new Regulation mandates the appointment of a DPO.
  3. The introduction of the new one-stop-shop concept in the Regulation: Under the Regulation, a business will deal with a single lead supervisory authority, instead of a separate authority in each member state in which it operates. The ‘one-stop-shop’ will streamline cooperation between the data protection authorities on issues with implications for all of Europe.

The Impact of the new EU Regulation on India

The cross-border flow of data from EU states to other nations has been contentious, visibly so after the Schrems decision, which held the EU-US Safe Harbour framework inadequate. The decision called for a new set of guidelines, which resulted in the creation of the EU-US Privacy Shield.

The EU framework of 1995, as well as the enhanced regime under the new Regulation, prescribes a mandatory adequacy decision to determine whether the country in question adequately protects personal data. The new Regulation dedicates a chapter to the transfer of personal data to third countries, and India’s interest in the Regulation lies here. It provides that:

“A transfer of personal data to a third country or an international organization may take place where the Commission has decided that the third country, or a territory or one or more specified sectors within that third country, or the international organization in question ensures an adequate level of protection.”

The European Commission in 2015 produced a report on data protection in India to assess the measures and standards adopted for the protection of data in India. The report highlighted the lacunae in Indian laws pertaining to personal data. According to a recent NASSCOM-DSCI survey, there is an opportunity loss of USD 2.0–2.5 billion owing to data transfer related issues. The report notes that EU clients are hesitant to offshore work to Indian companies because of the dearth of data protection standards in India. With no comprehensive legislation on data protection in force, institutionalizing a regulatory regime in India has become a herculean task; statutory attempts to this effect have either been scattered across different enactments or have not been effectively enforced so far. The penalty of up to 4% of a company’s annual turnover for a data breach is one of the outstanding features of the new Regulation, and pitching this against the backdrop of India’s fragmented data protection regime points to a host of repercussions.

Joshita Pai was a Fellow at the Centre for Communication Governance  (2015-2016) 

[i] ‘The right to be forgotten’ stirred up as a concept after a Spanish national sued Google Spain and a Spanish newspaper for retaining information about him that was published several years ago.

The EU-US Privacy Shield: The Safer Harbour?

Written By Joshita Pai

On 29 February 2016, the European Commission published details of the legal text that will form the building blocks of the EU-US Privacy Shield. The NSA’s bulk collection of EU users’ data has been a contentious issue since the Snowden revelations. The new agreement will replace the Safe Harbour agreement, which was struck down by the Court of Justice of the European Union in the Schrems judgment, where the Court held the existing provisions inadequate and incapable of protecting data.

The European Commission today issued a Communication summarizing the actions taken to replace the earlier data protection standards. The Commission announced a number of steps to restore trust in the flow of transatlantic data. It finalised the reform of EU data protection rules, which apply to all companies providing services on the EU market; negotiated the EU-U.S. Umbrella Agreement, ensuring high data protection standards for data transfers across the Atlantic for law enforcement purposes; and built a promising framework for commercial data exchange: the EU-U.S. Privacy Shield.

A preliminary dissection of the collective text indicates a commitment to build a stronger framework for protecting transatlantic data. The European Commission, in its draft adequacy decision published yesterday, provides for the establishment of an enhanced regime, stating that the EU-US Privacy Shield will continue to be based on a system of self-certification under which U.S. organisations commit to the EU-U.S. Privacy Shield Framework Principles. Article 4 of the Draft Decision provides that:

“The Commission will continuously monitor the functioning of the EU-U.S. Privacy Shield with a view to assessing whether the United States continues to ensure an adequate level of protection of personal data transferred thereunder from the Union to organisations in the United States.”

Intricacies of the Agreement

Under the new agreement, American companies will have to register to be on the Privacy Shield List and self-certify that they meet the requirements set out. This process will be carried out each year, with periodic reviews. The Privacy Shield includes the crucial principles of: the consent of the user; the choice of the user to opt out of divulging personal information; the security of the transmitted information; and the purpose limitation principle, which ensures that the information is not used for any purpose other than the one the user consented to.

Aside from these guidelines, the draft decision lists out accountability and transparency provisions for companies engaging in data transfers and carves out a redressal mechanism for aggrieved users. The FAQs accompanying the Privacy Shield framework provide that complaints have to be resolved by the companies within 45 days.

The framework also provides for an alternative dispute resolution process: “A free of charge alternative dispute resolution [ADR] solution will be available. EU citizens can also go to their national data protection authorities, who will work with the US department of commerce, and Federal Trade Commission to ensure that unresolved complaints by EU citizens are investigated and resolved.” Roping in the recently passed U.S. Judicial Redress Act, the FAQs note that the Privacy Shield will give EU citizens access to US courts to enforce privacy rights in relation to personal data transferred to the US for law enforcement purposes. The Judicial Redress Act, however, includes a last-minute amendment that carves out US security interests as an exception to this guarantee.

The Article 29 Working Party, in a statement issued recently, outlined a four-part set of guarantees that should apply whenever personal data is transferred from the EU to the United States, to other third countries, or between EU Member States. The statement recommends the following:

“1. Processing should be based on clear, precise and accessible rules: this means that anyone who is reasonably informed should be able to foresee what might happen with her/his data where they are transferred;

  2. Necessity and proportionality with regard to the legitimate objectives pursued need to be demonstrated: a balance needs to be found between the objective for which the data are collected and accessed (generally national security) and the rights of the individual;
  3. An independent oversight mechanism should exist, that is both effective and impartial: this can either be a judge or another independent body, as long as it has sufficient ability to carry out the necessary checks;
  4. Effective remedies need to be available to the individual: anyone should have the right to defend her/his rights before an independent body.”

The new agreement factors in all the recommendations mentioned above. The third recommendation mentions the need for an independent oversight mechanism. This mechanism also resurfaces as an essential criterion in the Communication from the Commission to the European Parliament and the Council, as well as in the draft adequacy decision, both of which were released yesterday. The new agreement creates an “Ombudsperson” to deal with complaints from EU citizens about how their data has been used by the NSA; however, the autonomy of this oversight authority is debatable.

The draft also provides for suspension of the adequacy decision “if the Commission concludes that there are clear indications that effective compliance with the Privacy Principles in the United States might no longer be ensured, or that the actions of U.S. public authorities responsible for national security or the prevention, investigation, detection or prosecution of criminal offenses do not ensure the required level of protection. Alternatively, the Commission may propose to amend this decision, for instance by limiting the scope of the adequacy finding only to data transfers subject to additional conditions.”

Conclusion

The protection of data has been treated as a paramount right in the EU, unlike in the US, where pro-privacy norms are a rare delight. The Schrems judgment did stir up the status quo, and the negotiation process has resulted in the revised agreement. However, the NSA has found its way into the new agreement, visibly so in the exception appended to the bulk collection of data. The draft decision envisages six contingencies in which the US would be permitted to collect signals intelligence in bulk: detecting and countering certain activities of foreign powers; counter-terrorism; counter-proliferation; cyber-security; detecting and countering threats to U.S. or allied armed forces; and combating transnational criminal threats, including sanctions evasion. The New York Times recently reported that data collected in bulk by the NSA will be shared with other U.S. agencies, including the FBI and the CIA, without the identifying information being removed. This marks the meeting point of data processing for commercial purposes and data processing for surveillance. In light of the recent FBI-Apple duel, such a convergence should be viewed cautiously.

 Joshita Pai was a Fellow at the Centre for Communication Governance from 2015-2016