The Future of Democracy in the Shadow of Big and Emerging Tech: CCG Essay Series

By Shrutanjaya Bhardwaj and Sangh Rakshita

In the past few years, the interplay between technology and democracy has reached a critical juncture. Untrammelled optimism about technology is now shadowed by rising concerns over the survival of a meaningful democratic society. As technology platforms expand their reach, democratic societies around the world are increasingly worried about the impact of these platforms on democracy and human rights. In this context, policy attention has turned to issues such as the need for an antitrust framework for digital platforms, platform regulation and free speech, the challenge of fake news, the impact of misinformation on elections, the invasion of citizens' privacy through the deployment of emerging tech, and cybersecurity. This has intensified the quest for optimal policy solutions. We, at the Centre for Communication Governance at National Law University Delhi (CCG), believe that a detailed academic exploration of the relationship between democracy and big and emerging tech will aid our understanding of the current problems, help contextualise them, and highlight potential policy and regulatory responses.

Thus, we bring to you this series of essays, written by experts in the domain, in an attempt to collate contemporary scholarly thought on some of the issues that arise at the intersection of democracy and big and emerging tech. The essay series is publicly available on the CCG website, and we have also announced its release on Twitter.

Our first essay addresses the basic but critical question: What is ‘Big Tech’? Urvashi Aneja & Angelina Chamuah present a conceptual understanding of the phrase. While ‘Big Tech’ refers to a set of companies, it is certainly not a fixed set; companies become part of this set by exhibiting four traits or “conceptual markers” and—as a corollary—would stop being identified in this category if they were to lose any of the four markers. The first marker is that the company runs a data-centric model and has massive access to consumer data which can be leveraged or exploited. The second marker is that ‘Big Tech’ companies have a vast user base and are “multi-sided platforms that demonstrate strong network effects”. The third and fourth markers are the infrastructural and civic roles of these companies respectively, i.e., they not only control critical societal infrastructure (which is often acquired through lobbying efforts and strategic mergers and acquisitions) but also operate “consumer-facing platforms” which enable them to generate consumer dependence and gain huge power over the flow of information among citizens. It is these four markers that collectively define ‘Big Tech’. [U. Aneja and A. Chamuah, What is Big Tech? Four Conceptual Markers]

Since the power held by Big Tech is not only immense but also self-reinforcing, it endangers market competition, often by hindering other players from entering the market. Should competition law respond to this threat? If yes, how? Alok P. Kumar & Manjushree R.M. explore the purpose behind competition law and find that it is concerned not only with consumer protection but also, as is evident from a conjoint reading of Articles 14 and 39 of the Indian Constitution, with preventing the concentration of wealth and material resources in a few hands. Seen in this light, the law must strive to protect "the competitive process". But the present legal framework is ill-equipped to achieve that aim: concepts such as 'relevant market', 'hypothetical monopolist' and 'abuse of dominance', as currently understood, are hard to apply to Big Tech companies, which deal more in data than in money. The solution, the authors propose, lies in ex ante regulation of Big Tech, through a possible code of conduct created after extensive stakeholder consultations, rather than a system of only subsequent sanctions. [A.P. Kumar and Manjushree R.M., Data, Democracy and Dominance: Exploring a New Antitrust Framework for Digital Platforms]

Market dominance and data control give Big Tech companies an even greater power: control over the flow of information among citizens. Given the vital link between democracy and the flow of information, many have called for increased control over social media with a view to checking misinformation. Rahul Narayan explores what these demands might mean for free speech theory. Could it be (as some suggest) that these demands are "a sign that the erstwhile uncritical liberal devotion to free speech was just hypocrisy"? Traditional free speech theory, Narayan argues, is inadequate to deal with the misinformation problem for two reasons. First, it is premised on protecting individual liberty from authoritarian action by governments, "not to control a situation where baseless gossip and slander impact the very basis of society." Second, the core assumption behind traditional theory, namely the possibility of an organic marketplace of ideas where falsehood can be exposed by true speech, breaks down in the context of modern misinformation campaigns. Therefore, some regulation is essential to ensure the prevalence of truth. [R. Narayan, Fake News, Free Speech and Democracy]

Jhalak M. Kakkar and Arpitha Desai examine the context of election misinformation and consider possible misinformation regulatory regimes. Appraising the ideas of self-regulation and state-imposed prohibitions, they suggest that the best way forward for democracy is to strike a balance between the two. This can be achieved if the State focuses on regulating algorithmic transparency rather than the content of the speech—social media companies must be asked to demonstrate that their algorithms do not facilitate amplification of propaganda, to move from behavioural advertising to contextual advertising, and to maintain transparency with respect to funding of political advertising on their platforms. [J.M. Kakkar and A. Desai, Voting out Election Misinformation in India: How should we regulate Big Tech?]

Much like fake news challenges the fundamentals of free speech theory, it also challenges traditional concepts of international humanitarian law. While disinformation fuels aggression by state and non-state actors in myriad ways, it is often hard to establish liability. Shreya Bose formulates the problem as one of causation: "How could we measure the effect of psychological warfare or disinformation campaigns…?" For example, the cause-effect relationship is critical in tackling the recruitment of youth by terrorist outfits and the ultimate execution of acts of terror. It is important also in determining the liability of state actors that commit acts of aggression against other sovereign states in exercise of what they perceive, based on received misinformation about an incoming attack, as self-defence. The author helps us make sense of this tricky terrain and argues that Big Tech could play an important role in countering propaganda warfare, just as it does in promoting it. [S. Bose, Disinformation Campaigns in the Age of Hybrid Warfare]

The last two pieces focus attention on real-life, concrete applications of technology by the state. Vrinda Bhandari highlights the use of facial recognition technology (‘FRT’) in law enforcement as another area where the state deploys Big Tech in the name of ‘efficiency’. Current deployment of FRT is constitutionally problematic. There is no legal framework governing the use of FRT in law enforcement. Profiling of citizens as ‘habitual protestors’ has no rational nexus to the aim of crime prevention; rather, it chills the exercise of free speech and assembly rights. Further, FRT deployment is wholly disproportionate, not only because of the well-documented inaccuracy and bias-related problems in the technology, but also because, more fundamentally, “[t]reating all citizens as potential criminals is disproportionate and arbitrary” and “creates a risk of stigmatisation”. The risk of mass real-time surveillance adds to the problem. In light of these concerns, the author suggests a complete moratorium on the use of FRT for the time being. [V. Bhandari, Facial Recognition: Why We Should Worry the Use of Big Tech for Law Enforcement]

In the last essay of the series, Malavika Prasad presents a case study of the Pune Smart Sanitation Project, a first-of-its-kind urban sanitation programme undertaken in pursuance of the Smart City Mission (‘SCM’). According to the author, the structure of city governance (through Municipalities) that existed even prior to the advent of the SCM violated the constitutional principle of self-governance. This flaw was only aggravated by the SCM, which effectively handed over key aspects of city governance to state corporations. The Pune Project is but a manifestation of the undemocratic nature of this governance structure: it assumes without any justification that ‘efficiency’ and ‘optimisation’ are neutral objectives that ought to be pursued. Prasad finds that in the hunt for efficiency, the design of the Pune Project provides only for the collection of data pertaining to users/consumers, thus excluding the marginalised who may not get access to the system in the first place owing to existing barriers. “Efficiency is hardly a neutral objective,” says Prasad, and the state’s emphasis on efficiency over inclusion and participation reflects a problematic political choice. [M. Prasad, The IoT-loaded Smart City and its Democratic Discontents]

We hope that readers will find the essays insightful. As ever, we welcome feedback.

This series is supported by the Friedrich Naumann Foundation for Freedom (FNF) and has been published by the National Law University Delhi Press. We are thankful for their support. 

The Proliferating Eyes of Argus: State Use of Facial Recognition Technology


This post has been authored by Sangh Rakshita

In Greek mythology, Argus Panoptes was a many-eyed, all-seeing and ever-wakeful giant, whose image has long been used to evoke excessive scrutiny and surveillance. Jeremy Bentham drew on this imagery when he designed the panopticon, a prison in which inmates could be monitored at any time without knowing when they were being watched. Later, Michel Foucault used the panopticon to elaborate the social theory of panopticism, in which the watcher ceases to be external to the watched, resulting in internalised surveillance or a ‘chilling’ effect. This idea of panopticism has gained renewed relevance in the age of digital surveillance.

Amongst the many cutting-edge surveillance technologies being adopted globally, ‘Facial Recognition Technology’ (FRT) is one of the most rapidly deployed. Its augmentation, ‘Live Facial Recognition Technology’ (LFRT) or ‘Real-time Facial Recognition Technology’, has become increasingly effective in the past few years. Improvements in computational power and algorithms have enabled cameras placed at odd angles to detect faces even in motion. This post explores the issues with the increasing State use of FRT around the world and the legal framework surrounding it.

What do FRT and LFRT mean?

FRT refers to the use of algorithms to uniquely detect, recognise or verify a person from recorded images, sketches or videos containing their face. The data about a particular face is generally known as a face template. This template is a mathematical representation of a person’s face, created by algorithms that mark and map distinct features of the captured image, such as the location of the eyes or the length of the nose. These face templates make up the biometric database against which new images, sketches, videos, etc. are compared to verify or recognise the identity of a person. Whereas FRT in this sense is conducted on pre-recorded images and videos, LFRT involves real-time automated facial recognition of all individuals in a camera’s field of vision. It involves biometric processing of the images of all passers-by, using an existing database of images as a reference.
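To make the mechanism concrete, the sketch below illustrates, in simplified form, how a face template might be compared against an enrolled database. It is only an illustration: the extract_template function, the cosine-similarity measure and the threshold value are assumptions for exposition, not a description of any deployed system.

```python
# A minimal, illustrative sketch of template-based matching, assuming a
# hypothetical extract_template() function; real FRT systems use trained
# models and proprietary pipelines, and the threshold value here is arbitrary.

from typing import Dict, Optional
import numpy as np

def extract_template(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor: maps facial landmarks (eye positions,
    nose length, etc.) to a fixed-length numeric vector (the face template).
    A real system would use a trained model; only the interface is shown."""
    raise NotImplementedError

def identify(probe: np.ndarray, database: Dict[str, np.ndarray],
             threshold: float = 0.6) -> Optional[str]:
    """Compare a probe template against enrolled templates and return the
    best-matching identity if its similarity clears the threshold."""
    best_id, best_score = None, -1.0
    for person_id, enrolled in database.items():
        # Cosine similarity between the probe and an enrolled template.
        score = float(np.dot(probe, enrolled) /
                      (np.linalg.norm(probe) * np.linalg.norm(enrolled)))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```

The choice of threshold in such a system directly trades off false matches against missed identifications, which is part of why accuracy claims are contested.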

The accuracy of FRT algorithms is significantly affected by factors such as the distance and angle from which the image was captured and poor lighting conditions. These problems are worse in LFRT, where images are not captured in a controlled setting: the subjects are in motion, rarely look at the camera, and are often positioned at odd angles from it.

Despite claims of its effectiveness, there is growing scepticism about the use of FRT. Its use has been linked with the misidentification of people of colour, ethnic minorities, women and trans people. The prevalent use of FRT may affect the privacy rights not only of such communities but of all those who are surveilled.

The Prevalence of FRT 

While FRT has become ubiquitous, LFRT is still in the process of being adopted in countries like the UK, the USA, India and Singapore. The COVID-19 pandemic has further accelerated the adoption of FRT as a way to track the virus’s spread and to build contactless biometric-based identification systems. For example, city officials in Moscow have used a system of tens of thousands of FRT-equipped cameras to check compliance with social distancing measures, the use of face masks and quarantine rules in order to contain the spread of COVID-19.

FRT is also being steadily deployed for mass surveillance, often in violation of universally accepted human rights principles such as necessity and proportionality. These worries have come to the forefront recently with the State use of FRT to identify people participating in protests. For example, FRT was used by law enforcement agencies to identify prospective law-breakers during the protests in Hong Kong, the protests against the Citizenship Amendment Act, 2019 in New Delhi, and the Black Lives Matter protests across the USA.

Civil society and digital rights groups have made vociferous demands for a global moratorium on the pervasive use of FRT that enables mass surveillance, and several cities such as Boston and Portland have banned its deployment. However, it remains to be seen how effective these measures are in halting the use of FRT. Even the temporary refusal by Big Tech companies to sell FRT to police forces in the US does not seem to have much instrumental value, as other private companies continue to supply the technology unhindered.

Regulation of FRT

The approach to the regulation of FRT differs vastly across the globe. The regulatory spectrum ranges from its permissive use for mass surveillance of citizens in countries like China and Russia to outright bans, for example in Belgium and Boston (in the USA). However, in many countries around the world, including India, the use of FRT continues unabated, worryingly in a regulatory vacuum.

Recently, an appellate court in the UK declared the use of LFRT for law enforcement purposes unlawful, on the ground that it violated the rights to data privacy and equality. Despite the presence of a legal framework in the UK for data protection and the use of surveillance cameras, the Court of Appeal held that there was no clear guidance on the use of the technology and that it gave excessive discretion to police officers.

The EU has been contemplating a moratorium on the use of FRT in public places. Civil society in the EU is demanding a comprehensive and indefinite ban on the use of FRT and related technology for mass surveillance activities.

In the USA, several orders banning or heavily regulating the use of FRT have been passed. A federal law banning the use of facial recognition and biometric technology by law enforcement has been proposed. The bill seeks to place a moratorium on the use of facial recognition until Congress passes a law to lift the temporary ban. It would apply to federal agencies such as the FBI, as well as local and State police departments.

The Indian Scenario

In July 2019, the Government of India announced its intention to set up a nationwide facial recognition system. The National Crime Records Bureau (NCRB), a government agency operating under the Ministry of Home Affairs, released a request for proposal (RFP) on July 4, 2019 to procure a National Automated Facial Recognition System (AFRS). The deadline for submission of tenders to the RFP has been extended 11 times since July 2019. The stated aim of the AFRS is to help modernise the police force and improve information gathering, criminal identification and verification, and the dissemination of such records among various police organisations and units across the country.

Security forces across the states and union territories will have access to the centralised AFRS database, which will assist in the investigation of crimes. However, civil society organisations have raised concerns regarding privacy and increased surveillance by the State, as the AFRS does not have a legal basis (statutory or executive) and lacks procedural safeguards and accountability measures such as an oversight regulatory authority. They have also questioned the accuracy of FRT in identifying darker-skinned women and ethnic minorities and expressed fears of discrimination.

This is in addition to the FRT already in use by law enforcement agencies in Chennai, Hyderabad, Delhi, and Punjab. There are several instances of deployment of FRT in India by the government in the absence of a specific law regulating FRT or a general data protection law.

Even the proposed Personal Data Protection Bill, 2019 is unlikely to assuage privacy challenges arising from the use of FRT by the Indian State. The primary reason for this is the broad exemptions provided to intelligence and law enforcement agencies under Clause 35 of the Bill on grounds of sovereignty and integrity, security of the State, public order, etc.

After the judgment in K.S. Puttaswamy vs. Union of India (Puttaswamy I), which reaffirmed the fundamental right to privacy in India, any act of State surveillance that breaches the right to privacy will need to adhere to the three-part test laid down in Puttaswamy I.

The three prongs of the test are – legality, which postulates the existence of law along with procedural safeguards; necessity, defined in terms of a legitimate State aim; and proportionality which ensures a rational nexus between the objects and the means adopted to achieve them. This test was also applied in the Aadhaar case (Puttaswamy II) to the use of biometrics technology. 

It may be argued that State use of FRT serves the legitimate aim of ensuring national security, but its use is currently neither sanctioned by law nor does it pass the test of proportionality. For the use of FRT to be proportionate, the State will need to establish that there is a rational nexus between its use and the purpose sought to be achieved, and that such use is the least privacy-restrictive measure available to achieve the intended goals. As the law stands in India after Puttaswamy I and II, any use of FRT or LFRT is prima facie unconstitutional.

While mass surveillance is legally impermissible in India, targeted surveillance is allowed under Section 5 of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph Rules, 1951, and Section 69 of the Information Technology Act, 2000 (IT Act). The constitutionality of Section 69 of the IT Act has itself been challenged, and the challenge is currently pending before the Supreme Court.

Puttaswamy I clarified that the protection of privacy is not completely lost or surrendered in a public place, as it attaches to the person. Hence, the constitutionality of India’s surveillance apparatus needs to be assessed against the standards laid down in Puttaswamy I. To check unregulated mass surveillance through the deployment of FRT by the State, the overall surveillance regime in the country needs to be restructured. The Justice Srikrishna Committee report of 2018 also highlighted that several executive-sanctioned intelligence-gathering activities of law enforcement agencies would be illegal after Puttaswamy I, as they do not operate under any law.

The need for reform of surveillance laws, in addition to a data protection law, to safeguard fundamental rights and civil liberties in India cannot be stressed enough. Surveillance law reform will have to focus on the use of new technologies like FRT and regulate their deployment with substantive and procedural safeguards that prevent the abuse of human rights and civil liberties and provide for redress.

Well-documented limitations of FRT and LFRT in terms of low accuracy rates, along with concerns of profiling and discrimination, make it essential for surveillance law reform to include additional safeguards such as mandatory accuracy and non-discrimination audits. For example, the 2019 Face Recognition Vendor Test (Part 3) of the National Institute of Standards and Technology (NIST), US Department of Commerce, evaluates whether an algorithm performs differently across different demographics in a dataset. The need of the hour is to cease the use of FRT and place a temporary moratorium on any future deployments until surveillance law reforms with adequate proportionality safeguards have been implemented.
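To indicate what such a non-discrimination audit might measure, the sketch below computes a false match rate per demographic group from labelled comparison trials. It is written in the spirit of the NIST demographic-differential evaluations rather than reproducing their methodology, and the record fields and group labels are hypothetical.

```python
# A simplified sketch of a demographic error-rate audit: it computes the
# false match rate (impostor pairs wrongly accepted) for each demographic
# group. Field names and groups are hypothetical, for illustration only.

from collections import defaultdict

def false_match_rate_by_group(trials):
    """`trials` is an iterable of dicts such as:
    {"group": "label", "same_person": False, "matched": True}
    Returns {group: false match rate} over impostor comparisons."""
    impostor_counts = defaultdict(int)   # impostor comparisons per group
    false_matches = defaultdict(int)     # impostor comparisons wrongly accepted
    for t in trials:
        if not t["same_person"]:         # impostor pair
            impostor_counts[t["group"]] += 1
            if t["matched"]:
                false_matches[t["group"]] += 1
    return {g: false_matches[g] / n for g, n in impostor_counts.items() if n}
```

A large gap between groups’ false match rates is the kind of demographic differential such an audit is meant to surface.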

Reflections on Personal Data Protection Bill, 2019

By Sangh Rakshita and Nidhi Singh


The Personal Data Protection Bill, 2019 (PDP Bill/Bill) was introduced in the Lok Sabha on December 11, 2019, and was immediately referred to a joint committee of Parliament. The joint committee published a press communiqué on February 4, 2020 inviting comments on the Bill from the public.

The Bill is the successor to the Draft Personal Data Protection Bill 2018 (Draft Bill 2018), recommended by a government-appointed expert committee chaired by Justice B.N. Srikrishna. In August 2018, shortly after the committee’s recommendations and the publication of the draft Bill, the Ministry of Electronics and Information Technology (MeitY) invited comments on the Draft Bill 2018 from the public. (Our comments are available here.)[1]

In this post we undertake a preliminary examination of:

  • The scope and applicability of the PDP Bill
  • The application of general data protection principles
  • The rights afforded to data subjects
  • The exemptions provided to the application of the law

In future posts in the series, we will examine:

  • The restrictions on cross border transfer of personal data
  • The structure and functions of the regulatory authority
  • The enforcement mechanism and the penalties under the PDP Bill

Scope and Applicability

The Bill identifies four different categories of data: personal data, sensitive personal data, critical personal data and non-personal data.

Personal data is defined as “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, whether online or offline, or any combination of such features with any other information, and shall include any inference drawn from such data for the purpose of profiling.” (emphasis added)

The inclusion of inferred data within the definition of personal data is an interesting reflection of the way the conversation around data protection has evolved in the past few months, and requires further analysis.

Sensitive personal data is defined as data that may reveal, be related to or constitute a number of different categories of personal data, including financial data, health data, official identifiers, sex life, sexual orientation, genetic data, transgender status, intersex status, caste or tribe, and religious and political affiliations / beliefs. In addition, under clause 15 of the Bill the Central Government can notify other categories of personal data as sensitive personal data in consultation with the Data Protection Authority and the relevant sectoral regulator.

Similar to the 2018 Draft Bill, the current Bill does not define critical personal data, and clause 33 gives the Central Government the power to notify what is included under critical personal data. However, in its report accompanying the 2018 Draft Bill, the Srikrishna Committee had referred to some examples of critical personal data that relate to critical State interests, such as the Aadhaar number, genetic data, biometric data and health data.

The Bill retains the terminology introduced in the 2018 Draft Bill, referring to data controllers as ‘data fiduciaries’ and data subjects as ‘data principals’. The new terminology was introduced to reflect the fiduciary nature of the relationship between data controllers and data subjects. However, whether the use of this specific terminology has any greater impact on the protection and enforcement of the rights of data subjects remains to be seen.

Application of PDP Bill 2019

The Bill is applicable to (i) the processing of any personal data, which has been collected, disclosed, shared or otherwise processed in India; (ii) the processing of personal data by the Indian government, any Indian company, citizen, or person/ body of persons incorporated or created under Indian law; and (iii) the processing of personal data in relation to any individuals in India, by any persons outside of India.

The scope of the 2019 Bill is largely similar in this context to that of the 2018 Draft Bill. However, one key difference relates to anonymised data. While the 2018 Draft Bill completely exempted anonymised data from its scope, the 2019 Bill does not apply to anonymised data except under clause 91, which gives the government the power to mandate the use and processing of non-personal data or anonymised personal data under policies to promote the digital economy. A few concerns arise from this change in the treatment of anonymised personal data. First, there are concerns about the concept of anonymisation of personal data itself. While the Bill provides that the Data Protection Authority (DPA) will specify appropriate standards of irreversibility for the process of anonymisation, it is not clear that a truly irreversible form of anonymisation is possible at all. Given this, more clarity is needed on what safeguards will apply to the use of anonymised personal data.
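To illustrate why irreversibility is doubted, the toy sketch below shows a linkage attack: records stripped of direct identifiers can sometimes be re-identified by joining quasi-identifiers (here, an invented pin code, birth year and gender) against an auxiliary dataset. All records, field names and datasets below are invented for illustration.

```python
# Toy illustration of a linkage attack on "anonymised" data: direct
# identifiers are removed, but quasi-identifiers can still be joined against
# an auxiliary dataset. All records and field names below are invented.

anonymised_health_records = [
    {"pin_code": "110001", "birth_year": 1984, "gender": "F", "diagnosis": "..."},
]
public_voter_list = [
    {"name": "A. Example", "pin_code": "110001", "birth_year": 1984, "gender": "F"},
]

QUASI_IDENTIFIERS = ("pin_code", "birth_year", "gender")

def key(record):
    """Project a record onto its quasi-identifiers."""
    return tuple(record[q] for q in QUASI_IDENTIFIERS)

# Index the auxiliary (identified) dataset by the quasi-identifier tuple.
aux_index = {key(r): r["name"] for r in public_voter_list}

for record in anonymised_health_records:
    name = aux_index.get(key(record))
    if name is not None:
        # If the quasi-identifier combination is unique, the "anonymised"
        # record is linked back to a named individual.
        print(f"Re-identified {name}: diagnosis {record['diagnosis']}")
```

The smaller the group of people sharing a quasi-identifier combination, the easier such re-identification becomes, which is why safeguards beyond the mere removal of names matter.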

Second is the Bill’s focus on the promotion of the digital economy. We have previously discussed some of the concerns regarding the focus on promoting the digital economy in a rights-based legislation in our comments on the Draft Bill 2018.

These issues continue to be of concern, and are perhaps heightened by the introduction of a specific provision on the subject in the 2019 Bill (especially without adequate clarity on which services or policy-making efforts in this direction are to be informed by the use of anonymised personal data). Many of these issues are also still under discussion by the committee of experts set up to deliberate on a data governance framework for non-personal data. The mandate of this committee includes the study of various issues relating to non-personal data and making specific suggestions for the consideration of the Central Government on the regulation of non-personal data.

The formation of the non-personal data committee was in pursuance of a recommendation by the Justice Srikrishna Committee to frame a legal framework for the protection of community data, where the community is identifiable. The mandate of the expert committee will overlap with the application of clause 91(2) of the Bill.

Data Fiduciaries, Social Media Intermediaries and Consent Managers

Data Fiduciaries

As discussed above, the Bill categorises data controllers as data fiduciaries and significant data fiduciaries. Any person who determines the purpose and means of processing of personal data (including the State, companies, juristic entities or individuals) is considered a data fiduciary. Some data fiduciaries may be notified as ‘significant data fiduciaries’ on the basis of factors such as the volume and sensitivity of personal data processed, the risk of harm, etc. Significant data fiduciaries are held to higher standards of data protection. Under clauses 27-30, significant data fiduciaries are required to carry out data protection impact assessments, maintain accurate records, have their policies and the conduct of their processing of personal data audited, and appoint a data protection officer.

Social Media Intermediaries

The Bill introduces a distinct category of intermediaries called social media intermediaries. Under clause 26(4) a social media intermediary is ‘an intermediary who primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services’. Intermediaries that primarily enable commercial or business-oriented transactions, provide access to the Internet, or provide storage services are not to be considered social media intermediaries.

Social media intermediaries may be notified to be significant data fiduciaries, if they have a minimum number of users, and their actions have or are likely to have a significant impact on electoral democracy, security of the State, public order or the sovereignty and integrity of India.

Under clause 28, social media intermediaries that have been notified as significant data fiduciaries will be required to provide users with an option of voluntary verification, to be accompanied by a demonstrable and visible mark of verification.

Consent Managers

The Bill also introduces the idea of a ‘consent manager’ i.e. a (third party) data fiduciary which provides for management of consent through an ‘accessible, transparent and interoperable platform’. The Bill does not contain any details on how consent management will be operationalised, and only states that these details will be specified by regulations under the Bill. 

Data Protection Principles and Obligations of Data Fiduciaries

Consent and grounds for processing

The Bill recognises consent as well as a number of other grounds for the processing of personal data.

Clause 11 provides that personal data shall only be processed if consent is provided by the data principal at the commencement of processing. This provision, similar to the consent provision in the 2018 Draft Bill, draws from various principles including those under the Indian Contract Act, 1872 to inform the concept of valid consent under the PDP Bill. The clause requires that the consent should be free, informed, specific, clear and capable of being withdrawn.

Moreover, explicit consent is required for the processing of sensitive personal data. The current Bill appears to be silent on issues such as incremental consent which were highlighted in our comments in the context of the Draft Bill 2018.

The Bill provides for additional grounds for processing of personal data, consisting of very broad (and much criticised) provisions for the State to collect personal data without obtaining consent. In addition, personal data may be processed without consent if required in the context of employment of an individual, as well as a number of other ‘reasonable purposes’. Some of the reasonable purposes, which were listed in the Draft Bill 2018 as well, have also been a cause for concern given that they appear to serve mostly commercial purposes, without regard for the potential impact on the privacy of the data principal.

In a notable change from the Draft Bill 2018, the PDP Bill appears to be silent on whether these other grounds for processing will be applicable to sensitive personal data (with the exception of processing in the context of employment, which is explicitly barred).

Other principles

The Bill also incorporates a number of traditional data protection principles in the chapter outlining the obligations of data fiduciaries. Personal data can only be processed for a specific, clear and lawful purpose. Processing must be undertaken in a fair and reasonable manner and must ensure the privacy of the data principal – a clear mandatory requirement, as opposed to a ‘duty’ owed by the data fiduciary to the data principal in the Draft Bill 2018 (this change appears to be in line with recommendations made in multiple comments to the Draft Bill 2018 by various academics, including our own).

Purpose and collection limitation principles are mandated, along with a detailed description of the kind of notice to be provided to the data principal, either at the time of collection, or as soon as possible if the data is obtained from a third party. The data fiduciary is also required to ensure that data quality is maintained.

A few changes in the application of data protection principles, as compared to the Draft Bill 2018, can be seen in the data retention and accountability provisions.

On data retention, clause 9 of the Bill provides that personal data shall not be retained beyond the period ‘necessary’ for the purpose of data processing, and must be deleted after such processing, ostensibly a higher standard as compared to ‘reasonably necessary’ in the Draft Bill 2018. Personal data may only be retained for a longer period if explicit consent of the data principal is obtained, or if retention is required to comply with law. In the face of the many difficulties in ensuring meaningful consent in today’s digital world, this may not be a win for the data principal.

Clause 10 on accountability continues to provide that the data fiduciary will be responsible for compliance in relation to any processing undertaken by the data fiduciary or on its behalf. However, the data fiduciary is no longer required to demonstrate such compliance.

Rights of Data Principals

Chapter V of the PDP Bill 2019 outlines the Rights of Data Principals, including the rights to access, confirmation, correction, erasure, data portability and the right to be forgotten. 

Right to Access and Confirmation

The PDP Bill 2019 makes some amendments to the right to confirmation and access, included in clause 17 of the bill. The right has been expanded in scope by the inclusion of sub-clause (3). Clause 17(3) requires data fiduciaries to provide data principals information about the identities of any other data fiduciaries with whom their personal data has been shared, along with details about the kind of data that has been shared.

This allows the data principal to exert greater control over their personal data and its use.  The rights to confirmation and access are important rights that inform and enable a data principal to exercise other rights under the data protection law. As recognized in the Srikrishna Committee Report, these are ‘gateway rights’, which must be given a broad scope.

Right to Erasure

The right to correction (Clause 18) has been expanded to include the right to erasure. This allows data principals to request erasure of personal data which is not necessary for processing. While data fiduciaries may be allowed to refuse correction or erasure, they would be required to produce a justification in writing for doing so, and if there is a continued dispute, indicate alongside the personal data that such data is disputed.

The addition of a right to erasure is an expansion of rights from the 2018 Bill. While the right to be forgotten only restricts or discontinues disclosure of personal data, the right to erasure goes a step further and empowers the data principal to demand complete removal of their data from the system of the data fiduciary.

Many of the concerns expressed in the context of the Draft Bill 2018, in terms of the procedural conditions for the exercise of the rights of data principals, as well as the right to data portability specifically, continue to persist in the PDP Bill 2019.

Exceptions and Exemptions

While the PDP Bill ostensibly enables individuals to exercise their right to privacy against the State and the private sector, it provides several exemptions, which raise serious concerns.

The Bill grants broad exceptions to the State. In some cases, it is in the context of specific obligations such as the requirement for individuals’ consent. In other cases, State action is almost entirely exempted from obligations under the law. Some of these exemptions from data protection obligations are available to the private sector as well, on grounds like journalistic purposes, research purposes and in the interests of innovation.

The most concerning of these provisions are the exemptions granted to intelligence and law enforcement agencies under the Bill. The Draft Bill 2018 also provided exemptions to intelligence and law enforcement agencies, but only insofar as the privacy-invasive actions of these agencies were permitted under law, met procedural standards, and satisfied the legal standards of necessity and proportionality. We have previously discussed some of the concerns with this approach here.

The exemptions provided to these agencies under the PDP Bill seem to exacerbate these issues.

Under the Bill, the Central Government can exempt an agency of the government from the application of this Act by passing an order with reasons recorded in writing if it is of the opinion that the exemption is necessary or expedient in the interest of sovereignty and integrity, security of the state, friendly relations with foreign states, public order; or for preventing incitement to the commission of any cognizable offence relating to the aforementioned grounds. Not only have the grounds on which government agencies can be exempted been worded in an expansive manner, the procedure of granting these exemptions also is bereft of any safeguards.

The executive in India at times suffers from problems of opacity and unfettered discretion, which calls for a robust system of checks and balances to avoid abuse. The Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act) enable government surveillance of communications made over telephones and the internet. For the purpose of comparison, we primarily refer to the Telegraph Act, as it allows the government to intercept phone calls, by an order in writing, on grounds similar to those mentioned in clause 35 of the Bill. However, the Telegraph Act limits the use of this power to two scenarios: the occurrence of a public emergency or the interest of public safety. The government cannot intercept communications made over telephones in the absence of these two preconditions.

The Supreme Court in People’s Union for Civil Liberties v. Union of India (1997) introduced guidelines to check the abuse of surveillance powers under the Telegraph Act, which were later incorporated in Rule 419A of the Indian Telegraph Rules, 1951. A prominent safeguard in Rule 419A requires that surveillance and monitoring orders be issued only after considering ‘other reasonable means’ of acquiring the required information. The Court further limited the interpretation of ‘public emergency’ and ‘public safety’ to mean “the prevalence of a sudden condition or state of affairs affecting the people at large and calling for immediate action” and “the state or condition of freedom from danger or risk at large” respectively. In spite of these safeguards, the procedure for intercepting telephone communications under the Telegraph Act has been criticised for its lack of transparency and improper implementation. For instance, a 2014 report revealed that around 7,500-9,000 phone interception orders were issued by the Central Government every month. Applying the procedural safeguards in each case would have been physically impossible given the sheer numbers. Legislative and judicial oversight thus becomes a necessity in such cases.

The constitutionality of India’s surveillance apparatus, including Section 69 of the IT Act, which allows surveillance on the broader grounds of necessity and expediency rather than ‘public emergency’ and ‘public safety’, has been challenged before the Supreme Court and is currently pending. Clause 35 of the Bill also mentions necessity and expediency as prerequisites for the government to exercise its power to grant exemptions; these terms appear vague and open-ended, as they are not defined.

The test of necessity implies resorting to the least intrusive method of encroachment upon privacy to achieve the legitimate State aim. This test is typically one among several factors applied under human rights law in deciding whether a particular intrusion on a right is tenable. In his concurring opinion in Puttaswamy (I), Justice Kaul included ‘necessity’ in the proportionality test (however, this test is not otherwise well developed in Indian jurisprudence). Expediency, on the other hand, is not a specific legal basis used for determining the validity of an intrusion on human rights, nor has it been referred to in Puttaswamy (I) as a basis for assessing a privacy violation. The use of the term ‘expediency’ in the Bill is deeply worrying, as it seems to lower the threshold for allowing surveillance, a regressive step in the context of cases like PUCL and Puttaswamy (I). A valid law, along with the principles of proportionality and necessity, is essential to put in place an effective system of checks and balances on the executive’s power to grant exemptions. It seems unlikely that the clause will pass the test of proportionality (sanction of law, legitimate aim, proportionality to the need of interference, and procedural guarantees against abuse) laid down by the Supreme Court in Puttaswamy (I).

The Srikrishna Committee report had recommended that surveillance should not only be conducted under law (and not executive order), but should also be subject to oversight and transparency requirements. The Committee had argued that the tests of lawfulness, necessity and proportionality provided for under clauses 42 and 43 (of the Draft Bill 2018) were sufficient to meet the standards set out in the Puttaswamy judgment. Since the PDP Bill completely does away with these safeguards and leaves the decision to executive discretion, the provision is unconstitutional. After the Bill was introduced in the Lok Sabha, Justice Srikrishna criticised it for granting expansive exemptions in the absence of judicial oversight. He warned that the consequences could be disastrous from the point of view of safeguarding the right to privacy and could turn the country into an “Orwellian State”. He has also opined on the need for a separate legislation to govern the terms under which the government can resort to surveillance.

Clause 36 of the Bill deals with the exemption of certain provisions for certain kinds of processing of personal data. It combines four different clauses on exemptions listed in the Draft Bill 2018 (clauses 43, 44, 46 and 47). These cover the processing of personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law; for the purpose of legal proceedings; for personal or domestic purposes; and for journalistic purposes. The Draft Bill 2018 had detailed provisions requiring a law passed by Parliament or the State Legislature, which is necessary and proportionate, for the processing of personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law. Clause 36 of the Bill does not require such a law for processing personal data under these exemptions. We had argued that the exemptions granted by the Draft Bill 2018 (clauses 43, 44, 46 and 47) were wide, vague and in need of clarification, but the exemptions under clause 36 of the Bill are even more ambiguous, as they merely list the exemptions without any specificity or procedural safeguards in place.

Under the Draft Bill 2018, the Authority could not grant exemptions from the obligations of fair and reasonable processing, security safeguards and data protection impact assessments for research, archiving or statistical purposes. Under the current Bill, the Authority can provide an exemption from any of the provisions of the Act for research, archiving or statistical purposes.

The last addition to this chapter of exemptions is the creation of a sandbox for encouraging innovation. The newly added clause 40 is aimed at encouraging innovation in artificial intelligence, machine learning or any other emerging technology in the public interest. The details of what the sandbox entails, other than exemption from some of the obligations of Chapter II, need further clarity. Additionally, to be considered an eligible applicant, a data fiduciary must obtain certification of its privacy-by-design policy from the DPA, as mentioned in clause 40(4) read with clause 22.

Though the intent is appreciated, this provision requires clarification on the grounds for selection and on the details of what the sandbox will entail.


[1] At the time of introduction of the PDP Bill 2019, the Minister for Law and Justice of India, Mr. Ravi Shankar Prasad, suggested that over 2000 inputs were received on the Draft Bill 2018, based on which changes were made in the PDP Bill 2019. However, these comments and inputs have not been published by MeitY; only a handful of comments have been made public, by the stakeholders who submitted them.