The Personal Data Protection Bill, 2019 vs. GDPR: Provisions for the rights of the child and their implications

This post is authored by Puja Nair

The debate on privacy rose to the forefront after the Supreme Court passed its judgement in Justice K.S. Puttaswamy (Retd.) v. Union of India, where the Court held that the right to privacy is an intrinsic part of the right to life and personal liberty under Article 21 of the Constitution of India. In arriving at this conclusion, the Court examined a wide range of privacy-related issues and held that the right to privacy includes the right to personal autonomy over a wide range of domains in a person’s life.

While the above decision seems obvious in its simplicity, complications arise when one considers that a child or adolescent may not understand the consequences of their individual choices. In the context of online data privacy, it is safe to say that children may be unaware of the exact manner in which any data they share online is put to use. The report submitted by the committee of experts under the chairmanship of Justice B.N. Srikrishna clearly endorses this belief.

Clause 16 of the Indian Personal Data Protection Bill, 2019 (‘PDPB 2019’), which was tabled in parliament on December 11, 2019, deals with the processing of personal and sensitive personal data of children. It states categorically that every data fiduciary shall “process the personal data of a child in a manner that protects the rights of, and is in the best interests of, the child.” It further states that a data fiduciary shall only process the personal data of a child, after verifying their age and obtaining the consent of their parent or guardian, in the manner specified by future regulations.

Based on this provision, the primary question that arises is: who is a child as per the PDPB 2019? According to the provisions of the bill, a child is someone who “has not completed the age of 18 years.” This is distinct from the data protection statutes passed in other jurisdictions. The EU General Data Protection Regulation (‘GDPR’) sets the default age at 16 but leaves it to the discretion of individual member states to lower it, provided the limit is not set below 13 years. The US Children’s Online Privacy Protection Act, 1998, on the other hand, puts the age limit at a firm 13 years. Notwithstanding the above, the PDPB 2019 specifies 18 as the age of majority. This was done to ensure that the provisions of the bill would be in conformity with the prevailing laws of the country.

The adoption of a singular age of majority prevents confusion and conflict between the laws of the country; however, it also underestimates the awareness and advancement of today’s youth. An example of this understanding was espoused by the Madras High Court in Sabari Sabarinathan Sabarivasan v. State Commission for Protection of Child Rights and Ors. That judgment examines existing flaws in the Protection of Children from Sexual Offences (POCSO) Act, 2012 and recommends a change in the definition of the term ‘child,’ so that a consensual relationship between a girl above 16 years of age and a boy between 16 and 21 years of age would not attract the draconian provisions of the law. The drafters of the PDPB 2019 could have taken a similar view, rather than conforming with statutes like the Indian Contract Act or the Indian Majority Act, both of which were enacted in the late 1800s. Furthermore, a 2019 study conducted among 630 adolescents across 8 schools in the nation’s capital revealed that 60 per cent of the boys and 40 per cent of the girls owned their own device, while almost half reportedly used two or more devices to access the Internet. These numbers have no doubt increased since then, and the COVID-19 crisis has further accelerated the adoption of online services for both education and entertainment. This means that mandating a guardian’s consent for anyone below the age of 18 years could very well result in some data fiduciaries inadvertently being on the wrong side of the law.

Another question raised by Clause 16 of the PDPB 2019, is the determination of what constitutes the best interests of the child. The bill does not specify how this is to be determined; however, subclause 5 of Clause 16 categorizes certain types of data processing like behavioural monitoring, tracking, and targeted advertising as harmful for children.

We then come to the requirement for age verification and parental consent. The bill does not explore this in detail; it merely states that the process of acquiring such consent and/or verification will be specified in further regulations, after taking into account factors like the volume of personal data processed, the proportion of such personal data likely to be that of a child, and the potential for harm to the child arising from the processing of their personal data.

Regardless, one issue that may arise when it comes to consent is the question of capacity. Clause 11 of the PDPB 2019 states that, among other things, consent must be free and informed. However, parents cannot provide such free and informed consent on behalf of their children if they do not understand the terms and conditions set out in the policies of these websites. In many instances, children possess a much greater awareness of current technology trends and their implications than their parents do. Additional issues arise when we consider the concept of free choice: if one wants to register with any of the popular online apps and services available, one inevitably has to agree to their terms and conditions, regardless of any reservations one might have. The notion of consent being “freely given” is therefore rendered largely illusory.

GDPR and the European Union

Article 8 of the GDPR states that where there is an offer of “information society services directly to a child”, the processing of the personal data of that child shall be lawful where the child is at least 16 years old. If the child is below the age of 16 years, such processing shall be lawful only if consent has been obtained from the “holder of parental responsibility over the child”. Member States can provide for a lower age limit, provided it is not below 13 years of age. The provision further requires that “reasonable efforts” be made to verify that consent is given or authorised by the holder of parental responsibility over the child, taking into consideration available technology.

Article 8 is the principal provision relating to the protection of children’s personal data in the GDPR. There are other provisions that mandate the type of measures that must be taken for the protection of the personal data of a child. For example, when obtaining data from a child, data controllers must ensure that any information on the processing of such data is presented in clear and plain terms that a child can easily understand. The GDPR also provides for the ‘right of erasure’ of children’s personal data. This is particularly relevant in cases where data subjects provided their consent as children, without being fully aware of the risks involved, and now seek the erasure of such personal data. Clause 16 of the PDPB, which relates to the processing of personal data of children, closely mirrors Article 8 of the GDPR. To that end, this post is limited to an examination of Article 8 of the GDPR and the potential pitfalls that await the implementation of Clause 16 of the PDPB 2019.

Article 8 applies only to information society services offered directly to a child. An information society service or ISS is any service that is provided at a distance, by electronic means, and at the individual request of a recipient of the services. The definition also includes the requirement that the service be provided in exchange for “remuneration”. However, the majority of online services that teenagers access do not directly require remuneration from their users; common examples include popular social media sites like Facebook and Instagram. For this reason, the term “remuneration” is interpreted broadly by the European Court of Justice (‘ECJ’). The Court has held that “the essential characteristic of remuneration […] lies in the fact that it constitutes consideration for the service in question and is normally agreed upon between the provider and the recipient of the service”. It is not essential that the recipient of the services provide the consideration; it is only essential that the consideration be received by the service provider. Subsequent rulings specified that such services may also include services provided by a non-profit organisation, services involving an element of chance, and services that are of a recreational or sporting nature.

Some confusion may arise in situations where the ISS has both online and offline components. In such cases one must determine whether or not the online component is integral to the nature of the service provided. If it is not integral, then such services cannot be categorized as an ISS. While these cases provide some clarity, it is clear that the definition and scope of what constitutes an ISS will continue to evolve with the evolution of technology. This is in direct contrast to the definition of a data fiduciary in the PDPB 2019, which is much more straightforward. The bill defines a data fiduciary as “any person, including the State, a company, any juristic entity or any individual who alone or in conjunction with others determines the purpose and means of processing of personal data.”

Further, much like Clause 16 of the PDPB 2019, the drafting of Article 8 raises questions as to what constitutes proper consent and how such consent can be appropriately verified. Some of these questions have been delineated above in the Indian context and are also applicable here. The European Data Protection Board (‘EDPB’) has addressed these issues in its guidelines on consent issued under the GDPR. The guidelines state that if a data subject consents because they feel they have no real choice, then the consent is not valid. The guidelines also specify certain situations where an imbalance of power between the data subject and the controller would render consent invalid. They further provide that consent would not be considered “freely given” if it was bundled with the acceptance of the terms and conditions of a website. Additionally, when it comes to the issue of capacity, the guidelines provide that for consent to be informed, the data subject, or the individual holding parental responsibility over the data subject, must have knowledge of the controller’s identity, the purpose of each of the processing operations for which consent is sought, the type of data collected and used, and the existence of the right to withdraw consent.

Finally, even if the validity of consent is established, there is no provision to determine whether the person providing such consent is qualified to do so. According to Article 8, consent must be given by a holder of parental responsibility. Does this include individuals who are acting in loco parentis? For example, in the US, schools may act on the parents’ behalf in an educational context, when personal data is collected from students for the use and benefit of the school. Further, once this consent is obtained, how is it to be verified? The GDPR merely requires that the controller make “reasonable efforts” to verify such consent. This means that in situations where consent was not verifiable, the controller could still rely on the unverified consent so long as it could prove that “reasonable” efforts were made to verify it. Fortunately, the EDPB guidelines on consent fill this gap in Article 8 by recommending two types of verification mechanisms, for low-risk and high-risk categories respectively. In the low-risk category, verification of parental consent via email was held to be sufficient. In the high-risk category, it was recommended that further proof of consent be acquired. Trusted third-party verification services were also recommended, to minimise the amount of personal data the controller has to process itself.

Conclusion

The examination of the GDPR provisions clearly shows that numerous issues have arisen in the course of its implementation. These issues have been resolved on a case-by-case basis by courts and other authorities. However, these solutions are remedial rather than preventative. One preventative approach is the implementation of principles like data protection by design and by default, as specified in Article 25 of the GDPR. Data protection by design ensures that privacy and data protection issues are considered at the design phase of any system, service or product and then addressed throughout its lifecycle. Data protection by default limits the type of data collected: it requires controllers to collect and process only such data as is necessary to achieve their specific purpose.

Data protection by design is a principle that is already enshrined in Clause 22 of the PDPB, which provides that every data fiduciary shall submit a privacy by design policy to the proposed Data Protection Authority (DPA) for approval and certification. The manner in which this is to be implemented and the standards of protection required for certification would be subject to future regulations. However, by requiring data fiduciaries engaged in the collection and processing of children’s data to adhere to a higher standard of data protection, the DPA could probably ensure the protection of children’s data regardless of any pitfalls in the practical implementation of Clause 16.

The above measure might not effectively solve the issues specified with the implementation of Clause 16. Notwithstanding these drawbacks, the provisions of this Bill might be the very first step in bringing India’s data protection thresholds at par with the rest of the world.


The Future of Democracy in the Shadow of Big and Emerging Tech: CCG Essay Series

By Shrutanjaya Bhardwaj and Sangh Rakshita

In the past few years, the interplay between technology and democracy has reached a critical juncture. Untrammelled optimism about technology has been shadowed by rising concerns over the survival of a meaningful democratic society. With the expanding reach of technology platforms, concerns have grown in democratic societies around the world about the impact of such platforms on democracy and human rights. In this context, there has been increasing focus on policy issues like the need for an antitrust framework for digital platforms, platform regulation and free speech, the challenges of fake news, the impact of misinformation on elections, the invasion of citizens’ privacy through the deployment of emerging tech, and cybersecurity. This has intensified the quest for optimal policy solutions. We, at the Centre for Communication Governance at National Law University Delhi (CCG), believe that a detailed academic exploration of the relationship between democracy, and big and emerging tech will aid our understanding of the current problems, help contextualise them and highlight potential policy and regulatory responses.

Thus, we bring to you this series of essays—written by experts in the domain—in an attempt to collate contemporary scholarly thought on some of the issues that arise from the interaction of democracy, and big and emerging tech. The essay series is publicly available on the CCG website, and we have also announced its release on Twitter.

Our first essay addresses the basic but critical question: What is ‘Big Tech’? Urvashi Aneja & Angelina Chamuah present a conceptual understanding of the phrase. While ‘Big Tech’ refers to a set of companies, it is certainly not a fixed set; companies become part of this set by exhibiting four traits or “conceptual markers” and—as a corollary—would stop being identified in this category if they were to lose any of the four markers. The first marker is that the company runs a data-centric model and has massive access to consumer data which can be leveraged or exploited. The second marker is that ‘Big Tech’ companies have a vast user base and are “multi-sided platforms that demonstrate strong network effects”. The third and fourth markers are the infrastructural and civic roles of these companies respectively, i.e., they not only control critical societal infrastructure (which is often acquired through lobbying efforts and strategic mergers and acquisitions) but also operate “consumer-facing platforms” which enable them to generate consumer dependence and gain huge power over the flow of information among citizens. It is these four markers that collectively define ‘Big Tech’. [U. Aneja and A. Chamuah, What is Big Tech? Four Conceptual Markers]

Since the power held by Big Tech is not only immense but also self-reinforcing, it endangers market competition, often by hindering other players from entering the market. Should competition law respond to this threat? If yes, how? Alok P. Kumar & Manjushree R.M. explore the purpose behind competition law and find that competition law is concerned not only with consumer protection but also—as evident from a conjoint reading of Articles 14 & 39 of the Indian Constitution—with preventing the concentration of wealth and material resources in a few hands. Seen in this light, the law must strive to protect “the competitive process”. But the present legal framework is too obsolete to achieve that aim. The current understanding of concepts such as ‘relevant market’, ‘hypothetical monopolist’ and ‘abuse of dominance’ is hard to apply to Big Tech companies, which operate more on data than on money. The solution, it is proposed, lies in ex ante regulation of Big Tech, possibly through a code of conduct created after extensive stakeholder consultations, rather than a system of only subsequent sanctions. [A.P. Kumar and Manjushree R.M., Data, Democracy and Dominance: Exploring a New Antitrust Framework for Digital Platforms]

Market dominance and data control give an even greater power to Big Tech companies, i.e., control over the flow of information among citizens. Given the vital link between democracy and the flow of information, many have called for increased control over social media with a view to checking misinformation. Rahul Narayan explores what these demands might mean for free speech theory. Could it be (as some suggest) that these demands are “a sign that the erstwhile uncritical liberal devotion to free speech was just hypocrisy”? Traditional free speech theory, Narayan argues, is inadequate to deal with the misinformation problem for two reasons. First, it is premised on protecting individual liberty from authoritarian actions by governments, “not to control a situation where baseless gossip and slander impact the very basis of society.” Second, the core assumption behind traditional theory—i.e., the possibility of an organic marketplace of ideas where falsehood can be exposed by true speech—breaks down in the context of modern-era misinformation campaigns. Therefore, some regulation is essential to ensure the prevalence of truth. [R. Narayan, Fake News, Free Speech and Democracy]

Jhalak M. Kakkar and Arpitha Desai examine the context of election misinformation and consider possible misinformation regulatory regimes. Appraising the ideas of self-regulation and state-imposed prohibitions, they suggest that the best way forward for democracy is to strike a balance between the two. This can be achieved if the State focuses on regulating algorithmic transparency rather than the content of the speech—social media companies must be asked to demonstrate that their algorithms do not facilitate amplification of propaganda, to move from behavioural advertising to contextual advertising, and to maintain transparency with respect to funding of political advertising on their platforms. [J.M. Kakkar and A. Desai, Voting out Election Misinformation in India: How should we regulate Big Tech?]

Much like fake news challenges the fundamentals of free speech theory, it also challenges the traditional concepts of international humanitarian law. While disinformation fuels aggression by state and non-state actors in myriad ways, it is often hard to establish liability. Shreya Bose formulates the problem as one of causation: “How could we measure the effect of psychological warfare or disinformation campaigns…?” E.g., the cause-effect relationship is critical in tackling the recruitment of youth by terrorist outfits and the ultimate execution of acts of terror. It is important also in determining liability of state actors that commit acts of aggression against other sovereign states, in exercise of what they perceive—based on received misinformation about an incoming attack—as self-defence. The author helps us make sense of this tricky terrain and argues that Big Tech could play an important role in countering propaganda warfare, just as it does in promoting it. [S. Bose, Disinformation Campaigns in the Age of Hybrid Warfare]

The last two pieces focus attention on real-life, concrete applications of technology by the state. Vrinda Bhandari highlights the use of facial recognition technology (‘FRT’) in law enforcement as another area where the state deploys Big Tech in the name of ‘efficiency’. Current deployment of FRT is constitutionally problematic. There is no legal framework governing the use of FRT in law enforcement. Profiling of citizens as ‘habitual protestors’ has no rational nexus to the aim of crime prevention; rather, it chills the exercise of free speech and assembly rights. Further, FRT deployment is wholly disproportionate, not only because of the well-documented inaccuracy and bias-related problems in the technology, but also because—more fundamentally—“[t]reating all citizens as potential criminals is disproportionate and arbitrary” and “creates a risk of stigmatisation”. The risk of mass real-time surveillance adds to the problem. In light of these concerns, the author suggests a complete moratorium on the use of FRT for the time being. [V. Bhandari, Facial Recognition: Why We Should Worry the Use of Big Tech for Law Enforcement]

In the last essay of the series, Malavika Prasad presents a case study of the Pune Smart Sanitation Project, a first-of-its-kind urban sanitation programme which pursues the Smart City Mission (‘SCM’). According to the author, the structure of city governance (through Municipalities) that existed even prior to the advent of the SCM violated the constitutional principle of self-governance. This flaw was only aggravated by the SCM which effectively handed over key aspects of city governance to state corporations. The Pune Project is but a manifestation of the undemocratic nature of this governance structure—it assumes without any justification that ‘efficiency’ and ‘optimisation’ are neutral objectives that ought to be pursued. Prasad finds that in the hunt for efficiency, the design of the Pune Project provides only for collection of data pertaining to users/consumers, hence excluding the marginalised who may not get access to the system in the first place owing to existing barriers. “Efficiency is hardly a neutral objective,” says Prasad, and the state’s emphasis on efficiency over inclusion and participation reflects a problematic political choice. [M. Prasad, The IoT-loaded Smart City and its Democratic Discontents]

We hope that readers will find the essays insightful. As ever, we welcome feedback.

This series is supported by the Friedrich Naumann Foundation for Freedom (FNF) and has been published by the National Law University Delhi Press. We are thankful for their support. 

CJEU sets limits on Mass Communications Surveillance – A Win for Privacy in the EU and Possibly Across the World

This post has been authored by Swati Punia

On 6 October 2020, the European Court of Justice (ECJ/Court) delivered its much-anticipated judgments in the matter of C-623/17, Privacy International from the UK, and the joined cases from France, C-511/18, La Quadrature du Net and others and C-512/18, French Data Network and others, and from Belgium, C-520/18, Ordre des barreaux francophones et germanophone and others (collectively, the “Bulk Communications Surveillance Judgments”).

In this post, I briefly discuss the Bulk Communication Surveillance Judgments, their significance for other countries and for India. 

Through these cases, the Court held that Member States’ bulk surveillance regimes disproportionately interfered with the rights of their citizens guaranteed by EU law, in particular the Directive on privacy and electronic communications (e-Privacy Directive) and the European Union’s Charter of Fundamental Rights (EU Charter). The Court assessed the Member States’ bulk communications surveillance laws and practices relating to their access and use of telecommunications data.

The Court recognised the importance of the State’s positive obligations towards conducting surveillance, although it noted that it was essential for surveillance systems to conform with the general principles of EU law and the rights guaranteed under the EU Charter. It laid down clear principles and measures as to when and how the national authorities could access and use telecommunications data (further discussed in the sections ‘The UK Judgment’ and ‘The French and Belgian Judgment’). It carved a few exceptions as well (in the joined cases of France and Belgium) for emergency situations, but held that such measures would have to pass the threshold of being serious and genuine (further discussed in the section ‘The French and Belgian Judgment’). 

The Cases in Brief 

The Court delivered two separate judgments, one in the UK case and one in the joined cases of France and Belgium. Since these cases had similar sets of issues, the proceedings were adjoined. The UK application challenged the bulk acquisition and use of telecommunications data by its Security and Intelligence Agencies (SIAs) in the interest of national security (as per the UK’s Telecommunication Act of 1984). The French and Belgian applications challenged the indiscriminate data retention and access by SIAs for combating crime. 

The French and Belgian applications questioned the legality of their respective data retention laws (numerous domestic surveillance laws which permitted bulk collection of telecommunications data) that imposed blanket obligations on Electronic Communications Service Providers (ECSPs) to provide relevant data. The Belgian law required ECSPs to retain various kinds of traffic and location data for a period of 12 months, whereas the French law provided for automated analysis and real-time data collection measures for preventing terrorism. The French application also raised the issue of notifying the person under surveillance.

The Member States contended that such surveillance measures enabled them to, inter alia, safeguard national security, prevent terrorism, and combat serious crime. Hence, they claimed that the e-Privacy Directive did not apply to their surveillance laws and activities.

The UK Judgment

The ECJ found the UK surveillance regime unlawful and inconsistent with EU law, and specifically the e-Privacy Directive. The Court analysed the scope and scheme of the e-Privacy Directive with regard to the exclusion of certain State purposes such as national and public security, defence, and criminal investigation. Noting the importance of such State purposes, it held that EU Member States could adopt legislative measures that restricted the scope of the rights and obligations (Articles 5, 6 and 9) provided in the e-Privacy Directive. However, this was allowed only if the Member States complied with the requirements laid down by the Court in Tele2 Sverige and Watson and Others (C-203/15 and C-698/15) (Tele2) and the e-Privacy Directive. In addition, the Court held that the EU Charter must also be respected. In Tele2, the ECJ held that legislative measures obligating ECSPs to retain data must be targeted and limited to what was strictly necessary. Such targeted retention had to be with regard to specific categories of persons and data, and for a limited time period. Also, access to the data must be subject to prior review by an independent body.

The e-Privacy Directive ensures the confidentiality of electronic communications and the data relating to them (Article 5(1)). It allows ECSPs to retain metadata (context-specific data relating to the users and subscribers, location and traffic) for various purposes such as billing, value-added services and security. However, this data must be deleted or made anonymous once the purpose is fulfilled, unless a law allows for a derogation for State purposes. The e-Privacy Directive allows the Member States to derogate (Article 15(1)) from the principle of confidentiality and the corresponding obligations (contained in Article 6 (traffic data) and Article 9 (location data other than traffic data)) for certain State purposes when it is appropriate, necessary and proportionate.

The Court clarified that measures undertaken for the purpose of national security do not render EU law inapplicable or exempt the Member States from their obligation to ensure the confidentiality of communications under the e-Privacy Directive. Hence, surveillance activities such as data retention for indefinite periods, or further processing or sharing, must be authorised through an independent review. The Court noted that the domestic law at present did not provide for such prior review as a limit on these surveillance activities.

The French and Belgian Judgment

While assessing the joined cases, the Court arrived at a determination in similar terms as the UK case. It reiterated that the exception (Article 15(1) of the e-Privacy Directive) to the principle of confidentiality of communications (Article 5(1) of the e-Privacy Directive) should not become the norm. Hence, national measures that provided for general and indiscriminate data retention and access for State purposes were held to be incompatible with EU law, specifically the e-Privacy Directive.

The Court in the joined cases, unlike in the UK case, allowed specific derogations for State purposes such as safeguarding national security, combating serious crime and preventing serious threats. It laid down certain requirements that the Member States must comply with when relying on such derogations. The derogations should (1) be clear and precise as to their stated objective, (2) be limited to what is strictly necessary and to a limited time period, (3) be accompanied by a safeguards framework, including substantive and procedural conditions, to regulate such instances, and (4) include guarantees to protect the concerned individuals against abuse. They should also be subject to an ‘effective review’ by a court or an independent body, and must comply with the general rules and proportionality principles of EU law and the rights provided in the EU Charter.

The Court held that in establishing a minimum threshold for a safeguards framework, the EU Charter must be interpreted along with the European Convention on Human Rights (ECHR). This would ensure consistency between the rights guaranteed under the EU Charter and the corresponding rights guaranteed in the ECHR (as per Article 52(3) of the EU Charter).

The Court, in particular, allowed general and indiscriminate data retention in cases of a serious threat to national security, provided the threat is genuine, and present or foreseeable. Real-time data collection and automated analysis were allowed in such circumstances, but real-time collection should be limited to persons suspected of terrorist activities, restricted to what is strictly necessary, and subject to prior review. The Court even allowed general and indiscriminate retention of IP addresses for the purposes of national security, combating serious crime and preventing serious threats to public security, provided such retention is limited in time to what is strictly necessary. For such purposes, the Court also permitted ECSPs to retain data relating to the identity particulars of their customers (such as name, postal and email/account addresses and payment details) in a general and indiscriminate manner, without specifying any time limitation.

The Court allowed targeted data retention for the purpose of safeguarding national security and preventing crime, provided that it was for a limited time period and strictly necessary and was done on the basis of objective and non-discriminatory factors. It was held that such retention should be specific to certain categories of persons or geographical areas. The Court also allowed, subject to effective judicial review, expedited data retention after the initial retention period ended, to shed light on serious criminal offences or acts affecting national security. Lastly, in the context of criminal proceedings, the Court held that it was for the Member States to assess the admissibility of evidence resulting from general and indiscriminate data retention. However, the information and evidence must be excluded where it infringes on the right to a fair trial. 

Significance of the Bulk Communication Surveillance Judgments

With these cases, the ECJ decisively resolved a long-standing discord between the Member States and privacy activists in the EU. For a while now, the Court has been dealing with questions relating to surveillance programs for national security and law enforcement purposes. Though the Member States have largely considered these programs outside the ambit of EU privacy law, the Court has been expanding the scope of privacy rights. 

Placing limitations and controls on State powers in democratic societies was considered necessary by the Court in its ruling in Privacy International. This decision may act as a trigger for considering surveillance reforms in many parts of the world, and more specifically for those aspiring to attain an EU adequacy status. India could benefit immensely should it choose to pay heed. 

To date, India does not have a comprehensive surveillance framework. Various provisions of the Personal Data Protection Bill, 2019 (Bill), the Information Technology Act, 2000, the Telegraph Act, 1885, and the Code of Criminal Procedure, 1973 provide for targeted surveillance measures. The Bill gives wide powers to the executive (under Clauses 35, 36 and 91 of the Bill) to access personal and non-personal data in the absence of proper and necessary safeguards. This may cause problems for achieving EU adequacy status under Article 45 of the EU General Data Protection Regulation (GDPR), which assesses the personal data management rules of third countries.

Recent news reports suggest that the Bill, which is under legislative consideration, is likely to undergo a significant overhaul. India could use this as an opportunity to introduce meaningful changes in the Bill as well as its surveillance regime. India’s privacy framework could be strengthened by adhering to the principles outlined in the Justice K.S. Puttaswamy v. Union of India judgment and the Bulk Communications Surveillance Judgments.

Building an AI Governance Framework for India, Part III

Embedding Principles of Privacy, Transparency and Accountability

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Aayog Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation that was held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG’s comments and analysis on the Working Document can be accessed here.

In our first post in the series, ‘Building an AI governance framework for India’, we discussed the legal and regulatory implications of the Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its constitutional framework, and (2) based on clearly articulated overarching ‘Principles for Responsible AI’. Part II of the series discussed specific Principles for Responsible AI – Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. We explored the constituent elements of these principles and the avenues for incorporating them into the Indian regulatory framework. 

In this final post of the series, we will discuss the remaining principles of Privacy, Transparency and Accountability. 

Principle of Privacy 

Given the diversity of AI systems, the privacy risks which they pose to individuals and to society as a whole are also varied. These may be broadly related to:

(i) Data protection and privacy: This relates to privacy implications of the use of data by AI systems and subsequent data protection considerations which arise from this use. There are two broad aspects to think about in terms of the privacy implications from the use of data by AI systems. Firstly, AI systems must be tailored to the legal frameworks for data protection. Secondly, given that AI systems can be used to re-identify anonymised data, the mere anonymisation of data for the training of AI systems may not provide adequate levels of protection for the privacy of an individual.

a) Data protection legal frameworks: Machine learning and AI technologies have existed for decades; however, it is the explosion in the availability of data that accounts for the advancement of AI technologies in recent years. Machine learning and AI systems depend upon data for their training. Generally, the more data the system is given, the more it learns and ultimately the more accurate it becomes. The application of existing data protection frameworks to the use of data by AI systems may therefore raise challenges.

In the Indian context, the Personal Data Protection Bill, 2019 (PDP Bill), currently being considered by Parliament, contains some provisions that may apply to some aspects of the use of data by AI systems. One such provision is Clause 22 of the PDP Bill, which requires data fiduciaries to incorporate the seven ‘privacy by design’ principles and embed privacy and security into the design and operation of their product and/or network. However, given that AI systems rely significantly on anonymised personal data, their use of data may not fall squarely within the regulatory domain of the PDP Bill. The PDP Bill does not apply to the regulation of anonymised data at large but the Data Protection Authority has the power to specify a code of practice for methods of de-identification and anonymisation, which will necessarily impact AI technologies’ use of data.

b) Use of AI to re-identify anonymised data: AI applications can be used to re-identify anonymised personal data. To safeguard the privacy of individuals, datasets composed of personal data are often anonymised through a de-identification and sampling process before they are shared for the purposes of training AI systems. However, current technology makes it possible for AI systems to reverse this process of anonymisation and re-identify people, with significant privacy implications for an individual’s personal data. A simplified sketch of how such re-identification can work follows.
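To illustrate the underlying risk, the sketch below shows a minimal linkage attack, in which records stripped of names are re-identified by joining them with a public dataset on shared quasi-identifiers. It is a toy example: the datasets, column names and values are entirely hypothetical, and real re-identification attacks typically combine far richer auxiliary data and machine learning techniques.

```python
# A minimal, illustrative sketch of a linkage attack on "anonymised" data.
# All data and column names here are hypothetical.
import pandas as pd

# An "anonymised" dataset: direct identifiers removed, quasi-identifiers retained.
anonymised = pd.DataFrame({
    "zip_code":   ["110001", "110001", "560034"],
    "birth_year": [1991, 1984, 1991],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# A separate public dataset (e.g. a voter roll) that still carries names.
public = pd.DataFrame({
    "name":       ["A. Sharma", "B. Rao", "C. Iyer"],
    "zip_code":   ["110001", "110001", "560034"],
    "birth_year": [1991, 1984, 1991],
    "gender":     ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches identities to the
# supposedly anonymous records.
reidentified = anonymised.merge(public, on=["zip_code", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```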

(ii) Impact on society: The impact of the use of AI systems on society essentially relates to broader privacy considerations that arise at a societal level due to the deployment and use of AI, including mass surveillance, psychological profiling, and the use of data to manipulate public opinion. The use of AI in facial recognition surveillance technology is one such AI system that has significant privacy implications for society as a whole. Such AI technology enables individuals to be easily tracked and identified and has the potential to significantly transform expectations of privacy and anonymity in public spaces. 

Due to the varying nature of privacy risks and implications caused by AI systems, we will have to design various regulatory mechanisms to address these concerns. It is important to put in place a reporting and investigation mechanism that collects and analyses information on privacy impacts caused by the deployment of AI systems, and privacy incidents that occur in different contexts. The collection of this data would allow actors across the globe to identify common threads of failure and mitigate against potential privacy failures arising from the deployment of AI systems. 

To this end, we can draw on a mechanism that is currently in place for reporting and investigating aircraft incidents, as detailed in Annex 13 to the Convention on International Civil Aviation (Chicago Convention). It lays down the procedure for investigating aviation incidents and a reporting mechanism for sharing information between countries. The aim of such an investigation is not to apportion blame or liability, but rather to study the cause of the accident extensively and prevent future incidents.

A similar incident investigation mechanism may be employed for AI incidents involving privacy breaches. With many countries now widely developing and deploying AI systems, such a model of incident investigation would ensure that countries can learn from each other’s experiences and deploy more privacy-secure AI systems.

Principle of Transparency

The concept of transparency is a recognised prerequisite for the realisation of ‘trustworthy AI’. The goal of transparency in ethical AI is to make sure that the functioning of the AI system and resultant outcomes are non-discriminatory, fair, and bias mitigating, and that the AI system inspires public confidence in the delivery of safe and reliable AI innovation and development. Additionally, transparency is also important in ensuring better adoption of AI technology—the more users feel that they understand the overall AI system, the more inclined and better equipped they are to use it.

The level of transparency must be tailored to its intended audience. Information about the working of an AI system should be contextualised to the various stakeholder groups interacting with and using the AI system. The Institute of Electrical and Electronics Engineers, a global professional organisation of electronic and electrical engineers, has suggested that different stakeholder groups may require varying levels of transparency. This means that groups such as users, incident investigators, and the general public would require different standards of transparency depending upon the nature of the information relevant to their use of the AI system.

Presently, many AI algorithms are black boxes: automated decisions are taken based on machine learning over training datasets, and the decision-making process is not explainable. When such AI systems produce a decision, human end users do not know how the system arrived at its conclusions. This raises two major transparency problems: the public perception and understanding of how AI works, and how much developers themselves understand about their own AI system’s decision-making process. In many cases, developers may not know, or be able to explain, how an AI system reaches its conclusions or how it has arrived at certain solutions.

This results in a lack of transparency. Some organisations have suggested opening up AI algorithms for scrutiny and ending reliance on opaque algorithms. On the other hand, the NITI Working Document is of the view that disclosing the algorithm is not the solution and instead, the focus should be on explaining how the decisions are taken by AI systems. Given the challenges around explainability discussed above, it will be important for NITI Aayog to discuss how such an approach will be operationalised in practice.

While many countries and organisations are researching different techniques which may be useful in increasing the transparency of an AI system, one of the common suggestions which has gained traction in the last few years is the introduction of labelling mechanisms for AI systems. An example of this is Google’s proposal to use ‘Model Cards’, which are intended to clarify the scope of an AI system’s deployment and minimise its usage in contexts for which it may not be well suited.

Model cards are short documents which accompany a trained machine learning model. They enumerate the benchmarked evaluation of the working of an AI system in a variety of conditions, across different cultural, demographic, and intersectional groups which may be relevant to the intended application of the AI system. They also contain clear information on an AI system’s capabilities including the intended purpose for which it is being deployed, conditions under which it has been designed to function, expected accuracy and limitations. Adopting model cards and other similar labelling requirements in the Indian context may be a useful step towards introducing transparency into AI systems. 
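As a rough illustration, the sketch below shows the kind of information a model card might record, expressed as a simple Python structure. The field names, model name and numbers are hypothetical and do not follow any particular published schema; they simply mirror the elements described above (intended purpose, conditions of use, performance across groups, and limitations).

```python
# An illustrative sketch of the information a model card might carry.
# All field names and values are hypothetical, not a prescribed schema.
model_card = {
    "model_details": {
        "name": "loan-approval-classifier",
        "version": "1.2.0",
        "owners": ["hypothetical-ml-team@example.org"],
    },
    "intended_use": {
        "primary_uses": ["Pre-screening of consumer loan applications"],
        "out_of_scope_uses": ["Employment decisions", "Law enforcement"],
    },
    "evaluation": {
        # Benchmarked performance across demographic and intersectional groups.
        "accuracy_by_group": {
            "overall": 0.91,
            "age_18_25": 0.87,
            "age_60_plus": 0.84,
        },
    },
    "limitations": [
        "Trained only on data from urban applicants",
        "Performance degrades on incomplete credit histories",
    ],
}

# Such a card would accompany the trained model and be published alongside it.
print(model_card["intended_use"]["out_of_scope_uses"])
```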

Principle of Accountability

The Principle of Accountability aims to recognise the responsibility of the different organisations and individuals that develop, deploy and use AI systems. Accountability is about responsibility, answerability and trust. There is no single standard form of accountability; rather, it is dependent upon the context of the AI and the circumstances of its deployment.

Holding individuals and entities accountable for harm caused by AI systems poses significant challenges, as AI systems generally involve multiple parties at various stages of the development process. The regulation of the adverse impacts caused by AI systems often goes beyond the existing regimes of tort law, privacy law or consumer protection law. Some degree of accountability can be achieved by enabling greater human oversight. In order to foster trust in AI and appropriately determine the party who is accountable, it is necessary to build a set of shared principles that clarify the responsibilities of each stakeholder involved in the research, development and implementation of an AI system, ranging from developers to service providers and end users.

Accountability has to be ensured at the following stages of an AI system: 

(i) Pre-deployment: It would be useful to implement an audit process before the AI system is deployed. A potential mechanism for implementing this could be a multi-stage audit process undertaken post design, but before the deployment of the AI system by the developer. This would involve scoping, mapping and testing a potential AI system before it is released to the public. It can include risk mitigation strategies for changing development environments and documentation of the policies, processes and technologies used in the AI system.

Depending on the nature of the AI system and the potential for risk, regulatory guidelines can be developed prescribing the involvement of various categories of auditors, such as internal auditors, expert third parties, and representatives of the relevant regulatory agency, at various stages of the audit. Such pre-deployment audits are aimed at closing the accountability gap which currently exists.

(ii) During deployment: Once the AI system has been deployed, it is important to keep auditing it in order to track the changes and evolution that occur in the system in the course of its deployment. AI systems constantly learn from data and evolve to become better and more accurate. It is important that the development team continuously monitors the system to capture any errors that may arise, including inconsistencies arising from input data or design features, and addresses them promptly.

(iii) Post-deployment: Ensuring accountability post-deployment in an AI system can be challenging. The NITI Working Document also recognised that assigning accountability for specific decisions becomes difficult in a scenario with multiple players in the development and deployment of an AI system. In the absence of any consequences for decisions harming others, no one party would feel obligated to take responsibility or take actions to mitigate the effect of the AI systems. Additionally, the lack of accountability also leads to difficulties in grievance redressal mechanisms which can be used to address scenarios where harm has arisen from the use of AI systems. 

The Council of Europe, in its guidelines on the human rights impacts of algorithmic systems, highlighted the need for effective remedies to ensure responsibility and accountability for the protection of human rights in the context of the deployment of AI systems. A potential model for grievance redressal is the redressal mechanism suggested in the AI4People’s Ethical Framework for a Good Society report by the Atomium – European Institute for Science, Media and Democracy. The report suggests that any grievance redressal mechanism for AI systems would have to be widely accessible and include redress for harms inflicted, costs incurred, and other grievances caused by the AI system. It must demarcate a clear system of accountability for both organisations and individuals. Of the various redressal mechanisms they have suggested, two significant mechanisms are: 

(a) AI ombudsperson: This would ensure the auditing of allegedly unfair or inequitable uses of AI reported by users or the public at large, through an accessible judicial process.

(b) Guided process for registering a complaint: This envisions laying down a simple process, similar to filing a Right to Information request, which can be used to bring discrepancies, or faults in an AI system to the notice of the authorities.

Such mechanisms can be evolved to address the human rights concerns and harms arising from the use of AI systems in India. 

Conclusion

In early October 2020, the Government of India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India’s vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. At the RAISE Summit, speakers underlined the need for adopting AI ethics and a human-centred approach to the deployment of AI systems. However, this conversation is still at a nascent stage and several rounds of consultations may be required to build these principles into an Indian AI governance and regulatory framework.

As India enters into the next stage of developing and deploying AI systems, it is important to have multi-stakeholder consultations to discuss mechanisms for the adoption of principles for Responsible AI. This will enable the framing of an effective governance framework for AI in India that is firmly grounded in India’s constitutional framework. While the NITI Aayog Working Document has introduced the concept of ‘Responsible AI’ and the ethics around which AI systems may be designed, it lacks substantive discussion on these principles. Hence, in our analysis, we have explored global views and practices around these principles and suggested mechanisms appropriate for adoption in India’s governance framework for AI. Our detailed analysis of these principles can be accessed in our comments to the NITI Aayog’s Working Document Towards Responsible AI for All.

Experimenting With New Models of Data Governance – Data Trusts

This post has been authored by Shashank Mohan

India is in the midst of establishing a robust data governance framework, which will impact the rights and liabilities of all key stakeholders – the government, private entities, and citizens at large. As a parliamentary committee debates its first personal data protection legislation (‘PDPB 2019’), proposals for the regulation of non-personal data and a data empowerment and protection architecture are already underway. 

As data processing capabilities continue to evolve at a feverish pace, basic data protection regulations like the PDPB 2019 might not be sufficient to address new challenges. For example, big data analytics renders traditional notions of consent meaningless as users have no knowledge of how such algorithms behave and what determinations are made about them by such technology. 

Creative data governance models, aimed at reversing the power dynamics in the larger data economy, are the need of the hour. Recognising these challenges, policymakers are driving the conversation on data governance in the right direction. However, they might be missing out on crucial experiments being run in other parts of the world.

As users of digital products and services increasingly lose control over data flows, various new models of data governance are being recommended, for example data trusts, data cooperatives, and data commons. Of these, one of the most promising is the data trust.

(For the purposes of this blog post, I’ll be using the phrase data processors as an umbrella term to cover data fiduciaries/controllers and data processors in the legal sense. The word users is meant to include all data principals/subjects.)

What are data trusts?

Though there are various definitions of data trusts, one which is helpful in understanding the concept is – ‘data trusts are intermediaries that aggregate user interests and represent them more effectively vis-à-vis data processors.’ 

To solve the information asymmetries and power imbalances between users and data processors, data trusts will act as facilitators of data flows between the two parties, but on the users’ terms. Data trusts will owe a fiduciary duty to their members and act in their best interests. They will have the requisite legal and technical knowledge to act on behalf of users. Instead of users making potentially ill-informed decisions over data processing, data trusts will make such decisions on their behalf, based on pre-decided factors like a bar on third-party sharing, and in their best interests. For example, data trusts could be to users what mutual fund managers are to potential investors in capital markets.

Currently, in a typical transaction in the data economy, users who wish to use a particular digital service have neither the knowledge to understand the possible privacy risks nor the negotiating power to change the terms. Data trusts, with a fiduciary responsibility towards users, specialised knowledge, and multiple members, might be successful in tilting the power dynamics back in favour of users. Data trusts might be relevant from the perspective of both the protection and the controlled sharing of personal as well as non-personal data.

(MeitY’s Non-Personal Data Governance Framework introduces the concept of data trustees and data trusts in India’s larger data governance and regulatory framework. But, this applies only to the governance of ‘non-personal data’ and not personal data, as being recommended here. CCG’s comments on MeitY’s Non-Personal Data Governance Framework, can be accessed – here)

Challenges with data trusts

Though creative solutions like data trusts seem promising in theory, they must be thoroughly tested and experimented with before wide-scale implementation. Firstly, such a new form of trust, where the subject matter of the trust is data, is not envisaged by Indian law (see section 8 of the Indian Trusts Act, 1882, which provides only for property to be the subject matter of a trust). Current and even proposed regulatory structures do not account for the regulation of institutions like data trusts (the non-personal data governance framework proposes data trusts, but only as data sharing institutions and not as data managers or data stewards, as suggested here). Thus, data trusts will need to be codified into Indian law to be an operative model.

Secondly, data processors might not embrace the notion of data trusts, as it may result in a loss of market power. Larger tech companies, which have existing stores of data on numerous users, may not be sufficiently incentivised to engage with data trust models. Structures will need to be built in a way that incentivises data processors to participate in such novel data governance models.

Thirdly, the business or operational models of data trusts will need to be aligned with the interests of their members, i.e. users. Data trusts will require money to operate, and for-profit entities may not have the best interests of users in mind. Subscription-based models, whether for profit or not, might fail as users are habituated to free services. Donation-based models might need to be monitored closely for added transparency and accountability.

Lastly, other issues, like the creation of technical specifications for data sharing and security, the contours of consent, and whether data trusts will help in data sharing with the government, will need to be accounted for. 

Privacy centric data governance models

At this early stage of developing data governance frameworks suited to Indian needs, policymakers are at a crucial juncture of experimenting with different models. These models must be centred around the protection and preservation of privacy rights of Indians, both from private and public entities. Privacy must also be read in its expansive definition as provided by the Supreme Court in Justice K.S. Puttaswamy vs. Union of India. The autonomy, choice, and control over informational privacy are crucial to the Supreme Court’s interpretation of privacy. 

(CCG's privacy law database, which tracks privacy jurisprudence globally and currently contains information from India and Europe, can be accessed – here.)

The Proliferating Eyes of Argus: State Use of Facial Recognition Technology


This post has been authored by Sangh Rakshita

In Greek mythology, Argus Panoptes was a many-eyed, all-seeing, and ever-awake giant, a reference that has since been used to evoke imagery of excessive scrutiny and surveillance. Jeremy Bentham drew on this reference when he designed the panopticon prison, where prisoners would be monitored without their knowledge. Later, Michel Foucault used the panopticon to elaborate the social theory of panopticism, in which the watcher ceases to be external to the watched, resulting in internal surveillance or a 'chilling' effect. This idea of "panopticism" has gained renewed relevance in the age of digital surveillance.

Amongst the many cutting-edge surveillance technologies being adopted globally, 'Facial Recognition Technology' (FRT) is one of the most rapidly deployed. 'Live Facial Recognition Technology' (LFRT) or 'Real-time Facial Recognition Technology', an extension of FRT, has become markedly more effective in the past few years. Improvements in computational power and algorithms have enabled cameras placed at odd angles to detect faces even in motion. This post explores the issues with the increasing State use of FRT around the world and the legal framework surrounding it.

What do FRT and LFRT mean?

FRT refers to the use of algorithms to uniquely detect, recognise, or verify a person from recorded images, sketches, or videos that contain their face. The data about a particular face is generally known as a face template. This template is a mathematical representation of a person's face, created by algorithms that mark and map distinct features on the captured image, such as the location of the eyes or the length of the nose. These face templates make up the biometric database against which new images, sketches, videos, etc. are compared to verify or recognise the identity of a person. As opposed to FRT, which is applied to pre-recorded images and videos, LFRT involves real-time automated facial recognition of all individuals in the camera's field of vision. It involves biometric processing of images of all passers-by, using an existing database of images as a reference.
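
To make the template-and-match description above concrete, the following is a minimal, purely illustrative sketch of how such a comparison might work, assuming a face template is a numeric vector (an embedding) and that identification is a similarity search against an enrolled database. The 128-dimension templates, the identify helper, and the 0.6 threshold are all hypothetical stand-ins, not any vendor's actual pipeline.

```python
# Illustrative sketch only: comparing a probe face template against a database
# of enrolled templates. Real FRT systems derive templates from proprietary
# models; here random vectors stand in for them, and the threshold is made up.
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face templates (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, database, threshold=0.6):
    """Return (best_matching_id, score); the id is None if no score beats the threshold."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id, best_score

rng = np.random.default_rng(0)
database = {"person_A": rng.normal(size=128), "person_B": rng.normal(size=128)}
probe = database["person_A"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(identify(probe, database))  # expected to match "person_A"
```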

The accuracy of FRT algorithms is significantly impacted by factors like distance and angle from which the image was captured or poor lighting conditions. These problems are worsened in LFRT as the images are not captured in a controlled setting, with the subjects in motion, rarely looking at the camera, and often positioned at odd angles from it. 

Despite claims of its effectiveness, there has been growing scepticism about the use of FRT. Its use has been linked with the misidentification of people of colour, ethnic minorities, women, and trans people. The prevalent use of FRT may affect not only the privacy rights of such communities, but those of everyone who is surveilled at large.

The Prevalence of FRT 

While FRT has become ubiquitous, LFRT is still in the process of being adopted in countries like the UK, USA, India, and Singapore. The COVID-19 pandemic has further accelerated the adoption of FRT as a way to track the virus' spread and to build contactless biometric-based identification systems. For example, in Moscow, city officials were using a system of tens of thousands of cameras equipped with FRT to check compliance with social distancing measures, the use of face masks, and quarantine rules to contain the spread of COVID-19. 

FRT is also being steadily deployed for mass surveillance, often in violation of universally accepted human rights principles such as necessity and proportionality. These worries have come to the forefront recently with State use of FRT to identify people participating in protests. For example, FRT was used by law enforcement agencies to identify prospective law-breakers during the protests in Hong Kong, the protests concerning the Citizenship Amendment Act, 2019 in New Delhi, and the Black Lives Matter protests across the USA.

Civil society and digital rights groups have made vociferous demands for a global moratorium on the pervasive use of FRT that enables mass surveillance, and cities such as Boston and Portland have banned its deployment. However, it remains to be seen how effective these measures are in halting the use of FRT. Even the temporary refusal by Big Tech companies to sell FRT to police forces in the US does not seem to have much instrumental value, as other private companies continue to supply the technology unhindered.

Regulation of FRT

The approach to the regulation of FRT differs vastly across the globe. The regulatory spectrum ranges from the permissive use of mass surveillance on citizens in countries like China and Russia to outright bans on the use of FRT, for example in Belgium and in Boston (in the USA). However, in many countries around the world, including India, the use of FRT continues unabated, worryingly in a regulatory vacuum.

Recently, an appellate court in the UK declared the use of LFRT for law enforcement purposes unlawful, on the grounds that it violated the rights to data privacy and equality. Despite the presence of a legal framework in the UK for data protection and the use of surveillance cameras, the Court of Appeal held that there was no clear guidance on the use of the technology and that it gave excessive discretion to police officers. 

The EU has been contemplating a moratorium on the use of FRT in public places. Civil society in the EU is demanding a comprehensive and indefinite ban on the use of FRT and related technology for mass surveillance activities.

In the USA, several orders banning or heavily regulating the use of FRT have been passed. A federal law banning the use of facial recognition and biometric technology by law enforcement has been proposed. The bill seeks to place a moratorium on the use of facial recognition until Congress passes a law to lift the temporary ban. It would apply to federal agencies such as the FBI, as well as local and State police departments.

The Indian Scenario

In July 2019, the Government of India announced its intention of setting up a nationwide facial recognition system. The National Crime Records Bureau (NCRB) – a government agency operating under the Ministry of Home Affairs – released a request for proposal (RFP) on July 4, 2019 to procure a National Automated Facial Recognition System (AFRS). The deadline for submission of tenders to the RFP has been extended 11 times since July 2019. The stated aim of the AFRS is to help modernise the police force and to improve information gathering, criminal identification and verification, and the dissemination of such records among various police organisations and units across the country. 

Security forces across the states and union territories will have access to the centralised database of AFRS, which will assist in the investigation of crimes. However, civil society organisations have raised concerns regarding privacy and issues of increased surveillance by the State as AFRS does not have a legal basis (statutory or executive) and lacks procedural safeguards and accountability measures like an oversight regulatory authority. They have also questioned the accuracy of FRT in identifying darker skinned women and ethnic minorities and expressed fears of discrimination. 

This is in addition to the FRT already in use by law enforcement agencies in Chennai, Hyderabad, Delhi, and Punjab. There are several instances of deployment of FRT in India by the government in the absence of a specific law regulating FRT or a general data protection law.

Even the proposed Personal Data Protection Bill, 2019 is unlikely to assuage privacy challenges arising from the use of FRT by the Indian State. The primary reason for this is the broad exemptions provided to intelligence and law enforcement agencies under Clause 35 of the Bill on grounds of sovereignty and integrity, security of the State, public order, etc.

After the judgement in K.S. Puttaswamy vs. Union of India (Puttaswamy I), which reaffirmed the fundamental right to privacy in India, any act of State surveillance that breaches the right to privacy will need to adhere to the three-part test laid down in Puttaswamy I.

The three prongs of the test are – legality, which postulates the existence of a law along with procedural safeguards; necessity, defined in terms of a legitimate State aim; and proportionality, which ensures a rational nexus between the objects and the means adopted to achieve them. This test was also applied to the use of biometric technology in the Aadhaar case (Puttaswamy II). 

It may be argued that State use of FRT serves the legitimate aim of ensuring national security, but its use is currently neither sanctioned by law, nor does it pass the test of proportionality. For the use of FRT to be proportionate, the State will need to establish that there is a rational nexus between its use and the purpose sought to be achieved, and that the use of such technology is the least privacy-restrictive measure to achieve the intended goals. As the law stands in India today, after Puttaswamy I and II, any use of FRT or LFRT is prima facie unconstitutional. 

While mass surveillance is legally impermissible in India, targeted surveillance is allowed under Section 5 of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph Rules, 1951, and Section 69 of the Information Technology Act, 2000 (IT Act). Even the constitutionality of Section 69 of the IT Act has been challenged and is currently pending before the Supreme Court.

Puttaswamy I has clarified that the protection of privacy is not completely lost or surrendered in a public place, as it is attached to the person. Hence, the constitutionality of India's surveillance apparatus needs to be assessed against the standards laid down in Puttaswamy I. To check unregulated mass surveillance through the deployment of FRT by the State, there is a need to restructure the overall surveillance regime in the country. Even the Justice Srikrishna Committee report in 2018 highlighted that several executive-sanctioned intelligence-gathering activities of law enforcement agencies would be illegal after Puttaswamy I, as they do not operate under any law. 

The need for reform of surveillance laws, in addition to a data protection law, to safeguard fundamental rights and civil liberties in India cannot be stressed enough. Surveillance law reform will have to focus on the use of new technologies like FRT and regulate their deployment with substantive and procedural safeguards to prevent the abuse of human rights and civil liberties and to provide for relief. 

Well-documented limitations of FRT and LFRT in terms of low accuracy rates, along with concerns of profiling and discrimination, make it essential for surveillance law reform to include additional safeguards such as mandatory accuracy and non-discrimination audits. For example, the 2019 Face Recognition Vendor Test (part three) of the National Institute of Standards and Technology (NIST), US Department of Commerce, evaluates whether an algorithm performs differently across different demographics in a dataset. The need of the hour is to cease the use of FRT and put a temporary moratorium on any future deployments till surveillance law reforms with adequate proportionality safeguards have been implemented. 
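
To illustrate what such an accuracy and non-discrimination audit could involve, here is a minimal, hypothetical sketch of a disaggregated error-rate check in the spirit of NIST's demographic differentials analysis: comparing false match rates across groups. The column names, sample records, and group labels are invented for illustration and are not drawn from any actual evaluation.

```python
# A hypothetical sketch of a disaggregated accuracy check: compute the false
# match rate (different-person comparisons wrongly declared a match) per
# demographic group. All records and column names here are invented.
import pandas as pd

results = pd.DataFrame([
    {"group": "group_1", "same_person": False, "predicted_match": True},
    {"group": "group_1", "same_person": False, "predicted_match": False},
    {"group": "group_2", "same_person": False, "predicted_match": False},
    {"group": "group_2", "same_person": False, "predicted_match": False},
])

impostor_pairs = results[~results["same_person"]]  # different-person comparisons only
fmr_by_group = impostor_pairs.groupby("group")["predicted_match"].mean()
print(fmr_by_group)  # large gaps between groups would flag disparate performance
```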

Group Privacy and Data Trusts: A New Frontier for Data Governance?

The Centre's Non Personal Data Report proposes a policy framework to regulate the use of anonymised data used by Big Tech companies. The question now is: how well do its recommendations measure up to the challenges of regulating non-personal data, amidst a regulatory lacuna for the same? Shashank Mohan of the Centre for Communication Governance explores how concepts of collective privacy and data trusts lie at the forefront of India's future frameworks for digital governance.

By Shashank Mohan

This post first appeared on The Bastion on September 13, 2020

Image Credits: Swagam Dasgupta, The Bastion

In the past few years, it has become common knowledge that Big Tech companies like Facebook, Google, and Amazon rely on the exploitation of user data to offer seemingly free services. These companies typically use business models that rely on third party advertising to profit off this data. In exchange for their services, we hand over our data without much control or choice in the transaction. 

In response to the privacy threats posed by such business models, countries around the world have been strengthening and enacting data privacy laws. India is currently debating its own personal data protection law, which is loosely based on the benchmark EU data protection law–the General Data Protection Regulation (GDPR). More recently, attention has shifted to the regulation of non-personal data as well. The Indian Government recently released a report on the Non-Personal Data Governance Framework (NPD Report).

But, why do we need to regulate non-personal data?

While progress on the regulation of personal data is necessary and laudable, in the era of Big Data and machine learning, tech companies no longer need to solely rely on processing our personally identifiable data (personal data) to profile or track users. With newer developments in data analytics, they can find patterns and target us using seemingly innocuous data that may be aggregated or anonymised, but doesn’t need to be identifiable.

For example, they only need to know that I am a brown male in the age range of 25-35, from New Delhi, looking for shoes, and not necessarily my name or my phone number. All of this is “non-personal” data as it’s not linked to my personal identity.

Clearly, tech companies extract value from their service offerings using advanced data analytics and machine learning algorithms which rummage through both personal and non-personal data. This shift to harnessing non-identifiable/anonymised/aggregated data creates a lacuna in the governance of data, as traditionally, data protection laws like the GDPR have focused on identifiable data and giving an individual control over their personal data.

So, among other economic proposals, the NPD Report proposes a policy framework to regulate such anonymised data, to fill this lacuna. The question now is: how well do its recommendations measure up to the challenges of regulating non-personal data? 

How Does The Government Define Non-Personal Data?

The NPD Report proposes the regulation of non-personal data, which it defines as data that is never related to an identifiable person, such as data on weather conditions, or personal (identifiable) data which has been rendered anonymous by applying certain technological techniques (such as data anonymisation). The report also recommends the mandatory cross-sharing of this non-personal data between companies, communities of individuals, and the government. The purposes for which this data may be mandated to be shared fall under three broad buckets: national security, community benefit, and promoting market competition.

However, if such data is not related to an identifiable individual, then how can it be protected under personal data privacy laws?

To address these challenges in part, the report introduces two key concepts: collective privacy and data trusts. 

The NPD Report defines collective privacy as a right emanating from a community or group of people that are bound by common interests and purposes. It recommends that communities or a group of people exercise control over their non-personal data–which is distinct from an individual exercising control of their personal data–and do so via an appropriate nominee called a data trustee, who would exercise their privacy rights on behalf of the entire community. These two interconnected concepts of collective privacy and data trusteeship merit deeper exploration, due to their significant impact on how we view privacy rights in the digital age.

What is Collective Privacy and How Shall We Protect It?

The concept of collective privacy shifts the focus from an individual controlling their privacy rights, to a group or a community having data rights as a whole. In the age of Big Data analytics, the NPD Report does well to discuss the risks of collective privacy harms to groups of people or communities. It is essential to look beyond traditional notions of privacy centered around an individual, as Big Data analytical tools rarely focus on individuals, but on drawing insights at the group level, or on “the crowd” of technology users.

In a revealing example from 2013, data processors who accessed New York City's taxi trip data (including trip dates and times) were able to infer with a degree of accuracy whether a taxi driver was a devout Muslim, even though data on the taxi licenses and medallion numbers had been anonymised. Data processors linked pauses in taxi trips to regularly timed prayers to arrive at their conclusion. Such findings and classifications may result in heightened surveillance of, or discrimination against, such groups or communities as a whole.
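
A toy sketch of the kind of inference described above follows; the pseudonymous driver IDs, timings, prayer windows, and the two-pause threshold are entirely hypothetical, and the point is only to show how group-level labels can be attached to records that contain no direct identifiers.

```python
# Hypothetical illustration: inferring a sensitive group label from anonymised
# trip records by checking whether a driver's pauses coincide with fixed prayer
# windows. All IDs, times, windows, and the threshold are invented.
from datetime import time

prayer_windows = [(time(12, 0), time(12, 30)), (time(15, 30), time(16, 0))]

def pauses_in_prayer_windows(trip_end_times):
    """Count trip pauses that begin inside one of the fixed windows."""
    return sum(
        1 for t in trip_end_times
        if any(start <= t <= end for start, end in prayer_windows)
    )

# Pseudonymous records: driver IDs mapped to the times their trips ended.
trips = {
    "driver_8f3a": [time(12, 5), time(15, 40), time(18, 10)],
    "driver_c21d": [time(9, 15), time(13, 50)],
}
flagged = [d for d, ends in trips.items() if pauses_in_prayer_windows(ends) >= 2]
print(flagged)  # drivers inferred (possibly wrongly) to be observing prayer times
```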

An example of such a community in the report itself is of people suffering from a socially stigmatised disease who happen to reside in a particular locality in a city. It might be in the interest of such a community to keep details about their ailment and residence private, as even anonymised data pointing to their general whereabouts could lead to harassment and the violation of their privacy.

In such cases, harms arise not specifically to an individual, but to a group or community as a whole. Even if data is anonymised (and rendered completely un-identifiable), insights drawn at a group level help decipher patterns and enable profiling at the macro level.

However, the community suffering from the disease might also see some value in sharing limited, anonymised data on themselves with certain third parties; for example, with experts conducting medical research to find a cure to the disease. Such a group may nominate a data trustee–as envisioned by the NPD Report–who facilitates the exchange of non-personal data on their behalf, and takes their privacy interests into account with relevant data processors. 

This model of data trusteeship is thus clearly envisioned as a novel intermediary relationship–distinct from traditional notions of a legal trust or trustee for the management of property–between users and data trustees to facilitate the proper exchange of data, and protect users against privacy harms like large-scale profiling and behavioral manipulation.

But, what makes data trusts unique? 

Are Data Trusts the New ‘Mutual Funds’? 

Currently, data processors process a wide range of data–both personal and non-personal–about users, without providing them accessible information about how they collect or use it. These users, if they wish to use services offered by data processors, do not have any negotiating power over the collection or processing of their data. This results in information asymmetries and power imbalances between the two parties, without much recourse for users–especially in terms of non-personal data, which is not covered by personal data protection laws like the GDPR or India's Draft Personal Data Protection Bill.  

Data trusts can help solve the challenges arising from everyday data transactions taking place on the Internet. Acting as experts on behalf of users, they may be in a better position to negotiate for privacy-respecting practices than individual users. By standardising data sharing practices like data anonymisation and demanding transparency in data usage, data trusts may also be better placed to protect collective privacy rights than an unstructured community. One of the first recommendations to establish data trusts in public fora came from the UK Government's independent 2017 report, 'Growing the artificial intelligence industry in the UK', which recommended the establishment of data trusts for increased access to data for AI systems.

Simply put: data trusts might be akin to mutual fund managers, as they facilitate complex investments on behalf of and in the best interests of their individual investors. 

The Fault in Our Data Sarkaar

Since data trusts are still untested at a large scale, certain challenges need to be anticipated at the time of their conceptualisation, which the NPD Report does not take into account.

For example, in some cases, the report suggests that the role of the data trustee could be assumed by an arm of the government. The Ministry of Health and Family Welfare, for instance, could act as a trustee for all data on diabetes for Indian citizens. 

However, the government acting as a data trustee raises important questions of conflict of interest–after all, government agencies might utilise relevant non-personal data for the profiling of citizens. The NPD Report doesn’t provide solutions for such challenges.

Additionally, the NPD Report doesn't resolve the ambiguity in the relationship between data trusts and data trustees, adding to the complexity of its recommendations. While the report envisions data trusts as institutional structures purely for the sharing of given data sets, it defines data trustees as agents of 'predetermined' communities who are tasked with protecting their data rights. 

Broadly, this is just like how commodities (like stocks or gold) are traded over an exchange (such as data trusts) while agents such as stockbrokers (or data trustees) assist investors in making their investments. This is distinct from how Indian law treats traditional conceptions of trusts and trustees, and might require fresh law for its creation. 

In terms of the exchange of non-personal data, possibly both these tasks–that is, facilitating data sharing and protecting data rights of communities/groups–can be delegated to just one entity: data trusts. Individuals who do not form part of any ‘predetermined’ community–and thus may not find themselves represented by an appropriate trustee–may also benefit from such hybrid data trusts for the protection of their data rights.

Clearly, multiple cautionary steps need to be in place for data trusts to work, and for the privacy of millions to be protected–steps yet to be fully disclosed in the Report. 

Firstly, there is a need for legal and regulatory mechanisms that will ensure that these trusts genuinely represent the best interests of their members. Without a strong alignment with regulatory policies, data trusts might enable the further exploitation of data, rather than bringing about reforms in data governance. Borrowing from traditional laws on trusts, a genuine representation of interests can be ensured by placing a legal obligation on the trust–in the form of an enforceable trust deed–establishing a fiduciary duty (or duty of care) towards its members.

Secondly, data trusts will require money to operate, and funding models will need to be developed that ensure the independence of trusts while serving their members' best interests. Various models will need to be tested before implementation, including government-funded data trusts and user-subscription based systems.

Thirdly, big questions about the transparency of data trusts remain. As these institutions may be the focal point of data exchange in India, ensuring their independence and accountability will be crucial. Auditing, continuous reviews, and reporting mechanisms will need to be enmeshed in future regulation to ensure the accountability of data trusts.

Privacy Rights Must Be Paramount

As the law tries to keep pace with technology in India, recognising new spheres which require immediate attention, like the challenges of collective privacy, becomes pertinent for policymakers. The NPD Report takes momentous strides in recognising some of these challenges which require swift redressal, but fails to take into consideration emerging scholarship on the autonomy, transparency, and strength of its proposed data trusts.

For example, large data processors will need to be incentivised to engage with data trusts. Smaller businesses may engage with data trusts readily, given the newfound ease of access to large amounts of data. But it might be difficult to incentivise Big Tech companies to engage with such structures, due to their existing stores of wide-scale data on millions of users. This is where the government will need to go back to the drawing board and engage with multiple stakeholders to ensure that innovation goes hand in hand with a privacy-respecting data governance framework. Novel solutions like data trusts should be tested with pilot projects, before being baked into formal policy or law.

More than three years after India’s Supreme Court reaffirmed the right to privacy as intrinsic to human existence and a guarantee under the Indian Constitution, government policy continues to treat data–whether personal or non-personal–as a resource to be ‘mined’. In this atmosphere, to meaningfully recognise the right to privacy and self-determination, the government must lay down a data governance framework which seeks to protect the rights of users (or data providers), lays down principles of transparency and accountability, and establishes strong institutions for enforcement of the law.

(This post is in the context of the report released by the Committee of Experts on the Non-Personal Data Governance Framework, constituted by the Ministry of Electronics and Information Technology. CCG's comments on the report can be accessed here.)

Reflections on Personal Data Protection Bill, 2019

By Sangh Rakshita and Nidhi Singh


The Personal Data Protection Bill, 2019 (PDP Bill/ Bill) was introduced in the Lok Sabha on December 11, 2019, and was immediately referred to a joint committee of Parliament. The joint committee published a press communique on February 4, 2020 inviting comments on the Bill from the public.

The Bill is the successor to the Draft Personal Data Protection Bill 2018 (Draft Bill 2018), recommended by a government appointed expert committee chaired by Justice B.N. Srikrishna. In August 2018, shortly after the recommendations and publication of the draft Bill, the Ministry of Electronics and Information Technology (MeitY) invited comments on the Draft Bill 2018 from the public. (Our comments are available here.)[1]

In this post we undertake a preliminary examination of:

  • The scope and applicability of the PDP Bill
  • The application of general data protection principles
  • The rights afforded to data subjects
  • The exemptions provided to the application of the law

In future posts in the series, we will examine the Bill and look at:

  • The restrictions on cross border transfer of personal data
  • The structure and functions of the regulatory authority
  • The enforcement mechanism and the penalties under the PDP Bill

Scope and Applicability

The Bill identifies four different categories of data. These are personal data, sensitive personal data, critical personal data, and non-personal data.

Personal data is defined as "data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, whether online or offline, or any combination of such features with any other information, and shall include any inference drawn from such data for the purpose of profiling." (emphasis added)

The addition of inferred data to the definition of personal data is an interesting reflection of the way the conversation around data protection has evolved in the past few months, and requires further analysis.

Sensitive personal data is defined as data that may reveal, be related to or constitute a number of different categories of personal data, including financial data, health data, official identifiers, sex life, sexual orientation, genetic data, transgender status, intersex status, caste or tribe, and religious and political affiliations / beliefs. In addition, under clause 15 of the Bill the Central Government can notify other categories of personal data as sensitive personal data in consultation with the Data Protection Authority and the relevant sectoral regulator.

Similar to the 2018 Bill, the current bill does not define critical personal data and clause 33 provides the Central Government the power to notify what is included under critical personal data. However, in its report accompanying the 2018 Bill, the Srikrishna committee had referred to some examples of critical personal data that relate to critical state interest like Aadhaar number, genetic data, biometric data, health data, etc.

The Bill retains the terminology introduced in the 2018 Draft Bill, referring to data controllers as 'data fiduciaries' and data subjects as 'data principals'. The new terminology was introduced to reflect the fiduciary nature of the relationship between data controllers and data subjects. However, whether the use of this specific terminology has any greater impact on the protection and enforcement of the rights of data subjects remains to be seen.

Application of PDP Bill 2019

The Bill is applicable to (i) the processing of any personal data, which has been collected, disclosed, shared or otherwise processed in India; (ii) the processing of personal data by the Indian government, any Indian company, citizen, or person/ body of persons incorporated or created under Indian law; and (iii) the processing of personal data in relation to any individuals in India, by any persons outside of India.

The scope of the 2019 Bill is largely similar in this context to that of the 2018 Draft Bill. However, one key difference is seen in relation to anonymised data. While the 2018 Draft Bill completely exempted anonymised data from its scope, the 2019 Bill does not apply to anonymised data except under clause 91, which gives the government the power to mandate the use and processing of non-personal data or anonymised personal data under policies to promote the digital economy. There are a few concerns that arise in the context of this change in the treatment of anonymised personal data. First, there are concerns about the concept of anonymisation of personal data itself. While the Bill provides that the Data Protection Authority (DPA) will specify appropriate standards of irreversibility for the process of anonymisation, it is not clear that a truly irreversible form of anonymisation is possible at all. In this case, we need more clarity on what safeguards will be applicable to the use of anonymised personal data.
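
The difficulty of guaranteeing irreversibility can be illustrated with a small, hypothetical sketch: even after direct identifiers are stripped, combinations of remaining attributes (quasi-identifiers such as pin code, age, and gender) can single an individual out. The records and fields below are invented; the check shown captures the intuition behind measures like k-anonymity, which a DPA standard on anonymisation might draw on.

```python
# Hypothetical sketch: a k-anonymity style check on an 'anonymised' dataset.
# If the smallest group sharing the same quasi-identifiers has size 1, at least
# one person remains uniquely identifiable despite the removal of names.
import pandas as pd

anonymised = pd.DataFrame([
    {"pin_code": "110001", "age": 34, "gender": "F", "diagnosis": "condition_x"},
    {"pin_code": "110001", "age": 34, "gender": "M", "diagnosis": "condition_y"},
    {"pin_code": "560038", "age": 29, "gender": "F", "diagnosis": "condition_z"},
])

quasi_identifiers = ["pin_code", "age", "gender"]
group_sizes = anonymised.groupby(quasi_identifiers).size()
k = int(group_sizes.min())  # k-anonymity: size of the smallest indistinguishable group
print(f"k = {k}")           # k == 1 means someone can be singled out
```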

Second is the Bill's focus on the promotion of the digital economy. We have previously discussed some of the concerns regarding the focus on the promotion of the digital economy in a rights-based legislation in our comments on the Draft Bill 2018.

These issues continue to be of concern, and are perhaps heightened by the introduction of a specific provision on the subject in the 2019 Bill (especially without adequate clarity on what services or policy-making efforts in this direction are to be informed by the use of anonymised personal data). Many of these issues are also still under discussion by the committee of experts set up to deliberate on a data governance framework for non-personal data. The mandate of this committee includes the study of various issues relating to non-personal data, and making specific suggestions for the consideration of the central government on the regulation of non-personal data.

The formation of the non-personal data committee was in pursuance of a recommendation by the Justice Srikrishna Committee to frame a legal framework for the protection of community data, where the community is identifiable. The mandate of the expert committee will overlap with the application of clause 91(2) of the Bill.

Data Fiduciaries, Social Media Intermediaries and Consent Managers

Data Fiduciaries

As discussed above, the Bill categorises data controllers as data fiduciaries and significant data fiduciaries. Any person that determines the purpose and means of processing of personal data (including the State, companies, juristic entities or individuals) is considered a data fiduciary. Some data fiduciaries may be notified as 'significant data fiduciaries', on the basis of factors such as the volume and sensitivity of personal data processed, the risks of harm, etc. Significant data fiduciaries are held to higher standards of data protection. Under clauses 27-30, significant data fiduciaries are required to carry out data protection impact assessments, maintain accurate records, have their policies and the conduct of their processing of personal data audited, and appoint a data protection officer. 

Social Media Intermediaries

The Bill introduces a distinct category of intermediaries called social media intermediaries. Under clause 26(4) a social media intermediary is ‘an intermediary who primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services’. Intermediaries that primarily enable commercial or business-oriented transactions, provide access to the Internet, or provide storage services are not to be considered social media intermediaries.

Social media intermediaries may be notified to be significant data fiduciaries, if they have a minimum number of users, and their actions have or are likely to have a significant impact on electoral democracy, security of the State, public order or the sovereignty and integrity of India.

Under clause 28, social media intermediaries that have been notified as significant data fiduciaries will be required to provide for voluntary verification of users, to be accompanied by a demonstrable and visible mark of verification.

Consent Managers

The Bill also introduces the idea of a ‘consent manager’ i.e. a (third party) data fiduciary which provides for management of consent through an ‘accessible, transparent and interoperable platform’. The Bill does not contain any details on how consent management will be operationalised, and only states that these details will be specified by regulations under the Bill. 

Data Protection Principles and Obligations of Data Fiduciaries

Consent and grounds for processing

The Bill recognises consent as well as a number of other grounds for the processing of personal data.

Clause 11 provides that personal data shall only be processed if consent is provided by the data principal at the commencement of processing. This provision, similar to the consent provision in the 2018 Draft Bill, draws from various principles including those under the Indian Contract Act, 1872 to inform the concept of valid consent under the PDP Bill. The clause requires that the consent should be free, informed, specific, clear and capable of being withdrawn.

Moreover, explicit consent is required for the processing of sensitive personal data. The current Bill appears to be silent on issues such as incremental consent which were highlighted in our comments in the context of the Draft Bill 2018.

The Bill provides for additional grounds for processing of personal data, consisting of very broad (and much criticised) provisions for the State to collect personal data without obtaining consent. In addition, personal data may be processed without consent if required in the context of employment of an individual, as well as a number of other ‘reasonable purposes’. Some of the reasonable purposes, which were listed in the Draft Bill 2018 as well, have also been a cause for concern given that they appear to serve mostly commercial purposes, without regard for the potential impact on the privacy of the data principal.

In a notable change from the Draft Bill 2018, the PDP Bill appears to be silent on whether these other grounds for processing will be applicable to sensitive personal data (with the exception of processing in the context of employment, which is explicitly barred).

Other principles

The Bill also incorporates a number of traditional data protection principles in the chapter outlining the obligations of data fiduciaries. Personal data can only be processed for a specific, clear and lawful purpose. Processing must be undertaken in a fair and reasonable manner and must ensure the privacy of the data principal – a clear mandatory requirement, as opposed to a ‘duty’ owed by the data fiduciary to the data principal in the Draft Bill 2018 (this change appears to be in line with recommendations made in multiple comments to the Draft Bill 2018 by various academics, including our own).

Purpose and collection limitation principles are mandated, along with a detailed description of the kind of notice to be provided to the data principal, either at the time of collection, or as soon as possible if the data is obtained from a third party. The data fiduciary is also required to ensure that data quality is maintained.

A few changes in the application of data protection principles, as compared to the Draft Bill 2018, can be seen in the data retention and accountability provisions.

On data retention, clause 9 of the Bill provides that personal data shall not be retained beyond the period ‘necessary’ for the purpose of data processing, and must be deleted after such processing, ostensibly a higher standard as compared to ‘reasonably necessary’ in the Draft Bill 2018. Personal data may only be retained for a longer period if explicit consent of the data principal is obtained, or if retention is required to comply with law. In the face of the many difficulties in ensuring meaningful consent in today’s digital world, this may not be a win for the data principal.

Clause 10 on accountability continues to provide that the data fiduciary will be responsible for compliance in relation to any processing undertaken by the data fiduciary or on its behalf. However, the data fiduciary is no longer required to demonstrate such compliance.

Rights of Data Principals

Chapter V of the PDP Bill 2019 outlines the Rights of Data Principals, including the rights to access, confirmation, correction, erasure, data portability and the right to be forgotten. 

Right to Access and Confirmation

The PDP Bill 2019 makes some amendments to the right to confirmation and access, included in clause 17 of the bill. The right has been expanded in scope by the inclusion of sub-clause (3). Clause 17(3) requires data fiduciaries to provide data principals information about the identities of any other data fiduciaries with whom their personal data has been shared, along with details about the kind of data that has been shared.

This allows the data principal to exert greater control over their personal data and its use.  The rights to confirmation and access are important rights that inform and enable a data principal to exercise other rights under the data protection law. As recognized in the Srikrishna Committee Report, these are ‘gateway rights’, which must be given a broad scope.

Right to Erasure

The right to correction (Clause 18) has been expanded to include the right to erasure. This allows data principals to request erasure of personal data which is not necessary for processing. While data fiduciaries may be allowed to refuse correction or erasure, they would be required to produce a justification in writing for doing so, and if there is a continued dispute, indicate alongside the personal data that such data is disputed.

The addition of a right to erasure is an expansion of rights from the 2018 Bill. While the right to be forgotten only restricts or discontinues the disclosure of personal data, the right to erasure goes a step further and empowers the data principal to demand the complete removal of data from the systems of the data fiduciary.

Many of the concerns expressed in the context of the Draft Bill 2018, in terms of the procedural conditions for the exercise of the rights of data principals, as well as the right to data portability specifically, continue to persist in the PDP Bill 2019.

Exceptions and Exemptions

While the PDP Bill ostensibly enables individuals to exercise their right to privacy against the State and the private sector, it contains several exemptions, which raise serious concerns.

The Bill grants broad exceptions to the State. In some cases, it is in the context of specific obligations such as the requirement for individuals’ consent. In other cases, State action is almost entirely exempted from obligations under the law. Some of these exemptions from data protection obligations are available to the private sector as well, on grounds like journalistic purposes, research purposes and in the interests of innovation.

The most concerning of these provisions, are the exemptions granted to intelligence and law enforcement agencies under the Bill. The Draft Bill 2018, also provided exemptions to intelligence and law enforcement agencies, so far as the privacy invasive actions of these agencies were permitted under law, and met procedural standards, as well as legal standards of necessity and proportionality. We have previously discussed some of the concerns with this approach here.

The exemptions provided to these agencies under the PDP Bill, seem to exacerbate these issues.

Under the Bill, the Central Government can exempt an agency of the government from the application of this Act by passing an order with reasons recorded in writing if it is of the opinion that the exemption is necessary or expedient in the interest of sovereignty and integrity, security of the state, friendly relations with foreign states, public order; or for preventing incitement to the commission of any cognizable offence relating to the aforementioned grounds. Not only have the grounds on which government agencies can be exempted been worded in an expansive manner, the procedure of granting these exemptions also is bereft of any safeguards.

Executive functioning in India at times suffers from problems of opacity and unfettered discretion, which requires a robust system of checks and balances to avoid abuse. The Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act) enable government surveillance of communications made over telephones and the internet. To draw a comparison here, we primarily refer to the Telegraph Act, as it allows the government, by an order in writing, to intercept phone calls on grounds similar to those mentioned in clause 35 of the Bill. However, the Telegraph Act limits the use of this power to two scenarios – the occurrence of a public emergency or the interest of public safety. The government cannot intercept communications made over telephones in the absence of these two preconditions.

The Supreme Court in People's Union for Civil Liberties v. Union of India (1997) introduced guidelines to check the abuse of surveillance powers under the Telegraph Act, which were later incorporated into Rule 419A of the Indian Telegraph Rules, 1951. A prominent safeguard included in Rule 419A requires that surveillance and monitoring orders be issued only after considering 'other reasonable means' for acquiring the required information. The court had further limited the scope of interpretation of 'public emergency' and 'public safety' to mean "the prevalence of a sudden condition or state of affairs affecting the people at large and calling for immediate action" and "the state or condition of freedom from danger or risk at large" respectively. In spite of the introduction of these safeguards, the procedure for intercepting telephone communications under the Telegraph Act has been criticised for a lack of transparency and improper implementation. For instance, a 2014 report revealed that around 7500 – 9000 phone interception orders were issued by the Central Government every month. The application of procedural safeguards in each case would have been physically impossible given the sheer numbers. Thus, legislative and judicial oversight becomes a necessity in such cases.

The constitutionality of India's surveillance apparatus, including section 69 of the IT Act, which allows for surveillance on the broader grounds of necessity and expediency rather than 'public emergency' and 'public safety', has been challenged before the Supreme Court and is currently pending. Clause 35 of the Bill also mentions necessity and expediency as prerequisites for the government to exercise its power to grant exemptions; these terms appear vague and open-ended, as they are not defined. The test of necessity implies resorting to the least intrusive method of encroachment upon privacy to achieve the legitimate state aim. This test is typically one among several factors applied in deciding whether a particular intrusion on a right is tenable under human rights law. In his concurring opinion in Puttaswamy (I), J. Kaul included 'necessity' in the proportionality test. (However, this test is not otherwise well developed in Indian jurisprudence.) Expediency, on the other hand, is not a specific legal basis used for determining the validity of an intrusion on human rights. It has also not been referred to in Puttaswamy (I) as a basis for assessing a privacy violation. The use of the term 'expediency' in the Bill is deeply worrying, as it seems to lower the threshold for allowing surveillance, which is a regressive step in the context of cases like PUCL and Puttaswamy (I). A valid law, along with the principles of proportionality and necessity, is essential to put in place an effective system of checks and balances on the powers of the executive to provide exemptions. It seems unlikely that the clause will pass the test of proportionality (sanction of law, legitimate aim, proportionality to the need of interference, and procedural guarantees against abuse) as laid down by the Supreme Court in Puttaswamy (I).

The Srikrishna Committee report had recommended that surveillance should not only be conducted under law (and not executive order), but also be subject to oversight, and transparency requirements. The Committee had argued that the tests of lawfulness, necessity and proportionality provided for under clauses 42 and 43 (of the Draft Bill 2018) were sufficient to meet the standards set out under the Puttaswamy judgment. Since the PDP Bill completely does away with all these safeguards and leaves the decision to executive discretion, the law is unconstitutional.  After the Bill was introduced in the Lok Sabha, J. Srikrishna had criticised it for granting expansive exemptions in the absence of judicial oversight. He warned that the consequences could be disastrous from the point of view of safeguarding the right to privacy and could turn the country into an “Orwellian State”. He has also opined on the need for a separate legislation to govern the terms under which the government can resort to surveillance.

Clause 36 of the Bill deals with exemption from some provisions for certain kinds of processing of personal data. It combines four different clauses on exemption which were listed in the Draft Bill 2018 (clauses 43, 44, 46 and 47). These include the processing of personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law; for the purpose of legal proceedings; for personal or domestic purposes; and for journalistic purposes. The Draft Bill 2018 had detailed provisions requiring a law passed by Parliament or the State Legislature, which is necessary and proportionate, for the processing of personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law. Clause 36 of the Bill does not require such a law for the processing of personal data under these exemptions. We had argued that the exemptions granted by the Draft Bill 2018 (clauses 43, 44, 46 and 47) were wide, vague and needed clarification, but the exemptions under clause 36 of the Bill are even more ambiguous, as they merely list the exemptions without any specificities or procedural safeguards in place.

In the Draft Bill 2018, the Authority could not grant exemptions from the obligations of fair and reasonable processing, security safeguards, and data protection impact assessments for research, archiving or statistical purposes. As per the current Bill, the Authority can provide exemptions from any of the provisions of the Act for research, archiving or statistical purposes.

The last addition to this chapter of exemptions is the creation of a sandbox for encouraging innovation. This newly added clause 40 is aimed at encouraging innovation in artificial intelligence, machine learning or any other emerging technology in the public interest. The details of what the sandbox entails, other than exemption from some of the obligations of Chapter II, might need further clarity. Additionally, to be considered an eligible applicant, a data fiduciary must necessarily obtain certification of its privacy-by-design policy from the DPA, as mentioned in clause 40(4) read with clause 22.

Though well appreciated for its intent, this provision requires clarification on grounds of selection and details of what the sandbox might entail.


[1] At the time of introduction of the PDP Bill 2019, the Minister for Law and Justice of India, Mr. Ravi Shankar Prasad, suggested that over 2000 inputs were received on the Draft Bill 2018, based on which changes were made in the PDP Bill 2019. However, these comments and inputs have not been published by MeitY; only a handful of comments have been made public, by the stakeholders who submitted them.

[September 30-October 7] CCG's Week in Review: Curated News in Information Law and Policy

Huawei finds support from Indian telcos in the 5G rollout as PayPal withdraws from Facebook's Libra cryptocurrency project; Foreign Portfolio Investors move MeitY against the Data Protection Bill; the CJEU rules against Facebook in a case relating to the takedown of content globally; and Karnataka joins the list of states considering implementing the NRC to remove illegal immigrants – presenting this week's most important developments in law, tech and national security.

Digital India

  • [Sep 30] Why the imminent global economic slowdown is a growth opportunity for Indian IT services firms, Tech Circle report.
  • [Sep 30] Norms tightened for IT items procurement for schools, The Hindu report.
  • [Oct 1] Govt runs full throttle towards AI, but tech giants want to upskill bureaucrats first, Analytics India Magazine report.
  • [Oct 3] MeitY launches smart-board for effective monitoring of the key programmes, The Economic Times report.
  • [Oct 3] “Use human not artificial intelligence…” to keep a tab on illegal constructions: Court to Mumbai civic body, NDTV report.
  • [Oct 3] India took 3 big productivity leaps: Nilekani, Livemint report.
  • [Oct 4] MeitY to push for more sops to lure electronic makers, The Economic Times report; Inc42 report.
  • [Oct 4] Core philosophy of Digital India embedded in Gandhian values: Ravi Shankar Prasad, Financial Express report.
  • [Oct 4] How can India leverage its data footprint? Experts weigh in at the India Economic Summit, Quartz report.
  • [Oct 4] Indians think jobs would be easy to find despite automation: WEF, Tech Circle report.
  • [Oct 4] Telangana govt adopts new framework to use drones for last-mile delivery, The Economic Times report.
  • [Oct 5] Want to see ‘Assembled in India’ on an iPhone: Ravi Shankar Prasad, The Economic Times report.
  • [Oct 6] Home market gets attractive for India’s IT giants, The Economic Times report.

Internet Governance

  • [Oct 2] India Govt requests maximum social media content takedowns in the world, Inc42 report; Tech Circle report.
  • [Oct 3] Facebook can be forced to delete defamatory content worldwide, top EU court rules, Politico EU report.
  • [Oct 4] EU ruling may spell trouble for Facebook in India, The Economic Times report.
  • [Oct 4] TikTok, TikTok… the clock is ticking on the question whether ByteDance pays its content creators, ET Tech report.
  • [Oct 6] Why data localization triggers a heated debate, The Economic Times report.
  • [Oct 7] Sensitive Indian govt data must be stored locally, Outlook report.

Data Protection and Privacy

  • [Sep 30] FPIs move MeitY against data bill, seek exemption, ET markets report, Inc42 report; Financial Express report.
  • [Oct 1] United States: CCPA exception approved by California legislature, Mondaq.com report.
  • [Oct 1] Privacy is gone, what we need is regulation, says Infosys' Kris Gopalakrishnan, News18 report.
  • [Oct 1] Europe’s top court says active consent is needed for tracking cookies, Tech Crunch report.
  • [Oct 3] Turkey fines Facebook $282,000 over data privacy breach, Deccan Herald report.

Free Speech

  • [Oct 1] Singapore’s ‘fake news’ law to come into force Wednesday, but rights group worry it could stifle free speech, The Japan Times report.
  • [Oct 2] Minister says Singapore’s fake news law is about ‘enabling’ free speech, CNBC report.
  • [Oct 3] Hong Kong protests: Authorities to announce face mask ban, BBC News report.
  • [Oct 3] ECHR: Holocaust denial is not protected free speech, ASIL brief.
  • [Oct 4] FIR against Mani Ratnam, Adoor and 47 others who wrote to Modi on communal violence, The News Minute report; Times Now report.
  • [Oct 5] UN asks Malaysia to repeal laws curbing freedom of speech, The New Indian Express report.
  • [Oct 6] When will our varsities get freedom of expression: PC, Deccan Herald report.
  • [Oct 6] UK Government to make university students sign contracts limiting speech and behavior, The Times report.
  • [Oct 7] FIR on Adoor and others condemned, The Telegraph report.

Aadhaar, Digital IDs

  • [Sep 30] Plea in SC seeking linking of social media accounts with Aadhaar to check fake news, The Economic Times report.
  • [Oct 1] Why another omnibus national ID card?, The Hindu Business Line report.
  • [Oct 2] ‘Kenyan court process better than SC’s approach to Aadhaar challenge’: V Anand, who testified against biometric project, LiveLaw report.
  • [Oct 3] Why Aadhaar is a stumbling block in Modi govt’s flagship maternity scheme, The Print report.
  • [Oct 4] Parliament panel to review Aadhaar authority functioning, data security, NDTV report.
  • [Oct 5] Could Aadhaar linking stop GST frauds?, Financial Express report.
  • [Oct 6] Call for liquor sale-Aadhaar linking, The New Indian Express report.

Digital Payments, Fintech

  • [Oct 7] Vision cash-lite: A billion UPI transactions is not enough, Financial Express report.

Cryptocurrencies

  • [Oct 1] US SEC fines crypto company Block.one for unregistered ICO, Medianama report.
  • [Oct 1] South Korean Court issues landmark decision on crypto exchange hacking, Coin Desk report.
  • [Oct 2] The world’s most used cryptocurrency isn’t bitcoin, ET Markets report.
  • [Oct 2] Offline transactions: the final frontier for global crypto adoption, Coin Telegraph report.
  • [Oct 3] Betting on bitcoin prices may soon be deemed illegal gambling, The Economist report.
  • [Oct 3] Japan’s financial regulator issues draft guidelines for funds investing in crypto, Coin Desk report.
  • [Oct 3] Hackers launch widespread botnet attack on crypto wallets using cheap Russian malware, Coin Desk report.
  • [Oct 4] State-backed crypto exchange in Venezuela launches new crypto debit cards, Decrypt report.
  • [Oct 4] PayPal withdraws from Facebook-led Libra crypto project, Coin Desk report.
  • [Oct 5] Russia regulates digital rights, advances other crypto-related bills, Bitcoin.com report.
  • [Oct 5] Hong Kong regulates crypto funds, Decrypt report.

Cybersecurity and Cybercrime

  • [Sep 30] Legit-looking iPhone lightning cables that hack you will be mass produced and sold, Vice report.
  • [Sep 30] Blackberry launches new cybersecurity development labs, Infosecurity Magazine report.
  • [Oct 1] Cybersecurity experts warn that these 7 emerging technologies will make it easier for hackers to do their jobs, Business Insider report.
  • [Oct 1] US government confirms new aircraft cybersecurity move amid terrorism fears, Forbes report.
  • [Oct 2] ASEAN unites to fight back on cyber crime, GovInsider report; Asia One report.
  • [Oct 2] Adopting AI: the new cybersecurity playbook, TechRadar Pro report.
  • [Oct 4] US-UK Data Access Agreement, signed on Oct 3, is an executive agreement under the CLOUD Act, Medianama report.
  • [Oct 4] The lack of cybersecurity talent is ‘a national security threat,’ says DHS official, Tech Crunch report.
  • [Oct 4] Millions of Android phones are vulnerable to Israeli surveillance dealer attack, Forbes report; NDTV report.
  • [Oct 4] IoT devices, cloud solutions soft target for cybercriminals: Symantec, Tech Circle report.
  • [Oct 6] 7 cybersecurity threats that can sneak up on you, Wired report.
  • [Oct 6] No one could prevent another ‘WannaCry-style’ attack, says DHS official, Tech Crunch report.
  • [Oct 7] Indian firms rely more on automation for cybersecurity: Report, ET Tech report.

Cyberwarfare

  • [Oct 2] New ASEAN committee to implement norms for countries’ behaviour in cyberspace, CNA report.

Tech and National Security

  • [Sep 30] IAF ready for Balakot-type strike, says new chief Bhadauria, The Hindu report; Times of India report.
  • [Sep 30] Naval variant of LCA Tejas achieves another milestone during its test flight, Livemint report.
  • [Sep 30] SAAB wants to offer Gripen at half of Rafale cost, full tech transfer, The Print report.
  • [Sep 30] Rajnath harps on ‘second strike capability’, The Shillong Times report.
  • [Oct 1] EAM Jaishankar defends India’s S-400 missile system purchase from Russia as US sanctions threat, International Business Times report.
  • [Oct 1] SC for balance between liberty, national security, Hindustan Times report.
  • [Oct 2] Startups have it easy for defence deals up to Rs. 150 cr, ET Rise report; Swarajya Magazine report.
  • [Oct 3] Huawei-wary US puts more pressure on India, offers alternatives to data localization, The Economic Times report.
  • [Oct 4] India-Russia missile deal: What is CAATSA law and its implications?, Jagran Josh report.
  • [Oct 4] Army inducts Israeli ‘tank killers’ till DRDO develops new ones, Defence Aviation post report.
  • [Oct 4] China, Russia deepen technological ties, Defense One report.
  • [Oct 4] Will not be afraid of taking decisions for fear of attracting corruption complaints: Rajnath Singh, New Indian Express report.
  • [Oct 4] At conclave with naval chiefs of 10 countries, NSA Ajit Doval floats an idea, Hindustan Times report.
  • [Oct 6] Pathankot airbase to finally get enhanced security, The Economic Times report.
  • [Oct 6] Rafale with Meteor and Scalp missiles will give India unrivalled combat capability: MBDA, The Economic Times report.
  • [Oct 7] India, Bangladesh sign MoU for setting up a coastal surveillance radar in Bangladesh, The Economic Times report; Deccan Herald report.
  • [Oct 7] Indian operated T-90 tanks to become Russian army’s main battle tank, EurAsian Times report.
  • [Oct 7] IAF’s Sukhois to get more advanced avionics, radar, Defence Aviation post report.

Tech and Law Enforcement

  • [Sep 30] TMC MP Mahua Moitra wants to be impleaded in the WhatsApp traceability case, Medianama report; The Economic Times report.
  • [Oct 1] Role of GIS and emerging technologies in crime detection and prevention, Geospatial World.net report.
  • [Oct 2] TRAI to take more time on OTT norms; lawful interception, security issue now in focus, The Economic Times report.
  • [Oct 2] China invents super surveillance camera that can spot someone from a crowd of thousands, The Independent report.
  • [Oct 4] ‘Don’t introduce end-to-end encryption,’ UK, US and Australia ask Facebook in an open letter, Medianama report.
  • [Oct 4] Battling new-age cyber threats: Kerala Police leads the way, The Week report.
  • [Oct 7] India govt bid for WhatsApp decryption gets push as UK, US, Australia rally support, Entrackr report.

Tech and Elections

  • [Oct 1] WhatsApp was extensively exploited during 2019 elections in India: Report, Firstpost report.
  • [Oct 3] A national security problem without a parallel in American democracy, Defense One report.

Internal Security: J&K

  • [Sep 30] BDC polls across Jammu, Kashmir, Ladakh on Oct 24, The Economic Times report.
  • [Sep 30] India ‘invaded and occupied’ Kashmir, says Malaysian PM at UN General Assembly, The Hindu report.
  • [Sep 30] J&K police stations to have CCTV camera surveillance, News18 report.
  • [Oct 1] 5-judge Supreme Court bench to hear multiple pleas on Article 370, Kashmir lockdown today, India Today report.
  • [Oct 1] India’s stand clear on Kashmir: won’t accept third-party mediation, India Today report.
  • [Oct 1] J&K directs officials to ensure all schools reopen by Thursday, NDTV report.
  • [Oct 2] ‘Depressed, frightened’: Minors held in Kashmir crackdown, Al Jazeera report.
  • [Oct 3] J&K: When the counting of the dead came to a halt, The Hindu report.
  • [Oct 3] High schools open in Kashmir, students missing, The Economic Times report.
  • [Oct 3] Jaishankar reiterates India’s claim over Pakistan-occupied Kashmir, The Hindu report.
  • [Oct 3] Normalcy prevails in Jammu and Kashmir, DD News report.
  • [Oct 3] Kashmiri leaders will be released one by one, India Today report.
  • [Oct 4] India slams Turkey, Malaysia remarks on J&K, The Hindu report.
  • [Oct 5] India’s clampdown hits Kashmir’s Silicon Valley, The Economic Times report.
  • [Oct 5] Traffic cop among 14 injured in grenade attack in South Kashmir, NDTV report; The Economic Times report.
  • [Oct 6] Kashmir situation normal, people happy with Article 370 abrogation: Prakash Javadekar, Times of India report.
  • [Oct 7] Kashmir residents say police forcibly taking over their homes for CRPF troops, Huffpost India report.

Internal Security: Northeast/ NRC

  • [Sep 30] Giving total control of Assam Rifles to MHA will adversely impact vigil: Army to Govt, The Economic Times report.
  • [Sep 30] NRC list impact: Assam’s foreigner tribunals to have 1,600 on contract, The Economic Times report.
  • [Sep 30] Assam NRC: Case against Wipro for rule violation, The Hindu report; News18 report; Scroll.in report.
  • [Sep 30] Hindu outfits demand NRC in Karnataka, Deccan Chronicle report; The Hindustan Times report.
  • [Oct 1] Centre extends AFSPA in three districts of Arunachal Pradesh for six months, ANI News report.
  • [Oct 1] Assam’s NRC: law schools launch legal aid clinic for excluded people, The Hindu report; Times of India report; The Wire report.
  • [Oct 1] Amit Shah in Kolkata: NRC to be implemented in West Bengal, infiltrators will be evicted, The Economic Times report.
  • [Oct 1] US Congress panel to focus on Kashmir, Assam, NRC in hearing on human rights in South Asia, News18 report.
  • [Oct 1] NRC must for national security; will be implemented: Amit Shah, The Hindu Business Line report.
  • [Oct 2] Bengali Hindu women not on NRC pin their hope on promise of another list, citizenship bill, The Print report.
  • [Oct 3] Citizenship Amendment Bill has become necessity for those left out of NRC: Assam BJP president Ranjeet Das, The Economic Times report.
  • [Oct 3] BJP govt in Karnataka mulling NRC to identify illegal migrants, The Economic Times report.
  • [Oct 3] Explained: Why Amit Shah wants to amend the Citizenship Act before undertaking countrywide NRC, The Indian Express report.
  • [Oct 4] Duplicating NPR, NRC to sharpen polarization: CPM, Deccan Herald report.
  • [Oct 5] We were told NRC India’s internal issue: Bangladesh, Livemint report.
  • [Oct 6] Prasanna calls NRC ‘unjust law’, The New Indian Express report.

National Security Institutions

  • [Sep 30] CRPF ‘denied’ ration cash: Govt must stop ‘second-class’ treatment, The Quint report.
  • [Oct 1] Army calls out ‘prejudiced’ foreign report on ‘torture’, refutes claim, Republic World report.
  • [Oct 2] India has no extraterritorial ambition, will fulfill regional and global security obligations: Bipin Rawat, The Economic Times report.

More on Huawei, 5G

  • [Sep 30] Norway open to Huawei supplying 5G equipment, Forbes report.
  • [Sep 30] Airtel deploys 100 hops of Huawei’s 5G technology, The Economic Times report.
  • [Oct 1] America’s answer to Huawei, Foreign Policy report; Tech Circle report.
  • [Oct 1] Huawei buys access to UK innovation with Oxford stake, Financial Times report.
  • [Oct 3] India to take bilateral approach on issues faced by other countries with China: Jaishankar, The Hindu report.
  • [Oct 4] Bharti Chairman Sunil Mittal says India should allow Huawei in 5G, The Economic Times report.
  • [Oct 6] 5G rollout: Huawei finds support from telecom industry, Financial Express report.

Emerging Tech: AI, Facial Recognition

  • [Sep 30] Bengaluru set to roll out AI-based traffic solution at all signals, Entrackr report.
  • [Oct 1] AI is being used to diagnose disease and design new drugs, Forbes report.
  • [Oct 1] Only 10 jobs created for every 100 jobs taken away by AI, The Economic Times report.
  • [Oct 2] Emerging tech is helping companies grow revenues 2x: report, ET Tech report.
  • [Oct 2] Google using dubious tactics to target people with ‘darker skin’ in facial recognition project: sources, Daily News report.
  • [Oct 2] Three problems posed by deepfakes that technology won’t solve, MIT Technology Review report.
  • [Oct 3] Getting a new mobile number in China will involve a facial recognition test, Quartz report.
  • [Oct 4] Google contractors targeting homeless people, college students to collect their facial recognition data: Report, Medianama report.
  • [Oct 4] More jobs will be created than are lost from the AI revolution: WEF AI Head, Livemint report.
  • [Oct 6] IIT-Guwahati develops AI-based tool for electric vehicle motor, Livemint report.
  • [Oct 7] Even if China misuses AI tech, Satya Nadella thinks blocking China’s AI research is a bad idea, India Times report.

Big Tech

  • [Oct 3] Dial P for privacy: Google has three new features for users, Times of India report.

Opinions and Analyses

  • [Sep 26] Richard Stengel, Time, We’re in the middle of a global disinformation war. Here’s what we need to do to win.
  • [Sep 29] Ilker Koksal, Forbes, The shift toward decentralized finance: Why are financial firms turning to crypto?
  • [Sep 30] Nistula Hebbar, The Hindu, Govt. views grassroots development in Kashmir as biggest hope for peace.
  • [Sep 30] Simone McCarthy, South China Morning Post, Could China’s strict cyber controls gain international acceptance?
  • [Sep 30] Nele Achten, Lawfare blog, New UN Debate on cybersecurity in the context of international security.
  • [Sep 30] Dexter Fergie, Defense One, How ‘national security’ took over America.
  • [Sep 30] Bonnie Girard, The Diplomat, A firsthand account of Huawei’s PR drive.
  • [Oct 1] The Economic Times, Rafale: Past tense but future perfect.
  • [Oct 1] Simon Chandler, Forbes, AI has become a tool for classifying and ranking people.
  • [Oct 2] Ajay Batra, Business World, Rethink India! – MMRCA, ESDM & Data Privacy Policy.
  • [Oct 2] Carisa Nietsche, National Interest, Why Europe won’t combat Huawei’s Trojan tech.
  • [Oct 3] Aruna Sharma, Financial Express, The digital way: growth with welfare.
  • [Oct 3] Alok Prasanna Kumar, Medianama, When it comes to Netflix, the Government of India has no chill.
  • [Oct 3] Fredrik Bussler, Forbes, Why we need crypto for good.
  • [Oct 3] Panos Mourdoukoutas, Forbes, India changed the game in Kashmir – Now what?
  • [Oct 3] Grant Wyeth, The Diplomat, The NRC and India’s unfinished partition.
  • [Oct 3] Zak Doffman, Forbes, Is Huawei’s worst Google nightmare coming true?
  • [Oct 4] Oren Yunger, Tech Crunch, Cybersecurity is a bubble, but it’s not ready to burst.
  • [Oct 4] Minakshi Buragohain, Indian Express, NRC: Supporters and opposers must engage each other with empathy.
  • [Oct 4] Frank Ready, Law.com, 27 countries agreed on ‘acceptable’ cyberspace behavior. Now comes the hard part.
  • [Oct 4] Samir Saran, World Economic Forum (blog), 3 reasons why data is not the new oil and why this matters to India.
  • [Oct 4] Andrew Marantz, The New York Times, Free Speech is killing us.
  • [Oct 4] Financial Times editorial, ECJ ruling risks for freedom of speech online.
  • [Oct 4] George Kamis, GCN, Digital transformation requires a modern approach to cybersecurity.
  • [Oct 4] Naomi Xu Elegant and Grady McGregor, Fortune, Hong Kong’s mask ban pits anonymity against the surveillance state.
  • [Oct 4] Prashanth Parameswaran, The Diplomat, What’s behind the new US-ASEAN cyber dialogue?
  • [Oct 5] Huong Le Thu, The Strategist, Cybersecurity and geopolitics: why Southeast Asia is wary of a Huawei ban.
  • [Oct 5] Hannah Devlin, The Guardian, We are hurtling towards a surveillance state: the rise of facial recognition technology.
  • [Oct 5] PV Navaneethakrishnan, The Hindu, Why no takers? (for ME/M.Tech programmes).
  • [Oct 6] Aakar Patel, Times of India blog, Cases against PC, letter-writing celebs show liberties are at risk.
  • [Oct 6] Suhasini Haidar, The Hindu, Explained: How will purchases from Russia affect India-US ties?
  • [Oct 6] Sumit Chakraberty, Livemint, Evolution of business models in the era of privacy by design.
  • [Oct 6] Spy’s Eye, Outlook, Insider threat management.
  • [Oct 6] Roger Marshall, Deccan Herald, Big oil, Big Data and the shape of water.
  • [Oct 6] Neil Chatterjee, Fortune, The power grid is evolving. Cybersecurity must too.
  • [Oct 7] Scott W Pink, Mondaq.com, EU: What is GDPR and CCPA and how does it impact blockchain?
  • [Oct 7] GN Devy, The Telegraph, Has India slid into an irreversible Talibanization of the mind?
  • [Oct 7] Susan Ariel Aaronson, South China Morning Post, The Trump administration’s approach to AI is not that smart: it’s about cooperation, not domination.

[September 23-30] CCG’s Week in Review: Curated News in Information Law and Policy

The deadline to link PAN cards with Aadhaar was extended to December 31 this week; the Election Commission ruled that voting rights of those excluded in the NRC process remain unaffected; the Home Minister proposed a digital census with multipurpose ID cards for 2021; and 27 nations including the US, UK and Canada issued a joint statement urging a rules-based order in cyberspace – presenting this week’s most important developments in law, technology and national security.

Aadhaar and Digital IDs

  • [Sep 23] Home Minister announces digital census in 2021, proposed multipurpose ID card, Entrackr report; Business Today report.
  • [Sep 24] NRIs can now apply for Aadhaar on arrival without 182-day wait, The Economic Times report.
  • [Sep 24] Aadhaar will be linked to driving license to avoid forgery: Ravi Shankar Prasad, The Indian Express report.
  • [Sep 24] One nation, one card? Amit Shah floats idea of all-in-one ID; here are all the problems with that idea, Medianama report; Money Control report.
  • [Sep 24] Explained: Is India likely to have a multipurpose national ID card? The Indian Express report.
  • [Sep 24] UIDAI nod to ‘voluntary’ use of Aadhaar for National Population Register rollout, The Economic Times report.
  • [Sep 24] Govt must decide on Aadhaar-social media linkage: SC, Deccan Herald report.
  • [Sep 25] New law needed for Aadhaar-social media linkage: UIDAI, The Economic Times report; Inc42 report.
  • [Sep 26] NPR process to include passport, voter ID, Aadhaar and other details, Business Standard report.
  • [Sep 27] Gang involved in making fake Aadhaar cards busted, The Tribune report.
  • [Sep 27] What will happen if you don’t link your PAN card with Aadhaar by Sep 30, The Quint report.
  • [Sep 27] Explained: The National Population Register, and the controversy around it, The Indian Express report.
  • [Sep 27] Aadhaar to weed out bogus social security beneficiaries in Karnataka, Deccan Herald report.
  • [Sep 29] Bajrang Dal wants Aadhaar mandatory at dandiya to keep ‘non-Hindus’ out, The Hindustan Times report; The Wire report.
  • [Sep 30] Kerala urges Centre to extend deadline to link ration cards with Aadhaar, The News Minute report.
  • [Sep 30] PAN-Aadhaar linking deadline extended to December 31, The Economic Times report.

Digital India 

  • [Sep 25] India’s regulatory approach should focus on the regulation of the ‘core’: IAMAI, Livemint report.
  • [Sep 27] India may have to offer sops to boost electronic manufacturing, ET Tech report; Inc42 report.
  • [Sep 27] Digital India, start-ups are priorities for $5 trillion economy: PM Modi, Medianama report.
  • [Sep 29] Tech giants aim to skill Indian govt officials in AI, cloud, ET CIO report.
  • [Sep 29] India’s share in IT, R&D biz up in 2 years: report, The Economic Times report.

Internet Governance

  • [Sep 24] Supreme Court to MeitY: What’s the status of intermediary guidelines? Tell us by Oct 15, Medianama report.
  • [Sep 26] Will not be ‘excessive’ with social media rules, say Govt officials, Inc42 report.
  • [Sep 26] Government trying to balance privacy and security in draft IT intermediary norms, The Economic Times report.
  • [Sep 27] Citizens, tech companies served better with some regulation: Facebook India MD Ajit Mohan, ET Tech report; Inc42 report.
  • [Sep 27] Balance benefits of internet, data security: Google CEO Sundar Pichai, ET Tech report; Business Today report.

Free Speech

  • [Sep 25] Jadavpur University calls upon ‘stakeholders’ to ensure free speech on campus, The New Indian Express report.
  • [Sep 28] RSS raises objections to uncensored content of Manoj Bajpayee’s “The Family Man”, The Hindu report; Outlook report.

Privacy and Data Protection

  • [Sep 23] A landmark decision on Tuesday could radically reshape how Google’s search results work, Business Insider report.
  • [Sep 23] Google tightens its voice assistant rules amidst privacy backlash, Wired report.
  • [Sep 24] Dell rolls out new data protection storage appliances and capabilities, ZDNet report.
  • [Sep 24] ‘Right to be forgotten’ privacy rule is limited by Europe’s top court, The New York Times report; Live Law report.
  • [Sep 27] Nigeria launches investigation into Truecaller for potential breach of privacy, Medianama report.
  • [Sep 29] Right to be forgotten will be arduous as India frames data protection law, Business Standard report.
  • [Sep 30] FPIs move against data bill, seek exemption, ET Telecom report; Entrackr report.

Data Localisation

  • [Sep 26] Reconsider imposition of data localisation: IAMAI report, The Economic Times report.
  • [Sep 27] Why data is not oil: Here’s how India’s data localisation norms will hurt the economy, Inc42 report.

Digital Payments and Fintech

  • [Sep 23] RBI rider on credit bureau data access has Fintech in a quandary, ET Tech report.

Cryptocurrencies

  • [Sep 23] Facebook reveals Libra currency basket breakdown, Coin Desk report.
  • [Sep 23] The face of India’s crypto lobby readies for a clash, Ozy report.
  • [Sep 23] Why has Brazil’s Central Bank included crypto assets in trade balance? Coin Telegraph report.
  • [Sep 24] French retailers widening crypto acceptance, Tech Xplore report.
  • [Sep 26] Why crypto hoaxes are so successful, Quartz report.
  • [Sep 26] South Africa: the next frontier for crypto exchanges, Coin Telegraph report.
  • [Sep 27] The crypto wars’ strange bedfellows, Forbes report.
  • [Sep 28] Crypto industry is already preparing for Google’s ‘quantum supremacy’, Decrypt report.
  • [Sep 29] How crypto gambling is regulated around the world, Coin Telegraph report.

Tech and Law Enforcement

  • [Sep 29] New WhatsApp and Facebook Encryption ‘Backdoors’ – What’s really going on, Forbes report.
  • [Sep 28] Facebook, WhatsApp will have to share messages with UK Government, Bloomberg report.
  • [Sep 23] Secret FBI subpoenas scoop up personal data from scores of companies, The New York Times report.
  • [Sep 23] ‘Don’t transfer the WhatsApp traceability case’, Internet Freedom Foundation asks Supreme Court, Medianama report.
  • [Sep 24] China offers free subway rides to citizens who register their face with surveillance system, The Independent report.
  • [Sep 24] Facial recognition technology in public housing prompts backlash, The New York Times report.
  • [Sep 24] Facebook-Aadhaar linkage and WhatsApp traceability: Supreme Court says government must frame rules, CNBC TV18 report.
  • [Sep 27] Fashion that counters surveillance cameras, Business Times report.
  • [Sep 27] Unnao rape case: Delhi court directs Apple to give Sengar’s location details on day of alleged rape, Medianama report.
  • [Sep 27] Face masks to decoy t-shirts: the rise of anti-surveillance fashion, Times of India report.
  • [Sep 30] Battle for privacy and encryption: WhatsApp and government head for a showdown on access to messages, ET Prime report.
  • [Sep 29] Improving digital evidence sharing, Scottish Government news report; Public technology report.

Internal Security: J&K

  • [Sep 23] Government launches internet facilitation centre in Pulwama for students, Times of India report; Business Standard report.
  • [Sep 23] Army chief rejects ‘clampdown’ in Jammu and Kashmir, Times of India report.
  • [Sep 24] Rising power: Why India has faced muted criticism over its Kashmir policy, Business Standard report.
  • [Sep 24] ‘Restore Article 370, 35A in Jammu and Kashmir, withdraw army, paramilitary forces’: 5-member women’s group will submit demands to Amit Shah, Firstpost report.
  • [Sep 24] No normalcy in Kashmir, says fact finding team, The Hindu report.
  • [Sep 25] End clampdown: Kashmir media, The Telegraph report.
  • [Sep 25] Resolve Kashmir issue through dialogue and not through collision: Erdogan, The Economic Times report.
  • [Sep 25] Rajya Sabha deputy chair thwarts Pakistan’s attempt at Kashmir at Eurasian Conference, The Economic Times report.
  • [Sep 25] Pakistan leader will urge UN intervention in Kashmir, The New York Times report.
  • [Sep 25] NSA Ajit Doval back in Srinagar to review security situation, The Hindustan Times report.
  • [Sep 27] Communication curbs add fresh challenge to Kashmir counter-insurgency operations, News18 report.
  • [Sep 27] Fresh restrictions in parts of Kashmir, The Hindu report.
  • [Sep 27] US wants ‘rapid’ easing of Kashmir restrictions, Times of India report.
  • [Sep 27] Kashmir issue: Rescind action on Art. 370, OIC tells India, The Hindu report.
  • [Sep 28] India objects to China’s reference to J&K and Ladakh at UNGA, The Economic Times report; The Hindu report.
  • [Sep 29] Surveillance, area domination operations intensified in Kashmir, The Economic Times report; Financial Express report.
  • [Sep 29] Police impose restrictions in J&K after Imran Khan’s speech at UNGA, India Today report.

Internal Security: NRC and the North-East

  • [Sep 23] Assam framing cyber security policy to secure data related to NRC, police, services, The Economic Times report; Money Control report.
  • [Sep 24] BJP will tell SC that we reject this NRC, says Himanta Biswa Sarma, Business Standard report.
  • [Sep 24] Amit Shah to speak on NRC, Citizenship Amendment Bill in Kolkata on Oct 1, The Economic Times report.
  • [Sep 26] ‘Expensive’ legal battle for those rejected in Assam NRC final list, The Economic Times report.
  • [Sep 27] Scared of NRC? Come back in 2022, The Telegraph report.
  • [Sep 27] Voters left out of NRC will have right to vote, rules Election Commission, India Today report; The Wire report.
  • [Sep 27] NRC: Assam government announces 200 Foreigners Tribunals in 33 districts, Times Now report; Times of India report.
  • [Sep 28] Judge urges new FT members to examine NRC claims with utmost care, Times of India report.

National Security Legislation

  • [Sep 23] Centre will reintroduce Citizenship Bill in Parliament: Himanta Biswa Sarma, The Hindu report.
  • [Sep 26] National Security Guard: History, Functions and Operations, Jagran Josh report.
  • [Sep 28] Left parties seek revocation of decision on Article 370, The Tribune India report.

Tech and National Security

  • [Sep 25] Army to start using Artificial Intelligence in 2-3 years: South Western Army commander, The Print report; India Today report; The New Indian Express report; Financial Express report.
  • [Sep 23] Modi, Trump set new course on terrorism, border security, The Hindu report.
  • [Sep 23] PM Modi in the US: Trump promises more defence deals with India, military trade to go up, Financial Express report.
  • [Sep 23] Punjab police bust terror module supplied with weapons by drones from Pak, NDTV report.
  • [Sep 26] Lockheed Martin to begin supplying F-16 wings from Hyderabad plant in 2020, Livemint report.
  • [Sep 26] Drones used for cross-border arms infiltration in Punjab a national security issue, says Randhawa, The Hindu report.
  • [Sep 27] UK MoD sets up cyber team for secure innovation, UK Authority report.
  • [Sep 29] New tri-services special ops division, meant for surgical strikes, finishes first exercise today, The Print report.
  • [Sep 30] After Saudi attacks, India developing anti-drone technology to counter drone menace, Eurasian Times report.

Tech and Elections

  • [Sep 20] Microsoft will offer free Windows 7 support for US election officials through 2020, Cyber Scoop report.
  • [Sep 26] Social media platforms to follow ‘code of ethics’ in all future elections: EC, The Economic Times report.
  • [Sep 28] Why is EC not making ‘authentic’ 2019 Lok Sabha results public? The Quint report.

Cybersecurity

  • [Sep 24] Androids and iPhones hacked with just one WhatsApp click – and Tibetans are under attack, Forbes report.
  • [Sep 25] Sharp questions can help board oversee cybersecurity, The Wall Street Journal report.
  • [Sep 25] What we know about CrowdStrike, the cybersecurity firm Trump mentioned in Ukraine call, and its billionaire CEO, Forbes report.
  • [Sep 25] 36% smaller firms witnessed data breaches in 2019 globally, ET Rise report.
  • [Sep 28] Defence Construction Canada hit by cyber attack – corporation’s team trying to restore full IT capability, Ottawa Citizen report.
  • [Sep 29] Experts call for collective efforts to counter cyber threats, The New Indian Express report.
  • [Sep 29] Microsoft spots malware that turns PCs into zombie proxies, ET Telecom report.
  • [Sep 29] US steps up scrutiny of airplane cybersecurity, The Wall Street Journal report.

Cyberwarfare

  • [Sep 24] 27 countries sign cybersecurity pledge urging rules-based control over cyberspace in Joint Statement, with digs at China and Russia, CNN report; IT world Canada report; Meri Talk report.
  • [Sep 26] Cyber Peace Institute fills a critical need for cyber attack victims, Microsoft blog.
  • [Sep 29] Britain is ‘at war every day’ due to constant cyber attacks, Chief of the Defence Staff says, The Telegraph report.

Telecom and 5G

  • [Sep 27] Telcos’ IT investments intact, auto companies may slow pace: IBM exec, ET Tech report.
  • [Sep 29] Telecom players to lead digital transformation in India, BW Businessworld report.

More on Huawei

  • [Sep 22] Huawei confirms another nasty surprise for Mate 30 buyers, Forbes report.
  • [Sep 23] We’re on the same page with government on security: Huawei, The Economic Times report.
  • [Sep 24] The debate around 5G’s safety is getting in the way of science, Quartz report (paywall).
  • [Sep 24] Govt will take call on Huawei with national interest in mind: Telecom Secy, Business Standard report.
  • [Sep 24] Huawei enables 5G smart travel system at Beijing airport, Tech Radar report.
  • [Sep 25] Huawei 5G backdoor entry unproven, The Economic Times report.
  • [Sep 25] US prepares $1 bn fund to replace banned Huawei kit, Tech Radar report.
  • [Sep 26] Google releases large dataset of deepfakes for researchers, Medianama report.
  • [Sep 26] Huawei willing to license 5G technology to a US firm, The Hindu Business Line report; Business Standard report.
  • [Sep 26] Southeast Asia’s top phone carrier still open to Huawei 5G, Bloomberg report.
  • [Sep 29] Russia rolls out the red carpet for Huawei over 5G, The Economic Times report.

Emerging Tech and AI

  • [Sep 20] Google researchers have reportedly achieved “Quantum Supremacy”, Financial Times report; MIT Technology Review report.
  • [Sep 23] Artificial Intelligence revolution in healthcare in India: All we need to know, The Hindustan Times report.
  • [Sep 23] A new joystick for the brain-controlled vehicles of the future, Defense One report.
  • [Sep 24] Computing and AI: Humanistic Perspectives from MIT, MIT News report.
  • [Sep 24] Emerging technologies such as AI, 5G posing threats to privacy, says report, China Daily report.
  • [Sep 25] Alibaba unveils chip developed for artificial intelligence era, Financial Times report.
  • [Sep 26] Pentagon wants AI to interpret ‘strategic activity’ around the globe, Defense One report.
  • [Sep 27] Only 10 jobs created for every 100 jobs taken away by AI, ET Tech report.
  • [Sep 27] Experts say these emerging technologies should concern us, Business Insider report.
  • [Sep 27] What is on the horizon for export controls on ‘emerging technologies’? Industry comments may hold a clue, Mondaq.com report.
  • [Sep 27] India can become world leader in artificial intelligence: Vishal Sikka, Money Control report.
  • [Sep 27] Elon Musk issues a terrifying prediction of ‘AI robot swarms’ and huge threat to mankind, The Daily Express (UK) report.
  • [Sep 27] Russia’s national AI Centre is taking shape, Defense One report.
  • [Sep 29] Explained: What is ‘quantum supremacy’, The Hindu report.
  • [Sep 29] Why are scientists so excited about a new quantum computing milestone?, Scroll.in report.
  • [Sep 29] Artificial Intelligence has a gender bias problem – just ask Siri, The Wire report.
  • [Sep 29] How AI is changing the landscape of digital marketing, Inc42 report.

Opinions and Analyses

  • [Sep 21] Wim Zijnenburg, Defense One, Time to Harden International Norms on Armed Drones.
  • [Sep 23] David Sanger and Julian Barnes, The New York Times, The urgent search for a cyber silver bullet against Iran.
  • [Sep 23] Neven Ahmad, PRIO Blog, The EU’s response to the drone age: A united sky.
  • [Sep 23] Biswajit Dhar and KS Chalapati Rao, The Wire, Why an India-US Free Trade Agreement would require New Delhi to reorient key policies.
  • [Sep 23] Filip Cotfas, Money Control, Five reasons why data loss prevention has to be taken seriously.
  • [Sep 23] NF Mendoza, Tech Republic, 10 policy principles needed for artificial intelligence.
  • [Sep 24] Ali Ahmed, News Click, Are Indian armed forces turning partisan?: The changing civil-military relationship needs monitoring.
  • [Sep 24] Editorial, Deccan Herald, A polity drunk on Aadhaar.
  • [Sep 24] Mike Loukides, Quartz, The biggest problem with social media has nothing to do with free speech.
  • [Sep 24] Ananth Padmanabhan, Medianama, Civilian Drones: Privacy challenges and potential resolution. 
  • [Sep 24] Celine Herwijer and Dominic Kailash Nath Waughray, World Economic Forum, How technology can fast-track the global goals.
  • [Sep 24] S. Jaishankar, Financial Times, Changing the status of Jammu and Kashmir will benefit all of India.
  • [Sep 24] Editorial, Livemint, Aadhaar Mark 2.
  • [Sep 24] Vishal Chawla, Analytics India Magazine, AI in Defence: How India compares to US, China, Russia and South Korea.
  • [Sep 25] Craig Borysowich, IT Toolbox, Origin of Markets for Artificial Intelligence.
  • [Sep 25] Sudeep Chakravarti, Livemint, After Assam, NRC troubles may visit ‘sister’ Tripura.
  • [Sep 25] DH Kass, MSSP Blog, Cyber Warfare: New Rules of Engagement?
  • [Sep 25] Chris Roberts, Observer, How artificial intelligence could make nuclear war more likely.
  • [Sep 25] Ken Tola, Forbes, What is cybersecurity?
  • [Sep 25] William Dixon and Jamil Farshchi, World Economic Forum, AI is transforming cybercrime. Here’s how we can fight back.
  • [Sep 25] Patrick Tucker, Defense One, Big Tech bulks up its anti-extremism group. But will it do more than talk?
  • [Sep 26] Udbhav Tiwari, Huffpost India, Despite last year’s Aadhaar judgement, Indians have less privacy than ever.
  • [Sep 26] Sylvia Mishra, Medianama, India and the United States: The time has come to collaborate on commercial drones.
  • [Sep 26] Subimal Bhattacharjee, The Hindu Business Line, Data flows and our national security interests.
  • [Sep 26] Ram Sagar, Analytics India Magazine, Top countries that are betting big on AI-based surveillance.
  • [Sep 26] Patrick Tucker, Defense One, AI will tell future medics who lives and who dies on the battlefield.
  • [Sep 26] Karen Hao, MIT Technology Review, This is how AI bias really happens – and why it’s so hard to fix.
  • [Sep 27] AG Noorani, Frontline, Kashmir dispute: Domestic or world issue?
  • [Sep 27] Sushanta Talukdar, Frontline, Final NRC list: List of exclusion.
  • [Sep 27] Freddie Stuart, Open Democracy, How facial recognition technology is bringing surveillance capitalism to our streets.
  • [Sep 27] Paul de Havilland, Crypto Briefing, Did Bitcoin crash or dip? Crypto’s trajectory moving forward.
  • [Sep 28] John Naughton, The Guardian, Will advances in quantum computing affect internet security?
  • [Sep 28] Suhrith Parthasarathy, The Hindu, The top court and a grave of freedom.
  • [Sep 28] Kazim Rizvi, YourStory, Data Protection Authority: the cornerstone to implement data privacy.
  • [Sep 28] Shekhar Gupta, The Print, Modi has convinced the world that Kashmir is India’s internal affair – but they’re still watching.
  • [Sep 29] Indrani Bagchi, The Economic Times, Why India needs to tread carefully on Kashmir.
  • [Sep 29] Medha Dutta Yadav, The New Indian Express, Data: Brave new frontier.
  • [Sep 29] Jon Markman, Forbes, New cybersecurity companies have their heads in the cloud.
  • [Sep 29] Editorial, The New York Times, On cybersecurity: Two scoops of perspective.
  • [Sep 30] Kuldip Singh, The Quint, New IAF Chief’s appointment: Why RKS Bhadauria must tread lightly.
  • [Sep 30] Karishma Koshal, The Caravan, With the data-protection bill in limbo, these policies contravene the right to privacy.