The Right to be Forgotten – Examining Approaches in Europe and India

This is a guest post authored by Aishwarya Giridhar.

How far does the right to control personal information about oneself extend online? Would it extend, for example, to having a person’s name redacted from a court order that surfaces in online searches, or to removing pictures or videos shared online without consent, as in cases of revenge pornography or sexual violence? These are some questions that have come up in Indian courts and are among the issues that jurisprudence relating to the ‘right to be forgotten’ seeks to address. This right is derived from the concepts of personal autonomy and informational self-determination, which are core aspects of the right to privacy. They were integral to the Indian Supreme Court’s conception of privacy in Puttaswamy vs. Union of India, which held that privacy is a fundamental right guaranteed by the Indian Constitution. However, privacy is not an absolute right and needs to be balanced against other rights such as freedom of expression and access to information, and the right to be forgotten tests how far the right to privacy extends.

On a general level, the right to be forgotten enables individuals to have personal information about themselves removed from publicly available sources under certain circumstances. This post examines the right to be forgotten under the General Data Protection Regulation (GDPR) in Europe, and the draft Personal Data Protection Bill, 2019 (PDP Bill) in India.

What is the right to be forgotten?

The right to be forgotten was brought into prominence in 2014 when the European Court of Justice (ECJ) held that users can require search engines to remove personal data from search results where the linked websites contain information that is “inadequate, irrelevant or no longer relevant, or excessive.” The Court recognised that search engines can significantly affect a person’s right to privacy, since they allow any Internet user to obtain a wide range of information on a person’s life that would have been much harder, or even impossible, to find without the search engine.

The GDPR provides statutory recognition to the right to be forgotten in the form of a ‘right to erasure’ (Article 17). It provides data subjects the right to request controllers to erase personal data in some circumstances, such as when the data is no longer needed for their original processing purpose, or when the data subject has withdrawn her consent or objected to data processing. In this context, the data subject is the person to whom the relevant personal data relates, and the controller is the entity which determines how and why the data would be processed. Under this provision, the controller would be required to assess whether to keep or remove information when it receives a request from data subjects.

In comparison, clause 20 of India’s Personal Data Protection Bill (PDP Bill), which proposes a right to be forgotten, allows data principals (similar to data subjects) to require data fiduciaries (similar to data controllers) to restrict or prevent the disclosure of personal information. This is possible where such disclosure is no longer necessary, was made on the basis of consent which has since been withdrawn, or was made contrary to law. Unlike the GDPR, the PDP Bill requires data principals to approach Adjudicating Officers appointed under the legislation to request that the disclosure of personal information be restricted. The rights provided under both the GDPR and the PDP Bill are not absolute and are limited by the freedom of speech and information and other specified exceptions. In the PDP Bill, for example, some of the factors the Adjudicating Officer is required to account for are the sensitivity of the data, the scale of disclosure and the extent to which it is sought to be restricted, the role of the data principal in public life, and the relevance of the data to the public.

Although the PDP Bill, if passed, would be the first legislation to recognise this right in India, courts have provided remedies that allow for removing personal information in some circumstances. Petitioners have approached courts for removing information in cases ranging from matrimonial disputes to defamation and information affecting employment opportunities, and courts have sometimes granted the requested reliefs. Courts have also acknowledged the right to be forgotten in some cases, although there have been conflicting orders on whether a person can have personal information redacted from judicial decisions available on online repositories and other sources. In November last year, the Orissa High Court also highlighted the importance of the right to be forgotten for persons whose photos and videos have been uploaded online without their consent, especially in cases of sexual violence. These cases also highlight why it is essential that this right is provided by statute, so that the extent of protections offered under this right, as well as the relevant safeguards, can be clearly defined.

Intersections with access to information and free speech

The most significant criticisms of the right to be forgotten stem from its potential to restrict speech and access to information. Critics are concerned that this right will lead to widespread censorship and a whitewashing of personal histories when it comes to past crimes and information on public figures, and a less free and open Internet. There are also concerns that global takedowns of information, if required by national laws, can severely restrict speech and serve as a tool of censorship. Operationalising this right can also lead to other issues in practice.

For instance, the right framed under the GDPR requires private entities to balance the right to privacy with the larger public interest and the right to information. Two cases decided by the ECJ in 2019 provided some clarity on the obligations of search engines in this context. In the first, the Court clarified that controllers are not under an obligation to apply the right globally and that removing search results for domains in the EU would suffice. However, it left open the option for countries to enact laws that would require global delisting. In the second case, among other issues, the Court identified some factors that controllers would need to account for in considering requests for delisting. These included the nature of the information, the public’s interest in having that information, and the role the data subject plays in public life, among others. Guidelines framed by the Article 29 Working Party, set up under the GDPR’s precursor, also provide limited, non-binding guidance for controllers in assessing which requests for delisting are valid.

Nevertheless, the balance between the right to be forgotten and competing considerations can still be difficult to assess on a case-by-case basis. This issue is compounded by concerns that data controllers would be incentivised to over-remove content to shield themselves from liability, especially where they have limited resources. While larger entities like Google may have the resources to invest in assessing claims under the right to be forgotten, this will not be possible for smaller platforms. There are also concerns that requiring private parties to make such assessments amounts to the ‘privatisation of regulation’, and that the limited potential for transparency on erasures removes an important check against over-removal of information.

As a result of some of this criticism, the right to be forgotten is framed differently under the PDP Bill in India. Unlike the GDPR, the PDP Bill requires Adjudicating Officers, and not data fiduciaries, to assess whether the rights and interests of the data principal in restricting disclosure override others’ right to information and free speech. Adjudicating Officers are required to have special knowledge of, or professional experience in, areas relating to law and policy, and the terms of their appointment would have to ensure their independence. While they seem better suited to make this assessment than data fiduciaries, much of how this right is implemented will depend on whether the Adjudicating Officers are able to function truly independently and are adequately qualified. Additionally, this system is likely to lead to long delays in assessment, especially if the volume of requests is similar to that in the EU. It will also not address the issues with transparency highlighted above. Moreover, the PDP Bill is not finalised and may change significantly, since the Joint Parliamentary Committee that is reviewing it is reportedly considering substantial changes to its scope.

What is clear is that there are no easy answers when it comes to providing the right to be forgotten. It can provide a remedy in some situations where people do not currently have recourse, such as with revenge pornography or other non-consensual use of data. However, when improperly implemented, it can significantly hamper access to information. Drawing lessons from how this right is evolving in the EU can prove instructive for India. Although the assessment of whether or not to delist information will always be subjective to some extent, there are steps that can be taken to provide clarity on how such determinations are made. Clearly outlining the scope of the right in the relevant legislation, and developing substantive standards aimed at protecting access to information that can be used in assessing whether to remove information, are some measures that can help strike a better balance between privacy and competing considerations.

Addition of US Privacy Cases on the Privacy Law Library

This post is authored by Swati Punia.

We are excited to announce the addition of privacy jurisprudence from the United States’ Supreme Court to the Privacy Law Library. These cases cover a variety of subject areas, from the right against intrusive search and seizure to the right to abortion and the right to sexual intimacy and relationships. You may access all the US cases on our database here.

(The Privacy Law Library is our global database of privacy law and jurisprudence, currently containing cases from India, Europe (ECJ and ECtHR), the United States, and Canada.)

The Supreme Court of the US (SCOTUS) has carved out the right to privacy from various provisions of the US constitution, particularly the first, fourth, fifth, ninth and fourteenth amendments. The Court has recognised the right to privacy in varying contexts through an expansive interpretation of these constitutional provisions. For instance, the Court has read privacy rights into the first amendment, protecting the private possession of obscene material from State intrusion; the fourth amendment, protecting the privacy of the person and possessions from unreasonable State intrusion; and the fourteenth amendment, which recognises an individual’s decisions about abortion and family planning as part of the right to liberty, encompassing aspects of privacy such as dignity and autonomy under the amendment’s due process clause.

The right to privacy is not expressly provided for in the US constitution. However, the Court identified an implicit right to privacy, for the very first time, in Griswold v. Connecticut (1965) in the context of the right to use contraceptives and marital privacy. Since then, the Court has extended the scope of the right to include, inter alia, a reasonable expectation of privacy against State intrusion in Katz v. United States (1967), abortion rights of women in Roe v. Wade (1973), and the right to sexual intimacy between consenting adults of the same sex in Lawrence v. Texas (2003).

The US privacy framework consists of several privacy laws and regulations developed at both the federal and state level. As of now, US privacy laws are primarily sector specific; there is no single comprehensive federal data protection law comparable to the European Union’s General Data Protection Regulation (GDPR) or the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA). However, certain states in the US, like California, have enacted comprehensive privacy laws comparable to the GDPR and PIPEDA. The California Consumer Privacy Act (CCPA), which came into effect on January 1, 2020, aims to protect consumers’ privacy across industries. It codifies certain rights and remedies for consumers, and obligations for entities/businesses. One of its main aims is to give consumers more control over their data by obligating businesses to ensure transparency about how they collect, use, share and sell consumer data.

To know more about the status of the right to privacy in the US, refer to our page here. Some of the key privacy cases from the SCOTUS on our database are Griswold v. Connecticut, Time Inc. v. Hill, Roe v. Wade, Katz v. United States, and Stanley v. Georgia.

CJEU sets limits on Mass Communications Surveillance – A Win for Privacy in the EU and Possibly Across the World

This post has been authored by Swati Punia

On 6 October 2020, the European Court of Justice (ECJ/Court) delivered its much anticipated judgments in the consolidated matter of C-623/17, Privacy International from the UK, and the joined cases from France, C-511/18, La Quadrature du Net and others, C-512/18, French Data Network and others, and Belgium, C-520/18, Ordre des barreaux francophones et germanophone and others (collectively, the “Bulk Communications Surveillance Judgments”).

In this post, I briefly discuss the Bulk Communication Surveillance Judgments, their significance for other countries and for India. 

Through these cases, the Court invalidated the disproportionate interference by Member States with the rights of their citizens as provided by EU law, in particular the Directive on privacy and electronic communications (e-Privacy Directive) and the European Union’s Charter of Fundamental Rights (EU Charter). The Court assessed the Member States’ bulk communications surveillance laws and practices relating to their access to and use of telecommunications data.

The Court recognised the importance of the State’s positive obligations that may require conducting surveillance, although it noted that it was essential for surveillance systems to conform with the general principles of EU law and the rights guaranteed under the EU Charter. It laid down clear principles and measures as to when and how the national authorities could access and use telecommunications data (discussed further in the sections ‘The UK Judgment’ and ‘The French and Belgian Judgment’). It also carved out a few exceptions (in the joined cases from France and Belgium) for emergency situations, but held that such measures would have to meet the threshold of a serious and genuine threat (discussed further in the section ‘The French and Belgian Judgment’).

The Cases in Brief 

The Court delivered two separate judgments, one in the UK case and one in the joined cases from France and Belgium. Since these cases raised similar issues, the proceedings were joined. The UK application challenged the bulk acquisition and use of telecommunications data by its Security and Intelligence Agencies (SIAs) in the interest of national security (under the UK’s Telecommunications Act 1984). The French and Belgian applications challenged indiscriminate data retention and access by SIAs for combating crime.

The French and Belgian applications questioned the legality of their respective data retention laws (numerous domestic surveillance laws which permitted bulk collection of telecommunications data) that imposed blanket obligations on Electronic Communications Service Providers (ECSPs) to provide relevant data. The Belgian law required ECSPs to retain various kinds of traffic and location data for a period of 12 months, whereas the French law provided for automated analysis and real-time data collection measures for preventing terrorism. The French application also raised the issue of notifying the person under surveillance.

The Member States contended that such surveillance measures enabled them to, inter alia, safeguard national security, prevent terrorism, and combat serious crime. Hence, they claimed that the e-Privacy Directive did not apply to their surveillance laws and activities.

The UK Judgment

The ECJ found the UK surveillance regime unlawful and inconsistent with EU law, specifically the e-Privacy Directive. The Court analysed the scope and scheme of the e-Privacy Directive with regard to the exclusion of certain State purposes such as national and public security, defence, and criminal investigation. Noting the importance of such State purposes, it held that EU Member States could adopt legislative measures that restricted the scope of the rights and obligations (Articles 5, 6 and 9) provided in the e-Privacy Directive. However, this was allowed only if the Member States complied with the requirements laid down by the Court in Tele2 Sverige and Watson and Others (C-203/15 and C-698/15) (Tele2) and the e-Privacy Directive, and respected the EU Charter. In Tele2, the ECJ held that legislative measures obligating ECSPs to retain data must be targeted and limited to what was strictly necessary. Such targeted retention had to be with regard to specific categories of persons and data, and for a limited time period. Also, access to the data had to be subject to prior review by an independent body.

The e-Privacy Directive ensures the confidentiality of electronic communications and the data relating to them (Article 5(1)). It allows ECSPs to retain metadata (context-specific data relating to users and subscribers, location and traffic) for various purposes such as billing, value added services and security. However, this data must be deleted or made anonymous once the purpose is fulfilled, unless a law allows for a derogation for State purposes. The e-Privacy Directive allows the Member States to derogate (Article 15(1)) from the principle of confidentiality and the corresponding obligations (contained in Article 6 (traffic data) and Article 9 (location data other than traffic data)) for certain State purposes when it is appropriate, necessary and proportionate.

The Court clarified that measures undertaken for the purpose of national security would not make EU law inapplicable or exempt the Member States from their obligation to ensure confidentiality of communications under the e-Privacy Directive. Hence, an independent review of surveillance activities such as data retention for indefinite time periods, or further processing or sharing, must be conducted before such activities are authorised. The Court noted that the domestic law, as it stood, did not provide for such prior review as a limit on the surveillance activities mentioned above.

The French and Belgian Judgment

While assessing the joined cases, the Court arrived at a determination in similar terms as the UK case. It reiterated that the exception (Article 15(1) of the e-Privacy Directive) to the principle of confidentiality of communications (Article 5(1) of the e-Privacy Directive) should not become the norm. Hence, national measures that provided for general and indiscriminate data retention and access for State purposes were held to be incompatible with EU law, specifically the e-Privacy Directive.

The Court in the joined cases, unlike the UK case, allowed specific derogations for State purposes such as safeguarding national security, combating serious crime and preventing serious threats. It laid down certain requirements that the Member States had to comply with in case of derogations. The derogations should (1) be clear and precise as to the stated objective, (2) be limited to what is strictly necessary and for a limited time period, (3) have a safeguards framework including substantive and procedural conditions to regulate such instances, and (4) include guarantees to protect the concerned individuals against abuse. They should also be subject to an ‘effective review’ by a court or an independent body, and must comply with the general principles and proportionality requirements of EU law and the rights provided in the EU Charter.

The Court held that in establishing a minimum threshold for a safeguards framework, the EU Charter must be interpreted along with the European Convention on Human Rights (ECHR). This would ensure consistency between the rights guaranteed under the EU Charter and the corresponding rights guaranteed in the ECHR (as per Article 52(3) of the EU Charter).

The Court, in particular, allowed general and indiscriminate data retention in cases of a serious threat to national security. Such a threat should be genuine, and present or foreseeable. Real-time data collection and automated analysis were allowed in such circumstances, but real-time data collection had to be limited to persons suspected of terrorist activities, restricted to what was strictly necessary, and subject to prior review. The Court also allowed general and indiscriminate retention of IP addresses for the purposes of national security, combating serious crime and preventing serious threats to public security, provided such retention was limited in time to what was strictly necessary. For such purposes, the Court further permitted ECSPs to retain data relating to the identity of their customers (such as name, postal and email/account addresses and payment details) in a general and indiscriminate manner, without specifying any time limitation.

The Court allowed targeted data retention for the purpose of safeguarding national security and preventing crime, provided that it was for a limited time period and strictly necessary and was done on the basis of objective and non-discriminatory factors. It was held that such retention should be specific to certain categories of persons or geographical areas. The Court also allowed, subject to effective judicial review, expedited data retention after the initial retention period ended, to shed light on serious criminal offences or acts affecting national security. Lastly, in the context of criminal proceedings, the Court held that it was for the Member States to assess the admissibility of evidence resulting from general and indiscriminate data retention. However, the information and evidence must be excluded where it infringes on the right to a fair trial. 

Significance of the Bulk Communication Surveillance Judgments

With these cases, the ECJ decisively resolved a long-standing discord between the Member States and privacy activists in the EU. For a while now, the Court has been dealing with questions relating to surveillance programs for national security and law enforcement purposes. Though the Member States have largely considered these programs outside the ambit of EU privacy law, the Court has been expanding the scope of privacy rights. 

Placing limitations and controls on State powers in democratic societies was considered necessary by the Court in its ruling in Privacy International. This decision may act as a trigger for considering surveillance reforms in many parts of the world, and more specifically for those aspiring to attain an EU adequacy status. India could benefit immensely should it choose to pay heed. 

As of date, India does not have a comprehensive surveillance framework. Various provisions of the Personal Data Protection Bill, 2019 (Bill), the Information Technology Act, 2000, the Telegraph Act, 1885, and the Code of Criminal Procedure, 1973 provide for targeted surveillance measures. The Bill provides wide powers to the executive (under Clauses 35, 36 and 91 of the Bill) to access personal and non-personal data in the absence of proper and necessary safeguards. This may cause problems for achieving EU adequacy status under Article 45 of the EU General Data Protection Regulation (GDPR), which assesses the personal data management rules of third-party countries.

Recent news reports suggest that the Bill, which is under legislative consideration, is likely to undergo a significant overhaul. India could use this as an opportunity to introduce meaningful changes in the Bill as well as its surveillance regime. India’s privacy framework could be strengthened by adhering to the principles outlined in the Justice K.S. Puttaswamy v. Union of India judgment and the Bulk Communications Surveillance Judgments.

Building an AI Governance Framework for India, Part III

Embedding Principles of Privacy, Transparency and Accountability

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Aayog Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG’s comments and analysis on the Working Document can be accessed here.

In our first post in the series, ‘Building an AI governance framework for India’, we discussed the legal and regulatory implications of the Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its constitutional framework, and (2) based on clearly articulated overarching ‘Principles for Responsible AI’. Part II of the series discussed specific Principles for Responsible AI – Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. We explored the constituent elements of these principles and the avenues for incorporating them into the Indian regulatory framework. 

In this final post of the series, we will discuss the remaining principles of Privacy, Transparency and Accountability. 

Principle of Privacy 

Given the diversity of AI systems, the privacy risks they pose to individuals, and to society as a whole, are also varied. These may be broadly related to:

(i) Data protection and privacy: This relates to the privacy implications of the use of data by AI systems and the data protection considerations which arise from this use. There are two broad aspects to consider in terms of the privacy implications of the use of data by AI systems. Firstly, the use of data by AI systems must comply with the applicable legal frameworks for data protection. Secondly, given that AI systems can be used to re-identify anonymised data, the mere anonymisation of data for the training of AI systems may not provide adequate levels of protection for an individual’s privacy.

a) Data protection legal frameworks: Machine learning and AI technologies have existed for decades; however, it is the explosion in the availability of data that accounts for the advancement of AI technologies in recent years. Machine learning and AI systems depend upon data for their training. Generally, the more data the system is given, the more it learns and ultimately the more accurate it becomes. The application of existing data protection frameworks to the use of data by AI systems may raise challenges.

In the Indian context, the Personal Data Protection Bill, 2019 (PDP Bill), currently being considered by Parliament, contains some provisions that may apply to some aspects of the use of data by AI systems. One such provision is Clause 22 of the PDP Bill, which requires data fiduciaries to incorporate the seven ‘privacy by design’ principles and embed privacy and security into the design and operation of their product and/or network. However, given that AI systems rely significantly on anonymised personal data, their use of data may not fall squarely within the regulatory domain of the PDP Bill. The PDP Bill does not apply to the regulation of anonymised data at large but the Data Protection Authority has the power to specify a code of practice for methods of de-identification and anonymisation, which will necessarily impact AI technologies’ use of data.

b) Use of AI to re-identify anonymised data: AI applications can be used to re-identify anonymised personal data. To safeguard the privacy of individuals, datasets composed of personal data are often anonymised through a de-identification and sampling process before they are shared for the purposes of training AI systems. However, current technology makes it possible for AI systems to reverse this process of anonymisation and re-identify people, which has significant privacy implications for an individual’s personal data.
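
As a rough illustration of why anonymisation alone may not be enough (this sketch is not drawn from the Working Document; the datasets, column names and values below are entirely hypothetical), the classic ‘linkage attack’ re-attaches identities to de-identified records by joining them with a public dataset on shared quasi-identifiers such as postcode, date of birth and sex:

import pandas as pd

# Hypothetical "anonymised" dataset: names removed, but quasi-identifiers retained.
health_records = pd.DataFrame({
    "pin_code":   ["560001", "110001", "400001"],
    "birth_date": ["1985-03-12", "1990-07-30", "1978-11-02"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["diabetes", "hypertension", "asthma"],
})

# Hypothetical public dataset (e.g. an electoral roll) that still contains names.
public_roll = pd.DataFrame({
    "name":       ["A. Kumar", "B. Singh", "C. Rao"],
    "pin_code":   ["560001", "110001", "400001"],
    "birth_date": ["1985-03-12", "1990-07-30", "1978-11-02"],
    "sex":        ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to the "anonymised" records.
re_identified = health_records.merge(public_roll, on=["pin_code", "birth_date", "sex"])
print(re_identified[["name", "diagnosis"]])

AI systems can perform far more sophisticated versions of this matching at scale, for instance by inferring missing attributes, which is why anonymisation on its own is an increasingly weak safeguard.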

(ii) Impact on society: The impact of the use of AI systems on society essentially relates to broader privacy considerations that arise at a societal level due to the deployment and use of AI, including mass surveillance, psychological profiling, and the use of data to manipulate public opinion. AI-powered facial recognition surveillance is one such application with significant privacy implications for society as a whole. It enables individuals to be easily tracked and identified, and has the potential to significantly transform expectations of privacy and anonymity in public spaces.

Due to the varying nature of privacy risks and implications caused by AI systems, we will have to design various regulatory mechanisms to address these concerns. It is important to put in place a reporting and investigation mechanism that collects and analyses information on privacy impacts caused by the deployment of AI systems, and privacy incidents that occur in different contexts. The collection of this data would allow actors across the globe to identify common threads of failure and mitigate against potential privacy failures arising from the deployment of AI systems. 

To this end, we can draw on a mechanism that is currently in place for reporting and investigating aircraft incidents, as detailed in Annex 13 to the Convention on International Civil Aviation (Chicago Convention). It lays down the procedure for investigating aviation incidents and a reporting mechanism for sharing information between countries. The aim of such an accident investigation is not to apportion blame or liability, but rather to extensively study the cause of the accident and prevent future incidents.

A similar incident investigation mechanism may be employed for AI incidents involving privacy breaches. With many countries now widely developing and deploying AI systems, such a model of incident investigation would ensure that countries can learn from each other’s experiences and deploy more privacy-secure AI systems.

Principle of Transparency

The concept of transparency is a recognised prerequisite for the realisation of ‘trustworthy AI’. The goal of transparency in ethical AI is to make sure that the functioning of the AI system and resultant outcomes are non-discriminatory, fair, and bias mitigating, and that the AI system inspires public confidence in the delivery of safe and reliable AI innovation and development. Additionally, transparency is also important in ensuring better adoption of AI technology—the more users feel that they understand the overall AI system, the more inclined and better equipped they are to use it.

The level of transparency must be tailored to its intended audience. Information about the working of an AI system should be contextualised to the various stakeholder groups interacting with and using the AI system. The Institute of Electrical and Electronics Engineers (IEEE), a global professional organisation of electronic and electrical engineers, has suggested that different stakeholder groups may require varying levels of transparency. This means that groups such as users, incident investigators, and the general public would require different standards of transparency depending upon the nature of the information relevant to their use of the AI system.

Presently, many AI algorithms are black boxes: automated decisions are taken based on machine learning over training datasets, and the decision-making process is not explainable. When such AI systems produce a decision, human end users do not know how the system arrived at its conclusions. This raises two major transparency problems: the public perception and understanding of how AI works, and how much developers themselves understand about their own AI system’s decision-making process. In many cases, developers may not know, or be able to explain, how an AI system reaches its conclusions or how it has arrived at certain solutions.

This results in a lack of transparency. Some organisations have suggested opening up AI algorithms for scrutiny and ending reliance on opaque algorithms. On the other hand, the NITI Working Document is of the view that disclosing the algorithm is not the solution and instead, the focus should be on explaining how the decisions are taken by AI systems. Given the challenges around explainability discussed above, it will be important for NITI Aayog to discuss how such an approach will be operationalised in practice.

While many countries and organisations are researching different techniques that may be useful in increasing the transparency of an AI system, one suggestion that has gained traction in the last few years is the introduction of labelling mechanisms for AI systems. An example of this is Google’s proposal to use ‘Model Cards’, which are intended to clarify the scope of an AI system’s deployment and minimise its usage in contexts for which it may not be well suited.

Model cards are short documents which accompany a trained machine learning model. They enumerate the benchmarked evaluation of the working of an AI system in a variety of conditions, across different cultural, demographic, and intersectional groups which may be relevant to the intended application of the AI system. They also contain clear information on an AI system’s capabilities including the intended purpose for which it is being deployed, conditions under which it has been designed to function, expected accuracy and limitations. Adopting model cards and other similar labelling requirements in the Indian context may be a useful step towards introducing transparency into AI systems. 
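
As a rough sketch of what such a label might contain (the field names and values below are illustrative and are not taken from Google’s specification or any official schema), a model card can be thought of as a short structured record that travels with the trained model:

# Illustrative sketch only: a model card represented as a structured record.
# Field names and values are hypothetical, loosely echoing the sections
# commonly proposed for model cards; this is not an official specification.
model_card = {
    "model_details": {"name": "loan-approval-classifier", "version": "0.3"},
    "intended_use": "Pre-screening of consumer loan applications; not for final decisions.",
    "out_of_scope_uses": ["employment screening", "law enforcement"],
    "training_data": "Anonymised loan applications, 2015-2019 (hypothetical).",
    "evaluation": {
        # Benchmarked performance across groups relevant to the intended deployment.
        "overall_accuracy": 0.87,
        "accuracy_by_group": {"age_18_30": 0.84, "age_31_60": 0.89, "age_60_plus": 0.81},
    },
    "limitations": "Accuracy degrades for applicants with thin credit histories.",
}

A labelling requirement along these lines would give users, auditors and regulators a common, comparable summary of what a system is for, how it was evaluated across relevant groups, and where it is likely to fail.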

Principle of Accountability

The Principle of Accountability aims to recognise the responsibility of the different organisations and individuals that develop, deploy and use AI systems. Accountability is about responsibility, answerability and trust. There is no one standard form of accountability; rather, it depends on the context of the AI system and the circumstances of its deployment.

Holding individuals and entities accountable for harm caused by AI systems poses significant challenges, as AI systems generally involve multiple parties at various stages of the development process. The regulation of the adverse impacts caused by AI systems often goes beyond the existing regimes of tort law, privacy law or consumer protection law. Some degree of accountability can be achieved by enabling greater human oversight. In order to foster trust in AI and appropriately determine the party who is accountable, it is necessary to build a set of shared principles that clarify the responsibilities of each stakeholder involved in the research, development and implementation of an AI system, ranging from developers to service providers and end users.

Accountability has to be ensured at the following stages of an AI system: 

(i) Pre-deployment: It would be useful to implement an audit process before the AI system is deployed. A potential mechanism for implementing this could be a multi-stage audit process which is undertaken post design, but before the deployment of the AI system by the developer. This would involve scoping, mapping and testing a potential AI system before it is released to the public. This can include ensuring risk mitigation strategies for changing development environments and ensuring documentation of policies, processes and technologies used in the AI system.

Depending on the nature of the AI system and the potential for risk, regulatory guidelines can be developed prescribing the involvement of various categories of auditors, such as internal auditors, expert third-party auditors, and auditors from the relevant regulatory agency, at various stages of the audit. Such pre-deployment audits are aimed at closing the accountability gap which exists currently.

(ii) During deployment: Once the AI system has been deployed, it is important to keep auditing it to track the changes and evolution that occur in the course of its deployment. AI systems constantly learn from data and evolve to become better and more accurate. It is important that the development team continuously monitors the system to capture any errors that may arise, including inconsistencies arising from input data or design features, and addresses them promptly.

(iii) Post-deployment: Ensuring accountability post-deployment in an AI system can be challenging. The NITI Working Document also recognised that assigning accountability for specific decisions becomes difficult in a scenario with multiple players in the development and deployment of an AI system. In the absence of any consequences for decisions harming others, no one party would feel obligated to take responsibility or take actions to mitigate the effect of the AI systems. Additionally, the lack of accountability also leads to difficulties in grievance redressal mechanisms which can be used to address scenarios where harm has arisen from the use of AI systems. 

The Council of Europe, in its guidelines on the human rights impacts of algorithmic systems, highlighted the need for effective remedies to ensure responsibility and accountability for the protection of human rights in the context of the deployment of AI systems. A potential model for grievance redressal is the redressal mechanism suggested in the AI4People’s Ethical Framework for a Good Society report by the Atomium – European Institute for Science, Media and Democracy. The report suggests that any grievance redressal mechanism for AI systems would have to be widely accessible and include redress for harms inflicted, costs incurred, and other grievances caused by the AI system. It must demarcate a clear system of accountability for both organisations and individuals. Of the various redressal mechanisms they have suggested, two significant mechanisms are: 

(a) AI ombudsperson: This would ensure the auditing of allegedly unfair or inequitable uses of AI, reported by users or the public at large, through an accessible judicial process.

(b) Guided process for registering a complaint: This envisions laying down a simple process, similar to filing a Right to Information request, which can be used to bring discrepancies, or faults in an AI system to the notice of the authorities.

Such mechanisms can be evolved to address the human rights concerns and harms arising from the use of AI systems in India. 

Conclusion

In early October 2020, the Government of India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India’s vision and a roadmap for social transformation, inclusion and empowerment through Responsible AI. At the RAISE Summit, speakers underlined the need for adopting AI ethics and a human-centred approach to the deployment of AI systems. However, this conversation is still at a nascent stage, and several rounds of consultations may be required to build these principles into an Indian AI governance and regulatory framework.

As India enters into the next stage of developing and deploying AI systems, it is important to have multi-stakeholder consultations to discuss mechanisms for the adoption of principles for Responsible AI. This will enable the framing of an effective governance framework for AI in India that is firmly grounded in India’s constitutional framework. While the NITI Aayog Working Document has introduced the concept of ‘Responsible AI’ and the ethics around which AI systems may be designed, it lacks substantive discussion on these principles. Hence, in our analysis, we have explored global views and practices around these principles and suggested mechanisms appropriate for adoption in India’s governance framework for AI. Our detailed analysis of these principles can be accessed in our comments to the NITI Aayog’s Working Document Towards Responsible AI for All.

Experimenting With New Models of Data Governance – Data Trusts

This post has been authored by Shashank Mohan

India is in the midst of establishing a robust data governance framework, which will impact the rights and liabilities of all key stakeholders – the government, private entities, and citizens at large. As a parliamentary committee debates its first personal data protection legislation (‘PDPB 2019’), proposals for the regulation of non-personal data and a data empowerment and protection architecture are already underway. 

As data processing capabilities continue to evolve at a feverish pace, basic data protection regulations like the PDPB 2019 might not be sufficient to address new challenges. For example, big data analytics renders traditional notions of consent meaningless as users have no knowledge of how such algorithms behave and what determinations are made about them by such technology. 

Creative data governance models aimed at reversing the power dynamics in the larger data economy are the need of the hour. Recognising these challenges, policymakers are driving the conversation on data governance in the right direction. However, they might be missing out on crucial experiments being run in other parts of the world.

As users of digital products and services increasingly lose control over data flows, various new models of data governance are being recommended, for example data trusts, data cooperatives, and data commons. Of these, one of the most promising new models is the data trust.

(For the purposes of this blog post, I’ll be using the phrase data processors as an umbrella term to cover data fiduciaries/controllers and data processors in the legal sense. The word users is meant to include all data principals/subjects.)

What are data trusts?

Though there are various definitions of data trusts, one which is helpful in understanding the concept is – ‘data trusts are intermediaries that aggregate user interests and represent them more effectively vis-à-vis data processors.’ 

To address the information asymmetries and power imbalances between users and data processors, data trusts will act as facilitators of data flow between the two parties, but on the users’ terms. Data trusts will owe a fiduciary duty to their members and act in their best interests. They will have the requisite legal and technical knowledge to act on behalf of users. Instead of users making potentially ill-informed decisions about data processing, data trusts will make such decisions on their behalf, based on pre-decided factors like a bar on third-party sharing, and in their best interests. For example, data trusts can be to users what mutual fund managers are to potential investors in capital markets.

Currently, in a typical transaction in the data economy, if users wish to use a particular digital service, they have neither the knowledge to understand the possible privacy risks nor the negotiating power to change the terms. Data trusts, with a fiduciary responsibility towards users, specialised knowledge, and multiple members, might be successful in tilting the power dynamics back in favour of users. Data trusts might be relevant from the perspective of both the protection and the controlled sharing of personal as well as non-personal data.

(MeitY’s Non-Personal Data Governance Framework introduces the concept of data trustees and data trusts in India’s larger data governance and regulatory framework. But, this applies only to the governance of ‘non-personal data’ and not personal data, as being recommended here. CCG’s comments on MeitY’s Non-Personal Data Governance Framework, can be accessed – here)

Challenges with data trusts

Though creative solutions like data trusts seem promising in theory, they must be thoroughly tested and experimented with before wide-scale implementation. Firstly, such a new form of trusts, where the subject matter of the trust is data, is not envisaged by Indian law (see section 8 of the Indian Trusts Act, 1882, which provides for only property to be the subject matter of a trust). Current and even proposed regulatory structures don’t account for the regulation of institutions like data trusts (the non-personal data governance framework proposes data trusts, but only as data sharing institutions and not as data managers or data stewards, as being suggested here). Thus, data trusts will need to be codified into Indian law to be an operative model. 

Secondly, data processors might not embrace the notion of data trusts, as it may result in a loss of market power. Larger tech companies, which have existing stores of data on numerous users, may not be sufficiently incentivised to engage with data trust models. Structures will need to be built in a way that data processors are incentivised to participate in such novel data governance models.

Thirdly, the business or operational models for data trusts will need to be aligned to their members, i.e. users. Data trusts will require money to operate, and for-profit entities may not have the best interests of users in mind. Subscription-based models, whether for profit or not, might fail as users are accustomed to free services. Donation-based models might need to be monitored closely for added transparency and accountability.

Lastly, other issues like creation of technical specifications for data sharing and security, contours of consent, and whether data trusts will help in data sharing with the government, will need to be accounted for. 

Privacy centric data governance models

At this early stage of developing data governance frameworks suited to Indian needs, policymakers are at a crucial juncture of experimenting with different models. These models must be centred around the protection and preservation of privacy rights of Indians, both from private and public entities. Privacy must also be read in its expansive definition as provided by the Supreme Court in Justice K.S. Puttaswamy vs. Union of India. The autonomy, choice, and control over informational privacy are crucial to the Supreme Court’s interpretation of privacy. 

(CCG’s privacy law database, which tracks privacy jurisprudence globally and currently contains information from India and Europe, can be accessed here.)

Building an AI governance framework for India

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a “Working Document: Towards Responsible AI for All” (“NITI Working Document/Working Document”). The Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

The Working Document highlights the potential of Artificial Intelligence (“AI”) in the Indian context. It attempts to identify the challenges that will be faced in the adoption of AI and makes some recommendations on how to address these challenges. The Working Document emphasises the economic potential of the adoption of AI in boosting India’s annual growth rate, its potential for use in the social sector (‘AI for All’) and the potential for India to export relevant social sector products to other emerging economies (‘AI Garage’). 

However, this is not the first time that the NITI Aayog has discussed the large-scale adoption of AI in India. In 2018, the NITI Aayog released a discussion paper on the “National Strategy for Artificial Intelligence” (“National Strategy”). Building upon the National Strategy, the Working Document attempts to delineate ‘Principles for Responsible AI’ and identify relevant policy and governance recommendations. 

Any framework for the regulation of AI systems needs to be based on clear principles. The ‘Principles for Responsible AI’ identified by the Working Document include the principles of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. While the NITI Working Document introduces these principles, it does not go into any substantive details on the regulatory approach that India should adopt and what the adoption of these principles into India’s regulatory framework would entail. 

In a series of posts, we will discuss the legal and regulatory implications of the proposed Working Document and more broadly discuss the regulatory approach India should adopt to AI and the principles India should embed in it. In this first post, we map out key considerations that should be kept in mind in order to develop a comprehensive regulatory regime to govern the adoption and deployment of AI systems in India. Subsequent posts will discuss the various ‘Principles for Responsible AI’, their constituent elements and how we should think of incorporating them into the Indian regulatory framework.

Approach to building an AI regulatory framework 

While the adoption of AI has several benefits, there are several potential harms and unintended risks if the technology is not assessed adequately for its alignment with India’s constitutional principles and its impact on the safety of individuals. Depending upon the nature and scope of the deployment of an AI system, its potential risks can include the discriminatory impact on vulnerable and marginalised communities, and material harms such as the negative impact on the health and safety of individuals. In the case of deployments by the State, risks include violation of the fundamental rights to equality, privacy, freedom of assembly and association, and freedom of speech and expression. 

We highlight some of these regulatory considerations below:

Anchoring AI regulatory principles within the constitutional framework of India

The use of AI systems has raised concerns about their potential to violate multiple rights protected under the Indian Constitution such as the right against discrimination, the right to privacy, the right to freedom of speech and expression, the right to assemble peaceably and the right to freedom of association. Any regulatory framework put in place to govern the adoption and deployment of AI technology in India will have to be in consonance with its constitutional framework. While the NITI Working Document does refer to the idea of the prevailing morality of India and its relation to constitutional morality, it does not comprehensively address the idea of framing AI principles in compliance with India’s constitutional principles.

For instance, the government is seeking to acquire facial surveillance technology, and the National Strategy discusses the use of AI-powered surveillance applications by the government to predict crowd behaviour and for crowd management. The use of AI powered surveillance systems such as these needs to be balanced with their impact on an individual’s right to freedom of speech and expression, privacy and equality. Operational challenges surrounding accuracy and fairness in these systems raise further concerns. Considering the risks posed to the privacy of individuals, the deployment of these systems by the government, if at all, should only be done in specific contexts for a particular purpose and in compliance with the principles laid down by the Supreme Court in the Puttaswamy case.

In the context of AI’s potential to exacerbate discrimination, it would be relevant to discuss the State’s use of AI systems for the sentencing of criminals and assessing recidivism. AI systems are trained on existing datasets. These datasets tend to contain historically biased, unequal and discriminatory data. We have to be cognizant of the propensity for historical biases and discrimination to be imported into AI systems and their decision making. This could further reinforce and exacerbate the existing discrimination in the criminal justice system towards marginalised and vulnerable communities, and result in a potential violation of their fundamental rights.

The National Strategy acknowledges the presence of such biases and proposes a technical approach to reduce bias. While such attempts are appreciable in their efforts to rectify the situation and yield fairer outcomes, such an approach disregards the fact that these datasets are biased because they arise from a biased, unequal and discriminatory world. As we seek to build effective regulation to govern the use and deployment of AI systems, we have to remember that these are socio-technical systems that reflect the world around us and embed the biases, inequality and discrimination inherent in the Indian society. We have to keep this broader Indian social context in mind as we design AI systems and create regulatory frameworks to govern their deployment. 

While the Working Document introduces principles for responsible AI such as equality, inclusivity and non-discrimination, and privacy and security, there needs to be substantive discussion around incorporating these principles into India’s regulatory framework in consonance with constitutionally guaranteed rights.

Regulatory Challenges in the adoption of AI in India

As India designs a regulatory framework to govern the adoption and deployment of AI systems, it is important that we keep the following in focus: 

  • Heightened threshold of responsibility for government or public sector deployment of AI systems

The EU is considering adopting a risk-based approach to the regulation of AI, with heavier regulation for high-risk AI systems. The extent of risk to factors such as safety, consumer rights and fundamental rights is assessed by looking at the sector of deployment and the intended use of the AI system. Similarly, India must consider the adoption of a higher regulatory threshold for the use of AI by at least government institutions, given their potential for impacting citizens’ rights. Government uses of AI systems that have the potential of severely impacting citizens’ fundamental rights include the use of AI in the disbursal of government benefits, surveillance, law enforcement and judicial sentencing.

  • Need for overarching principles based AI regulatory framework

Different sectoral regulators are currently evolving regulations to address the specific challenges posed by AI in their sector. While it is vital to harness the domain expertise of a sectoral regulator and encourage the development of sector-specific AI regulations, such piecemeal development of AI principles can lead to fragmentation in the overall approach to regulating AI in India. Therefore, to ensure uniformity in the approach to regulating AI systems across sectors, it is crucial to put in place a horizontal overarching principles-based framework. 

  • Adaptation of sectoral regulation to effectively regulate AI

In addition to an overarching regulatory framework which forms the basis for the regulation of AI, it is equally important to envisage how this framework would work with horizontal or sector-specific laws such as consumer protection law and the applicability of product liability to various AI systems. Traditionally consumer protection and product liability regulatory frameworks have been structured around fault-based claims. However, given the challenges concerning explainability and transparency of decision making by AI systems, it may be difficult to establish the presence of defects in products and, for an individual who has suffered harm, to provide the necessary evidence in court. Hence, consumer protection laws may have to be adapted to stay relevant in the context of AI systems. Even sectoral legislation regulating the use of motor vehicles, such as the Motor Vehicles Act, 1988 would have to be modified to enable and regulate the use of autonomous vehicles and other AI transport systems. 

  • Contextualising AI systems for both their safe development and use

To ensure the effective and safe use of AI systems, they have to be designed, adapted and trained on relevant datasets depending on the context in which they will be deployed. The Working Document envisages India being the AI Garage for 40% of the world – developing AI solutions in India which can then be deployed in other emerging economies. Additionally, India will likely import AI systems developed in countries such as the US, EU and China to be deployed within the Indian context. Both scenarios involve the use of AI systems in a context distinct from the one in which they have been developed. Without effectively contextualising socio-technical systems like AI systems to the environment they are to be deployed in, there are enhanced safety, accuracy and reliability concerns. Regulatory standards and processes need to be developed in India to ascertain the safe use and deployment of AI systems that have been developed in contexts that are distinct from the ones in which they will be deployed. 

The NITI Working Document is the first step towards an informed discussion on the adoption of a regulatory framework to govern AI technology in India. However, there is a great deal of work to be done. Any regulatory framework developed by India to govern AI must balance the benefits and risks of deploying AI, diminish the risk of harm, and have a consumer protection framework in place to adequately address any harm that does arise. Besides this, the regulatory framework must ensure that the deployment and use of AI systems are in consonance with India’s constitutional scheme.

Reflections on Personal Data Protection Bill, 2019

By Sangh Rakshita and Nidhi Singh


The Personal Data Protection Bill, 2019 (PDP Bill/Bill) was introduced in the Lok Sabha on December 11, 2019, and was immediately referred to a joint committee of Parliament. The joint committee published a press communiqué on February 4, 2020 inviting comments on the Bill from the public.

The Bill is the successor to the Draft Personal Data Protection Bill 2018 (Draft Bill 2018), recommended by a government appointed expert committee chaired by Justice B.N. Srikrishna. In August 2018, shortly after the recommendations and publication of the draft Bill, the Ministry of Electronics and Information Technology (MeitY) invited comments on the Draft Bill 2018 from the public. (Our comments are available here.)[1]

In this post we undertake a preliminary examination of:

  • The scope and applicability of the PDP Bill
  • The application of general data protection principles
  • The rights afforded to data subjects
  • The exemptions provided to the application of the law

In future posts in the series we will examine:

  • The restrictions on cross border transfer of personal data
  • The structure and functions of the regulatory authority
  • The enforcement mechanism and the penalties under the PDP Bill

Scope and Applicability

The Bill identifies four different categories of data. These are personal data, sensitive personal data, critical personal data and non-personal data.

Personal data is defined as “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, whether online or offline, or any combination of such features with any other information, and shall include any inference drawn from such data for the purpose of profiling.” (emphasis added)

The inclusion of inferred data within the definition of personal data is an interesting reflection of how the conversation around data protection has evolved in the past few months, and requires further analysis.

Sensitive personal data is defined as data that may reveal, be related to or constitute a number of different categories of personal data, including financial data, health data, official identifiers, sex life, sexual orientation, genetic data, transgender status, intersex status, caste or tribe, and religious and political affiliations / beliefs. In addition, under clause 15 of the Bill the Central Government can notify other categories of personal data as sensitive personal data in consultation with the Data Protection Authority and the relevant sectoral regulator.

Similar to the 2018 Draft Bill, the current Bill does not define critical personal data; clause 33 gives the Central Government the power to notify what is included under critical personal data. However, in its report accompanying the 2018 Draft Bill, the Srikrishna Committee had referred to some examples of critical personal data relating to critical state interests, such as Aadhaar numbers, genetic data, biometric data and health data.

The Bill retains the terminology introduced in the 2018 Draft Bill, referring to data controllers as ‘data fiduciaries’ and data subjects as ‘data principals’. The new terminology was introduced to reflect the fiduciary nature of the relationship between data controllers and data subjects. However, whether this terminology will have any greater impact on the protection and enforcement of data principals’ rights remains to be seen.

Application of the PDP Bill 2019

The Bill is applicable to (i) the processing of any personal data, which has been collected, disclosed, shared or otherwise processed in India; (ii) the processing of personal data by the Indian government, any Indian company, citizen, or person/ body of persons incorporated or created under Indian law; and (iii) the processing of personal data in relation to any individuals in India, by any persons outside of India.

The scope of the 2019 Bill is largely similar to that of the 2018 Draft Bill. However, one key difference is seen in relation to anonymised data. While the 2018 Draft Bill completely exempted anonymised data from its scope, the 2019 Bill does not apply to anonymised data except under clause 91, which gives the government powers to mandate the use and processing of non-personal data or anonymised personal data under policies to promote the digital economy. A few concerns arise in the context of this change in the treatment of anonymised personal data. First, there are concerns about the concept of anonymisation of personal data itself. While the Bill provides that the Data Protection Authority (DPA) will specify appropriate standards of irreversibility for the process of anonymisation, it is not clear that a truly irreversible form of anonymisation is possible at all. Given this, more clarity is needed on what safeguards will apply to the use of anonymised personal data.
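To illustrate why claims of irreversibility are treated with scepticism, the following is a minimal sketch of a naive “anonymisation” step that simply hashes a direct identifier. The record fields and the 4-digit identifier space are purely hypothetical, and nothing here reflects how the Bill or the DPA would actually define anonymisation standards; the point is only that when the space of possible identifiers can be enumerated, hashed data can be re-identified by brute force.

```python
import hashlib

def anonymise(identifier: str) -> str:
    """Naive 'anonymisation': replace an identifier with its SHA-256 hash."""
    return hashlib.sha256(identifier.encode()).hexdigest()

# A record whose direct identifier has been hashed. The 4-digit ID space
# and the other fields are hypothetical, for illustration only.
record = {"id": anonymise("4821"), "pin_code": "110001", "year_of_birth": 1990}

# Re-identification: because every possible identifier can be enumerated,
# the hash can be reversed with a simple lookup table.
reverse_lookup = {anonymise(f"{i:04d}"): f"{i:04d}" for i in range(10_000)}
print(reverse_lookup[record["id"]])  # prints "4821"
```

Any safeguards framed under clause 91 would therefore need to account for re-identification risk of this kind, and not only for the formal label of ‘anonymised’ data.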

The second concern is the Bill’s focus on the promotion of the digital economy. We have previously discussed some of the concerns regarding a focus on promoting the digital economy in rights-based legislation in our comments to the Draft Bill 2018.

These issues continue to be of concern, and are perhaps heightened by the introduction of a specific provision on the subject in the 2019 Bill, especially without adequate clarity on which services or policy-making efforts are to be informed by the use of anonymised personal data. Many of these issues are also still under discussion by the committee of experts set up to deliberate on a data governance framework for non-personal data. The mandate of this committee includes studying various issues relating to non-personal data and making specific suggestions to the central government on its regulation.

The non-personal data committee was formed pursuant to a recommendation by the Justice Srikrishna Committee to frame a legal framework for the protection of community data, where the community is identifiable. The mandate of this expert committee will overlap with the application of clause 91(2) of the Bill.

Data Fiduciaries, Social Media Intermediaries and Consent Managers

Data Fiduciaries

As discussed above, the Bill categorises data controllers as data fiduciaries and significant data fiduciaries. Any person that determines the purpose and means of processing of personal data (including the State, companies, juristic entities or individuals) is considered a data fiduciary. Some data fiduciaries may be notified as ‘significant data fiduciaries’ on the basis of factors such as the volume and sensitivity of personal data processed, the risk of harm, etc. Significant data fiduciaries are held to higher standards of data protection. Under clauses 27-30, significant data fiduciaries are required to carry out data protection impact assessments, maintain accurate records, have their policies and the conduct of their processing of personal data audited, and appoint a data protection officer. 

Social Media Intermediaries

The Bill introduces a distinct category of intermediaries called social media intermediaries. Under clause 26(4) a social media intermediary is ‘an intermediary who primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services’. Intermediaries that primarily enable commercial or business-oriented transactions, provide access to the Internet, or provide storage services are not to be considered social media intermediaries.

Social media intermediaries may be notified to be significant data fiduciaries, if they have a minimum number of users, and their actions have or are likely to have a significant impact on electoral democracy, security of the State, public order or the sovereignty and integrity of India.

Under clause 28, social media intermediaries that have been notified as significant data fiduciaries will be required to enable users to voluntarily verify their accounts, with verified accounts carrying a demonstrable and visible mark of verification.

Consent Managers

The Bill also introduces the idea of a ‘consent manager’ i.e. a (third party) data fiduciary which provides for management of consent through an ‘accessible, transparent and interoperable platform’. The Bill does not contain any details on how consent management will be operationalised, and only states that these details will be specified by regulations under the Bill. 

Data Protection Principles and Obligations of Data Fiduciaries

Consent and grounds for processing

The Bill recognises consent as well as a number of other grounds for the processing of personal data.

Clause 11 provides that personal data shall only be processed if consent is provided by the data principal at the commencement of processing. This provision, similar to the consent provision in the 2018 Draft Bill, draws from various principles including those under the Indian Contract Act, 1872 to inform the concept of valid consent under the PDP Bill. The clause requires that the consent should be free, informed, specific, clear and capable of being withdrawn.

Moreover, explicit consent is required for the processing of sensitive personal data. The current Bill appears to be silent on issues such as incremental consent which were highlighted in our comments in the context of the Draft Bill 2018.

The Bill provides for additional grounds for processing of personal data, consisting of very broad (and much criticised) provisions for the State to collect personal data without obtaining consent. In addition, personal data may be processed without consent if required in the context of employment of an individual, as well as a number of other ‘reasonable purposes’. Some of the reasonable purposes, which were listed in the Draft Bill 2018 as well, have also been a cause for concern given that they appear to serve mostly commercial purposes, without regard for the potential impact on the privacy of the data principal.

In a notable change from the Draft Bill 2018, the PDP Bill appears to be silent on whether these other grounds for processing apply to sensitive personal data (with the exception of processing in the context of employment, which is explicitly barred).

Other principles

The Bill also incorporates a number of traditional data protection principles in the chapter outlining the obligations of data fiduciaries. Personal data can only be processed for a specific, clear and lawful purpose. Processing must be undertaken in a fair and reasonable manner and must ensure the privacy of the data principal – a clear mandatory requirement, as opposed to a ‘duty’ owed by the data fiduciary to the data principal in the Draft Bill 2018 (this change appears to be in line with recommendations made in multiple comments to the Draft Bill 2018 by various academics, including our own).

Purpose and collection limitation principles are mandated, along with a detailed description of the kind of notice to be provided to the data principal, either at the time of collection, or as soon as possible if the data is obtained from a third party. The data fiduciary is also required to ensure that data quality is maintained.

A few changes in the application of data protection principles, as compared to the Draft Bill 2018, can be seen in the data retention and accountability provisions.

On data retention, clause 9 of the Bill provides that personal data shall not be retained beyond the period ‘necessary’ for the purpose of data processing, and must be deleted after such processing, ostensibly a higher standard as compared to ‘reasonably necessary’ in the Draft Bill 2018. Personal data may only be retained for a longer period if explicit consent of the data principal is obtained, or if retention is required to comply with law. In the face of the many difficulties in ensuring meaningful consent in today’s digital world, this may not be a win for the data principal.

Clause 10 on accountability continues to provide that the data fiduciary will be responsible for compliance in relation to any processing undertaken by the data fiduciary or on its behalf. However, the data fiduciary is no longer required to demonstrate such compliance.

Rights of Data Principals

Chapter V of the PDP Bill 2019 outlines the Rights of Data Principals, including the rights to access, confirmation, correction, erasure, data portability and the right to be forgotten. 

Right to Access and Confirmation

The PDP Bill 2019 makes some amendments to the right to confirmation and access, included in clause 17 of the bill. The right has been expanded in scope by the inclusion of sub-clause (3). Clause 17(3) requires data fiduciaries to provide data principals information about the identities of any other data fiduciaries with whom their personal data has been shared, along with details about the kind of data that has been shared.

This allows the data principal to exert greater control over their personal data and its use.  The rights to confirmation and access are important rights that inform and enable a data principal to exercise other rights under the data protection law. As recognized in the Srikrishna Committee Report, these are ‘gateway rights’, which must be given a broad scope.

Right to Erasure

The right to correction (Clause 18) has been expanded to include the right to erasure. This allows data principals to request erasure of personal data which is not necessary for processing. While data fiduciaries may be allowed to refuse correction or erasure, they would be required to produce a justification in writing for doing so, and if there is a continued dispute, indicate alongside the personal data that such data is disputed.

The addition of a right to erasure is an expansion of rights from the 2018 Bill. While the right to be forgotten only restricts or discontinues disclosure of personal data, the right to erasure goes a step further and empowers the data principal to demand complete removal of data from the systems of the data fiduciary.

Many of the concerns expressed in the context of the Draft Bill 2018, in terms of the procedural conditions for the exercise of the rights of data principals, as well as the right to data portability specifically, continue to persist in the PDP Bill 2019.

Exceptions and Exemptions

While the PDP Bill ostensibly enables individuals to exercise their right to privacy against the State and the private sector, it contains several exemptions which raise serious concerns.

The Bill grants broad exceptions to the State. In some cases, it is in the context of specific obligations such as the requirement for individuals’ consent. In other cases, State action is almost entirely exempted from obligations under the law. Some of these exemptions from data protection obligations are available to the private sector as well, on grounds like journalistic purposes, research purposes and in the interests of innovation.

The most concerning of these provisions are the exemptions granted to intelligence and law enforcement agencies under the Bill. The Draft Bill 2018 also provided exemptions to intelligence and law enforcement agencies, but only so far as the privacy-invasive actions of these agencies were permitted under law, met procedural standards, and satisfied the legal standards of necessity and proportionality. We have previously discussed some of the concerns with this approach here.

The exemptions provided to these agencies under the PDP Bill, seem to exacerbate these issues.

Under the Bill, the Central Government can exempt an agency of the government from the application of the Act by passing an order, with reasons recorded in writing, if it is of the opinion that the exemption is necessary or expedient in the interest of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States or public order, or for preventing incitement to the commission of any cognizable offence relating to these grounds. Not only are the grounds on which government agencies can be exempted worded expansively, but the procedure for granting these exemptions is also bereft of any safeguards.

The executive in India at times suffers from problems of opacity and unfettered discretion, which calls for a robust system of checks and balances to avoid abuse. The Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act) enable government surveillance of communications made over telephones and the internet. For the purposes of comparison, we primarily refer to the Telegraph Act, as it allows the government to intercept phone calls, by an order in writing, on grounds similar to those mentioned in clause 35 of the Bill. However, the Telegraph Act limits the use of this power to two scenarios – the occurrence of a public emergency or the interest of public safety. The government cannot intercept communications made over telephones in the absence of these two preconditions. The Supreme Court in People’s Union for Civil Liberties v. Union of India (1997) introduced guidelines to check abuse of surveillance powers under the Telegraph Act, which were later incorporated in Rule 419A of the Indian Telegraph Rules, 1951. A prominent safeguard in Rule 419A requires that surveillance and monitoring orders be issued only after considering ‘other reasonable means’ for acquiring the required information. The Court had further limited the interpretation of ‘public emergency’ and ‘public safety’ to mean “the prevalence of a sudden condition or state of affairs affecting the people at large and calling for immediate action”, and “the state or condition of freedom from danger or risk at large” respectively. In spite of these safeguards, the procedure for intercepting telephone communications under the Telegraph Act has been criticised for lack of transparency and improper implementation. For instance, a 2014 report revealed that around 7,500 to 9,000 phone interception orders were issued by the Central Government every month, that is, roughly 250 to 300 orders a day. Applying the procedural safeguards in each case would have been physically impossible given these numbers. Thus, legislative and judicial oversight becomes a necessity in such cases.

The constitutionality of India’s surveillance apparatus, including section 69 of the IT Act, which allows for surveillance on the broader grounds of necessity and expediency rather than ‘public emergency’ and ‘public safety’, has been challenged before the Supreme Court and is currently pending. Clause 35 of the Bill also mentions necessity and expediency as prerequisites for the government to exercise its power to grant exemptions; these terms appear vague and open-ended, as they are not defined. The test of necessity implies resorting to the least intrusive method of encroachment upon privacy to achieve the legitimate state aim. This test is typically one among several factors applied under human rights law in deciding whether a particular intrusion on a right is tenable. In his concurring opinion in Puttaswamy (I), J. Kaul had included ‘necessity’ in the proportionality test (however, this test is not otherwise well developed in Indian jurisprudence). Expediency, on the other hand, is not a specific legal basis used for determining the validity of an intrusion on human rights, and was not referred to in Puttaswamy (I) as a basis for assessing a privacy violation. The use of the term ‘expediency’ in the Bill is deeply worrying, as it seems to lower the threshold for allowing surveillance, a regressive step in the context of cases like PUCL and Puttaswamy (I). A valid law, along with the principles of proportionality and necessity, is essential to put in place an effective system of checks and balances on the executive’s power to grant exemptions. It seems unlikely that the clause will pass the test of proportionality (sanction of law, legitimate aim, proportionality to the need of interference, and procedural guarantees against abuse) as laid down by the Supreme Court in Puttaswamy (I).

The Srikrishna Committee report had recommended that surveillance should not only be conducted under law (and not executive order), but should also be subject to oversight and transparency requirements. The Committee had argued that the tests of lawfulness, necessity and proportionality provided under clauses 42 and 43 of the Draft Bill 2018 were sufficient to meet the standards set out in the Puttaswamy judgment. Since the PDP Bill completely does away with all these safeguards and leaves the decision to executive discretion, the law is unconstitutional. After the Bill was introduced in the Lok Sabha, Justice Srikrishna criticised it for granting expansive exemptions in the absence of judicial oversight. He warned that the consequences could be disastrous from the point of view of safeguarding the right to privacy and could turn the country into an “Orwellian State”. He has also opined on the need for separate legislation to govern the terms under which the government can resort to surveillance.

Clause 36 of the Bill deals with exemptions from some provisions for certain kinds of processing of personal data. It combines four different clauses on exemptions listed in the Draft Bill 2018 (clauses 43, 44, 46 and 47). These cover the processing of personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law; for the purpose of legal proceedings; for personal or domestic purposes; and for journalistic purposes. The Draft Bill 2018 had detailed provisions requiring a law passed by Parliament or a State Legislature, which is necessary and proportionate, for the processing of personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law. Clause 36 of the Bill does not require such a law for processing personal data under these exemptions. We had argued that the exemptions granted by the Draft Bill 2018 (clauses 43, 44, 46 and 47) were wide, vague and needed clarification, but the exemptions under clause 36 of the Bill are even more ambiguous, as they merely list the exemptions without any specificity or procedural safeguards.

In the Draft Bill 2018, the Authority could not grant exemptions from the obligations of fair and reasonable processing, security safeguards and data protection impact assessments for research, archiving or statistical purposes. As per the current Bill, the Authority can provide an exemption from any of the provisions of the Act for research, archiving or statistical purposes.

The last addition to this chapter of exemptions is the creation of a sandbox for encouraging innovation. The newly added clause 40 is aimed at encouraging innovation in artificial intelligence, machine learning or any other emerging technology in the public interest. Beyond exemption from some of the obligations of Chapter II, the details of what the sandbox entails need further clarity. Additionally, to be considered an eligible applicant, a data fiduciary must obtain certification of its privacy-by-design policy from the DPA, as mentioned in clause 40(4) read with clause 22.

Though the intent behind this provision is appreciated, it requires clarification on the grounds for selecting applicants and the details of what the sandbox will entail.


[1] At the time of introduction of the PDP Bill 2019, the Minister for Law and Justice, Mr. Ravi Shankar Prasad, suggested that over 2,000 inputs had been received on the Draft Bill 2018, based on which changes were made in the PDP Bill 2019. However, MeitY has not published these comments and inputs; only a handful have been made public, by the stakeholders who submitted them.

Right to Privacy: The Puttaswamy Effect

By Sangh Rakshita and Nidhi Singh

The Puttaswamy judgment of 2017 reaffirmed the ‘Right to Privacy’ as a fundamental right in Indian jurisprudence. Since then, it has been used as an important precedent in many cases to emphasise the right to privacy as a fundamental right and to clarify its scope. In this blog, we discuss some of the cases of the Supreme Court and various High Courts, post August 2017, which have used the Puttaswamy judgment and the tests laid down in it to further the jurisprudence on the right to privacy in India. With the Personal Data Protection Bill tabled in 2019, the debate on privacy has been re-ignited, and it is important to explore the contours of the right to privacy as a fundamental right post the Puttaswamy judgment.

Navtej Singh Johar and ors Vs. Union of India (UOI) and Ors., 2018 (Supreme Court)

In this case, the Supreme Court of India unanimously held that Section 377 of the Indian Penal Code, 1860 (IPC), which criminalized ‘carnal intercourse against the order of nature’, was unconstitutional in so far as it criminalized consensual sexual conduct between adults of the same sex. The petition challenged Section 377 on the grounds that it was vague and that it violated the constitutional rights to privacy, freedom of expression, equality, human dignity and protection from discrimination guaranteed under Articles 14, 15, 19 and 21 of the Constitution. The Court relied upon the judgment in K.S. Puttaswamy v. Union of India, which held that denying the LGBT community its right to privacy on the ground that it forms a minority of the population would be violative of its fundamental rights, and that sexual orientation forms an inherent part of self-identity, the denial of which would be violative of the right to life.

Justice K.S. Puttaswamy and Ors. vs. Union of India (UOI) and Ors., 2018 (Supreme Court)

The Supreme Court upheld the validity of the Aadhaar scheme on the ground that it did not violate the right to privacy of citizens, as minimal biometric data was collected in the enrolment process and the authentication process is not exposed to the internet. The majority upheld the constitutionality of the Aadhaar Act, 2016, barring a few provisions on disclosure of personal information, cognizance of offences and use of the Aadhaar ecosystem by private corporations. They relied on the fulfilment of the proportionality test as laid down in the Puttaswamy (2017) judgment.

Joseph Shine vs. Union of India (UOI), 2018 (Supreme Court)

The Supreme Court decriminalised adultery in this case where the constitutional validity of Section 497 (adultery) of IPC and Section 198(2) of Code of Criminal Procedure, 1973 (CrPC) was challenged. The Court held that in criminalizing adultery, the legislature has imposed its imprimatur on the control by a man over the sexuality of his spouse – in doing that, the statutory provision fails to meet the touchstone of Article 21. Section 497 was struck down on the ground that it deprives a woman of her autonomy, dignity and privacy and that it compounds the encroachment on her right to life and personal liberty by adopting a notion of marriage which subverts true equality. Concurring judgments in this case referred to Puttaswamy to explain the concepts of autonomy and dignity, and their intricate relationship with the protection of life and liberty as guaranteed in the Constitution. They relied on the Puttaswamy judgment to emphasize the dangers of the “use of privacy as a veneer for patriarchal domination and abuse of women.” They also cited Puttaswamy to elucidate that privacy is the entitlement of every individual, with no distinction to be made on the basis of the individual’s position in society.

Indian Young Lawyers Association and Ors. vs. The State of Kerala and Ors., 2018 (Supreme Court)

In this case, the Supreme Court upheld the right of women aged between 10 and 50 years to enter the Sabarimala temple. The Court held Rule 3(b) of the Kerala Hindu Places of Public Worship (Authorisation of Entry) Rules, 1965, which restricts the entry of women into the Sabarimala temple, to be ultra vires (i.e. not permitted under the Kerala Hindu Places of Public Worship (Authorisation of Entry) Act, 1965). While discussing the guarantee against social exclusion based on notions of “purity and pollution” as an acknowledgment of the inalienable dignity of every individual, J. Chandrachud (in his concurring judgment) referred to Puttaswamy specifically to explain dignity as a facet of Article 21. In the course of submissions, the amicus curiae had submitted that the exclusionary practice, in its implementation, results in the involuntary disclosure by women of both their menstrual status and age, which amounts to forced disclosure and consequently violates the right to dignity and privacy embedded in Article 21 of the Constitution of India.

(The judgment is under review before a 9-judge constitutional bench.)

Vinit Kumar Vs. Central Bureau of Investigation and Ors., 2019 (Bombay High Court)

This case dealt with phone tapping and surveillance under section 5(2) of the Indian Telegraph Act, 1885 (Telegraph Act) and the balance between public safety interests and the right to privacy. Section 5(2) of the Telegraph Act permits the interception of telephone communications in the case of a public emergency, or where there is a public safety requirement. Such interception needs to comply with the procedural safeguards set out by the Supreme Court in PUCL v. Union of India (1997), which were then codified as rules under the Telegraph Act. The Bombay High Court applied the tests of legitimacy and proportionality laid down in Puttaswamy, to the interception orders issued under the Telegraph Act, and held that in this case the order for interception could not be substantiated in the interest of public safety and did not satisfy the test of “principles of proportionality and legitimacy” as laid down in Puttaswamy. The Bombay High Court quashed the interception orders in question, and directed that the copies / recordings of the intercepted communications be destroyed.

Central Public Information Officer, Supreme Court of India vs. Subhash Chandra Agarwal, 2019 (Supreme Court)

In this case, the Supreme Court held that the Office of the Chief Justice of India is a ‘public authority’ under the Right to Information Act, 2005 (RTI Act), enabling the disclosure of information such as judges’ personal assets. The Court discussed the privacy impact of such disclosure extensively, including in the context of Puttaswamy. The Court found that the right to information and the right to privacy are on an equal footing, and that there was no requirement to take the view that one right trumps the other. The Court stated that the proportionality test laid down in Puttaswamy should be used by the Information Officer to balance the two rights, and also found that the RTI Act itself has sufficient procedural safeguards built in to meet this test in the case of disclosure of personal information.

X vs. State of Uttarakhand and Ors., 2019 (Uttarakhand High Court)

In this case the petitioner claimed that she identified as female and had undergone gender reassignment surgery, and should therefore be treated as a female; however, she was not recognised as female by the State. While the Court primarily relied upon the judgment of the Supreme Court in NALSA v. Union of India, it also referred to the judgment in Puttaswamy. Specifically, it referred to the finding in Puttaswamy that the right to privacy is not necessarily limited to any one provision in the chapter on fundamental rights, but arises from intersecting rights: the intersection of Article 15 with Article 21 locates a constitutional right to privacy as an expression of individual autonomy, dignity and identity. The Court also referred to the Supreme Court’s judgment in Navtej Singh Johar v. Union of India, and on the basis of all three judgments, upheld the right of the petitioner to be recognised as a female.

(This judgment may need to be re-examined in light of the Transgender Persons (Protection of Rights) Bill, 2019.)

Indian Hotel and Restaurant Association (AHAR) and Ors. vs. The State of Maharashtra and Ors., 2019 (Supreme Court)

This case dealt with the validity of the Maharashtra Prohibition of Obscene Dance in Hotels, Restaurant and Bar Rooms and Protection of Dignity of Women (Working therein) Act, 2016. The Supreme Court held that applications for the grant of licences should be considered more objectively and with an open mind, so that there is no complete ban on staging dance performances at the designated places prescribed in the Act. Several of the conditions under the Act were challenged, including one requiring the installation of CCTV cameras in the rooms where dances were to be performed. Here, the Court relied on Puttaswamy (and the discussion on unpopular privacy laws) to set aside the condition requiring the installation of CCTV cameras.

(The Puttaswamy case has been mentioned in at least 102 High Court and Supreme Court judgments since 2017.)

[September 30-October 7] CCG’s Week in Review: Curated News in Information Law and Policy

Huawei finds support from Indian telcos for the 5G rollout as PayPal withdraws from Facebook’s Libra cryptocurrency project; Foreign Portfolio Investors move MeitY against the Data Protection Bill, seeking exemption; the CJEU rules against Facebook in a case relating to global takedown of content; and Karnataka joins the list of states considering implementing the NRC to remove illegal immigrants – presenting this week’s most important developments in law, tech and national security.

Digital India

  • [Sep 30] Why the imminent global economic slowdown is a growth opportunity for Indian IT services firms, Tech Circle report.
  • [Sep 30] Norms tightened for IT items procurement for schools, The Hindu report.
  • [Oct 1] Govt runs full throttle towards AI, but tech giants want to upskill bureaucrats first, Analytics India Magazine report.
  • [Oct 3] MeitY launches smart-board for effective monitoring of the key programmes, The Economic Times report.
  • [Oct 3] “Use human not artificial intelligence…” to keep a tab on illegal constructions: Court to Mumbai civic body, NDTV report.
  • [Oct 3] India took 3 big productivity leaps: Nilekani, Livemint report.
  • [Oct 4] MeitY to push for more sops to lure electronic makers, The Economic Times report; Inc42 report.
  • [Oct 4] Core philosophy of Digital India embedded in Gandhian values: Ravi Shankar Prasad, Financial Express report.
  • [Oct 4] How can India leverage its data footprint? Experts weigh in at the India Economic Summit, Quartz report.
  • [Oct 4] Indians think jobs would be easy to find despite automation: WEF, Tech Circle report.
  • [Oct 4] Telangana govt adopts new framework to use drones for last-mile delivery, The Economic Times report.
  • [Oct 5] Want to see ‘Assembled in India’ on an iPhone: Ravi Shankar Prasad, The Economic Times report.
  • [Oct 6] Home market gets attractive for India’s IT giants, The Economic Times report.

Internet Governance

  • [Oct 2] India Govt requests maximum social media content takedowns in the world, Inc42 report; Tech Circle report.
  • [Oct 3] Facebook can be forced to delete defamatory content worldwide, top EU court rules, Politico EU report.
  • [Oct 4] EU ruling may spell trouble for Facebook in India, The Economic Times report.
  • [Oct 4] TikTok, TikTok… the clock is ticking on the question whether ByteDance pays its content creators, ET Tech report.
  • [Oct 6] Why data localization triggers a heated debate, The Economic Times report.
  • [Oct 7] Sensitive Indian govt data must be stored locally, Outlook report.

Data Protection and Privacy

  • [Sep 30] FPIs move MeitY against data bill, seek exemption, ET markets report, Inc42 report; Financial Express report.
  • [Oct 1] United States: CCPA exception approved by California legislature, Mondaq.com report.
  • [Oct 1] Privacy is gone, what we need is regulation, says Infosys’ Kris Gopalakrishnan, News18 report.
  • [Oct 1] Europe’s top court says active consent is needed for tracking cookies, Tech Crunch report.
  • [Oct 3] Turkey fines Facebook $282,000 over data privacy breach, Deccan Herald report.

Free Speech

  • [Oct 1] Singapore’s ‘fake news’ law to come into force Wednesday, but rights group worry it could stifle free speech, The Japan Times report.
  • [Oct 2] Minister says Singapore’s fake news law is about ‘enabling’ free speech, CNBC report.
  • [Oct 3] Hong Kong protests: Authorities to announce face mask ban, BBC News report.
  • [Oct 3] ECHR: Holocaust denial is not protected free speech, ASIL brief.
  • [Oct 4] FIR against Mani Ratnam, Adoor and 47 others who wrote to Modi on communal violence, The News Minute report; Times Now report.
  • [Oct 5] UN asks Malaysia to repeal laws curbing freedom of speech, The New Indian Express report.
  • [Oct 6] When will our varsities get freedom of expression: PC, Deccan Herald report.
  • [Oct 6] UK Government to make university students sign contracts limiting speech and behavior, The Times report.
  • [Oct 7] FIR on Adoor and others condemned, The Telegraph report.

Aadhaar, Digital IDs

  • [Sep 30] Plea in SC seeking linking of social media accounts with Aadhaar to check fake news, The Economic Times report.
  • [Oct 1] Why another omnibus national ID card?, The Hindu Business Line report.
  • [Oct 2] ‘Kenyan court process better than SC’s approach to Aadhaar challenge’: V Anand, who testified against biometric project, LiveLaw report.
  • [Oct 3] Why Aadhaar is a stumbling block in Modi govt’s flagship maternity scheme, The Print report.
  • [Oct 4] Parliament panel to review Aadhaar authority functioning, data security, NDTV report.
  • [Oct 5] Could Aadhaar linking stop GST frauds?, Financial Express report.
  • [Oct 6] Call for liquor sale-Aadhaar linking, The New Indian Express report.

Digital Payments, Fintech

  • [Oct 7] Vision cash-lite: A billion UPI transactions is not enough, Financial Express report.

Cryptocurrencies

  • [Oct 1] US SEC fines crypto company Block.one for unregistered ICO, Medianama report.
  • [Oct 1] South Korean Court issues landmark decision on crypto exchange hacking, Coin Desk report.
  • [Oct 2] The world’s most used cryptocurrency isn’t bitcoin, ET Markets report.
  • [Oct 2] Offline transactions: the final frontier for global crypto adoption, Coin Telegraph report.
  • [Oct 3] Betting on bitcoin prices may soon be deemed illegal gambling, The Economist report.
  • [Oct 3] Japan’s financial regulator issues draft guidelines for funds investing in crypto, Coin Desk report.
  • [Oct 3] Hackers launch widespread botnet attack on crypto wallets using cheap Russian malware, Coin Desk report.
  • [Oct 4] State-backed crypto exchange in Venezuela launches new crypto debit cards, Decrypt report.
  • [Oct 4] PayPal withdraws from Facebook-led Libra crypto project, Coin Desk report.
  • [Oct 5] Russia regulates digital rights, advances other crypto-related bills, Bitcoin.com report.
  • [Oct 5] Hong Kong regulates crypto funds, Decrypt report.

Cybersecurity and Cybercrime

  • [Sep 30] Legit-looking iPhone lightning cables that hack you will be mass produced and sold, Vice report.
  • [Sep 30] Blackberry launches new cybersecurity development labs, Infosecurity Magazine report.
  • [Oct 1] Cybersecurity experts warn that these 7 emerging technologies will make it easier for hackers to do their jobs, Business Insider report.
  • [Oct 1] US government confirms new aircraft cybersecurity move amid terrorism fears, Forbes report.
  • [Oct 2] ASEAN unites to fight back on cyber crime, GovInsider report; Asia One report.
  • [Oct 2] Adopting AI: the new cybersecurity playbook, TechRadar Pro report.
  • [Oct 4] US-UK Data Access Agreement, signed on Oct 3, is an executive agreement under the CLOUD Act, Medianama report.
  • [Oct 4] The lack of cybersecurity talent is ‘a  national security threat,’ says DHS official, Tech Crunch report.
  • [Oct 4] Millions of Android phones are vulnerable to Israeli surveillance dealer attack, Forbes report; NDTV report.
  • [Oct 4] IoT devices, cloud solutions soft target for cybercriminals: Symantec, Tech Circle report.
  • [Oct 6] 7 cybersecurity threats that can sneak up on you, Wired report.
  • [Oct 6] No one could prevent another ‘WannaCry-style’ attack, says DHS official, Tech Crunch report.
  • [Oct 7] Indian firms rely more on automation for cybersecurity: Report, ET Tech report.

Cyberwarfare

  • [Oct 2] New ASEAN committee to implement norms for countries behaviour in cyberspace, CNA report.

Tech and National Security

  • [Sep 30] IAF ready for Balakot-type strike, says new chief Bhadauria, The Hindu report; Times of India report.
  • [Sep 30] Naval variant of LCA Tejas achieves another milestone during its test flight, Livemint report.
  • [Sep 30] SAAB wants to offer Gripen at half of Rafale cost, full tech transfer, The Print report.
  • [Sep 30] Rajnath harps on ‘second strike capability’, The Shillong Times report.
  • [Oct 1] EAM Jaishankar defends India’s S-400 missile system purchase from Russia as US sanctions threat, International Business Times report.
  • [Oct 1] SC for balance between liberty, national security, Hindustan Times report.
  • [Oct 2] Startups have it easy for defence deals up to Rs. 150 cr, ET Rise report, Swarajya Magazine report.
  • [Oct 3] Huawei-wary US puts more pressure on India, offers alternatives to data localization, The Economic Times report.
  • [Oct 4] India-Russia missile deal: What is CAATSA law and its implications?, Jagran Josh report.
  • [Oct 4] Army inducts Israeli ‘tank killers’ till DRDO develops new ones, Defence Aviation post report.
  • [Oct 4] China, Russia deepen technological ties, Defense One report.
  • [Oct 4] Will not be afraid of taking decisions for fear of attracting corruption complaints: Rajnath Singh, New Indian Express report.
  • [Oct 4] At conclave with naval chiefs of 10 countries, NSA Ajit Doval floats an idea, Hindustan Times report.
  • [Oct 6] Pathankot airbase to finally get enhanced security, The Economic Times report.
  • [Oct 6] Rafale with Meteor and Scalp missiles will give India unrivalled combat capability: MBDA, The Economic Times report.
  • [Oct 7] India, Bangladesh sign MoU for setting up a coastal surveillance radar in Bangladesh, The Economic Times report; Deccan Herald report.
  • [Oct 7] Indian operated T-90 tanks to become Russian army’s main battle tank, EurAsian Times report.
  • [Oct 7] IAF’s Sukhois to get more advanced avionics, radar, Defence Aviation post report.

Tech and Law Enforcement

  • [Sep 30] TMC MP Mahua Moitra wants to be impleaded in the WhatsApp traceability case, Medianama report; The Economic Times report.
  • [Oct 1] Role of GIS and emerging technologies in crime detection and prevention, Geospatial World.net report.
  • [Oct 2] TRAI to take more time on OTT norms; lawful interception, security issue now in focus, The Economic Times report.
  • [Oct 2] China invents super surveillance camera that can spot someone from a crowd of thousands, The Independent report.
  • [Oct 4] ‘Don’t introduce end-to-end encryption,’ UK, US and Australia ask Facebook in an open letter, Medianama report.
  • [Oct 4] Battling new-age cyber threats: Kerala Police leads the way, The Week report.
  • [Oct 7] India govt bid to WhatsApp decryption gets push as UK,US, Australia rally support, Entrackr report.

Tech and Elections

  • [Oct 1] WhatsApp was extensively exploited during 2019 elections in India: Report, Firstpost report.
  • [Oct 3] A national security problem without a parallel in American democracy, Defense One report.

Internal Security: J&K

  • [Sep 30] BDC polls across Jammu, Kashmir, Ladakh on Oct 24, The Economic Times report.
  • [Sep 30] India ‘invaded and occupied’ Kashmir, says Malaysian PM at UN General Assembly, The Hindu report.
  • [Sep 30] J&K police stations to have CCTV camera surveillance, News18 report.
  • [Oct 1] 5-judge Supreme Court bench to hear multiple pleas on Article 370, Kashmir lockdown today, India Today report.
  • [Oct 1] India’s stand clear on Kashmir: won’t accept third-party mediation, India Today report.
  • [Oct 1] J&K directs officials to ensure all schools reopen by Thursday, NDTV report.
  • [Oct 2] ‘Depressed, frightened’: Minors held in Kashmir crackdown, Al Jazeera report.
  • [Oct 3] J&K: When the counting of the dead came to a halt, The Hindu report.
  • [Oct 3] High schools open in Kashmir, students missing, The Economic Times report.
  • [Oct 3] Jaishankar reiterates India’s claim over Pakistan-occupied Kashmir, The Hindu report.
  • [Oct 3] Normalcy prevails in Jammu and Kashmir, DD News report.
  • [Oct 3] Kashmiri leaders will be released one by one, India Today report.
  • [Oct 4] India slams Turkey, Malaysia remarks on J&K, The Hindu report.
  • [Oct 5] India’s clampdown hits Kashmir’s Silicon Valley, The Economic Times report.
  • [Oct 5] Traffic cop among 14 injured in grenade attack in South Kashmir, NDTV report; The Economic Times report.
  • [Oct 6] Kashmir situation normal, people happy with Article 370 abrogation: Prakash Javadekar, Times of India report.
  • [Oct 7] Kashmir residents say police forcibly taking over their homes for CRPF troops, Huffpost India report.

Internal Security: Northeast/ NRC

  • [Sep 30] Giving total control of Assam Rifles to MHA will adversely impact vigil: Army to Govt, The Economic Times report.
  • [Sep 30] NRC list impact: Assam’s foreigner tribunals to have 1,600 on contract, The Economic Times report.
  • [Sep 30] Assam NRC: Case against Wipro for rule violation, The Hindu report; News18 report; Scroll.in report.
  • [Sep 30] Hindu outfits demand NRC in Karnataka, Deccan Chronicle report; The Hindustan Times report.
  • [Oct 1] Centre extends AFSPA in three districts of Arunachal Pradesh for six months, ANI News report.
  • [Oct 1] Assam’s NRC: law schools launch legal aid clinic for excluded people, The Hindu report; Times of India report; The Wire report.
  • [Oct 1] Amit Shah in Kolkata: NRC to be implemented in West Bengal, infiltrators will be evicted, The Economic Times report.
  • [Oct 1] US Congress panel to focus on Kashmir, Assam, NRC in hearing on human rights in South Asia, News18 report.
  • [Oct 1] NRC must for national security; will be implemented: Amit Shah, The Hindu Business Line report.
  • [Oct 2] Bengali Hindu women not on NRC pin their hope on promise of another list, citizenship bill, The Print report.
  • [Oct 3] Citizenship Amendment Bill has become necessity for those left out of NRC: Assam BJP president Ranjeet Das, The Economic Times report.
  • [Oct 3] BJP govt in Karnataka mulling NRC to identify illegal migrants, The Economic Times report.
  • [Oct 3] Explained: Why Amit Shah wants to amend the Citizenship Act before undertaking countrywide NRC, The Indian Express report.
  • [Oct 4] Duplicating NPR, NRC to sharpen polarization: CPM, Deccan Herald report.
  • [Oct 5] We were told NRC India’s internal issue: Bangladesh, Livemint report.
  • [Oct 6] Prasanna calls NRC ‘unjust law’, The New Indian Express report.

National Security Institutions

  • [Sep 30] CRPF ‘denied’ ration cash: Govt must stop ‘second-class’ treatment, The Quint report.
  • [Oct 1] Army calls out ‘prejudiced’ foreign report on ‘torture’, refutes claim, Republic World report.
  • [Oct 2] India has no extraterritorial ambition, will fulfill regional and global security obligations: Bipin Rawat, The Economic Times report.

More on Huawei, 5G

  • [Sep 30] Norway open to Huawei supplying 5G equipment, Forbes report.
  • [Sep 30] Airtel deploys 100 hops of Huawei’s 5G technology, The Economic Times report.
  • [Oct 1] America’s answer to Huawei, Foreign Policy report; Tech Circle report.
  • [Oct 1] Huawei buys access to UK innovation with Oxford stake, Financial Times report.
  • [Oct 3] India to take bilateral approach on issues faced by other countries with China: Jaishankar, The Hindu report.
  • [Oct 4] Bharti Chairman Sunil Mittal says India should allow Huawei in 5G, The Economic Times report
  • [Oct 6] 5G rollout: Huawei finds support from telecom industry, Financial Express report.

Emerging Tech: AI, Facial Recognition

  • [Sep 30] Bengaluru set to roll out AI-based traffic solution at all signals, Entrackr report.
  • [Oct 1] AI is being used to diagnose disease and design new drugs, Forbes report.
  • [Oct 1] Only 10 jobs created for every 100 jobs taken away by AI, The Economic Times report.
  • [Oct 2] Emerging tech is helping companies grow revenues 2x: report, ET Tech report.
  • [Oct 2] Google using dubious tactics to target people with ‘darker skin’ in facial recognition project: sources, Daily News report.
  • [Oct 2] Three problems posed by deepfakes that technology won’t solve, MIT Technology Review report.
  • [Oct 3] Getting a new mobile number in China will involve a facial recognition test, Quartz report.
  • [Oct 4] Google contractors targeting homeless people, college students to collect their facial recognition data: Report, Medianama report.
  • [Oct 4] More jobs will be created than are lost from the AI revolution: WEF AI Head, Livemint report.
  • [Oct 6] IIT-Guwahati develops AI-based tool for electric vehicle motor, Livemint report.
  • [Oct 7] Even if China misuses AI tech, Satya Nadella thinks blocking China’s AI research is a bad idea, India Times report.

Big Tech

  • [Oct 3] Dial P for privacy: Google has three new features for users, Times of India report.

Opinions and Analyses

  • [Sep 26] Richard Stengel, Time, We’re in the middle of a global disinformation war. Here’s what we need to do to win.
  • [Sep 29] Ilker Koksal, Forbes, The shift toward decentralized finance: Why are financial firms turning to crypto?
  • [Sep 30] Nistula Hebbar, The Hindu, Govt. views grassroots development in Kashmir as biggest hope for peace.
  • [Sep 30] Simone McCarthy, South China Morning Post, Could China’s strict cyber controls gain international acceptance?
  • [Sep 30] Nele Achten, Lawfare blog, New UN Debate on cybersecurity in the context of international security.
  • [Sep 30] Dexter Fergie, Defense One, How ‘national security’ took over America.
  • [Sep 30] Bonnie Girard, The Diplomat, A firsthand account of Huawei’s PR drive.
  • [Oct 1] The Economic Times, Rafale: Past tense but future perfect.
  • [Oct 1] Simon Chandler, Forbes, AI has become a tool for classifying and ranking people.
  • [Oct 2] Ajay Batra, Business World, Rethink India! – MMRCA, ESDM & Data Privacy Policy.
  • [Oct 2] Carisa Nietsche, National Interest, Why Europe won’t combat Huawei’s Trojan tech.
  • [Oct 3] Aruna Sharma, Financial Express, The digital way: growth with welfare.
  • [Oct 3] Alok Prasanna Kumar, Medianama, When it comes to Netflix, the Government of India has no chill.
  • [Oct 3] Fredrik Bussler, Forbes, Why we need crypto for good.
  • [Oct 3] Panos Mourdoukoutas, Forbes, India changed the game in Kashmir – Now what?
  • [Oct 3] Grant Wyeth, The Diplomat, The NRC and India’s unfinished partition.
  • [Oct 3] Zak Doffman, Forbes, Is Huawei’s worst Google nightmare coming true?
  • [Oct 4] Oren Yunger, Tech Crunch, Cybersecurity is a bubble, but it’s not ready to burst.
  • [Oct 4] Minakshi Buragohain, Indian Express, NRC: Supporters and opposers must engage each other with empathy.
  • [Oct 4] Frank Ready, Law.com, 27 countries agreed on ‘acceptable’ cyberspace behavior. Now comes the hard part.
  • [Oct 4] Samir Saran, World Economic Forum (blog), 3 reasons why data is not the new oil and why this matters to India.
  • [Oct 4] Andrew Marantz, The New York Times, Free Speech is killing us.
  • [Oct 4] Financial Times editorial, ECJ ruling risks for freedom of speech online.
  • [Oct 4] George Kamis, GCN, Digital transformation requires a modern approach to cybersecurity.
  • [Oct 4] Naomi Xu Elegant and Grady McGregor, Fortune, Hong Kong’s mask ban pits anonymity against the surveillance state.
  • [Oct 4] Prashanth Parameswaran, The Diplomat, What’s behind the new US-ASEAN cyber dialogue?
  • [Oct 5] Huong Le Thu, The Strategist, Cybersecurity and geopolitics: why Southeast Asia is wary of a Huawei ban.
  • [Oct 5] Hannah Devlin, The Guardian, We are hurtling towards a surveillance state: the rise of facial recognition technology.
  • [Oct 5] PV Navaneethakrishnan, The Hindu, Why no takers? (for ME/M.Tech programmes).
  • [Oct 6] Aakar Patel, Times of India blog, Cases against PC, letter-writing celebs show liberties are at risk.
  • [Oct 6] Suhasini Haidar, The Hindu, Explained: How will purchases from Russia affect India-US ties?
  • [Oct 6] Sumit Chakraberty, Livemint, Evolution of business models in the era of privacy by design.
  • [Oct 6] Spy’s Eye, Outlook, Insider threat management.
  • [Oct 6] Roger Marshall, Deccan Herald, Big oil, Big Data and the shape of water.
  • [Oct 6] Neil Chatterjee, Fortune, The power grid is evolving. Cybersecurity must too.
  • [Oct 7] Scott W Pink, Mondaq.com, EU: What is GDPR and CCPA and how does it impact blockchain?
  • [Oct 7] GN Devy, The Telegraph, Has India slid into an irreversible Talibanization of the mind?
  • [Oct 7] Susan Ariel Aaronson, South China Morning Post, The Trump administration’s approach to AI is not that smart: it’s about cooperation, not domination.

The General Data Protection Regulation and You

By Aditya Singh Chawla

A cursory look at your email inbox this past month presents an intriguing trend. Multiple online services seem to have taken it upon themselves to notify changes to their Privacy Policies at the same time. The reason, simply, is that the European Union’s General Data Protection Regulation (GDPR) comes into force on May 25, 2018.

The GDPR marks a substantial overhaul of the existing data protection regime in the EU, as it replaces the earlier ‘Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data.’ The Regulation was adopted by the European Parliament in 2016, with a period of almost two years to allow entities sufficient time to comply with their increased obligations.

The GDPR is an attempt to harmonize and strengthen data protection across Member States of the European Union. CCG has previously written about the Regulation and what it entails here. For one, the instrument is a ‘Regulation’, as opposed to a ‘Directive’. A Regulation is directly binding across all Member States in its entirety. A Directive simply sets out a goal that all EU countries must achieve, but allows them discretion as to how. Member States must enact national measures to transpose a Directive, and this can sometimes lead to a lack of uniformity across Member States.

The GDPR introduces, among other things, additional rights and protections for data subjects. These include, for instance, the introduction of the right to data portability, and the codification of the controversial right to be forgotten. Our writing on these concepts can be found here, and here. Another noteworthy change is the substantial sanctions that can be imposed for violations. Entities that fall foul of the Regulation may have to pay fines of up to 20 million Euros, or 4% of global annual turnover, whichever is higher.
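To make the “whichever is higher” ceiling concrete, here is a minimal sketch of the calculation for the upper tier of fines. The function name and the turnover figures are purely illustrative and not drawn from any particular enforcement decision.

```python
def max_gdpr_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper-tier ceiling: the higher of EUR 20 million or 4% of global annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

print(max_gdpr_fine_eur(300_000_000))    # 20,000,000.0 -> the flat EUR 20m ceiling applies
print(max_gdpr_fine_eur(1_000_000_000))  # 40,000,000.0 -> 4% of turnover exceeds EUR 20m
```

In other words, the flat ceiling binds smaller entities, while for large multinationals the turnover-linked figure quickly becomes the operative maximum.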

The Regulation also has consequences for entities and users outside the EU. First, the Regulation has expansive territorial scope, and applies to non-EU entities if they offer goods and services to individuals in the EU, or monitor their behaviour. The EU is also a significant digital market, which allows it to nudge other jurisdictions towards the standards it adopts. The Regulation (like the earlier Directive) restricts the transfer of personal data to entities outside the EU to cases where an adequate level of data protection can be ensured. This has resulted in many countries adopting regulation in compliance with EU standards. In addition, with the implementation of the GDPR, companies that operate in multiple jurisdictions might prefer to maintain parity between their data protection policies. For instance, Microsoft has announced that it will extend core GDPR protections to its users worldwide. As a consequence, many of the protections offered by the GDPR may in effect become available to users in other jurisdictions as well.

The implementation of the GDPR is also of particular significance to India, which is currently in the process of formulating its own data protection framework. The Regulation represents a recent attempt by a jurisdiction (that typically places a high premium on privacy) to address the harms caused by practices surrounding personal data. The lead-up to its adoption and implementation has generated much discourse on data protection and privacy. This can offer useful lessons as we debate the scope and ambit of our own data protection regulation.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi