The Right to be Forgotten – Examining Approaches in Europe and India

This is a guest post authored by Aishwarya Giridhar.

How far does the right to control personal information about oneself extend online? Would it extend, for example, to having one’s name erased from a court order appearing in online search results, or to having pictures or videos that were shared online non-consensually, as in cases of revenge pornography or sexual violence, taken down? These are some questions that have come up in Indian courts, and some of the issues that jurisprudence on the ‘right to be forgotten’ seeks to address. This right is derived from the concepts of personal autonomy and informational self-determination, which are core aspects of the right to privacy. They were integral to the Indian Supreme Court’s conception of privacy in Puttaswamy v. Union of India, which held that privacy is a fundamental right guaranteed by the Indian Constitution. However, privacy is not an absolute right and must be balanced against other rights such as freedom of expression and access to information, and the right to be forgotten tests how far the right to privacy extends.

On a general level, the right to be forgotten enables individuals to have personal information about themselves removed from publicly available sources under certain circumstances. This post examines the right to be forgotten under the General Data Protection Regulation (GDPR) in Europe, and the draft Personal Data Protection Bill, 2019 (PDP Bill) in India.

What is the right to be forgotten?

The right to be forgotten was brought into prominence in 2014 when the European Court of Justice (ECJ) held, in Google Spain v. AEPD and Mario Costeja González, that users can require search engines to remove personal data from search results where the linked websites contain information that is “inadequate, irrelevant or no longer relevant, or excessive.” The Court recognised that search engines have the ability to significantly affect a person’s right to privacy, since they allow any Internet user to obtain a wide range of information on a person’s life that would have been much harder, or even impossible, to find without the search engine.

The GDPR provides statutory recognition to the right to be forgotten in the form of a ‘right to erasure’ (Article 17). It provides data subjects the right to request controllers to erase personal data in some circumstances, such as when the data is no longer needed for their original processing purpose, or when the data subject has withdrawn her consent or objected to data processing. In this context, the data subject is the person to whom the relevant personal data relates, and the controller is the entity which determines how and why the data would be processed. Under this provision, the controller would be required to assess whether to keep or remove information when it receives a request from data subjects.

In comparison, clause 20 of India’s Personal Data Protection Bill (PDP Bill), which proposes a right to be forgotten, allows data principals (similar to data subjects) to require data fiduciaries (similar to data controllers) to restrict or prevent the disclosure of personal information. This is possible where such disclosure is no longer necessary, was made on the basis of consent which has since been withdrawn, or was made contrary to law. Unlike the GDPR, the PDP Bill requires data principals to approach Adjudicating Officers appointed under the legislation to request restricted disclosure of personal information. The rights provided under both the GDPR and the PDP Bill are not absolute, and are limited by the freedom of speech and information and other specified exceptions. Under the PDP Bill, for example, some of the factors the Adjudicating Officer is required to account for are the sensitivity of the data, the scale of disclosure and the degree of restriction sought, the role of the data principal in public life, and the relevance of the data to the public.

Although the PDP Bill, if passed, would be the first legislation to recognise this right in India, courts have provided remedies that allow for removing personal information in some circumstances. Petitioners have approached courts for removing information in cases ranging from matrimonial disputes to defamation and information affecting employment opportunities, and courts have sometimes granted the requested reliefs. Courts have also acknowledged the right to be forgotten in some cases, although there have been conflicting orders on whether a person can have personal information redacted from judicial decisions available on online repositories and other sources. In November last year, the Orissa High Court also highlighted the importance of the right to be forgotten for persons whose photos and videos have been uploaded online without their consent, especially in cases of sexual violence. These cases also highlight why it is essential that this right be provided by statute, so that the extent of the protections it offers, as well as the relevant safeguards, can be clearly defined.

Intersections with access to information and free speech

The most significant criticisms of the right to be forgotten stem from its potential to restrict speech and access to information. Critics are concerned that this right will lead to widespread censorship, the whitewashing of personal histories when it comes to past crimes and information on public figures, and a less free and open Internet. There are also concerns that global takedowns of information, if required by national laws, can severely restrict speech and serve as a tool of censorship. Operationalising this right can also lead to other issues in practice.

For instance, the right framed under the GDPR requires private entities to balance the right to privacy with the larger public interest and the right to information. Two cases decided by the ECJ in 2019 provided some clarity on the obligations of search engines in this context. In the first, the Court clarified that controllers are not under an obligation to apply the right globally and that removing search results for domains in the EU would suffice. However, it left the option open for countries to enact laws that would require global delisting. In the second case, among other issues, the Court identified some factors that controllers would need to account for in considering requests for delisting. These included the nature of the information, the public’s interest in having that information, and the role the data subject plays in public life, among others. Guidelines framed by the Article 29 Working Party, set up under the GDPR’s precursor, also provide limited, non-binding guidance for controllers in assessing which requests for delisting are valid.

Nevertheless, the balance between the right to be forgotten and competing considerations can still be difficult to assess on a case-by-case basis. This issue is compounded by concerns that data controllers would be incentivised to over-remove content to shield themselves from liability, especially where they have limited resources. While larger entities like Google may have the resources to invest in assessing claims under the right to be forgotten, this will not be possible for smaller platforms. There are also concerns that requiring private parties to make such assessments amounts to the ‘privatisation of regulation’, and the limited potential for transparency on erasures removes an important check against over-removal of information.

As a result of some of this criticism, the right to be forgotten is framed differently under the PDP Bill in India. Unlike the GDPR, the PDP Bill requires Adjudicating Officers, and not data fiduciaries, to assess whether the rights and interests of the data principal in restricting disclosure override others’ rights to information and free speech. Adjudicating Officers are required to have special knowledge of, or professional experience in, areas relating to law and policy, and the terms of their appointment would have to ensure their independence. While they seem better suited to make this assessment than data fiduciaries, much of how this right is implemented will depend on whether the Adjudicating Officers are able to function truly independently and are adequately qualified. Additionally, this system is likely to lead to long delays in assessment, especially if the quantum of requests is similar to that in the EU. It will also not address the issues with transparency highlighted above. Moreover, the PDP Bill is not finalised and may change significantly, since the Joint Parliamentary Committee that is reviewing it is reportedly considering substantial changes to its scope.

What is clear is that there are no easy answers when it comes to providing the right to be forgotten. It can provide a remedy in some situations where people do not currently have recourse, such as with revenge pornography or other non-consensual use of data. However, when improperly implemented, it can significantly hamper access to information. Drawing lessons from how this right is evolving in the EU can prove instructive for India. Although the assessment of whether or not to delist information will always be subjective to some extent, there are steps that can be taken to provide clarity on how such determinations are made. Clearly outlining the scope of the right in the relevant legislation, and developing substantive standards, aimed at protecting access to information, that can be used in assessing whether to remove information, are some measures that can help strike a better balance between privacy and competing considerations.

Addition of US Privacy Cases on the Privacy Law Library

This post is authored by Swati Punia.

We are excited to announce the addition of privacy jurisprudence from the United States Supreme Court to the Privacy Law Library. These cases cover a variety of subject areas, from the right against intrusive search and seizure to the right to abortion and the right to sexual intimacy/relationships. You may access all the US cases on our database here.

(The Privacy Law Library is our global database of privacy law and jurisprudence, currently containing cases from India, Europe (ECJ and ECtHR), the United States, and Canada.)

The Supreme Court of the US (SCOTUS) has carved out the right to privacy from various provisions of the US constitution, particularly the first, fourth, fifth, ninth and fourteenth amendments to the US constitution. The Court has included the right to privacy in varying contexts through an expansive interpretation of the constitutional provisions. For instance, the Court has read privacy rights into the first amendment for protecting private possession of obscene material from State intrusion; the fourth amendment for protecting privacy of the person and possessions from unreasonable State intrusion; and the fourteenth amendment which recognises an individual’s decisions about abortion and family planning as part of their right of liberty that encompasses aspects of privacy such as dignity and autonomy under the amendment’s due process clause.

The right to privacy is not expressly provided for in the US constitution. However, the Court identified an implicit right to privacy, for the very first time, in Griswold v. Connecticut (1965) in the context of the right to use contraceptives/marital privacy. Since then, the Court has extended the scope of the right to include, inter alia, a reasonable expectation of privacy against State intrusion in Katz v. United States (1967), abortion rights of women in Roe v. Wade (1973), and the right to sexual intimacy between consenting adults of the same sex in Lawrence v. Texas (2003).

The US privacy framework consists of several privacy laws and regulations developed at both the federal and state levels. As of now, US privacy laws are primarily sector specific; there is no single comprehensive federal data protection law like the European Union’s General Data Protection Regulation (GDPR) or the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA). However, certain states, like California, have enacted comprehensive privacy laws comparable to the GDPR and PIPEDA. The California Consumer Privacy Act (CCPA), which came into effect on January 1, 2020, aims to protect consumers’ privacy across industries. It codifies certain rights and remedies for consumers, and obligations for entities/businesses. One of its main aims is to give consumers more control over their data by obligating businesses to ensure transparency about how they collect, use, share and sell consumer data.

To know more about the status of the right to privacy in the US, refer to our page here. Some of the key privacy cases from the SCOTUS on our database are Griswold v. Connecticut, Time Inc. v. Hill, Roe v. Wade, Katz v. United States, and Stanley v. Georgia.

A Brief Look at the Tamil Nadu Cyber Security Policy 2020

This post is authored by Sharngan Aravindakshan.

The Tamil Nadu State Government (State Government) released the Tamil Nadu Cyber Security Policy 2020 (TNCS Policy) on September 19, 2020. It has been prepared by the Electronics Corporation of Tamil Nadu (ELCOT), a public sector undertaking which operates under the aegis of the Information Technology Department of the Government of Tamil Nadu. This post takes a brief look at the TNCS Policy and its impact on India’s cybersecurity health.

The TNCS Policy is divided into five chapters –

  1. Outline of Cyber Security Policy;
  2. Security Architecture Framework – Tamil Nadu (SAF-TN);
  3. Best Practices – Governance, Risk Management and Compliance;
  4. Computer Emergency Response Team – Tamil Nadu (CERT-TN); and
  5. Cyber Crisis Management Plan.

Chapter-I, titled ‘Outline of Cyber Security Policy’, contains a preamble highlighting the need for the State Government to have a cyber security policy. Chapter-I also lays out the scope and applicability of the TNCS Policy: it applies to ‘government departments and associated agencies’, and covers ‘Information Assets that may include Hardware, Applications and Services provided by these Agencies to other Government Departments, Industry or Citizens’. It also applies to ‘private agencies that are entrusted with State Government work’ (e.g. contractors), as well as ‘Central Infrastructure and Personnel’ who provide services to the State Government, which is likely a reference to Central Government agencies and personnel.

Notably, the TNCS Policy does not define ‘cyber security’, choosing instead to define ‘information security management’ (ISM). ISM is defined as involving the “planning, implementation and continuous Security controls and measures to protect the confidentiality, integrity and availability of Information Assets and its associated Information Systems”. Further, it states that information security management also includes the following elements –

(a) Security Architecture Framework – SAF-TN;

(b) Best Practices for Governance, Risk Management and Compliance (GRC);

(c) Security Operations – SOC-TN;

(d) Incident Management – CERT-TN;

(e) Awareness Training and Capability Building;

(f) Situational awareness and information sharing.

The Information Technology Department, which is the nodal department for IT security in Tamil Nadu, has been assigned several duties with respect to cyber security, including establishing and operating a ‘Cyber Security Architecture for Tamil Nadu’ (CSA-TN), a Security Operations Centre (SOC-TN) and a state Computer Emergency Response Team (CERT-TN). Its other duties include providing safe hosting for the servers, applications and data of various departments/agencies, advising on government procurement of IT and ITES, conducting training programmes on cyber security, and formulating cyber security-related policies for the State Government. Importantly, the TNCS Policy also mentions the formulation of a ‘recommended statutory framework for ensuring legal backing of the policies’. While, prima facie, it seems that cyber security will be subject to more Central control than State control, given the nature of these documents any direct conflict is in any case unlikely.

Chapter-II gives a break-up of the Cyber Security Architecture of Tamil Nadu (CSA-TN). The CSA-TN’s constituent components are (a) Security Architecture Framework (SAF-TN), (b) Security Operations Centre (SOC-TN), (c) Cyber Crisis Management Plan (CCMP-TN) and (d) the Computer Emergency Response Team (CERT-TN). It clarifies that the “Architecture” defines the overall scope of authority of the cyber security-related agencies in Tamil Nadu, and also that while the policy will remain consistent, the Architecture will be dynamic to meet evolving technological challenges.

Chapter-III deals with best practices in governance, risk management and compliance, and broadly covers procurement policies, e-mail retention policies, social media policies and password policies for government departments and entities. With respect to procurement policies, it highlights certain objectives, such as building trusted relationships with vendors for improving end-to-end supply chain security visibility and encouraging entities to adopt guidelines for the procurement of trustworthy ICT products. However, the TNCS Policy also specifies that it is not meant to infringe or supersede existing policies such as procurement policies.

On the subject of e-mails, it emphasizes standardizing e-mail retention periods on account of the “need to save space on e-mail server(s)” and the “need to stay in line with Federal and Industry Record e-Keeping Regulations”. E-mail hygiene has proved to be essential, especially for government organizations, given that the malware discovered in one of the nuclear facilities situated in Tamil Nadu is believed to have entered its systems through a phishing email. Surprisingly, however, other than e-mail retention, the TNCS Policy does not deal with e-mail safety practices. For instance, the Information Security Best Practices released by the Ministry of Home Affairs provide a more comprehensive list of good practices, including specific sections on email communications and social engineering. These do not find mention in the TNCS Policy.
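The retention rule described above amounts to a simple cutoff check over stored messages. As a minimal sketch, assuming a hypothetical 365-day retention period (the post does not specify the figure the policy standardises on) and an invented `emails_to_purge` helper:

```python
from datetime import datetime, timedelta, timezone

# Assumed retention period for illustration only; the TNCS Policy
# standardises e-mail retention without this post specifying a duration.
RETENTION_DAYS = 365

def emails_to_purge(emails, now=None):
    """Return the IDs of emails older than the retention window.

    `emails` is an iterable of (message_id, received_at) pairs,
    where `received_at` is a timezone-aware datetime.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [mid for mid, received_at in emails if received_at < cutoff]
```

A real deployment would of course run such a purge on the mail server itself and log what was deleted for compliance purposes; this only illustrates the cutoff logic.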

On social media policies, the TNCS Policy makes it clear that it prioritizes the ‘online reputation’ of its departments. Employees are advised not to react online themselves, but to pass such information on to the official spokesperson for an appropriate response. The TNCS Policy also counsels proper disclosure where personal information is collected through online social media platforms. Some best practices for safe passwords are also detailed, such as password age (no reuse of any of the last ten passwords, etc.) and length (passwords may be required to have a minimum number of characters, etc.).

Chapter-IV highlights the roles and responsibilities of the Computer Emergency Response Team – Tamil Nadu (CERT-TN). It specifies that CERT-TN is the nodal agency responsible for implementing the Security Architecture Framework, for monitoring, detecting, assessing and responding to cyber vulnerabilities, threats and incidents, and for demonstrating cyber resilience. The policy also recognizes CERT-TN as the statutory body authorized to issue directives, guidelines and advisories to government departments. It will also establish, operate and maintain the Information Security Management systems for the State Government.

CERT-TN will also coordinate with the National or State Computer Security Incident Response Teams (CSIRTs), government agencies, law enforcement agencies, and research labs. However, the “Coordination Centre” (CoC) is the designated nodal intermediary between the CERT-TN and governmental departments, CERT-In, State CERTs, etc. under the TNCS Policy. The CoC will also be responsible for monitoring responses to service requests, delivery timelines and other performance-related issues for the CERT-TN. The TNCS Policy makes it clear that Incident Handling and Response (IHR) will be as per Standard Operation Process Manuals (prepared by CERT-TN) that will be regularly reviewed and updated. “Criticality of the affected resource” will determine the priority of the incident.

Significantly, Chapter-IV also deals with vulnerability disclosures and states that vulnerabilities in e-Governance services will only be reported to CERT-TN or the respective department if they relate to e-Governance services offered by the Government of Tamil Nadu, and will not be publicly disclosed until a resolution is found. Other vulnerabilities may be disclosed to the respective vendors as well. An upper limit of 30 days is prescribed for resolving reported vulnerabilities. An ‘Incident Reporter’ reporting in good faith will not be penalized “provided he cooperates with the stakeholders in resolving the vulnerability and minimizing the impact”, and the Incident Reporter’s contribution in vulnerability discovery and resolution will be publicly credited by CERT-TN.

Chapter-IV also mandates regular security assessments of the State Government’s departmental assets, a help-desk for reporting cyber incidents, and training and awareness both for CERT-TN and by CERT-TN for other departments. Departments will also be graded by “maturity of Cyber Security Practices and Resilience Strength by the Key Performance Indicators”. However, these indicators are not specified in the policy itself.

Chapter-V is titled ‘Cyber Crisis Management Plan’ (CCMP) and is meant for countering cyber-attacks and cyber terrorism. It envisages establishing, in the form of guidelines, a strategic framework and actions to prepare for, respond to, and begin to coordinate recovery from a cyber-incident. ‘Detect’(ing) cyber-incidents is noticeably absent from this list of verbs, especially considering the first chapter’s emphasis on the CERT-TN’s role in “Monitoring, Detecting, Assessing and Responding” to cyber vulnerabilities and incidents.

In conformity with CERT-In’s Cyber Crisis Management Plan for Countering Cyber Attacks and Cyber Terrorism, which requires ministries/departments of State governments and Union Territories to draw up their own sectoral Cyber Crisis Management Plans in line with CERT-In’s plan, the TNCS Policy establishes the institutional architecture for implementing such a plan. The TNCS Policy contemplates a ‘Crisis Management Group’ (CMG) for each department, constituted by the Secretary to the Government (Chairman), the heads of all organizations under the administrative control of the department, and the Chief Information Security Officers (CISO)/Deputy CISOs within the department. It will be the task of the CMG to prepare a contingency plan in consultation with CERT-In, as well as to coordinate with CERT-In in crisis situations. The TNCS Policy also envisions a ‘Crisis Management Cell’ (CMC) under the supervision of the CMG, constituted by the head of the organization, the CISO, the head of HR/admin and the person in charge of the IT Section. The TNCS Policy also requires each organization to nominate a CISO, preferably a senior officer with adequate IT experience. The CMC’s priority is to prepare a plan that would ensure continuity of operations and speedy restoration of an acceptable level of service.

Observations

The TNCS Policy is a positive step, with a whole-of-government approach towards increasing governmental cyber security at the State level. However, its applicability is restricted to governmental departments and their suppliers/vendors/contractors. It does not, therefore, view cyber security as a broader ecosystem that requires each of its stakeholders, including the public sector, private sector, NGOs and academia, to play a role in maintaining its security, nor does it recognize their mutual interdependence as a key feature of this domain.

Given the interconnected nature of cyberspace, cyber security cannot be achieved only through securing governmental assets. As both the ITU National Cybersecurity Strategy Guide and the NATO CCDCOE Guidelines recommend, it requires the creation and active participation of an equally robust private industry, and other stakeholders. The TNCS Policy does not concern itself with the private sector at large, beyond private entities working under governmental contracts. It does not set up any initiatives, nor does it create any incentives for its development. It also does not identify any major or prevalent cyber threats, specify budget allocation for implementing the policy or establish R&D initiatives at the state level. No capacity building measures are provided for, beyond CERT-In’s training and awareness programs.

Approaching cyber security as an ecosystem whose maintenance requires the participation and growth of several stakeholders, including the private sector and civil society organisations, and then using a combination of regulation and incentives, may be the better way.

CJEU sets limits on Mass Communications Surveillance – A Win for Privacy in the EU and Possibly Across the World

This post has been authored by Swati Punia.

On 6 October 2020, the European Court of Justice (ECJ/Court) delivered its much anticipated judgments in the consolidated matter of C-623/17, Privacy International, from the UK, and the joined cases from France, C-511/18, La Quadrature du Net and others, and C-512/18, French Data Network and others, and Belgium, C-520/18, Ordre des barreaux francophones et germanophone and others (collectively, the “Bulk Communications Surveillance Judgments”).

In this post, I briefly discuss the Bulk Communications Surveillance Judgments and their significance for other countries and for India.

Through these cases, the Court invalidated the disproportionate interference by Member States with the rights of their citizens, as provided by EU law, in particular the Directive on privacy and electronic communications (e-Privacy Directive) and European Union’s Charter of Fundamental Rights (EU Charter). The Court assessed the Member States’ bulk communications surveillance laws and practices relating to their access and use of telecommunications data. 

The Court recognised the importance of the State’s positive obligations towards conducting surveillance, although it noted that it was essential for surveillance systems to conform with the general principles of EU law and the rights guaranteed under the EU Charter. It laid down clear principles and measures as to when and how national authorities could access and use telecommunications data (further discussed in the sections ‘The UK Judgment’ and ‘The French and Belgian Judgment’). It carved out a few exceptions as well (in the joined cases of France and Belgium) for emergency situations, but held that such measures would have to pass the threshold of being serious and genuine (further discussed in the section ‘The French and Belgian Judgment’).

The Cases in Brief 

The Court delivered two separate judgments, one in the UK case and one in the joined cases of France and Belgium. Since these cases raised similar sets of issues, the proceedings were joined. The UK application challenged the bulk acquisition and use of telecommunications data by its Security and Intelligence Agencies (SIAs) in the interest of national security (under the UK’s Telecommunications Act 1984). The French and Belgian applications challenged indiscriminate data retention and access by SIAs for combating crime.

The French and Belgian applications questioned the legality of their respective data retention laws (numerous domestic surveillance laws permitting bulk collection of telecommunications data), which imposed blanket obligations on Electronic Communications Service Providers (ECSPs) to provide relevant data. The Belgian law required ECSPs to retain various kinds of traffic and location data for a period of 12 months, whereas the French law provided for automated analysis and real-time data collection measures for preventing terrorism. The French application also raised the issue of notifying the person under surveillance.

The Member States contended that such surveillance measures enabled them to, inter alia, safeguard national security, prevent terrorism, and combat serious crimes. Hence, they claimed that the e-Privacy Directive did not apply to their surveillance laws and activities.

The UK Judgment

The ECJ found the UK surveillance regime unlawful and inconsistent with EU law, and specifically the e-Privacy Directive. The Court analysed the scope and scheme of the e-Privacy Directive with regard to the exclusion of certain State purposes such as national and public security, defence, and criminal investigation. Noting the importance of such State purposes, it held that EU Member States could adopt legislative measures restricting the scope of the rights and obligations (Articles 5, 6 and 9) provided in the e-Privacy Directive. However, this was allowed only if the Member States complied with the requirements laid down by the Court in Tele2 Sverige and Watson and Others (C-203/15 and C-698/15) (Tele2) and with the e-Privacy Directive. In addition, the Court held that the EU Charter must also be respected. In Tele2, the ECJ held that legislative measures obligating ECSPs to retain data must be targeted and limited to what was strictly necessary. Such targeted retention had to be with regard to specific categories of persons and data, for a limited time period. Access to the data must also be subject to prior review by an independent body.

The e-Privacy Directive ensures the confidentiality of electronic communications and the data relating to them (Article 5(1)). It allows ECSPs to retain metadata (context-specific data relating to the users and subscribers, location and traffic) for various purposes such as billing, value added services and security. However, this data must be deleted or made anonymous once the purpose is fulfilled, unless a law allows for a derogation for State purposes. The e-Privacy Directive allows the Member States to derogate (Article 15(1)) from the principle of confidentiality and the corresponding obligations (contained in Article 6 (traffic data) and Article 9 (location data other than traffic data)) for certain State purposes when it is appropriate, necessary and proportionate.

The Court clarified that measures undertaken for the purpose of national security would not make EU law inapplicable or exempt the Member States from their obligation to ensure confidentiality of communications under the e-Privacy Directive. Hence, an independent review of surveillance activities such as data retention for indefinite time periods, or further processing or sharing, must be conducted for authorising such activities. It was noted that the domestic law at present did not provide for prior review as a limit on the above-mentioned surveillance activities.

The French and Belgian Judgment

While assessing the joined cases, the Court arrived at a determination in similar terms as the UK case. It reiterated that the exception (Article 15(1) of the e-Privacy Directive) to the principle of confidentiality of communications (Article 5(1) of the e-Privacy Directive) should not become the norm. Hence, national measures that provided for general and indiscriminate data retention and access for State purposes were held to be incompatible with EU law, specifically the e-Privacy Directive.

The Court in the joined cases, unlike the UK case, allowed specific derogations for State purposes such as safeguarding national security, combating serious crimes and preventing serious threats. It laid down certain requirements that the Member States had to comply with in case of derogations: the derogations should (1) be clear and precise as to the stated objective; (2) be limited to what is strictly necessary, and for a limited time period; (3) have a safeguards framework, including substantive and procedural conditions, to regulate such instances; and (4) include guarantees to protect the concerned individuals against abuse. They should also be subject to an ‘effective review’ by a court or an independent body, and must comply with the general rules and proportionality principles of EU law and the rights provided in the EU Charter.

The Court held that in establishing a minimum threshold for a safeguards framework, the EU Charter must be interpreted along with the European Convention on Human Rights (ECHR). This would ensure consistency between the rights guaranteed under the EU Charter and the corresponding rights guaranteed in the ECHR (as per Article 52(3) of the EU Charter).

The Court, in particular, allowed for general and indiscriminate data retention in cases of serious threat to national security. Such a threat should be genuine, and present or foreseeable. Real-time data collection and automated analysis were allowed in such circumstances, but real-time data collection should be limited to persons suspected of terrorist activities. Moreover, it should be limited to what was strictly necessary and subject to prior review. The Court even allowed for general and indiscriminate retention of IP addresses for the purposes of national security, combating serious crime and preventing serious threats to public security, provided such retention was limited in time to what was strictly necessary. For such purposes, the Court also permitted ECSPs to retain data relating to the identity particulars of their customers (such as name, postal and email/account addresses and payment details) in a general and indiscriminate manner, without specifying any time limitation. 

The Court allowed targeted data retention for the purpose of safeguarding national security and preventing crime, provided that it was for a limited time period and strictly necessary and was done on the basis of objective and non-discriminatory factors. It was held that such retention should be specific to certain categories of persons or geographical areas. The Court also allowed, subject to effective judicial review, expedited data retention after the initial retention period ended, to shed light on serious criminal offences or acts affecting national security. Lastly, in the context of criminal proceedings, the Court held that it was for the Member States to assess the admissibility of evidence resulting from general and indiscriminate data retention. However, the information and evidence must be excluded where it infringes on the right to a fair trial. 

Significance of the Bulk Communication Surveillance Judgments

With these cases, the ECJ decisively resolved a long-standing discord between the Member States and privacy activists in the EU. For a while now, the Court has been dealing with questions relating to surveillance programs for national security and law enforcement purposes. Though the Member States have largely considered these programs outside the ambit of EU privacy law, the Court has been expanding the scope of privacy rights. 

In its ruling in Privacy International, the Court considered it necessary to place limitations and controls on State powers in democratic societies. This decision may act as a trigger for surveillance reform in many parts of the world, and more specifically for those countries aspiring to attain EU adequacy status. India could benefit immensely should it choose to pay heed. 

As of date, India does not have a comprehensive surveillance framework. Various provisions of the Personal Data Protection Bill, 2019 (Bill), the Information Technology Act, 2000, the Telegraph Act, 1885, and the Code of Criminal Procedure, 1973 provide for targeted surveillance measures. The Bill grants wide powers to the executive (under Clauses 35, 36 and 91 of the Bill) to access personal and non-personal data in the absence of proper and necessary safeguards. This may pose problems for achieving EU adequacy status under Article 45 of the EU General Data Protection Regulation (GDPR), which assesses the personal data management rules of third-party countries. 

Recent news reports suggest that the Bill, which is under legislative consideration, is likely to undergo a significant overhaul. India could use this as an opportunity to introduce meaningful changes in the Bill as well as in its surveillance regime. India’s privacy framework could be strengthened by adhering to the principles outlined in the Justice K.S. Puttaswamy v. Union of India judgment and the Bulk Communications Surveillance Judgments.

Building an AI Governance Framework for India, Part III

Embedding Principles of Privacy, Transparency and Accountability

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Aayog Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation that was held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG’s comments and analysis on the Working Document can be accessed here.

In our first post in the series, ‘Building an AI governance framework for India’, we discussed the legal and regulatory implications of the Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its constitutional framework, and (2) based on clearly articulated overarching ‘Principles for Responsible AI’. Part II of the series discussed specific Principles for Responsible AI – Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. We explored the constituent elements of these principles and the avenues for incorporating them into the Indian regulatory framework. 

In this final post of the series, we will discuss the remaining principles of Privacy, Transparency and Accountability. 

Principle of Privacy 

Given the diversity of AI systems, the privacy risks which they pose to individuals, and to society as a whole, are also varied. These may be broadly related to: 

(i) Data protection and privacy: This relates to privacy implications of the use of data by AI systems and subsequent data protection considerations which arise from this use. There are two broad aspects to think about in terms of the privacy implications from the use of data by AI systems. Firstly, AI systems must be tailored to the legal frameworks for data protection. Secondly, given that AI systems can be used to re-identify anonymised data, the mere anonymisation of data for the training of AI systems may not provide adequate levels of protection for the privacy of an individual.

a) Data protection legal frameworks: Machine learning and AI technologies have existed for decades; however, it is the explosion in the availability of data that accounts for the advancement of AI technologies in recent years. Machine learning and AI systems depend upon data for their training. Generally, the more data the system is given, the more it learns and, ultimately, the more accurate it becomes. The application of existing data protection frameworks to the use of data by AI systems may raise challenges. 

In the Indian context, the Personal Data Protection Bill, 2019 (PDP Bill), currently being considered by Parliament, contains some provisions that may apply to some aspects of the use of data by AI systems. One such provision is Clause 22 of the PDP Bill, which requires data fiduciaries to incorporate the seven ‘privacy by design’ principles and embed privacy and security into the design and operation of their product and/or network. However, given that AI systems rely significantly on anonymised personal data, their use of data may not fall squarely within the regulatory domain of the PDP Bill. The PDP Bill does not apply to the regulation of anonymised data at large but the Data Protection Authority has the power to specify a code of practice for methods of de-identification and anonymisation, which will necessarily impact AI technologies’ use of data.

b) Use of AI to re-identify anonymised data: AI applications can be used to re-identify anonymised personal data. To safeguard the privacy of individuals, datasets composed of personal data are often anonymised through a de-identification and sampling process before they are shared for the purposes of training AI systems. However, current technology makes it possible for AI systems to reverse this process of anonymisation and re-identify people, with significant implications for the privacy of an individual’s personal data. 
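As a simplified illustration of how re-identification can work even without sophisticated AI, the sketch below performs a basic ‘linkage attack’: an ‘anonymised’ dataset is joined to a public record on shared quasi-identifiers. All names, datasets and field names here are entirely made up for illustration; real re-identification systems operate through statistical inference at far larger scale.

```python
# Hypothetical 'anonymised' records: names removed, but quasi-identifiers
# (postcode, birth year, gender) remain.
anonymised_health_records = [
    {"postcode": "110001", "birth_year": 1985, "gender": "F", "diagnosis": "diabetes"},
    {"postcode": "560034", "birth_year": 1990, "gender": "M", "diagnosis": "asthma"},
]

# Hypothetical publicly available dataset containing the same quasi-identifiers.
public_voter_roll = [
    {"name": "A. Sharma", "postcode": "110001", "birth_year": 1985, "gender": "F"},
    {"name": "R. Rao", "postcode": "560034", "birth_year": 1990, "gender": "M"},
]

def reidentify(records, public_data):
    """Link 'anonymised' records to named individuals via shared quasi-identifiers."""
    keys = ("postcode", "birth_year", "gender")
    matches = []
    for record in records:
        for person in public_data:
            if all(record[k] == person[k] for k in keys):
                # The sensitive attribute (diagnosis) is now tied to a name.
                matches.append({"name": person["name"], **record})
    return matches

print(reidentify(anonymised_health_records, public_voter_roll))
```

The point of the sketch is that removing direct identifiers alone does not anonymise data: combinations of seemingly innocuous attributes can uniquely identify individuals once an auxiliary dataset is available.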

(ii) Impact on society: The impact of the use of AI systems on society essentially relates to broader privacy considerations that arise at a societal level due to the deployment and use of AI, including mass surveillance, psychological profiling, and the use of data to manipulate public opinion. Facial recognition surveillance is one such use of AI that has significant privacy implications for society as a whole. Such technology enables individuals to be easily tracked and identified, and has the potential to significantly transform expectations of privacy and anonymity in public spaces. 

Due to the varying nature of the privacy risks and implications posed by AI systems, various regulatory mechanisms will have to be designed to address these concerns. It is important to put in place a reporting and investigation mechanism that collects and analyses information on privacy impacts caused by the deployment of AI systems, and on privacy incidents that occur in different contexts. The collection of this data would allow actors across the globe to identify common threads of failure and mitigate potential privacy failures arising from the deployment of AI systems. 

To this end, we can draw on a mechanism that is currently in place in the context of reporting and investigating aircraft incidents, as detailed under Annex 13 to the Convention on International Civil Aviation (Chicago Convention). It lays down the procedure for investigating aviation incidents and a reporting mechanism to share information between countries. The aim of such an accident investigation is not to apportion blame or liability, but rather to extensively study the cause of the accident and prevent future incidents. 

A similar incident investigation mechanism may be employed for AI incidents involving privacy breaches. With many countries now widely developing and deploying AI systems, such a model of incident investigation would ensure that countries can learn from each other’s experiences and deploy more privacy-secure AI systems.

Principle of Transparency

The concept of transparency is a recognised prerequisite for the realisation of ‘trustworthy AI’. The goal of transparency in ethical AI is to ensure that the functioning of an AI system and its resultant outcomes are non-discriminatory, fair and bias-mitigating, and that the AI system inspires public confidence in the delivery of safe and reliable AI innovation and development. Additionally, transparency is important in ensuring better adoption of AI technology: the more users feel that they understand the overall AI system, the more inclined and better equipped they are to use it.

The level of transparency must be tailored to its intended audience. Information about the working of an AI system should be contextualised to the various stakeholder groups interacting with and using the AI system. The Institute of Electrical and Electronics Engineers, a global professional organisation of electronic and electrical engineers, has suggested that different stakeholder groups may require varying levels of transparency. This means that groups such as users, incident investigators, and the general public would require different standards of transparency depending upon the nature of the information relevant to their use of the AI system.

Presently, many AI algorithms are black boxes: automated decisions are taken based on machine learning over training datasets, and the decision-making process is not explainable. When such AI systems produce a decision, human end users do not know how the system arrived at its conclusions. This raises two major transparency problems: the public perception and understanding of how AI works, and how much developers themselves actually understand about their own AI system’s decision-making process. In many cases, developers may not know, or be able to explain, how an AI system draws conclusions or how it has arrived at certain solutions.

This results in a lack of transparency. Some organisations have suggested opening up AI algorithms for scrutiny and ending reliance on opaque algorithms. On the other hand, the NITI Working Document is of the view that disclosing the algorithm is not the solution and instead, the focus should be on explaining how the decisions are taken by AI systems. Given the challenges around explainability discussed above, it will be important for NITI Aayog to discuss how such an approach will be operationalised in practice.

While many countries and organisations are researching different techniques that may be useful in increasing the transparency of an AI system, one common suggestion that has gained traction in the last few years is the introduction of labelling mechanisms in AI systems. An example of this is Google’s proposal to use ‘Model Cards’, which are intended to clarify the scope of an AI system’s deployment and minimise its usage in contexts for which it may not be well suited. 

Model cards are short documents which accompany a trained machine learning model. They enumerate the benchmarked evaluation of the working of an AI system in a variety of conditions, across different cultural, demographic, and intersectional groups which may be relevant to the intended application of the AI system. They also contain clear information on an AI system’s capabilities including the intended purpose for which it is being deployed, conditions under which it has been designed to function, expected accuracy and limitations. Adopting model cards and other similar labelling requirements in the Indian context may be a useful step towards introducing transparency into AI systems. 
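As a rough sketch of what such a labelling artefact might contain, the snippet below models a hypothetical model card as a plain Python dictionary. The field names, the model, and all figures are illustrative assumptions for this post, not Google’s actual schema; the point is that benchmarked per-group performance and stated limitations become machine-readable and auditable.

```python
# A minimal, hypothetical model card for an imagined AI system.
model_card = {
    "model_name": "loan-eligibility-classifier-v2",   # hypothetical model
    "intended_use": "Pre-screening of loan applications for human review",
    "out_of_scope_uses": ["Fully automated loan denial without human review"],
    "training_data": "Applications from 2015-2019; urban branches only",
    "performance": {
        # Benchmarked evaluation across demographic groups relevant
        # to the intended deployment context (illustrative numbers).
        "overall_accuracy": 0.91,
        "accuracy_by_group": {"men": 0.93, "women": 0.87},
    },
    "limitations": "Lower accuracy for applicants with thin credit files",
}

# Because performance is reported per group, a deployment review can
# mechanically flag disparities instead of relying on disclosure in prose.
scores = model_card["performance"]["accuracy_by_group"].values()
gap = max(scores) - min(scores)
print(f"Largest inter-group accuracy gap: {gap:.2f}")
```

A regulator or auditor could require such cards to accompany deployed systems, making checks like the one above part of a routine pre-deployment or incident-investigation process.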

Principle of Accountability

The Principle of Accountability aims to recognise the responsibility of the different organisations and individuals that develop, deploy and use AI systems. Accountability is about responsibility, answerability and trust. There is no one standard form of accountability; rather, it is dependent upon the context of the AI system and the circumstances of its deployment.

Holding individuals and entities accountable for harm caused by AI systems poses significant challenges, as AI systems generally involve multiple parties at various stages of the development process. The regulation of the adverse impacts caused by AI systems often goes beyond the existing regimes of tort law, privacy law or consumer protection law. Some degree of accountability can be achieved by enabling greater human oversight. In order to foster trust in AI and appropriately determine the party who is accountable, it is necessary to build a set of shared principles that clarify the responsibilities of each stakeholder involved with the research, development and implementation of an AI system, ranging from developers and service providers to end users.

Accountability has to be ensured at the following stages of an AI system: 

(i) Pre-deployment: It would be useful to implement an audit process before the AI system is deployed. A potential mechanism for implementing this could be a multi-stage audit process which is undertaken post design, but before the deployment of the AI system by the developer. This would involve scoping, mapping and testing a potential AI system before it is released to the public. This can include ensuring risk mitigation strategies for changing development environments and ensuring documentation of policies, processes and technologies used in the AI system.

Depending on the nature of the AI system and the potential for risk, regulatory guidelines can be developed prescribing the involvement of various categories of auditors such as internal, expert third party and from the relevant regulatory agency, at various stages of the audit. Such audits which are conducted pre-deployment are aimed at closing the accountability gap which exists currently.

(ii) During deployment: Once the AI system has been deployed, it is important to keep auditing it to track the changes and evolution it undergoes in the course of its deployment. AI systems constantly learn from data and evolve to become better and more accurate. It is important that the development team continuously monitors the system to capture any errors that may arise, including inconsistencies arising from input data or design features, and addresses them promptly.

(iii) Post-deployment: Ensuring accountability post-deployment of an AI system can be challenging. The NITI Working Document also recognises that assigning accountability for specific decisions becomes difficult in a scenario with multiple players in the development and deployment of an AI system. In the absence of any consequences for decisions that harm others, no single party would feel obligated to take responsibility or to act to mitigate the effects of the AI system. Additionally, the lack of accountability leads to difficulties in establishing grievance redressal mechanisms that can address scenarios where harm has arisen from the use of AI systems. 

The Council of Europe, in its guidelines on the human rights impacts of algorithmic systems, highlighted the need for effective remedies to ensure responsibility and accountability for the protection of human rights in the context of the deployment of AI systems. A potential model for grievance redressal is the redressal mechanism suggested in the AI4People’s Ethical Framework for a Good Society report by the Atomium – European Institute for Science, Media and Democracy. The report suggests that any grievance redressal mechanism for AI systems would have to be widely accessible and include redress for harms inflicted, costs incurred, and other grievances caused by the AI system. It must demarcate a clear system of accountability for both organisations and individuals. Of the various redressal mechanisms they have suggested, two significant mechanisms are: 

(a) AI ombudsperson: This would ensure the auditing of allegedly unfair or inequitable uses of AI reported by users or the public at large through an accessible judicial process. 

(b) Guided process for registering a complaint: This envisions laying down a simple process, similar to filing a Right to Information request, which can be used to bring discrepancies, or faults in an AI system to the notice of the authorities.

Such mechanisms can be evolved to address the human rights concerns and harms arising from the use of AI systems in India. 

Conclusion

In early October, the Government of India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India’s vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. At the RAISE Summit, speakers underlined the need for adopting AI ethics and a human-centred approach to the deployment of AI systems. However, this conversation is still at a nascent stage, and several rounds of consultations may be required to build these principles into an Indian AI governance and regulatory framework. 

As India enters into the next stage of developing and deploying AI systems, it is important to have multi-stakeholder consultations to discuss mechanisms for the adoption of principles for Responsible AI. This will enable the framing of an effective governance framework for AI in India that is firmly grounded in India’s constitutional framework. While the NITI Aayog Working Document has introduced the concept of ‘Responsible AI’ and the ethics around which AI systems may be designed, it lacks substantive discussion on these principles. Hence, in our analysis, we have explored global views and practices around these principles and suggested mechanisms appropriate for adoption in India’s governance framework for AI. Our detailed analysis of these principles can be accessed in our comments to the NITI Aayog’s Working Document Towards Responsible AI for All.

Experimenting With New Models of Data Governance – Data Trusts

This post has been authored by Shashank Mohan

India is in the midst of establishing a robust data governance framework, which will impact the rights and liabilities of all key stakeholders – the government, private entities, and citizens at large. As a parliamentary committee debates its first personal data protection legislation (‘PDPB 2019’), proposals for the regulation of non-personal data and a data empowerment and protection architecture are already underway. 

As data processing capabilities continue to evolve at a feverish pace, basic data protection regulations like the PDPB 2019 might not be sufficient to address new challenges. For example, big data analytics renders traditional notions of consent meaningless as users have no knowledge of how such algorithms behave and what determinations are made about them by such technology. 

Creative data governance models, which are aimed at reversing the power dynamics in the larger data economy, are the need of the hour. Recognising these challenges, policymakers are driving the conversation on data governance in the right direction. However, they might be missing out on crucial experiments being run in other parts of the world.

As users of digital products and services increasingly lose control over data flows, various new models of data governance are being recommended, such as data trusts, data cooperatives, and data commons. Of these, data trusts are among the most promising. 

(For the purposes of this blog post, I’ll be using the phrase data processors as an umbrella term to cover data fiduciaries/controllers and data processors in the legal sense. The word users is meant to include all data principals/subjects.)

What are data trusts?

Though there are various definitions of data trusts, one which is helpful in understanding the concept is: ‘data trusts are intermediaries that aggregate user interests and represent them more effectively vis-à-vis data processors.’ 

To solve the information asymmetries and power imbalances between users and data processors, data trusts will act as facilitators of data flow between the two parties, but on the users’ terms. Data trusts will owe a fiduciary duty to their members and act in their best interests. They will have the requisite legal and technical knowledge to act on behalf of users. Instead of users making potentially ill-informed decisions over data processing, data trusts will make such decisions on their behalf, based on pre-decided factors such as a bar on third-party sharing, and in their best interests. For example, data trusts can be to users what mutual fund managers are to potential investors in capital markets. 

Currently, in a typical transaction in the data economy, users who wish to use a particular digital service have neither the knowledge to understand the possible privacy risks nor the negotiating power to change the terms. Data trusts, with a fiduciary responsibility towards users, specialised knowledge, and multiple members, might succeed in tilting the power dynamics back in favour of users. Data trusts might be relevant from the perspective of both the protection and the controlled sharing of personal as well as non-personal data. 

(MeitY’s Non-Personal Data Governance Framework introduces the concept of data trustees and data trusts into India’s larger data governance and regulatory framework. However, this applies only to the governance of ‘non-personal data’ and not personal data, as is being recommended here. CCG’s comments on MeitY’s Non-Personal Data Governance Framework can be accessed here.)

Challenges with data trusts

Though creative solutions like data trusts seem promising in theory, they must be thoroughly tested and experimented with before wide-scale implementation. Firstly, such a new form of trust, where the subject matter of the trust is data, is not envisaged by Indian law (see section 8 of the Indian Trusts Act, 1882, which provides for only property to be the subject matter of a trust). Current and even proposed regulatory structures do not account for the regulation of institutions like data trusts (the non-personal data governance framework proposes data trusts, but only as data sharing institutions and not as data managers or stewards, as is being suggested here). Thus, data trusts will need to be codified into Indian law to become an operative model. 

Secondly, data processors might not embrace the notion of data trusts, as it may result in loss of market power. Larger tech companies, who have existing stores of data on numerous users may not be sufficiently incentivised to engage with models of data trusts. Structures will need to be built in a way that data processors are incentivised to participate in such novel data governance models. 

Thirdly, the business or operational models for data trusts will need to be aligned with their members, i.e. users. Data trusts will require money to operate, and for-profit entities may not have the best interests of users in mind. Subscription-based models, whether for profit or not, might fail as users are accustomed to free services. Donation-based models might need to be monitored closely for added transparency and accountability. 

Lastly, other issues like creation of technical specifications for data sharing and security, contours of consent, and whether data trusts will help in data sharing with the government, will need to be accounted for. 

Privacy centric data governance models

At this early stage of developing data governance frameworks suited to Indian needs, policymakers are at a crucial juncture of experimenting with different models. These models must be centred around the protection and preservation of privacy rights of Indians, both from private and public entities. Privacy must also be read in its expansive definition as provided by the Supreme Court in Justice K.S. Puttaswamy vs. Union of India. The autonomy, choice, and control over informational privacy are crucial to the Supreme Court’s interpretation of privacy. 

(CCG’s privacy law database, which tracks privacy jurisprudence globally and currently contains information from India and Europe, can be accessed here.)

Building an AI Governance Framework for India, Part II

Embedding Principles of Safety, Equality and Non-Discrimination

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

In our previous post on building an AI governance framework for India, we discussed the legal and regulatory implications of the proposed Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its Constitutional framework and (2) based on clearly articulated overarching principles. While the NITI Working Document introduces certain principles, it does not go into any substantive details on what the adoption of these principles into India’s regulatory framework would entail.

We will now examine these ‘Principles for Responsible AI’, their constituent elements and avenues for incorporating them into the Indian regulatory framework. The NITI Working Document proposed the following seven ‘Principles for Responsible AI’ to guide India’s regulatory framework for AI systems: 

  1. Safety and reliability
  2. Equality
  3. Inclusivity and Non-Discrimination
  4. Privacy and Security 
  5. Transparency
  6. Accountability
  7. Protection and Reinforcement of Positive Human Values. 

This post explores the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. A subsequent post will discuss the principles of Privacy and Security, Transparency, Accountability and the Protection and Reinforcement of Positive Human Values.

Principle of Safety and Reliability

The Principle of Safety and Reliability aims to ensure that AI systems operate reliably, in accordance with their intended purpose, throughout their lifecycle, and that the security, safety and robustness of an AI system are maintained. It requires that AI systems should not pose unreasonable safety risks, should adopt safety measures proportionate to the potential risks, should be continuously monitored and tested to ensure compliance with their intended purpose, and should have a continuous risk management system to address any identified problems. 

Here, it is important to note the distinction between safety and reliability. The reliability of a system relates to its ability to behave exactly as its designers have intended and anticipated. A reliable system would adhere to the specifications it was programmed to carry out; reliability is, therefore, a measure of consistency and establishes confidence in the safety of a system. Safety, by contrast, refers to an AI system’s ability to do what it is supposed to do without harming users (human physical integrity), resources or the environment.

Human oversight: An important aspect of ensuring the safety and reliability of AI systems is the presence of human oversight over the system. Any regulatory framework that is developed in India to govern AI systems must incorporate norms that specify the circumstances and degree to which human oversight is required over various AI systems. 

The level of human oversight would depend upon the sensitivity of the function and the potential for significant impact on an individual’s life that the AI system may have. For example, AI systems deployed in the context of the provision of government benefits should have a high level of human oversight, and decisions made by the AI system in this context should be reviewed by a human before being implemented. Other AI systems, such as vending machines running simple algorithms, may be deployed in contexts that do not need constant human involvement; these systems should nonetheless have a mechanism in place for human review if a question is subsequently raised by, say, a user. Hence, the purpose for which the system is deployed and the impact it could have on individuals would be relevant factors in determining whether ‘human in the loop’, ‘human on the loop’, or any other oversight mechanism is appropriate. 

Principle of Equality

The principle of equality holds that everyone, irrespective of their status in society, should get the same opportunities and protections as AI systems are developed. 

Implementing equality in the context of AI systems essentially requires three components: 

(i) Protection of human rights: AI instruments developed across the globe have highlighted that the implementation of AI would pose risks to the right to equality, and countries would have to take steps to mitigate such risks proactively. 

(ii) Access to technology: AI systems should be designed to ensure widespread access to technology, so that people may derive benefits from AI technology.

(iii) Guarantees of equal opportunities through technology: The guarantee of equal opportunity relies upon the transformative power of AI systems to “help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge” and “produce social and economic benefits for all by reducing social inequalities and vulnerabilities.” AI systems will have to be designed and deployed such that they further the guarantees of equal opportunity and do not exacerbate and further entrench existing inequality.

The development, use and deployment of AI systems in society would pose the above-mentioned risks to the right to equality, and India’s regulatory framework for AI must take steps to mitigate such risks proactively.

Principle of Inclusivity and Non-Discrimination

The idea of non-discrimination mostly arises out of technical considerations in the context of AI. It holds that bias in AI should be identified and mitigated in the training data, technical design choices, and the technology’s deployment in order to prevent discriminatory impacts. 

Examples of this can be seen in data collection for policing, where disproportionate attention paid to minority neighbourhoods produces higher recorded incidences of crime in those neighbourhoods, thereby skewing AI results. AI systems are safer to use when they are trained on datasets that are sufficiently broad and encompass the various scenarios in which the system is envisaged to be deployed. Additionally, datasets should be developed to be representative, so as to avoid discriminatory outcomes from the use of the AI system. 

Another example is semi-autonomous vehicles, which experience higher accident rates among dark-skinned pedestrians because the software performs more poorly at recognising darker-skinned individuals. This can be traced back to training datasets that contained mostly light-skinned people. Such a lack of diversity in the dataset can lead to discrimination against specific groups in society. To ensure effective non-discrimination, AI systems must be trained on data that is truly representative of society, with no section of the populace either over-represented or under-represented in a way that skews the datasets. While designing AI systems for deployment in India, the constitutional rights of individuals should be the central values around which the systems are designed. 
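The representativeness test described above can be made concrete with a simple comparison of group shares in a dataset against their shares in the relevant population. The sketch below is purely illustrative: the function name, the tolerance threshold, and the figures are our own assumptions, not drawn from any actual auditing tool.

```python
# Hypothetical sketch: flagging over- and under-represented groups in a
# training dataset relative to their population shares.
from collections import Counter

def representation_gaps(samples, population_share, tolerance=0.1):
    """Compare each group's share of the dataset against its share of the
    population; return the groups whose deviation exceeds the tolerance."""
    counts = Counter(samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 2), "expected": expected}
    return gaps

# A skewed pedestrian-image dataset (illustrative numbers only):
samples = ["light"] * 90 + ["dark"] * 10
print(representation_gaps(samples, {"light": 0.6, "dark": 0.4}))
# Both groups deviate by 0.3 from their population shares and are flagged.
```

Real audits would of course use richer fairness metrics than raw proportions, but even this simple check illustrates how skew of the kind seen in the pedestrian-recognition example can be surfaced before deployment.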

In order to implement inclusivity in AI, the diversity of the team involved in design as well as the diversity of the training data set would have to be assessed. This would involve the creation of guidelines under India’s regulatory framework for AI to help researchers and programmers in designing inclusive data sets, measuring product performance on the parameter of inclusivity, selecting features to avoid exclusion and testing new systems through the lens of inclusivity.

Checklist Model: To address the challenges of non-discrimination and inclusivity, a potential model which could be adopted in India’s regulatory framework for AI is the ‘checklist’. The European Network of Equality Bodies (EQUINET), in its recent report on ‘Meeting the new challenges to equality and non-discrimination from increased digitisation and the use of Artificial Intelligence’, provides a checklist to assess whether an AI system complies with the principles of equality and non-discrimination. The checklist consists of several broad categories, with a focus on the deployment of AI technology in Europe. These include heads such as direct discrimination, indirect discrimination, transparency, other types of equity claims, data protection, liability issues, and identification of the liable party. 

The list contains a series of questions that judge whether an AI system meets standards of equality and identify any potential biases it may have. For example, the question “Does the artificial intelligence system treat people differently because of a protected characteristic?” covers both direct data and proxies for it. If the answer is yes, the system would be identified as exhibiting indirect bias. A similar checklist, contextualised for India, could be developed and employed in India’s regulatory framework for AI. 
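A checklist of this kind lends itself naturally to a machine-readable form. The sketch below is in the spirit of the EQUINET model but is entirely hypothetical: the questions, findings, and function names are our own illustrative stand-ins, not the actual checklist.

```python
# Hypothetical sketch of a checklist model: each yes/no question maps to a
# finding, and answering 'yes' adds that finding to the assessment.
CHECKLIST = [
    ("Does the system treat people differently because of a protected "
     "characteristic, directly or via proxy data?", "indirect discrimination"),
    ("Is the logic of the system's decisions unexplainable to those "
     "affected by them?", "lack of transparency"),
    ("Is personal data processed without a lawful basis?", "data protection concern"),
]

def assess(answers):
    """Given yes/no answers keyed by question index, list the findings."""
    return [finding for i, (_, finding) in enumerate(CHECKLIST) if answers.get(i)]

# An AI system relying on postcode as a proxy for a protected group:
print(assess({0: True, 1: False, 2: False}))
# ['indirect discrimination']
```

An India-specific version would substitute questions keyed to constitutional guarantees (e.g. Articles 14 and 15) and to the obligations under the eventual data protection law.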

Way forward

This post highlights some of the key aspects of the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. Integrating the principles identified in the NITI Working Document into India’s regulatory framework requires that we first clearly define their content, scope and ambit, so as to identify the right mechanisms to operationalise them. Given that the NITI Working Document does not explore the content of these AI principles or the mechanism for their implementation in India, we have examined the relevant international literature surrounding the adoption of AI ethics and suggested mechanisms for their adoption. The NITI Working Document has spurred discussion around designing an effective regulatory framework for AI. However, these discussions are at a preliminary stage and there is a need to develop a far more nuanced proposal for a regulatory framework for AI.

Over the last week, India has hosted the Responsible AI for Social Empowerment (RAISE) Summit, which has involved discussions around India’s vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. As we discuss mechanisms for India to effectively harness the economic potential of AI, we also need to design an effective framework to address the massive regulatory challenges emerging from the deployment of AI—simultaneously, and not as an afterthought post-deployment. While a few of the RAISE sessions engaged with certain aspects of regulating AI, there remains a need for extensive, continued public consultations with a cross-section of stakeholders to embed principles for Responsible AI in the design of an effective AI regulatory framework for India. 

For a more detailed discussion on these principles and their integration into the Indian context, refer to our comments to the NITI Aayog here. 

Cyberspace and International Law: Taking Stock of Ongoing Discussions at the OEWG

This post is authored by Sharngan Aravindakshan

Introduction

The second round of informal meetings in the Open-Ended Working Group on the Use of ICTs in the Context of International Security is scheduled to be held from today (29th September) till 1st October, with the agenda being international law.

At the end of the OEWG’s second substantive session in February 2020, the Chairperson of the OEWG released an “initial pre-draft” (Initial Pre-Draft) of the OEWG’s report, for stakeholder discussions and comments. The Initial Pre-Draft covers a number of issues on cyberspace, and is divided into the following:

  1. Section A (Introduction);
  2. Section B (Existing and Potential Threats);
  3. Section C (International Law);
  4. Section D (Rules, Norms and Principles for Responsible State Behaviour);
  5. Section E (Confidence-building Measures);
  6. Section F (Capacity-building);
  7. Section G (Regular Institutional Dialogue); and
  8. Section H (Conclusions and Recommendations).

In accordance with the agenda for the coming informal meeting in the OEWG, this post is a brief recap of this cyber norm making process with a focus on Section C, i.e., the international law section of the Initial Pre-Draft and States’ comments to it.

What does the OEWG Initial Pre-Draft Say About International Law?

Section C of the Initial Pre-Draft begins with a chapeau stating that existing obligations under international law, in particular the Charter of the United Nations, are applicable to State use of ICTs. The chapeau goes on to state that “furthering shared understandings among States” on how international law applies to the use of ICTs is fundamental for international security and stability. According to the chapeau, exchanging views on the issue among States can foster this shared understanding.

The body of Section C records that States affirmed that international law, including the UN Charter, is applicable to the ICT environment. It particularly notes that principles of the UN Charter such as sovereign equality, non-intervention in the internal affairs of States, the prohibition on the threat or use of force, and respect for human rights and fundamental freedoms apply to cyberspace. It also notes that specific bodies of international law, such as international humanitarian law (IHL), international human rights law (IHRL) and international criminal law (ICL), are applicable as well. Section C also records that “States underscored that international humanitarian law neither encourages militarization nor legitimizes conflict in any domain”, without mentioning which States did so.

Significantly, Section C of the Initial Pre-Draft also notes that a view was expressed in the discussions that “existing international law, complemented by the voluntary, non-binding norms that reflect consensus among States” is “currently sufficient for addressing State use of ICTs”. According to this view, it only remains for a “common understanding” to be reached on how the already agreed normative framework could apply and be operationalized. At the same time, the counter-view expressed by some other States is also noted in Section C, that “there may be a need to adapt existing international law or develop a new instrument to address the unique characteristics of ICTs.”

This view arises from the confusion or lack of clarity on how existing international law could apply to cyberspace and includes but is not limited to questions on thresholds for use of force, armed attacks and self-defence, as well as the question of applicability of international humanitarian law to cyberspace. Section C goes on to note that in this context, proposals were made for the development of a legally binding instrument on the use of ICTs by States. Again, the States are not mentioned by name. Additionally, Section C notes a third view which proposed a “politically binding commitment with regular meetings and voluntary State reporting”. This was proposed as a middle ground between the first view that existing international law was sufficient and the second view that new rules of international law were required in the form of a legally binding treaty. Developing a “common approach to attribution at the technical level” was also discussed as a way of ensuring greater accountability and transparency.

With respect to the international law portion, the Initial Pre-Draft proposed recommendations including the creation of a global repository of State practice and national views in the application of international law as well as requesting the International Law Commission to undertake a study of national views and practice on how international law applies in the use of ICTs by States.

What did States have to say about Section C of the Initial Pre-Draft?

In his letter dated 11 March 2020, the Chairperson opened the Initial Pre-Draft for comments from States and other stakeholders. A total of 42 countries have submitted comments, excluding the European Union (EU) and the Non-Aligned Movement (NAM), both of which have submitted comments separately from their member States. The various submissions can be found here. Not all States’ submissions contain comments specific to Section C, the international law portion, but it is nevertheless worthwhile examining the submissions of those that do. India had also submitted comments, which were earlier accessible here; however, they are no longer available on the OEWG website and appear to have been taken down.

International Law and Cyberspace

Let’s start with what States have said in answer to the basic question of whether existing international law applies to cyberspace and, if so, whether it is sufficient to regulate State use of ICTs. A majority of States have answered in the affirmative. This list includes the Western bloc led by the US, comprising Canada, France, Germany, Austria, the Czech Republic, Denmark, Estonia, Ireland, Liechtenstein, the Netherlands, Norway, Sweden, Switzerland, Italy and the United Kingdom, as well as Australia, New Zealand, Japan, South Korea, Colombia, South Africa, Mexico and Uruguay. While Singapore has affirmed that international law, in particular the UN Charter, applies to cyberspace, it is silent on whether international law in its current form is sufficient to regulate State action in cyberspace.

Several States, however, are of the clear view that international law as it exists is insufficient to regulate cyberspace or cannot be directly applied to cyberspace. These States have identified a “legal vacuum” in international law vis-à-vis cyberspace and call for new rules in the form of a binding treaty. This list includes China, Cuba, Iran, Nicaragua, Russia and Zimbabwe. Indonesia, in its turn, has stated that “automatic application” of existing law without examining the context and unique nature of activities in cyberspace should be avoided since “practical adjustment and possible new interpretations are needed”, and the “gap of the ungoverned issues in cyberspace” also needs to be addressed.

NAM has stated that the UN Charter applies, but has also noted the need to “identify possible gaps” that can be addressed through “furthering the development of international rules”. India’s earlier uploaded statement had expressed the view that although the applicability of international law had been agreed to, there are “differences in the structure and functioning of cyberspace, including complicated jurisdictional issues” and that “gaps in the existing international laws in their applicability to cyberspace” need examining. This statement also spoke of “workable modifications to existing laws and exploring the needs of, if any, new laws”.

Venezuela has stated that “the use of ICTs must be fully consistent with the purposes and principles of the UN Charter and international law”, but has also stated that “it is necessary to clarify that International Public Law cannot be directly applicable to cyberspace”, leaving its exact views on the subject unclear.

International Humanitarian Law and Cyberspace

The Initial Pre-Draft’s view on the applicability of IHL to cyberspace has also become a point of contention for States. States supporting its applicability include Brazil, Czech Republic, Denmark, Estonia, France, Germany, Ireland, Netherlands, Switzerland, the United Kingdom and Uruguay. India is among the supporters. Some among these like Estonia, Germany and Switzerland have called for the specific principles of humanity, proportionality, necessity and distinction to be included in the report.

States including China, Cuba, Nicaragua, Russia, Venezuela and Zimbabwe are against applying IHL, with their primary reason being that it will promote “militarization” of cyberspace and “legitimize” conflict. According to China, we should be “extremely cautious against any attempt to introduce use of force in any form into cyberspace,… and refrain from sending wrong messages to the world.” Russia has acerbically stated that to say that IHL can apply “to the ICT environment in peacetime” is “illogical and contradictory” since “IHL is only applied in the context of a military conflict while currently the ICTs do not fit the definition of a weapon”.

A second level of detail on these questions, especially concerning specific principles such as sovereignty, non-intervention, the threat or use of force, armed attack and the inherent right of self-defence, is scarce in States’ comments, which rarely go beyond whether these principles apply to cyberspace. Zimbabwe has mentioned in its submission that these principles do apply, as has NAM. Cuba, as it did in the 2017 GGE, has taken the stand that the inherent right to self-defence under Article 51 of the UN Charter cannot be automatically applied to cyberspace. Cuba also stated that it cannot be invoked to justify a State responding with conventional attacks. The US has also taken the view it expressed in the 2017 GGE: if States’ obligations, such as refraining from the threat or use of force, are to be mentioned in the report, it should also contain States’ rights, namely the inherent right to self-defence under Article 51.

Austria has categorically stated that the violation of sovereignty is an internationally wrongful act if attributable to a State. But other States’ comments are broader and do not address the issue of sovereignty at this level. Consider Indonesia’s comments, for instance, where it has simply stated that it “underlines the importance of the principle of sovereignty” and that the report should as well. For India’s part, its earlier uploaded statement approached the issue of sovereignty from a different angle. It stated that the “territorial jurisdiction and sovereignty are losing its relevance in contemporary cyberspace discourse” and went on to recommend a “new form of sovereignty which would be based on ownership of data, i.e., the ownership of the data would be that of the person who has created it and the territorial jurisdiction of a country would be on the data which is owned by its citizens irrespective of the place where the data physically is located”. On the face of it, this comment appears to relate more to the conflict of laws with respect to the transborder nature of data rather than any principle of international law.

The Initial Pre-Draft mentioning the need for a “common approach” for attribution also drew sharp criticism. France, Germany, Italy, Nicaragua, Russia, Switzerland and the United Kingdom have all expressed the view that attribution is a “national” or “sovereign” prerogative and should be left to each State. Iran has stated that addressing a common approach for attribution is premature in the absence of a treaty. Meanwhile, Brazil, China and Norway have supported working towards a common approach for attribution. This issue has notably seen something of a re-alignment of divided State groups.

International Human Rights Law and Cyberspace

States’ comments on Section C also address its language on IHRL with respect to ICT use. Austria, France, the Netherlands, Sweden and Switzerland have called for greater emphasis on human rights and their applicability in cyberspace, especially in the context of privacy and the freedoms of expression, association and information. France has also included the “issues of protection of personal data” in this context. Switzerland has interestingly linked cybersecurity and human rights as “complementary, mutually reinforcing and interdependent”. Ireland and Uruguay’s comments also specify that IHRL applies.

On the other hand, Russia’s comments make it clear that it believes there is an “overemphasis” on human rights law, and it is not “directly related” to international peace and security. Surprisingly, the UK has stated that issues concerning data protection and internet governance are beyond the OEWG’s mandate, while the US comments are silent on the issue. While not directly referring to international human rights law, India’s comments had also mentioned that its concept of data ownership based sovereignty would reaffirm the “universality of the right to privacy”.

Role of the International Law Commission

The Initial Pre-Draft also recommended requesting the International Law Commission (through the General Assembly) to “undertake a study of national views and practice on how international law applies in the use of ICTs by States”. A majority of States including Canada, Denmark, Japan, the Netherlands, Russia, Switzerland, the United Kingdom and the United States have expressed clearly that they are against sending the issue to the ILC as it is too premature at this stage, and would also be contrary to the General Assembly resolutions referring the issue to the OEWG and the GGE.

With respect to the Initial Pre-Draft’s recommendation for a repository of State practices on the application of international law to State-use of ICTs, support is found in comments submitted by Ireland, Italy, Japan, South Korea, Singapore, South Africa, Sweden and Thailand. While Japan, South Africa and India (comments taken down) have qualified their views by stating these contributions should be voluntary, the EU has sought clarification on the modalities of contributing to the repository so as to avoid duplication of efforts.

Other Notable Comments

Aside from the above, States have raised certain other points of interest that may be relevant to the ongoing discussion on international law. The Czech Republic and France have both drawn attention to the due diligence norm in cyberspace and pointed out that it needs greater focus and elaboration in the report.

In its comments, Colombia has rightly pointed out that discussions should centre around “national views” as opposed to “State practice”, since it is difficult for State practice to develop when “some States are still developing national positions”. This accurately highlights a significant problem in cyberspace, namely the scarcity of State practice on account of a lack of clarity in national positions. It holds true for most developing nations, including but not limited to India.

On a separate issue, the UK has made an interesting, but implausible proposal. The UK in its comments has proposed that “States acknowledge military capabilities at an organizational level as well as provide general information on the legal and oversight regimes under which they operate”. Although it has its benefits, such as reducing information asymmetries in cyberspace, it is highly unlikely that States will accept an obligation to disclose or acknowledge military capabilities, let alone any information on the “legal and oversight regimes under which they operate”. This information speaks to a State’s military strength in cyberspace, and while a State may comment on the legality of offensive cyber capabilities in abstract, realpolitik deems it unlikely that it will divulge information on its own capabilities. It is worth noting here that the UK has acknowledged having offensive cyber capabilities in its National Cyber Security Strategy 2016 to 2021.

What does the Revised Pre-Draft Say About International Law?

The OEWG Chair, by a letter dated 27 May 2020, notified member States of the revised version of the Initial Pre-Draft (Revised Pre-Draft). He clarified that the “Recommendations” portion had been left unchanged. On perusal, it appears Section C of the Revised Pre-Draft is almost entirely unchanged as well, barring the correction of a few typographical errors. This is perhaps not surprising, given that the OEWG Chair made it clear in his letter that he still expected “guidance from Member States for further revisions to the draft”.

CCG will track States’ comments to the Revised Pre-Draft as well, as and when they are submitted by member States.

International Law and Cyberspace: Three Different Conversations

With the establishment of the OEWG, the UN GGE was no longer the only multilateral conversation on cyberspace and international law among States in the UN. Of course, both the OEWG and the GGE are about more than just the questions of whether and how international law applies in cyberspace – they also deal with equally important, related issues of capacity-building, confidence building measures and so on in cyberspace. But their work on international law is still extremely significant since they offer platforms for States to express their views on international law and reach consensus on contentious issues in cyberspace. Together, these two forums form two important streams of conversation between States on international law in cyberspace.

At the same time, States are also separately articulating and releasing their own positions on international law and how it applies to cyberspace. Australia, France, Germany, Iran, the Netherlands, the United Kingdom and the United States have all indicated their own views on how international law applies to cyberspace, independent of both the GGE and the OEWG, with Iran being the latest State to do so. To the extent they engage with each other by converging and diverging on some issues such as sovereignty in cyberspace, they form the third conversation among States on international law. Notably, India has not yet joined this conversation.

It is increasingly becoming clear that this third conversation is taking place at a particular level of granularity not seen so far in the OEWG or the GGE. For instance, the raging debate on whether sovereignty in international law in cyberspace is a rule entailing consequences for violation, or merely a principle that only gives rise to binding rules such as the prohibitions on the use of force or intervention, has so far been restricted to this third conversation. In contrast, States’ comments to the OEWG’s Initial Pre-Draft indicate that discussions in the OEWG still centre around the broad question of whether and how international law applies to cyberspace. Only Austria mentioned in its comments to the Initial Pre-Draft that it believed sovereignty was a rule, the violation of which would be an internationally wrongful act. The same applies to the GGE: although it was able to deliver consensus reports on international law applying to cyberspace, it cannot claim to have dealt with these issues at a level of specificity beyond this.

This variance in the three conversations shows that some States are racing way ahead of others in their understanding of how international law applies to cyberspace, and these States are so far predominantly Western and developed, with the exception of Iran. Colombia’s comment to the OEWG’s Initial Pre-Draft is a timely reminder in this regard, that most States are still in the process of developing their national positions. The interplay between these three conversations around international law and cyberspace will be interesting to observe.

The Centre for Communication Governance’s comments to the Initial Pre-Draft can be accessed here.

On Cyber Weapons and Chimeras

This post has been authored by Gunjan Chawla and Vagisha Srivastava


“The first thing we do, let’s kill all the lawyers,” says Shakespeare’s Dick the Butcher to Jack Cade, who leads fellow conspirators in the popular rebellion against Henry VI.

The same cliché may as well have been the opening line of Pukhraj Singh’s response to our last piece, which joins his earlier pieces heavily burdened with thinly veiled disdain for lawyers poking their noses into cyber operations. In his eagerness to establish code as law, he omits not only the universal professional courtesy of getting our names right, but also a basic background check on the authors he so fervently critiques – only one of whom is in fact a lawyer; the other is an early career technologist.

In this final piece in our series on offensive cyber capabilities, we take exception to Singh’s misrepresentation of our work and hope to redirect the conversation back to the question raised by our first piece: what is the difference between ‘cyber weapons’ and offensive cyber capabilities, if any? Our readers may recall from our first piece in the series, ‘Does India have offensive cyber capabilities?’, that Lt Gen Pant, in an interview to Medianama, denied any intent on the part of the Government of India to procure ‘cyber weapons’. However, certain amendments inserted in export control regulations by the DGFT suggest the presence of offensive cyber capabilities in India’s cyber ecosystem. Quoting Thomas Rid from Cyber War Will Not Take Place:

“these conceptual considerations are not introduced here as a scholarly gimmick. Indeed theory shouldn’t be left to scholars; theory needs to become personal knowledge, conceptual tools used to comprehend conflict, to prevail in it, or to prevent it.”

While lawyers and strategists working in the cyber policy domain admittedly still have a lot to learn from those with personal knowledge of the conduct of hostilities in cyberspace, deftly obscured as it is by a labyrinth of regulations and rapidly changing rules of engagement, the question of nomenclature remains an important one. The primary reason is that the taxonomy of cyber operations has significant implications for the obligations incumbent on States and State actors under international as well as domestic law.

A chimeral critique

Singh’s most seriously mounted objection in his piece is to our assertion that ‘cyber capabilities’ and ‘cyber operations’ are not synonymous, just as ‘arms’ and ‘armed attack’, or ‘weapons’ and ‘war’ are distinct concepts. However, a wilful misunderstanding of our assertion that cyber capabilities and cyber operations are not interchangeable terms does not foster any deeper understanding of the legal or technical ingredients of a ‘cyber operation’–irrespective of whether it is offensive, defensive or exploitative in intent and design.

The central idea remains, that a capability is wielded with the intent of causing a particular effect (which may or may not be identical to the actual effect resulting from the cyber operation). A recent report by the Belfer Center at Harvard on a ‘National Cyber Power Index’, which views a nation’s cyber power as a function of its intent and capability, also seems to support this position. Certainly, the criteria and methodology of assessment remain open to debate and critique from academics as well as practitioners, and this debate needs to inform our legal position and strategic posture (again, the two are not synonymous) as to the legality of developing offensive cyber capabilities in international as well as domestic law.

Second, in finding at least one of us guilty of a ‘failure of imagination’, Singh steadfastly advocates the view that cyber (intelligence) operators like himself are better off unbounded by legal restraint of their technical prowess, functioning in a Hobbesian (virtual) reality where code is law and technological might makes right. It is thus unsurprising that Singh in what is by his own admission a ‘never to be published manuscript’, seems to favour practices normalized by the United States’ military doctrine, regardless of their dubious legality.

Third, in criticizing lawyers’ use of analogical reasoning—which to Singh, has become ‘the bane of cyber policy’—he conveniently forgets that for those of us who were neither born in the darkness of covert cyber ops, nor moulded by it, analogies are a key tool to understand unfamiliar concepts by drawing upon learnings from more familiar concepts. Indeed, it has even been argued that analogy is the core of human cognition.

Navigating a Taxing Taxonomy

Writing in 2012 with Peter McBurney, Rid postulates that cyber weapons span a wide spectrum, from generic but low-potential tools to specific high-potential weaponry, and may be viewed as a subset of ‘weapons’. In treating cyber weaponry as a subset of conventional weaponry, their underlying assumption is that the (cyber) weapon is developed and/or deployed with ‘the aim of threatening or causing physical, functional or mental harm to structures, systems or living beings’. This also supports our assertion that intent is a key element in planning and launching a cyber operation, but not for the purposes of classifying a cyber operation as an ‘armed attack’ under international law. However, it is important to mention that Rid considers ‘cyber war’ an extremely problematic and dangerous concept, one that is far narrower than the concept of ‘cyber weapons’.

Singh laments that without distinguishing between cyber techniques and effects, we fall into ‘a quicksand of lexicon, taxonomies, hypotheses, assumptions and legalese’. He considers the OCOs/DCOs classification too ‘simplistic’ in comparison to the CNA/CND/CNE framework. Even if the technological underpinnings of cyber exploits (for intelligence gathering) and cyber attacks (for damage, disruption and denial) have not changed over the years, as Singh argues—the change in terminology/vocabulary cannot be attributed to ‘ideology’. This change is a function of a complete reorganization and restructuring of the American national security establishment to permit greater agility and freedom of action in rules of hostile engagement by the military in cyberspace.

Unless the law treats cognitive or psychological effects of cyber operations (e.g. those depicted in The Social Dilemma or The Great Hack, or even in the doxing of classified documents) as harm that is 'comparable' to physical damage or destruction, 'cyber offence' will not graduate to the status of a 'cyber weapon'. For the time being, an erasure of the physical/psychological dichotomy appears extremely unlikely. If the Russian and Chinese playbook appears innovative in translating online activity into offline harm, it is because of an obvious conflation between a computer-systems-centric cyber security model and the state-centric information security model that values guarding State secrets above all else, and benefits from denying one's adversary the luxury of secrecy in State affairs.

The changing legal framework and as a corollary, the plethora of terminologies employed around the conduct of cyber operations by the United States run parallel to the evolving relationship between its intelligence agencies and military institutions.

The US Cyber Command (CYBERCOM) was first created in 2008, but was incubated for a long time by the NSA under a peculiar arrangement established in 2009, whereby the head of the NSA was also the head of US CYBERCOM, with a view to leveraging the vastly superior surveillance capabilities of the NSA at the time. This came to be known as a 'dual-hat arrangement', a moniker descriptive of the double role played by the same individual simultaneously heading an intelligence agency as well as a military command. Simply put, cyber infrastructure raised for the purposes of foreign surveillance and espionage was but a stepping stone to building cyber warfare capabilities. Through a presidential memorandum in 2017, President Trump directed the Secretary of Defense to establish the US Cyber Command as a Unified Combatant Command, elevating its status from a sub-unit of the US Strategic Command (STRATCOM).

An important aspect of the 'restructuring' we refer to are two Presidential directives – one from 2012 and another from 2018. In October 2012, President Obama signed Presidential Policy Directive 20 (PPD-20). It was classified as Top Secret at the time, but leaked by Ellen Nakashima of the Washington Post a month later. PPD-20 defined US cyber policy, including terms such as 'Offensive Cyber Effects Operations' (OCEO) and 'Defensive Cyber Effects Operations' (DCEO), and mandated that all cyber operations were to be executed with explicit authorization from the President. In August 2018, Congress passed a military-authorization bill that delegated the authorization of some cyber operations to the Secretary of Defense. It is relevant that 'clandestine military activity or operations in cyberspace' (covert operations) are now considered a traditional military activity under this statute, bringing them under the DoD's authority. The National Security Presidential Memorandum 13 (NSPM-13) on offensive cyber operations signed by President Trump around the same time, although not available in the public domain, has reportedly further eased procedural requirements for Presidential approval of certain cyber operations.

Thus, if we overcome apprehensions about the alleged 'quicksand of lexicon, taxonomies, hypotheses, assumptions and legalese', we can appreciate the crucial role played by these many terms in the formulation of clear operational directives. They serve an important role in the conduct of cyber operations by (1) delineating the chain of command for the conduct of military cyber operations for the purposes of domestic law and (2) moving the conversation on cyber operations out of the don't-ask-don't-tell realm of 'espionage', enabling lawyers and strategists to opine on their legality and legitimacy, or lack thereof, as military operations for the purposes of international law – much to Singh's apparent disappointment. To observers more closely acquainted with the US playbook on international law, the inverse is also true: operational imperatives have necessitated a re-formulation of terms that might convey any sense of illegality or impropriety in military conduct (as opposed to the conduct of intelligence agencies, which is designed for 'plausible deniability' in case of an adverse outcome).

We relied on the latest (June 2020) version of JP 1-02 for the current definition of 'offensive cyber operations' in American warfighting doctrine. We can look to earlier versions of the DoD Dictionary to trace back the terms relevant to CNOs (including CNA, CNE and CND). This exercise makes it quite apparent that the contemporary terminologies and practices are all rooted in (covert) cyber intelligence operations, which the (American) law and policy around cyberspace bends over backwards to accommodate and conceal. That leading scholars have recently sought to frame 'cyber conflict as an intelligence contest' further supports this position.

  • 2001 to 2007 – 'cyber counterintelligence' as the only relevant military activity in cyberspace (even though a National Military Strategy for Cyberspace Operations existed in 2006)
  • 2008 – US CYBERCOM created as a sub-unit of US STRATCOM
  • 2009 – Dual-hat arrangement between NSA and CYBERCOM
  • 2010 – US CYBERCOM achieves operational capability on May 21; CNA/CNE enter the DoD lexicon
  • 2012 – PPD-20 issued by President Obama
  • 2013 – JP 3-12 published as doctrinal guidance from the DoD to plan, execute and assess cyber operations
  • By 2016 – DoD Dictionary defines 'cyberspace operations', DCOs and OCOs (but not cyberspace exploitation), relying on JP 3-12
  • 2018 – NSPM-13 signed by President Trump
  • 2020 – 'cyberspace attack', 'cyberspace capability', 'cyberspace defence', 'cyberspace exploitation', 'cyberspace operations', 'cyberspace security' and 'cybersecurity', as well as OCOs/DCOs, are defined terms in the Dictionary

Even as JP 3-12 remains an important document from the standpoint of military operations, reliance on it is inapposite, even irrelevant, for the purposes of agencies responsible for cyber intelligence operations. In fact, JP 3-12 also does not help explain the whys and hows of the evolution in the DoD vocabulary. This is a handy guide to decode the seemingly cryptic numbering of DoD's Joint Publications.

Waging Cyber War without Cyber ‘Weapons’?

It is relevant to mention that none of the documents referenced above, including JP 3-12, make any mention of the term ‘cyber weapon’. A 2010 memorandum from the Chairman of the Joint Chiefs of Staff, however, clearly identifies CNAs as a form of ‘offensive fire’ – analogous to weapons that are ‘fired’ upon a commander’s order, as well as a key component of Information Operations.

The United States’ Department of Defense in its 2011 Defense Cyberspace Policy Report to Congress acknowledged that “the interconnected nature of cyberspace poses significant challenges for applying some of the legal frameworks developed for physical domains” and observed that “there is currently no international consensus regarding the definition of a cyber weapon”.

A plausible explanation as to why the US Government refrains from using the term 'cyber weapons' is found in this report, as it highlights certain legal issues in transporting cyber 'weapons' across the Internet through infrastructure owned and/or located in neutral third countries without obtaining the equivalent of 'overflight rights', and suggests 'a principled application of existing norms to be developed along with partners and allies'. A resolution to this legal problem highlighted in the DoD's report to Congress is visible in the omission of the term 'cyber weapon' from legal and policy frameworks altogether, only to be replaced by 'cyber capabilities'.

We can find the rationale for and implications of this pivot in Professor Michael Schmitt's 2019 paper, wherein he argues in the context of applicable international law – contrary to the position he espoused in the Tallinn Manual – that 'cyber capabilities' cannot meet the definition of a weapon or means of warfare, but that cyber operations may qualify as methods of warfare. This interpretation permits 'cyber weapons', in the garb of 'cyber capabilities', to circumvent at least three obligations under the Law of Armed Conflict/International Humanitarian Law.

First is the requirement for legal review of weapons under Article 36 of the First Additional Protocol to the Geneva Conventions (an issue Col. Gary Brown has also written about), and second is the obligation to take precautions in attack. Third, and most important, the argument that cyber weapons cannot be classified as munitions also has the consequence of depriving neutral States of their sovereign right to refuse permission for the transportation of weapons (or, in this case, the transmission of weaponised cyber capabilities) through their territory (assuming that this is technically possible).

So, in a sense, if we do not treat offensive cyber capabilities, or 'cyber weapons', as analogous in international law to the conventional weapons normally associated with armed hostilities, we in effect also restrain the ability of other sovereign States under international law to prevent and prohibit the weaponization of cyberspace, without their consent, for the military purposes of other cyber powers. Col. Gary Brown, for whose work Singh seems to nurture a deep admiration, notes that the first 'cyber operation' was conducted by the United States against the Soviet Union in 1982, causing a trans-Siberian pipeline to explode by means of malware implanted in Canadian software acquired by Soviet agents. Since 1982, the US seems to have functioned in single-player mode until Russia's DDoS attacks on Estonia in 2007, or at the very least, until MOONLIGHT MAZE was uncovered in 1998. For those not inclined to read, Col. Brown makes a fascinating appearance alongside former CIA director Michael Hayden in Alex Gibney's 2016 documentary 'Zero Days', which delves into Stuxnet – an obvious cyber weapon by any standard, which the US 'plausibly denied' until 2012.

Turning back to domestic law, the nomenclature is also significant from a public finance perspective. As anecdotal evidence, we can refer to this 2013 Reuters report, which suggests that the US Air Force designated certain cyber capabilities as ‘weapons’ with a view to secure funding from Congress.

From the standpoint of managing public perceptions too, it is apparent that the positive connotations associated with 'developing cyber capabilities' make the same activity a lot more palatable, even development-oriented, in the eyes of the general public, as opposed to the inherent negativity associated with, say, the 'proliferation of cyber weapons'.

Additionally, the legal framework is also important to delineate the geographical scope of the legal authority (or its personal jurisdiction, if you will) vested in the military, as opposed to intelligence agencies, to conduct cyber operations. For organizational purposes, the role of intelligence would (in theory) be limited to CNE, whereas CNA and CND would be vested in the military. We know from (Pukhraj's) experience that this distinction is nearly impossible to make in practice, at least until after the fact. This overlap of what are, arguably, artificially created categories of cyber operations raises urgent questions about the scope and extent of authority the law can legitimately vest in our intelligence agencies, over and above the implicit authority of the armed forces to operate in the cyber domain.

Norm Making by Norm Breaking

In addition to understanding who wields offensive cyber capabilities, and under what circumstances, it is also important for the law to specify where, or against whom, they are permitted to use them. Although the militaries of modern-day 'civilized' nations are rarely deployed domestically, there has been some recent concern over whether US CYBERCOM could be deployed against American citizens in light of recent protests, just as special forces were. While the CIA has legal authority to operate exclusively beyond the United States, the NSA is not burdened by such constraints and is authorized to operate domestically. Thus, the governance/institutional choices before a State looking to 'acquire cyber weapons' or 'develop (offensive) cyber capabilities' range from bad to worse. One might either (1) permit its intelligence agencies to engage in activities that resemble warfighting more than they resemble intelligence gathering, and risk unintentional escalations internationally, or (2) permit its military to engage in intelligence collection domestically, potentially against its own citizens, and risk ubiquitous militarization of and surveillance in its domestic cyberspace.

Even as many celebrate the recent Federal court verdict that the mass surveillance programmes of the NSA revealed by Edward Snowden were illegal and unconstitutional, let us not forget that this illegality is found vis-à-vis the use of the programme against American citizens only – not foreign surveillance programmes and cyber operations conducted beyond American soil against foreign nationals. Turning to an international law analysis, it is the US' refusal to recognize State sovereignty as a binding rule of international law that enables the operationalization of international surveillance and espionage networks and the transmission of weaponized cyber capabilities, which routinely violate not only the sovereignty of States, but also the privacy and dignity of targeted individuals (the United States does not accept the extra-territorial applicability of the ICCPR).

The nom de guerre of these transgressions in American doctrine is now 'persistent engagement' and 'defend forward', popularized most recently by the Cyberspace Solarium Commission – cleverly crafted terms that bring about no technical change in the modus operandi, but disguise aggressive cyber intrusions across national borders as ostensible self-defence.

It is also relevant that this particular problem finds clear mention in the Chinese Foreign Minister's recent statement on the formulation of Digital Security rules by China. Yet, it is not a practice from which either the US or China plans to desist. Recent revelations about the Chinese firm Zhenhua Data Information Technology Co. by the Indian Express have only served to confirm the expansive, and expanding, cyber intelligence network of the Chinese state.

These practices of extraterritorial surveillance, condemnable as they may be, have nonetheless shaped the international legal order we find ourselves in today – a testimony to the paradoxical dynamism of international law, not unlike the process of 'creative destruction' of cyberspace highlighted by Singh, where a transgression of the norm (by either cyber power) may one day itself become a norm. What this norm is, or should be, remains open to interpretation, so let's not rush to kill all the lawyers – not just yet, anyway.

The Proliferating Eyes of Argus: State Use of Facial Recognition Technology


This post has been authored by Sangh Rakshita

In Greek mythology, Argus Panoptes was a many-eyed, all-seeing and ever-wakeful giant whose name has come to evoke imagery of excessive scrutiny and surveillance. Jeremy Bentham drew on this imagery when he designed the panopticon prison, in which prisoners could be monitored without their knowledge. Later, Michel Foucault used the panopticon to elaborate his social theory of panopticism, in which the watcher ceases to be external to the watched, resulting in internal surveillance or a 'chilling' effect. This idea of "panopticism" has gained renewed relevance in the age of digital surveillance.

Amongst the many cutting-edge surveillance technologies being adopted globally, 'Facial Recognition Technology' (FRT) is one of the most rapidly deployed. 'Live Facial Recognition Technology' (LFRT) or 'Real-time Facial Recognition Technology', an augmentation of FRT, has become increasingly effective in the past few years. Improvements in computational power and algorithms have enabled cameras placed at odd angles to detect faces even in motion. This post explores the issues with increasing State use of FRT around the world and the legal framework surrounding it.

What do FRT and LFRT mean?

FRT refers to the use of algorithms to uniquely detect, recognise, or verify a person from recorded images, sketches, or videos containing their face. The data about a particular face is generally known as the face template. This template is a mathematical representation of a person's face, created by algorithms that mark and map distinct features on the captured image, such as eye locations or the length of the nose. These face templates make up the biometric database against which new images, sketches, videos, etc. are compared to verify or recognise a person's identity. As opposed to FRT, which is applied to pre-recorded images and videos, LFRT involves real-time automated facial recognition of all individuals in the camera's field of vision: it entails biometric processing of the images of all passers-by against an existing database of reference images.
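To make the face-template idea concrete, the matching step can be thought of as vector comparison. The sketch below is illustrative only: real systems derive templates from deep neural networks trained on millions of faces, and the function names, toy vectors, and threshold value here are invented for the example, not drawn from any actual FRT product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 verification: does the probe template match one enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold: float = 0.6):
    """1:N identification: return the best-matching identity in the gallery,
    or None if no template clears the similarity threshold."""
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

Note that LFRT is essentially the `identify` step run continuously on every face detected in a video stream, which is why it amounts to biometric processing of all passers-by rather than of a single consenting subject.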

The accuracy of FRT algorithms is significantly impacted by factors like distance and angle from which the image was captured or poor lighting conditions. These problems are worsened in LFRT as the images are not captured in a controlled setting, with the subjects in motion, rarely looking at the camera, and often positioned at odd angles from it. 

Despite claims of its effectiveness, there has been growing scepticism about the use of FRT. Its use has been linked with misidentification of people of colour, ethnic minorities, women, and trans people. The prevalent use of FRT may affect not only the privacy rights of such communities, but those of everyone surveilled at large.

The Prevalence of FRT 

While FRT has become ubiquitous, LFRT is still in the process of being adopted in countries like the UK, USA, India, and Singapore. The COVID-19 pandemic has further accelerated the adoption of FRT as a way to track the virus' spread and to build contactless biometric-based identification systems. For example, in Moscow, city officials were using a system of tens of thousands of cameras equipped with FRT to check for social distancing measures, usage of face masks, and adherence to quarantine rules to contain the spread of COVID-19.

FRT is also being steadily deployed for mass surveillance activities, often in violation of universally accepted principles of human rights such as necessity and proportionality. These worries have come to the forefront recently with the State use of FRT to identify people participating in protests. For example, FRT was used by law enforcement agencies to identify prospective lawbreakers during protests in Hong Kong, protests concerning the Citizenship Amendment Act, 2019 in New Delhi, and the Black Lives Matter protests across the USA.

Vociferous demands have been made by civil society and digital rights groups for a global moratorium on the pervasive use of FRT that enables mass surveillance, and cities such as Boston and Portland have banned its deployment. However, it remains to be seen how effective these measures are in halting the use of FRT. Even the temporary refusal by Big Tech companies to sell FRT to police forces in the US does not seem to have much instrumental value, as other private companies continue to supply it unhindered.

Regulation of FRT

The approach to the regulation of FRT differs vastly across the globe. The regulatory spectrum ranges from permissive use for mass surveillance of citizens in countries like China and Russia, to outright bans, as in Belgium and Boston (in the USA). However, in many countries around the world, including India, the use of FRT continues unabated, worryingly, in a regulatory vacuum.

Recently, an appellate court in the UK declared the use of LFRT for law enforcement purposes unlawful, on grounds of violation of the rights to data privacy and equality. Despite the presence of a legal framework in the UK for data protection and the use of surveillance cameras, the Court of Appeal held that there was no clear guidance on the use of the technology and that it gave excessive discretion to police officers.

The EU has been contemplating a moratorium on the use of FRT in public places. Civil society in the EU is demanding a comprehensive and indefinite ban on the use of FRT and related technology for mass surveillance activities.

In the USA, several orders banning or heavily regulating the use of FRT have been passed. A federal law banning the use of facial recognition and biometric technology by law enforcement has been proposed. The bill seeks to place a moratorium on the use of facial recognition until Congress passes a law to lift the temporary ban. It would apply to federal agencies such as the FBI, as well as local and State police departments.

The Indian Scenario

In July 2019, the Government of India announced its intention of setting up a nationwide facial recognition system. The National Crime Records Bureau (NCRB) – a government agency operating under the Ministry of Home Affairs – released a request for proposal (RFP) on July 4, 2019 to procure a National Automated Facial Recognition System (AFRS). The deadline for submission of tenders to the RFP has been extended 11 times since July 2019. The stated aim of the AFRS is to help modernise the police force and its processes of information gathering, criminal identification and verification, and the dissemination of such records among various police organisations and units across the country.

Security forces across the states and union territories will have access to the centralised database of the AFRS, which will assist in the investigation of crimes. However, civil society organisations have raised concerns regarding privacy and increased State surveillance, as the AFRS has no legal basis (statutory or executive) and lacks procedural safeguards and accountability measures such as an oversight regulatory authority. They have also questioned the accuracy of FRT in identifying darker-skinned women and ethnic minorities and expressed fears of discrimination.

This is in addition to the FRT already in use by law enforcement agencies in Chennai, Hyderabad, Delhi, and Punjab. There are several instances of deployment of FRT in India by the government in the absence of a specific law regulating FRT or a general data protection law.

Even the proposed Personal Data Protection Bill, 2019 is unlikely to assuage privacy challenges arising from the use of FRT by the Indian State. The primary reason for this is the broad exemptions provided to intelligence and law enforcement agencies under Clause 35 of the Bill on grounds of sovereignty and integrity, security of the State, public order, etc.

After the judgment in K.S. Puttaswamy vs. Union of India (Puttaswamy I), which reaffirmed the fundamental right to privacy in India, any act of State surveillance that breaches the right to privacy must satisfy the three-part test laid down in that case.

The three prongs of the test are: legality, which postulates the existence of a law along with procedural safeguards; necessity, defined in terms of a legitimate State aim; and proportionality, which requires a rational nexus between the objects and the means adopted to achieve them. This test was also applied to the use of biometric technology in the Aadhaar case (Puttaswamy II).

It may be argued that State use of FRT serves the legitimate aim of ensuring national security, but currently its use is neither sanctioned by law, nor does it pass the test of proportionality. For proportionate use of FRT, the State will need to establish that there is a rational nexus between its use and the purpose sought to be achieved, and that the use of such technology is the least privacy-restrictive measure available to achieve the intended goals. As the law stands today in India after Puttaswamy I and II, any use of FRT or LFRT is prima facie unconstitutional.

While mass surveillance is legally impermissible in India, targeted surveillance is allowed under Section 5 of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph Rules, 1951, and Section 69 of the Information Technology Act, 2000 (IT Act). Even the constitutionality of Section 69 of the IT Act has been challenged and is currently pending before the Supreme Court.

Puttaswamy I has clarified that the protection of privacy is not completely lost or surrendered in a public place, as it attaches to the person. Hence, the constitutionality of India's surveillance apparatus needs to be assessed against the standards laid down by Puttaswamy I. To check unregulated mass surveillance through the deployment of FRT by the State, there is a need to restructure the overall surveillance regime in the country. Even the Justice Srikrishna Committee report in 2018 highlighted that several executive-sanctioned intelligence-gathering activities of law enforcement agencies would be illegal after Puttaswamy I, as they do not operate under any law.

The need for reform of surveillance laws, in addition to a data protection law in India to safeguard fundamental rights and civil liberties, cannot be stressed enough. Surveillance law reform will have to focus on the use of new technologies like FRT and regulate their deployment with substantive and procedural safeguards to prevent abuse of human rights and civil liberties and to provide for relief.

Well-documented limitations of FRT and LFRT in terms of low accuracy rates, along with concerns of profiling and discrimination, make it essential for surveillance law reform to include additional safeguards, such as mandatory accuracy and non-discrimination audits. For example, the 2019 Face Recognition Vendor Test (part three) of the National Institute of Standards and Technology (NIST), US Department of Commerce, evaluates whether an algorithm performs differently across different demographics in a dataset. The need of the hour is to cease the use of FRT and place a temporary moratorium on any future deployments till surveillance law reforms with adequate proportionality safeguards have been implemented.
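The kind of demographic audit described above can be illustrated in miniature: compute the false match rate (FMR) and false non-match rate (FNMR) separately for each demographic group and compare them. This is a simplified sketch and not NIST's actual methodology or code; the record format, function name, and toy data below are assumptions made purely for illustration.

```python
from collections import defaultdict

def demographic_error_rates(trials):
    """Per-group error rates for a face matcher.

    trials: iterable of (group, same_person, matched) records, where
    `same_person` says whether the pair truly depicts one person and
    `matched` is the algorithm's decision for that comparison.
    Returns {group: {"FMR": ..., "FNMR": ...}}.
    """
    counts = defaultdict(lambda: {"impostor": 0, "fm": 0, "genuine": 0, "fnm": 0})
    for group, same_person, matched in trials:
        c = counts[group]
        if same_person:
            c["genuine"] += 1          # genuine pair: a miss is a false non-match
            if not matched:
                c["fnm"] += 1
        else:
            c["impostor"] += 1         # impostor pair: a hit is a false match
            if matched:
                c["fm"] += 1
    return {
        g: {
            "FMR": c["fm"] / c["impostor"] if c["impostor"] else 0.0,
            "FNMR": c["fnm"] / c["genuine"] if c["genuine"] else 0.0,
        }
        for g, c in counts.items()
    }
```

A large gap in FMR or FNMR between groups is precisely the kind of demographic differential that a mandatory non-discrimination audit would be designed to detect before deployment.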