Search Engines and the Right to be Forgotten

This post is authored by Thulasi K. Raj.

In January 2021, Indian Kanoon, the legal case law database, argued before the Kerala High Court that requiring the de-indexing of search results in the guise of privacy rights under Article 21 of the Constitution of India restricts the right to free speech. The petitioner in the case was aggrieved by the display of his personal details, including his name and address, on Google via Indian Kanoon. This has rekindled the debate on the right to be forgotten (“RTBF”) and its ambit in the Indian legal framework.

When we walk down the street, personal identifiers such as skin colour, approximate height, weight and other physical features are unconsciously communicated to others. It would be strange indeed if the right to privacy required us to erase these memories, which we involuntarily capture in ordinary social life.

What makes digital memory different, however, is its relative permanency. A digital device can store data more or less permanently. Schönberger explores how human forgetfulness is problematically replaced by perfect memory in his aptly titled book, ‘Delete: The Virtue of Forgetting in the Digital Age.’ He rightly remarks that the “balance of remembering and forgetting has become inverted.” Remembering is now the default, “and forgetting, the exception.” If a derogatory news report from several years ago surfaces in search results, it can damage one’s reputation and infringe upon one’s privacy. This is where the RTBF becomes significant.

Recital 65 of the EU’s General Data Protection Regulation (GDPR) acknowledges a “right to be forgotten”, i.e., a right to have one’s personal data erased in certain circumstances: first, where the data is no longer necessary in relation to the purpose for which it was collected; second, where the individual has withdrawn their consent or objects to their data being processed; and third, where the personal data does not comply with the GDPR. Recital 66 strengthens this right by requiring the data controller that made the personal data public to inform other controllers that may be processing the same personal data to also remove links to, or copies of, that data.

The privacy argument behind the RTBF is, firstly, that one must have control over one’s personal information. This includes personal details, contact information and search engine queries. Moreover, the individual, according to Mantelero, has a right not to be reminded of her previous acts, “without being perpetually or periodically stigmatized as a consequence of a specific action.” It enables her to regain control over her past, and to decide which parts of her information should be accessible to others and which should not.

The decision of the European Court of Justice (‘ECJ’) in Google Spain v. AEPD in 2014 brought the RTBF into mainstream political and academic debate. In this case, one Mario Costeja González, in Spain, found that when his name was searched on Google, the results included a newspaper announcement of a real estate auction for the recovery of his social security debts. He approached the Agencia Española de Protección de Datos (AEPD), the Spanish Data Protection Agency, seeking removal of the information from Google. The claims against Google were allowed, and Google appealed to the Spanish High Court, which referred the matter to the ECJ. The court recognised the RTBF under the 1995 EU Data Protection Directive for the first time, and held that search engines must remove ‘inadequate, irrelevant, or excessive’ personal information about users.

In India, clause 20 of the Personal Data Protection Bill, 2019 recognises the RTBF when any of three conditions is satisfied: when retention of the information is unnecessary, when consent given for disclosure of personal data is withdrawn, or when retention of the data is illegal. Unlike in the EU, adjudicating officers have to determine whether these conditions are met before ordering that the information be withheld. The Supreme Court made references to the RTBF in the Puttaswamy judgment. Various High Courts have also discussed this right while considering pleas for the removal of information from search engine results. Although such pleas have been allowed in some cases, it is difficult to find an authoritative judicial pronouncement affirmatively and comprehensively locating a right to be forgotten in the Indian legal framework.

An objection against recognition of the RTBF is its conflict with the right to free speech, especially in jurisdictions like the US where search engines themselves claim the right to free speech. When search engines are required to cease retaining personal information, they often argue that such a requirement violates their right to freedom of speech. They claim that the right to display information is part of the right to free speech, since it involves the collection, selection, arrangement and display of information. For instance, in Langdon v. Google Inc. in the United States, Google argued that the function a search engine performs is not fundamentally different from that of a newspaper editor who collects, sorts and publishes information, and that it is therefore entitled to a comparable right to free speech.

In India, the free speech rights of search engine companies have not been categorically adjudicated upon so far. The right to free speech is available to citizens alone under Article 19 of the Constitution. But the Supreme Court in Chiranjit Lal Chowdhuri held that fundamental rights are available not only to citizens but to “corporate bodies as well.” The Court has also held in Delhi Cloth and General Mills that the free speech rights of companies are co-extensive with those of their shareholders, and that denial of one can lead to denial of the other. This jurisprudence might enable search engine companies, such as Indian Kanoon, to make a free speech argument in India. However, the courts will be confronted with the critical question of how far search engine companies that collate information can be treated on par with companies engaged in printing and publishing newspapers.

The determination of the Indian Kanoon case will depend, among other things, on two aspects from a rights perspective: firstly, whether and to what extent the court will recognise a right to be forgotten under Indian law. This argument could rely on an expansive understanding of the right to privacy, especially informational privacy under Article 21 in the light of the Puttaswamy judgment. Secondly, whether search engines will be entitled to a free speech claim under Article 19. It remains to be seen what the implications of such a recognition would be, for search engines as well as for users.

(The author is a practising lawyer and a DIGITAL Fellow at the Centre for Communication Governance at National Law University, Delhi).

The Right to be Forgotten – Examining Approaches in Europe and India

This is a guest post authored by Aishwarya Giridhar.

How far does the right to control personal information about oneself extend online? Would it extend, for example, to having a person’s name erased from a court order that appears in online searches, or to having pictures or videos removed for those subjected to revenge pornography or sexual violence, where such content has been shared online non-consensually? These are some questions that have come up in Indian courts, and some of the issues that jurisprudence relating to the ‘right to be forgotten’ seeks to address. This right is derived from the concepts of personal autonomy and informational self-determination, which are core aspects of the right to privacy. They were integral to the Indian Supreme Court’s conception of privacy in Puttaswamy v. Union of India, which held that privacy is a fundamental right guaranteed by the Indian Constitution. However, privacy is not an absolute right and needs to be balanced against other rights such as freedom of expression and access to information, and the right to be forgotten tests how far the right to privacy extends.

On a general level, the right to be forgotten enables individuals to have personal information about themselves removed from publicly available sources under certain circumstances. This post examines the right to be forgotten under the General Data Protection Regulation (GDPR) in Europe, and the draft Personal Data Protection Bill, 2019 (PDP Bill) in India.

What is the right to be forgotten?

The right to be forgotten was brought into prominence in 2014 when the European Court of Justice (ECJ) held that users can require search engines to remove personal data from search results, where the linked websites contain information that is “inadequate, irrelevant or no longer relevant, or excessive.” The Court recognised that search engines have the ability to significantly affect a person’s right to privacy, since they allow any Internet user to obtain a wide range of information on a person’s life, which would have been much harder or even impossible to find without the search engine.

The GDPR provides statutory recognition to the right to be forgotten in the form of a ‘right to erasure’ (Article 17). It provides data subjects the right to request controllers to erase personal data in some circumstances, such as when the data is no longer needed for their original processing purpose, or when the data subject has withdrawn her consent or objected to data processing. In this context, the data subject is the person to whom the relevant personal data relates, and the controller is the entity which determines how and why the data would be processed. Under this provision, the controller would be required to assess whether to keep or remove information when it receives a request from data subjects.

In comparison, clause 20 of India’s Personal Data Protection Bill (PDP Bill), which proposes a right to be forgotten, allows data principals (similar to data subjects) to require data fiduciaries (similar to data controllers) to restrict or prevent the disclosure of personal information. This is possible where such disclosure is no longer necessary, was made on the basis of consent which has since been withdrawn, or was made contrary to law. Unlike the GDPR, the PDP Bill requires data principals to approach Adjudicating Officers appointed under the legislation to request restricted disclosure of personal information. The rights provided under both the GDPR and the PDP Bill are not absolute, and are limited by the freedom of speech and information and other specified exceptions. Under the PDP Bill, for example, some of the factors the Adjudicating Officer is required to account for are the sensitivity of the data, the scale of disclosure and the extent to which it is sought to be restricted, the role of the data principal in public life, and the relevance of the data to the public.

Although the PDP Bill, if passed, would be the first legislation to recognise this right in India, courts have provided remedies that allow for the removal of personal information in some circumstances. Petitioners have approached courts for removal of information in cases ranging from matrimonial disputes to defamation and information affecting employment opportunities, and courts have sometimes granted the requested reliefs. Courts have also acknowledged the right to be forgotten in some cases, although there have been conflicting orders on whether a person can have personal information redacted from judicial decisions available on online repositories and other sources. In November last year, the Orissa High Court also highlighted the importance of the right to be forgotten for persons whose photos and videos have been uploaded online without their consent, especially in cases of sexual violence. These cases also highlight why it is essential that this right be provided by statute, so that the extent of the protections it offers, as well as the relevant safeguards, can be clearly defined.

Intersections with access to information and free speech

The most significant criticisms of the right to be forgotten stem from its potential to restrict speech and access to information. Critics are concerned that this right will lead to widespread censorship, to a whitewashing of personal histories when it comes to past crimes and information on public figures, and to a less free and open Internet. There are also concerns that global takedowns of information, if required by national laws, can severely restrict speech and serve as a tool of censorship. Operationalising this right can also lead to other issues in practice.

For instance, the right framed under the GDPR requires private entities to balance the right to privacy with the larger public interest and the right to information. Two cases decided by the ECJ in 2019 provided some clarity on the obligations of search engines in this context. In the first, the Court clarified that controllers are not under an obligation to apply the right globally, and that removing search results for domains in the EU would suffice. However, it left open the option for countries to enact laws requiring global delisting. In the second case, among other issues, the Court identified some factors that controllers would need to account for in considering delisting requests. These included the nature of the information, the public’s interest in having that information, and the role the data subject plays in public life, among others. Guidelines framed by the Article 29 Working Party, set up under the GDPR’s precursor, also provide limited, non-binding guidance for controllers in assessing which delisting requests are valid.

Nevertheless, the balance between the right to be forgotten and competing considerations can still be difficult to assess on a case-by-case basis. This issue is compounded by concerns that data controllers would be incentivised to over-remove content to shield themselves from liability, especially where they have limited resources. While larger entities like Google may have the resources to invest in assessing claims under the right to be forgotten, this will not be possible for smaller platforms. There are also concerns that requiring private parties to make such assessments amounts to the ‘privatisation of regulation’, and that the limited potential for transparency on erasures removes an important check against the over-removal of information.

As a result of some of this criticism, the right to be forgotten is framed differently under the PDP Bill in India. Unlike the GDPR, the PDP Bill requires Adjudicating Officers, and not data fiduciaries, to assess whether the rights and interests of the data principal in restricting disclosure override others’ right to information and free speech. Adjudicating Officers are required to have special knowledge of, or professional experience in, areas relating to law and policy, and the terms of their appointment would have to ensure their independence. While they seem better suited to make this assessment than data fiduciaries, much of how this right is implemented will depend on whether the Adjudicating Officers are able to function truly independently and are adequately qualified. Additionally, this system is likely to lead to long delays in assessment, especially if the quantum of requests is similar to that in the EU. It will also not address the issues with transparency highlighted above. Moreover, the PDP Bill is not finalised and may change significantly, since the Joint Parliamentary Committee reviewing it is reportedly considering substantial changes to its scope.

What is clear is that there are no easy answers when it comes to providing the right to be forgotten. It can provide a remedy in some situations where people currently have no recourse, such as with revenge pornography or other non-consensual use of data. However, when improperly implemented, it can significantly hamper access to information. Drawing lessons from how this right is evolving in the EU can prove instructive for India. Although the assessment of whether or not to delist information will always be subjective to some extent, there are steps that can be taken to provide clarity on how such determinations are made. Clearly outlining the scope of the right in the relevant legislation, and developing substantive standards aimed at protecting access to information for use in assessing removal requests, are measures that can help strike a better balance between privacy and competing considerations.

Addition of US Privacy Cases on the Privacy Law Library

This post is authored by Swati Punia.

We are excited to announce the addition of privacy jurisprudence from the United States’ Supreme Court to the Privacy Law Library. These cases cover a variety of subject areas, from the right against intrusive search and seizure to the right to abortion and the right to sexual intimacy and relationships. You may access all the US cases on our database here.

(The Privacy Law Library is our global database of privacy law and jurisprudence, currently containing cases from India, Europe (ECJ and ECtHR), the United States, and Canada.)

The Supreme Court of the US (SCOTUS) has carved out the right to privacy from various provisions of the US Constitution, particularly the First, Fourth, Fifth, Ninth and Fourteenth Amendments. The Court has recognised the right to privacy in varying contexts through an expansive interpretation of these constitutional provisions. For instance, the Court has read privacy rights into the First Amendment to protect the private possession of obscene material from State intrusion; into the Fourth Amendment to protect the privacy of the person and possessions from unreasonable State intrusion; and into the Fourteenth Amendment, which recognises an individual’s decisions about abortion and family planning as part of the right of liberty under the amendment’s due process clause, encompassing aspects of privacy such as dignity and autonomy.

The right to privacy is not expressly provided for in the US Constitution. However, the Court identified an implicit right to privacy, for the very first time, in Griswold v. Connecticut (1965), in the context of the right to use contraceptives and marital privacy. Since then, the Court has extended its scope to include, inter alia, a reasonable expectation of privacy against State intrusion in Katz v. United States (1967), the abortion rights of women in Roe v. Wade (1973), and the right to sexual intimacy between consenting adults of the same sex in Lawrence v. Texas (2003).

The US privacy framework consists of several privacy laws and regulations developed at both the federal and state levels. As of now, US privacy laws are primarily sector-specific, rather than forming a single comprehensive federal data protection law like the European Union’s General Data Protection Regulation (GDPR) or the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA). However, certain US states, like California, have enacted comprehensive privacy laws comparable to the GDPR and PIPEDA. The California Consumer Privacy Act (CCPA), which came into effect on January 1, 2020, aims to protect consumers’ privacy across industries. It codifies certain rights and remedies for consumers, and obligations for entities and businesses. One of its main aims is to give consumers more control over their data by obligating businesses to ensure transparency about how they collect, use, share and sell consumer data.

To know more about the status of the right to privacy in the US, refer to our page here. Some of the key privacy cases from the SCOTUS on our database are Griswold v. Connecticut, Time Inc. v. Hill, Roe v. Wade, Katz v. United States, and Stanley v. Georgia.

A Brief Look at the Tamil Nadu Cyber Security Policy 2020

This post is authored by Sharngan Aravindakshan.

The Tamil Nadu State Government (State Government) released the Tamil Nadu Cyber Security Policy 2020 (TNCS Policy) on September 19, 2020. It has been prepared by the Electronics Corporation of Tamil Nadu (ELCOT), a public sector undertaking which operates under the aegis of the Information Technology Department of the Government of Tamil Nadu. This post takes a brief look at the TNCS Policy and its impact on India’s cybersecurity health.

The TNCS Policy is divided into five chapters –

  1. Outline of Cyber Security Policy;
  2. Security Architecture Framework – Tamil Nadu (SAF-TN);
  3. Best Practices – Governance, Risk Management and Compliance (GRC);
  4. Computer Emergency Response Team – Tamil Nadu (CERT-TN); and
  5. Cyber Crisis Management Plan (CCMP).

Chapter-I, titled ‘Outline of Cyber Security Policy’, contains a preamble which highlights the need for the State Government to have a cyber security policy. Chapter-I also lays out the scope and applicability of the TNCS Policy: it applies to ‘government departments and associated agencies’, and covers ‘Information Assets that may include Hardware, Applications and Services provided by these Agencies to other Government Departments, Industry or Citizens’. It also applies to ‘private agencies that are entrusted with State Government work’ (e.g. contractors), as well as ‘Central Infrastructure and Personnel’ who provide services to the State Government, which is likely a reference to Central Government agencies and personnel.

Notably, the TNCS Policy does not define ‘cyber security’, choosing instead to define ‘information security management’ (ISM). ISM is defined as involving the “planning, implementation and continuous Security controls and measures to protect the confidentiality, integrity and availability of Information Assets and its associated Information Systems”. Further, the policy states that information security management also includes the following elements –

(a) Security Architecture Framework – SAF-TN;

(b) Best Practices for Governance, Risk Management and Compliance (GRC);

(c) Security Operations – SOC-TN;

(d) Incident Management – CERT-TN;

(e) Awareness Training and Capability Building;

(f) Situational awareness and information sharing.

The Information Technology Department, which is the nodal department for IT security in Tamil Nadu, has been assigned several duties with respect to cyber security, including establishing and operating a ‘Cyber Security Architecture for Tamil Nadu’ (CSA-TN), a Security Operations Centre (SOC-TN) and a state Computer Emergency Response Team (CERT-TN). Its other duties include providing safe hosting for the servers, applications and data of various departments and agencies, advising on government procurement of IT and ITES, conducting training programmes on cyber security, and formulating cyber security related policies for the State Government. Importantly, the TNCS Policy also mentions the formulation of a ‘recommended statutory framework for ensuring legal backing of the policies’. While, prima facie, it appears that cyber security will remain subject to greater Central control than State control, any direct conflict is unlikely given the nature of these documents.

Chapter-II gives a break-up of the Cyber Security Architecture of Tamil Nadu (CSA-TN). The CSA-TN’s constituent components are (a) Security Architecture Framework (SAF-TN), (b) Security Operations Centre (SOC-TN), (c) Cyber Crisis Management Plan (CCMP-TN) and (d) the Computer Emergency Response Team (CERT-TN). It clarifies that the “Architecture” defines the overall scope of authority of the cyber security-related agencies in Tamil Nadu, and also that while the policy will remain consistent, the Architecture will be dynamic to meet evolving technological challenges.

Chapter-III deals with best practices in governance, risk management and compliance, and broadly covers procurement policies, e-mail retention policies, social media policies and password policies for government departments and entities. With respect to procurement policies, it highlights certain objectives, such as building trusted relationships with vendors for improving end-to-end supply chain security visibility and encouraging entities to adopt guidelines for the procurement of trustworthy ICT products. However, the TNCS Policy also specifies that it is not meant to infringe or supersede existing policies such as procurement policies.

On the subject of e-mails, the policy emphasizes standardizing e-mail retention periods on account of the “need to save space on e-mail server(s)” and the “need to stay in line with Federal and Industry Record e-Keeping Regulations”. E-mail hygiene has proved especially essential for government organizations, given that the malware discovered in one of the nuclear facilities situated in Tamil Nadu is believed to have entered the systems through a phishing email. Surprisingly, however, other than e-mail retention, the TNCS Policy does not deal with e-mail safety practices. For instance, the Information Security Best Practices released by the Ministry of Home Affairs provide a more comprehensive list of good practices, including specific sections on email communications and social engineering. These do not find mention in the TNCS Policy.

On social media, the TNCS Policy makes it clear that it prioritizes the ‘online reputation’ of government departments. Employees are accordingly advised against reacting online, and are instead asked to pass on such information to the official spokesperson for an appropriate response. The TNCS Policy also counsels proper disclosure where personal information is collected through online social media platforms. Some best practices for safe passwords are also detailed, such as password age and history (no reuse of any of the last ten passwords, etc.) and length (passwords may be required to have a minimum number of characters, etc.).
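The two password rules the policy describes can be expressed as a simple check. The sketch below is purely illustrative: the minimum length is an assumed value (the policy only says passwords "may be required to have a minimum number of characters"), and the function and parameter names are hypothetical, not drawn from the policy text.

```python
# Illustrative sketch of the password rules described in the TNCS Policy:
# a minimum length requirement and no reuse of any of the last ten passwords.
# MIN_LENGTH is an assumed value; the policy does not specify a number.

MIN_LENGTH = 8      # assumption for illustration only
HISTORY_SIZE = 10   # "no reuse of any of the last ten passwords"

def is_acceptable(new_password: str, previous_passwords: list) -> bool:
    """Return True if the candidate password satisfies both rules."""
    if len(new_password) < MIN_LENGTH:
        return False
    # Only the ten most recent passwords are checked for reuse.
    if new_password in previous_passwords[-HISTORY_SIZE:]:
        return False
    return True
```

A real implementation would compare salted hashes rather than plaintext passwords, but the structure of the checks would be the same.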

Chapter-IV highlights the roles and responsibilities of the Computer Emergency Response Team – Tamil Nadu (CERT-TN). It specifies that CERT-TN is the nodal agency responsible for implementing the Security Architecture Framework; for monitoring, detecting, assessing and responding to cyber vulnerabilities, threats and incidents; and for demonstrating cyber resilience. The policy also recognizes CERT-TN as the statutory body authorized to issue directives, guidelines and advisories to government departments. CERT-TN will also establish, operate and maintain the information security management systems for the State Government.

CERT-TN will also coordinate with the National and State Computer Security Incident Response Teams (CSIRTs), government agencies, law enforcement agencies, and research labs. However, under the TNCS Policy, the “Coordination Centre” (CoC) is the designated nodal intermediary between CERT-TN and governmental departments, CERT-In, State CERTs, etc. The CoC will also be responsible for monitoring responses to service requests, delivery timelines and other performance-related issues for CERT-TN. The TNCS Policy makes it clear that Incident Handling and Response (IHR) will be as per Standard Operation Process Manuals (prepared by CERT-TN) that will be regularly reviewed and updated. The ‘criticality of the affected resource’ will determine the priority of the incident.

Significantly, Chapter-IV also deals with vulnerability disclosures and states that vulnerabilities in e-Governance services will only be reported to CERT-TN or the respective department if they relate to e-Governance services offered by the Government of Tamil Nadu, and will not be publicly disclosed until a resolution is found. Other vulnerabilities may be disclosed to the respective vendors as well. An upper limit of 30 days is prescribed for resolving reported vulnerabilities. An ‘Incident Reporter’ reporting in good faith will not be penalized “provided he cooperates with the stakeholders in resolving the vulnerability and minimizing the impact”, and the Incident Reporter’s contribution in vulnerability discovery and resolution will be publicly credited by CERT-TN.

Chapter-IV also mandates regular security assessments of the State Government’s departmental assets, a help-desk for reporting cyber incidents, and training and awareness programmes both for CERT-TN and by CERT-TN for other departments. Departments will also be graded on the “maturity of Cyber Security Practices and Resilience Strength by the Key Performance Indicators”. However, these indicators are not specified in the policy itself.

Chapter-V is titled ‘Cyber Crisis Management Plan’ (CCMP), and is meant for countering cyber-attacks and cyber terrorism. It envisages establishing, in the form of guidelines, a strategic framework and actions to prepare for, respond to, and begin to coordinate recovery from a cyber-incident. ‘Detect(ing)’ cyber-incidents is noticeably absent from this list of verbs, especially considering that the first chapter laid emphasis on CERT-TN’s role in “Monitoring, Detecting, Assessing and Responding” to cyber vulnerabilities and incidents.

CERT-In’s Cyber Crisis Management Plan for Countering Cyber Attacks and Cyber Terrorism requires ministries and departments of State governments and Union Territories to draw up their own sectoral Cyber Crisis Management Plans in line with CERT-In’s plan, and the TNCS Policy establishes the institutional architecture for implementing such a plan. The TNCS Policy contemplates a ‘Crisis Management Group’ (CMG) for each department, constituted by the Secretary to the Government (Chairman), the heads of all organizations under the administrative control of the department, and the Chief Information Security Officers (CISOs)/Deputy CISOs within the department. It will be the task of the CMG to prepare a contingency plan in consultation with CERT-In, as well as to coordinate with CERT-In in crisis situations. The TNCS Policy also envisions a ‘Crisis Management Cell’ (CMC), under the supervision of the CMG. The CMC will be constituted by the head of the organization, the CISO, the head of HR/admin and the person in charge of the IT section. The TNCS Policy also requires each organization to nominate a CISO, preferably a senior officer with adequate IT experience. The CMC’s priority is to prepare a plan that would ensure continuity of operations and speedy restoration of an acceptable level of service.


The TNCS Policy is a positive step, with a whole-of-government approach towards increasing governmental cyber security at the State level. However, its applicability is restricted to government departments and their suppliers, vendors and contractors. It does not, therefore, view cyber security as a broader ecosystem that requires each of its stakeholders, including the public sector, private sector, NGOs and academia, to play a role in maintaining its security, nor does it recognize their mutual interdependence as a key feature of this domain.

Given the interconnected nature of cyberspace, cyber security cannot be achieved only through securing governmental assets. As both the ITU National Cybersecurity Strategy Guide and the NATO CCDCOE Guidelines recommend, it requires the creation and active participation of an equally robust private industry, and other stakeholders. The TNCS Policy does not concern itself with the private sector at large, beyond private entities working under governmental contracts. It does not set up any initiatives, nor does it create any incentives for its development. It also does not identify any major or prevalent cyber threats, specify budget allocation for implementing the policy or establish R&D initiatives at the state level. No capacity building measures are provided for, beyond CERT-In’s training and awareness programs.

Approaching cyber security as an ecosystem, whose maintenance requires the participation and growth of several stakeholders including the private sector and civil society organisations, and then using a combination of regulation and incentives, may be the better way.

Cyberspace and International Law: Taking Stock of Ongoing Discussions at the OEWG

This post is authored by Sharngan Aravindakshan.


The second round of informal meetings in the Open-Ended Working Group on the Use of ICTs in the Context of International Security is scheduled to be held from today (29th September) till 1st October, with the agenda being international law.

At the end of the OEWG’s second substantive session in February 2020, the Chairperson of the OEWG released an “initial pre-draft” (Initial Pre-Draft) of the OEWG’s report, for stakeholder discussions and comments. The Initial Pre-Draft covers a number of issues on cyberspace, and is divided into the following:

  1. Section A (Introduction);
  2. Section B (Existing and Potential Threats);
  3. Section C (International Law);
  4. Section D (Rules, Norms and Principles for Responsible State Behaviour);
  5. Section E (Confidence-building Measures);
  6. Section F (Capacity-building);
  7. Section G (Regular Institutional Dialogue); and
  8. Section H (Conclusions and Recommendations).

In accordance with the agenda for the coming informal meeting in the OEWG, this post is a brief recap of this cyber norm making process with a focus on Section C, i.e., the international law section of the Initial Pre-Draft and States’ comments to it.

What does the OEWG Initial Pre-Draft Say About International Law?

Section C of the Initial Pre-Draft begins with a chapeau stating that existing obligations under international law, in particular the Charter of the United Nations, are applicable to State use of ICTs. The chapeau goes on to state that “furthering shared understandings among States” on how international law applies to the use of ICTs is fundamental for international security and stability. According to the chapeau, exchanging views on the issue among States can foster this shared understanding.

The body of Section C records that States affirmed that international law, including the UN Charter, is applicable to the ICT environment. It particularly notes that the principles of the UN Charter, such as sovereign equality, non-intervention in the internal affairs of States, the prohibition on the threat or use of force, and human rights and fundamental freedoms, apply to cyberspace. It also mentions specific bodies of international law, such as international humanitarian law (IHL), international human rights law (IHRL) and international criminal law (ICL), as being applicable. Section C also records that “States underscored that international humanitarian law neither encourages militarization nor legitimizes conflict in any domain”, without mentioning which States did so.

Significantly, Section C of the Initial Pre-Draft also notes that a view was expressed in the discussions that “existing international law, complemented by the voluntary, non-binding norms that reflect consensus among States” is “currently sufficient for addressing State use of ICTs”. According to this view, it only remains for a “common understanding” to be reached on how the already agreed normative framework could apply and be operationalized. At the same time, the counter-view expressed by some other States is also noted in Section C, that “there may be a need to adapt existing international law or develop a new instrument to address the unique characteristics of ICTs.”

This view arises from the confusion or lack of clarity on how existing international law could apply to cyberspace and includes but is not limited to questions on thresholds for use of force, armed attacks and self-defence, as well as the question of applicability of international humanitarian law to cyberspace. Section C goes on to note that in this context, proposals were made for the development of a legally binding instrument on the use of ICTs by States. Again, the States are not mentioned by name. Additionally, Section C notes a third view which proposed a “politically binding commitment with regular meetings and voluntary State reporting”. This was proposed as a middle ground between the first view that existing international law was sufficient and the second view that new rules of international law were required in the form of a legally binding treaty. Developing a “common approach to attribution at the technical level” was also discussed as a way of ensuring greater accountability and transparency.

With respect to the international law portion, the Initial Pre-Draft proposed recommendations including the creation of a global repository of State practice and national views in the application of international law as well as requesting the International Law Commission to undertake a study of national views and practice on how international law applies in the use of ICTs by States.

What did States have to say about Section C of the Initial Pre-Draft?

In his letter dated 11 March 2020, the Chairperson opened the Initial Pre-Draft for comments from States and other stakeholders. A total of 42 countries have submitted comments, excluding the European Union (EU) and the Non-Aligned Movement (NAM), both of which have also submitted comments separately from their member States. The various submissions can be found here. Not all States’ submissions have comments specific to Section C, the international law portion, but it is nevertheless worthwhile examining the submissions of those States that do. India had also submitted comments; however, these are no longer available on the OEWG website and appear to have been taken down.

International Law and Cyberspace

Let’s start with what States have said in answer to the basic question of whether existing international law applies to cyberspace and, if so, whether it is sufficient to regulate State-use of ICTs. A majority of States have answered in the affirmative, and this list includes the Western Bloc led by the US, including Canada, France, Germany, Austria, Czech Republic, Denmark, Estonia, Ireland, Liechtenstein, Netherlands, Norway, Sweden, Switzerland, Italy and the United Kingdom, as well as Australia, New Zealand, Japan, South Korea, Colombia, South Africa, Mexico and Uruguay. While Singapore has affirmed that international law, in particular the UN Charter, applies to cyberspace, it is silent on whether international law in its current form is sufficient to regulate State action in cyberspace.

Several States, however, are of the clear view that international law as it exists is insufficient to regulate cyberspace or cannot be directly applied to cyberspace. These States have identified a “legal vacuum” in international law vis-à-vis cyberspace and call for new rules in the form of a binding treaty. This list includes China, Cuba, Iran, Nicaragua, Russia and Zimbabwe. Indonesia, in its turn, has stated that “automatic application” of existing law without examining the context and unique nature of activities in cyberspace should be avoided since “practical adjustment and possible new interpretations are needed”, and the “gap of the ungoverned issues in cyberspace” also needs to be addressed.

NAM has stated that the UN Charter applies, but has also noted the need to “identify possible gaps” that can be addressed through “furthering the development of international rules”. India’s earlier uploaded statement had expressed the view that although the applicability of international law had been agreed to, there are “differences in the structure and functioning of cyberspace, including complicated jurisdictional issues” and that “gaps in the existing international laws in their applicability to cyberspace” need examining. This statement also spoke of “workable modifications to existing laws and exploring the needs of, if any, new laws”.

Venezuela has stated that “the use of ICTs must be fully consistent with the purposes and principles of the UN Charter and international law”, but has also stated that “it is necessary to clarify that International Public Law cannot be directly applicable to cyberspace”, leaving its exact views on the subject unclear.

International Humanitarian Law and Cyberspace

The Initial Pre-Draft’s view on the applicability of IHL to cyberspace has also become a point of contention for States. States supporting its applicability include Brazil, Czech Republic, Denmark, Estonia, France, Germany, Ireland, Netherlands, Switzerland, the United Kingdom and Uruguay. India is among the supporters. Some among these like Estonia, Germany and Switzerland have called for the specific principles of humanity, proportionality, necessity and distinction to be included in the report.

States including China, Cuba, Nicaragua, Russia, Venezuela and Zimbabwe are against applying IHL, with their primary reason being that it will promote “militarization” of cyberspace and “legitimize” conflict. According to China, we should be “extremely cautious against any attempt to introduce use of force in any form into cyberspace,… and refrain from sending wrong messages to the world.” Russia has acerbically stated that to say that IHL can apply “to the ICT environment in peacetime” is “illogical and contradictory” since “IHL is only applied in the context of a military conflict while currently the ICTs do not fit the definition of a weapon”.

States’ comments offer little second-level detail on these questions, especially concerning specific principles such as sovereignty, non-intervention, the threat or use of force, armed attack and the inherent right of self-defence, beyond whether they apply to cyberspace. Zimbabwe has mentioned in its submission that these principles do apply, as has NAM. Cuba, as it did in the 2017 GGE, has taken the stand that the inherent right to self-defence under Article 51 of the UN Charter cannot be automatically applied to cyberspace, and has also stated that it cannot be invoked to justify a State responding with conventional attacks. The US has likewise maintained the view it expressed in the 2017 GGE: if States’ obligations, such as refraining from the threat or use of force, are to be mentioned in the report, it should also contain States’ rights, namely the inherent right to self-defence in Article 51.

Austria has categorically stated that the violation of sovereignty is an internationally wrongful act if attributable to a State. But other States’ comments are broader and do not address the issue of sovereignty at this level. Consider Indonesia’s comments, for instance, where it has simply stated that it “underlines the importance of the principle of sovereignty” and that the report should as well. For India’s part, its earlier uploaded statement approached the issue of sovereignty from a different angle. It stated that the “territorial jurisdiction and sovereignty are losing its relevance in contemporary cyberspace discourse” and went on to recommend a “new form of sovereignty which would be based on ownership of data, i.e., the ownership of the data would be that of the person who has created it and the territorial jurisdiction of a country would be on the data which is owned by its citizens irrespective of the place where the data physically is located”. On the face of it, this comment appears to relate more to the conflict of laws with respect to the transborder nature of data rather than any principle of international law.

The Initial Pre-Draft mentioning the need for a “common approach” for attribution also drew sharp criticism. France, Germany, Italy, Nicaragua, Russia, Switzerland and the United Kingdom have all expressed the view that attribution is a “national” or “sovereign” prerogative and should be left to each State. Iran has stated that addressing a common approach for attribution is premature in the absence of a treaty. Meanwhile, Brazil, China and Norway have supported working towards a common approach for attribution. This issue has notably seen something of a re-alignment of divided State groups.

International Human Rights Law and Cyberspace

States’ comments to Section C also pertain to its language on IHRL with respect to ICT use. Austria, France, the Netherlands, Sweden and Switzerland have called for greater emphasis on human rights and their applicability in cyberspace, especially in the context of privacy and the freedoms of expression, association and information. France has also included the “issues of protection of personal data” in this context. Switzerland has interestingly linked cybersecurity and human rights as “complementary, mutually reinforcing and interdependent”. Ireland and Uruguay’s comments also specify that IHRL applies.

On the other hand, Russia’s comments make it clear that it believes there is an “overemphasis” on human rights law, and it is not “directly related” to international peace and security. Surprisingly, the UK has stated that issues concerning data protection and internet governance are beyond the OEWG’s mandate, while the US comments are silent on the issue. While not directly referring to international human rights law, India’s comments had also mentioned that its concept of data ownership based sovereignty would reaffirm the “universality of the right to privacy”.

Role of the International Law Commission

The Initial Pre-Draft also recommended requesting the International Law Commission (through the General Assembly) to “undertake a study of national views and practice on how international law applies in the use of ICTs by States”. A majority of States including Canada, Denmark, Japan, the Netherlands, Russia, Switzerland, the United Kingdom and the United States have expressed clearly that they are against sending the issue to the ILC as it is too premature at this stage, and would also be contrary to the General Assembly resolutions referring the issue to the OEWG and the GGE.

With respect to the Initial Pre-Draft’s recommendation for a repository of State practices on the application of international law to State-use of ICTs, support is found in comments submitted by Ireland, Italy, Japan, South Korea, Singapore, South Africa, Sweden and Thailand. While Japan, South Africa and India (comments taken down) have qualified their views by stating these contributions should be voluntary, the EU has sought clarification on the modalities of contributing to the repository so as to avoid duplication of efforts.

Other Notable Comments

Aside from the above, States have raised certain other points of interest that may be relevant to the ongoing discussion on international law. The Czech Republic and France have both drawn attention to the due diligence norm in cyberspace and pointed out that it needs greater focus and elaboration in the report.

In its comments, Colombia has rightly pointed out that discussions should centre around “national views” as opposed to “State practice”, since it is difficult for State practice to develop when “some States are still developing national positions”. This accurately highlights a significant problem in cyberspace, namely the scarcity of State practice on account of the lack of clarity in national positions. It holds true for most developing nations, including but not limited to India.

On a separate issue, the UK has made an interesting, but implausible proposal. The UK in its comments has proposed that “States acknowledge military capabilities at an organizational level as well as provide general information on the legal and oversight regimes under which they operate”. Although it has its benefits, such as reducing information asymmetries in cyberspace, it is highly unlikely that States will accept an obligation to disclose or acknowledge military capabilities, let alone any information on the “legal and oversight regimes under which they operate”. This information speaks to a State’s military strength in cyberspace, and while a State may comment on the legality of offensive cyber capabilities in abstract, realpolitik deems it unlikely that it will divulge information on its own capabilities. It is worth noting here that the UK has acknowledged having offensive cyber capabilities in its National Cyber Security Strategy 2016 to 2021.

What does the Revised Pre-Draft Say About International Law?

The OEWG Chair, by a letter dated 27 May 2020, notified member States of the revised version of the Initial Pre-Draft (Revised Pre-Draft). He clarified that the “Recommendations” portion had been left unchanged. On perusal, it appears Section C of the Revised Pre-Draft is almost entirely unchanged as well, barring the correction of a few typographical errors. This is perhaps not surprising, given the OEWG Chair made it clear in his letter that he still expected “guidance from Member States for further revisions to the draft”.

CCG will track States’ comments to the Revised Pre-Draft as well, as and when they are submitted by member States.

International Law and Cyberspace: Three Different Conversations

With the establishment of the OEWG, the UN GGE was no longer the only multilateral conversation on cyberspace and international law among States in the UN. Of course, both the OEWG and the GGE are about more than just the questions of whether and how international law applies in cyberspace – they also deal with equally important, related issues of capacity-building, confidence building measures and so on in cyberspace. But their work on international law is still extremely significant since they offer platforms for States to express their views on international law and reach consensus on contentious issues in cyberspace. Together, these two forums form two important streams of conversation between States on international law in cyberspace.

At the same time, States are also separately articulating and releasing their own positions on international law and how it applies to cyberspace. Australia, France, Germany, Iran, the Netherlands, the United Kingdom and the United States have all indicated their own views on how international law applies to cyberspace, independent of both the GGE and the OEWG, with Iran being the latest State to do so. To the extent they engage with each other by converging and diverging on some issues such as sovereignty in cyberspace, they form the third conversation among States on international law. Notably, India has not yet joined this conversation.

It is increasingly becoming clear that this third conversation is taking place at a level of granularity not seen so far in the OEWG or the GGE. For instance, the raging debate on whether sovereignty in international law in cyberspace is a rule entailing consequences for violation, or merely a principle that only gives rise to binding rules such as the prohibitions on the use of force or intervention, has so far been restricted to this third conversation. In contrast, States’ comments to the OEWG’s Initial Pre-Draft indicate that discussions in the OEWG still centre around the broad question of whether and how international law applies to cyberspace. Only Austria mentioned in its comments to the Initial Pre-Draft that it believed sovereignty was a rule whose violation would be an internationally wrongful act. The same applies to the GGE: although it was able to deliver consensus reports on international law applying to cyberspace, it cannot claim to have dealt with these issues at a level of specificity beyond this.

This variance in the three conversations shows that some States are racing way ahead of others in their understanding of how international law applies to cyberspace, and these States are so far predominantly Western and developed, with the exception of Iran. Colombia’s comment to the OEWG’s Initial Pre-Draft is a timely reminder in this regard, that most States are still in the process of developing their national positions. The interplay between these three conversations around international law and cyberspace will be interesting to observe.

The Centre for Communication Governance’s comments to the Initial Pre-Draft can be accessed here.

CCG’s Comments to the Ministry of Defence on the Defence Acquisition Procedure, 2020

On 28 July 2020, the Ministry of Defence (‘MoD’) uploaded the second draft of the Defence Procurement Procedure 2020 (‘DPP 2020’), now renamed as the ‘Defence Acquisition Procedure 2020’ (‘DAP 2020’) on its website, inviting comments and suggestions from interested stakeholders and the general public.

CCG submitted its comments on the DAP 2020 underscoring its key concerns with this latest iteration of the MoD’s policy for capital acquisitions. The comments were authored by Gunjan Chawla, with inputs and research from Sharngan Aravindakshan and Vagisha Srivastava.

Our comments to the MoD are aimed at:

(1) Highlighting certain points in law and procedure to refine the DAP 2020 and facilitate the building of a more robust regulatory framework for defence acquisitions that contribute to the building of an Aatmanirbhar Bharat (self-reliant India).

(2) Presenting certain legal tools and frameworks that remain at the Ministry’s disposal in this endeavour geared towards a thorough preparation for the defence of India, in tandem with the envisioned goal of the National Cybersecurity Strategy 2020-2025 [currently being formulated by the office of the National Cybersecurity Coordinator (‘NCSC’)] to build a cyber secure nation.

Other than this broader objective of formulating a clear, coherent and comprehensive policy for acquisition of critical technologies to strengthen India’s national security posture, our comments are intended to contribute meaningfully to the building of legal frameworks that enable enhancing the state of cybersecurity in India generally, and the defence establishment and defence industrial base ecosystem specifically.

The comments are divided into five parts.

Part I introduces the scope and ambit of this document. These comments are not a granular evaluation of the merits and demerits of every procedural step to be followed in various categories of defence acquisitions. Here, we broadly trace the evolution of the structure, objectives and salient features of India’s defence procurement and acquisition policies in recent years. The scope of the comments is restricted to those features of the DAP that are most closely related to, or have implications for, the cybersecurity of the defence establishment. In this regard, we note the omission of Chapter X on ‘Simplified Capital Expenditure Procedure’ from the text of the draft DAP document as a serious error that ought to be rectified at the earliest opportunity.

Part II deals with cybersecurity and information security in the acquisitions process generally, as this is a concern that must be addressed irrespective of the procedural categorisation of a particular acquisition. The inherently sensitive and strategic nature of defence acquisitions demands that processes and procedures be formulated in a manner that prevents any unwarranted leakage of information at premature stages of the acquisition process. Herein, we recommend that:

  1. The DAP 2020 should carefully distinguish between the terms ‘information security’ and ‘cyber security’, and refrain from using them interchangeably in policy documents.
  2. The DAP 2020 should demand full disclosure of the history of cyber-attacks, breaches and incidents suffered by the vendor company (and related corporate entities) prior to the signing of the acquisition contract. This should be supplemented with a good faith disclosure of incidents where the cyber infrastructure or assets of the vendor company may have been used, with or without proper authorization, in the conduct of a cyber breach or other incident, including attacks, exploits or other violations of digital privacy and human rights.

    As discussed in the comments, this line of inquiry would further India’s adherence to at least three of eleven voluntary, non-binding norms on responsible state behaviour in cyberspace articulated in the 2015 Report of the Group of Governmental Experts on Advancing Responsible State Behaviour in Cyberspace in the context of International Security.
  3. Online procurement portals should be designated as ‘Critical Information Infrastructure’ and/or ‘Protected Systems’ within the meaning of Sections 70 and 70A of the Information Technology Act, 2000.

Part III of the comments focuses on issues in the acquisition of information and communications technologies (ICT) and cyber systems. All suggestions and comments included in this Part are aimed towards ensuring that our vision of Aatmanirbhar Bharat (self-reliant India) is also a sustainable one.

Key recommendations presented in this part include:

  1. Clearly defining the terminology used with regard to the ‘cyber domain’ in Chapter VIII, such as ICTs/cyber systems, in order to bring more clarity to the procurement process, as well as the scope and ambit of the DAP document.
  2. In these definitions and classification, distinguishing both ‘cyber weapons’ and ‘cyber physical weapons’ from cyber systems for command and control or C4I2SR, as well as ‘cybersecurity products and services’, which are essential to protect the confidentiality and integrity of sensitive government data across various ministries from external threats.
  3. The MoD should clarify the scope and ambit of the DAP and the DPM and the extent to which they apply to various categories of IT, ICT and cyber systems.
  4. The defence budget dataset should be re-assessed to evaluate the ratio of revenue expenditure to capital expenditure, alongside an assessment of how much of the capital expenditure incurred over the years has contributed to capital assets owned by the armed forces, and what portion has been diverted towards the maintenance, upkeep and life-cycle costs of equipment as per the CBRP model.

Further building on the issues that have been highlighted in the previous sections, Part IV delves into the broader legal and Constitutional framework applicable to procurements generally, and defence acquisitions specifically.

Herein, we propose opening up a discussion on opportunities and challenges in strengthening Parliamentary oversight over the defence acquisitions. Given the huge sums of public funds that are involved in defence acquisitions, ensuring accountability and integrity in these processes is of paramount importance.

We note that the Defence Acquisition Procedure as well as the Defence Procurement Manual are internal guidelines issued by the Ministry of Defence as policy directives, to be followed as a matter of the Executive’s internal administration, and so far do not enjoy legislative backing through an Act of Parliament. Accordingly, this section presents a brief overview of current processes and mechanisms in this regard, and recommends that:

  1. This defect in the DAP ought to be remedied on a priority basis, drawing on the Constitutional authority vested in Parliament pursuant to Article 246 read with Schedule VII, List I Entry 1 to enact laws ‘for the preparation of defence of India’.

Part V concludes the major findings and recommendations of this submission.

The comments can be accessed here on CCG’s Blog.

The Potential and Hurdles of Fighting Atrocities in the Age of Social Media

Preserving online content documenting atrocities – in an atmosphere of censorship of social media, both by platforms themselves and governments – is a challenge.

By Sharngan Aravindakshan and Radhika Kapoor

This post originally appeared on April 27, 2020.

Grisly video footage showing police personnel wearing riot gear, coercing five bruised, bloodied men to sing the national anthem, emerged on social media during the communal violence in Delhi earlier this year. One of the men, Faizan, died later from his injuries.

Shocking as it was, the video was one among many that helped shine light on police brutality, forcing the police to initiate an inquiry into the matter. Along similar lines, videos of the January 2020 attack on Jawaharlal Nehru University, uploaded on social media, helped unveil the identity of the assailants.

Social media is used both by the public as well as human rights and humanitarian organisations to gather, store, analyse and disseminate information as part of early warning, prevention and mitigation systems during localised conflicts, riots and other forms of mass violence. There is immense evidentiary potential of such information in relation to atrocity crimes, whether as pictures or videos. But there are hurdles faced in the preservation of such online content in an atmosphere of censorship of social media, both by platforms themselves and governments.

Online content as evidence of atrocity crimes

Judicial mechanisms have often failed to hold perpetrators of atrocity crimes accountable because of insufficient evidence. In 2007, constrained by a lack of evidence, the International Court of Justice was unable to hold Serbia responsible for the Bosnian genocide. Similarly, in the case of Gbagbo and Blé Goudé – charged with crimes against humanity committed during post-electoral violence in the Ivory Coast – the ICC observed that the prosecutor had failed to submit “sufficient evidence.” In India, criminal courts acquitted dozens of accused persons in the 2002 Gujarat riots cases for lack of evidence.

But in today’s digital age, social media may be one platform that remains relatively inexpensive and accessible to civilians, even in conflict-ridden areas or at high-risk for mass violence. Platforms like Facebook, Twitter and YouTube enable victims or even bystanders to quickly take photos and videos, and upload them to inform the world about ongoing atrocities or mass violence at their location at a moment’s notice. This can serve several purposes, including as warnings to others, beacons to direct relief efforts, or even evidence that can later be used to prosecute the perpetrators.

For instance, video footage taken by an activist group on cell phones and uploaded to social media was crucial in indicting high-level officers for police atrocities against civilians in Rio de Janeiro’s favelas. Horrific videos uploaded to forums like Facebook, showing the execution of some individuals, led to the issuance of an arrest warrant in 2017 against Mahmoud Al-Werfalli, a commanding officer of the Libyan National Army – the first of its kind issued by the ICC relying on social media evidence. Similarly, thousands of high-resolution images captured by Syrian defector “Caesar”, depicting torture in Syrian government facilities, were submitted to the German Federal Prosecutor’s Office to consolidate the evidentiary record against Syrian intelligence services and military forces in a universal jurisdiction case. Eventually, the German Federal Court of Justice issued an arrest warrant on the basis of, among other evidence, the “Caesar images.”

Don’t throw the baby out with the bathwater

Easy access to social media may be a blessing in some ways for human rights activists, but its potential as an evidence repository is underutilised and faces several obstacles. Popular platforms including Facebook, Twitter and YouTube all use a mix of artificial intelligence and human moderators to filter, assess and regulate content. Machine learning algorithms track and filter out content that may violate their community rules; content the algorithm cannot confidently classify is sent to human moderators, who then make the final decision.
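The two-stage pipeline described above, in which an automated classifier handles clear-cut cases and routes ambiguous content to human reviewers, can be sketched roughly as follows. This is purely illustrative: the function name, thresholds and labels are hypothetical and do not reflect any platform’s actual system.

```python
# Illustrative sketch of a two-stage moderation pipeline (hypothetical
# thresholds; not any real platform's logic). Assume a classifier has
# assigned each post a "violation score" between 0 and 1.

def route_content(violation_score: float,
                  auto_remove_at: float = 0.95,
                  needs_review_at: float = 0.5) -> str:
    """Route a post based on the classifier's violation score."""
    if violation_score >= auto_remove_at:
        # High-confidence violations are removed before anyone sees them.
        return "removed_automatically"
    if violation_score >= needs_review_at:
        # Ambiguous content goes to human moderators for a context-aware call.
        return "queued_for_human_review"
    return "kept"

# A video documenting an atrocity can score high on "graphic violence"
# even though it has evidentiary value: the score carries no context.
print(route_content(0.97))  # removed_automatically
print(route_content(0.60))  # queued_for_human_review
```

The limitation discussed in this post falls directly out of this structure: a threshold applied to a single score cannot distinguish propaganda glorifying violence from footage documenting the same violence as evidence.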

But these platforms are all increasingly coming under pressure from governments to combat false and inflammatory content on their platforms. For instance, incidents such as New Zealand’s Christchurch shooting, which was live-streamed on Facebook and remained on the site for hours before being taken down, have led to the endorsement of the Christchurch Call by tech companies and governments, including India, to eliminate terrorist and violent extremist content online.

As a result, these companies are ramping up efforts to police their platforms, resorting to technology to take down content deemed harmful or illegal. Facebook asserted in 2018 that 99.5% of terrorism-related content was taken down by algorithms before anyone could see it.

The problem with this is that it ends up removing several posts that document atrocities as well, throwing the baby out with the bathwater. In their current state, artificial intelligence-enabled algorithms cannot properly understand the context in which content is posted – a highly subjective task that even human moderators struggle with. YouTube, for instance, has taken down hundreds of videos which were evidence of government-led attacks in Syria, because they were flagged as violent. The video on Facebook showing Werfalli ordering the execution of a group of individuals and kickstarting his prosecution by the ICC has also been deleted. (An archived copy exists, however.)

Companies also do not disclose how these machine-learning algorithms work, making it harder for civil society and other organisations to understand them and engage with the process.

Apart from content monitoring by platforms, censorship by the government can also pose obstacles. In India, a combination of Section 69A of the Information Technology Act, 2000, and the rules framed under it, i.e., the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (Blocking Rules, 2009), provides the Central government with the power to issue takedown orders with which platforms must comply. The procedure appears to have checks and balances but in reality involves extensive executive discretion and lacks transparency.

No member of the judiciary is involved in any stage of this process. Even the Review Committee that sits periodically and reviews the validity of the blocking order does not have any judicial member. If it finds that the order is not valid, it can direct unblocking of the particular post, but this rarely happens in practice. Of course, these orders are subject to judicial review in the form of challenge in courts, but Rule 16 of the Blocking Rules ensures that these takedown orders are confidential, making any challenge in court difficult.

The Supreme Court’s judgement in Shreya Singhal v. Union of India upheld the validity of these rules, ostensibly on the ground that these apparent safeguards are sufficient. The judgement is laudable for striking down the notorious Section 66A of the Information Technology Act, 2000, and for making intermediaries’ lives easier by clarifying that they are only required to take down illegal content when directed to by an order of a court or a competent government authority. But the challenge to the Blocking Rules, 2009, ought to have been more seriously considered.

Such unchecked power in the hands of the government when it comes to social media regulation is problematic. Takedown orders issued to these platforms under Section 69A are usually shrouded in secrecy on account of the confidentiality obligation under the Blocking Rules, 2009. In 2019, news reports emerged of alleged requests made to Twitter by the Indian government to take down accounts for spreading “anti-India propaganda” in Jammu and Kashmir. Instances where platforms have pushed back strongly against unreasonable blocking orders are few.

All of this gains significance in the context of the Delhi violence, which saw several pictures and videos uploaded on social media platforms such as Twitter. Among these were videos documenting police inaction while crowds attacked individuals, and police personnel even breaking surveillance cameras. Such videographic evidence can be critical – it can assist in investigations and potentially be used as evidence in court (subject to proof of authenticity), with two FIRs having already been filed in respect of Faizan’s death.

Context is everything

Governments the world over are clamping down on violent and extremist content online. The European Union is considering passing a regulation that will require online platforms to remove access to terrorism-related content within a record one hour of receipt of the removal order, failing which they will be fined up to 4% of their global turnover in the previous business year. Australia passed legislation last year under which executives of social media companies can be imprisoned if “abhorrent violent material” is not removed quickly.

But context is everything. Every online video, photograph or other documentary record – even those that graphically depict violence – is part of a larger story or record. Often, such online content can help consolidate the evidentiary record for mass atrocities. Failing to acknowledge this, as is often currently the case, will continue to cause problems for atrocity documentation. One way platforms can help is by retaining the data they take down, as Facebook did when it confirmed that it was “preserving data” from pages removed for inciting violence against the Rohingya.

At the same time, the authenticity of such pictorial and videographic evidence may be disputed. Deep fakes have made it possible to falsify images that can escape detection even by algorithms. But solutions are being developed, such as the eyeWitness to Atrocities app developed by the International Bar Association which adds a time-stamp and GPS fixed location to the recordings, which can then be encrypted and uploaded to data banks from anywhere. In Syria’s “Caesar” case, the metadata underlying the images was also submitted and used to verify the authenticity of the images, further enhancing the images’ evidentiary value.
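To illustrate the general idea behind tools like eyeWitness (whose actual implementation is not public and is not shown here): binding a recording to its capture-time metadata with a cryptographic digest makes later tampering detectable, because changing the file, the timestamp or the location changes the digest. The sketch below is a minimal, hypothetical illustration in Python; all function and field names are our own, and real evidence-capture systems additionally rely on trusted timestamping, device attestation and encrypted upload.

```python
import hashlib
import json

def seal_evidence(media_bytes: bytes, lat: float, lon: float,
                  captured_at: str) -> dict:
    """Bundle media with capture metadata and a tamper-evident digest.

    Illustrative only: names and structure are hypothetical, not the
    eyeWitness app's actual format.
    """
    metadata = {
        "captured_at": captured_at,  # ISO-8601 UTC timestamp
        "gps": {"lat": lat, "lon": lon},
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    # Digest over the canonical (sorted-key) JSON encoding of the
    # metadata: altering the file, time or location afterwards
    # changes this value.
    seal = hashlib.sha256(
        json.dumps(metadata, sort_keys=True).encode()
    ).hexdigest()
    return {"metadata": metadata, "seal": seal}

def verify_evidence(media_bytes: bytes, record: dict) -> bool:
    """Recompute both digests and compare them against the record."""
    md = record["metadata"]
    if md["media_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # the media file itself was altered
    expected = hashlib.sha256(
        json.dumps(md, sort_keys=True).encode()
    ).hexdigest()
    return expected == record["seal"]
```

A verifier holding only the record and the file can thus detect substitution of either, which is the property that gave the “Caesar” images’ metadata its corroborative value.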

Social media can be a powerful force for good. With the government’s much-awaited amendment to the Information Technology (Intermediary Guidelines) Rules, 2011, around the corner, any measure dealing with these tough issues should ensure that it stays that way.

Sharngan Aravindakshan is a Programme Officer in the Centre for Communication Governance at National Law University, Delhi. Radhika Kapoor is a Harvard Kaufman Fellow at the Public International Law and Policy Group, Washington DC.

Press Note 3 of 2020: FDI Policy and the Expanding Sphere of National Security

By Sharngan Aravindakshan

India recently joined the list of countries that have placed restrictions on foreign investment to fend off predatory investments during the ongoing Covid-19 crisis. It announced that investors from countries sharing a land border with it would henceforth have to invest only through the government approval route. India’s decision follows on the heels of similar curbs on foreign investment by Australia and a guidance note issued by the EU urging its members to be vigilant in order to avoid “a sell-off of Europe’s business and industrial actors, including SMEs.” Following this, Germany, France, and Spain are reportedly taking steps to safeguard their companies’ interests.

Para 3.1.1(a) of Press Note 3 of 2020 dated April 17, 2020, requires all investment from Afghanistan, Bangladesh, Bhutan, Myanmar, Nepal, Pakistan, and most importantly, China, as countries that share a land border with India, to go through the government approval route, irrespective of the sector. To ensure that indirect investments from these countries routed through other jurisdictions such as Mauritius are covered as well, the press note specifies that it applies to beneficial ownership of such investment in the specified countries. Further, Para 3.1.1(b) of the press note specifies that any change in the existing ownership of FDI in Indian entities resulting in the beneficial ownership falling within the purview of Para 3.1.1(a) will also require government approval.

This latest amendment to the FDI policy is indicative of the Government not wanting to leave weakened companies in strategically significant sectors vulnerable to takeover by companies looking for a greater foothold in India. The amendment to the FDI policy came into effect from the midnight of April 23.

The immediate aftermath of the release of the press note saw the Chinese Embassy in India allege that the amended policy violated the WTO principle of non-discrimination and express a hope that India would “revise relevant discriminatory practices, treat investments from different countries equally, and foster an open, fair and equitable business environment”. India’s view, however, reportedly is that there is no violation of WTO principles since the press note only specifies a different approval process for foreign investment from the concerned countries, as opposed to a bar on such investment.

Shift in Policy

Foreign direct investment (FDI), i.e., investment by a foreign entity in a company in another country, can ordinarily be routed into India through two routes: the automatic route and the approval route. The automatic route allows FDI without any prior approval from the RBI or the Government and applies to sectors such as e-commerce activities, electronic systems, and ports & shipping, among others. A formal notification is required, but no approval.

FDI through the approval route requires the prior approval of the Government. The entity proposing to invest has to apply through the Foreign Investment Facilitation Portal, which provides a single-window clearance system, after which the application is sent to the concerned Ministry. The Ministry, in consultation with the Department for Promotion of Industry and Internal Trade (DPIIT) under the Ministry of Commerce, will then approve or reject the application. This route is reserved for sectors such as the banking and public sector, multi-brand retail, print media, and satellites. Foreign investment in some sectors, such as defence, can be routed through the automatic route up to 49%, with any additional investment requiring government approval. There are also some sectors, such as atomic energy generation, in which no foreign investment is allowed under either route.

Even earlier, all investment from Bangladesh was required to go through the approval route, while all investment from Pakistan, besides being allowed only through the approval route, was also restricted to sectors/activities other than defence, space, atomic energy and sectors/activities prohibited for foreign investment.

But Press Note 3 is a marked shift in this policy since it mandates that all investment from India’s bordering countries requires government approval. There is also no sunset clause, i.e., it is not known whether this is a temporary measure for the duration of the current crisis or if it will continue to apply for all future foreign investments from these regions even post the crisis.

The China Factor

Quite apparently, the policy aims to prevent a stronger Chinese foothold in significant sectors in India. Cumulative Chinese investment in India has crossed 26 billion US dollars.

In India, alarm bells began ringing after HDFC informed stock exchanges that the People’s Bank of China had increased its stake in HDFC from 0.8% to 1.01% in mid-April. The Securities and Exchange Board of India (SEBI) then turned its focus on the amount of Chinese investments in Indian companies, vigorously seeking details on foreign portfolio investments from Asian countries. These details include whether Chinese investors control the funds and whether any controlling interest is vested in the investors from these countries.

There is concern that Chinese state-owned enterprises, with access to vast reserves and deep pockets, are capable of buying out strategically significant companies whose valuations have been hit in their home countries. Amid a global crisis caused by the Covid-19 virus that has brought most economies to a standstill, the Chinese economy has reportedly shown strong resilience. Even before the crisis, governments were becoming increasingly concerned about the over-reliance of major global supply chains on China; the crisis has laid these concerns bare, and India is not insulated from such exposure to the Chinese economy. Facing shortages of test kits for Covid-19, several countries, including India, imported rapid test kits from China, only for reports to emerge that they were faulty and could not be relied upon. India has also reportedly cancelled its order for these kits.

China’s reach into India’s Tech Sector

Chinese investment funds a significant portion of India’s tech start-ups. Of the 30 Indian unicorns, at least 18 have a Chinese investor.

According to a recent report by Gateway House, over 75 start-ups in India have Chinese investors concentrated in e-commerce, fintech, media/social media, aggregation services, and logistics. Another recent report by the Brookings Institution reveals that since 2016, Alibaba, Tencent, and Xiaomi have invested over 3.5 billion dollars in the Indian tech space.

BigBasket, Byju’s, Flipkart, Hike, MakeMyTrip, Ola, Paytm, PolicyBazaar, Quickr, Snapdeal, ShareChat, Swiggy, and Zomato are only a few of the start-ups that have Chinese investments. 

Chinese investment in these Indian start-ups consists of both venture capital funds as well as funds from Chinese tech companies such as Tencent and Alibaba. As of 2019, Chinese company BBK Electronics Corporation, which owns brands such as OnePlus, Oppo, and Vivo, has climbed to the top of the Indian smartphone market. It is telling that its second-in-line is another Chinese company, Xiaomi.

This level of entrenchment in the Indian economy can have significant consequences. Chinese stakeholders that have invested heavily can lean on Indian companies to adopt and use Chinese technology. Fears the world over of malicious code in Huawei’s technology that allows snooping by the Chinese Government certainly warrant greater scrutiny of this technology. Nor is this concern over data privacy limited to Huawei. In a ranking for maintaining online privacy by Amnesty, Tencent was the only company that “has not stated publicly that it will not grant government requests to access encrypted messages by building a backdoor”; Amnesty’s assessment gave Tencent a score of 0 out of 100. Even within India, Chinese investments have raised concerns over data-sharing with the Chinese Government. Prominent parliamentarians have questioned whether Indian users’ data privacy is maintained, in light of Paytm’s investments from Alibaba and TikTok’s growing influence in India, given its close connection to the Chinese Government. The Huawei controversy is also indicative of the close links between China’s private sector and the People’s Liberation Army, creating a legitimate concern that Indian companies can be made to toe the Chinese line for fear of losing investment. Tencent’s 150 million dollar investment in Reddit also invited heavy criticism on account of censorship fears.

Given these fears, China’s massive reach in India’s technology sector calls for a more watchful eye.

An Indian-CFIUS in the Reckoning?

National security concerns have begun pervading almost every sphere, more in recent times than ever before. Some jurisdictions have a specialized screening process that reviews all foreign investment through the lens of national security. The Committee on Foreign Investment in the United States (CFIUS) is one such interagency group, which reviews foreign investments in the United States to assess whether they pose a threat to US national security. It includes members from the Homeland Security, Defense, Energy, Commerce, Labor, National Intelligence, Trade, and Science & Technology departments, as well as the Attorney General. The scope of its review includes “any merger, acquisition or takeover….which could result in foreign control of any person engaged in interstate commerce in the United States”, and its role is to determine whether and to what extent the transaction could impact US national security. Should it determine that it does, the President has the power to suspend or prohibit the transaction, or to allow it subject to conditions. The CFIUS review process, divided into various stages, is to be completed within a fixed number of days, but can be automatically extended. However, CFIUS has traditionally been opaque and secretive: its decisions are not made public, and companies have little scope to present their case or challenge its decisions.

Notable action by CFIUS includes a review of TikTok owner ByteDance Technology’s 1 billion US dollar acquisition of a US social media app, as well as forcing Beijing Kunlun Tech to sell its holding in Grindr. Concerns had been raised about the collection of user data by TikTok, a video-sharing platform, and about Chinese censorship on TikTok, even in the US, of content relating to Tiananmen Square or Tibet (among others). CFIUS is reportedly in talks with TikTok on measures it can take to address these concerns. Similarly, Kunlun’s Grindr transaction was not permitted because of concerns over the personal data stored on the app, including its users’ geolocations.

Other jurisdictions that similarly perceive threats to national security posed by China’s long arm are adopting CFIUS-like screening policies. In March 2019, the European Union adopted a regulation establishing a framework to screen foreign investments on the grounds of security or public order, including investments “regarding critical infrastructure, critical technologies, or critical inputs which are essential for security or public order.” Member states can seek information on investments that they consider a risk to their security or public order and can submit comments on such investments, as can the Union. But the ultimate decision concerning the investment rests with the individual member state.

It is perhaps time for India to consider similar mechanisms as well. Currently, each Ministry reviews foreign investment in its sector and approves it. It may be beneficial for India to consider an inter-ministerial body to similarly screen investments through the lens of national security, with due consideration given to the time taken for approvals. This will enable and reinforce a uniform and holistic approach to national security. Given the influx of investment in India’s burgeoning tech sector, such a mechanism may be needed sooner rather than later.

The Pegasus Hack: A Hark Back to the Wassenaar Arrangement

By Sharngan Aravindakshan

The world’s most popular messaging application, WhatsApp, recently revealed that a significant number of Indians were among the targets of Pegasus, a sophisticated spyware that operates by exploiting a vulnerability in WhatsApp’s video-calling feature. It has also come to light that WhatsApp, working with the University of Toronto’s Citizen Lab, an academic research organization focused on digital threats to civil society, has traced the source of the spyware to NSO Group, an Israeli company well known for developing and selling hacking and surveillance technology to governments with questionable human rights records. WhatsApp’s lawsuit against NSO Group in a federal court in California also specifically alludes to NSO Group’s clients, “which include but are not limited to government agencies in the Kingdom of Bahrain, the United Arab Emirates, and Mexico as well as private entities.” The complaint filed by WhatsApp against NSO Group can be accessed here.

In this context, we examine the shortcomings of international efforts in limiting or regulating the transfers or sale of advanced and sophisticated technology to governments that often use it to violate human rights, as well as highlight the often complex and blurred lines between the military and civil use of these technologies by the government.

The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (WA) exists for this precise reason. Established in 1996 and voluntary and non-binding in nature[i], its stated mission is “to contribute to regional and international security and stability, by promoting transparency and greater responsibility in transfers of conventional arms and dual-use goods and technologies, thus preventing destabilizing accumulations.”[ii] Military advancements across the globe, significant among which were the Indian and Pakistani nuclear tests, rocket tests by India and South Korea and the use of chemical warfare during the Iran-Iraq war, were all catalysts in the formulation of this multilateral attempt to regulate the transfer of advanced technologies capable of being weaponized.[iii] With more and more incidents coming to light of authoritarian regimes utilizing advanced western technology to violate human rights, the WA was amended to bring “intrusion software” and “IP network surveillance systems” within its ambit as well.

Wassenaar: A General Outline

With a current membership of 42 countries (India being the latest to join, in late 2017), the WA is the successor to the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), which had been established by the Western Bloc to prevent exports of weapons and technology to the Eastern Bloc led by the Soviet Union.[iv] However, unlike its predecessor, the WA does not target any nation-state, and its members cannot exercise any veto power over other members’ export decisions.[v] Notably, while Russia is a member, Israel and China are not.

The WA lists the controlled technologies in the form of “Control Lists”, primarily consisting of the “List of Dual-Use Goods and Technologies”, or the Basic List, and the “Munitions List”.[vi] The term “dual-use technology” typically refers to technology that can be used for both civilian and military purposes.[vii] The Basic List consists of ten categories[viii]:

  • Special Materials and Related Equipment (Category 1); 
  • Materials Processing (Category 2); 
  • Electronics (Category 3); 
  • Computers (Category 4); 
  • Telecommunications (Category 5, Part 1); 
  • Information Security (Category 5, Part 2); 
  • Sensors and Lasers (Category 6); 
  • Navigation and Avionics (Category 7); 
  • Marine (Category 8); 
  • Aerospace and Propulsion (Category 9). 

The Basic List also contains the Sensitive and Very Sensitive Lists, which include technologies covering radiation, submarine technology, advanced radar, etc.

An outline of the WA’s principles is provided in its Guidelines & Procedures, including the Initial Elements. Typically, participating countries enforce controls on the transfer of the listed items by enacting domestic legislation requiring licenses for their export, and are also expected to ensure that the exports “do not contribute to the development or enhancement of military capabilities which undermine these goals, and are not diverted to support such capabilities.”[ix]

While the Guidelines & Procedures document does not expressly proscribe the export of the specified items to non-WA countries, members are expected to notify other participants twice a year if a license under the Dual List is denied for export to any non-WA country.[x]

Amid concerns over violations of civil liberties

Unlike conventional weapons, cyberspace and information technology are areas where the government does not yet have a monopoly on expertise. In what can only be termed a “cyber-arms race”, it would be fair to say that most governments are even now busily acquiring technology from private companies to enhance their cyber-capacity, including surveillance technology for intelligence-gathering efforts. This, by itself, is plain realpolitik.

However, amid this weaponization of cyberspace, there were growing concerns that this technology was being purchased by authoritarian or repressive governments for use against their citizens. For instance, Eagle, monitoring technology owned by Amesys (a unit of the French firm Bull SA), Boeing Co.’s internet-filtering Narus, and China’s ZTE Corp. all contributed to the surveillance efforts of Col. Gaddafi’s regime in Libya. Surveillance equipment sold by Siemens AG and maintained by Nokia Siemens Networks was used against human rights activists in Bahrain. These instances, as part of a wider pattern that came to the spotlight, galvanized the WA countries in 2013 to include “intrusion software” and “IP network surveillance systems” in the Control Lists in an attempt to limit the transfer of these technologies to known repressive regimes.

Unexpected Consequences

The 2013 Amendment to the Control Lists was the subject of severe criticism by tech companies and civil society groups across the board. While the intention behind it was recognized as laudable, the terms “intrusion software” and “IP network surveillance system” were widely viewed as over-broad and having the unintended consequence of looping in both legitimate as well as illegitimate use of technology. The problems pointed out by cybersecurity experts are manifold and are a result of a misunderstanding of how cybersecurity works.

The inclusion of these terms, which was meant to regulate surveillance based on computer code, also has the consequence of bringing within its ambit legitimate and often beneficial uses of these technologies, including, on one view, even antivirus technology. Cybersecurity research and development often involves making use of “zero-day exploits”, i.e., undisclosed vulnerabilities in software, which, when discovered and reported by a “bounty hunter”, are typically bought by the company owning the software. This helps the company immediately develop a “patch” for the reported vulnerability. These transactions are often necessarily cross-border. Experts complained that if directly transposed into domestic law, the changes would have a chilling effect on the vital exchange of information and research in this area, hampering advances in cybersecurity and making cyberspace globally less safe. A prime example is Hewlett-Packard’s (HP) withdrawal from Pwn2Own, a computer hacking contest held annually at the PacSec security conference where contestants are challenged to hack into and exploit vulnerabilities in widely used software. HP, which sponsored the event, was forced to withdraw in 2015, citing among other reasons the “complexity in obtaining real-time import/export licenses in countries that participate in the Wassenaar Arrangement”. The member nation in this case was Japan.

After facing fierce opposition at home, the United States decided not to implement the WA amendment and instead argued for a reversal at the next Plenary session of the WA, which failed. Other jurisdictions, including the EU and Japan, have implemented the WA amendment’s export controls with varying degrees of success.

The Pegasus Hack, India and the Wassenaar

Considering that many of the Indians identified as victims of the Pegasus hack were journalists or human rights activists, several of them associated with the highly contentious Bhima-Koregaon case, speculation is rife that the Indian government is among those purchasing and utilizing this kind of advanced surveillance technology to spy on its own citizens. Taken together with NSO Group’s public statement that its “sole purpose” is to “provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime”, there appear to be credible allegations that the Indian government was involved in the hack. The government’s evasiveness in responding, and its insistence that so-called “standard operating procedures” were followed, are less than reassuring.

India’s entry into the WA as its 42nd member in December 2017 certainly elevated its status in the international arms control regime by granting it access to three of the world’s four main arms-control regimes (the others being the Nuclear Suppliers Group (NSG), the Missile Technology Control Regime (MTCR) and the Australia Group). But the Pegasus incident and its apparent connection to the Indian government cast doubt on India’s commitment to the principles underlying the WA. The purpose of including “intrusion software” and “IP network surveillance systems” in the WA’s Control Lists by way of the 2013 Amendment, whatever their unintended consequences for legitimate uses of such technology, was to prevent governmental purchases exactly like this one. Hence, even though the WA does not prohibit the purchase of surveillance technology from a non-member, the Pegasus incident arguably still detracts seriously from India’s commitment to the WA, even if it is not an explicit violation.

Military Cyber-Capability vs. Law Enforcement Cyber-Capability

Given what we know so far, it appears that highly sophisticated surveillance technology has also come into the hands of local law enforcement agencies. Had it been disclosed that the Pegasus software was being utilized by a military wing against external enemies, by, say, even the newly created Defence Cyber Agency, it would probably have caused fewer ripples. In fact, it might even have come off as reassuring evidence of the country’s advanced cyber-capabilities. However, the idea of such advanced, sophisticated technology at the easy disposal of local law enforcement agencies is cause for worry. This is because while the domain of the military is traditionally external, the domain of law enforcement agencies is internal, i.e., the citizenry. There is tremendous scope for misuse by such authorities, including increased targeting of minorities. The recent incident of police officials in Hyderabad randomly collecting people’s biometric data, including fingerprints and photographs, only underscores this point. Abroad, there are already ongoing efforts to limit the use of surveillance technologies by local law enforcement such as the police.

The conflation of technology use by military and civil agencies is a problem created, in part at least, by the complex and often dual-use nature of technology. While dual-use technology is recognized by the WA, this is not a problem it is able to solve. As explained above, dual-use technology is technology that can be used for both civil and military purposes. The demands of realpolitik, the rise in cyber-terrorism and the manifold ways in which a nation’s security can be compromised in cyberspace necessitate that any government in today’s world increase and improve its cyber-military capacity by acquiring such technology. After all, a government that acquires surveillance technology undoubtedly increases the effectiveness of its intelligence gathering and, ergo, its security efforts. But at the same time, the government also acquires the power to spy on its own citizens, which can easily cascade into more targeted violations.

Governments must resist the impulse to turn such technology on their own citizens. In the Indian scenario, citizens have been granted a ring of protection by way of the Puttaswamy judgement, which explicitly recognizes their right to privacy as a fundamental right. Interception and surveillance by the government, while currently limited by laid-down protocols, are not regulated by any dedicated law. While there are calls for urgent legislation on the subject, few deal with the technology procurement processes involved. It has also now emerged that Chhattisgarh’s State Government has set up a panel to look into allegations that NSO officials met with the state police a few years ago. This raises questions of oversight in the relevant authorities’ public procurement processes, apart from their legal authority to actually carry out domestic surveillance by exploiting zero-day vulnerabilities. It is becoming evident that any law dealing with surveillance will need to ensure transparency and accountability in the procurement and use of the different kinds of invasive technology adopted by Central or State authorities to carry out such surveillance.


Fork in the Road? UN General Assembly passes Russia-backed Resolution to fight Cybercrime

By Sharngan Aravindakshan

On 19 November 2019, the Third Committee of the United Nations General Assembly passed a Russia-backed resolution calling for the establishment of an ad hoc intergovernmental committee of experts “to elaborate a comprehensive international convention countering the use of information and communications technologies for criminal purposes” (A/C.3/74/L.11/Rev.1). China, Iran, Myanmar, North Korea and Syria were among the resolution’s other sponsors. Notably, countries such as Russia, China and North Korea are all proponents of the internet-restrictive “cyber-sovereignty” model, as opposed to the free, open and global internet advocated by the Western bloc. Equally notably, India voted in favour of the resolution. The draft resolution, which was passed by a majority of 88-58 with 34 abstentions, can be accessed here.

The resolution was strongly opposed by most of the Western bloc, with the United States leading the fight against what it believes is a divisive attempt by Russia and China to create UN norms and standards permitting unrestricted state control of the internet. This is the second successful attempt by Russia and China, traditionally seen as outliers in cyberspace for their authoritarian internet regimes, to counter cybernorm leadership by the West. The resolution, to the extent it calls for an open-ended ad hoc intergovernmental committee of experts “to elaborate a comprehensive international convention” on cybercrime, also appears to be a Russian proposal for an alternative to the Council of Europe’s Budapest Convention.

Similarly, last year, Russia and China successfully pushed for and established the Open-Ended Working Group (OEWG), also under the aegis of the United Nations, as an alternative to the US-led UN Group of Governmental Experts (GGE) for making norms of responsible state behaviour in cyberspace. Hence, we now have two parallel UN-based processes working on essentially the same issues in cyberspace. Russia claims that the two processes are complementary, while others contend that the move was actually an attempt to delay consensus-building in cyberspace. In terms of outcome, scholars have noted the likelihood of either both processes succeeding or both failing, or what Dennis Broeders termed “Mutually Assured Diplomacy”.


The Russia-backed cyber-crime resolution, while innocuously worded, has been widely criticized by civil society groups for its vagueness and for potentially opening the door to widespread human rights violations. In an open letter to the UN General Assembly, various civil society and academic groups have expressed the worry that “it could lead to criminalizing ordinary online behaviour protected under human rights law” and assailed the resolution for the following reasons:

  • The resolution fails to define “use of information and communication technologies for criminal purposes.” It is not clear whether this is meant to cover cyber-dependent crimes (i.e. crimes that can only be committed using ICTs, like breaking into computer systems or DDoS attacks) or cyber-enabled crimes (i.e. using ICTs to assist in committing “offline” crimes, like child sexual exploitation). The text’s broad wording could cover most crimes, and this lack of specificity opens the door to criminalising even ordinary online behaviour;
  • The single reference to human rights in the resolution, i.e., “Reaffirming the importance of respect for human rights and fundamental freedoms”, is not strong enough to counter the growing trend among countries of using cybercrime legislation to violate human rights, nor does it recognize any positive obligation on states to protect human rights;
  • It is essentially a move to negotiate a new cybercrime convention or treaty, which would duplicate existing efforts. The Council of Europe’s Budapest Convention has already been ratified by 64 countries, and other significant international efforts to combat cybercrime are underway, including the UN Office on Drugs and Crime’s work on issues such as the challenges national laws face in combating cybercrime (the Cybercrime Repository) and the Open-Ended Intergovernmental Expert Group Meeting on Cybercrime, which is due to release its findings in 2021.

Wolves in the hen-house?

Russia’s record on human rights protection in the use of information and communications technology has been controversial. Conspicuously, this resolution comes just a few months after Russia passed its “sovereign internet” law, which grants the Kremlin the power to completely cut off the Russian internet from the rest of the world. According to Human Rights Watch, the law obliges internet service providers to install special equipment that can track, filter, and reroute internet traffic, allowing the Russian government to spy, censor and independently block access to internet content, ranging from a single message all the way to cutting Russia off from the global internet or shutting down the internet within Russia. While some experts have doubted the technical feasibility of isolating the Russian internet no matter what the government wants, the law came into force on 1 November 2019, and Russia certainly seems intent on trying.

Apart from this, there have been credible claims attributing various cyberattacks to Russia, including the 2007 attacks on Estonia, the 2008 attacks on Georgia and the more recent hacking of the Democratic National Committee (DNC) in the US. In a rare instance of collective public attribution, the US, the UK and the Netherlands called out Russia for 2018 cyberattacks targeting the Organization for the Prohibition of Chemical Weapons’ (OPCW) investigation into the chemical attack on a former Russian spy in the UK, as well as anti-doping organizations.

China, another sponsor of the resolution, is not far behind. According to the RAND Corporation, the largest number of cyber incidents, including cyber theft, from 2005 to 2017 was attributed to China. China’s Great Firewall is also notorious for enabling internet censorship in the country. A Russia-China led effort in international cybernorm making is therefore widely feared as portending stricter state control over the internet, leading to more restrictions on civil liberties.

However, as its vote in favour of this resolution shows, India, itself a victim of growing cyberattacks and a country whose current public stance is against “data monopoly” by the West, will need a lot more convincing by the Western bloc to bring it over to the “free, open and global” internet camp. An analysis of the voting pattern for last year’s UNGA resolution on countering the use of ICTs for criminal purposes, and what it means for international cyber norm-making, can be accessed here.

Fractured Norm-making

This latest development only further splinters the already fractured global norm-making process in cyberspace. The United States is also negotiating separate bilateral cyberspace treaties with “like-minded nations” to advance its “cyber freedom” doctrine, while China similarly advances its own “cyber-sovereignty” doctrine alongside Russia.

Add to this mix private-sector efforts like Microsoft’s Cybersecurity Tech Accord (2018) and the Paris Call for Trust and Security in Cyberspace (2018), and it becomes clear that a unified multilateral approach to cybernorm making now seems extremely difficult, if not impossible. With each initiative paving its own way, it remains to be seen whether these roads all lead to cyberspace stability.