Cyberspace and International Law: Taking Stock of Ongoing Discussions at the OEWG

This post is authored by Sharngan Aravindakshan

Introduction

The second round of informal meetings in the Open-Ended Working Group on the Use of ICTs in the Context of International Security is scheduled to be held from today (29 September) until 1 October, with international law as the agenda.

At the end of the OEWG’s second substantive session in February 2020, the Chairperson of the OEWG released an “initial pre-draft” (Initial Pre-Draft) of the OEWG’s report, for stakeholder discussions and comments. The Initial Pre-Draft covers a number of issues on cyberspace, and is divided into the following:

  1. Section A (Introduction);
  2. Section B (Existing and Potential Threats);
  3. Section C (International Law);
  4. Section D (Rules, Norms and Principles for Responsible State Behaviour);
  5. Section E (Confidence-building Measures);
  6. Section F (Capacity-building);
  7. Section G (Regular Institutional Dialogue); and
  8. Section H (Conclusions and Recommendations).

In accordance with the agenda for the coming informal meeting of the OEWG, this post is a brief recap of this cyber norm-making process, with a focus on Section C, i.e., the international law section of the Initial Pre-Draft, and States’ comments on it.

What does the OEWG Initial Pre-Draft Say About International Law?

Section C of the Initial Pre-Draft begins with a chapeau stating that existing obligations under international law, in particular the Charter of the United Nations, are applicable to State use of ICTs. The chapeau goes on to state that “furthering shared understandings among States” on how international law applies to the use of ICTs is fundamental for international security and stability. According to the chapeau, exchanging views on the issue among States can foster this shared understanding.

The body of Section C records that States affirmed that international law, including the UN Charter, is applicable to the ICT environment. It particularly notes that principles of the UN Charter such as sovereign equality, non-intervention in the internal affairs of States, the prohibition on the threat or use of force, and human rights and fundamental freedoms apply to cyberspace. It also notes that specific bodies of international law, such as international humanitarian law (IHL), international human rights law (IHRL) and international criminal law (ICL), are applicable as well. Section C also records that “States underscored that international humanitarian law neither encourages militarization nor legitimizes conflict in any domain”, without mentioning which States did so.

Significantly, Section C of the Initial Pre-Draft also notes that a view was expressed in the discussions that “existing international law, complemented by the voluntary, non-binding norms that reflect consensus among States” is “currently sufficient for addressing State use of ICTs”. According to this view, it only remains for a “common understanding” to be reached on how the already agreed normative framework could apply and be operationalized. At the same time, the counter-view expressed by some other States is also noted in Section C, that “there may be a need to adapt existing international law or develop a new instrument to address the unique characteristics of ICTs.”

This view arises from confusion or a lack of clarity on how existing international law could apply to cyberspace, and includes but is not limited to questions on the thresholds for the use of force, armed attack and self-defence, as well as the question of the applicability of international humanitarian law to cyberspace. Section C goes on to note that, in this context, proposals were made for the development of a legally binding instrument on the use of ICTs by States. Again, the States are not mentioned by name. Additionally, Section C notes a third view, which proposed a “politically binding commitment with regular meetings and voluntary State reporting”. This was proposed as a middle ground between the first view that existing international law is sufficient and the second view that new rules of international law are required in the form of a legally binding treaty. Developing a “common approach to attribution at the technical level” was also discussed as a way of ensuring greater accountability and transparency.

With respect to the international law portion, the Initial Pre-Draft proposed recommendations including the creation of a global repository of State practice and national views in the application of international law as well as requesting the International Law Commission to undertake a study of national views and practice on how international law applies in the use of ICTs by States.

What did States have to say about Section C of the Initial Pre-Draft?

In his letter dated 11 March 2020, the Chairperson opened the Initial Pre-Draft for comments from States and other stakeholders. A total of 42 countries have submitted comments, excluding the European Union (EU) and the Non-Aligned Movement (NAM), both of which have submitted comments separately from their member States. The various submissions can be found here. Not all States’ submissions have comments specific to Section C, the international law portion, but it is nevertheless worthwhile examining the submissions of those States that do. India had also submitted comments, which were earlier accessible here; however, these are no longer available on the OEWG website and appear to have been taken down.

International Law and Cyberspace

Let’s start with what States have said in answer to the basic question of whether existing international law applies to cyberspace and, if so, whether it is sufficient to regulate State use of ICTs. A majority of States have answered in the affirmative, and this list includes the Western Bloc led by the US, including Canada, France, Germany, Austria, the Czech Republic, Denmark, Estonia, Ireland, Liechtenstein, the Netherlands, Norway, Sweden, Switzerland, Italy and the United Kingdom, as well as Australia, New Zealand, Japan, South Korea, Colombia, South Africa, Mexico and Uruguay. While Singapore has affirmed that international law, in particular the UN Charter, applies to cyberspace, it is silent on whether international law in its current form is sufficient to regulate State action in cyberspace.

Several States, however, are of the clear view that international law as it exists is insufficient to regulate cyberspace or cannot be directly applied to cyberspace. These States have identified a “legal vacuum” in international law vis-à-vis cyberspace and call for new rules in the form of a binding treaty. This list includes China, Cuba, Iran, Nicaragua, Russia and Zimbabwe. Indonesia, in its turn, has stated that “automatic application” of existing law without examining the context and unique nature of activities in cyberspace should be avoided since “practical adjustment and possible new interpretations are needed”, and the “gap of the ungoverned issues in cyberspace” also needs to be addressed.

NAM has stated that the UN Charter applies, but has also noted the need to “identify possible gaps” that can be addressed through “furthering the development of international rules”. India’s earlier uploaded statement had expressed the view that although the applicability of international law had been agreed to, there are “differences in the structure and functioning of cyberspace, including complicated jurisdictional issues” and that “gaps in the existing international laws in their applicability to cyberspace” need examining. This statement also spoke of “workable modifications to existing laws and exploring the needs of, if any, new laws”.

Venezuela has stated that “the use of ICTs must be fully consistent with the purposes and principles of the UN Charter and international law”, but has also stated that “it is necessary to clarify that International Public Law cannot be directly applicable to cyberspace”, leaving its exact views on the subject unclear.

International Humanitarian Law and Cyberspace

The Initial Pre-Draft’s view on the applicability of IHL to cyberspace has also become a point of contention for States. States supporting its applicability include Brazil, Czech Republic, Denmark, Estonia, France, Germany, Ireland, Netherlands, Switzerland, the United Kingdom and Uruguay. India is among the supporters. Some among these like Estonia, Germany and Switzerland have called for the specific principles of humanity, proportionality, necessity and distinction to be included in the report.

States including China, Cuba, Nicaragua, Russia, Venezuela and Zimbabwe are against applying IHL, with their primary reason being that it will promote “militarization” of cyberspace and “legitimize” conflict. According to China, we should be “extremely cautious against any attempt to introduce use of force in any form into cyberspace,… and refrain from sending wrong messages to the world.” Russia has acerbically stated that to say that IHL can apply “to the ICT environment in peacetime” is “illogical and contradictory” since “IHL is only applied in the context of a military conflict while currently the ICTs do not fit the definition of a weapon”.

States’ comments offer little detail on these questions beyond whether specific principles – including sovereignty, non-intervention, the threat or use of force, armed attack and the inherent right of self-defence – apply to cyberspace. Zimbabwe has mentioned in its submission that these principles do apply, as has NAM. Cuba, as it did in the 2017 GGE, has taken the stand that the inherent right to self-defence under Article 51 of the UN Charter cannot be automatically applied to cyberspace, and has also stated that it cannot be invoked to justify a State responding with conventional attacks. The US has likewise repeated the view it expressed in the 2017 GGE: if States’ obligations, such as refraining from the threat or use of force, are to be mentioned in the report, it should also contain States’ rights, namely the inherent right to self-defence under Article 51.

Austria has categorically stated that the violation of sovereignty is an internationally wrongful act if attributable to a State. But other States’ comments are broader and do not address the issue of sovereignty at this level. Consider Indonesia’s comments, for instance, where it has simply stated that it “underlines the importance of the principle of sovereignty” and that the report should as well. For India’s part, its earlier uploaded statement approached the issue of sovereignty from a different angle. It stated that the “territorial jurisdiction and sovereignty are losing its relevance in contemporary cyberspace discourse” and went on to recommend a “new form of sovereignty which would be based on ownership of data, i.e., the ownership of the data would be that of the person who has created it and the territorial jurisdiction of a country would be on the data which is owned by its citizens irrespective of the place where the data physically is located”. On the face of it, this comment appears to relate more to the conflict of laws with respect to the transborder nature of data rather than any principle of international law.

The Initial Pre-Draft’s mention of the need for a “common approach” to attribution also drew sharp criticism. France, Germany, Italy, Nicaragua, Russia, Switzerland and the United Kingdom have all expressed the view that attribution is a “national” or “sovereign” prerogative and should be left to each State. Iran has stated that addressing a common approach to attribution is premature in the absence of a treaty. Meanwhile, Brazil, China and Norway have supported working towards a common approach. This issue has notably seen something of a re-alignment of otherwise divided State groupings.

International Human Rights Law and Cyberspace

States’ comments on Section C also pertain to its language on IHRL with respect to ICT use. Austria, France, the Netherlands, Sweden and Switzerland have called for greater emphasis on human rights and their applicability in cyberspace, especially in the context of privacy and the freedoms of expression, association and information. France has also included the “issues of protection of personal data” in this context. Switzerland has, interestingly, linked cybersecurity and human rights as “complementary, mutually reinforcing and interdependent”. Ireland and Uruguay’s comments also specify that IHRL applies.

On the other hand, Russia’s comments make it clear that it believes there is an “overemphasis” on human rights law, which it considers not “directly related” to international peace and security. Surprisingly, the UK has stated that issues concerning data protection and internet governance are beyond the OEWG’s mandate, while the US comments are silent on the issue. While not directly referring to international human rights law, India’s comments had also mentioned that its concept of data-ownership-based sovereignty would reaffirm the “universality of the right to privacy”.

Role of the International Law Commission

The Initial Pre-Draft also recommended requesting the International Law Commission (through the General Assembly) to “undertake a study of national views and practice on how international law applies in the use of ICTs by States”. A majority of States, including Canada, Denmark, Japan, the Netherlands, Russia, Switzerland, the United Kingdom and the United States, have clearly expressed that they are against referring the issue to the ILC, as it would be premature at this stage and also contrary to the General Assembly resolutions referring the issue to the OEWG and the GGE.

With respect to the Initial Pre-Draft’s recommendation for a repository of State practices on the application of international law to State-use of ICTs, support is found in comments submitted by Ireland, Italy, Japan, South Korea, Singapore, South Africa, Sweden and Thailand. While Japan, South Africa and India (comments taken down) have qualified their views by stating these contributions should be voluntary, the EU has sought clarification on the modalities of contributing to the repository so as to avoid duplication of efforts.

Other Notable Comments

Aside from the above, States have raised certain other points of interest that may be relevant to the ongoing discussion on international law. The Czech Republic and France have both drawn attention to the due diligence norm in cyberspace and pointed out that it needs greater focus and elaboration in the report.

In its comments, Colombia has rightly pointed out that discussions should centre on “national views” as opposed to “State practice”, since it is difficult for State practice to develop when “some States are still developing national positions”. This accurately highlights a significant problem in cyberspace, namely the scarcity of State practice on account of a lack of clarity in national positions. It holds true for most developing nations, including but not limited to India.

On a separate issue, the UK has made an interesting, but implausible, proposal: that “States acknowledge military capabilities at an organizational level as well as provide general information on the legal and oversight regimes under which they operate”. Although the proposal has its benefits, such as reducing information asymmetries in cyberspace, it is highly unlikely that States will accept an obligation to disclose or acknowledge military capabilities, let alone provide any information on the “legal and oversight regimes under which they operate”. This information speaks to a State’s military strength in cyberspace, and while a State may comment on the legality of offensive cyber capabilities in the abstract, realpolitik makes it unlikely that it will divulge information on its own capabilities. It is worth noting here that the UK has acknowledged having offensive cyber capabilities in its National Cyber Security Strategy 2016 to 2021.

What does the Revised Pre-Draft Say About International Law?

The OEWG Chair, by a letter dated 27 May 2020, notified member States of the revised version of the Initial Pre-Draft (Revised Pre-Draft). He clarified that the “Recommendations” portion had been left unchanged. On perusal, it appears Section C of the Revised Pre-Draft is almost entirely unchanged as well, barring the correction of a few typographical errors. This is perhaps not surprising, given the OEWG Chair made it clear in his letter that he still expected “guidance from Member States for further revisions to the draft”.

CCG will track States’ comments to the Revised Pre-Draft as well, as and when they are submitted by member States.

International Law and Cyberspace: Three Different Conversations

With the establishment of the OEWG, the UN GGE was no longer the only multilateral conversation on cyberspace and international law among States at the UN. Of course, both the OEWG and the GGE are about more than just the questions of whether and how international law applies in cyberspace – they also deal with equally important, related issues of capacity-building, confidence-building measures and so on. But their work on international law is still extremely significant, since they offer platforms for States to express their views on international law and reach consensus on contentious issues in cyberspace. Together, these two forums form two important streams of conversation between States on international law in cyberspace.

At the same time, States are also separately articulating and releasing their own positions on international law and how it applies to cyberspace. Australia, France, Germany, Iran, the Netherlands, the United Kingdom and the United States have all indicated their own views on how international law applies to cyberspace, independent of both the GGE and the OEWG, with Iran being the latest State to do so. To the extent they engage with each other by converging and diverging on some issues such as sovereignty in cyberspace, they form the third conversation among States on international law. Notably, India has not yet joined this conversation.

It is increasingly becoming clear that this third conversation is taking place at a particular level of granularity not seen so far in the OEWG or the GGE. For instance, the raging debate on whether sovereignty in international law in cyberspace is a rule entailing consequences for violation, or merely a principle that only gives rise to binding rules such as the prohibitions on the use of force or intervention, has so far been restricted to this third conversation. In contrast, States’ comments on the OEWG’s Initial Pre-Draft indicate that discussions in the OEWG still centre on the broad question of whether and how international law applies to cyberspace. Only Austria mentioned in its comments to the Initial Pre-Draft that it believed sovereignty was a rule, the violation of which would be an internationally wrongful act. The same applies to the GGE: although it was able to deliver consensus reports on international law applying to cyberspace, it cannot claim to have dealt with these issues at a level of specificity beyond this.

This variance in the three conversations shows that some States are racing way ahead of others in their understanding of how international law applies to cyberspace, and these States are so far predominantly Western and developed, with the exception of Iran. Colombia’s comment to the OEWG’s Initial Pre-Draft is a timely reminder in this regard, that most States are still in the process of developing their national positions. The interplay between these three conversations around international law and cyberspace will be interesting to observe.

The Centre for Communication Governance’s comments to the Initial Pre-Draft can be accessed here.

On Cyber Weapons and Chimeras

This post has been authored by Gunjan Chawla and Vagisha Srivastava


“The first thing we do, let’s kill all the lawyers,” says Shakespeare’s Dick the Butcher to Jack Cade, who leads fellow conspirators in the popular rebellion against Henry VI.

The same cliché may as well have been the opening line of Pukhraj Singh’s response to our last piece, which joins his earlier pieces heavily burdened with thinly veiled disdain for lawyers poking their noses into cyber operations. In his eagerness to establish code as law, he omits not only the universal professional courtesy of getting our names right, but also a basic background check on authors he so fervently critiques – only one of whom is in fact a lawyer and the other, an early career technologist.

In this final piece in our series on offensive cyber capabilities, we take exception to Singh’s misrepresentation of our work and hope to redirect the conversation back to the question raised by our first piece: what is the difference, if any, between ‘cyber weapons’ and offensive cyber capabilities? Our readers may recall from our first piece in the series, ‘Does India have offensive cyber capabilities?’, that Lt Gen Pant had, in an interview to Medianama, denied any intent on the part of the Government of India to procure ‘cyber weapons’. However, certain amendments inserted in export control regulations by the DGFT suggested the presence of offensive cyber capabilities in India’s cyber ecosystem. Quoting Thomas Rid from Cyber War Will Not Take Place:

“these conceptual considerations are not introduced here as a scholarly gimmick. Indeed theory shouldn’t be left to scholars; theory needs to become personal knowledge, conceptual tools used to comprehend conflict, to prevail in it, or to prevent it.”

While lawyers and strategists working in the cyber policy domain admittedly still have a lot to learn from those with personal knowledge of the conduct of hostilities in cyberspace, deftly obscured as it is by a labyrinth of regulations and rapidly changing rules of engagement, the question of nomenclature remains an important one. The primary reason is that the taxonomy of cyber operations has significant implications for the obligations incumbent on States and State actors under international as well as domestic law.

A chimeral critique

Singh’s most seriously mounted objection in his piece is to our assertion that ‘cyber capabilities’ and ‘cyber operations’ are not synonymous, just as ‘arms’ and ‘armed attack’, or ‘weapons’ and ‘war’ are distinct concepts. However, a wilful misunderstanding of our assertion that cyber capabilities and cyber operations are not interchangeable terms does not foster any deeper understanding of the legal or technical ingredients of a ‘cyber operation’–irrespective of whether it is offensive, defensive or exploitative in intent and design.

The central idea remains that a capability is wielded with the intent of causing a particular effect (which may or may not be identical to the actual effect resulting from the cyber operation). A recent report by the Belfer Center at Harvard on a ‘National Cyber Power Index’, which views a nation’s cyber power as a function of its intent and capability, also seems to support this position, as sketched below. Certainly, the criteria and methodology of assessment remain open to debate and critique from academics as well as practitioners, and this debate needs to inform our legal position and strategic posture (again, the two are not synonymous) as to the legality of developing offensive cyber capabilities in international as well as domestic law.
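Purely by way of illustration, the toy sketch below shows one way an index of this kind might combine per-objective ‘intent’ and ‘capability’ scores into a single figure. The objectives, scores and aggregation rule are our own assumptions for the example, not the Belfer Center’s actual data or methodology.

```python
# Toy sketch of the "cyber power = f(intent, capability)" framing.
# Objectives, scores and the aggregation rule are illustrative assumptions,
# not the Belfer Center's actual methodology or data.
objectives = ["surveillance", "defence", "intelligence", "commerce"]

# Hypothetical per-objective scores on a 0-1 scale.
intent = {"surveillance": 0.9, "defence": 0.7, "intelligence": 0.8, "commerce": 0.5}
capability = {"surveillance": 0.6, "defence": 0.8, "intelligence": 0.7, "commerce": 0.4}

# One plausible aggregation: a State's power on each objective is the product
# of how strongly it pursues that objective and how able it is to achieve it.
index = sum(intent[o] * capability[o] for o in objectives) / len(objectives)
print(f"Illustrative cyber power index: {index:.2f}")
```

Whatever aggregation is chosen, the example makes the point in the text: capability alone does not equal power; it must be read together with intent.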

Second, in finding at least one of us guilty of a ‘failure of imagination’, Singh steadfastly advocates the view that cyber (intelligence) operators like himself are better off unbounded by legal restraint of their technical prowess, functioning in a Hobbesian (virtual) reality where code is law and technological might makes right. It is thus unsurprising that Singh in what is by his own admission a ‘never to be published manuscript’, seems to favour practices normalized by the United States’ military doctrine, regardless of their dubious legality.

Third, in criticizing lawyers’ use of analogical reasoning—which to Singh, has become ‘the bane of cyber policy’—he conveniently forgets that for those of us who were neither born in the darkness of covert cyber ops, nor moulded by it, analogies are a key tool to understand unfamiliar concepts by drawing upon learnings from more familiar concepts. Indeed, it has even been argued that analogy is the core of human cognition.

Navigating a Taxing Taxonomy

Writing in 2012 with Peter McBurney, Rid postulates that cyber weapons may span a wide spectrum, from generic but low-potential tools to specific, high-potential weaponry, and may be viewed as a subset of ‘weapons’. In treating cyberweaponry as a subset of conventional weaponry, their underlying assumption is that the (cyber) weapon is developed and/or deployed with ‘the aim of threatening or causing physical, functional or mental harm to structures, systems or living beings’. This also supports our assertion that intent is a key element in planning and launching a cyber operation, though not for the purposes of classifying a cyber operation as an ‘armed attack’ under international law. However, it is important to mention that Rid considers ‘cyber war’ an extremely problematic and dangerous concept, one that is far narrower than the concept of ‘cyber weapons’.

Singh laments that without distinguishing between cyber techniques and effects, we fall into ‘a quicksand of lexicon, taxonomies, hypotheses, assumptions and legalese’. He considers the OCOs/DCOs classification too ‘simplistic’ in comparison to the CNA/CND/CNE framework. Even if the technological underpinnings of cyber exploits (for intelligence gathering) and cyber attacks (for damage, disruption and denial) have not changed over the years, as Singh argues, the change in terminology and vocabulary cannot be attributed to ‘ideology’. This change is a function of a complete reorganization and restructuring of the American national security establishment to permit greater agility and freedom of action in the rules of hostile engagement by the military in cyberspace.

Unless the law treats cognitive or psychological effects of cyber operations (e.g., those depicted in The Social Dilemma or The Great Hack, or even the doxing of classified documents) as harm ‘comparable’ to physical damage or destruction, ‘cyber offence’ will not graduate to the status of a ‘cyber weapon’. For the time being, an erasure of the physical/psychological dichotomy appears extremely unlikely. If the Russian and Chinese playbook appears innovative in translating online activity into offline harm, it is because of an obvious conflation between a computer-systems-centric cyber security model and the state-centric information security model, which values guarding State secrets above all else and benefits from denying one’s adversary the luxury of secrecy in State affairs.

The changing legal framework and as a corollary, the plethora of terminologies employed around the conduct of cyber operations by the United States run parallel to the evolving relationship between its intelligence agencies and military institutions.

The US Cyber Command (CYBERCOM) was first created in 2008, but was incubated for a long time by the NSA under a peculiar arrangement established in 2009, whereby the head of the NSA was also the head of the US CYBERCOM, with a view to leveraging the vastly superior surveillance capabilities of the NSA at the time. This came to be known as a ‘dual-hat arrangement’, a moniker descriptive of the double role played by the same individual simultaneously heading an intelligence agency as well as a military command. Simply put, cyber infrastructure raised for the purposes of foreign surveillance and espionage was but a stepping stone to building cyber warfare capabilities. Through a presidential memorandum in 2017, President Trump directed the Secretary of Defense to establish the US Cyber Command as a Unified Combatant Command, elevating its status from a sub-unit of the US Strategic Command (STRATCOM).

An important aspect of the ‘restructuring’ we refer to is two Presidential directives, one from 2012 and another from 2018. In October 2012, President Obama signed Presidential Policy Directive 20 (PPD-20). It was classified as Top Secret at the time, but was revealed by Ellen Nakashima of the Washington Post a month later. The PPD defined US cyber policy, including terms such as ‘Offensive Cyber Effects Operations’ (OCEO) and ‘Defensive Cyber Effects Operations’ (DCEO), and mandated that all cyber operations be executed with explicit authorization from the President. In August 2018, Congress passed a military-authorization bill that delegated the authorization of some cyber operations to the Secretary of Defense. It is relevant that clandestine military activities or operations in cyberspace are now considered a ‘traditional military activity’ under this statute, bringing them under the DoD’s authority. National Security Presidential Memorandum 13 (NSPM-13) on offensive cyber operations, signed by President Trump around the same time, although not available in the public domain, has reportedly further eased the procedural requirements for Presidential approval of certain cyber operations.

Thus, if we overcome apprehensions about the alleged ‘quicksand of lexicon, taxonomies, hypotheses, assumptions and legalese,’ we can appreciate the crucial role played by these many terms in the formulation of clear operational directives. They serve an important role in the conduct of cyber operations by (1) delineating the chain of command for the conduct of military cyber operations for the purposes of domestic law and (2) bringing the conversation on cyber operations outside the don’t-ask-don’t-tell realm of ‘espionage’, enabling lawyers and strategists to opine on their legality and legitimacy, or lack thereof, as military operations for the purposes of international law – much to Singh’s apparent disappointment. To observers more closely acquainted with the US playbook on international law, the inverse is also true, where operational imperatives have necessitated a re-formulation of terms that may convey any sense of illegality or impropriety in military conduct (as opposed to the conduct of intelligence agencies, which is designed for ‘plausible deniability’ in case of an adverse outcome).

We relied on the latest (June 2020) version of JP 1-02 for the current definition of ‘offensive cyber operations’ in American warfighting doctrine. We can look to earlier versions of the DoD Dictionary to trace back the terms relevant to CNOs (including CNA, CNE and CND). This exercise makes it quite apparent that the contemporary terminologies and practices are all rooted in (covert) cyber intelligence operations, which (American) law and policy around cyberspace bends over backwards to accommodate and conceal. That leading scholars have recently sought to frame ‘cyber conflict as an intelligence contest’ further supports this position.

  • 2001 to 2007 – ‘cyber counterintelligence’ is the only relevant military activity in cyberspace (even though a National Military Strategy for Cyberspace Operations existed in 2006)
  • 2008 – US CYBERCOM created as a sub-unit of US STRATCOM
  • 2009 – Dual-hat arrangement between the NSA and CYBERCOM established
  • 2010 – US CYBERCOM achieves operational capability on May 21; CNA/CNE enter the DoD lexicon
  • 2012 – PPD-20 issued by President Obama
  • 2013 – JP 3-12 published as doctrinal guidance from the DoD to plan, execute and assess cyber operations
  • By 2016 – DoD Dictionary defines ‘cyberspace operations’, DCOs and OCOs (but not cyberspace exploitation), relying on JP 3-12
  • 2018 – NSPM-13 signed by President Trump
  • 2020 – ‘cyberspace attack’, ‘cyberspace capability’, ‘cyberspace defence’, ‘cyberspace exploitation’, ‘cyberspace operations’, ‘cyberspace security’ and ‘cybersecurity’, as well as OCOs/DCOs, are defined terms in the Dictionary

Even as JP 3-12 remains an important document from the standpoint of military operations, reliance on this document is inapposite, even irrelevant for the purposes of agencies responsible for cyber intelligence operations. In fact, JP 3-12 is also not helpful to explain the whys and hows of the evolution in the DoD vocabulary. This is a handy guide to decode the seemingly cryptic numbering of DoD’s Joint Publications.

Waging Cyber War without Cyber ‘Weapons’?

It is relevant to mention that none of the documents referenced above, including JP 3-12, make any mention of the term ‘cyber weapon’. A 2010 memorandum from the Chairman of the Joint Chiefs of Staff, however, clearly identifies CNAs as a form of ‘offensive fire’ – analogous to weapons that are ‘fired’ upon a commander’s order, as well as a key component of Information Operations.

The United States’ Department of Defense in its 2011 Defense Cyberspace Policy Report to Congress acknowledged that “the interconnected nature of cyberspace poses significant challenges for applying some of the legal frameworks developed for physical domains” and observed that “there is currently no international consensus regarding the definition of a cyber weapon”.

A plausible explanation as to why the US Government refrains from using the term ‘cyber weapons’ is found in this report: it highlights certain legal issues in transporting cyber ‘weapons’ across the Internet through infrastructure owned by and/or located in neutral third countries without obtaining the equivalent of ‘overflight rights’, and suggests ‘a principled application of existing norms to be developed along with partners and allies’. A resolution to this legal problem highlighted in the DoD’s report to Congress is visible in the omission of the term ‘cyber weapon’ from legal and policy frameworks altogether, replaced by ‘cyber capabilities’.

We can find the rationale for and implications of this pivot in Professor Michael Schmitt’s 2019 paper, wherein he argues, in the context of applicable international law and contrary to the position he espoused in the Tallinn Manual, that ‘cyber capabilities’ cannot meet the definition of a weapon or means of warfare, but that cyber operations may qualify as methods of warfare. This interpretation permits ‘cyber weapons’, in the garb of ‘cyber capabilities’, to circumvent at least three obligations under the Law of Armed Conflict/International Humanitarian Law.

First is the requirement for legal review of weapons under Article 36 of the First Additional Protocol to the Geneva Conventions (an issue Col. Gary Brown has also written about); second is the obligation to take precautions in attack. Third, and most important, the argument that cyber weapons cannot be classified as munitions also has the consequence of depriving neutral States of their sovereign right to refuse permission for the transportation of weapons (or in this case, the transmission of weaponised cyber capabilities) through their territory (assuming that this is technically possible).

So, in a sense, if we do not treat offensive cyber capabilities, or ‘cyber weapons’, as analogous in international law to the conventional weapons normally associated with armed hostilities, we in effect also restrain the ability of other sovereign States under international law to prevent and prohibit a weaponization of cyberspace, without their consent, for the military purposes of other cyber powers. Col. Gary Brown, for whose work Singh seems to nurture a deep admiration, admits that the first ‘cyber operation’ was conducted by the United States against the Soviet Union in 1982, causing a trans-Siberian pipeline to explode by use of malware implanted in Canadian software acquired by Soviet agents. Since 1982, the US seems to have functioned in single-player mode until Russia’s DDoS attacks on Estonia in 2007, or at the very least, until MOONLIGHT MAZE was uncovered in 1998. For those not inclined to read, Col. Brown makes a fascinating appearance alongside former CIA director Michael Hayden in Alex Gibney’s 2016 documentary ‘Zero Days’, which delves into Stuxnet – an obvious cyber weapon by any standard, which the US ‘plausibly denied’ until 2012.

Turning back to domestic law, the nomenclature is also significant from a public finance perspective. As anecdotal evidence, we can refer to this 2013 Reuters report, which suggests that the US Air Force designated certain cyber capabilities as ‘weapons’ with a view to secure funding from Congress.

From the standpoint of managing public perceptions too, it is apparent that the positive connotations associated with ‘developing cyber capabilities’ make the same activity far more palatable, even development-oriented, in the eyes of the general public, as opposed to the inherent negativity associated with, say, the ‘proliferation of cyber weapons’.

Additionally, the legal framework is also important to delineate the geographical scope of the legal authority (or its personal jurisdiction, if you will) vested in the military, as opposed to intelligence agencies, to conduct cyber operations. For organizational purposes, the role of intelligence would (in theory) be limited to CNE, whereas CNA and CND would be vested in the military. We know from (Pukhraj’s) experience that this distinction is nearly impossible to make in practice, at least until after the fact. This overlap of what are, arguably, artificially created categories of cyber operations raises urgent questions about the scope and extent of authority the law can legitimately vest in our intelligence agencies, over and above the implicit authority of the armed forces to operate in the cyber domain.

Norm Making by Norm Breaking

In addition to understanding who wields offensive cyber capabilities and under what circumstances, it is also important for the law to specify where, or against whom, they may be used. Although the militaries of modern-day ‘civilized’ nations are rarely deployed domestically, there has been some recent concern over whether the US CYBERCOM could be deployed against American citizens in light of recent protests, just as special forces were. While the CIA has legal authority to operate exclusively beyond the United States, the NSA is not burdened by such constraints and is authorized to operate domestically. Thus, the governance and institutional choices before a State looking to ‘acquire cyber weapons’ or ‘develop (offensive) cyber capabilities’ range from bad to worse. One might either (1) permit its intelligence agencies to engage in activities that resemble warfighting more than they resemble intelligence gathering, and risk unintentional escalation internationally, or (2) permit its military to engage in intelligence collection domestically, potentially against its own citizens, and risk ubiquitous militarization of and surveillance in its domestic cyberspace.

Even as many celebrate the recent Federal court verdict that the mass surveillance programmes of the NSA revealed by Edward Snowden were illegal and unconstitutional, let us not forget that this illegality was found vis-à-vis the use of the programme against American citizens only – not the foreign surveillance programmes and cyber operations conducted beyond American soil against foreign nationals. Turning to an international law analysis, it is the US’ refusal to recognize State sovereignty as a binding rule of international law that enables the operationalization of international surveillance and espionage networks and the transmission of weaponized cyber capabilities that routinely violate not only the sovereignty of States, but also the privacy and dignity of targeted individuals (the United States does not accept the extra-territorial applicability of the ICCPR).

The noms de guerre of these transgressions in American doctrine are now ‘persistent engagement’ and ‘defend forward’, popularized most recently by the Cyberspace Solarium Commission – cleverly crafted terms that bring about no technical change in the modus operandi, but disguise aggressive cyber intrusions across national borders as ostensible self-defence.

It is also relevant that this particular problem finds clear mention in the Chinese Foreign Minister’s recent statement on China’s formulation of digital security rules. Yet it is not a practice from which either the US or China plans to desist. Recent revelations by the Indian Express about the Chinese firm Zhenhua Data Information Technology Co. have only served to confirm the expansive, and expanding, cyber intelligence network of the Chinese state.

These practices of extraterritorial surveillance, condemnable as they may be, have nonetheless shaped the international legal order we find ourselves in today – a testimony to the paradoxical dynamism of international law, not unlike the process of ‘creative destruction’ of cyberspace highlighted by Singh – where a transgression of the norm (by either cyber power) may one day itself become a norm. What this norm is, or should be, remains open to interpretation, so let’s not rush to kill all the lawyers – not just yet anyway.

The Proliferating Eyes of Argus: State Use of Facial Recognition Technology


This post has been authored by Sangh Rakshita

In Greek mythology, Argus Panoptes was a many-eyed, all-seeing and ever-wakeful giant, whose image has come to depict excessive scrutiny and surveillance. Jeremy Bentham drew on this reference when he designed the panopticon prison, where prisoners would be monitored without their knowledge. Later, Michel Foucault used the panopticon to elaborate the social theory of panopticism, in which the watcher ceases to be external to the watched, resulting in internal surveillance or a ‘chilling’ effect. This idea of “panopticism” has gained renewed relevance in the age of digital surveillance.

Amongst the many cutting-edge surveillance technologies being adopted globally, ‘Facial Recognition Technology’ (FRT) is one of the most rapidly deployed. Its augmentation, ‘Live Facial Recognition Technology’ (LFRT) or ‘Real-time Facial Recognition Technology’, has become increasingly effective in the past few years. Improvements in computational power and algorithms have enabled cameras placed at odd angles to detect faces even in motion. This post explores the issues with the increasing State use of FRT around the world and the legal framework surrounding it.

What do FRT and LFRT mean?

FRT refers to the use of algorithms to uniquely detect, recognise, or verify a person from recorded images, sketches, or videos that contain their face. The data about a particular face is generally known as the face template. This template is a mathematical representation of a person’s face, created by algorithms that mark and map distinct features on the captured image, such as the location of the eyes or the length of the nose. These face templates form the biometric database against which new images, sketches, videos, etc. are compared to verify or recognise the identity of a person. As opposed to FRT, which is applied to pre-recorded images and videos, LFRT involves real-time automated facial recognition of all individuals in the camera field’s vision. It involves biometric processing of images of all passers-by, using an existing database of images as a reference.
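To make the matching step concrete, here is a minimal sketch of how identification against a database of face templates might work. The 128-dimensional vectors, the cosine-similarity measure and the 0.8 threshold are illustrative assumptions; real systems derive templates from trained neural networks and tune thresholds empirically.

```python
# Minimal sketch of FRT matching against a database of face templates.
# Templates here are random stand-ins for embeddings that a real system
# would derive from a trained neural network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates (closer to 1.0 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, database, threshold=0.8):
    """Return the enrolled identity that best matches the probe template,
    or None if no comparison clears the threshold (an open-set search)."""
    best_id, best_score = None, threshold
    for identity, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

rng = np.random.default_rng(0)
db = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = db["person_a"] + rng.normal(scale=0.1, size=128)  # a noisy re-capture
print(identify(probe, db))  # expected: person_a
```

LFRT differs mainly in that a comparison like this runs continuously against every face detected in a video stream, rather than on a single submitted image.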

The accuracy of FRT algorithms is significantly impacted by factors like the distance and angle from which the image was captured, as well as poor lighting conditions. These problems are worse in LFRT, as the images are not captured in a controlled setting: subjects are in motion, rarely looking at the camera, and often positioned at odd angles to it.

Despite claims of its effectiveness, there has been growing scepticism about the use of FRT. Its use has been linked with the misidentification of people of colour, ethnic minorities, women, and trans people. The prevalent use of FRT may affect not only the privacy rights of such communities, but those of all who are surveilled at large.

The Prevalence of FRT 

While FRT has become ubiquitous, LFRT is still in the process of being adopted in countries like the UK, USA, India, and Singapore. The COVID-19 pandemic has further accelerated the adoption of FRT as a way to track the virus’ spread and to build contactless biometric-based identification systems. For example, in Moscow, city officials have used a system of tens of thousands of FRT-equipped cameras to check for social distancing measures, the usage of face masks, and adherence to quarantine rules to contain the spread of COVID-19.

FRT is also being steadily deployed for mass surveillance activities, which is often in violation of universally accepted principles of human rights such as necessity and proportionality. These worries have come to the forefront recently with the State use of FRT to identify people participating in protests. For example, FRT was used by law enforcement agencies to identify prospective law breakers during protests in Hong Kong, protests concerning the Citizenship Amendment Act, 2019 in New Delhi and the Black Lives Matter protests across the USA.

Vociferous demands have been made by civil society and digital rights groups for a global moratorium on the pervasive use of FRT that enables mass surveillance, and cities such as Boston and Portland have banned its deployment. However, it remains to be seen how effective these measures are in halting the use of FRT. Even the temporary refusal by Big Tech companies to sell FRT to police forces in the US does not seem to have much instrumental value, as other private companies continue to supply the technology unhindered.

Regulation of FRT

The approach to the regulation of FRT differs vastly across the globe. The regulatory spectrum ranges from the permissive use of mass surveillance on citizens in countries like China and Russia, to bans on the use of FRT, for example in Belgium and in Boston (in the USA). However, in many countries around the world, including India, the use of FRT continues unabated, worryingly in a regulatory vacuum.

Recently, an appellate court in the UK declared the use of LFRT for law enforcement purposes unlawful, on grounds of violation of the rights to data privacy and equality. Despite the presence of a legal framework in the UK for data protection and the use of surveillance cameras, the Court of Appeal held that there was no clear guidance on the use of the technology and that it gave excessive discretion to police officers.

The EU has been contemplating a moratorium on the use of FRT in public places. Civil society in the EU is demanding a comprehensive and indefinite ban on the use of FRT and related technology for mass surveillance activities.

In the USA, several orders banning or heavily regulating the use of FRT have been passed. A federal law banning the use of facial recognition and biometric technology by law enforcement has been proposed. The bill seeks to place a moratorium on the use of facial recognition until Congress passes a law to lift the temporary ban. It would apply to federal agencies such as the FBI, as well as local and State police departments.

The Indian Scenario

In July 2019, the Government of India announced its intention to set up a nationwide facial recognition system. The National Crime Records Bureau (NCRB) – a government agency operating under the Ministry of Home Affairs – released a request for proposal (RFP) on July 4, 2019 to procure a National Automated Facial Recognition System (AFRS). The deadline for submission of tenders to the RFP has been extended 11 times since July 2019. The stated aim of the AFRS is to help modernise the police force, information gathering, criminal identification, verification, and the dissemination of such information among various police organisations and units across the country.

Security forces across the states and union territories will have access to the centralised database of the AFRS, which will assist in the investigation of crimes. However, civil society organisations have raised concerns regarding privacy and increased surveillance by the State, as the AFRS has no legal basis (statutory or executive) and lacks procedural safeguards and accountability measures, such as an oversight regulatory authority. They have also questioned the accuracy of FRT in identifying darker-skinned women and ethnic minorities and expressed fears of discrimination.

This is in addition to the FRT already in use by law enforcement agencies in Chennai, Hyderabad, Delhi, and Punjab. There are several instances of deployment of FRT in India by the government in the absence of a specific law regulating FRT or a general data protection law.

Even the proposed Personal Data Protection Bill, 2019 is unlikely to assuage privacy challenges arising from the use of FRT by the Indian State. The primary reason for this is the broad exemptions provided to intelligence and law enforcement agencies under Clause 35 of the Bill on grounds of sovereignty and integrity, security of the State, public order, etc.

After the judgement in K.S. Puttaswamy vs. Union of India (Puttaswamy I), which reaffirmed the fundamental right to privacy in India, any act of State surveillance that breaches the right to privacy will need to adhere to the three-part test laid down in Puttaswamy I.

The three prongs of the test are: legality, which postulates the existence of a law along with procedural safeguards; necessity, defined in terms of a legitimate State aim; and proportionality, which ensures a rational nexus between the objects and the means adopted to achieve them. This test was also applied to the use of biometric technology in the Aadhaar case (Puttaswamy II).

It may be argued that State use of FRT serves the legitimate aim of ensuring national security, but currently its use is neither sanctioned by law, nor does it pass the test of proportionality. For proportionate use of FRT, the State will need to establish that there is a rational nexus between its use and the purpose sought to be achieved, and that the use of such technology is the least privacy-restrictive measure to achieve the intended goals. As the law stands today in India after Puttaswamy I and II, any use of FRT or LFRT is prima facie unconstitutional.

While mass surveillance is legally impermissible in India, targeted surveillance is allowed under Section 5 of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph Rules, 1951, and Section 69 of the Information Technology Act, 2000 (IT Act). Even so, the constitutionality of Section 69 of the IT Act has been challenged and is currently pending before the Supreme Court.

Puttaswamy I has clarified that the protection of privacy is not completely lost or surrendered in a public place, as it is attached to the person. Hence, the constitutionality of India’s surveillance apparatus needs to be assessed against the standards laid down by Puttaswamy I. To check unregulated mass surveillance through the deployment of FRT by the State, there is a need to restructure the overall surveillance regime in the country. The Justice Srikrishna Committee report in 2018 also highlighted that several executive-sanctioned intelligence-gathering activities of law enforcement agencies would be illegal after Puttaswamy I, as they do not operate under any law.

The need for reform of surveillance laws, in addition to a data protection law, to safeguard fundamental rights and civil liberties in India cannot be stressed enough. Surveillance law reform will have to focus on the use of new technologies like FRT and regulate their deployment with substantive and procedural safeguards to prevent abuse of human rights and civil liberties and to provide for relief.

Well-documented limitations of FRT and LFRT in terms of low accuracy rates, along with concerns of profiling and discrimination, make it essential for surveillance law reform to include additional safeguards such as mandatory accuracy and non-discrimination audits. For example, the 2019 Face Recognition Vendor Test (Part 3) of the National Institute of Standards and Technology (NIST), US Department of Commerce, evaluates whether an algorithm performs differently across different demographics in a dataset. The need of the hour is to cease the use of FRT and place a temporary moratorium on any future deployments till surveillance law reforms with adequate proportionality safeguards have been implemented.
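To illustrate what a non-discrimination audit in the spirit of NIST’s evaluation could involve, the sketch below computes a false match rate separately for each demographic group from a handful of comparison trials. The trial data, group labels and threshold are hypothetical; real audits run millions of comparisons over large curated datasets.

```python
# Hypothetical sketch of a demographic differential audit: compare false
# match rates (FMR) across groups, in the spirit of NIST's FRVT Part 3.
from collections import defaultdict

# Each trial: (demographic_group, similarity_score, is_same_person)
trials = [
    ("group_a", 0.91, True), ("group_a", 0.62, False), ("group_a", 0.70, False),
    ("group_b", 0.88, True), ("group_b", 0.81, False), ("group_b", 0.83, False),
]
THRESHOLD = 0.80  # scores at or above this count as a "match"

impostors = defaultdict(int)      # different-person comparisons per group
false_matches = defaultdict(int)  # of those, how many were wrongly matched

for group, score, same_person in trials:
    if not same_person:
        impostors[group] += 1
        if score >= THRESHOLD:
            false_matches[group] += 1

for group, attempts in impostors.items():
    fmr = false_matches[group] / attempts
    print(f"{group}: false match rate = {fmr:.2f}")
```

In this toy data, group_b’s impostor pairs are wrongly matched far more often than group_a’s; at scale, a persistent gap of this kind is exactly the demographic differential that a mandatory audit would be designed to flag.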

Building an AI governance framework for India

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a “Working Document: Towards Responsible AI for All” (“NITI Working Document/Working Document”). The Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

The Working Document highlights the potential of Artificial Intelligence (“AI”) in the Indian context. It attempts to identify the challenges that will be faced in the adoption of AI and makes some recommendations on how to address these challenges. The Working Document emphasises the economic potential of the adoption of AI in boosting India’s annual growth rate, its potential for use in the social sector (‘AI for All’) and the potential for India to export relevant social sector products to other emerging economies (‘AI Garage’). 

However, this is not the first time that the NITI Aayog has discussed the large-scale adoption of AI in India. In 2018, the NITI Aayog released a discussion paper on the “National Strategy for Artificial Intelligence” (“National Strategy”). Building upon the National Strategy, the Working Document attempts to delineate ‘Principles for Responsible AI’ and identify relevant policy and governance recommendations. 

Any framework for the regulation of AI systems needs to be based on clear principles. The ‘Principles for Responsible AI’ identified by the Working Document include the principles of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. While the NITI Working Document introduces these principles, it does not go into any substantive details on the regulatory approach that India should adopt and what the adoption of these principles into India’s regulatory framework would entail. 

In a series of posts, we will discuss the legal and regulatory implications of the proposed Working Document and more broadly discuss the regulatory approach India should adopt to AI and the principles India should embed in it. In this first post, we map out key considerations that should be kept in mind in order to develop a comprehensive regulatory regime to govern the adoption and deployment of AI systems in India. Subsequent posts will discuss the various ‘Principles for Responsible AI’, their constituent elements and how we should think of incorporating them into the Indian regulatory framework.

Approach to building an AI regulatory framework 

While the adoption of AI has several benefits, there are several potential harms and unintended risks if the technology is not assessed adequately for its alignment with India’s constitutional principles and its impact on the safety of individuals. Depending upon the nature and scope of the deployment of an AI system, its potential risks can include the discriminatory impact on vulnerable and marginalised communities, and material harms such as the negative impact on the health and safety of individuals. In the case of deployments by the State, risks include violation of the fundamental rights to equality, privacy, freedom of assembly and association, and freedom of speech and expression. 

Below, we highlight some of the key regulatory considerations:

Anchoring AI regulatory principles within the constitutional framework of India

The use of AI systems has raised concerns about their potential to violate multiple rights protected under the Indian Constitution such as the right against discrimination, the right to privacy, the right to freedom of speech and expression, the right to assemble peaceably and the right to freedom of association. Any regulatory framework put in place to govern the adoption and deployment of AI technology in India will have to be in consonance with its constitutional framework. While the NITI Working Document does refer to the idea of the prevailing morality of India and its relation to constitutional morality, it does not comprehensively address the idea of framing AI principles in compliance with India’s constitutional principles.

For instance, the government is seeking to acquire facial surveillance technology, and the National Strategy discusses the use of AI-powered surveillance applications by the government to predict crowd behaviour and for crowd management. The use of AI-powered surveillance systems such as these needs to be balanced against their impact on an individual's right to freedom of speech and expression, privacy and equality. Operational challenges surrounding accuracy and fairness in these systems raise further concerns. Considering the risks posed to the privacy of individuals, the deployment of these systems by the government, if at all, should only be in specific contexts, for a particular purpose, and in compliance with the principles laid down by the Supreme Court in the Puttaswamy case.

In the context of AI's potential to exacerbate discrimination, it is relevant to consider the State's use of AI systems for criminal sentencing and for assessing recidivism. AI systems are trained on existing datasets, which tend to contain historically biased, unequal and discriminatory data. We have to be cognizant of the propensity for historical biases and discrimination to be imported into AI systems and their decision-making. This could further reinforce and exacerbate the existing discrimination in the criminal justice system towards marginalised and vulnerable communities, and result in a potential violation of their fundamental rights.

The National Strategy acknowledges the presence of such biases and proposes a technical approach to reduce them. While such attempts to rectify the situation and yield fairer outcomes are commendable, a purely technical approach disregards the fact that these datasets are biased because they arise from a biased, unequal and discriminatory world. As we seek to build effective regulation to govern the use and deployment of AI systems, we have to remember that these are socio-technical systems that reflect the world around us and embed the biases, inequality and discrimination inherent in Indian society. We have to keep this broader Indian social context in mind as we design AI systems and create regulatory frameworks to govern their deployment.
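To make this mechanism concrete, the short Python sketch below shows how a naive risk model trained on historically skewed records simply reproduces that skew. It is purely illustrative and not drawn from the Working Document or the National Strategy; the communities, outcomes and numbers are all invented.

```python
from collections import Counter

# Hypothetical historical records: (community, outcome).
# Community "B" was historically over-policed, so its records contain
# disproportionately many adverse outcomes.
historical_records = (
    [("A", "released")] * 80 + [("A", "detained")] * 20
    + [("B", "released")] * 40 + [("B", "detained")] * 60
)

def train_base_rates(records):
    """'Train' a naive risk model: learn the adverse-outcome rate per community."""
    totals, adverse = Counter(), Counter()
    for community, outcome in records:
        totals[community] += 1
        if outcome == "detained":
            adverse[community] += 1
    return {c: adverse[c] / totals[c] for c in totals}

model = train_base_rates(historical_records)
print(model)  # {'A': 0.2, 'B': 0.6}

# Any new individual from community "B" is now scored as three times riskier,
# purely because of the skew in the training data -- not anything they did.
```

A real recidivism model is far more complex, but the failure mode is the same: the model learns the bias in its inputs, not the ground truth.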

While the Working Document introduces principles for responsible AI such as equality, inclusivity and non-discrimination, and privacy and security, there needs to be substantive discussion around incorporating these principles into India's regulatory framework in consonance with constitutionally guaranteed rights.

Regulatory Challenges in the adoption of AI in India

As India designs a regulatory framework to govern the adoption and deployment of AI systems, it is important that we keep the following in focus: 

  • Heightened threshold of responsibility for government or public sector deployment of AI systems

The EU is considering a risk-based approach to the regulation of AI, with heavier regulation for high-risk AI systems. The extent of risk to factors such as safety, consumer rights and fundamental rights is assessed by looking at the sector of deployment and the intended use of the AI system. Similarly, India must consider adopting a higher regulatory threshold for the use of AI by at least government institutions, given their potential for impacting citizens' rights. Government uses of AI that can severely impact citizens' fundamental rights include the disbursal of government benefits, surveillance, law enforcement and judicial sentencing.

  • Need for an overarching principles-based AI regulatory framework

Different sectoral regulators are currently evolving regulations to address the specific challenges posed by AI in their sector. While it is vital to harness the domain expertise of a sectoral regulator and encourage the development of sector-specific AI regulations, such piecemeal development of AI principles can lead to fragmentation in the overall approach to regulating AI in India. Therefore, to ensure uniformity in the approach to regulating AI systems across sectors, it is crucial to put in place a horizontal overarching principles-based framework. 

  • Adaptation of sectoral regulation to effectively regulate AI

In addition to an overarching regulatory framework that forms the basis for the regulation of AI, it is equally important to envisage how this framework would work with horizontal or sector-specific laws, such as consumer protection law and the applicability of product liability to various AI systems. Traditionally, consumer protection and product liability frameworks have been structured around fault-based claims. However, given the challenges concerning explainability and transparency of decision-making by AI systems, it may be difficult to establish the presence of defects in products and, for an individual who has suffered harm, to provide the necessary evidence in court. Hence, consumer protection laws may have to be adapted to stay relevant in the context of AI systems. Even sectoral legislation regulating the use of motor vehicles, such as the Motor Vehicles Act, 1988, would have to be modified to enable and regulate the use of autonomous vehicles and other AI transport systems.

  • Contextualising AI systems for both their safe development and use

To ensure the effective and safe use of AI systems, they have to be designed, adapted and trained on relevant datasets depending on the context in which they will be deployed. The Working Document envisages India being the AI Garage for 40% of the world – developing AI solutions in India which can then be deployed in other emerging economies. Additionally, India will likely import AI systems developed in jurisdictions such as the US, the EU and China to be deployed within the Indian context. Both scenarios involve the use of AI systems in a context distinct from the one in which they were developed. Without effectively contextualising socio-technical systems like AI to the environment in which they are to be deployed, there are enhanced safety, accuracy and reliability concerns. Regulatory standards and processes need to be developed in India to ascertain the safe use and deployment of AI systems developed in contexts distinct from those in which they will operate.

The NITI Working Document is the first step towards an informed discussion on the adoption of a regulatory framework to govern AI technology in India. However, there is a great deal of work to be done. Any regulatory framework developed by India to govern AI must balance the benefits and risks of deploying AI, diminish the risk of any harm and have a consumer protection framework in place to adequately address any harm that may arise. Besides this, the regulatory framework must ensure that the deployment and use of AI systems are in consonance with India’s constitutional scheme.

Group Privacy and Data Trusts: A New Frontier for Data Governance?

The Centre's Non-Personal Data Report proposes a policy framework to regulate the use of anonymised data by Big Tech companies. The question now is: how well do its recommendations measure up to the challenges of regulating non-personal data, given the current regulatory lacuna? Shashank Mohan of the Centre for Communication Governance explores how concepts of collective privacy and data trusts lie at the forefront of India's future frameworks for digital governance.

By Shashank Mohan

This post first appeared on The Bastion on September 13, 2020


In the past few years, it has become common knowledge that Big Tech companies like Facebook, Google, and Amazon rely on the exploitation of user data to offer seemingly free services. These companies typically use business models that rely on third party advertising to profit off this data. In exchange for their services, we hand over our data without much control or choice in the transaction. 

In response to the privacy threats posed by such business models, countries around the world have been strengthening and enacting data privacy laws. India is currently debating its own personal data protection law, loosely based on the benchmark EU data protection law, the General Data Protection Regulation (GDPR). More recently, attention has shifted to the regulation of non-personal data as well. The Indian Government recently released a report on the Non-Personal Data Governance Framework (NPD Report).

But, why do we need to regulate non-personal data?

While progress on the regulation of personal data is necessary and laudable, in the era of Big Data and machine learning, tech companies no longer need to rely solely on processing our personally identifiable data ('personal data') to profile or track us. With newer developments in data analytics, they can find patterns and target us using seemingly innocuous data that may be aggregated or anonymised, but doesn't need to be identifiable.

For example, they only need to know that I am a brown male in the age range of 25-35, from New Delhi, looking for shoes, and not necessarily my name or my phone number. All of this is “non-personal” data as it’s not linked to my personal identity.
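A few lines of code are enough to illustrate how such cohort-level targeting works without any identifier ever being consulted. The sketch below is hypothetical; the attributes and profiles are invented for illustration.

```python
# Hypothetical, anonymised profiles: no names or phone numbers anywhere.
anonymised_profiles = [
    {"gender": "male", "age_range": "25-35", "city": "New Delhi", "interest": "shoes"},
    {"gender": "female", "age_range": "18-24", "city": "Mumbai", "interest": "books"},
    {"gender": "male", "age_range": "25-35", "city": "New Delhi", "interest": "shoes"},
]

def build_audience(profiles, **criteria):
    """Select every profile that matches all of the given attribute filters."""
    return [p for p in profiles
            if all(p.get(key) == value for key, value in criteria.items())]

# An advertiser can reach the author's cohort without knowing who he is.
audience = build_audience(anonymised_profiles, gender="male",
                          age_range="25-35", city="New Delhi", interest="shoes")
print(len(audience))  # 2
```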

Clearly, tech companies extract value from their service offerings using advanced data analytics and machine learning algorithms which rummage through both personal and non-personal data. This shift to harnessing non-identifiable/anonymised/aggregated data creates a lacuna in the governance of data, as traditionally, data protection laws like the GDPR have focused on identifiable data and giving an individual control over their personal data.

So, among other economic proposals, the NPD Report proposes a policy framework to regulate such anonymised data, to fill this lacuna. The question now is: how well do its recommendations measure up to the challenges of regulating non-personal data?

How Does The Government Define Non-Personal Data?

The NPD Report proposes the regulation of non-personal data, which it defines as data that is never related to an identifiable person, such as data on weather conditions, or personal (identifiable) data which has been rendered anonymous by applying certain technological techniques (such as data anonymisation). The report also recommends the mandatory cross-sharing of this non-personal data between companies, communities of individuals, and the government. The purpose for which this data may be mandated to be shared falls under three broad buckets: national security, community benefit, and promoting market competition.
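The report does not prescribe any particular anonymisation technique, but a minimal sketch of one common approach – generalisation of quasi-identifiers – shows what 'rendering personal data anonymous' can mean in practice. All records below are hypothetical.

```python
# Hypothetical identifiable records.
records = [
    {"name": "Asha",   "age": 27, "pincode": "110001", "condition": "diabetes"},
    {"name": "Vikram", "age": 29, "pincode": "110003", "condition": "diabetes"},
    {"name": "Meera",  "age": 31, "pincode": "110005", "condition": "asthma"},
]

def anonymise(rows):
    """Drop direct identifiers and coarsen quasi-identifiers."""
    anonymised = []
    for r in rows:
        decade = (r["age"] // 10) * 10
        anonymised.append({
            "age_band": f"{decade}-{decade + 9}",  # 27 -> "20-29"
            "pincode_prefix": r["pincode"][:3],    # keep only an area-level prefix
            "condition": r["condition"],
        })
    return anonymised

for row in anonymise(records):
    print(row)
# Each output row now describes a group rather than an identifiable person --
# though, as discussed below, group-level inferences remain possible.
```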

However, if such data is not related to an identifiable individual, then how can it be protected under personal data privacy laws?

To address these challenges in part, the report introduces two key concepts: collective privacy and data trusts. 

The NPD Report defines collective privacy as a right emanating from a community or group of people bound by common interests and purposes. It recommends that communities or groups of people exercise control over their non-personal data – as distinct from an individual exercising control over their personal data – via an appropriate nominee called a data trustee, who would exercise their privacy rights on behalf of the entire community. These two interconnected concepts of collective privacy and data trusteeship merit deeper exploration, due to their significant impact on how we view privacy rights in the digital age.

What is Collective Privacy and How Shall We Protect It?

The concept of collective privacy shifts the focus from an individual controlling their privacy rights, to a group or a community having data rights as a whole. In the age of Big Data analytics, the NPD Report does well to discuss the risks of collective privacy harms to groups of people or communities. It is essential to look beyond traditional notions of privacy centered around an individual, as Big Data analytical tools rarely focus on individuals, but on drawing insights at the group level, or on “the crowd” of technology users.

In a revealing example from 2013, data processors who accessed New York City's taxi trip data (including trip dates and times) were able to infer with a degree of accuracy whether a taxi driver was a devout Muslim, even though the taxi licenses and medallion numbers had been anonymised. They linked regular pauses in taxi trips to the fixed daily prayer times to arrive at their conclusion. Such findings and classifications may result in heightened surveillance of, or discrimination against, such groups or communities as a whole.
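The mechanics of that inference fit in a few lines. The fragment below is a hypothetical reconstruction, not the actual 2013 analysis; every pseudonym, timestamp and window in it is invented.

```python
from datetime import time

# Anonymised trip log: (driver_pseudonym, pause_start, pause_end),
# where a "pause" is the gap between one trip ending and the next starting.
trips = [
    ("driver_17", time(13, 5), time(13, 40)),
    ("driver_17", time(17, 0), time(17, 35)),
    ("driver_42", time(13, 0), time(13, 10)),
]

# Fixed daily windows during which prayers are assumed to take place.
PRAYER_WINDOWS = [(time(13, 10), time(13, 30)), (time(17, 5), time(17, 25))]

def pauses_covering_windows(rows, windows):
    """Count, per pseudonym, the pauses that fully span a prayer window."""
    counts = {}
    for driver, pause_start, pause_end in rows:
        for window_start, window_end in windows:
            if pause_start <= window_start and pause_end >= window_end:
                counts[driver] = counts.get(driver, 0) + 1
    return counts

# Drivers whose pauses repeatedly align with the windows stand out --
# a sensitive attribute inferred entirely from "non-personal" data.
print(pauses_covering_windows(trips, PRAYER_WINDOWS))  # {'driver_17': 2}
```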

An example of such a community in the report itself is of people suffering from a socially stigmatised disease who happen to reside in a particular locality in a city. It might be in the interest of such a community to keep details about their ailment and residence private, as even anonymised data pointing to their general whereabouts could lead to harassment and the violation of their privacy.

In such cases, harms arise not specifically to an individual, but to a group or community as a whole. Even if data is anonymised (and rendered completely un-identifiable), insights drawn at a group level help decipher patterns and enable profiling at the macro level.

However, the community suffering from the disease might also see some value in sharing limited, anonymised data about themselves with certain third parties; for example, with experts conducting medical research to find a cure for the disease. Such a group may nominate a data trustee – as envisioned by the NPD Report – who facilitates the exchange of non-personal data on their behalf and represents their privacy interests before relevant data processors.

This model of data trusteeship is thus clearly envisioned as a novel intermediary relationship–distinct from traditional notions of a legal trust or trustee for the management of property–between users and data trustees to facilitate the proper exchange of data, and protect users against privacy harms like large-scale profiling and behavioral manipulation.

But, what makes data trusts unique? 

Are Data Trusts the New ‘Mutual Funds’? 

Currently, data processors process a wide range of data – both personal and non-personal – about users, without providing them accessible information about how it is collected or used. Users who wish to use the services offered by data processors have no negotiating power over the collection or processing of their data. This results in information asymmetries and power imbalances between the two parties, without much recourse for users – especially in terms of non-personal data, which is not covered by personal data protection laws like the GDPR or India's Draft Personal Data Protection Bill.

Data trusts can help solve the challenges arising in the everyday data transactions taking place on the Internet. Acting as experts on behalf of users, they may be better positioned to negotiate for privacy-respecting practices than individual users. By standardising data-sharing practices like data anonymisation and demanding transparency in data usage, data trusts may also be better placed to protect collective privacy rights than an unstructured community. One of the first public recommendations to establish data trusts came from the UK Government's independent 2017 report, 'Growing the artificial intelligence industry in the UK', which recommended establishing data trusts to increase access to data for AI systems.

Simply put: data trusts might be akin to mutual fund managers, as they facilitate complex investments on behalf of and in the best interests of their individual investors. 

The Fault in Our Data Sarkaar

Since data trusts are still untested at a large scale, certain challenges need to be anticipated at the time of their conceptualisation – challenges the NPD Report does not account for.

For example, in some cases, the report suggests that the role of the data trustee could be assumed by an arm of the government. The Ministry of Health and Family Welfare, for instance, could act as a trustee for all data on diabetes for Indian citizens. 

However, the government acting as a data trustee raises important questions of conflict of interest–after all, government agencies might utilise relevant non-personal data for the profiling of citizens. The NPD Report doesn’t provide solutions for such challenges.

Additionally, the NPD Report doesn't clarify the ambiguity in the relationship between data trusts and data trustees, adding to the complexity of its recommendations. While the report envisions data trusts as institutional structures purely for the sharing of given data sets, it defines data trustees as agents of 'predetermined' communities who are tasked with protecting their data rights.

Broadly, this resembles how commodities (like stocks or gold) are traded over an exchange (the data trust) while agents such as stockbrokers (the data trustees) assist investors in making their investments. This is distinct from how Indian law treats traditional conceptions of trusts and trustees, and might require fresh legislation for its creation.

In terms of the exchange of non-personal data, possibly both these tasks–that is, facilitating data sharing and protecting data rights of communities/groups–can be delegated to just one entity: data trusts. Individuals who do not form part of any ‘predetermined’ community–and thus may not find themselves represented by an appropriate trustee–may also benefit from such hybrid data trusts for the protection of their data rights.

Clearly, multiple cautionary steps need to be in place for data trusts to work, and for the privacy of millions to be protected – steps the Report is yet to fully spell out.

Firstly, there is a need for legal and regulatory mechanisms to ensure that these trusts genuinely represent the best interests of their members. Without strong alignment with regulatory policy, data trusts might enable the further exploitation of data rather than bringing about reforms in data governance. Borrowing from traditional trust law, genuine representation of interests can be ensured by placing on the trust a legal obligation – in the form of an enforceable trust deed – imposing a fiduciary duty (or duty of care) towards its members.

Secondly, data trusts will require money to operate, and funding models will need to be developed that ensure the independence of trusts while serving their members' best interests. Various models will need to be tested before implementation, including government-funded data trusts and user-subscription-based systems.

Thirdly, big questions about the transparency of data trusts remain. As these institutions may be the focal point of data exchange in India, ensuring their independence and accountability will be crucial. Auditing, continuous reviews, and reporting mechanisms will need to be enmeshed in future regulation to ensure the accountability of data trusts.

Privacy Rights Must Be Paramount

As the law tries to keep pace with technology in India, recognising new spheres which require immediate attention, like the challenges of collective privacy, becomes pertinent for policymakers. The NPD Report takes momentous strides in recognising some of these challenges which require swift redressal, but fails to take into consideration emerging scholarship on the autonomy, transparency, and strength of its proposed data trusts.

For example, large data processors will need to be incentivised to engage with data trusts. Smaller businesses may engage with data trusts readily, given the newfound access to large amounts of data. But it might be difficult to incentivise Big Tech companies to engage with such structures, given their existing stores of wide-scale data on millions of users. This is where the government will need to go back to the drawing board and engage with multiple stakeholders to ensure that innovation goes hand in hand with a privacy-respecting data governance framework. Novel solutions like data trusts should be tested with pilot projects before being baked into formal policy or law.

More than three years after India’s Supreme Court reaffirmed the right to privacy as intrinsic to human existence and a guarantee under the Indian Constitution, government policy continues to treat data–whether personal or non-personal–as a resource to be ‘mined’. In this atmosphere, to meaningfully recognise the right to privacy and self-determination, the government must lay down a data governance framework which seeks to protect the rights of users (or data providers), lays down principles of transparency and accountability, and establishes strong institutions for enforcement of the law.

(This post is in the context of the report released by the Committee of Experts on the Non-Personal Data Governance Framework, constituted by the Ministry of Electronics and Information Technology. CCG's comments on the report can be accessed here.)

CCG’s Comments to the Ministry of Defence on the Defence Acquisition Procedure, 2020

On 28 July 2020, the Ministry of Defence (‘MoD’) uploaded the second draft of the Defence Procurement Procedure 2020 (‘DPP 2020’), now renamed as the ‘Defence Acquisition Procedure 2020’ (‘DAP 2020’) on its website, inviting comments and suggestions from interested stakeholders and the general public.

CCG submitted its comments on the DAP 2020 underscoring its key concerns with this latest iteration of the MoD’s policy for capital acquisitions. The comments were authored by Gunjan Chawla, with inputs and research from Sharngan Aravindakshan and Vagisha Srivastava.

Our comments to the MoD are aimed at:

(1) Highlighting certain points in law and procedure to refine the DAP 2020 and facilitate the building of a more robust regulatory framework for defence acquisitions that contribute to the building of an Aatmanirbhar Bharat (self-reliant India).

(2) Presenting certain legal tools and frameworks that remain at the Ministry’s disposal in this endeavour geared towards a thorough preparation for the defence of India, in tandem with the envisioned goal of the National Cybersecurity Strategy 2020-2025 [currently being formulated by the office of the National Cybersecurity Coordinator (‘NCSC’)] to build a cyber secure nation.

Other than this broader objective of formulating a clear, coherent and comprehensive policy for acquisition of critical technologies to strengthen India’s national security posture, our comments are intended to contribute meaningfully to the building of legal frameworks that enable enhancing the state of cybersecurity in India generally, and the defence establishment and defence industrial base ecosystem specifically.

The comments are divided into five parts.

Part I introduces the scope and ambit of this document. These comments are not a granular evaluation of the merits and demerits of every procedural step to be followed in various categories of defence acquisitions. Instead, we broadly trace the evolution of the structure, objectives and salient features of India's defence procurement and acquisition policies in recent years. The scope of the comments is restricted to those features of the DAP that are most closely related with or have implications for the cybersecurity of the defence establishment. In this regard, we note the omission of Chapter X on 'Simplified Capital Expenditure Procedure' from the text of the draft DAP document as a serious error that ought to be rectified at the earliest opportunity.

Part II deals with cybersecurity and information security in the acquisitions process generally, as this is a concern that must be addressed irrespective of the procedural categorisation of a particular acquisition. The inherently sensitive and strategic nature of defence acquisitions demands that processes and procedures be formulated in a manner that prevents any unwarranted leakage of information at premature stages of the acquisition process. Herein, we recommend that:

  1. The DAP 2020 should carefully distinguish between the terms ‘information security’ and ‘cyber security’, and refrain from using them interchangeably in policy documents.
  2. The MoD should demand full disclosure of the history of cyber attacks, breaches and incidents suffered by the vendor company (and related corporate entities) prior to the signing of the acquisition contract. This should be supplemented with a good-faith disclosure of incidents where the cyber infrastructure or assets of the vendor company may have been used, with or without proper authorization, in the conduct of a cyber breach or other incident, including attacks, exploits or other violations of digital privacy and human rights.

    As discussed in the comments, this line of inquiry would further India’s adherence to at least three of eleven voluntary, non-binding norms on responsible state behaviour in cyberspace articulated in the 2015 Report of the Group of Governmental Experts on Advancing Responsible State Behaviour in Cyberspace in the context of International Security.
  3. Online procurement portals should be designated as 'Critical Information Infrastructure' and/or 'Protected Systems' within the meaning of Sections 70 and 70A of the Information Technology Act, 2000.

Part III of the comments focuses on issues in the acquisition of information and communications technologies (ICT) and cyber systems. All suggestions and comments included in this Part are aimed at ensuring that our vision of Aatmanirbhar Bharat (self-reliant India) is also a sustainable one.

Key recommendations presented in this part include:

  1. Clearly defining the terminologies used with regard to the 'cyber domain' in Chapter VIII, such as ICTs/cyber systems, in order to bring more clarity to the procurement process, as well as to the scope and ambit of the DAP document.
  2. In these definitions and classification, distinguishing both ‘cyber weapons’ and ‘cyber physical weapons’ from cyber systems for command and control or C4I2SR, as well as ‘cybersecurity products and services’, which are essential to protect the confidentiality and integrity of sensitive government data across various ministries from external threats.
  3. The MoD should clarify the scope and ambit of the DAP and the DPM and the extent to which they apply to various categories of IT, ICT and cyber systems.
  4. The defence budget dataset should be re-assessed to evaluate the ratio of revenue expenditure to capital expenditure. This should be accompanied by an assessment of how much of the capital expenditure incurred over the years has gone towards capital assets owned by the armed forces, and what portion has been diverted towards the maintenance, upkeep and life-cycle costs of equipment, as per the CBRP model.

Further building on the issues that have been highlighted in the previous sections, Part IV delves into the broader legal and Constitutional framework applicable to procurements generally, and defence acquisitions specifically.

Herein, we propose opening up a discussion on opportunities and challenges in strengthening Parliamentary oversight of defence acquisitions. Given the huge sums of public funds involved in defence acquisitions, ensuring accountability and integrity in these processes is of paramount importance.

We note that the Defence Acquisition Procedure, as well as the Defence Procurement Manual, are internal guidelines issued by the Ministry of Defence as policy directives to be followed as a matter of the Executive's internal administration, and so far do not enjoy legislative backing through an Act of Parliament. Accordingly, this section presents a brief overview of current processes and mechanisms in this regard, and recommends that:

  1. This defect in the DAP ought to be remedied on a priority basis, drawing on the Constitutional authority vested in Parliament pursuant to Article 246 read with Schedule VII, List I Entry 1 to enact laws ‘for the preparation of defence of India’.

Part V summarises the major findings and recommendations of this submission.

The comments can be accessed here on CCG’s Blog.

What are ‘offensive cyber capabilities’?


By Gunjan Chawla and Vagisha Srivastava

In our previous post, "Does India have offensive cyber capabilities?", we discussed a recent amendment to the SCOMET list appended to the ITC-HS classification by the Directorate General of Foreign Trade (DGFT). The amendment did not define, but described, software for military offensive cyber operations as a term including (but not limited to) software designed to destroy, damage, degrade or disrupt systems, equipment and other software specified by Category 6 (Munitions), as well as software for cyber reconnaissance and cyber command and control.

In this post, we examine what exactly constitutes ‘offensive cyber capabilities’ (OCCs) and their role in conducting cyber operations with reference to various concepts from US, UK and Australia’s cyber doctrines. We begin by comparing two definitions of ‘cyber capabilities’.

‘Cyber Capabilities’ = ‘Cyber Operations’?

In US military doctrine, a ‘cyberspace capability’ is defined not as human skill in handling tools and software, but as “a device or computer program, including any combination of software, firmware, or hardware, designed to create an effect in or through cyberspace.” (emphasis added)

In contrast, the Australian Strategic Policy Institute (ASPI) in Defining Offensive Cyber Capabilities notes that “In the context of cyber operations, having a capability means possessing the resources, skills, knowledge, operational concepts and procedures to be able to have an effect in cyberspace.” (emphasis added)

The ASPI's emphasis on resources, skills and knowledge merits special attention. Without skilled personnel to wield such devices or software, offensive cyber operations cannot be mounted successfully. This is an especially important distinction if we are looking to formulate a functional definition relevant to India's requirements. Our conceptualisation of OCCs must accord priority not only to the acquisition of tools, devices and software developed by other nations, but also to building internal capacity through investment in the creation and dissemination of technical knowledge and skill development.

This view also finds support in the United Kingdom's articulation of defence 'cyber capability'. The UK's Cyber Primer, formulated by the Ministry of Defence, acknowledges (see fn 7) that defence cyber capabilities can be a combination of hardware, firmware, software and operator action (emphasis added).

Yet, surprisingly, the ASPI's concluding definition of OCCs equates offensive capabilities with offensive cyber operations (OCOs): "offensive cyber capabilities are defined as operations in cyberspace to manipulate, deny, disrupt, degrade, or destroy targeted computers, information systems or networks." (emphasis added)

The underlying logic of this equation is perhaps the old adage that the proof of the pudding is in the eating. In ASPI's conceptualisation, to 'have' OCCs would be meaningless, and not entirely credible, if no OCOs are conducted by the entities claiming to possess them. However, from a legal standpoint, one cannot say that 'capabilities' and 'operations' are synonymous any more than one could claim that having 'arms/ammunition/weapons' is synonymous with an 'armed attack'.

This leads us to an obvious question – what are offensive cyber operations?

Offensive Cyber Operations: Cyber Attacks (or Exploits) by Another Name?

In the United States’ military doctrine, Offensive Cyber Operations (OCOs) are understood to be operations that are “intended to project power by application of force in or through cyberspace.”

This definition of OCOs is also reiterated in the March 2020 report of the Cyberspace Solarium Commission (CSC). The CSC was constituted last year by the US Congress under the John S. McCain National Defense Authorization Act, 2019 to “develop a consensus on a strategic approach to defending the United States in cyberspace against cyber attacks of significant consequences” and presented its report to the public on 11 March 2020.

Over the years, US military doctrine and the strategy documents of the Department of Defense (DoD) have used a variety of terms to classify various categories of cyber operations. In 2006, the DoD preferred the broader term 'Computer Network Operations' (CNOs) to 'cyber attacks', as seen in its National Military Strategy for Cyberspace Operations. CNOs were classified into computer network attack (CNA), computer network defense (CND) and computer network exploitation (CNE).

More recent documents have dropped the use of the term ‘CNO’ and exhibit a preference for ‘cyberspace operations’ or ‘cyber operations’ instead. The US DoD Dictionary of Military and Associated Terms defines ‘cyberspace operations’ as ‘[t]he employment of cyberspace capabilities where the primary purpose is to achieve objectives in or through cyberspace’.

Yet, in spite of the multiplicity of terms employed, offensive cyber capabilities can be categorised broadly as the ability to conduct a cyber attack or a cyber exploitation. Although similar, it is important to distinguish cyber attacks from cyber exploitations. Herbert Lin has observed that "[t]he primary technical difference between cyber attack and cyber exploitation is in the nature of the payload to be executed—a cyber attack payload is destructive whereas a cyber exploitation payload acquires information nondestructively".

Indeed, the US DoD dictionary defines 'cyberspace attacks' and 'cyberspace exploits' separately. 'Cyberspace attacks' are actions taken in cyberspace that create noticeable denial effects (i.e., degradation, disruption, or destruction) in cyberspace, or manipulation that leads to denial apparent in a physical domain, and are considered a form of fire. In contrast, 'cyberspace exploitation' refers to 'actions taken in cyberspace to gain intelligence, maneuver, collect information, or perform other enabling actions required to prepare for future military operations'.

A definition of OCOs similar to the US’ conceptualisation can also be found in the UK Cyber Primer. This Primer defines OCOs as “activities that project power to achieve military objectives in, or through, cyberspace”.

The UK envisions OCOs as one of four non-discrete categories within the broader term 'cyber operations' that can be used to inflict temporary or permanent effects that reduce an adversary's confidence in networks or capabilities. Such action can support deterrence by communicating intent or threats. The four categories are: (1) defensive cyber operations; (2) offensive cyber operations; (3) cyber intelligence, surveillance and reconnaissance; and (4) cyber operational preparation of the environment.

Thus, we can infer from a combined reading of all these definitions that

  1. cyber capabilities and cyber operations are not synonymous, but
  2. cyber capabilities (both the technological tools, as well as the human skill elements) are a prerequisite to conducting OCOs, which may be intended to either –
    • ‘project power through the application of force’ (US) or
    • ‘achieve military objectives‘ (UK) or  
    • ‘manipulate, deny, disrupt, degrade, or destroy targeted computers, information systems or networks’ (ASPI)  or
    • 'destroy, damage, degrade or disrupt systems, equipment and other softwares' (India's DGFT) – in or through cyberspace.

A one-trick pony?

In order to execute an offensive cyber operation, the tools (or capabilities) used could range from simple malware, viruses, phishing attacks, ransomware and denial-of-service attacks to more sophisticated, specially built software. But these tools would be futile without vulnerabilities in the system being attacked to enable the exploit.

From the standpoint of conducting an offensive cyber operation (whether an attack or exploit), one would necessarily require:

  1. Cyber capabilities (technical tools and software) to exploit a pre-existing vulnerability, or to introduce a new vulnerability into the targeted system
  2. A specific intent (i.e. specific orders or directions to meet a particular, specified military or strategic objective in or through cyberspace)
  3. A person/organization/entity/State identified as the target (i.e. an intended target)
  4. Planning and clearly defining the expected consequences of the attack (i.e. the intended effects)

The presence or absence of any of these factors heavily influences the likelihood of success of a cyber attack or exploit. Often, the actual outcome of a cyber attack differs from the intended outcome. As one cyber intelligence analyst puts it, "Any cyber operator worth her salt knows that even mission-driven, militaristic hacking thrives under great, terrifying ambiguity."

Additionally, while these tools are time-consuming to produce, they are often rendered useless once an attack has been deployed. In most cases, this is because operators of the attacked system will apply security patches to close known vulnerabilities in the aftermath of a cyber attack. For this reason, OCCs, especially those that have been 'specially designed or modified for use in military offensive cyber operations', once deployed, have extremely limited to negligible potential for re-use or re-deployment, especially against the same target. However, without sufficient emphasis on and investment in human skills and capabilities, the effectiveness of the available technical tools would also suffer in the long run.

A ‘digital strike’ to start a ‘cyber war’?

To be termed an 'armed attack' or an unlawful 'use of force' under international law, the deployment of cyber capabilities in an OCO must cause actual physical damage comparable in scale and effects to that of a conventional, kinetic attack. Although some attacks or exploitations in cyberspace could result in physical damage akin to that caused by a traditional kinetic attack, most don't.

Drawing from a list of significant cyber incidents recorded by the Center for Strategic and International Studies (CSIS), we can observe that very few attacks carried out in the past had the potential to lead to casualties. Scholars still disagree on whether all of these cyber incidents could be termed 'a use of force' or 'a tool of coercion' in international law.

However, it is interesting to note that the intent of the perpetrator of a cyber attack, a crucial element baked into American definitions of OCOs, is conspicuously missing from the international law analyses that classify cyber attacks as a 'use of force' or 'armed attack' – analyses which rely largely on the scale and effects (actual, not intended) of the cyber attack (see Tallinn Manual 2.0, Rules 69 and 71). The omission of any reference to human skill or judgment in the US' definition of cyber capabilities, too, provides additional insulation from inquiries into the actual intent of the perpetrator of a cyber attack.

At this point in time, it is difficult to conceptualize a 'war' waged exclusively in cyberspace that does not manifest physical effects or spill over into other domains – not just air, land and sea, but also the economy. For this very reason, i.e. the interconnected nature of cyberspace with other domains where conflict manifests from competing interests, OCCs provide States a strategic military advantage by strengthening the effectiveness of conventional means and methods of warfare and streamlining military communications. However, the increasing dependence of Government, critical infrastructure and businesses on the internet in the networked economy necessarily implies that a failure to develop or acquire cyber capabilities will make regular economic losses and disruptions by way of cyber attacks inevitable.

This leads us to another question worth considering in the context of State hostilities in cyberspace – whether economic losses occasioned by a cyber attack can be considered as a factor in determining whether its scale and effects are comparable to those of a kinetic armed attack.

Both cyber attacks and cyber exploitations hold the potential to cause economic losses to the State under attack. Today it is common knowledge that the notorious WannaCry and NotPetya attacks resulted in losses totalling billions of dollars. Attacks on financial systems, commercial software, platforms or applications that generate economic value, or civilian infrastructure linked closely with the state economy could all fall under this risk. Such attacks can also substantially slow down State functions if the chaos generated within cyber systems spills over into the physical realm.

We must also remember that any response to this question cuts both ways – if India, or any other nation, wishes to treat economic losses caused by hostile States and other actors in cyberspace as indicative of an unlawful 'use of force' or an 'armed attack' in cyberspace, we must also be prepared to have our adversaries draw similar conclusions regarding economic losses inflicted upon them, and anticipate retaliatory action.

Given the massive risks to the economy associated with a high incidence of cyber attacks, it would be interesting to observe what direction the debate on offensive cyber capabilities takes with the release of the National Cyber Security Strategy 2020. With India’s cyber ecosystem under development, both the cyber offence and cyber defence capabilities are of immense strategic value and merit a deeper exploration and stricter scrutiny by policymakers.

This question lingers as an especially intriguing one, as the amendments to Appendix III of the ITC-HS classification referred to in our last post have now been taken down from the website of the Directorate General of Foreign Trade, only to be replaced by a sanitized version of the SCOMET list amended on 11.06.2020 – one that includes no reference to 'military offensive cyber operations' or even to 'cyber' simpliciter. Even the reference to 'intrusion software' under head 8E401 has now been omitted. The version of the SCOMET list that we relied on for our previous post is no longer available on the DGFT website, but interested researchers can download it here on CCG's Blog.

Does India have offensive cyber capabilities?


By Gunjan Chawla

While we await the release of the much-anticipated National Cyber Security Strategy 2020 (NCSS), a significant development in the domestic regulation of foreign trade – an amendment quietly inserted by the Directorate General of Foreign Trade (DGFT) on 11.06.2020 – offers a strong indication of the direction we can expect the NCSS document to take.

The Foreign Trade Policy (FTP) is formulated and notified by the DGFT under the statutory authorization provided by Section 5 of the Foreign Trade (Development and Regulation) Act, 1992. The FTP regulates, among many other things, the import and export of certain types of technologies. It also enforces compliance with India's obligations under international export control agreements like the Wassenaar Arrangement.

The latest FTP was formulated for the period of 2015-2020, and last revised in December 2017. The FTP is published in three parts – (i) the Policy Document (ii) Handbook of Procedures and (iii) the ITC-HS Classification.

The Indian Trade Classification based on Harmonized System of Coding, better known as the ITC-HS classification system uses eight digit codes to describe and categorize items subject to regulation. Schedule I of the ITC-HS deals with import policy, while Schedule II of the ITC-HS describes the rules and regulations related to export policies.

Appendix III to Schedule II contains a descriptive list for the category of SCOMET (Special Chemicals, Organisms, Materials, Equipment and Technology). The SCOMET list itemises goods, services and technologies used for civilian and military applications, including some 'dual-use items', for export control regulation.

Category 6 of the SCOMET list is the Munitions list, while Category 8 relates to “Special Materials and Related Equipment, Material Processing, Electronics, Computers, Telecommunications, Information Security, Sensors and Lasers, Navigation and Avionics, Marine, Aerospace and Propulsion”.

Under 6A021, which falls under the Munitions list, “software” subject to export control regulations is now defined to include,

“Software” specially designed or modified for the conduct of military offensive cyber operations;

Note 1 6A021.b.5. includes “software” designed to destroy, damage, degrade or disrupt systems, equipment or “software”, specified by Category 6, cyber reconnaissance and cyber command and control “software”, therefor.

Note 2 6A021.b.5. does not apply to “vulnerability disclosure” or to “cyber incident response”, limited to non-military defensive cybersecurity readiness or response.

Note 2 under 6A021 appears as a welcome relief to the information security research community by keeping vulnerability disclosures beyond the purview of export control regulations. However, it is relevant to mention that 'vulnerability disclosures' and 'cyber incident response' had already been excluded from the purview of export control restrictions in an earlier amendment to the SCOMET list on 03.07.2018. That exception, though, appears not under Category 6, but Category 8, as an exception to head 8E401 Computers (Technology). Therefore, the exception carved out under 6A021 by the 11.06.2020 amendment is a mere reiteration of the exception already contained under 8E401, inserted by the amendment of 03.07.2018, which reads as follows:

c. “Technology” for the “development” of “intrusion software”.

Note 1: 8E401.a and 8E401.c do not apply to ‘vulnerability disclosure’ or ‘cyber incident response’.

 Note 2: Note 1 does not diminish national authorities’ rights to ascertain compliance with 8E401.a and 8E401.c.

Technical Notes:

1. ‘Vulnerability disclosure’ means the process of identifying, reporting, or communicating a vulnerability to, or analysing a vulnerability with, individuals or organizations responsible for conducting or coordinating remediation for the purpose of resolving the vulnerability.

2. ‘Cyber incident response’ means the process of exchanging necessary information on a cyber security incident with individuals or organizations responsible for conducting or coordinating remediation to address the cyber security incident.

Therefore, our export control regulations may have been cognizant of and sensitive to the need for ensuring free flow of data and information with regards to vulnerability disclosures and cyber incident response systems since 2018. It is also relevant to mention that the previous version of this list dated 24.04.2017 made no references whatsoever to ‘cyber incident response’ or ‘vulnerability disclosure’.

The June 2020 amendment to the SCOMET list is a highly significant development, as this is the first official document that strongly suggests the existence of offensive cyber capabilities specially designed for military use in the broader ecosystem of tech regulation in India.

MeitY had earlier made a passing reference to "offensive cyber" in a draft report authored by one of four committees constituted in February 2018 for the promotion of AI and the development of a regulatory framework. The Report of Group D, the Committee on Cyber Security, Safety, Legal and Ethical Issues, briefly speaks of "defensive and offensive AI techniques". However, that report contained recommendations that do not carry the force of law. In contrast, the DGFT's latest amendment to the SCOMET list has the effect of subjecting the export of such technologies to strict regulatory control by the Government.

This regulatory development stands in contrast to the response of National Cyber Security Coordinator Lt. Gen. Pant in an interview to Medianama on 2 June 2020, only a few days before the date of this amendment to the SCOMET list:

MediaNama: In terms of follow-up to hardware and software procurement, does India procure any software as cyber weapons? Is there a process to import or export them? There has been a discussion at the Open-ended Working Group [OEWG] at the UN regarding global procurement of cyber weapons. What is India’s position, policy on procurement of cyber weapons?

Lt General Pant: No, no. I don’t think anyone will be speaking of cyber weapons, sale or anything like that.

It now remains to be seen whether the National Cyber Security Strategy, yet to be released, will officially acknowledge the existence of ‘offensive cyber capabilities’, if not ‘cyber weapons’ within India’s cyber ecosystem.

Technology and National Security Law and Policy: Seminar Course Curriculum [February-June 2020]

Given the rapidly evolving landscape of international security issues and the challenges and opportunities presented by new and emerging technologies, Indian lawyers and policymakers need to acquire the capacity to engage effectively with national security law and policy. However, curricula in Indian law schools do not engage adequately with issues of national security. National security threats, balance of power, issues of secrecy and political accountability, and terrorism and surveillance laws tend to be discussed in a piecemeal manner within various courses or electives.

To fill this knowledge gap within the legal community, the Centre for Communication Governance at National Law University Delhi (CCG-NLU) offered this seminar course to fourth and fifth-year students of the B.A. LL.B. (Hons.) Programme from February to June 2020.

The course explores interdisciplinary approaches in the study of national security law and policy, with a particular focus on issues in cybersecurity and cyberwarfare. Through this course curriculum, we aim to (1) recognize and develop National Security Law as a discrete discipline of legal studies, and (2) impart basic levels of cybersecurity awareness and inculcate good information security practices among tomorrow’s lawyers.

The curriculum is split into six modules taught over a period of 12 weeks:

  • Module I: Unpacking ‘National Security’
  • Module II: Introduction to Strategic Thinking – Linking Law and Policy
  • Module III: National Security in the Domestic Sphere
  • Module IV: War and National Security in International Law
  • Module V: Cybersecurity, Cyberwarfare and International Law
  • Module VI: Cybersecurity in India

The course outline and reading list can be accessed here.

CCG’s Comments on the NODE Whitepaper

By Shashank Mohan and Nidhi Singh

In late March, the Ministry of Electronics and Information Technology (MeitY) released its consultation whitepaper on the National Open Digital Ecosystems (NODE). The NODE strategy was developed by MeitY in consultation with other departments and stakeholders, as part of its efforts to build an enabling ecosystem to leverage digital platforms for transformative social, economic and governance impact through a citizen-centric approach. The Whitepaper highlights key elements of NODE and its distinction from previous models of GovTech. The Centre submitted its comments on the NODE Whitepaper on 31 May 2020, highlighting some of our key concerns with the proposed strategy.

The NODE Whitepaper proposes a complex network of digital platforms with the aim of providing efficient public services to the citizens of India. It defines NODEs as open and secure delivery platforms, anchored by transparent governance mechanisms, which enable a community of partners to unlock innovative solutions to transform societal outcomes.

Our comments on the NODE strategy revolve around four key challenges: open standards, privacy and security, transparency and accountability, and community engagement. We have provided recommendations at each stage and have relied upon our previous work around privacy, cyber security and technology policy for our analysis.

Firstly, we believe that the NODE Whitepaper stops short of providing a robust definition of openness, and does not comprehensively address existing Government policies on open source software and open APIs. We recommend that existing policies are adopted by MeitY where relevant, and are revised and updated at least in the context of NODEs where required.

Secondly, one of the key concerns with the NODE Whitepaper is the lack of detailed discussion on the aspects of data privacy and security. The Whitepaper does not consider the principles of data protection established in the Personal Data Protection Bill, 2019 (PDPB 2019) or take into account other internationally recognised principles. Without adequately addressing the data privacy concerns which arise from NODEs, any policy framework on the subject runs the risk of being devoid of context. The existence of a robust privacy framework is essential before instituting a NODE like architecture. As the PDPB 2019 is considered by Parliament, MeitY should, as a minimum, incorporate the data protection principles as laid down in the PDPB 2019 in any policy framework for NODEs. We also recommend that in order to fully protect the right to privacy and autonomy of citizens, participation in or the use of NODEs must be strictly voluntary.

Thirdly, a NODE framework built with the aim of public service delivery should also incorporate principles of transparency and accountability at each level of the ecosystem. In a network involving numerous stakeholders including private entities, it is essential that the NODE architecture operates on sound principles of transparency and accountability and sets up independent institutions for regulatory and grievance redressal purposes. Public private relationships within the ecosystem must remain transparent in line with the Supreme Court jurisprudence on the subject. To this end, we recommend that each NODE platform should be supported and governed by accountable institutions, in a transparent manner. These institutions must be independent and not disproportionately controlled by the Executive arm of the Government.

Lastly, we focus on the importance of inclusion in a digital-first solution like NODE. Despite steady growth in Internet penetration in India, more than half of the population does not enjoy access to the Internet, and there is a significant gender gap in Internet access amongst Indians, with men forming a majority of the user base. Learning from studies on the challenges of exclusion from the Aadhaar project, we recommend that the NODE architecture be built keeping in mind India's digital infrastructure. Global best practices suggest that designing frameworks based on inclusion is a pre-condition for building successful models of e-governance. Similarly, NODEs should be built with the aim of inclusion, and must not become a roadblock to citizens' access to public services.

Public consultations like these will go a long way in building a robust strategy on open data systems as numerous stakeholders with varied skills must be consulted to ensure quality and efficacy in e-governance models. We thank MeitY for this opportunity and hope that future developments would also follow a similar process of public consultations to foster transparency, openness and public participation in the process of policy making.

Our full comments submitted to the Ministry can be found here.