Reflections on Second Substantive Session of UN OEWG on ICT Security (Part 2): Threats, Cyber Norms and International Law

Ananya Moncourt & Sidharth Deb

“Aspects of Cyber Conflict (pt. 3)” by Linda Graf is licensed under CC BY 4.0

Introduction

Part 1 of this three-part series on the second substantive session of the United Nations’ (UN) Open-Ended Working Group (OEWG) on ICT security (2021-25) analysed key organisational developments regarding multistakeholder participation. The post contextualised the OEWG’s institutional mandate, analysed the impact of the Russia-Ukraine conflict on discussions, traced differing State positions, and critiqued the overall inclusiveness of the final modalities on stakeholder participation at the OEWG.

This post (and subsequently Part 3) analyses the substantive discussions at the session held between March 28 and April 01, 2022. These discussions were organised according to the OEWG’s mandate outlined in UN General Assembly (GA) Resolution 75/240. Accordingly, Part 2’s analysis covers:

  • existing and potential threats to “information security”.
  • rules, norms and principles of responsible State behaviour i.e. cyber norms.
  • international law’s applicability to States’ use of ICTs.

Both posts examine differing State interventions, including India’s interventions under each theme. The combined analysis of Parts 2 and 3 provides evidence that UN cybersecurity processes struggle with an inherent tension: the dichotomy between the OEWG’s mandate, which is based on confidence building, cooperation, collective resilience, common understanding and mutual accountability, and the geopolitical rivalries which shape multilateralism. Specifically, it demonstrates the role of lawfare within these processes.

Existing and Potential Threats

Discussions reflected the wide heterogeneity of States’ perceptions of threats in cyberspace. The US, UK, EU, Estonia, France, Germany, Canada, Singapore, the Netherlands and Japan prioritised securing critical infrastructure and ICT supply chains. Submarine cables, communication networks, rail systems, the public core of the internet, healthcare infrastructure and information assets, humanitarian databases, and oil and gas pipelines were cited as contemporary targets. Ransomware and social engineering were highlighted as prominent malicious cyber techniques.

In contrast, Russia, China and allies like Syria, Cuba and Iran urged the OEWG to address threats which conform to their understanding of “information security”. Premised on information sovereignty and domestic regime stability, prior proposals like the International Code of Conduct for Information Security offer a template for understanding their objectives. These States advocate regulating large-scale disinformation, terrorism, recruitment, hate speech and propaganda occurring over private digital platforms like social media. Cuba described such ICTs as tools for interventionism and destabilisation which interfere in States’ internal affairs. Iran and Venezuela cautioned States against using globally integral ICT systems as conduits for illegitimate geopolitical goals which compromise other States’ cyber sovereignty—a recurring theme of these States’ engagement at the session.

The Netherlands and Germany described threats against democratic and/or electoral processes as threats to critical infrastructure. Similarly, France described disinformation as a risk to security and stability in cyberspace. This is important to track, since partial intersections with the Sino-Russian understanding of information security could increase the future prospects of regulating information flows at the OEWG.

Developing States like Brazil, Venezuela and Pakistan characterised the digital/ICT divide between States as a major threat to cyberspace stability. Thus, capacity building, multistakeholder involvement and international cooperation — at the CERT, policymaking and law enforcement levels — were introduced early as key elements of international cybersecurity. The UK and Russia supported this agenda. France, China and Ecuador identified the development of offensive cyber capabilities as an international threat, since such capabilities legitimise cyberspace as a theatre of military operations.

India’s participation in this area treads a middle ground. ICT supply chain security across infrastructure, products and services, and the protection of “critical information infrastructures” (CIIs) integral to economies and “social harmony”, were stated priorities. Notably, the definition of CIIs under India’s Information Technology Act does not cite social harmony. India cited ransomware, misinformation, data security breaches and “… mismatches in cyber capabilities between Member States” as contemporary threats. To mitigate these threats, India advocated improved information sharing and cooperation at the technical, policy and government levels across Member States.

Cyber Norms

States disagreed on whether prior GGE and OEWG consensus reports serve as a minimum baseline for future cyber norms discussions. The Sino-Russian camp, which includes Iraq, Nicaragua, Pakistan, Belarus, Cuba and others, argued that cyber norms are an insufficient fix and instead proposed a new legally binding instrument on international cybersecurity. China proposed its Global Initiative on Data Security as a blueprint for such a framework. Calls for treaties/conventions could trigger the reintroduction of these States’ prior proposals on information security.

The US, UK, Australia, Japan, France, Germany, the Netherlands and allied States, along with developing countries like Brazil, Argentina, Costa Rica, South Africa and Kenya, argued that, instead of revisiting first principles, the current OEWG’s focus should be the implementation of previously agreed cyber norms. Self-assessment of States’ implementation of the cyber norms framework was considered a first step. The United Nations Institute for Disarmament Research (UNIDIR), in partnership with Australia, Canada, Mexico and others, launched a new national survey tool to gauge countries’ trajectories in implementation. Since cyber norms are voluntary, the survey serves as a soft mechanism of accountability, a platform which democratises best practices, and a directory of national points-of-contact (PoCs) through which States can connect and collaborate.

States also raised substantive areas for discussion on new norms or clarifications of existing ones. The Netherlands, US, UK and Estonia called for protections safeguarding the public core of the internet, since it comprises the technical backbone infrastructure of cyberspace which facilitates freedom of expression, peaceful assembly and access to online information. “Due diligence” — which requires States not to allow their territory to be used for internationally wrongful acts — was another substantive area of interest.

ICT supply chain integrity and attribution generated substantial interest. Under this theme, China, whose domestic companies face close scrutiny abroad, recommended new rules and standards on international supply chain security. Analysed through the lens of lawfare, this proposal perhaps aims to minimise targeted State measures against Chinese ICT suppliers in both telecom and digital markets.

The US pressed for deliberations on “attribution”, specifically the public attribution of State-sponsored malicious cyber activities. China cautioned against hasty public attributions since they may cause escalation and inter-State confrontation. China argued that attributions of cyber incidents require complete and sufficient technical evidence. The sole emphasis on technical evidence (which ignores surrounding evidence and factors) could be strategic, since it creates a challenging threshold for attribution. As a result, it could counterintuitively end up obfuscating the source of malicious activities in cyberspace.

Discussions on “critical infrastructure” protection also generated important interventions. Singapore stated that critical infrastructure security should protect electoral and democratic integrity. China argued for an international definition of “critical infrastructure” consistent with sovereignty. Over time, such representations could further legitimise greater information controls and embed the Sino-Russian conception of information security within global processes.

India focused on supply chain integrity, critical infrastructure protection and greater institutional and policy cooperation. It advocated close cooperation in matters involving the criminal and terrorist use of ICTs. There were also brief references to the democratisation of cyber capabilities across Member States and the role of cloud computing infrastructure in future inter-State conflicts. This served as a prelude to India’s interventions under international law.

International Law

Familiar geopolitical fragmentations shaped discussions. Russia, China, Cuba, Belarus, Iran, and Syria called for a binding international instrument which regulates State behaviour in cyberspace. Belarus argued that extant international legal norms and the UN Charter lack meaningful applicability to modern cyber threat landscapes. Russia and Syria called for clarity on what areas and issues fall within the sphere of international cybersecurity. Viewed through the lens of lawfare, it appears that such proposals aim to integrate their conceptions of information security within OEWG discussions.

The EU, Estonia, Australia and France argued this would undermine prior international processes and the cyber norms framework. The US, UK, Australia, Canada, Brazil, France, Japan, Germany and Korea instead focused on developing a common understanding of international law’s applicability to cyberspace, including the UN Charter. They pushed for dialogue on international humanitarian law, international human rights law, the prohibition on the use of force, and the right to self-defence against armed attacks. As in the failed negotiations at the 5th GGE, these issues remain contentious. For instance, Cuba argued against the applicability of the right to self-defence since, in its view, no cybersecurity incident can qualify as an “armed attack”.

Sovereignty, sovereign equality and non-interference in States’ internal affairs were prominent issues. Other substantive areas included attribution (technical, legal and political), critical infrastructure protection and the peaceful settlement of disputes. To enable common understanding and potential consensus on international law, the US, Singapore and Switzerland advocated that the OEWG follow an approach similar to that of the 6th UN GGE. Specifically, they suggested developing a voluntary compendium of national positions on the applicability of international law in cyberspace.

India addressed issues relating to sovereignty, non-intervention in internal affairs, the prohibition of the use of force, attribution, and dispute settlement. It discussed the need to assign international responsibility to States for cyber operations which originate in one State and have extraterritorial effects. It argued that States enjoy the sovereignty to pass domestic laws/policies towards securing their ICT environments. India advocated imposing upon States an obligation to take reasonable steps to stop ICT-based internationally wrongful acts originating within their territory. Finally, it highlighted that international law must adapt to the role of cloud computing in hosting data and malicious activities in cross-border settings.

Conclusion | Previewing Part 3

In Part 2 of this series on the second substantive session of the OEWG on ICT Security (2021-25), we analysed States’ interventions on existing and potential threats to information security; the future role of cyber norms for responsible State behaviour in cyberspace; and the applicability of international law in cyberspace. In Part 3 we assess discussions relating to confidence building measures, capacity building and regular institutional dialogue. While this post reveals the geopolitical tensions which influence international cybersecurity discussions, the next post focuses extensively on the international cooperation, trust building, technical and institutional collaboration, and developmental aspects of these processes.

Technology and National Security Law Reflection Series Paper 6: The Legality of Lethal Autonomous Weapons Systems (“LAWS”)

Drishti Kaushik*

About the Author: The author is a final-year student at the National Law University, Delhi. She has previously been associated with CCG as part of its summer school in March 2020 and also worked with the Centre as a Research Assistant between September 2020 and March 2021.

Editor’s note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law.

Introduction

Artificial Intelligence (AI) refers to a machine’s ability to perform tasks which typically require human intelligence. AI is currently used in a variety of fields and disciplines. One such field is the military, where AI is viewed as a means to reduce human casualties.

One such use case is the development and use of Lethal Autonomous Weapons Systems (LAWS), or “killer robots”, which can make life and death decisions without human intervention. Though the technology behind LAWS and its application remains unclear, LAWS have become a central point of debate globally. Several countries seek a complete preemptive ban on their use and development, highlighting that the technology to achieve such outcomes already exists. Other countries have expressed a preference for a moratorium on their development until there are universal standards regarding their production and usage.

This piece examines whether LAWS are legal/lawful under International Humanitarian Law (IHL) as per the principles of distinction, proportionality and precautions. LAWS are understood as fully autonomous weapon systems that, once activated, have the ability to select and engage targets without any human involvement. The author argues that it is premature to conclude that LAWS are legal or illegal by hypothetically determining their compliance with extant humanitarian principles. Additionally, the piece identifies ethical considerations and legal reviews under IHL that must be satisfied to determine the legality of LAWS.

What are LAWS?

There is presently no universal definition of LAWS since the term ‘autonomous’ is ambiguous. ‘Autonomous’ in AI refers to the ability of a machine to make decisions without human intervention. The US Department of Defense issued a 2012 directive which defines LAWS as weapon systems that, once activated, can autonomously or independently “select and engage targets without any human intervention”. This means LAWS leave humans “out of the loop”. The “lack of human intervention” element is also present in definitions proposed by Human Rights Watch, the International Committee of the Red Cross (ICRC) and the UK Defence Ministry.

While completely autonomous weapon systems do not currently exist, the technology to develop them does. There are near-autonomous weapon systems, like Israel’s Iron Dome and the American Terminal High Altitude Area Defense, that can identify and engage incoming rockets. These are defensive in nature and protect sovereign nations from external attacks. Conversely, LAWS are weapon systems with offensive capabilities for pursuing targets. Some scholars recommend incentivising defensive autonomous systems within the international humanitarian framework.

Even though there is no singular definition, LAWS can be identified as machines or weapon systems which, once activated or switched on by humans, have the autonomy to search for and select targets as well as engage or attack them, without any human involvement in the entire selection and attacking process. The offensive nature of LAWS, as opposed to the use of automated systems for defensive purposes, is an important distinguishing factor for identifying LAWS. An open letter by the Future of Life Institute calls for a ban on “offensive autonomous weapons beyond meaningful human control” instead of a complete ban on AI in the military sector. This distinction between offensive and defensive weapons in the definition of LAWS was also raised at the 2017 meeting of the Group of Governmental Experts on LAWS.

Autonomy and offensive characteristics are the primary grounds behind demands for a complete ban on LAWS. Countries like Zimbabwe are uncomfortable with a machine making life and death decisions, while others like Pakistan worry that military disparities with technologically superior nations will lead to an unfair balance of power.

There remains considerable uncertainty surrounding LAWS and their legality as weapons for use in armed conflicts. Governance of these weapons, accountability, criminal liability and product liability are specific avenues of concern.

Autonomous anti-air unit by Pascal. Licensed under CC0.

Legal Issues under IHL

The legality of LAWS under IHL can be assessed at two levels: (a) development, and (b) deployment/use.

Legal Review of New Weapons

Article 36 of Additional Protocol I (“AP I”) to the Geneva Conventions provides for the legal review of any new weapon or means of warfare, to determine whether its development complies with the Geneva Conventions and customary international law. The weapon must not have an “indiscriminate effect” or cause “superfluous injury” or “unnecessary suffering”, as chemical weapons do.

The conduct of LAWS must have ‘predictability’ and ‘reliability’ for them to be legally deployed in armed conflicts. If this is not possible, then the conduct of LAWS in the midst of conflict may lead to “indiscriminate effect or superfluous injury or unnecessary suffering”.

Principles of Distinction, Proportionality & Precautions 

First, LAWS must uphold the basic rule of distinction. LAWS should differentiate between civilians and military objectives, and between those injured and those active in combat. Often even deployed troops are unable to make this determination successfully; thus, programming LAWS to uphold the principle of distinction remains a challenge.

Second, LAWS must uphold the principle of proportionality. Civilian casualties, injury and destruction must not be excessive in comparison to the military advantage gained by the attack. Making such value judgments in the middle of intense battles is difficult, and the programmers who develop LAWS may struggle to comprehend the complexities of these circumstances. Even when deploying deep learning, in which machines learn by recognising patterns, there might be situations in which a system first has to gain experience, and those growing pains in technological refinement may lead to violations of the proportionality principle.

Finally, LAWS must adhere to the principle of precautions. This entails the ability to recall or suspend an attack when it is not proportionate or harms civilians as opposed to military adversaries. The ability to deactivate or recall a weapon once deployed is tricky. There is a broad view among critics that LAWS will fail to comply with these principles and violate the laws of armed conflict.

Conversely, others argue that autonomous characteristics alone are not enough to prove that LAWS violate IHL. Existing principles are enough to restrict the use of LAWS to situations where IHL is not violated. Furthermore, autonomous weapons might be able to wait until they are fired upon to determine whether a person is a civilian or a combatant, since their sense of ‘self-preservation’ will not be as strong as that of human troops, thereby complying with the principle of distinction. Moreover, they might be deployed in naval or other theatres not open to civilians, affording LAWS a lower threshold for compliance with IHL principles. Supporters contend that LAWS might calculate and make last-minute decisions free of subjective human emotions, allowing them to choose the best possible plan of action and thereby respect the principles of proportionality and precautions.

Martens Clause

Article 1 of AP I to the Geneva Conventions states that, in cases not covered under the Conventions, civilians and combatants remain protected under “Customary International Law, principles of Humanity and Dictates of Public Conscience”. This is reiterated in the preamble of AP II to the Geneva Conventions. Referred to as the Martens Clause, it provides the basis for the ethical and moral aspects of the law of armed conflict. Since LAWS are not directly covered by the Geneva Conventions, their development and use must be guided by the Martens Clause. Therefore, LAWS may be prohibited due to non-compliance with customary international law, the principles of humanity or the dictates of public conscience.

LAWS cannot be declared illegal under customary international law since there is no defined State practice, as they are still being developed. The principles of humanity require us to examine whether machines should have the ability to make life or death decisions regarding humans. Moreover, recent data suggests that the dictates of public conscience may be skewed against the use of LAWS.

It might be premature to term LAWS, which do not currently exist, as legal or illegal on the basis of compliance with the Geneva Conventions. However, any discussion on the subject must keep these legal and ethical IHL-related considerations in mind.

Present Policy Situation 

The legal issues relating to LAWS are recognised by the UN Office for Disarmament Affairs. Under the Convention on Certain Conventional Weapons (CCW), a Group of Governmental Experts was asked to address the issues regarding LAWS. This group is yet to provide a singular definition of the term. However, it has recommended 11 guiding principles, which were adopted by the High Contracting Parties to the CCW in 2019.

The first principle states that IHL shall apply to all autonomous weapon systems, including LAWS. The second principle addresses accountability through “human responsibility” in decision-making relating to the use of these systems. Further, any degree of human-machine interaction at any stage of development or activation must comply with IHL. Accountability for the development, deployment and use of these weapons must be secured under IHL by ensuring a “chain of human command and control”. States’ obligation to conduct a legal review of any new weapon is also reiterated.

The guidelines also state that cyber and physical risks, and the risk of proliferation and acquisition by terrorists, must be considered while developing and acquiring such weapons. Risk assessment and mitigation must also be made part of the design and development of such weapons. Consideration must be given to compliance with IHL and other international obligations while using LAWS. The guidelines further caution that, when crafting policy measures, emerging technologies in the area of LAWS must not be “anthropomorphized”. Discussions on LAWS should not hinder peaceful civilian innovation. The principles finally highlight the importance of balancing military needs and human factors under the CCW framework.

The CCW also highlights the need to ensure “meaningful human control” over weapon systems, but does not define relevant criteria for it. Additionally, there are different stages, such as the development, activation and deployment of autonomous weapons. Only a human can develop and activate these autonomous systems. However, deployment is determined by the autonomous weapon on its own, as per its human programming.

Therefore, the question arises: will that level of human control over the LAWS’ programming be enough to qualify as meaningful human control? If not, will an overriding human command, which may or may not be exercised, allow for “meaningful human control”? These questions require further deliberation on what qualifies as “meaningful human control” and whether such control will even be enough given how rapidly AI is being developed. There is also a need to ensure that no bias is programmed into these weapons.

While these guiding principles are a first step towards an international framework, there is still no universal/comprehensive legal framework to ensure accountability for LAWS.

Conclusion

The legal, ethical and international concerns regarding LAWS must be addressed at a global level. A pre-emptive and premature ban might stifle helpful civilian innovation. Moreover, a ban will not be possible without the support of leading States like the US, Russia and the UK. Conversely, if the development of LAWS is left unregulated, it will become easier for countries possessing LAWS to go to war. Moreover, the development and deployment of LAWS will create a significant imbalance between technologically advanced and technologically disadvantaged nations. Furthermore, a lack of regulation may lead to the proliferation and acquisition of LAWS by bad actors for malicious, immoral and/or illegal purposes.

Since LAWS disarmament is not an option, control over LAWS is recommended. The issues with LAWS must be addressed at the international level by creating a binding treaty which incorporates a comprehensive definition of LAWS. The limits of autonomy must also be clearly demarcated, along with other legal and ethical considerations. The principles of IHL, including legal reviews, must also be implemented. Until then, defence research centres around the world should incorporate AI into more “defensive” and “non-lethal” military machinery. Such applications could include disarming bombs, surveillance drones or smart borders, instead of offensive and lethal autonomous weapon systems without any overriding human control.


*Views expressed in the blog are personal and should not be attributed to the institution.