Technology & National Security Reflection Series Paper 3: Technology and the Paradoxical Logic of Strategy

Manaswini Singh*

About the Author: The author is a 2020 graduate of National Law University, Delhi. She is currently pursuing an LLM with a specialization in Human Rights and Criminal Law at National Law Institute University, Bhopal.

Editor’s note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law.

In the present essay, the author reflects upon the following question: 

According to Luttwak, “The entire realm of strategy is pervaded by a paradoxical logic very different from the ordinary ‘linear’ logic by which we live in all other spheres of life” (at p. 2). Can you explain the relationship between technological developments and the conduct of war through the lens of this paradoxical logic?

Introducing Luttwak’s Paradoxical Logic of Strategy

While weakness invites the threat of attack, technologically advanced nations, with substantial investment in military technology and R&D and the capacity to retaliate, have the power to persuade weaker nations engaged in war to disengage or face the consequences. Initiating his discussion on the paradox of war, Luttwak invokes the famous Latin maxim si vis pacem, para bellum: if you want peace, prepare for war. Simply understood, readiness to fight can ensure peace. He takes the example of the Cold War to discuss the practicality of this paradoxical proposition. Countries that spend vast resources acquiring and maintaining nuclear weapons commit to deterrence rather than first use. Constant readiness to retaliate against an attack is a sound defensive stance, as it signals peaceful intent while discouraging attacks altogether. By contrast, developing anti-nuclear defensive technology, by which a nation waging war could conduct a nuclear attack and then defend itself against retaliation, signals provocation.

The presence of nuclear weapons, which cause large-scale destruction, has helped avoid any instance of global war since 1945, despite prolonged periods of tension between many nations across the globe. Nuclear weapons are an important reason for the maintenance of international peace. This is observable in India's border disputes with China and Pakistan, where conflicts have been frequent and extremely tense, leading to many deaths. Yet these disputes have not escalated into full-fledged war, because all parties are aware that the others have sufficient means to wage war and would be willing to use those means when push comes to shove.

Using the example of the standardisation of antiaircraft missiles, Luttwak points out that “in war a competent enemy will be able to identify the weapon’s equally homogeneous performance boundaries and then proceed to evade interception by transcending those boundaries… what is true of anti aircraft missiles is just as true of any other machine of war that must function in direct interaction with reacting enemy – that is, the vast majority of weapons.”

Image by VISHNU_KV. Licensed via CC0.

Luttwak’s Levels of Strategy

The five levels of strategy as traced by Luttwak are: 

  1. Technical interplay of specific weapons and counter-weapons.
  2. Tactical combat of the forces that employ those particular weapons.
  3. Operational level that governs the consequences of what is done and not done tactically.
  4. Higher level of theatre strategy, where the consequences of stand-alone operations are felt in the overall conduct of offence and defence.
  5. The highest level of grand strategy, where military activities take place within the broader context of international politics, domestic governance, economic activity, and related ancillaries.

These five levels of strategy create a defined hierarchy, but outcomes are not simply imposed in a one-way transmission from top to bottom. The levels interact with one another in a two-way process. In this way, strategy has two dimensions: the vertical and the horizontal. The vertical dimension comprises the different levels that interact with one another; the horizontal dimension comprises the dynamic logic that unfolds concurrently within each level.

Situating Technological Advancements Within Luttwak’s Levels of Strategy

In applying paradoxical logic at the highest level of grand strategy, we observe that breakthrough technological developments provide only an incremental benefit for a short period of time. The problem with technological advancement giving one participant in war an advantage is that the advantage is only initial and short-lived. Marginal improvement of pre-existing technology is a commonplace occurrence in militaries; Luttwak instead gives the example of the torpedo boat, a narrow technological specialisation of high efficiency. The torpedo boat was a highly specialised weapon, i.e. a breakthrough technological development, capable of inflicting disproportionate damage on larger battleships by attacking enemy ships with explosive spar torpedoes. The problem with such concentrated technology is that it is vulnerable to countermeasures. Torpedo boats were very effective in their early use but were quickly met with the countermeasure of torpedo boat destroyers, designed specially to destroy them. This initial efficiency and technical advantage, and its ultimate vulnerability to countermeasures, is the expression of paradoxical logic in its dynamic form.

When an opponent uses narrowly incremental technology to damage larger and costlier weapons, hoping to achieve surprise with the newly developed weapon, a reactive increment in one's own weaponry is enough to neutralise the innovation's effects. Technological developments that embody paradoxical conduct, surprising the opponent and finding them unprepared to respond, can be easily overcome precisely because of their narrowly specialised nature. Such narrowly specialised technology is not equipped to accommodate broad counter-countermeasures, and so the element of surprise attached to it can be nullified. These reciprocal force-development effects against torpedo-like weapons strengthen the responding party's defence by increasing its ability to fight and neutralise specialty weapons. Luttwak observed a similar response to the development of anti-tank missiles, which was countered by having infantry accompany tanks.

Conclusion

The aforementioned forces create a distinctly homogenous and cyclical process spanning the development of technology for military purposes and the concomitant countermeasures. In the same breath, one side's reactive measure also reaches a culminating point and becomes vulnerable to newer technical advancement designed for surprise attacks. Resources are wasted in responding to a deliberate offensive action in which the offensive side may already be aware of the defensive capabilities and aims merely to drain resources and cause initial shock. This can initiate another cycle of the dynamic paradoxical strategy. Within the scheme of grand strategy, what looks like a deadly and cheap wonder weapon at the technical level fails because of the existence of an active, thinking opponent. Opponents can deploy their own will in response strategies, denting the initial strategic assumptions and logic.

In summary, a disadvantage at the technical level can sometimes be overcome at the tactical level. Paradoxical logic is present in war and strategy, and the use of technology in the conduct of war exhibits the same dynamic interplay. Modern states have pursued technological advancements in ICT domains, and this has increased their dependence on high-end cyber networks for communication, storage of information, and more. Enemy states or third parties that lack equally strong manpower or ammunition for effective adversarial action may adopt tactical methods of warfare, introducing malware into the network systems of a state's critical infrastructure, such as intelligence agencies, research facilities, or stock markets, which are vulnerable to cyber-attacks and where states' difficulty in attributing responsibility poses additional problems.


*Views expressed in the blog are personal and should not be attributed to the institution.

Technology & National Security Reflection Series Paper 2: Sun Tzu’s Art of War: Strategy or Stratagems?

Manaswini Singh*

About the Author: The author is a 2020 graduate of National Law University, Delhi. She is currently pursuing an LLM with a specialization in Human Rights and Criminal Law at National Law Institute University, Bhopal.

Editor’s note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law. In the present essay, the author reflects upon the following question:

Edward Luttwak critiques Sun Tzu’s Art of War as a book of ‘stratagems’ or clever tricks, rather than a book of ‘strategy’. Do you agree with this assessment? Why/why not?

Introduction to Luttwak

Edward Luttwak, in his book Strategy: The Logic of War and Peace, discusses the conscious use of paradox versus linear, straightforward military tactics as a means of war strategy. According to Luttwak, strategy unfolds in two dimensions: the vertical and the horizontal.

The vertical dimension of strategy deals with the different levels of conflict. Among others, his work considers the technical, tactical, operational, and strategic levels. The horizontal dimension of strategy involves dealing with an adversary, i.e. the opponent whose moves we seek to reverse and deflect.

A grand strategy is a confluence of the military interactions that flow up and down level by level, forming strategy's vertical dimension, with the varied external relations among states, forming strategy's horizontal dimension.

While discussing the paradoxes inherent in war, he mentions the famous Latin maxim si vis pacem, para bellum, which translates to: if you want peace, prepare for war. Simply understood, readiness to fight can ensure peace. He says that situations of conflict tend to reward the paradoxical logic of strategy, which sometimes inflicts lethal damage by defying straightforward logical action.

“Art of War” by Nuno Barreto. Licensed under CC BY-SA 2.0

Critiquing Luttwak’s Assessment of Sun Tzu’s Art of War

Sun Tzu’s military treatise, the Art of War, comprises chapter-wise lessons and basic principles on key subjects of war: laying plans, the logistics of waging war, the importance of a military general, deception, resources, surprise attack, attack by stratagem, tactical dispositions, knowing the strength of one's army relative to the enemy's and attacking accordingly, preparedness for surprise, political non-interference in the chain of command, defence, quick and decisive attack, seeking victory rather than battle, the use of energy to one's advantage, managing the army, strengths and weaknesses, arrival on the battleground, the opponent's weaknesses, and the significance of secrecy in identifying weak places and attacking them. Secrecy and deception are crucial tactics of war for Sun Tzu, who goes so far as to say that all war is based on deception.

Luttwak, on the other hand, finds deception and secrecy to be costly plans in armed conflicts, discussing the Normandy invasion and the Pearl Harbor raid. Diversions created to mislead the opponent consume valuable resources, both in executing the paradoxical action and in maintaining secrecy about the actual plan; yet he fails to acknowledge the success of these operations. Luttwak also fails to offer alternative strategies that could have achieved the same ends by better means, especially where deception proved effective.

In the example of the 1943 Battle of Kursk, Luttwak himself undercuts his earlier claim that high-risk, uncertain war tactics are more harmful than useful, by highlighting Stalin's trust in the intelligence received about the impending German attack. The Soviet leader, upon deliberation, decided to take a defensive stance in the battle, conceding the German forces an initial offensive advantage. But this defensive posture was designed to draw the Germans into a trap and destroy their armour, creating the conditions for an effective counteroffensive by the Soviet army. Sun Tzu's principle of knowing one's enemy favoured the Soviet leader immensely: having a well-equipped and robust army, he ordered his forces to surround and attack the Germans, giving effect to Sun Tzu's principles. Luttwak seems fixated on the strategy of surprise attack against the opponent's weakest zone while overlooking Sun Tzu's other lessons on intelligence, the importance of spies, and knowing one's enemy as well as one knows oneself.

In Luttwak’s view, operational risks and the incidence of friction will ultimately affect combat by reducing the effectiveness of manpower or resources. But when the parties waging war are not on an equal footing in resources and manpower, and combat risk is already high, operational risks may be the better-chosen risks for a side outnumbered in weaponry and manpower. Meeting an opponent of equal strength and resources may be more common nowadays than in ancient times, and here Sun Tzu's principles lose some contemporary application. But dismissing his principles as cheap tricks remains extreme.

The Role of Diplomatic Engagement: A Blind Spot in the Art of War?

Luttwak emphasizes that strategy involves the existence of an adversary, and the recognition of another in one's plan of war, and postulates that the Chinese system, now as historically, does not engage in this: the Chinese do not study the enemy, and decide their own actions in isolation. He attributes the lack of diplomacy in China's historical record to a geography that minimized interaction between kingdoms. His argument is that the Art of War was composed against the backdrop of a Chinese culture that flourished with jungles to the south, the protection of the sea to the east, the thinly populated areas of Tibet to the west, and an empty northern border that was the entryway for infrequent invasions.

According to Luttwak, intra-cultural conflict between kingdoms in this isolated setting hindered the advent of diplomacy in Chinese culture. In Europe, conversely, the interaction between sovereign states arguably made strategy and elaborate planning a necessity. Adversarial logic is central to strategizing for Luttwak, and in his opinion it was absent in China for want of third-party intervention, unlike in Europe. He says Sun Tzu's tactics work best intra-culturally because, in dealing with foreigners, prediction becomes more tedious and less accurate. But Sun Tzu himself stresses knowledge of the enemy's tactics as an important element of the strategy a general builds when preparing for war. He recognized the existence of an adversary and penned the military tactics that constitute the Art of War accordingly. The term 'enemy' in his treatise cannot be assumed to exclude an enemy sovereign state.

Relevance of the Art of War in Modern Times

To Luttwak, Chinese geography did not facilitate diplomacy. But, this author argues, geography plays an important role in strategizing, since acting in accordance with terrain and natural forces is specific to place. Sun Tzu's ideas of utilizing heaven (weather) and earth (terrain) to one's advantage place importance on exploiting geographical terrain and weather conditions in one's favour. His principles cannot be dismissed as cheap tricks merely because they were not formulated in the era of modern, high-technology warfare between nation-states, warfare involving nuclear weapons and other high-tech means rather than the low-tech, close-contact combat more prevalent in former times. Modern strategy promotes economic war rather than military war; this may be the contextual limit on the strict application of Sun Tzu's principles in modern contexts. But reliance on infantry is still a method of warfare resorted to in armed conflict, and Sun Tzu's writings cannot be held obsolete in this regard.

Sun Tzu promoted non-interference of the sovereign in the general's command of war, so as to prevent confusion in the minds of troops regarding the chain of command. Contemporary developments in international politics place a heavy political and bureaucratic influence on military strategy; war and politics are so deeply intertwined in the relations of states that this aspect of Sun Tzu's principles seems irrelevant. But insofar as the ground-level operational chain of command is concerned, it must still be vested in the capable hands of military strategists and commanders of forces, with minimal interference from political actors, even those in power.

The nature of the national armed forces of sovereign states is such that commanders are individuals whose commands derive authority from their military ranks and their expertise in the ground realities of conflict. An established chain of command headed by experienced high-ranking officials of a state's military is pivotal for the effective execution of war strategy.

Sun Tzu gave importance to secrecy and spying as important methods of maintaining information awareness in warfare. Modern nation-states divert heavy funding to national intelligence agencies and keep the gathered information out of the general public's knowledge. In India, for example, under Section 24 of the Right to Information Act, 2005, the Intelligence Bureau and the National Security Guard of the Ministry of Home Affairs are among the intelligence and security organizations exempted from the state's duty to divulge information to the public. Military secrets and secret missions remain as relevant today as they were in Sun Tzu's time, or during the World Wars.

Final Conclusion

Luttwak agrees that actions based on paradoxical logic have always been prevalent military tactics and will remain among the most competent ones, even though straightforward logical tactics that avoid operational risks are favoured by parties with great strength, power, and numbers. He gives the example of the Israeli armed forces, whose actions became predictable and were duly intercepted by opponents. But Sun Tzu's work itself provides for a more direct attack when one is stronger than the opponent, and he stressed the importance of not repeating surprise tactics, lest the enemy become aware of predictable patterns. Even in the case of a strong Israeli force known for deceptive attacks, a straightforward logical attack was a digression from its common strategy of attacking weak points, and can be taken as an unanticipated move departing from Israel's general tactics.

A paradoxical action is not synonymous with an illogical action. In many strategies, like that of the Viet Cong, a paradoxical action rather than a straightforward linear one is best suited to secure or increase the probability of victory.1 In current times, the Art of War serves as an inspiration: it offers broad strategic principles rather than clever tricks, subject to its own limitations arising from technological development and the political dimension of war, i.e. increased friction at the vertical level due to variables that were unknown or avoidable in ancient times but are relevant now. Luttwak's dismissal of the ancient text as clever tricks may be motivated by its antiquity, or by Western prejudice against Eastern political systems as barbaric, but that certainly does not erase the influence of the Art of War as an important text on war and strategy.


* The views expressed in the blog are personal and should not be attributed to the institution.

References

  1. Luttwak, Edward N., Strategy: The Logic of War and Peace, The Belknap Press of Harvard University Press, 2001, pp. 13-15.

Introducing the Reflection Series on CCG’s Technology and National Security Law and Policy Seminar Course

In February 2022, CCG-NLUD will commence the latest edition of its Seminar Course on Technology and National Security Law and Policy (“the Seminar Course”). The Seminar Course is offered to interested 4th and 5th year students who are enrolled in the B.A. LL.B. (Hons.) programme at the National Law University, Delhi. The course is set against the backdrop of the rapidly evolving landscape of international security issues, and concomitant challenges and opportunities presented by emerging technologies.

National security law, viewed as a discrete discipline of study, emerges and evolves at the intersection of constitutional law; domestic criminal law and its implementation in surveillance; counter-terrorism and counter-insurgency operations; international law including the Law of Armed Conflict (LOAC) and international human rights law; and foreign policy within the ever-evolving contours of international politics.

Innovations and technological advancements in cyberspace and next generation technologies serve as a jumping off point for the course since they have opened up novel national security issues at the digital frontier. New technologies have posed new legal questions, introduced uncertainty within settled legal doctrines, and raised several legal and policy concerns. Understanding that law schools in India have limited engagement with cyber and national security issues, this Seminar Course attempts to fill this knowledge gap.

The Course was first designed and launched by CCG-NLUD in 2018. In 2019, the Seminar Course was redesigned with the help of expert consultations to add new dimensions and debates surrounding national security and emerging technologies. The redesign was meant to ground the course in interdisciplinary paradigms in a manner that allows students to study the domain through practical considerations like military and geopolitical strategy. The revised Seminar Course engages more deeply with third-world approaches, which helps situate several issues within the rubric of international relations and geopolitics. This allows students to holistically critique conventional precepts of the international world order.

The revamped Seminar Course was relaunched in the spring semester of 2020. Owing to the sudden countrywide lockdown in the wake of COVID-19, most sessions shifted online. However, we managed to navigate these exigencies with the support of our allies and the resolve of our students.

In adopting an interdisciplinary approach, the Seminar Course delves into debates at the intersection of national security law and policy, and emerging technologies, with an emphasis on cybersecurity and cyberwarfare. Further, the Course aims to:

  1. Recognize and develop National Security Law as a discrete discipline of legal studies, and
  2. Impart basic levels of cybersecurity awareness and inculcate good information security practices among tomorrow’s lawyers.

The Technology and National Security Seminar Reflection Paper Series (“The Reflection Series”) is meant to serve as a mirror of key takeaways and student learnings from the course. It is presented as a showcase of exceptional student essays which were developed and informed by classroom discussions during the 2020 and 2021 editions of the Seminar Course. The Reflection Series also offers a flavour of the thematic and theoretical approaches the Course adopts in order to stimulate structured discussion and thought among students. A positive learning from these two editions is that students demonstrated considerable intellectual curiosity and had the freedom to develop their own unique understanding of, and solutions to, contemporary issues, especially in the context of cyberspace and the wider ICT environment. Students were prescribed atypical readings, which allowed them to consider typical issues in domains like international law through the lens of developing countries, and to revisit the legitimacy of traditional sources of authority and the preconceived notions and assumptions which underpin much of the orthodox thinking in geostrategic realms like national security.

CCG-NLUD presents the Reflection Series with a view to acknowledge and showcase some of the best student pieces we received and evaluated for academic credit. We thank our students for their unwavering support and fruitful engagement that makes this course better and more impactful.

Starting January 5, 2022, select reflection papers will be published three times a week. This curated series is meant to showcase the different modules and themes of engagement which came up during previous iterations of the course. It will demonstrate that CCG-NLUD designs the course to cover the broad spectrum of topics at the intersection of national security and emerging technology. Specifically, this includes a showcase of (i) conceptual theory and strategic thinking, (ii) national security through an international and geostrategic lens, and (iii) national security through a domestic lens.

Here is a brief glimpse of what is to come in the coming weeks:

  1. Reimagining Philosophical and Theoretical Underpinnings of National Security and Military Strategy (January 5-12, 2022)

Our first reflection paper is written by Kushagra Kumar Sahai (Class of ’20) in which he evaluates whether Hugo Grotius, commonly known as the father of international law owing to his seminal work on the law of war and peace, is better described as an international lawyer or a military strategist for Dutch colonial expansion.

Our second reflection paper is a piece written by Manaswini Singh (Class of ’20). Manaswini provides her take on Edward Luttwak’s critique of Sun Tzu’s Art of War as a book of ‘stratagems’ or clever tricks, rather than a book of strategy. In a separate paper (third entry), Manaswini also undertakes the task of explaining the relationship between technological developments and the conduct of war through the lens of the paradoxical logic of strategy.

Our fourth reflection paper is by Animesh Choudhary (Class of ’21) on Redefining National Security. Animesh, in his submission, points out several fallacies in the current understanding of national security and pushes for “Human Security” as an alternative and more appropriate lens for understanding security issues in the 21st century.

  2. International Law, Emerging Technologies and Cyberspace (January 14-24, 2022)

In our fifth reflection paper, Siddharth Gautam (Class of ’20) explores whether cyber weapons could be subjected to any regulation under contemporary rules of international law.

Our sixth reflection paper is written by Drishti Kaushik (Class of ’21) on the Legality of Lethal Autonomous Weapons Systems (“LAWS”). In this piece, she first presents an analysis of what constitutes LAWS. She then attempts to situate modern systems of warfare like LAWS within the traditional legal norms prescribed under international humanitarian law, and to assess their compliance.

Our seventh reflection paper is written by Karan Vijay (Class of ’20) on ‘Use of Force in Modern Times: Sisyphus’ First World ‘Boulder’’. Karan examines whether, under international law, a mere threat of the use of force by one state against another would give rise to a right of self-defence. In another piece (eighth entry), Karan writes on the authoritative value of interpretations of international law expressed in texts like the Tallinn Manual, with reference to Article 38 of the Statute of the International Court of Justice, i.e. the traditional sources of international law.

Our ninth reflection paper is written by Neeraj Nainani (Class of ’20), who offers his insights on the legality of Foreign Influence Operations (FIOs) under international law. Neeraj's paper queries the legality, under established principles of international law, of FIOs conducted by adversary states to influence elections in other states through covert information campaigns (such as conspiracy theories, deep fake videos, “fake news”, etc.).

Our tenth reflection paper is written by Anmol Dhawan (Class of ’21). His contribution addresses the International Responsibility for Hackers-for-Hire Operations. He introduces us to the current legal issues in assigning legal responsibility to states for hacker-for-hire operations under the due diligence obligation in international law.

  3. Domestic Cyber Law and Policy (January 28-February 4, 2022)

Our eleventh and twelfth reflection papers are two independent pieces written by Bharti (Class of ’20) and Kumar Ritwik (Class of ’20). These pieces evaluate whether the Government of India's ongoing response to the COVID-19 pandemic could have benefited had the Government invoked the emergency provisions under the Constitution. Since the two pieces take directly opposing views, they collectively produce a fascinating debate on the tradeoffs of different approaches.

Our thirteenth and fourteenth reflection papers have been written by Tejaswita Kharel (Class of ’20) and Shreyasi (Class of ’20). Both Tejaswita and Shreyasi interrogate whether the internet (and therefore internet access) is an enabler of fundamental rights, or whether access to the internet is a fundamental right unto itself. Their analyses rely considerably on the Indian Supreme Court's judgment in Anuradha Bhasin v. Union of India, which concerned prolonged government-mandated internet restrictions in Kashmir.

We will close our symposium with a reflection paper by Romit Kohli (Class of ’21) on Data Localisation and National Security: Flipping the Narrative. He argues that the mainstream narrative around data localisation in India espouses a myopic view of national security. His contribution goes beyond this narrative to construct a novel understanding of the link between national security and data localisation, taking into consideration the unintended and oft-ignored consequences of the latter on economic development.

The Supreme Court’s Pegasus Order

This blog post has been authored by Shrutanjaya Bhardwaj.

On 28th October 2021, the Supreme Court passed an order in the “Pegasus” case establishing a three-member committee of technical experts to investigate allegations of illegal surveillance by hacking into the phones of several Indian citizens, including journalists. This post analyses the Pegasus order. Analyses by others may be accessed here, here and here.

Overview

The writ petitioners alleged that the Indian Government and its agencies have been using a spyware tool called “Pegasus”—produced by an Israeli technology firm named the NSO Group—to spy on Indian citizens. As the Court notes, Pegasus can be installed on digital devices such as mobile phones, and once Pegasus infiltrates the device, “the entire control over the device is allegedly handed over to the Pegasus user who can then remotely control all the functionalities of the device.” Practically, this means the ‘Pegasus user’ (i.e., the infiltrator) has access to all data on the device (emails, texts, and calls) and can remotely activate the camera and microphone to surveil the device owner and their immediate surroundings. 

The Court records some basic facts that are instructive in understanding its final order:

  1. The NSO Group itself claims that it only sells Pegasus to governments. 
  2. In November 2019, the then-Minister of Electronics and IT acknowledged in Parliament that Pegasus had infected the devices of certain Indians. 
  3. In June-July 2020, reputed media houses uncovered instances of Pegasus spyware attacks on many Indians including “senior journalists, doctors, political persons, and even some Court staff”.
  4. Foreign governments have since taken steps to engage diplomatically with Israel and/or to conduct internal investigations to understand the issue.
  5. Despite repeated requests by the Court, the Union Government did not furnish any specific information to assist the Court’s understanding of the matter.

These facts led the Court to conclude that the petitioners’ allegations of illegal surveillance by hacking need further investigation. The Court noted that the petitioners had placed on record expert reports and there also existed a wealth of ‘cross-verified media coverage’ coupled with the reactions of foreign governments to the use of Pegasus. The Court’s order leaves open the possibility that a foreign State or perhaps a private entity may have conducted surveillance on Indians. Additionally, the Union Government’s refusal to clarify its position on the legality and use of Pegasus in Court raised the possibility that the Union Government itself may have used the spyware. As discussed below, this possibility ultimately shaped the Court’s directions and relief.  

The Pegasus order is analysed below along three lines: (i) the Court’s acknowledgement of the threat to fundamental rights, (ii) the Union Government’s submissions before the Court, and (iii) the Court’s assertion of its constitutional duty of judicial review—even in the face of sensitive considerations like national security.

Acknowledging the risks to fundamental rights

While all fundamental rights may be reasonably restricted by the State, every right has different grounds on which it may be restricted. Identifying the precise right under threat is hence an important exercise. The Court articulates three distinct rights at risk in a Pegasus attack. Two flow from the freedom of speech under Article 19(1)(a) of the Constitution and one from the right to privacy under Article 21. 

The first right, relatable to Article 19(1)(a), is journalistic freedom. The Court noted that the awareness of being spied on causes the journalist to tread carefully and think twice before speaking the truth. Additionally, when a journalist’s entire private communication is accessible to the State, the chances of undue pressure increase manifold. The Court described such surveillance as “an assault on the vital public watchdog role of the press”.

The second right, also traced to Article 19(1)(a), is the journalist’s right to protect their sources. The Court treats this as a “basic condition” for the freedom of the press. “Without such protection, sources may be deterred from assisting the press in informing the public on matters of public interest,” which harms the free flow of information that Article 19(1)(a) is designed to ensure. This observation and acknowledgment by the Court is significant, and it will be interesting to see how the Court’s jurisprudence develops and engages with this issue.

The third right, traceable to Article 21 as interpreted in Puttaswamy, is the citizen’s right to privacy (see CCG’s case brief on Puttaswamy in the CCG Privacy Law Library). Surveillance and hacking are prima facie an invasion of privacy. However, the State may justify a privacy breach as a reasonable restriction on constitutional grounds if the legality, necessity, and proportionality of the State’s surveillance measure are established.

Court’s response to the Government’s “conduct” before the Court

The Court devotes a significant part of the Pegasus order to discussing the Union Government’s “conduct” in the litigation. The first formal response filed by the Government, characterised as a “limited affidavit”, did not furnish any details about the controversy owing to an alleged “paucity of time”. When the Court termed this affidavit “insufficient” and demanded a more detailed affidavit, the Solicitor General cited national security implications as the reason for not filing a comprehensive response to the surveillance allegations. This was despite repeated assurances by both the Petitioners and the Court that no sensitive information was being sought, and that the Government need only disclose what was necessary to decide the matter at hand. Additionally, the Government did not specify the national security consequences that would arise if more details were disclosed. (The Court’s response to the invocation of the national security ground on merits is discussed in the next section.)

In addition to invoking national security, the Government made three other arguments:

  1. The press reports and expert evidence were “motivated and self-serving” and thus of insufficient veracity to trigger the Court’s jurisdiction.
  2. While all technology may be misused, the use of Pegasus cannot per se be impermissible, and India had sufficient legal safeguards to guard against constitutionally impermissible surveillance.
  3. The Court need not establish a committee as the Union Government was prepared to constitute its own committee of experts to investigate the issue.

The Court noted that the nature and “sheer volume” of news reports are such that these materials “cannot be brushed aside”. The Court was unwilling to accept the other two arguments in part due to the Union Government’s broader “conduct” on the issue of Pegasus. It noted that the first reports of Pegasus use dated back to 2018 and a Union Minister had informed Parliament of the spyware’s use on Indians in 2019, yet no steps to investigate or resolve the issue had been taken until the present writ petitions had been filed. Additionally, the Court ruled that the limited documentation provided by the Government did not clarify its stand on the use of Pegasus. In this context, and owing to reasons of natural justice (discussed below), the Court opined that independent fact finding and judicial review were warranted.

Assertion of constitutional duty of judicial review

As noted above, the Union Government invoked national security as a ground to not file documentation regarding its alleged use of Pegasus. The Court acknowledged that the government is entitled to invoke this ground, and even noted that the scope of judicial review is narrow on issues of national security. However, the Court held that the mere invocation of national security is insufficient to exclude court intervention. Rather, the government must demonstrate how the information being withheld would raise national security concerns and the Court will decide whether the government’s concerns are legitimate. 

The order contains important observations on the Government’s use of the national security exception to exclude judicial scrutiny. The Court notes that such arguments are not new: governments have often urged constitutional courts to take a hands-off approach in matters that have a “political” facet (like those pertaining to defence and security). But the Court has previously held, and affirmed in the Pegasus order, that it will not abstain from interfering merely because a case has a political complexion. The Court noted that it may certainly choose to defer to the Government on sensitive aspects, but there is no “omnibus prohibition” on judicial review in matters of national security. If the State wishes to withhold information from the Court, it must “plead and prove” the necessary facts to justify such withholding.

The Government had also suggested that the Court let the Government set up a committee to investigate the matter. The Supreme Court had adopted this approach in the Kashmir Internet Shutdowns case by setting up an executive-led committee to examine the validity and necessity of continuing internet shutdowns. That judgment was widely criticised (see here, here and here). However, in the present case, as the petitions alleged that the Union Government itself had used Pegasus on Indians, the Court held that allowing the Union Government to set up a committee to investigate would violate the principle of bias in inquiries. The Court quoted the age-old principle that “justice must not only be done, but also be seen to be done”, and refused to allow the Government to set up its own committee. This is consistent with the Court’s assertion of its constitutional obligation of judicial review in the earlier parts of the order. 

Looking ahead

The terms of reference of the Committee are pointed and meaningful. The Committee is required to investigate, inter alia, (i) whether Pegasus was used to hack into the phones of Indian citizens, and if so, which citizens; (ii) whether the Indian Government procured and deployed Pegasus; and (iii) if the Government did use Pegasus, under what law or regulatory framework the spyware was used. All governmental agencies have been directed to cooperate with the Committee and furnish any required information.

Additionally, the Committee is to make recommendations regarding the enactment of a new surveillance law or amendment of existing law(s), improvements to India’s cybersecurity systems, setting up a robust investigation and grievance-redressal mechanism for the benefit of citizens, and any ad-hoc arrangements to be made by the Supreme Court for the protection of citizens’ rights pending requisite action by Parliament.

The Court has directed the Committee to carry out its investigation “expeditiously” and listed the matter again after 8 weeks. As per the Supreme Court’s website, the petitions are tentatively to be listed on 3 January 2022.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.

The Future of Democracy in the Shadow of Big and Emerging Tech: CCG Essay Series

By Shrutanjaya Bhardwaj and Sangh Rakshita

In the past few years, the interplay between technology and democracy has reached a critical juncture. The untrammelled optimism for technology is now shadowed by rising concerns over the survival of a meaningful democratic society. With the expanding reach of technology platforms, democratic societies around the world have grown increasingly concerned about the impact of such platforms on democracy and human rights. In this context, the focus has increasingly turned to policy issues like the need for an antitrust framework for digital platforms, platform regulation and free speech, the challenges of fake news, the impact of misinformation on elections, the invasion of citizens’ privacy through the deployment of emerging tech, and cybersecurity. This has intensified the quest for optimal policy solutions. We, at the Centre for Communication Governance at National Law University Delhi (CCG), believe that a detailed academic exploration of the relationship between democracy, and big and emerging tech will aid our understanding of the current problems, help contextualise them and highlight potential policy and regulatory responses.

Thus, we bring to you this series of essays—written by experts in the domain—in an attempt to collate contemporary scholarly thought on some of the issues that arise in the context of the interaction of democracy, and big and emerging tech. The essay series is publicly available on the CCG website. We have also announced the release of the essay series on Twitter.

Our first essay addresses the basic but critical question: What is ‘Big Tech’? Urvashi Aneja & Angelina Chamuah present a conceptual understanding of the phrase. While ‘Big Tech’ refers to a set of companies, it is certainly not a fixed set; companies become part of this set by exhibiting four traits or “conceptual markers” and—as a corollary—would stop being identified in this category if they were to lose any of the four markers. The first marker is that the company runs a data-centric model and has massive access to consumer data which can be leveraged or exploited. The second marker is that ‘Big Tech’ companies have a vast user base and are “multi-sided platforms that demonstrate strong network effects”. The third and fourth markers are the infrastructural and civic roles of these companies respectively, i.e., they not only control critical societal infrastructure (which is often acquired through lobbying efforts and strategic mergers and acquisitions) but also operate “consumer-facing platforms” which enable them to generate consumer dependence and gain huge power over the flow of information among citizens. It is these four markers that collectively define ‘Big Tech’. [U. Aneja and A. Chamuah, What is Big Tech? Four Conceptual Markers]

Since the power held by Big Tech is not only immense but also self-reinforcing, it endangers market competition, often by hindering other players from entering the market. Should competition law respond to this threat? If yes, how? Alok P. Kumar & Manjushree R.M. explore the purpose behind competition law and find that competition law is concerned not only with consumer protection but also—as evident from a conjoint reading of Articles 14 & 39 of the Indian Constitution—with preventing the concentration of wealth and material resources in a few hands. Seen in this light, the law must strive to protect “the competitive process”. But the present legal framework is too obsolete to achieve that aim. Current understanding of concepts such as ‘relevant market’, ‘hypothetical monopolist’ and ‘abuse of dominance’ is hard to apply to Big Tech companies which operate more on data than on money. The solution, it is proposed, lies in having ex ante regulation of Big Tech rather than a system of only subsequent sanctions through a possible code of conduct created after extensive stakeholder consultations. [A.P. Kumar and Manjushree R.M., Data, Democracy and Dominance: Exploring a New Antitrust Framework for Digital Platforms]

Market dominance and data control give an even greater power to Big Tech companies, i.e., control over the flow of information among citizens. Given the vital link between democracy and the flow of information, many have called for increased control over social media with a view to checking misinformation. Rahul Narayan explores what these demands might mean for free speech theory. Could it be (as some suggest) that these demands are “a sign that the erstwhile uncritical liberal devotion to free speech was just hypocrisy”? Traditional free speech theory, Narayan argues, is inadequate to deal with the misinformation problem for two reasons. First, it is premised on protecting individual liberty from authoritarian actions by governments, “not to control a situation where baseless gossip and slander impact the very basis of society.” Second, the core assumption behind traditional theory—i.e., the possibility of an organic marketplace of ideas where falsehood can be exposed by true speech—breaks down in the context of modern-era misinformation campaigns. Therefore, some regulation is essential to ensure the prevalence of truth. [R. Narayan, Fake News, Free Speech and Democracy]

Jhalak M. Kakkar and Arpitha Desai examine the context of election misinformation and consider possible misinformation regulatory regimes. Appraising the ideas of self-regulation and state-imposed prohibitions, they suggest that the best way forward for democracy is to strike a balance between the two. This can be achieved if the State focuses on regulating algorithmic transparency rather than the content of the speech—social media companies must be asked to demonstrate that their algorithms do not facilitate amplification of propaganda, to move from behavioural advertising to contextual advertising, and to maintain transparency with respect to funding of political advertising on their platforms. [J.M. Kakkar and A. Desai, Voting out Election Misinformation in India: How should we regulate Big Tech?]

Much like fake news challenges the fundamentals of free speech theory, it also challenges the traditional concepts of international humanitarian law. While disinformation fuels aggression by state and non-state actors in myriad ways, it is often hard to establish liability. Shreya Bose formulates the problem as one of causation: “How could we measure the effect of psychological warfare or disinformation campaigns…?” E.g., the cause-effect relationship is critical in tackling the recruitment of youth by terrorist outfits and the ultimate execution of acts of terror. It is important also in determining liability of state actors that commit acts of aggression against other sovereign states, in exercise of what they perceive—based on received misinformation about an incoming attack—as self-defence. The author helps us make sense of this tricky terrain and argues that Big Tech could play an important role in countering propaganda warfare, just as it does in promoting it. [S. Bose, Disinformation Campaigns in the Age of Hybrid Warfare]

The last two pieces focus attention on real-life, concrete applications of technology by the state. Vrinda Bhandari highlights the use of facial recognition technology (‘FRT’) in law enforcement as another area where the state deploys Big Tech in the name of ‘efficiency’. Current deployment of FRT is constitutionally problematic. There is no legal framework governing the use of FRT in law enforcement. Profiling of citizens as ‘habitual protestors’ has no rational nexus to the aim of crime prevention; rather, it chills the exercise of free speech and assembly rights. Further, FRT deployment is wholly disproportionate, not only because of the well-documented inaccuracy and bias-related problems in the technology, but also because—more fundamentally—“[t]reating all citizens as potential criminals is disproportionate and arbitrary” and “creates a risk of stigmatisation”. The risk of mass real-time surveillance adds to the problem. In light of these concerns, the author suggests a complete moratorium on the use of FRT for the time being. [V. Bhandari, Facial Recognition: Why We Should Worry the Use of Big Tech for Law Enforcement]

In the last essay of the series, Malavika Prasad presents a case study of the Pune Smart Sanitation Project, a first-of-its-kind urban sanitation programme which pursues the Smart City Mission (‘SCM’). According to the author, the structure of city governance (through Municipalities) that existed even prior to the advent of the SCM violated the constitutional principle of self-governance. This flaw was only aggravated by the SCM which effectively handed over key aspects of city governance to state corporations. The Pune Project is but a manifestation of the undemocratic nature of this governance structure—it assumes without any justification that ‘efficiency’ and ‘optimisation’ are neutral objectives that ought to be pursued. Prasad finds that in the hunt for efficiency, the design of the Pune Project provides only for collection of data pertaining to users/consumers, hence excluding the marginalised who may not get access to the system in the first place owing to existing barriers. “Efficiency is hardly a neutral objective,” says Prasad, and the state’s emphasis on efficiency over inclusion and participation reflects a problematic political choice. [M. Prasad, The IoT-loaded Smart City and its Democratic Discontents]

We hope that readers will find the essays insightful. As ever, we welcome feedback.

This series is supported by the Friedrich Naumann Foundation for Freedom (FNF) and has been published by the National Law University Delhi Press. We are thankful for their support. 

CJEU sets limits on Mass Communications Surveillance – A Win for Privacy in the EU and Possibly Across the World

This post has been authored by Swati Punia

On 6th October 2020, the European Court of Justice (ECJ/Court) delivered its much anticipated judgments in the consolidated matter of C-623/17, Privacy International from the UK and the joined cases from France, C-511/18, La Quadrature du Net and others, C-512/18, French Data Network and others, and Belgium, C-520/18, Ordre des barreaux francophones et germanophone and others (collectively, “Bulk Communications Surveillance Judgments”).

In this post, I briefly discuss the Bulk Communication Surveillance Judgments, their significance for other countries and for India. 

Through these cases, the Court invalidated disproportionate interference by Member States with the rights of their citizens under EU law, in particular the Directive on privacy and electronic communications (e-Privacy Directive) and the European Union’s Charter of Fundamental Rights (EU Charter). The Court assessed the Member States’ bulk communications surveillance laws and practices relating to their access and use of telecommunications data.

The Court recognised the importance of the State’s positive obligations towards conducting surveillance, although it noted that it was essential for surveillance systems to conform with the general principles of EU law and the rights guaranteed under the EU Charter. It laid down clear principles and measures as to when and how the national authorities could access and use telecommunications data (further discussed in the sections ‘The UK Judgment’ and ‘The French and Belgian Judgment’). It carved a few exceptions as well (in the joined cases of France and Belgium) for emergency situations, but held that such measures would have to pass the threshold of being serious and genuine (further discussed in the section ‘The French and Belgian Judgment’). 

The Cases in Brief 

The Court delivered two separate judgments, one in the UK case and one in the joined cases of France and Belgium. Since these cases raised similar issues, the proceedings were adjoined. The UK application challenged the bulk acquisition and use of telecommunications data by its Security and Intelligence Agencies (SIAs) in the interest of national security (under the UK’s Telecommunications Act 1984). The French and Belgian applications challenged indiscriminate data retention and access by SIAs for combating crime.

The French and Belgian applications questioned the legality of their respective data retention laws (numerous domestic surveillance laws which permitted bulk collection of telecommunications data) that imposed blanket obligations on Electronic Communications Service Providers (ECSPs) to provide relevant data. The Belgian law required ECSPs to retain various kinds of traffic and location data for a period of 12 months. The French law, by contrast, provided for automated analysis and real-time data collection measures for preventing terrorism. The French application also raised the issue of notifying the person under surveillance.

The Member States contended that such surveillance measures enabled them to, inter alia, safeguard national security, prevent terrorism, and combat serious crimes. Hence, they claimed that the e-Privacy Directive did not apply to their surveillance laws and activities.

The UK Judgment

The ECJ found the UK surveillance regime unlawful and inconsistent with EU law, specifically the e-Privacy Directive. The Court analysed the scope and scheme of the e-Privacy Directive with regard to the exclusion of certain State purposes such as national and public security, defence, and criminal investigation. Noting the importance of such State purposes, it held that EU Member States could adopt legislative measures that restricted the scope of the rights and obligations (Articles 5, 6 and 9) provided in the e-Privacy Directive. However, this was allowed only if the Member States complied with the requirements laid down by the Court in Tele2 Sverige and Watson and Others (C-203/15 and C-698/15) (Tele2) and the e-Privacy Directive. In addition, the Court held that the EU Charter must be respected. In Tele2, the ECJ held that legislative measures obligating ECSPs to retain data must be targeted and limited to what was strictly necessary. Such targeted retention had to be with regard to specific categories of persons and data, for a limited time period. Also, access to the data must be subject to prior review by an independent body.

The e-Privacy Directive ensures the confidentiality of electronic communications and the data relating to it (Article 5(1)). It allows ECSPs to retain metadata (context-specific data relating to users and subscribers, location and traffic) for various purposes such as billing, value-added services and security. However, this data must be deleted or made anonymous once the purpose is fulfilled, unless a law allows a derogation for State purposes. The e-Privacy Directive allows the Member States to derogate (Article 15(1)) from the principle of confidentiality and the corresponding obligations (contained in Article 6 (traffic data) and Article 9 (location data other than traffic data)) for certain State purposes when it is appropriate, necessary and proportionate.

The Court clarified that measures undertaken for the purpose of national security would not make EU law inapplicable and exempt the Member States from their obligation to ensure confidentiality of communications under the e-Privacy Directive. Hence, an independent review of surveillance activities such as data retention for indefinite time periods, or further processing or sharing, must be conducted for authorising such activities. It was noted that the domestic law at present did not provide for prior review, as a limit on the above mentioned surveillance activities. 

The French and Belgian Judgment

While assessing the joined cases, the Court arrived at a determination in similar terms as the UK case. It reiterated that the exception (Article 15(1) of the e-Privacy Directive) to the principle of confidentiality of communications (Article 5(1) of the e-Privacy Directive) should not become the norm. Hence, national measures that provided for general and indiscriminate data retention and access for State purposes were held to be incompatible with EU law, specifically the e-Privacy Directive.

The Court in the joined cases, unlike the UK case, allowed for specific derogations for State purposes such as safeguarding national security, combating serious crimes and preventing serious threats. It laid down certain requirements that the Member States had to comply with in case of derogations. The derogations should (1) be clear and precise to the stated objective (2) be limited to what is strictly necessary and for a limited time period (3) have a safeguards framework including substantive and procedural conditions to regulate such instances (4) include guarantees to protect the concerned individuals against abuse. They should also be subjected to an ‘effective review’ by a court or an independent body and must be in compliance of general rules and proportionality principles of EU law and the rights provided in the EU Charter. 

The Court held that in establishing a minimum threshold for a safeguards framework, the EU Charter must be interpreted along with the European Convention on Human Rights (ECHR). This would ensure consistency between the rights guaranteed under the EU Charter and the corresponding rights guaranteed in the ECHR (as per Article 52(3) of the EU Charter).

The Court, in particular, allowed general and indiscriminate data retention in cases of serious threat to national security. Such a threat should be genuine, and present or foreseeable. Real-time data collection and automated analysis were allowed in such circumstances, but real-time data collection should be limited to persons suspected of terrorist activities. Moreover, it should be limited to what was strictly necessary and subject to prior review. The Court even allowed general and indiscriminate retention of IP addresses for the purposes of national security, combating serious crimes and preventing serious threats to public security, provided such retention was limited in time to what was strictly necessary. For such purposes, the Court also permitted ECSPs to retain data relating to the identity particulars of their customers (such as name, postal and email/account addresses and payment details) in a general and indiscriminate manner, without specifying any time limitations.

The Court allowed targeted data retention for the purpose of safeguarding national security and preventing crime, provided that it was for a limited time period and strictly necessary and was done on the basis of objective and non-discriminatory factors. It was held that such retention should be specific to certain categories of persons or geographical areas. The Court also allowed, subject to effective judicial review, expedited data retention after the initial retention period ended, to shed light on serious criminal offences or acts affecting national security. Lastly, in the context of criminal proceedings, the Court held that it was for the Member States to assess the admissibility of evidence resulting from general and indiscriminate data retention. However, the information and evidence must be excluded where it infringes on the right to a fair trial. 

Significance of the Bulk Communication Surveillance Judgments

With these cases, the ECJ decisively resolved a long-standing discord between the Member States and privacy activists in the EU. For a while now, the Court has been dealing with questions relating to surveillance programs for national security and law enforcement purposes. Though the Member States have largely considered these programs outside the ambit of EU privacy law, the Court has been expanding the scope of privacy rights. 

Placing limitations and controls on State powers in democratic societies was considered necessary by the Court in its ruling in Privacy International. This decision may act as a trigger for considering surveillance reforms in many parts of the world, and more specifically for those aspiring to attain an EU adequacy status. India could benefit immensely should it choose to pay heed. 

As of today, India does not have a comprehensive surveillance framework. Various provisions of the Personal Data Protection Bill, 2019 (Bill), the Information Technology Act, 2000, the Telegraph Act, 1885, and the Code of Criminal Procedure, 1973 provide for targeted surveillance measures. The Bill gives wide powers to the executive (under Clauses 35, 36 and 91 of the Bill) to access personal and non-personal data in the absence of proper and necessary safeguards. This may cause problems for achieving EU adequacy status under Article 45 of the EU General Data Protection Regulation (GDPR), which assesses the personal data management rules of third countries.

Recent news reports suggest that the Bill, which is under legislative consideration, is likely to undergo a significant overhaul. India could use this as an opportunity to introduce meaningful changes in the Bill as well as its surveillance regime. India’s privacy framework could be strengthened by adhering to the principles outlined in the Justice K.S. Puttaswamy v. Union of India judgment and the Bulk Communications Surveillance Judgments.

The Proliferating Eyes of Argus: State Use of Facial Recognition Technology

This post has been authored by Sangh Rakshita

In Greek mythology, Argus Panoptes was a many-eyed, all-seeing giant who never slept; his name has come to evoke excessive scrutiny and surveillance. Jeremy Bentham drew on this image when he designed the panopticon, a prison in which inmates could be monitored without their knowledge. Later, Michel Foucault used the panopticon to elaborate the social theory of panopticism, in which the watcher ceases to be external to the watched, producing internal surveillance or a ‘chilling’ effect. This idea of “panopticism” has gained renewed relevance in the age of digital surveillance.

Amongst the many cutting-edge surveillance technologies being adopted globally, ‘Facial Recognition Technology’ (FRT) is one of the most rapidly deployed. Its real-time augmentation, ‘Live Facial Recognition Technology’ (LFRT), has become increasingly effective in the past few years. Improvements in computational power and algorithms have enabled cameras placed at odd angles to detect faces even in motion. This post explores the issues raised by the increasing State use of FRT around the world and the legal frameworks surrounding it.

What do FRT and LFRT mean?

FRT refers to the use of algorithms to uniquely detect, recognise or verify a person from recorded images, sketches or videos that contain their face. The data about a particular face is generally known as the face template. This template is a mathematical representation of a person’s face, created by algorithms that mark and map distinct features on the captured image, like eye locations or the length of a nose. These face templates create the biometric database against which new images, sketches, videos, etc. are compared to verify or recognise the identity of a person. As opposed to FRT, which is applied to pre-recorded images and videos, LFRT involves real-time automated facial recognition of all individuals in the camera’s field of vision. It involves biometric processing of the images of all passers-by, using an existing database of images as a reference.
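The template-matching step described above can be illustrated with a minimal, hypothetical sketch. Every name, feature value and threshold below is invented for illustration; real FRT systems use learned embeddings with hundreds of dimensions and empirically tuned thresholds, not three hand-picked measurements.

```python
import math

def face_template(features):
    """Normalise a feature vector (e.g. measured distances between
    facial landmarks) into a unit-length face template."""
    norm = math.sqrt(sum(x * x for x in features))
    return [x / norm for x in features]

def cosine_similarity(a, b):
    """Similarity between two unit templates: 1.0 means identical."""
    return sum(x * y for x, y in zip(a, b))

def verify(probe, enrolled, threshold=0.999):
    """1:1 verification -- does the probe match one enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, database, threshold=0.999):
    """1:N identification -- search the probe against a biometric database,
    returning the best match above the threshold (or None)."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical landmark measurements (eye distance, nose length, ...)
db = {
    "alice": face_template([62.0, 48.0, 33.0]),
    "bob":   face_template([55.0, 51.0, 40.0]),
}
probe = face_template([61.5, 48.5, 33.2])  # a slightly noisy new capture
print(identify(probe, db))  # best match is "alice"
```

The distinction between `verify` (a 1:1 comparison) and `identify` (a 1:N search) mirrors the verification/recognition distinction in the definition above; LFRT effectively runs the 1:N search continuously on every face in view.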

The accuracy of FRT algorithms is significantly impacted by factors like distance and angle from which the image was captured or poor lighting conditions. These problems are worsened in LFRT as the images are not captured in a controlled setting, with the subjects in motion, rarely looking at the camera, and often positioned at odd angles from it. 

Despite claims of its effectiveness, there has been growing scepticism about the use of FRT. Its use has been linked with the misidentification of people of colour, ethnic minorities, women and trans people. The prevalent use of FRT may affect not only the privacy rights of such communities, but those of all who are surveilled at large.

The Prevalence of FRT 

While FRT has become ubiquitous, LFRT is still in the process of being adopted in countries like the UK, USA, India, and Singapore. The COVID-19 pandemic has further accelerated the adoption of FRT as a way to track the virus’ spread and to build on contactless biometric-based identification systems. For example, in Moscow, city officials were using a system of tens of thousands of cameras equipped with FRT, to check for social distancing measures, usage of face masks, and adherence to quarantine rules to contain the spread of COVID-19. 

FRT is also being steadily deployed for mass surveillance activities, which is often in violation of universally accepted principles of human rights such as necessity and proportionality. These worries have come to the forefront recently with the State use of FRT to identify people participating in protests. For example, FRT was used by law enforcement agencies to identify prospective law breakers during protests in Hong Kong, protests concerning the Citizenship Amendment Act, 2019 in New Delhi and the Black Lives Matter protests across the USA.

Civil society and digital rights groups have made vociferous demands for a global moratorium on the pervasive use of FRT that enables mass surveillance, and cities such as Boston and Portland have banned its deployment. However, it remains to be seen how effective these measures are in halting the use of FRT. Even the temporary refusal by Big Tech companies to sell FRT to police forces in the US does not seem to have much instrumental value, as other private companies continue to supply it unhindered.

Regulation of FRT

The approach to the regulation of FRT differs vastly across the globe. The regulation spectrum on FRT ranges from permissive use of mass surveillance on citizens in countries like China and Russia to a ban on the use of FRT for example in Belgium and Boston (in USA). However, in many countries around the world, including India, the use of FRT continues unabated, worryingly in a regulatory vacuum.

Recently, an appellate court in the UK declared the use of LFRT for law enforcement purposes unlawful, on the grounds that it violated the rights to data privacy and equality. Despite the presence of a legal framework in the UK for data protection and the use of surveillance cameras, the Court of Appeal held that there was no clear guidance on the use of the technology, and that it gave excessive discretion to police officers. 

The EU has been contemplating a moratorium on the use of FRT in public places. Civil society in the EU is demanding a comprehensive and indefinite ban on the use of FRT and related technology for mass surveillance activities.

In the USA, several orders banning or heavily regulating the use of FRT have been passed. A federal law banning the use of facial recognition and biometric technology by law enforcement has been proposed. The bill seeks to place a moratorium on the use of facial recognition until Congress passes a law to lift the temporary ban. It would apply to federal agencies such as the FBI, as well as local and State police departments.

The Indian Scenario

In July 2019, the Government of India announced its intention of setting up a nationwide facial recognition system. The National Crime Records Bureau (NCRB) – a government agency operating under the Ministry of Home Affairs – released a request for proposal (RFP) on July 4, 2019 to procure a National Automated Facial Recognition System (AFRS). The deadline for submission of tenders to the RFP has been extended 11 times since July 2019. The stated aim of the AFRS is to help modernise the police force by improving information gathering, criminal identification and verification, and the dissemination of such information among various police organisations and units across the country. 

Security forces across the states and union territories will have access to the centralised AFRS database, which will assist in the investigation of crimes. However, civil society organisations have raised concerns regarding privacy and increased State surveillance, as the AFRS has no legal basis (statutory or executive) and lacks procedural safeguards and accountability measures such as an oversight regulatory authority. They have also questioned the accuracy of FRT in identifying darker-skinned women and ethnic minorities, and expressed fears of discrimination. 

This is in addition to the FRT already in use by law enforcement agencies in Chennai, Hyderabad, Delhi, and Punjab. There are several instances of deployment of FRT in India by the government in the absence of a specific law regulating FRT or a general data protection law.

Even the proposed Personal Data Protection Bill, 2019 is unlikely to assuage privacy challenges arising from the use of FRT by the Indian State. The primary reason for this is the broad exemptions provided to intelligence and law enforcement agencies under Clause 35 of the Bill on grounds of sovereignty and integrity, security of the State, public order, etc.

After the judgment in K.S. Puttaswamy vs. Union of India (Puttaswamy I), which reaffirmed the fundamental right to privacy in India, any act of State surveillance that breaches the right to privacy will need to adhere to the three-part test laid down in Puttaswamy I.

The three prongs of the test are – legality, which postulates the existence of law along with procedural safeguards; necessity, defined in terms of a legitimate State aim; and proportionality which ensures a rational nexus between the objects and the means adopted to achieve them. This test was also applied in the Aadhaar case (Puttaswamy II) to the use of biometrics technology. 

It may be argued that State use of FRT is for the legitimate aim of ensuring national security, but currently its use is neither sanctioned by law, nor does it pass the test of proportionality. For proportionate use of FRT, the State will need to establish that there is a rational nexus between its use and the purpose sought to be achieved and that the use of such technology is the least privacy restrictive measure to achieve the intended goals. As the law stands today in India after Puttaswamy I and II, any use of FRT or LFRT currently is prima facie unconstitutional. 

While mass surveillance is legally impermissible in India, targeted surveillance is allowed under Section 5 of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph Rules, 1951, and Section 69 of the Information Technology Act, 2000 (IT Act). The constitutionality of Section 69 of the IT Act has itself been challenged and is currently pending before the Supreme Court.

Puttaswamy I has clarified that the protection of privacy is not completely lost or surrendered in a public place, as it attaches to the person. Hence, the constitutionality of India’s surveillance apparatus needs to be assessed against the standards laid down in Puttaswamy I. To check unregulated mass surveillance through the deployment of FRT by the State, there is a need to restructure the overall surveillance regime in the country. The Justice Srikrishna Committee report of 2018 also highlighted that several executive-sanctioned intelligence-gathering activities of law enforcement agencies would be illegal after Puttaswamy I, as they do not operate under any law. 

The need for reform of surveillance laws, in addition to a data protection law, to safeguard fundamental rights and civil liberties in India cannot be stressed enough. Surveillance law reform will have to focus on new technologies like FRT and regulate their deployment with substantive and procedural safeguards to prevent abuse of human rights and civil liberties and to provide for relief. 

Well documented limitations of FRT and LFRT in terms of low accuracy rates, along with concerns of profiling and discrimination, make it essential for the surveillance law reform to have additional safeguards such as mandatory accuracy and non-discrimination audits. For example, the National Institute of Standards and Technology (NIST), US Department of Commerce, 2019 Face Recognition Vendor Test (part three) evaluates whether an algorithm performs differently across different demographics in a dataset. The need of the hour is to cease the use of FRT and put a temporary moratorium on any future deployments till surveillance law reforms with adequate proportionality safeguards have been implemented. 
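The kind of demographic-differential audit the NIST test performs can be sketched, in a simplified and entirely hypothetical form, as a comparison of false match rates across groups. The audit log, group labels and rates below are invented for illustration; NIST’s actual methodology measures false match and false non-match rates over large controlled datasets.

```python
from collections import defaultdict

def false_match_rate(trials):
    """trials: list of (same_person, matched) booleans.
    FMR = fraction of impostor (different-person) trials
    that the algorithm wrongly declared a match."""
    impostors = [matched for same, matched in trials if not same]
    return sum(impostors) / len(impostors)

def audit_by_group(results):
    """results: list of (group, same_person, matched) tuples.
    Returns the false match rate per demographic group, making
    performance differentials between groups visible."""
    by_group = defaultdict(list)
    for group, same, matched in results:
        by_group[group].append((same, matched))
    return {g: false_match_rate(t) for g, t in by_group.items()}

# Invented audit log: (demographic group, ground truth, algorithm verdict)
log = [
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", False, True),  ("group_a", True,  True),
    ("group_b", False, True),  ("group_b", False, True),
    ("group_b", False, False), ("group_b", True,  True),
]
rates = audit_by_group(log)
print(rates)  # group_b's FMR (2/3) is double group_a's (1/3)
```

A mandatory non-discrimination audit of the kind suggested above would flag such a disparity between groups and could require it to be remedied, or deployment halted, before the system is used.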

Reflections on Personal Data Protection Bill, 2019

By Sangh Rakshita and Nidhi Singh


The Personal Data Protection Bill, 2019 (PDP Bill/ Bill) was introduced in the Lok Sabha on December 11, 2019, and was immediately referred to a joint committee of the Parliament. The joint committee published a press communique on February 4, 2020, inviting comments on the Bill from the public.

The Bill is the successor to the Draft Personal Data Protection Bill 2018 (Draft Bill 2018), recommended by a government appointed expert committee chaired by Justice B.N. Srikrishna. In August 2018, shortly after the recommendations and publication of the draft Bill, the Ministry of Electronics and Information Technology (MeitY) invited comments on the Draft Bill 2018 from the public. (Our comments are available here.)[1]

In this post we undertake a preliminary examination of:

  • The scope and applicability of the PDP Bill
  • The application of general data protection principles
  • The rights afforded to data subjects
  • The exemptions provided to the application of the law

In future posts in the series, we will examine:

  • The restrictions on cross border transfer of personal data
  • The structure and functions of the regulatory authority
  • The enforcement mechanism and the penalties under the PDP Bill

Scope and Applicability

The Bill identifies four different categories of data. These are personal data, sensitive personal data, critical personal data and non-personal data.

Personal data is defined as “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, whether online or offline, or any combination of such features with any other information, and shall include any inference drawn from such data for the purpose of profiling.” (emphasis added)

The addition of inferred data to the definition of personal data is an interesting reflection of the way the conversation around data protection has evolved in the past few months, and requires further analysis.

Sensitive personal data is defined as data that may reveal, be related to or constitute a number of different categories of personal data, including financial data, health data, official identifiers, sex life, sexual orientation, genetic data, transgender status, intersex status, caste or tribe, and religious and political affiliations / beliefs. In addition, under clause 15 of the Bill the Central Government can notify other categories of personal data as sensitive personal data in consultation with the Data Protection Authority and the relevant sectoral regulator.

Similar to the 2018 Bill, the current bill does not define critical personal data and clause 33 provides the Central Government the power to notify what is included under critical personal data. However, in its report accompanying the 2018 Bill, the Srikrishna committee had referred to some examples of critical personal data that relate to critical state interest like Aadhaar number, genetic data, biometric data, health data, etc.

The Bill retains the terminology introduced in the 2018 Draft Bill, referring to data controllers as ‘data fiduciaries’ and data subjects as ‘data principals’. The new terminology was introduced to reflect the fiduciary nature of the relationship between data controllers and subjects. However, whether this specific terminology has any greater impact on the protection and enforcement of the rights of data subjects remains to be seen.

 Application of PDP Bill 2019

The Bill is applicable to (i) the processing of any personal data, which has been collected, disclosed, shared or otherwise processed in India; (ii) the processing of personal data by the Indian government, any Indian company, citizen, or person/ body of persons incorporated or created under Indian law; and (iii) the processing of personal data in relation to any individuals in India, by any persons outside of India.

The scope of the 2019 Bill is largely similar in this context to that of the 2018 Draft Bill. However, one key difference is seen in relation to anonymised data. While the 2018 Draft Bill completely exempted anonymised data from its scope, the 2019 Bill does not apply to anonymised data except under clause 91, which gives the government powers to mandate the use and processing of non-personal data or anonymised personal data under policies to promote the digital economy. A few concerns arise from this change in the treatment of anonymised personal data. First, there are concerns about the concept of anonymisation of personal data itself. While the Bill provides that the Data Protection Authority (DPA) will specify appropriate standards of irreversibility for the process of anonymisation, it is not clear that a truly irreversible form of anonymisation is possible at all. In this case, we need more clarity on what safeguards will apply to the use of anonymised personal data.
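The concern about irreversibility is well documented: records stripped of names can often be re-identified by joining them with publicly available datasets on ‘quasi-identifiers’ such as pincode, birth year and gender. The following sketch uses entirely invented records to illustrate such a linkage attack; the names, values and field choices are hypothetical.

```python
# "Anonymised" dataset: direct identifiers removed, quasi-identifiers kept.
anonymised = [
    {"pincode": "110001", "birth_year": 1985, "gender": "F", "diagnosis": "diabetes"},
    {"pincode": "110001", "birth_year": 1992, "gender": "M", "diagnosis": "asthma"},
]

# Public auxiliary data (e.g. an electoral roll) linking names
# to the same quasi-identifiers.
auxiliary = [
    {"name": "A. Sharma", "pincode": "110001", "birth_year": 1985, "gender": "F"},
    {"name": "R. Verma",  "pincode": "110001", "birth_year": 1992, "gender": "M"},
]

QUASI_IDENTIFIERS = ("pincode", "birth_year", "gender")

def reidentify(anon_rows, aux_rows):
    """Join the two datasets on quasi-identifiers; a unique match
    re-attaches a name to a supposedly anonymous record."""
    linked = []
    for anon in anon_rows:
        key = tuple(anon[q] for q in QUASI_IDENTIFIERS)
        matches = [a for a in aux_rows
                   if tuple(a[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # unique combination -> re-identified
            linked.append((matches[0]["name"], anon["diagnosis"]))
    return linked

print(reidentify(anonymised, auxiliary))
# Both records are re-identified despite the removal of names.
```

Because each combination of quasi-identifiers is unique in the auxiliary dataset, both ‘anonymised’ records are re-identified. This is why standards of irreversibility, and safeguards beyond the mere removal of identifiers, matter.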

The second concern is the Bill’s focus on the promotion of the digital economy. We have previously discussed some of the concerns regarding such a focus in a rights-based legislation in our comments on the Draft Bill 2018.

These issues continue to be of concern, and are perhaps heightened by the introduction of a specific provision on the subject in the 2019 Bill (especially without adequate clarity on what services or policy-making efforts in this direction are to be informed by the use of anonymised personal data). Many of these issues are also still under discussion by the committee of experts set up to deliberate on a governance framework for non-personal data. The mandate of this committee includes the study of various issues relating to non-personal data, and making specific suggestions to the central government on the regulation of non-personal data.

The non-personal data committee was formed in pursuance of a recommendation by the Justice Srikrishna Committee to create a legal framework for the protection of community data, where the community is identifiable. The mandate of the expert committee will overlap with the application of clause 91(2) of the Bill.

Data Fiduciaries, Social Media Intermediaries and Consent Managers

Data Fiduciaries

As discussed above, the Bill categorises data controllers as data fiduciaries and significant data fiduciaries. Any person that determines the purpose and means of processing of personal data (including the State, companies, juristic entities or individuals) is considered a data fiduciary. Some data fiduciaries may be notified as ‘significant data fiduciaries’, on the basis of factors such as the volume and sensitivity of personal data processed, the risk of harm, etc. Significant data fiduciaries are held to higher standards of data protection. Under clauses 27-30, significant data fiduciaries are required to carry out data protection impact assessments, maintain accurate records, have their policies and processing of personal data audited, and appoint a data protection officer. 

Social Media Intermediaries

The Bill introduces a distinct category of intermediaries called social media intermediaries. Under clause 26(4) a social media intermediary is ‘an intermediary who primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services’. Intermediaries that primarily enable commercial or business-oriented transactions, provide access to the Internet, or provide storage services are not to be considered social media intermediaries.

Social media intermediaries may be notified to be significant data fiduciaries, if they have a minimum number of users, and their actions have or are likely to have a significant impact on electoral democracy, security of the State, public order or the sovereignty and integrity of India.

Under clause 28, social media intermediaries that have been notified as significant data fiduciaries will be required to provide for voluntary verification of users, accompanied by a demonstrable and visible mark of verification.

Consent Managers

The Bill also introduces the idea of a ‘consent manager’ i.e. a (third party) data fiduciary which provides for management of consent through an ‘accessible, transparent and interoperable platform’. The Bill does not contain any details on how consent management will be operationalised, and only states that these details will be specified by regulations under the Bill. 

Data Protection Principles and Obligations of Data Fiduciaries

Consent and grounds for processing

The Bill recognises consent as well as a number of other grounds for the processing of personal data.

Clause 11 provides that personal data shall only be processed if consent is provided by the data principal at the commencement of processing. This provision, similar to the consent provision in the 2018 Draft Bill, draws from various principles including those under the Indian Contract Act, 1872 to inform the concept of valid consent under the PDP Bill. The clause requires that the consent should be free, informed, specific, clear and capable of being withdrawn.

Moreover, explicit consent is required for the processing of sensitive personal data. The current Bill appears to be silent on issues such as incremental consent which were highlighted in our comments in the context of the Draft Bill 2018.

The Bill provides for additional grounds for processing of personal data, consisting of very broad (and much criticised) provisions for the State to collect personal data without obtaining consent. In addition, personal data may be processed without consent if required in the context of employment of an individual, as well as a number of other ‘reasonable purposes’. Some of the reasonable purposes, which were listed in the Draft Bill 2018 as well, have also been a cause for concern given that they appear to serve mostly commercial purposes, without regard for the potential impact on the privacy of the data principal.

In a notable change from the Draft Bill 2018, the PDP Bill, appears to be silent on whether these other grounds for processing will be applicable in relation to sensitive personal data (with the exception of processing in the context of employment which is explicitly barred).

Other principles

The Bill also incorporates a number of traditional data protection principles in the chapter outlining the obligations of data fiduciaries. Personal data can only be processed for a specific, clear and lawful purpose. Processing must be undertaken in a fair and reasonable manner and must ensure the privacy of the data principal – a clear mandatory requirement, as opposed to a ‘duty’ owed by the data fiduciary to the data principal in the Draft Bill 2018 (this change appears to be in line with recommendations made in multiple comments to the Draft Bill 2018 by various academics, including our own).

Purpose and collection limitation principles are mandated, along with a detailed description of the kind of notice to be provided to the data principal, either at the time of collection, or as soon as possible if the data is obtained from a third party. The data fiduciary is also required to ensure that data quality is maintained.

A few changes in the application of data protection principles, as compared to the Draft Bill 2018, can be seen in the data retention and accountability provisions.

On data retention, clause 9 of the Bill provides that personal data shall not be retained beyond the period ‘necessary’ for the purpose of data processing, and must be deleted after such processing, ostensibly a higher standard as compared to ‘reasonably necessary’ in the Draft Bill 2018. Personal data may only be retained for a longer period if explicit consent of the data principal is obtained, or if retention is required to comply with law. In the face of the many difficulties in ensuring meaningful consent in today’s digital world, this may not be a win for the data principal.

Clause 10 on accountability continues to provide that the data fiduciary will be responsible for compliance in relation to any processing undertaken by the data fiduciary or on its behalf. However, the data fiduciary is no longer required to demonstrate such compliance.

Rights of Data Principals

Chapter V of the PDP Bill 2019 outlines the Rights of Data Principals, including the rights to access, confirmation, correction, erasure, data portability and the right to be forgotten. 

Right to Access and Confirmation

The PDP Bill 2019 makes some amendments to the right to confirmation and access included in clause 17 of the Bill. The right has been expanded in scope by the inclusion of sub-clause (3). Clause 17(3) requires data fiduciaries to provide data principals with information about the identities of any other data fiduciaries with whom their personal data has been shared, along with details about the kind of data that has been shared.

This allows the data principal to exert greater control over their personal data and its use.  The rights to confirmation and access are important rights that inform and enable a data principal to exercise other rights under the data protection law. As recognized in the Srikrishna Committee Report, these are ‘gateway rights’, which must be given a broad scope.

Right to Erasure

The right to correction (Clause 18) has been expanded to include the right to erasure. This allows data principals to request erasure of personal data which is not necessary for processing. While data fiduciaries may be allowed to refuse correction or erasure, they would be required to produce a justification in writing for doing so, and if there is a continued dispute, indicate alongside the personal data that such data is disputed.

The addition of a right to erasure is an expansion of rights from the 2018 Bill. While the right to be forgotten only restricts or discontinues the disclosure of personal data, the right to erasure goes a step further and empowers the data principal to demand the complete removal of their data from the systems of the data fiduciary.

Many of the concerns expressed in the context of the Draft Bill 2018, in terms of the procedural conditions for the exercise of the rights of data principals, as well as the right to data portability specifically, continue to persist in the PDP Bill 2019.

Exceptions and Exemptions

While the PDP Bill ostensibly enables individuals to exercise their right to privacy against the State and the private sector, there are several exemptions available, which raise serious concerns.

The Bill grants broad exceptions to the State. In some cases, it is in the context of specific obligations such as the requirement for individuals’ consent. In other cases, State action is almost entirely exempted from obligations under the law. Some of these exemptions from data protection obligations are available to the private sector as well, on grounds like journalistic purposes, research purposes and in the interests of innovation.

The most concerning of these provisions are the exemptions granted to intelligence and law enforcement agencies under the Bill. The Draft Bill 2018 also provided exemptions to these agencies, but only so far as their privacy-invasive actions were permitted under law and met procedural standards, as well as the legal standards of necessity and proportionality. We have previously discussed some of the concerns with this approach here.

The exemptions provided to these agencies under the PDP Bill seem to exacerbate these issues.

Under the Bill, the Central Government can exempt an agency of the government from the application of the Act by an order with reasons recorded in writing, if it is of the opinion that the exemption is necessary or expedient in the interest of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States or public order, or for preventing incitement to the commission of any cognizable offence relating to the aforementioned grounds. Not only have the grounds on which government agencies can be exempted been worded in an expansive manner, but the procedure for granting these exemptions is also bereft of any safeguards.

The executive functioning in India at times suffers from problems of opacity and unfettered discretion, which calls for a robust system of checks and balances to avoid abuse. The Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act) enable government surveillance of communications made over telephones and the internet. For comparison, we primarily refer to the Telegraph Act, as it allows the government to intercept phone calls, by an order in writing, on grounds similar to those mentioned in clause 35 of the Bill. However, the Telegraph Act limits the use of this power to two scenarios – the occurrence of a public emergency or the interest of public safety. The government cannot intercept communications made over telephones in the absence of these two preconditions.

The Supreme Court in People’s Union for Civil Liberties v. Union of India (1997) introduced guidelines to check the abuse of surveillance powers under the Telegraph Act, which were later incorporated in Rule 419A of the Indian Telegraph Rules, 1951. A prominent safeguard in Rule 419A requires that surveillance and monitoring orders be issued only after considering ‘other reasonable means’ for acquiring the required information. The court further limited the scope of interpretation of ‘public emergency’ and ‘public safety’ to mean “the prevalence of a sudden condition or state of affairs affecting the people at large and calling for immediate action” and “the state or condition of freedom from danger or risk at large” respectively. In spite of the introduction of these safeguards, the procedure for intercepting telephone communications under the Telegraph Act has been criticised for lack of transparency and improper implementation. For instance, a 2014 report revealed that around 7500 – 9000 phone interception orders were issued by the Central Government every month. The application of procedural safeguards in each case would have been physically impossible given the sheer numbers. Thus, legislative and judicial oversight becomes a necessity in such cases.

The constitutionality of India’s surveillance apparatus, including Section 69 of the IT Act, which allows for surveillance on the broader grounds of necessity and expediency rather than ‘public emergency’ and ‘public safety’, has been challenged before the Supreme Court and is currently pending. Clause 35 of the Bill also mentions necessity and expediency as prerequisites for the government to exercise its power to grant exemptions; these terms appear vague and open-ended, as they are not defined. The test of necessity implies resorting to the least intrusive method of encroachment upon privacy to achieve the legitimate State aim. This test is typically one among several factors applied under human rights law in deciding whether a particular intrusion on a right is tenable. In his concurring opinion in Puttaswamy (I), Kaul J. included ‘necessity’ in the proportionality test (however, this test is not otherwise well developed in Indian jurisprudence). Expediency, on the other hand, is not a specific legal basis for determining the validity of an intrusion on human rights, and was not referred to in Puttaswamy (I) as a basis for assessing a privacy violation. The use of the term ‘expediency’ in the Bill is deeply worrying, as it seems to lower the threshold for allowing surveillance – a regressive step in the context of cases like PUCL and Puttaswamy (I). A valid law, along with the principles of proportionality and necessity, is essential to put in place an effective system of checks and balances on the executive’s power to provide exemptions. It seems unlikely that the clause will pass the test of proportionality (sanction of law, legitimate aim, proportionality to the need for interference, and procedural guarantees against abuse) laid down by the Supreme Court in Puttaswamy (I).

The Srikrishna Committee report had recommended that surveillance should not only be conducted under law (and not executive order), but also be subject to oversight and transparency requirements. The Committee had argued that the tests of lawfulness, necessity and proportionality provided under clauses 42 and 43 of the Draft Bill 2018 were sufficient to meet the standards set out in the Puttaswamy judgment. Since the PDP Bill completely does away with these safeguards and leaves the decision to executive discretion, the law is unconstitutional. After the Bill was introduced in the Lok Sabha, Justice Srikrishna criticised it for granting expansive exemptions in the absence of judicial oversight. He warned that the consequences could be disastrous from the point of view of safeguarding the right to privacy and could turn the country into an “Orwellian State”. He has also opined on the need for a separate legislation to govern the terms on which the government can resort to surveillance.

Clause 36 of the Bill deals with exemptions from some provisions for certain kinds of processing of personal data. It combines four different clauses on exemption which were listed in the Draft Bill 2018 (clauses 43, 44, 46 and 47). These cover the processing of personal data in the interests of prevention, detection, investigation and prosecution of contraventions of law; for the purpose of legal proceedings; for personal or domestic purposes; and for journalistic purposes. The Draft Bill 2018 had detailed provisions requiring a law passed by Parliament or a State Legislature, which is necessary and proportionate, for processing personal data in the interests of prevention, detection, investigation and prosecution of contraventions of law. Clause 36 of the Bill does not require a law to process personal data under these exemptions. We had argued that the exemptions granted by the Draft Bill 2018 (clauses 43, 44, 46 and 47) were wide, vague and in need of clarification, but the exemptions under clause 36 of the Bill are even more ambiguous, as they merely enlist the exemptions without any specifics or procedural safeguards in place.

Under the Draft Bill 2018, the Authority could not grant exemption from the obligations of fair and reasonable processing, security safeguards and data protection impact assessment for research, archiving or statistical purposes. As per the current Bill, the Authority can provide exemption from any of the provisions of the Act for research, archiving or statistical purposes.

The last addition to this chapter of exemptions is the creation of a sandbox for encouraging innovation. This newly added clause 40 is aimed at encouraging innovation in artificial intelligence, machine learning or any other emerging technology in the public interest. The details of what the sandbox entails, beyond exemption from some of the obligations of Chapter II, need further clarity. Additionally, to be an eligible applicant, a data fiduciary must obtain certification of its privacy by design policy from the DPA, as mentioned in clause 40(4) read with clause 22.

Though the intent behind this provision is welcome, it requires clarification on the grounds of selection and the details of what the sandbox might entail.


[1] At the time of introduction of the PDP Bill 2019, the Minister for Law and Justice of India, Mr. Ravi Shankar Prasad, suggested that over 2,000 inputs were received on the Draft Bill 2018, based on which changes were made in the PDP Bill 2019. However, MeitY has not published these comments and inputs; only a handful have been made public, by the stakeholders who submitted them.

The Pegasus Hack: A Hark Back to the Wassenaar Arrangement

By Sharngan Aravindakshan

The world’s most popular messaging application, WhatsApp, recently revealed that a significant number of Indians were among the targets of Pegasus, a sophisticated spyware that operates by exploiting a vulnerability in WhatsApp’s video-calling feature. It has also come to light that WhatsApp, working with the University of Toronto’s Citizen Lab, an academic research organisation focused on digital threats to civil society, has traced the source of the spyware to NSO Group, an Israeli company well known for developing and selling hacking and surveillance technology to governments with questionable human rights records. WhatsApp’s lawsuit against NSO Group in a federal court in California also specifically alludes to NSO Group’s clients, “which include but are not limited to government agencies in the Kingdom of Bahrain, the United Arab Emirates, and Mexico as well as private entities.” The complaint filed by WhatsApp against NSO Group can be accessed here.

In this context, we examine the shortcomings of international efforts in limiting or regulating the transfers or sale of advanced and sophisticated technology to governments that often use it to violate human rights, as well as highlight the often complex and blurred lines between the military and civil use of these technologies by the government.

The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (WA) exists for this precise reason. Established in 1996 and voluntary and non-binding in nature,[i] its stated mission is “to contribute to regional and international security and stability, by promoting transparency and greater responsibility in transfers of conventional arms and dual-use goods and technologies, thus preventing destabilizing accumulations.”[ii] Military developments across the globe, significant among which were the Indian and Pakistani nuclear tests, rocket tests by India and South Korea, and the use of chemical warfare during the Iran-Iraq war, were all catalysts in the formulation of this multilateral attempt to regulate the transfer of advanced technologies capable of being weaponized.[iii] With more and more incidents coming to light of authoritarian regimes using advanced western technology to violate human rights, the WA was amended to bring within its ambit “intrusion software” and “IP network surveillance systems” as well.

Wassenaar: A General Outline

With a current membership of 42 countries (India being the latest to join, in late 2017), the WA is the successor to the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), which had been established by the Western Bloc to prevent exports of weapons and technology to the Soviet-led Eastern Bloc.[iv] However, unlike its predecessor, the WA does not target any nation-state, and its members cannot veto each other’s export decisions.[v] Notably, while Russia is a member, Israel and China are not.

The WA lists the different technologies in the form of “Control Lists”, primarily consisting of the “List of Dual-Use Goods and Technologies”, or the Basic List, and the “Munitions List”.[vi] The term “dual-use technology” typically refers to technology that can be used for both civilian and military purposes.[vii] The Basic List consists of ten categories:[viii]

  • Special Materials and Related Equipment (Category 1); 
  • Materials Processing (Category 2); 
  • Electronics (Category 3); 
  • Computers (Category 4); 
  • Telecommunications (Category 5, Part 1); 
  • Information Security (Category 5, Part 2); 
  • Sensors and Lasers (Category 6); 
  • Navigation and Avionics (Category 7); 
  • Marine (Category 8); 
  • Aerospace and Propulsion (Category 9). 

Additionally, the Basic List also has the Sensitive and Very Sensitive Lists which include technologies covering radiation, submarine technology, advanced radar, etc. 

An outline of the WA’s principles is provided in its Guidelines & Procedures, including the Initial Elements. Typically, participating countries enforce controls on the transfer of listed items by enacting domestic legislation requiring licenses for their export, and are also expected to ensure that exports “do not contribute to the development or enhancement of military capabilities which undermine these goals, and are not diverted to support such capabilities.”[ix]

While the Guidelines & Procedures document does not expressly proscribe the export of the specified items to non-WA countries, members are expected to notify other participants twice a year when a license under the Dual-Use List is denied for export to any non-WA country.[x]

Amid Concerns of Violations of Civil Liberties

Unlike the conventional weapons sector, cyberspace and information technology is a field in which the government does not yet have a monopoly on expertise. In what can only be termed a “cyber-arms race”, it would be fair to say that most governments are even now busily acquiring technology from private companies to enhance their cyber-capacity, including surveillance technology for intelligence-gathering efforts. This, by itself, is plain realpolitik.

However, amid this weaponization of cyberspace, there were growing concerns that such technology was being purchased by authoritarian or repressive governments for use against their own citizens. For instance, Eagle, monitoring technology owned by Amesys (a unit of the French firm Bull SA), Boeing Co.’s internet-filtering Narus, and equipment from China’s ZTE Corp. all contributed to the surveillance efforts of Col. Gaddafi’s regime in Libya. Surveillance equipment sold by Siemens AG and maintained by Nokia Siemens Networks was used against human rights activists in Bahrain. These instances, part of a wider pattern that came into the spotlight, galvanized the WA countries in 2013 to include “intrusion software” and “IP network surveillance systems” in the Control Lists, in an attempt to limit the transfer of these technologies to known repressive regimes.

Unexpected Consequences

The 2013 amendment to the Control Lists drew severe criticism from tech companies and civil society groups across the board. While the intention behind it was recognized as laudable, the terms “intrusion software” and “IP network surveillance system” were widely viewed as overbroad, with the unintended consequence of sweeping in legitimate as well as illegitimate uses of technology. The problems pointed out by cybersecurity experts are manifold and stem from a misunderstanding of how cybersecurity works.

The inclusion of these terms, which was meant to regulate surveillance based on computer code, also brings within its ambit legitimate and often beneficial uses of these technologies, including, on one view, even antivirus technology. Cybersecurity research and development often involves “zero-day exploits”: vulnerabilities in software which, when discovered and reported by a “bounty hunter”, are typically bought by the company owning the software. This helps the company promptly develop a “patch” for the reported vulnerability. These transactions are often necessarily cross-border. Experts complained that if directly transposed into domestic law, the changes would have a chilling effect on the vital exchange of information and research in this area, a major hurdle for advances in cybersecurity that would make cyberspace globally less safe. A prime example is Hewlett-Packard’s (HP) withdrawal from Pwn2Own, an annual computer hacking contest held at security conferences such as CanSecWest and PacSec, where contestants are challenged to exploit vulnerabilities in widely used software. HP, which sponsored the event, was forced to withdraw in 2015, citing among other reasons the “complexity in obtaining real-time import / export licenses in countries that participate in the Wassenaar Arrangement”. The member nation in this case was Japan.

After facing fierce opposition at home, the United States decided not to implement the WA amendment and instead argued for its reversal at the next Plenary session of the WA, a bid that failed. Other jurisdictions, including the EU and Japan, have implemented the amendment’s export controls with varying degrees of success.

The Pegasus Hack, India and the Wassenaar

Considering that many of the Indians identified as victims of the Pegasus hack were journalists or human rights activists, several of them associated with the highly contentious Bhima-Koregaon case, speculation is rife that the Indian government is among those purchasing and using this kind of advanced surveillance technology to spy on its own citizens. Add to this the NSO Group’s public statement that its “sole purpose” is to “provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime”, and the allegations of the Indian government’s involvement in the hack appear credible. The government’s evasive responses and its insistence that so-called “standard operating procedures” were followed are less than reassuring.

While India’s entry to the WA as its 42nd member in late 2017 has certainly elevated its status in the international arms control regime by granting it access to three of the world’s four main export-control regimes (the others being the Nuclear Suppliers Group / NSG, the Missile Technology Control Regime / MTCR and the Australia Group), the Pegasus hack and its apparent connection to the Indian government cast doubt on India’s commitment to the principles underlying the WA. The purpose of including “intrusion software” and “IP network surveillance systems” in the WA’s Control Lists through the 2013 amendment, whatever its unintended consequences for legitimate uses of such technology, was to prevent governmental purchases exactly like this one. Hence, even though the WA does not prohibit the purchase of surveillance technology from a non-member, the Pegasus incident arguably still detracts seriously from India’s commitment to the WA, even if it is not an explicit violation.

Military Cyber-Capability vs. Law Enforcement Cyber-Capability

Given what we know so far, it appears that highly sophisticated surveillance technology has also come into the hands of local law enforcement agencies. Had it been disclosed that the Pegasus software was being used by a military wing against external enemies, say even the newly created Defence Cyber Agency, it would probably have caused fewer ripples. In fact, it might even have come off as reassuring evidence of the country’s advanced cyber-capabilities. However, the idea of such advanced, sophisticated technologies at the easy disposal of local law enforcement agencies is cause for worry. This is because while the military’s domain is traditionally external, the domain of law enforcement agencies is internal, i.e., the citizenry. There is tremendous scope for misuse by such authorities, including increased targeting of minorities. The recent incident of police officials in Hyderabad randomly collecting people’s biometric data, including fingerprints and photographs, underscores this point. Abroad, there are already ongoing efforts to limit the use of surveillance technologies by local law enforcement such as the police.

The conflation of technology use by military and civil agencies is a problem created, at least in part, by the complex and often dual-use nature of technology. While the WA recognises dual-use technology, this is not a problem it is able to solve. As explained above, dual-use technology is technology that can be used for both civil and military purposes. The demands of realpolitik, the rise of cyber-terrorism and the manifold ways in which a nation’s security can be compromised in cyberspace lead any government in today’s world to increase and improve its cyber-military capacity by acquiring such technology. After all, a government that acquires surveillance technology undoubtedly increases the effectiveness of its intelligence gathering and, ergo, its security efforts. But at the same time, it also acquires the power to spy on its own citizens, which can easily cascade into more targeted violations.

Governments must resist the impulse to turn such technology on their own citizens. In the Indian scenario, citizens have been granted a ring of protection by the Puttaswamy judgment, which explicitly recognises the right to privacy as a fundamental right. Interception and surveillance by the government, while currently subject to laid-down protocols, are not regulated by any dedicated law. While there are calls for urgent legislation on the subject, few deal with the procurement processes for the technology involved. It has also now emerged that Chhattisgarh’s state government has set up a panel to look into allegations that NSO officials met with the state police a few years ago. This raises questions about oversight of the relevant authorities’ public procurement processes, apart from their legal authority to actually carry out domestic surveillance by exploiting zero-day vulnerabilities. It is becoming evident that any law dealing with surveillance will need to ensure transparency and accountability in the procurement and use of the different kinds of invasive technology adopted by Central or State authorities to carry out such surveillance.


[i]Daryl Kimball, A Guide to the Wassenaar Arrangement, Arms Control Association, December 9, 2013, https://www.armscontrol.org/factsheets/wassenaar, last accessed on November 27, 2019.

[ii]Ibid.

[iii]Tim Maurer and Jonathan Diamond, Data, Interrupted: Regulating Digital Surveillance Exports, World Politics Review, November 24, 2015.

[iv]Rajeswari P. Rajagopalan and Arka Biswas, Wassenaar Arrangement: The Case of India’s Membership, ORF Occasional Paper #92, p. 3, Observer Research Foundation, May 5, 2016, http://www.orfonline.org/wp-content/uploads/2016/05/ORF-Occasional-Paper_92.pdf, last accessed on November 27, 2019.

[v]Ibid., p. 3.

[vi]“List of Dual-Use Goods and Technologies And Munitions List,” The Wassenaar Arrangement, available at https://www.wassenaar.org/public-documents/, last accessed on November 27, 2019. 

[vii]Article 2(1), Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL setting up a Union regime for the control of exports, transfer, brokering, technical assistance and transit of dual-use items (recast), European Commission, September 28th, 2016, http://trade.ec.europa.eu/doclib/docs/2016/september/tradoc_154976.pdf, last accessed on November 27, 2019. 

[viii]Supra note vi.

[ix]Guidelines & Procedures, including the Initial Elements, The Wassenaar Arrangement, December, 2016, http://www.wassenaar.org/wp- content/uploads/2016/12/Guidelines-and-procedures-including-the-Initial-Elements-2016.pdf, last accessed on November 27, 2019.

[x]Articles V(1) & (2), Guidelines & Procedures, including the Initial Elements, The Wassenaar Arrangement, December, 2016, https://www.wassenaar.org/public-documents/, last accessed on November 27, 2019.

[September 23-30] CCG’s Week in Review: Curated News in Information Law and Policy

The deadline to link PAN cards with Aadhaar was extended to December 31 this week; the Election Commission ruled that the voting rights of those excluded in the NRC process remain unaffected; the Home Minister proposed a digital census with multipurpose ID cards for 2021; and 27 nations including the US, UK and Canada issued a joint statement urging a rules-based order in cyberspace – presenting this week’s most important developments in law, technology and national security.

Aadhaar and Digital IDs

  • [Sep 23] Home Minister announces digital census in 2021, proposed multipurpose ID card, Entrackr report; Business Today report.
  • [Sep 24] NRIs can now apply for Aadhaar on arrival without 182-day wait, The Economic Times report.
  • [Sep 24] Aadhaar will be linked to driving license to avoid forgery: Ravi Shankar Prasad, The Indian Express report.
  • [Sep 24] One nation, one card? Amit Shah floats idea of all-in-one ID; here are all the problems with that idea, Medianama report; Money Control report.
  • [Sep 24] Explained: Is India likely to have a multipurpose national ID card? The Indian Express report.
  • [Sep 24] UIDAI nod to ‘voluntary’ use of Aadhaar for National Population Register rollout, The Economic Times report.
  • [Sep 24] Govt must decide on Aadhaar-social media linkage: SC, Deccan Herald report.
  • [Sep 25] New law needed for Aadhaar-social media linkage: UIDAI, The Economic Times report; Inc42 report.
  • [Sep 26] NPR process to include passport, voter ID, Aadhaar and other details, Business Standard report.
  • [Sep 27] Gang involved in making fake Aadhaar cards busted, The Tribune report.
  • [Sep 27] What will happen if you don’t link your PAN card with Aadhaar by Sep 20, The Quint report.
  • [Sep 27] Explained: The National Population Register, and the controversy around it, The Indian Express report.
  • [Sep 27] Aadhaar to weed out bogus social security beneficiaries in Karnataka, Deccan Herald report.
  • [Sep 29] Bajrang Dal wants Aadhaar mandatory at dandiya to keep ‘non-Hindus’ out, The Hindustan Times report; The Wire report.
  • [Sep 30] Kerala urges Centre to extend deadline to link ration cards with Aadhaar, The News Minute report.
  • [Sep 30] PAN-Aadhaar linking deadline extended to December 31, The Economic Times report.

Digital India 

  • [Sep 25] India’s regulatory approach should focus on the regulation of the ‘core’: IAMAI, Livemint report.
  • [Sep 27] India may have to offer sops to boost electronic manufacturing, ET Tech report; Inc42 report.
  • [Sep 27] Digital India, start-ups are priorities for $5 trillion economy: PM Modi, Medianama report.
  • [Sep 29] Tech giants aim to skill Indian govt officials in AI, cloud, ET CIO report.
  • [Sep 29] India’s share in IT, R&D biz up in 2 years: report, The Economic Times report.

Internet Governance

  • [Sep 24] Supreme Court to MeitY: What’s the status of intermediary guidelines? Tell us by Oct 15, Medianama report.
  • [Sep 26] Will not be ‘excessive’ with social media rules, say Govt officials, Inc42 report.
  • [Sep 26] Government trying to balance privacy and security in draft IT intermediary norms, The Economic Times report.
  • [Sep 27] Citizens, tech companies served better with some regulation: Facebook India MD Ajit Mohan, ET Tech report; Inc42 report.
  • [Sep 27] Balance benefits of internet, data security: Google CEO Sundar Pichai, ET Tech report; Business Today report.

Free Speech

  • [Sep 25] Jadavpur University calls upon ‘stakeholders’ to ensure free speech on campus, The New Indian Express report.
  • [Sep 28] RSS raises objections to uncensored content of Manoj Bajpayee’s “The Family Man”, The Hindu report; Outlook report.

Privacy and Data Protection

  • [Sep 23] A landmark decision on Tuesday could radically reshape how Google’s search results work, Business Insider report.
  • [Sep 23] Google tightens its voice assistant rules amidst privacy backlash, Wired report.
  • [Sep 24] Dell rolls out new data protection storage appliances and capabilities, ZDNet report.
  • [Sep 24] ‘Right to be forgotten’ privacy rule is limited by Europe’s top court, The New York Times report; Live Law report.
  • [Sep 27] Nigeria launches investigation into Truecaller for potential breach of privacy, Medianama report.
  • [Sep 29] Right to be forgotten will be arduous as India frames data protection law, Business Standard report.
  • [Sep 30] FPIs move against data bill, seek exemption, ET Telecom report; Entrackr report.

Data Localisation

  • [Sep 26] Reconsider imposition of data localisation: IAMAI report, The Economic Times report.
  • [Sep 27] Why data is not oil: Here’s how India’s data localisation norms will hurt the economy, Inc42 report.

Digital Payments and Fintech

  • [Sep 23] RBI rider on credit bureau data access has Fintech in a quandary, ET Tech report.

Cryptocurrencies

  • [Sep 23] Facebook reveals Libra currency basket breakdown, Coin Desk report.
  • [Sep 23] The face of India’s crypto lobby readies for a clash, Ozy report.
  • [Sep 23] Why has Brazil’s Central Bank included crypto assets in trade balance? Coin Telegraph report.
  • [Sep 24] French retailers widening crypto acceptance, Tech Xplore report.
  • [Sep 26] Why crypto hoaxes are so successful, Quartz report.
  • [Sep 26] South Africa: the next frontier for crypto exchanges, Coin Telegraph report.
  • [Sep 27] The crypto wars’ strange bedfellows, Forbes report.
  • [Sep 28] Crypto industry is already preparing for Google’s ‘quantum supremacy’, Decrypt report.
  • [Sep 29] How crypto gambling is regulated around the world, Coin Telegraph report.

Tech and Law Enforcement

  • [Sep 29] New WhatsApp and Facebook Encryption ‘Backdoors’ – What’s really going on, Forbes report.
  • [Sep 28] Facebook, WhatsApp will have to share messages with UK Government, Bloomberg report.
  • [Sep 23] Secret FBI subpoenas scoop up personal data from scores of companies, The New York Times report.
  • [Sep 23] ‘Don’t transfer the WhatsApp traceability case’, Internet Freedom Foundation asks Supreme Court, Medianama report.
  • [Sep 24] China offers free subway rides to citizens who register their face with surveillance system, The Independent report.
  • [Sep 24] Facial recognition technology in public housing prompts backlash, The New York Times report.
  • [Sep 24] Facebook-Aadhaar linkage and WhatsApp traceability: Supreme Court says government must frame rules, CNBC TV18 report.
  • [Sep 27] Fashion that counters surveillance cameras, Business Times report.
  • [Sep 27] Unnao rape case: Delhi court directs Apple to give Sengar’s location details on day of alleged rape, Medianama report.
  • [Sep 27] Face masks to decoy t-shirts: the rise of anti-surveillance fashion, Times of India report.
  • [Sep 30] Battle for privacy and encryption: WhatsApp and government head for a showdown on access to messages, ET Prime report.
  • [Sep 29] Improving digital evidence sharing, Scottish Government news report; Public technology report.

Internal Security: J&K

  • [Sep 23] Government launches internet facilitation centre in Pulwama for students, Times of India report; Business Standard report.
  • [Sep 23] Army chief rejects ‘clampdown’ in Jammu and Kashmir, Times of India report.
  • [Sep 24] Rising power: Why India has faced muted criticism over its Kashmir policy, Business Standard report.
  • [Sep 24] ‘Restore Article 370, 35A in Jammu and Kashmir, withdraw army, paramilitary forces’: 5-member women’s group will submit demands to Amit Shah, Firstpost report.
  • [Sep 24] No normalcy in Kashmir, says fact finding team, The Hindu report.
  • [Sep 25] End clampdown: Kashmir media, The Telegraph report.
  • [Sep 25] Resolve Kashmir issue through dialogue and not through collision: Erdogan, The Economic Times report.
  • [Sep 25] Rajya Sabha deputy chair thwarts Pakistan’s attempt at Kashmir at Eurasian Conference, The Economic Times report.
  • [Sep 25] Pakistan leader will urge UN intervention in Kashmir, The New York Times report.
  • [Sep 25] NSA Ajit Doval back in Srinagar to review security situation, The Hindustan Times report.
  • [Sep 27] Communication curbs add fresh challenge to Kashmir counter-insurgency operations, News18 report.
  • [Sep 27] Fresh restrictions in parts of Kashmir, The Hindu report.
  • [Sep 27] US wants ‘rapid’ easing of Kashmir restrictions, Times of India report.
  • [Sep 27] Kashmir issue: Rescind action on Art. 370, OIC tells India, The Hindu report.
  • [Sep 28] India objects to China’s reference to J&K and Ladakh at UNGA, The Economic Times report; The Hindu report.
  • [Sep 29] Surveillance, area domination operations intensified in Kashmir, The Economic Times report; Financial Express report.
  • [Sep 29] Police impose restrictions in J&K after Imran Khan’s speech at UNGA, India Today report.

Internal Security: NRC and the North-East

  • [Sep 23] Assam framing cyber security policy to secure data related to NRC, police, services, The Economic Times report; Money Control report.
  • [Sep 24] BJP will tell SC that we reject this NRC, says Himanta Biswa Sarma, Business Standard report.
  • [Sep 24] Amit Shah to speak on NRC, Citizenship Amendment Bill in Kolkata on Oct 1, The Economic Times report.
  • [Sep 26] ‘Expensive’ legal battle for those rejected in Assam NRC final list, The Economic Times report.
  • [Sep 27] Scared of NRC? Come back in 2022, The Telegraph report.
  • [Sep 27] Voters left out of NRC will have right to vote, rules Election Commission, India Today report; The Wire report.
  • [Sep 27] NRC: Assam government announces 200 Foreigners Tribunals in 33 districts, Times Now report; Times of India report.
  • [Sep 28] Judge urges new FT members to examine NRC claims with utmost care, Times of India report.

National Security Legislation

  • [Sep 23] Centre will reintroduce Citizenship Bill in Parliament: Himanta Biswa Sarma, The Hindu report.
  • [Sep 26] National Security Guard: History, Functions and Operations, Jagran Josh report.
  • [Sep 28] Left parties seek revocation of decision on Article 370, The Tribune India report.

Tech and National Security

  • [Sep 25] Army to start using Artificial Intelligence in 2-3 years: South Western Army commander, The Print report; India Today report; The New Indian Express report; Financial Express report.
  • [Sep 23] Modi, Trump set new course on terrorism, border security, The Hindu report.
  • [Sep 23] PM Modi in the US: Trump promises more defence deals with India, military trade to go up, Financial Express report.
  • [Sep 23] Punjab police bust terror module supplied with weapons by drones from Pak, NDTV report.
  • [Sep 26] Lockheed Martin to begin supplying F-16 wings from Hyderabad plant in 2020, Livemint report.
  • [Sep 26] Drones used for cross-border arms infiltration in Punjab a national security issue, says Randhawa, The Hindu report.
  • [Sep 27] UK MoD sets up cyber team for secure innovation, UK Authority report.
  • [Sep 29] New tri-services special ops division, meant for surgical strikes, finishes first exercise today, The Print report.
  • [Sep 30] After Saudi attacks, India developing anti-drone technology to counter drone menace, Eurasian Times report.

Tech and Elections

  • [Sep 20] Microsoft will offer free Windows 7 support for US election officials through 2020, Cyber Scoop report.
  • [Sep 26] Social media platforms to follow ‘code of ethics’ in all future elections: EC, The Economic Times report.
  • [Sep 28] Why is EC not making ‘authentic’ 2019 Lok Sabha results public? The Quint report.

Cybersecurity

  • [Sep 24] Androids and iPhones hacked with just one WhatsApp click – and Tibetans are under attack, Forbes report.
  • [Sep 25] Sharp questions can help board oversee cybersecurity, The Wall Street Journal report.
  • [Sep 25] What we know about CrowdStrike, the cybersecurity firm trump mentioned in Ukraine call, and its billionaire CEO, Forbes report.
  • [Sep 25] 36% smaller firms witnessed data breaches in 2019 globally, ET Rise report.
  • [Sep 28] Defence Construction Canada hit by cyber attack – corporation’s team trying to restore full IT capability, Ottawa Citizen report.
  • [Sep 29] Experts call for collective efforts to counter cyber threats, The New Indian Express report.
  • [Sep 29] Microsoft spots malware that turns PCs into zombie proxies, ET Telecom report.
  • [Sep 29] US steps up scrutiny of airplane cybersecurity, The Wall Street Journal report.

Cyberwarfare

  • [Sep 24] 27 countries sign cybersecurity pledge urging rules-based control over cyberspace in Joint Statement, with digs at China and Russia, CNN report; IT world Canada report; Meri Talk report.
  • [Sep 26] Cyber Peace Institute fills a critical need for cyber attack victims, Microsoft blog.
  • [Sep 29] Britain is ‘at war every day’ due to constant cyber attacks, Chief of the Defence Staff says, The Telegraph report.

Telecom and 5G

  • [Sep 27] Telcos’ IT investments intact, auto companies may slow pace: IBM exec, ET Tech report.
  • [Sep 29] Telecom players to lead digital transformation in India, BW Businessworld report.

More on Huawei

  • [Sep 22] Huawei confirms another nasty surprise for Mate 30 buyers, Forbes report.
  • [Sep 23] We’re on the same page with government on security: Huawei, The Economic Times report.
  • [Sep 24] The debate around 5G’s safety is getting in the way of science, Quartz report (paywall).
  • [Sep 24] Govt will take call on Huawei with national interest in mind: Telecom Secy, Business Standard report.
  • [Sep 24] Huawei enables 5G smart travel system at Beijing airport, Tech Radar report.
  • [Sep 25] Huawei 5G backdoor entry unproven, The Economic Times report.
  • [Sep 25] US prepares $1 bn fund to replace banned Huawei kit, Tech Radar report.
  • [Sep 26] Google releases large dataset of deepfakes for researchers, Medianama report.
  • [Sep 26] Huawei willing to license 5G technology to a US firm, The Hindu Business Line report; Business Standard report.
  • [Sep 26] Southeast Asia’s top phone carrier still open to Huawei 5G, Bloomberg report.
  • [Sep 29] Russia rolls out the red carpet for Huawei over 5G, The Economic Times report.

Emerging Tech and AI

  • [Sep 20] Google researchers have reportedly achieved “Quantum Supremacy”, Financial Times report; MIT Technology Review report.
  • [Sep 23] Artificial Intelligence revolution in healthcare in India: All we need to know, The Hindustan Times report.
  • [Sep 23] A new joystick for the brain-controlled vehicles of the future, Defense One report.
  • [Sep 24] Computing and AI: Humanistic Perspectives from MIT, MIT News report.
  • [Sep 24] Emerging technologies such as AI, 5G posing threats to privacy, says report, China Daily report.
  • [Sep 25] Alibaba unveils chip developed for artificial intelligence era, Financial Times report.
  • [Sep 26] Pentagon wants AI to interpret ‘strategic activity’ around the globe, Defense One report.
  • [Sep 27] Only 10 jobs created for every 100 jobs taken away by AI, ET Tech report.
  • [Sep 27] Experts say these emerging technologies should concern us, Business Insider report.
  • [Sep 27] What is on the horizon for export controls on ‘emerging technologies’? Industry comments may hold a clue, Modaq.com report.
  • [Sep 27] India can become world leader in artificial intelligence: Vishal Sikka, Money Control report.
  • [Sep 27] Elon Musk issues a terrifying prediction of ‘AI robot swarms’ and huge threat to mankind, The Daily Express (UK) report.
  • [Sep 27] Russia’s national AI Centre is taking shape, Defense One report.
  • [Sep 29] Explained: What is ‘quantum supremacy’, The Hindu report.
  • [Sep 29] Why are scientists so excited about a new quantum computing milestone?, Scroll.in report.
  • [Sep 29] Artificial Intelligence has a gender bias problem – just ask Siri, The Wire report.
  • [Sep 29] How AI is changing the landscape of digital marketing, Inc42 report.

Opinions and Analyses

  • [Sep 21] Wim Zijnenburg, Defense One, Time to Harden International Norms on Armed Drones.
  • [Sep 23] David Sanger and Julian Barnes, The New York Times, The urgent search for a cyber silver bullet against Iran.
  • [Sep 23] Neven Ahmad, PRIO Blog, The EU’s response to the drone age: A united sky.
  • [Sep 23] Biswajit Dhar and KS Chalapati Rao, The Wire, Why an India-US Free Trade Agreement would require New Delhi to reorient key policies.
  • [Sep 23] Filip Cotfas, Money Control, Five reasons why data loss prevention has to be taken seriously.
  • [Sep 23] NF Mendoza, Tech Republic, 10 policy principles needed for artificial intelligence.
  • [Sep 24] Ali Ahmed, News Click, Are Indian armed forces turning partisan? The changing civil-military relationship needs monitoring.
  • [Sep 24] Editorial, Deccan Herald, A polity drunk on Aadhaar.
  • [Sep 24] Mike Loukides, Quartz, The biggest problem with social media has nothing to do with free speech.
  • [Sep 24] Ananth Padmanabhan, Medianama, Civilian Drones: Privacy challenges and potential resolution.
  • [Sep 24] Celine Herwijer and Dominic Kailash Nath Waughray, World Economic Forum, How technology can fast-track the global goals.
  • [Sep 24] S. Jaishankar, Financial Times, Changing the status of Jammu and Kashmir will benefit all of India.
  • [Sep 24] Editorial, Livemint, Aadhaar Mark 2.
  • [Sep 24] Vishal Chawla, Analytics India Magazine, AI in Defence: How India compares to US, China, Russia and South Korea.
  • [Sep 25] Craig Borysowich, IT Toolbox, Origin of Markets for Artificial Intelligence.
  • [Sep 25] Sudeep Chakravarti, Livemint, After Assam, NRC troubles may visit ‘sister’ Tripura.
  • [Sep 25] DH Kass, MSSP Blog, Cyber Warfare: New Rules of Engagement?
  • [Sep 25] Chris Roberts, Observer, How artificial intelligence could make nuclear war more likely.
  • [Sep 25] Ken Tola, Forbes, What is cybersecurity?
  • [Sep 25] William Dixon and Jamil Farshchi, World Economic Forum, AI is transforming cybercrime. Here’s how we can fight back.
  • [Sep 25] Patrick Tucker, Defense One, Big Tech bulks up its anti-extremism group. But will it do more than talk?
  • [Sep 26] Udbhav Tiwari, Huffpost India, Despite last year’s Aadhaar judgement, Indians have less privacy than ever.
  • [Sep 26] Sylvia Mishra, Medianama, India and the United States: The time has come to collaborate on commercial drones.
  • [Sep 26] Subimal Bhattacharjee, The Hindu Business Line, Data flows and our national security interests.
  • [Sep 26] Ram Sagar, Analytics India Magazine, Top countries that are betting big on AI-based surveillance.
  • [Sep 26] Patrick Tucker, Defense One, AI will tell future medics who lives and who dies on the battlefield.
  • [Sep 26] Karen Hao, MIT Technology Review, This is how AI bias really happens – and why it’s so hard to fix.
  • [Sep 27] AG Noorani, Frontline, Kashmir dispute: Domestic or world issue?
  • [Sep 27] Sishanta Talukdar, Frontline, Final NRC list: List of exclusion.
  • [Sep 27] Freddie Stuart, Open Democracy, How facial recognition technology is bringing surveillance capitalism to our streets.
  • [Sep 27] Paul de Havilland, Crypto Briefing, Did Bitcoin crash or dip? Crypto’s trajectory moving forward.
  • [Sep 28] John Naughton, The Guardian, Will advances in quantum computing affect internet security?
  • [Sep 28] Suhrith Parthasarathy, The Hindu, The top court and a grave of freedom.
  • [Sep 28] Kazim Rizvi, YourStory, Data Protection Authority: the cornerstone to implement data privacy.
  • [Sep 28] Shekhar Gupta, The Print, Modi has convinced the world that Kashmir is India’s internal affair – but they’re still watching.
  • [Sep 29] Indrani Bagchi, The Economic Times, Why India needs to tread carefully on Kashmir.
  • [Sep 29] Medha Dutta Yadav, The New Indian Express, Data: Brave new frontier.
  • [Sep 29] Jon Markman, Forbes, New cybersecurity companies have their heads in the cloud.
  • [Sep 29] Editorial, The New York Times, On cybersecurity: Two scoops of perspective.
  • [Sep 30] Kuldip Singh, The Quint, New IAF Chief’s appointment: Why RKS Bhadauria must tread lightly.
  • [Sep 30] Karishma Koshal, The Caravan, With the data-protection bill in limbo, these policies contravene the right to privacy.