The Supreme Court’s Pegasus Order

This blog post has been authored by Shrutanjaya Bhardwaj.

On 28th October 2021, the Supreme Court passed an order in the “Pegasus” case establishing a 3-member committee of technical experts to investigate allegations of illegal surveillance by hacking into the phones of several Indian citizens, including journalists. This post analyses the Pegasus order. Analyses by others may be accessed here, here and here.

Overview

The writ petitioners alleged that the Indian Government and its agencies have been using a spyware tool called “Pegasus”—produced by an Israeli technology firm named the NSO Group—to spy on Indian citizens. As the Court notes, Pegasus can be installed on digital devices such as mobile phones, and once Pegasus infiltrates the device, “the entire control over the device is allegedly handed over to the Pegasus user who can then remotely control all the functionalities of the device.” Practically, this means the ‘Pegasus user’ (i.e., the infiltrator) has access to all data on the device (emails, texts, and calls) and can remotely activate the camera and microphone to surveil the device owner and their immediate surroundings. 

The Court records some basic facts that are instructive in understanding its final order:

  1. The NSO Group itself claims that it only sells Pegasus to governments. 
  2. In November 2019, the then-Minister of Electronics and IT acknowledged in Parliament that Pegasus had infected the devices of certain Indians. 
  3. In July 2021, reputed media houses uncovered instances of Pegasus spyware attacks on many Indians including “senior journalists, doctors, political persons, and even some Court staff”.
  4. Foreign governments have since taken steps to diplomatically engage with Israel and/or to conduct internal investigations to understand the issue.
  5. Despite repeated requests by the Court, the Union Government did not furnish any specific information to assist the Court’s understanding of the matter.

These facts led the Court to conclude that the petitioners’ allegations of illegal surveillance by hacking need further investigation. The Court noted that the petitioners had placed on record expert reports and there also existed a wealth of ‘cross-verified media coverage’ coupled with the reactions of foreign governments to the use of Pegasus. The Court’s order leaves open the possibility that a foreign State or perhaps a private entity may have conducted surveillance on Indians. Additionally, the Union Government’s refusal to clarify its position on the legality and use of Pegasus in Court raised the possibility that the Union Government itself may have used the spyware. As discussed below, this possibility ultimately shaped the Court’s directions and relief.  

The Pegasus order is analysed below along three lines: (i) the Court’s acknowledgement of the threat to fundamental rights, (ii) the Union Government’s submissions before the Court, and (iii) the Court’s assertion of its constitutional duty of judicial review—even in the face of sensitive considerations like national security.

Acknowledging the risks to fundamental rights

While all fundamental rights may be reasonably restricted by the State, every right has different grounds on which it may be restricted. Identifying the precise right under threat is hence an important exercise. The Court articulates three distinct rights at risk in a Pegasus attack. Two flow from the freedom of speech under Article 19(1)(a) of the Constitution and one from the right to privacy under Article 21. 

The first right, relatable to Article 19(1)(a), is journalistic freedom. The Court noted that the awareness of being spied on causes the journalist to tread carefully and think twice before speaking the truth. Additionally, when a journalist’s entire private communication is accessible to the State, the chances of undue pressure increase manifold. The Court described such surveillance as “an assault on the vital public watchdog role of the press”.

The second right, also traced to Article 19(1)(a), is the journalist’s right to protect their sources. The Court treats this as a “basic condition” for the freedom of the press. “Without such protection, sources may be deterred from assisting the press in informing the public on matters of public interest,” which harms the free flow of information that Article 19(1)(a) is designed to ensure. This acknowledgment by the Court is significant, and it will be interesting to see how the Court’s jurisprudence develops and engages with this issue.

The third right, traceable to Article 21 as interpreted in Puttaswamy, is the citizen’s right to privacy (see the case brief on Puttaswamy in CCG’s Privacy Law Library). Surveillance and hacking are prima facie an invasion of privacy. However, the State may justify a privacy breach as a reasonable restriction on constitutional grounds if the legality, necessity, and proportionality of the State’s surveillance measure is established.

Court’s response to the Government’s “conduct” before the Court

The Court devotes a significant part of the Pegasus order to discussing the Union Government’s “conduct” in the litigation. The first formal response filed by the Government, characterised as a “limited affidavit”, did not furnish any details about the controversy owing to an alleged “paucity of time”. When the Court termed this affidavit “insufficient” and demanded a more detailed affidavit, the Solicitor General cited national security implications as the reason for not filing a comprehensive response to the surveillance allegations. This was despite repeated assurances by both the Petitioners and the Court that no sensitive information was being sought, and that the Government need only disclose what was necessary to decide the matter at hand. Additionally, the Government did not specify the national security consequences that would arise if more details were disclosed. (The Court’s response to the invocation of the national security ground on merits is discussed in the next section.)

In addition to invoking national security, the Government made three other arguments:

  1. The press reports and expert evidence were “motivated and self-serving” and thus of insufficient veracity to trigger the Court’s jurisdiction.
  2. While all technology may be misused, the use of Pegasus cannot per se be impermissible, and India had sufficient legal safeguards to guard against constitutionally impermissible surveillance.
  3. The Court need not establish a committee as the Union Government was prepared to constitute its own committee of experts to investigate the issue.

The Court noted that the nature and “sheer volume” of news reports are such that these materials “cannot be brushed aside”. The Court was unwilling to accept the other two arguments in part due to the Union Government’s broader “conduct” on the issue of Pegasus. It noted that the first reports of Pegasus use dated back to 2018 and a Union Minister had informed Parliament of the spyware’s use on Indians in 2019, yet no steps to investigate or resolve the issue had been taken until the present writ petitions had been filed. Additionally, the Court ruled that the limited documentation provided by the Government did not clarify its stand on the use of Pegasus. In this context, and owing to reasons of natural justice (discussed below), the Court opined that independent fact finding and judicial review were warranted.

Assertion of constitutional duty of judicial review

As noted above, the Union Government invoked national security as a ground to not file documentation regarding its alleged use of Pegasus. The Court acknowledged that the government is entitled to invoke this ground, and even noted that the scope of judicial review is narrow on issues of national security. However, the Court held that the mere invocation of national security is insufficient to exclude court intervention. Rather, the government must demonstrate how the information being withheld would raise national security concerns, and the Court will then decide whether the government’s concerns are legitimate.

The order contains important observations on the Government’s use of the national security exception to exclude judicial scrutiny. The Court notes that such arguments are not new: governments have often urged constitutional courts to take a hands-off approach in matters that have a “political” facet (like those pertaining to defence and security). But the Court has previously held, and affirmed again in the Pegasus order, that it will not abstain from interfering merely because a case has a political complexion. The Court noted that it may certainly choose to defer to the Government on sensitive aspects, but there is no “omnibus prohibition” on judicial review in matters of national security. If the State wishes to withhold information from the Court, it must “plead and prove” the necessary facts to justify such withholding.

The Government had also suggested that the Court let the Government set up a committee to investigate the matter. The Supreme Court had adopted this approach in the Kashmir Internet Shutdowns case by setting up an executive-led committee to examine the validity and necessity of continuing internet shutdowns. That judgment was widely criticised (see here, here and here). However, in the present case, as the petitions alleged that the Union Government itself had used Pegasus on Indians, the Court held that allowing the Union Government to set up an investigative committee would offend the rule against bias in inquiries. The Court invoked the age-old principle that “justice must not only be done, but also be seen to be done”, and refused to allow the Government to set up its own committee. This is consistent with the Court’s assertion of its constitutional obligation of judicial review in the earlier parts of the order.

Looking ahead

The terms of reference of the Committee are pointed and meaningful. The Committee is required to investigate, inter alia, (i) whether Pegasus was used to hack into phones of Indian citizens, and if so, which citizens; (ii) whether the Indian Government procured and deployed Pegasus; and (iii) if the Government did use Pegasus, what law or regulatory framework the spyware was used under. All governmental agencies have been directed to cooperate with the Committee and furnish any required information.

Additionally, the Committee is to make recommendations regarding the enactment of a new surveillance law or amendment of existing law(s), improvements to India’s cybersecurity systems, the setting up of a robust investigation and grievance-redressal mechanism for the benefit of citizens, and any ad-hoc arrangements to be made by the Supreme Court for the protection of citizens’ rights pending requisite action by Parliament.

The Court has directed the Committee to carry out its investigation “expeditiously” and listed the matter again after 8 weeks. As per the Supreme Court’s website, the petitions are tentatively to be listed on 3 January 2022.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.

The Future of Democracy in the Shadow of Big and Emerging Tech: CCG Essay Series

By Shrutanjaya Bhardwaj and Sangh Rakshita

In the past few years, the interplay between technology and democracy has reached a critical juncture. The earlier untrammelled optimism about technology is now shadowed by rising concerns over the survival of a meaningful democratic society. With the expanding reach of technology platforms, democratic societies around the world have grown increasingly concerned about the impact of such platforms on democracy and human rights. In this context, policy debates have increasingly focused on issues like the need for an antitrust framework for digital platforms, platform regulation and free speech, the challenges of fake news, the impact of misinformation on elections, the invasion of citizens’ privacy through the deployment of emerging tech, and cybersecurity. This has intensified the quest for optimal policy solutions. We, at the Centre for Communication Governance at National Law University Delhi (CCG), believe that a detailed academic exploration of the relationship between democracy, and big and emerging tech will aid our understanding of the current problems, help contextualise them and highlight potential policy and regulatory responses.

Thus, we bring to you this series of essays—written by experts in the domain—in an attempt to collate contemporary scholarly thought on some of the issues that arise in the context of the interaction of democracy, and big and emerging tech. The essay series is publicly available on the CCG website. We have also announced the release of the essay series on Twitter.

Our first essay addresses the basic but critical question: What is ‘Big Tech’? Urvashi Aneja & Angelina Chamuah present a conceptual understanding of the phrase. While ‘Big Tech’ refers to a set of companies, it is certainly not a fixed set; companies become part of this set by exhibiting four traits or “conceptual markers” and—as a corollary—would stop being identified in this category if they were to lose any of the four markers. The first marker is that the company runs a data-centric model and has massive access to consumer data which can be leveraged or exploited. The second marker is that ‘Big Tech’ companies have a vast user base and are “multi-sided platforms that demonstrate strong network effects”. The third and fourth markers are the infrastructural and civic roles of these companies respectively, i.e., they not only control critical societal infrastructure (which is often acquired through lobbying efforts and strategic mergers and acquisitions) but also operate “consumer-facing platforms” which enable them to generate consumer dependence and gain huge power over the flow of information among citizens. It is these four markers that collectively define ‘Big Tech’. [U. Aneja and A. Chamuah, What is Big Tech? Four Conceptual Markers]

Since the power held by Big Tech is not only immense but also self-reinforcing, it endangers market competition, often by hindering other players from entering the market. Should competition law respond to this threat? If yes, how? Alok P. Kumar & Manjushree R.M. explore the purpose behind competition law and find that it is concerned not only with consumer protection but also—as evident from a conjoint reading of Articles 14 & 39 of the Indian Constitution—with preventing the concentration of wealth and material resources in a few hands. Seen in this light, the law must strive to protect “the competitive process”. But the present legal framework is too obsolete to achieve that aim. The current understanding of concepts such as ‘relevant market’, ‘hypothetical monopolist’ and ‘abuse of dominance’ is hard to apply to Big Tech companies, which operate more on data than on money. The solution, it is proposed, lies in ex ante regulation of Big Tech, possibly through a code of conduct created after extensive stakeholder consultations, rather than a system of only subsequent sanctions. [A.P. Kumar and Manjushree R.M., Data, Democracy and Dominance: Exploring a New Antitrust Framework for Digital Platforms]

Market dominance and data control give an even greater power to Big Tech companies: control over the flow of information among citizens. Given the vital link between democracy and the flow of information, many have called for increased control over social media with a view to checking misinformation. Rahul Narayan explores what these demands might mean for free speech theory. Could it be (as some suggest) that these demands are “a sign that the erstwhile uncritical liberal devotion to free speech was just hypocrisy”? Traditional free speech theory, Narayan argues, is inadequate to deal with the misinformation problem for two reasons. First, it is premised on protecting individual liberty from authoritarian actions by governments, “not to control a situation where baseless gossip and slander impact the very basis of society.” Second, the core assumption behind traditional theory—i.e., the possibility of an organic marketplace of ideas where falsehood can be exposed by true speech—breaks down in the context of modern-era misinformation campaigns. Therefore, some regulation is essential to ensure the prevalence of truth. [R. Narayan, Fake News, Free Speech and Democracy]

Jhalak M. Kakkar and Arpitha Desai examine the context of election misinformation and consider possible misinformation regulatory regimes. Appraising the ideas of self-regulation and state-imposed prohibitions, they suggest that the best way forward for democracy is to strike a balance between the two. This can be achieved if the State focuses on regulating algorithmic transparency rather than the content of the speech—social media companies must be asked to demonstrate that their algorithms do not facilitate amplification of propaganda, to move from behavioural advertising to contextual advertising, and to maintain transparency with respect to funding of political advertising on their platforms. [J.M. Kakkar and A. Desai, Voting out Election Misinformation in India: How should we regulate Big Tech?]

Much like fake news challenges the fundamentals of free speech theory, it also challenges the traditional concepts of international humanitarian law. While disinformation fuels aggression by state and non-state actors in myriad ways, it is often hard to establish liability. Shreya Bose formulates the problem as one of causation: “How could we measure the effect of psychological warfare or disinformation campaigns…?” For example, the cause-effect relationship is critical in tackling the recruitment of youth by terrorist outfits and the ultimate execution of acts of terror. It is important also in determining the liability of state actors that commit acts of aggression against other sovereign states, in exercise of what they perceive—based on received misinformation about an incoming attack—as self-defence. The author helps us make sense of this tricky terrain and argues that Big Tech could play an important role in countering propaganda warfare, just as it does in promoting it. [S. Bose, Disinformation Campaigns in the Age of Hybrid Warfare]

The last two pieces focus attention on real-life, concrete applications of technology by the state. Vrinda Bhandari highlights the use of facial recognition technology (‘FRT’) in law enforcement as another area where the state deploys Big Tech in the name of ‘efficiency’. Current deployment of FRT is constitutionally problematic. There is no legal framework governing the use of FRT in law enforcement. Profiling of citizens as ‘habitual protestors’ has no rational nexus to the aim of crime prevention; rather, it chills the exercise of free speech and assembly rights. Further, FRT deployment is wholly disproportionate, not only because of the well-documented inaccuracy and bias-related problems in the technology, but also because—more fundamentally—“[t]reating all citizens as potential criminals is disproportionate and arbitrary” and “creates a risk of stigmatisation”. The risk of mass real-time surveillance adds to the problem. In light of these concerns, the author suggests a complete moratorium on the use of FRT for the time being. [V. Bhandari, Facial Recognition: Why We Should Worry About the Use of Big Tech for Law Enforcement]

In the last essay of the series, Malavika Prasad presents a case study of the Pune Smart Sanitation Project, a first-of-its-kind urban sanitation programme undertaken in pursuance of the Smart City Mission (‘SCM’). According to the author, the structure of city governance (through Municipalities) that existed even prior to the advent of the SCM violated the constitutional principle of self-governance. This flaw was only aggravated by the SCM, which effectively handed over key aspects of city governance to state corporations. The Pune Project is but a manifestation of the undemocratic nature of this governance structure—it assumes without any justification that ‘efficiency’ and ‘optimisation’ are neutral objectives that ought to be pursued. Prasad finds that in the hunt for efficiency, the design of the Pune Project provides only for collection of data pertaining to users/consumers, hence excluding the marginalised who may not get access to the system in the first place owing to existing barriers. “Efficiency is hardly a neutral objective,” says Prasad, and the state’s emphasis on efficiency over inclusion and participation reflects a problematic political choice. [M. Prasad, The IoT-loaded Smart City and its Democratic Discontents]

We hope that readers will find the essays insightful. As ever, we welcome feedback.

This series is supported by the Friedrich Naumann Foundation for Freedom (FNF) and has been published by the National Law University Delhi Press. We are thankful for their support. 

Building an AI Governance Framework for India, Part III

Embedding Principles of Privacy, Transparency and Accountability

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Aayog Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation that was held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG’s comments and analysis on the Working Document can be accessed here.

In our first post in the series, ‘Building an AI governance framework for India’, we discussed the legal and regulatory implications of the Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its constitutional framework, and (2) based on clearly articulated overarching ‘Principles for Responsible AI’. Part II of the series discussed specific Principles for Responsible AI – Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. We explored the constituent elements of these principles and the avenues for incorporating them into the Indian regulatory framework. 

In this final post of the series, we will discuss the remaining principles of Privacy, Transparency and Accountability. 

Principle of Privacy 

Given the diversity of AI systems, the privacy risks they pose to individuals, and to society as a whole, are also varied. These may be broadly related to:

(i) Data protection and privacy: This relates to the privacy implications of the use of data by AI systems and the data protection considerations which arise from this use. There are two broad aspects to consider. Firstly, AI systems must be tailored to the legal frameworks for data protection. Secondly, given that AI systems can be used to re-identify anonymised data, the mere anonymisation of data for the training of AI systems may not adequately protect the privacy of an individual.

a) Data protection legal frameworks: Machine learning and AI technologies have existed for decades; however, it is the explosion in the availability of data that accounts for the advancement of AI technologies in recent years. Machine learning and AI systems depend upon data for their training. Generally, the more data the system is given, the more it learns and ultimately the more accurate it becomes. The application of existing data protection frameworks to the use of data by AI systems may raise challenges.

In the Indian context, the Personal Data Protection Bill, 2019 (PDP Bill), currently being considered by Parliament, contains some provisions that may apply to some aspects of the use of data by AI systems. One such provision is Clause 22 of the PDP Bill, which requires data fiduciaries to incorporate the seven ‘privacy by design’ principles and embed privacy and security into the design and operation of their product and/or network. However, given that AI systems rely significantly on anonymised personal data, their use of data may not fall squarely within the regulatory domain of the PDP Bill. The PDP Bill does not apply to the regulation of anonymised data at large but the Data Protection Authority has the power to specify a code of practice for methods of de-identification and anonymisation, which will necessarily impact AI technologies’ use of data.

b) Use of AI to re-identify anonymised data: AI applications can be used to re-identify anonymised personal data. To safeguard the privacy of individuals, datasets composed of personal data are often anonymised through a de-identification and sampling process before being shared for the training of AI systems. However, current technology makes it possible for AI systems to reverse this process of anonymisation and re-identify people, with significant privacy implications for an individual’s personal data.
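
To make the risk concrete, the short sketch below illustrates the basic linkage-attack mechanism that AI-driven re-identification automates at a much larger scale: joining an “anonymised” dataset with a publicly available auxiliary dataset on shared quasi-identifiers. The datasets, column names and values here are entirely hypothetical.

```python
# Illustrative linkage attack on "anonymised" data (hypothetical data throughout).
import pandas as pd

# Anonymised records: direct identifiers removed, quasi-identifiers retained.
anonymised = pd.DataFrame({
    "pincode":    ["110001", "110002", "560001"],
    "birth_year": [1980, 1975, 1990],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["diabetes", "hypertension", "asthma"],
})

# A public auxiliary dataset (e.g., an electoral roll) that retains names.
auxiliary = pd.DataFrame({
    "name":       ["A. Kumar", "B. Singh", "C. Rao"],
    "pincode":    ["110001", "110002", "560001"],
    "birth_year": [1980, 1975, 1990],
    "gender":     ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = auxiliary.merge(anonymised, on=["pincode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

An AI system performs this kind of matching probabilistically across far messier, higher-dimensional data, which is why the mere removal of direct identifiers is rarely a sufficient safeguard.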

(ii) Impact on society: The impact of the use of AI systems on society relates to broader privacy considerations that arise at a societal level due to the deployment and use of AI, including mass surveillance, psychological profiling, and the use of data to manipulate public opinion. AI-based facial recognition surveillance is one such technology that has significant privacy implications for society as a whole. It enables individuals to be easily tracked and identified, and has the potential to significantly transform expectations of privacy and anonymity in public spaces.

Due to the varying nature of privacy risks and implications caused by AI systems, we will have to design various regulatory mechanisms to address these concerns. It is important to put in place a reporting and investigation mechanism that collects and analyses information on privacy impacts caused by the deployment of AI systems, and privacy incidents that occur in different contexts. The collection of this data would allow actors across the globe to identify common threads of failure and mitigate against potential privacy failures arising from the deployment of AI systems. 

To this end, we can draw on a mechanism currently in place in the context of reporting and investigating aircraft incidents, as detailed under Annex 13 of the Convention on International Civil Aviation (Chicago Convention). Annex 13 lays down the procedure for investigating aviation accidents and incidents and a reporting mechanism to share information between countries. The aim of such an accident investigation is not to apportion blame or liability, but rather to extensively study the cause of the accident and prevent future incidents.

A similar incident investigation mechanism may be employed for AI incidents involving privacy breaches. With many countries now widely developing and deploying AI systems, such a model of incident investigation would ensure that countries can learn from each other’s experiences and deploy more privacy-secure AI systems.
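
As a rough illustration, the sketch below shows the minimum fields such a cross-border AI privacy-incident report might capture, loosely modelled on the structure of aviation incident reporting. Every field name and value is an assumption made for illustration, not a prescribed standard.

```python
# Hypothetical sketch of an AI privacy-incident report record.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PrivacyIncidentReport:
    incident_id: str                  # unique reference for cross-border lookup
    reported_on: date                 # date the incident was notified
    jurisdiction: str                 # country/regulator receiving the report
    system_description: str           # the AI system involved and its context of use
    data_categories: list[str] = field(default_factory=list)       # kinds of personal data affected
    probable_cause: str = ""          # findings on how the breach occurred
    corrective_actions: list[str] = field(default_factory=list)    # steps to prevent recurrence

# Example: a record that could be shared between investigating authorities.
report = PrivacyIncidentReport(
    incident_id="IN-2021-0042",
    reported_on=date(2021, 11, 1),
    jurisdiction="IN",
    system_description="Pilot system re-identifying anonymised CCTV footage",
    data_categories=["biometric", "location"],
    probable_cause="Model trained on auxiliary data enabled linkage",
    corrective_actions=["suspend deployment", "re-anonymise training corpus"],
)
print(report.incident_id, report.probable_cause)
```

As in the aviation model, the value of such a shared record format lies in comparability: regulators in different countries can look for common threads of failure across incidents rather than treating each breach in isolation.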

Principle of Transparency

The concept of transparency is a recognised prerequisite for the realisation of ‘trustworthy AI’. The goal of transparency in ethical AI is to ensure that the functioning of an AI system and its resultant outcomes are non-discriminatory, fair and bias-mitigating, and that the AI system inspires public confidence in the delivery of safe and reliable AI innovation and development. Additionally, transparency is important in ensuring better adoption of AI technology: the more users feel that they understand the overall AI system, the more inclined and better equipped they are to use it.

The level of transparency must be tailored to its intended audience. Information about the working of an AI system should be contextualised to the various stakeholder groups interacting with and using the AI system. The Institute of Electrical and Electronics Engineers (IEEE), a global professional organisation of electronic and electrical engineers, has suggested that different stakeholder groups may require varying levels of transparency. Groups such as users, incident investigators, and the general public would thus require different standards of transparency depending upon the nature of the information relevant to their use of the AI system.

Presently, many AI algorithms are black boxes: automated decisions are taken based on machine learning over training datasets, and the decision-making process is not explainable. When such AI systems produce a decision, human end users do not know how they arrived at their conclusions. This raises two major transparency problems: the public’s perception and understanding of how AI works, and how much developers themselves understand about their own AI system’s decision-making process. In many cases, developers may not know, or may not be able to explain, how an AI system reaches its conclusions or how it has arrived at certain solutions.

This results in a lack of transparency. Some organisations have suggested opening up AI algorithms for scrutiny and ending reliance on opaque algorithms. The NITI Working Document, on the other hand, is of the view that disclosing the algorithm is not the solution, and that the focus should instead be on explaining how decisions are taken by AI systems. Given the challenges around explainability discussed above, it will be important for NITI Aayog to discuss how such an approach would be operationalised in practice.

While many countries and organisations are researching different techniques that may be useful in increasing the transparency of AI systems, one common suggestion that has gained traction in the last few years is the introduction of labelling mechanisms for AI systems. An example of this is Google’s proposal to use ‘Model Cards’, which are intended to clarify the scope of an AI system’s deployment and minimise its usage in contexts for which it may not be well suited.

Model cards are short documents which accompany a trained machine learning model. They enumerate the benchmarked evaluation of the working of an AI system in a variety of conditions, across the different cultural, demographic, and intersectional groups that may be relevant to the intended application of the AI system. They also contain clear information on an AI system’s capabilities, including the intended purpose for which it is being deployed, the conditions under which it has been designed to function, and its expected accuracy and limitations. Adopting model cards and other similar labelling requirements in the Indian context may be a useful step towards introducing transparency into AI systems.
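
A minimal sketch of what such a model card might contain is given below, represented as a Python dictionary. The fields follow the spirit of Google’s Model Cards proposal but are simplified assumptions rather than the published schema, and the model, groups and numbers are hypothetical.

```python
# Illustrative, simplified model card (all names and figures are hypothetical).
model_card = {
    "model_details": {
        "name": "loan-approval-classifier",   # hypothetical system
        "version": "1.2",
        "date": "2021-11-01",
    },
    "intended_use": {
        "primary_use": "Pre-screening of retail loan applications",
        "out_of_scope": ["criminal sentencing", "employment decisions"],
    },
    "training_conditions": "Trained on 2015-2020 applications from urban branches",
    "evaluation": {
        # Benchmarked accuracy across groups relevant to the deployment context.
        "overall_accuracy": 0.91,
        "by_group": {"men": 0.93, "women": 0.88, "rural_applicants": 0.84},
    },
    "limitations": [
        "Lower accuracy for rural applicants; not validated outside training region",
    ],
}

# A model card travels with the model so each section can be read at deployment time.
for section, content in model_card.items():
    print(f"{section}: {content}")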

Principle of Accountability

The Principle of Accountability aims to recognise the responsibility of the different organisations and individuals that develop, deploy and use AI systems. Accountability is about responsibility, answerability and trust. There is no one standard form of accountability; rather, it is dependent upon the context of the AI system and the circumstances of its deployment.

Holding individuals and entities accountable for harm caused by AI systems poses significant challenges, as AI systems generally involve multiple parties at various stages of the development process. The regulation of the adverse impacts caused by AI systems often goes beyond the existing regimes of tort law, privacy law or consumer protection law. Some degree of accountability can be achieved by enabling greater human oversight. In order to foster trust in AI and appropriately determine the party who is accountable, it is necessary to build a set of shared principles that clarify the responsibilities of each stakeholder involved with the research, development and implementation of an AI system, ranging from developers and service providers to end users.

Accountability has to be ensured at the following stages of an AI system: 

(i) Pre-deployment: It would be useful to implement an audit process before the AI system is deployed. A potential mechanism for implementing this could be a multi-stage audit process which is undertaken post design, but before the deployment of the AI system by the developer. This would involve scoping, mapping and testing a potential AI system before it is released to the public. This can include ensuring risk mitigation strategies for changing development environments and ensuring documentation of policies, processes and technologies used in the AI system.

Depending on the nature of the AI system and the potential for risk, regulatory guidelines can be developed prescribing the involvement of various categories of auditors, such as internal auditors, expert third parties and the relevant regulatory agency, at various stages of the audit. Such pre-deployment audits are aimed at closing the accountability gap which currently exists.
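
A minimal sketch of how such a staged pre-deployment audit might be structured in code is below. The stage names mirror the scoping, mapping and testing flow described above; the individual checks, field names and thresholds are hypothetical placeholders rather than prescribed criteria.

```python
# Illustrative staged pre-deployment audit pipeline (checks are placeholders).
from typing import Callable

def scoping(system: dict) -> list[str]:
    """Scoping: is the system's purpose and context of use documented?"""
    return [] if system.get("intended_use") else ["intended use not documented"]

def mapping(system: dict) -> list[str]:
    """Mapping: is the provenance of the training data recorded?"""
    return [] if system.get("training_data_provenance") else ["training data provenance missing"]

def testing(system: dict) -> list[str]:
    """Testing: does performance hold up for the worst-affected group?"""
    if system.get("worst_group_accuracy", 0.0) < 0.8:  # assumed threshold
        return ["accuracy below threshold for at least one group"]
    return []

AUDIT_STAGES: list[Callable[[dict], list[str]]] = [scoping, mapping, testing]

def pre_deployment_audit(system: dict) -> bool:
    """Run every stage; the system passes only if no stage raises findings."""
    findings = [issue for stage in AUDIT_STAGES for issue in stage(system)]
    for issue in findings:
        print("AUDIT FINDING:", issue)
    return not findings

# Example run with a hypothetical system description.
candidate = {"intended_use": "crop-yield prediction", "worst_group_accuracy": 0.75}
print("Passed:", pre_deployment_audit(candidate))
```

In a regulatory setting, each stage could be assigned to a different category of auditor (internal, third party, or the regulator), with deployment conditional on all stages passing.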

(ii) During deployment: Once the AI system has been deployed, it is important to keep auditing it to track the changes made to, and the evolution of, the AI system in the course of its deployment. AI systems constantly learn from data and evolve to become better and more accurate. It is important that the development team continuously monitors the system to capture any errors that may arise, including inconsistencies arising from input data or design features, and addresses them promptly.

(iii) Post-deployment: Ensuring accountability post-deployment in an AI system can be challenging. The NITI Working Document also recognised that assigning accountability for specific decisions becomes difficult in a scenario with multiple players in the development and deployment of an AI system. In the absence of any consequences for decisions harming others, no one party would feel obligated to take responsibility or take actions to mitigate the effect of the AI systems. Additionally, the lack of accountability also leads to difficulties in grievance redressal mechanisms which can be used to address scenarios where harm has arisen from the use of AI systems. 

The Council of Europe, in its guidelines on the human rights impacts of algorithmic systems, highlighted the need for effective remedies to ensure responsibility and accountability for the protection of human rights in the context of the deployment of AI systems. A potential model for grievance redressal is the redressal mechanism suggested in the AI4People’s Ethical Framework for a Good Society report by the Atomium – European Institute for Science, Media and Democracy. The report suggests that any grievance redressal mechanism for AI systems would have to be widely accessible and include redress for harms inflicted, costs incurred, and other grievances caused by the AI system. It must demarcate a clear system of accountability for both organisations and individuals. Of the various redressal mechanisms they have suggested, two significant mechanisms are: 

(a) AI ombudsperson: This would ensure the auditing of allegedly unfair or inequitable uses of AI, reported by users or the public at large, through an accessible judicial process. 

(b) Guided process for registering a complaint: This envisions laying down a simple process, similar to filing a Right to Information request, which can be used to bring discrepancies, or faults in an AI system to the notice of the authorities.

Such mechanisms can be evolved to address the human rights concerns and harms arising from the use of AI systems in India. 

Conclusion

In early October 2020, the Government of India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India’s vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. At the RAISE Summit, speakers underlined the need for adopting AI ethics and a human-centred approach to the deployment of AI systems. However, this conversation is still at a nascent stage and several rounds of consultations may be required to build these principles into an Indian AI governance and regulatory framework. 

As India enters into the next stage of developing and deploying AI systems, it is important to have multi-stakeholder consultations to discuss mechanisms for the adoption of principles for Responsible AI. This will enable the framing of an effective governance framework for AI in India that is firmly grounded in India’s constitutional framework. While the NITI Aayog Working Document has introduced the concept of ‘Responsible AI’ and the ethics around which AI systems may be designed, it lacks substantive discussion on these principles. Hence, in our analysis, we have explored global views and practices around these principles and suggested mechanisms appropriate for adoption in India’s governance framework for AI. Our detailed analysis of these principles can be accessed in our comments to the NITI Aayog’s Working Document Towards Responsible AI for All.

Experimenting With New Models of Data Governance – Data Trusts

This post has been authored by Shashank Mohan

India is in the midst of establishing a robust data governance framework, which will impact the rights and liabilities of all key stakeholders – the government, private entities, and citizens at large. As a parliamentary committee debates the country’s first personal data protection legislation (‘PDPB 2019’), proposals for the regulation of non-personal data and a data empowerment and protection architecture are already underway. 

As data processing capabilities continue to evolve at a feverish pace, basic data protection regulations like the PDPB 2019 might not be sufficient to address new challenges. For example, big data analytics renders traditional notions of consent meaningless as users have no knowledge of how such algorithms behave and what determinations are made about them by such technology. 

Creative data governance models, which are aimed at reversing the power dynamics in the larger data economy, are the need of the hour. Recognising these challenges, policymakers are driving the conversation on data governance in the right direction. However, they might be missing out on crucial experiments being run in other parts of the world.

As users of digital products and services increasingly lose control over data flows, various new models of data governance are being recommended, for example data trusts, data cooperatives, and data commons. Of these, one of the most promising is the data trust. 

(For the purposes of this blog post, I’ll be using the phrase data processors as an umbrella term to cover data fiduciaries/controllers and data processors in the legal sense. The word users is meant to include all data principals/subjects.)

What are data trusts?

Though there are various definitions of data trusts, one that is helpful in understanding the concept is: ‘data trusts are intermediaries that aggregate user interests and represent them more effectively vis-à-vis data processors.’ 

To solve the information asymmetries and power imbalances between users and data processors, data trusts will act as facilitators of data flow between the two parties, but on the terms of the users. Data trusts will owe fiduciary duties to their members and act in their best interests. They will have the requisite legal and technical knowledge to act on behalf of users. Instead of users making potentially ill-informed decisions over data processing, data trusts will make such decisions on their behalf, based on pre-decided factors like a bar on third-party sharing, and in their best interests. For example, data trusts can be to users what mutual fund managers are to potential investors in capital markets. 

Currently, in a typical transaction in the data economy, users who wish to use a particular digital service have neither the knowledge to understand the possible privacy risks nor the negotiating power to seek change. Data trusts, with a fiduciary responsibility towards users, specialised knowledge, and multiple members, might succeed in tilting the power dynamics back in favour of users. Data trusts might be relevant from the perspective of both the protection and the controlled sharing of personal as well as non-personal data. 

(MeitY’s Non-Personal Data Governance Framework introduces the concept of data trustees and data trusts into India’s larger data governance and regulatory framework. But this applies only to the governance of ‘non-personal data’ and not personal data, as is being recommended here. CCG’s comments on MeitY’s Non-Personal Data Governance Framework can be accessed – here)

Challenges with data trusts

Though creative solutions like data trusts seem promising in theory, they must be thoroughly tested and experimented with before wide-scale implementation. Firstly, such a new form of trust, where the subject matter of the trust is data, is not envisaged by Indian law (see section 8 of the Indian Trusts Act, 1882, which provides for only property to be the subject matter of a trust). Current and even proposed regulatory structures do not account for the regulation of institutions like data trusts (the non-personal data governance framework proposes data trusts, but only as data sharing institutions and not as data managers or data stewards, as is being suggested here). Thus, data trusts will need to be codified into Indian law to be an operative model. 

Secondly, data processors might not embrace the notion of data trusts, as it may result in a loss of market power. Larger tech companies, which have existing stores of data on numerous users, may not be sufficiently incentivised to engage with models of data trusts. Structures will need to be built in a way that data processors are incentivised to participate in such novel data governance models. 

Thirdly, the business or operational models of data trusts will need to be aligned to the interests of their members, i.e. users. Data trusts will require money to operate, and for-profit entities may not have the best interests of users in mind. Subscription-based models, whether for profit or not, might fail as users are habituated to free services. Donation-based models might need to be monitored closely for added transparency and accountability. 

Lastly, other issues like creation of technical specifications for data sharing and security, contours of consent, and whether data trusts will help in data sharing with the government, will need to be accounted for. 

Privacy centric data governance models

At this early stage of developing data governance frameworks suited to Indian needs, policymakers are at a crucial juncture of experimenting with different models. These models must be centred around the protection and preservation of privacy rights of Indians, both from private and public entities. Privacy must also be read in its expansive definition as provided by the Supreme Court in Justice K.S. Puttaswamy vs. Union of India. The autonomy, choice, and control over informational privacy are crucial to the Supreme Court’s interpretation of privacy. 

(CCG’s privacy law database, which tracks privacy jurisprudence globally and currently contains information from India and Europe, can be accessed – here.)

Building an AI governance framework for India

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a “Working Document: Towards Responsible AI for All” (“NITI Working Document/Working Document”). The Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

The Working Document highlights the potential of Artificial Intelligence (“AI”) in the Indian context. It attempts to identify the challenges that will be faced in the adoption of AI and makes some recommendations on how to address these challenges. The Working Document emphasises the economic potential of the adoption of AI in boosting India’s annual growth rate, its potential for use in the social sector (‘AI for All’) and the potential for India to export relevant social sector products to other emerging economies (‘AI Garage’). 

However, this is not the first time that the NITI Aayog has discussed the large-scale adoption of AI in India. In 2018, the NITI Aayog released a discussion paper on the “National Strategy for Artificial Intelligence” (“National Strategy”). Building upon the National Strategy, the Working Document attempts to delineate ‘Principles for Responsible AI’ and identify relevant policy and governance recommendations. 

Any framework for the regulation of AI systems needs to be based on clear principles. The ‘Principles for Responsible AI’ identified by the Working Document include the principles of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. While the NITI Working Document introduces these principles, it does not go into any substantive details on the regulatory approach that India should adopt and what the adoption of these principles into India’s regulatory framework would entail. 

In a series of posts, we will discuss the legal and regulatory implications of the proposed Working Document and more broadly discuss the regulatory approach India should adopt to AI and the principles India should embed in it. In this first post, we map out key considerations that should be kept in mind in order to develop a comprehensive regulatory regime to govern the adoption and deployment of AI systems in India. Subsequent posts will discuss the various ‘Principles for Responsible AI’, their constituent elements and how we should think of incorporating them into the Indian regulatory framework.

Approach to building an AI regulatory framework 

While the adoption of AI has several benefits, there are several potential harms and unintended risks if the technology is not assessed adequately for its alignment with India’s constitutional principles and its impact on the safety of individuals. Depending upon the nature and scope of the deployment of an AI system, its potential risks can include the discriminatory impact on vulnerable and marginalised communities, and material harms such as the negative impact on the health and safety of individuals. In the case of deployments by the State, risks include violation of the fundamental rights to equality, privacy, freedom of assembly and association, and freedom of speech and expression. 

We highlight some of these regulatory considerations below:

Anchoring AI regulatory principles within the constitutional framework of India

The use of AI systems has raised concerns about their potential to violate multiple rights protected under the Indian Constitution such as the right against discrimination, the right to privacy, the right to freedom of speech and expression, the right to assemble peaceably and the right to freedom of association. Any regulatory framework put in place to govern the adoption and deployment of AI technology in India will have to be in consonance with its constitutional framework. While the NITI Working Document does refer to the idea of the prevailing morality of India and its relation to constitutional morality, it does not comprehensively address the idea of framing AI principles in compliance with India’s constitutional principles.

For instance, the government is seeking to acquire facial surveillance technology, and the National Strategy discusses the use of AI-powered surveillance applications by the government to predict crowd behaviour and for crowd management. The use of AI powered surveillance systems such as these needs to be balanced with their impact on an individual’s right to freedom of speech and expression, privacy and equality. Operational challenges surrounding accuracy and fairness in these systems raise further concerns. Considering the risks posed to the privacy of individuals, the deployment of these systems by the government, if at all, should only be done in specific contexts for a particular purpose and in compliance with the principles laid down by the Supreme Court in the Puttaswamy case.

In the context of AI’s potential to exacerbate discrimination, it would be relevant to discuss the State’s use of AI systems for the sentencing of criminals and assessing recidivism. AI systems are trained on existing datasets. These datasets tend to contain historically biased, unequal and discriminatory data. We have to be cognizant of the propensity for historical biases and discrimination getting imported into AI systems and their decision making. This could further reinforce and exacerbate the existing discrimination in the criminal justice system towards marginalised and vulnerable communities, and result in a potential violation of their fundamental rights.

The National Strategy acknowledges the presence of such biases and proposes a technical approach to reduce bias. While such attempts are appreciable in their effort to rectify the situation and yield fairer outcomes, this approach disregards the fact that these datasets are biased because they arise from a biased, unequal and discriminatory world. As we seek to build effective regulation to govern the use and deployment of AI systems, we have to remember that these are socio-technical systems that reflect the world around us and embed the biases, inequality and discrimination inherent in Indian society. We have to keep this broader Indian social context in mind as we design AI systems and create regulatory frameworks to govern their deployment. 

While the Working Document introduces principles for responsible AI such as equality, inclusivity and non-discrimination, and privacy and security, there needs to be substantive discussion around incorporating these principles into India’s regulatory framework in consonance with constitutionally guaranteed rights.

Regulatory Challenges in the adoption of AI in India

As India designs a regulatory framework to govern the adoption and deployment of AI systems, it is important that we keep the following in focus: 

  • Heightened threshold of responsibility for government or public sector deployment of AI systems

The EU is considering adopting a risk-based approach for the regulation of AI, with heavier regulation for high-risk AI systems. The extent of risk to factors such as safety, consumer rights and fundamental rights is assessed by looking at the sector of deployment and the intended use of the AI system. Similarly, India must consider the adoption of a higher regulatory threshold for the use of AI by at least government institutions, given their potential for impacting citizens’ rights. Government uses of AI systems that have the potential of severely impacting citizens’ fundamental rights include the use of AI in the disbursal of government benefits, surveillance, law enforcement and judicial sentencing.

  • Need for overarching principles based AI regulatory framework

Different sectoral regulators are currently evolving regulations to address the specific challenges posed by AI in their sector. While it is vital to harness the domain expertise of a sectoral regulator and encourage the development of sector-specific AI regulations, such piecemeal development of AI principles can lead to fragmentation in the overall approach to regulating AI in India. Therefore, to ensure uniformity in the approach to regulating AI systems across sectors, it is crucial to put in place a horizontal overarching principles-based framework. 

  • Adaptation of sectoral regulation to effectively regulate AI

In addition to an overarching regulatory framework which forms the basis for the regulation of AI, it is equally important to envisage how this framework would work with horizontal or sector-specific laws such as consumer protection law and the applicability of product liability to various AI systems. Traditionally, consumer protection and product liability regulatory frameworks have been structured around fault-based claims. However, given the challenges concerning explainability and transparency of decision making by AI systems, it may be difficult to establish the presence of defects in products and, for an individual who has suffered harm, to provide the necessary evidence in court. Hence, consumer protection laws may have to be adapted to stay relevant in the context of AI systems. Even sectoral legislation regulating the use of motor vehicles, such as the Motor Vehicles Act, 1988, would have to be modified to enable and regulate the use of autonomous vehicles and other AI transport systems. 

  • Contextualising AI systems for both their safe development and use

To ensure the effective and safe use of AI systems, they have to be designed, adapted and trained on relevant datasets depending on the context in which they will be deployed. The Working Document envisages India being the AI Garage for 40% of the world – developing AI solutions in India which can then be deployed in other emerging economies. Additionally, India will likely import AI systems developed in countries such as the US, EU and China to be deployed within the Indian context. Both scenarios involve the use of AI systems in a context distinct from the one in which they have been developed. Without effectively contextualising socio-technical systems like AI systems to the environment they are to be deployed in, there are enhanced safety, accuracy and reliability concerns. Regulatory standards and processes need to be developed in India to ascertain the safe use and deployment of AI systems that have been developed in contexts that are distinct from the ones in which they will be deployed. 

The NITI Working Document is the first step towards an informed discussion on the adoption of a regulatory framework to govern AI technology in India. However, there is a great deal of work to be done. Any regulatory framework developed by India to govern AI must balance the benefits and risks of deploying AI, diminish the risk of any harm and have a consumer protection framework in place to adequately address any harm that may arise. Besides this, the regulatory framework must ensure that the deployment and use of AI systems are in consonance with India’s constitutional scheme.

India’s Cybersecurity Budget FY 2013-14 to FY 2019-20: Analysis of Budgetary Allocations for Cybersecurity and Related Activities

This is an edited excerpt of Part V and Annexure ‘C’ of CCG’s Comments to the National Security Council Secretariat on the National Cyber Security Strategy 2020 (NCSS 2020). The full text of the Comments can be accessed here.

Note on Research Methodology

CCG compiled the data on allocations (budgeted and revised) and actual expenditure from the Demands for Grants of Ministries, as approved by Parliament and presented in the Annual Expenditure Budget, for the various ministries and their respective departments which are related to cybersecurity, from FY 2013-14 to FY 2019-20. 

The departments have been identified from publicly available information represented in the organograms presented as Annexure ‘B’. We understand a ‘relevant department’ to mean those departments which are either directly related to cybersecurity and/or support the functioning of the technical and security aspects of internet governance at large.

We have then identified those budget heads under the Union Budgets for FY 2013-14 through FY 2019-20 which correspond most closely to the departments identified and highlighted in Annexure ‘B’, in order to calculate the total allocation to ministries for cybersecurity-related activities. We then analyse this data under four broad categories:

(I) Department Wise Allocation: Allocations to the departments that are directly related to expenditure on cybersecurity are calculated under this heading. Various expenditures under the Ministry of Electronics and Information Technology (MeitY), the Department of Telecommunication (DoT), and the Ministry of Home Affairs are tabulated for this. 

Under MeitY, we have included the budget heads for

  1. Computer Emergency Response Team (CERT-IN),
  2. Centre for Development of Advanced Computing (C-DAC),
  3. Centre for Materials for Electronics and IT (C-MET),
  4. Society for Applied Microwave Electronics Engineering and Research (SAMEER),
  5. Standardization Testing and Quality Certification (STQC),
  6. Controller of Certifying Authorities (CCA), and
  7. Foreign Trade and Export Promotion, and
  8. Certain components of the Digital India Initiative, namely:
  • Manpower Development,
  • National Knowledge Network,
  • Promotion of electronics and IT HW manufacturing,
  • Cybersecurity projects (which includes National Cyber Coordination centre and others),
  • Research and Development in Electronics/IT,
  • Promotion of IT/ITeS industries,
  • Promotion of Digital Payment, and
  • Pradhan Mantri Gramin Digital Saksharta Abhiyan (PMGDISHA).

Under the Ministry of Communications, our focus was only on the Department of Telecommunications. We considered the budget allocated to the following heads to come up with the total Department budget:

  1. Telecom Regulatory Authority of India (TRAI),
  2. Human Resource Management under National Institute of Communication Finance,
  3. Wireless Planning and Coordination,
  4. Telecom Engineering Centre,
  5. Technology Development and Investment Promotion,
  6. South Asia Sub-Regional Economic Cooperation (SASEC) under Information Highway Project,
  7. Telecom Testing and Security Certification Centre,
  8. Telecom Computer Emergency Response Team,
  9. Central Equipment Identity Register (CEIR),
  10. 5G Connectivity Test Bed,
  11. Promotion of Innovation and Incubation of Future Technologies for Telecom Sector,
  12. Centre for Development of Telematics (C-DoT), and
  13. Labour, Employment and Skill Development.

Under the Ministry of Home Affairs, the funds allocated for the following budget heads have been included:

  1. Education, Training and Research purposes,
  2. Criminology and Forensic Science,
  3. Modernisation of Police Forces and Crime and Criminal Tracking Network and Systems (CCTNS),
  4. Indian Cyber Crime Coordination Centre, and
  5. Technical and Economic Cooperation with Other Countries.

All these budget heads were tabulated to come up with the total for department-wise allocation. Along with the departments mentioned under ‘Supporting Departments’, all these departments were again classified on the basis of their functions and activities, and analysed under (III).
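To make the tabulation concrete, here is a minimal sketch in Python of the department-wise totalling described above. The head names mirror the lists in this note, but every amount is a hypothetical placeholder, not an actual Demands for Grants figure.

```python
# Minimal sketch of the department-wise tabulation described above.
# Head names mirror the lists in this note; all amounts are
# hypothetical placeholders, not actual Demands for Grants figures.

meity_heads = {
    "CERT-IN": 100.0,  # Rs. crore (illustrative)
    "C-DAC": 250.0,
    "STQC": 60.0,
    "CCA": 15.0,
}

mha_heads = {
    "Indian Cyber Crime Coordination Centre": 80.0,
    "Modernisation of Police Forces / CCTNS": 300.0,
}

def department_total(heads):
    """Sum all budget heads identified for a department."""
    return sum(heads.values())

department_totals = {
    "MeitY": department_total(meity_heads),
    "MHA": department_total(mha_heads),
}
print(department_totals)  # {'MeitY': 425.0, 'MHA': 380.0}
```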

(II) Supporting Department Wise Allocation: Certain expenditures of the Ministry of Defence, the Ministry of External Affairs, the Department of Telecommunications, and the Ministry of Home Affairs can potentially be used for cybersecurity-related activities, but it is not possible to infer from the Demands for Grants the share of cyber in the total allocation; we have therefore treated them as ‘allocations to supporting departments’. In this data, the total funds indicated may not be directly related to cybersecurity efforts, but they contribute towards the larger security and governance framework, which enables the creation of a secure ecosystem for cyber. These headings are tabulated under this section.

Under the Ministry of Defence, the following heads were considered to contribute towards the larger security and governance framework in cyberspace:

  1. Navy/Joint Staff,
  2. Ordnance Factories R&D,
  3. Research and Development, including the Research and Development component of R&D head,
  4. Capital Outlay on R&D, and
  5. Technology Development and Assistance for Prototype Development under Make Procedure

Under the Ministry of External Affairs, we considered the following heads as important contributors:

  1. The Special Diplomatic Expenditure,
  2. Expenditure for International Cooperation,
  3. Expenditure for Technical and Economic Cooperation with other Countries, and
  4. Other Expenditure of Ministry

Under the Department of Telecommunications, again, there were several heads that we considered not directly related to cybersecurity but which contribute significantly towards it. These include allocations for:

  1. Defence Spectrum,
  2. Capital Outlay on Telecommunication and Electronic Industries,
  3. Capital Outlay on Other Communication Services, and
  4. Universal Service Obligation Fund (USOF)

Under the Ministry of Home Affairs, departments involved with defence, intelligence and law enforcement are important to consider for cybersecurity. We have thus included the allocations for:

  1. Intelligence Bureau,
  2. NATGRID,
  3. Delhi Police, and
  4. Capital Outlay on Police.

(III) Activity Wise Allocation: For further analysis, we have categorized the expenditures mentioned in Department Wise Allocation into five categories, each of which has been identified as a constituent element of the three Pillars of Strategy, namely:

  1. Human Resource Development Component (Strengthen)
  2. Technical Research and Development Component, Capacity Building (Strengthen/Synergise)
  3. International Cooperation and Investment Promotion Component (Secure/Synergise)
  4. Standardisation, Quality Testing and Certification Component (Strengthen)
  5. Active Cyber Incident Response/Cyber Defence Operations and Security Component (Secure/Strengthen)

The totals for these are calculated to identify whether any trends or patterns emerge in expenditure by the ministries. Apart from the ministries covered in classifications (I) and (II), we have also included the budgets of two other heads/departments: (i) the allocation towards corporate data management under the authority of the Ministry of Corporate Affairs, which has been included in category (5) above; and (ii) the allocation towards technical and economic cooperation with other countries for the Department of Economic Affairs under the Ministry of Finance, which has been included in category (3) above.

(IV) Ministries’ Share over Financial Years: The totals tabulated under department-wise and supporting-department-wise allocations are then used to calculate the share of the budget allocated to cybersecurity and related activities relative to the total budget allocation of each ministry (a computation sketch follows the list below). The ministries taken into account, which contribute significantly to cybersecurity and related activities, are:

  1. Department of Telecommunication (under the Ministry of Communications),
  2. Ministry of Defence,
  3. Ministry of External Affairs,
  4. Ministry of Electronics and Information Technology,
  5. Ministry of Home Affairs, and
  6. Department of Science and Technology (under the Ministry of Science and Technology).
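As a rough illustration of classification (IV), the sketch below computes each ministry’s cybersecurity share as a simple percentage of its total budget. The ministry totals are hypothetical placeholders (carried over from the sketch under classification (I)), not actual budget figures.

```python
# Sketch of classification (IV): the share of a ministry's total
# budget devoted to cybersecurity-related heads. All figures are
# hypothetical placeholders.

ministry_budgets = {"MeitY": 6000.0, "MHA": 100000.0}  # total allocations (Rs. crore)
cyber_totals = {"MeitY": 425.0, "MHA": 380.0}          # from classifications (I) and (II)

shares = {
    ministry: 100 * cyber_totals[ministry] / ministry_budgets[ministry]
    for ministry in ministry_budgets
}

for ministry, pct in shares.items():
    print(f"{ministry}: {pct:.2f}% of total budget")
# MeitY: 7.08% of total budget
# MHA: 0.38% of total budget
```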

Ministry-wise Allocations and Expenditure on Cybersecurity and Related Activities FY 2013-14 to FY 2019-20

Figure 9 depicts actual expenditure (from FY 2013-14 to FY 2017-18), the Revised Expenditure (RE) for FY 2018-19 and the Budgeted Expenditure (BE) for FY 2019-20. With the exception of FY 2016-17, we can see a clear trend of increasing allocations for expenditure towards cybersecurity-related activities, especially for the DoT. It is relevant to point out that this representation also includes the expenditure on Departments playing a supporting role in cybersecurity activities, such as the IDS/Joint Staff and R&D under the Ministry of Defence (MoD), as well as the MEA’s expenditure on international technical cooperation. As the expenditure incurred on cybersecurity-related activities alone cannot be inferred from these budget heads, they have been treated as Departments playing a supporting role for cybersecurity efforts and included in overall expenditure.

Figure 9: Ministry-wise Total Expenditure on Cybersecurity and Related Activities
FY 2013-14 to FY 2019-20

Figure 10 is a narrower subset of the expenses indicated in Figure 9. It represents the allocations to Departments in Ministries that have been entrusted with core activities that contribute towards cybersecurity operations, R&D, e-Governance and internet governance at large. These include, to name a few, the promotion of electronics and IT hardware manufacturing and other initiatives such as Digital India, C-DAC, NCCC and other similar programmes under MeitY, TRAI, C-DoT and the 5G test bed under the authority of the DoT and MHA’s expenses towards modernization of police forces, forensics, and initiatives such as the Indian Cyber Crime Coordination Centre.

Figure 10 reveals an immediate upsurge in such allocations during and immediately after the formulation of the National Cyber Security Policy 2013, after which the allocations begin to dwindle from FY 2014-15. We can also note that, with the exception of FY 2015-16, actual expenditure is consistently lower than the Budgeted Expenditure allocated to all these Ministries for cybersecurity-related activities.

Figure 10: Ministry-wise Total Expenditure on Cybersecurity and Related Activities
FY 2013-14 to FY 2019-20

It is interesting to note that if we convert the absolute figures represented in Figure 10 into percentages and represent the same data set as such, a remarkable consistency and a clear pattern emerge in burden-sharing between the three Ministries (MHA, MeitY and DoT under the Ministry of Communications).

Figure 11 depicts the same allocations indicated as absolute figures in Figure 10 as percentages of the total expenditure on core cybersecurity activities. The MHA consistently bears the bulk of expenses on cybersecurity-related activities, with a clear emphasis on cyber crimes. The remaining half appears to be divided between MeitY and DoT more or less equally. The FY 2015-16 allocations and the actual expenditure in FY 2014-15 are the only exceptions to this equal distribution.

Figure 11: Ministry-wise Total Allocation for Cybersecurity and Related Activities
FY 2013-14 to FY 2019-20

Activity-wise Allocation and Expenditure on Cybersecurity

To further analyse how these budgetary allocations are being utilized, we have re-categorized the expenditures mentioned in the Department/Ministry wise allocations into five categories, each of which has been identified as a constituent element of the three Pillars of Strategy, namely:

  1. Human Resource Development Component (Strengthen)
  2. Technical Research and Development Component, Capacity Building (Strengthen/Synergise)
  3. International Cooperation and Investment Promotion Component (Secure/Synergise)
  4. Standardisation, Quality Testing and Certification Component (Strengthen)
  5. Active Cyber Incident Response/Cyber Defence Operations and Security Component (Secure/Strengthen)

The total expenses incurred under these allocations are calculated to identify any trends or patterns in the activities being prioritized, according to the actual expenditure incurred by the relevant ministries. It is important to note that none of these categories includes any expenses earmarked for cyber defence operations under the MoD, as the budget heads in their current format do not permit drawing such an inference.

In this reclassification, we have included one budget head each for two other Departments that do not figure in the data represented in Figures 9, 10 or 11. Namely, these are (a) the allocation towards corporate data management under the authority of the Ministry of Corporate Affairs, which has been included in category (5) indicated above and (b) the allocation towards technical and economic cooperation with other countries for the Department of Economic Affairs under the Ministry of Finance, which has been included in category (3) indicated above.
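The reclassification itself is mechanical, and a minimal sketch may help: each budget head is mapped to one of the five activity categories and the amounts are summed per category. The mapping shown is only a plausible reading of the lists above, and the amounts remain hypothetical placeholders.

```python
# Sketch of the activity-wise reclassification: map each budget head
# to one of the five activity categories, then sum per category.
# Both the mapping and the amounts are illustrative, not official.

from collections import defaultdict

category_of_head = {
    "CERT-IN": "Active Cyber Incident Response and Security (5)",
    "STQC": "Standardisation, Quality Testing and Certification (4)",
    "Manpower Development": "Human Resource Development (1)",
    "C-DAC": "Technical Research and Development (2)",
}

expenditure = {
    "CERT-IN": 100.0,
    "STQC": 60.0,
    "Manpower Development": 40.0,
    "C-DAC": 250.0,
}

activity_totals = defaultdict(float)
for head, amount in expenditure.items():
    activity_totals[category_of_head[head]] += amount

for category, total in sorted(activity_totals.items()):
    print(f"{category}: {total}")
```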

Figure 12 represents activity-wise trends in these Ministries’ actual expenditure. The figures for FY 2018-19 and FY 2019-20 represent the RE and BE for those years, respectively. It is not surprising that the expenditure on international cooperation and investment promotion towers over all other activities, as the allocated expenses would contribute to overall cooperation efforts at the international level and the promotion of investment broadly, and not only cybersecurity. Nonetheless, these are crucial contributions to enhancing India’s cybersecurity posture at home and abroad. For a clearer analysis, we remove the indicator for expenses towards international cooperation and investment promotion in Figure 13.

Figure 12: Activity-wise Expenditure for Cybersecurity
FY 2013-14 to FY 2019-20
Figure 13: Activity-wise Expenditure for Cybersecurity
FY 2013-14 to FY 2019-20 (excluding international cooperation and investment promotion)

From Figure 13, we can infer which of the four activities at the core of the Government’s cybersecurity efforts are being prioritized in terms of the allocation of budgetary resources. Clearly, the emphasis on equipment testing and certification needs to be sharpened. There is an apparent tension between the funds made available for active cybersecurity operations and programmes on the one hand, and investments in human resource development on the other.

We submit that in both these areas, the Government must look to the private sector to create synergies and supplement the financial resources available for these particular activities. We also recommend that the expenditure earmarked for quality testing, the development of technical standards and certification be increased and accorded greater priority than before.

Share of Ministries’ Budget Allocated to Cybersecurity and Related Activities

If we contextualize the utilization of funds made available for cybersecurity-related activities against the total allocations to the relevant Ministries, there is no identifiable trend in the expenditure patterns of the MEA, MeitY and DoT. Figure 14 represents the total expenditure on cybersecurity-related activities as a percentage of the total expenses allocated to the relevant Ministry. The priority accorded to cybersecurity-related activities, as reflected in the financial resources devoted to this area, appears to fluctuate over time. The contribution of the Department of Science and Technology towards R&D in cybersecurity has been consistently low, almost negligible; this changed only with the establishment of the National Mission on Interdisciplinary Cyber Physical Systems in FY 2018-19. The MHA’s share of expenditure on cybersecurity activities appears relatively more consistent, and could potentially be leveraged to create synergies for the rationalization of expenditure across Ministries.

Figure 14: Share of Cybersecurity-related Activities in Total Budget Allocated to Ministries

Budget for NCSS 2020?

In anticipation of the National Cyber Security Strategy 2020, expected to be released soon, we will be closely monitoring the Union Budget for FY 2020-21 for fresh allocations to the relevant departments indicated in our analysis. We will also be on the lookout for fresh allocations that may be relevant to various components of the NCSS 2020. Watch this space for more on India’s Cybersecurity Budget 2020, coming soon!

The Pegasus Hack: A Hark Back to the Wassenaar Arrangement

By Sharngan Aravindakshan

The world’s most popular messaging application, WhatsApp, recently revealed that a significant number of Indians were among the targets of Pegasus, a sophisticated spyware that operates by exploiting a vulnerability in WhatsApp’s video-calling feature. It has also come to light that WhatsApp, working with the University of Toronto’s Citizen Lab, an academic research organization focused on digital threats to civil society, has traced the source of the spyware to NSO Group, an Israeli company well known for developing and selling hacking and surveillance technology to governments with questionable human rights records. WhatsApp’s lawsuit against NSO Group in a federal court in California also specifically alludes to NSO Group’s clients, “which include but are not limited to government agencies in the Kingdom of Bahrain, the United Arab Emirates, and Mexico as well as private entities.” The complaint filed by WhatsApp against NSO Group can be accessed here.

In this context, we examine the shortcomings of international efforts in limiting or regulating the transfers or sale of advanced and sophisticated technology to governments that often use it to violate human rights, as well as highlight the often complex and blurred lines between the military and civil use of these technologies by the government.

The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (WA) exists for this precise reason. Established in 1996, and voluntary/non-binding in nature,[i] its stated mission is “to contribute to regional and international security and stability, by promoting transparency and greater responsibility in transfers of conventional arms and dual-use goods and technologies, thus preventing destabilizing accumulations.”[ii] Military advancements across the globe – significant among which were the Indian and Pakistani nuclear tests, rocket tests by India and South Korea, and the use of chemical warfare during the Iran-Iraq war – were all catalysts in the formulation of this multilateral attempt to regulate the transfer of advanced technologies capable of being weaponized.[iii] With more and more incidents coming to light of authoritarian regimes utilizing advanced Western technology to violate human rights, the WA was amended to bring within its ambit “intrusion software” and “IP network surveillance systems” as well.

Wassenaar: A General Outline

With a current membership of 42 countries (India being the latest to join, in late 2017), the WA is the successor to the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), which had been established by the Western Bloc in order to prevent weapons and technology exports to the Eastern Bloc, or what was then known as the Soviet Union.[iv] However, unlike its predecessor, the WA does not target any nation-state, and its members cannot exercise any veto power over other members’ export decisions.[v] Notably, while Russia is a member, Israel and China are not.

The WA lists out the different technologies in the form of “Control Lists”, primarily consisting of the “List of Dual-Use Goods and Technologies” (the Basic List) and the “Munitions List”.[vi] The term “dual-use technology” typically refers to technology that can be used for both civilian and military purposes.[vii] The Basic List consists of ten categories[viii]:

  • Special Materials and Related Equipment (Category 1); 
  • Materials Processing (Category 2); 
  • Electronics (Category 3); 
  • Computers (Category 4); 
  • Telecommunications (Category 5, Part 1); 
  • Information Security (Category 5, Part 2); 
  • Sensors and Lasers (Category 6); 
  • Navigation and Avionics (Category 7); 
  • Marine (Category 8); 
  • Aerospace and Propulsion (Category 9). 

Additionally, the Basic List also has the Sensitive and Very Sensitive Lists which include technologies covering radiation, submarine technology, advanced radar, etc. 

An outline of the WA’s principles is provided in its Guidelines & Procedures, including the Initial Elements. Typically, participating countries enforce controls on the transfer of the listed items by enacting domestic legislation requiring licenses for the export of these items, and are also expected to ensure that the exports “do not contribute to the development or enhancement of military capabilities which undermine these goals, and are not diverted to support such capabilities.”[ix]

While the Guidelines & Procedures document does not expressly proscribe the export of the specified items to non-WA countries, members are expected to notify other participants twice a year if a license under the Dual-Use List is denied for export to any non-WA country.[x]

Amid Concerns of Violations of Civil Liberties

Unlike conventional weapons, cyberspace and information technology are sectors in which the government does not yet have a monopoly on expertise. In what can only be termed a “cyber-arms race”, it would be fair to say that most governments are even now busily acquiring technology from private companies to enhance their cyber-capacity, which includes surveillance technology for intelligence-gathering efforts. This, by itself, is plain realpolitik.

However, amid this weaponization of cyberspace, there were growing concerns that this technology was being purchased by authoritarian or repressive governments for use against their citizens. For instance, Eagle, monitoring technology owned by Amesys (a unit of the French firm Bull SA), Boeing Co.’s internet-filtering Narus, and China’s ZTE Corp. all contributed to the surveillance efforts of Col. Gaddafi’s regime in Libya. Surveillance equipment sold by Siemens AG and maintained by Nokia Siemens Networks was used against human rights activists in Bahrain. These instances, as part of a wider pattern that came to the spotlight, galvanized the WA countries in 2013 to include “intrusion software” and “IP network surveillance systems” in the Control Lists in an attempt to limit the transfer of these technologies to known repressive regimes.

Unexpected Consequences

The 2013 Amendment to the Control Lists was the subject of severe criticism by tech companies and civil society groups across the board. While the intention behind it was recognized as laudable, the terms “intrusion software” and “IP network surveillance system” were widely viewed as over-broad, with the unintended consequence of looping in both legitimate and illegitimate uses of technology. The problems pointed out by cybersecurity experts are manifold and stem from a misunderstanding of how cybersecurity works.

The inclusion of these terms, which was meant to regulate surveillance based on computer code/programmes, also has the consequence of bringing within its ambit legitimate and often beneficial uses of these technologies, including, on one view, even antivirus technology. Cybersecurity research and development often involves making use of “zero-day exploits”, or vulnerabilities in the developed software, which, when discovered and reported by a “bounty hunter”, are typically bought by the company owning the software. This helps the company immediately develop a “patch” for the reported vulnerability. These transactions are often necessarily cross-border. Experts complained that if directly transposed into domestic law, the changes would have a chilling effect on the vital exchange of information and research in this area, posing a major hurdle for advances in cybersecurity and making cyberspace globally less safe. A prime example is Hewlett-Packard’s (HP) withdrawal from Pwn2Own – a computer hacking contest held annually at the PacSec security conference, where contestants are challenged to hack into / exploit vulnerabilities in widely used software. HP, which sponsored the event, was forced to withdraw in 2015, citing, among other reasons, the “complexity in obtaining real-time import/export licenses in countries that participate in the Wassenaar Arrangement”. The member nation in this case was Japan.

After facing fierce opposition at home, the United States decided not to implement the WA amendment and instead argued for a reversal at the next Plenary session of the WA, which failed. Other jurisdictions, including the EU and Japan, have implemented the WA amendment’s export controls with varying degrees of success.

The Pegasus Hack, India and the Wassenaar

Considering that many of the Indians identified as victims of the Pegasus hack were either journalists or human rights activists, several of them associated with the highly contentious Bhima-Koregaon case, speculation is rife that the Indian government is among those purchasing and utilizing this kind of advanced surveillance technology to spy on its own citizens. Coupled with the NSO Group’s public statement that its “sole purpose” is to “provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime”, it appears there are credible allegations that the Indian government was involved in the hack. The government’s evasiveness in responding, and its insistence that so-called “standard operating procedures” were followed, are less than reassuring.

While India’s entry into the WA as its 42nd member in late 2017 has certainly elevated its status in the international arms control regime by granting it access to three of the world’s four main arms-control regimes (the others being the Nuclear Suppliers Group (NSG), the Missile Technology Control Regime (MTCR) and the Australia Group), the Pegasus hack incident and its apparent connection to the Indian government show that India’s commitment to the principles underlying the WA is doubtful. The purpose of including “intrusion software” and “IP network surveillance systems” in the WA’s Control Lists by way of the 2013 Amendment, whatever its unintended consequences for legitimate uses of such technology, was to prevent governmental purchases exactly like this one. Hence, even though the WA does not prohibit the purchase of surveillance technology from a non-member, the Pegasus incident is arguably still a serious detraction from India’s commitment to the WA, even if not an explicit violation.

Military Cyber-Capability vs. Law Enforcement Cyber-Capability

Given what we know so far, it appears that highly sophisticated surveillance technology has also come into the hands of local law enforcement agencies. Had it been disclosed that the Pegasus software was being utilized by a military wing against external enemies – by, say, even the newly created Defence Cyber Agency – it would probably have caused fewer ripples. In fact, it might even have come off as reassuring evidence of the country’s advanced cyber-capabilities. However, the idea of such advanced, sophisticated technologies at the easy disposal of local law enforcement agencies is cause for worry. This is because while the domain of the military is traditionally external, the domain of law enforcement agencies is internal, i.e., the citizenry. There is tremendous scope for misuse by such authorities, including increased targeting of minorities. The recent incident of police officials in Hyderabad randomly collecting people’s biometric data, including fingerprints and photographs, only underscores this point. Even abroad, there are already ongoing efforts to limit the use of surveillance technologies by local law enforcement such as the police.

The conflation of technology use by military and civil agencies is a problem created, at least in part, by the complex and often dual-use nature of technology. While dual-use technology is recognized by the WA, this is not a problem the WA is able to solve. As explained above, dual-use technology is technology that can be used for both civil and military purposes. The demands of realpolitik, the increase in cyber-terrorism and the manifold ways in which a nation’s security can be compromised in cyberspace make it necessary for any government in today’s world to increase and improve its cyber-military capacity by acquiring such technology. After all, a government that acquires surveillance technology undoubtedly increases the effectiveness of its intelligence gathering and, ergo, its security efforts. But at the same time, the government also acquires the power to spy on its own citizens, which can easily cascade into more targeted violations.

Governments must resist the impulse to turn such technology on their own citizens. In the Indian scenario, citizens have been granted a ring of protection by way of the Puttaswamy judgment, which explicitly recognizes the right to privacy as a fundamental right. Interception and surveillance by the government, while currently limited by laid-down protocols, are not regulated by any dedicated law. While there are calls for urgent legislation on the subject, few deal with the technology procurement processes involved. It has also now emerged that Chhattisgarh’s State Government has set up a panel to look into allegations that NSO officials had a meeting with the state police a few years ago. This raises questions of oversight in the relevant authorities’ public procurement processes, apart from their legal authority to actually carry out domestic surveillance by exploiting zero-day vulnerabilities. It is becoming evident that any law dealing with surveillance will need to ensure transparency and accountability in the procurement and use of the different kinds of invasive technology adopted by Central or State authorities to carry out such surveillance.


[i] A Guide to the Wassenaar Arrangement, Daryl Kimball, Arms Control Association, December 9, 2013, https://www.armscontrol.org/factsheets/wassenaar, last accessed on November 27, 2019.

[ii] Ibid.

[iii] Data, Interrupted: Regulating Digital Surveillance Exports, Tim Maurer and Jonathan Diamond, World Politics Review, November 24, 2015.

[iv] Wassenaar Arrangement: The Case of India’s Membership, Rajeswari P. Rajagopalan and Arka Biswas, ORF Occasional Paper #92, p. 3, Observer Research Foundation, May 5, 2016, http://www.orfonline.org/wp-content/uploads/2016/05/ORF-Occasional-Paper_92.pdf, last accessed on November 27, 2019.

[v] Ibid., p. 3.

[vi] “List of Dual-Use Goods and Technologies and Munitions List,” The Wassenaar Arrangement, available at https://www.wassenaar.org/public-documents/, last accessed on November 27, 2019.

[vii] Article 2(1), Proposal for a Regulation of the European Parliament and of the Council setting up a Union regime for the control of exports, transfer, brokering, technical assistance and transit of dual-use items (recast), European Commission, September 28, 2016, http://trade.ec.europa.eu/doclib/docs/2016/september/tradoc_154976.pdf, last accessed on November 27, 2019.

[viii] Supra note vi.

[ix] Guidelines & Procedures, including the Initial Elements, The Wassenaar Arrangement, December 2016, http://www.wassenaar.org/wp-content/uploads/2016/12/Guidelines-and-procedures-including-the-Initial-Elements-2016.pdf, last accessed on November 27, 2019.

[x] Articles V(1) & (2), Guidelines & Procedures, including the Initial Elements, The Wassenaar Arrangement, December 2016, https://www.wassenaar.org/public-documents/, last accessed on November 27, 2019.

[September 2-9] CCG’s Week in Review: Curated News in Information Law and Policy

This week, Delhi International Airport deployed facial recognition on a ‘trial basis’ for 3 months, landline communications were restored in Kashmir, and the Government mulled certification for online video streaming platforms like Netflix and Prime Video – presenting this week’s most important developments in law, tech and national security.

Aadhaar

  • [Sep 3] PAN will be issued automatically using Aadhaar for filing returns: CBDT, DD News report.
  • [Sep 3] BJD set to collect Aadhaar numbers of its members in Odisha, Opposition parties slam move, News 18 report; The New Indian Express report; Financial Express report.
  • [Sep 5] Aadhaar is secure, says ex-UIDAI chief, Times of India report.
  • [Sep 5] Passport-like Aadhaar centre opened in Chennai: Online appointment booking starts, Livemint report.
  • [Sep 8] Plans to link Janani Suraksha and Matra Vandan schemes with Aadhaar: CM Yogi Adityanath, Times of India report.

Digital India

  • [Sep 5] Digital media bodies welcome 26% FDI cap, Times of India report.
  • [Sep 6] Automation ‘not a threat’ to India’s IT industry, ET Tech report.
  • [Sep 6] Tech Mahindra to modernise AT&T network systems, Tech Circle report.

Data Protection and Governance

  • [Sep 2] Health data comes under the purview of Data Protection Bill: IAMAI, Inc42 report.
  • [Sep 2] Credit history should not be viewed as sensitive data, say online lenders, Livemint report.
  • [Sep 3] MeitY may come up with policy on regulation of non-personal data, Medianama report.
  • [Sep 3] MeitY to work on a white paper to gain clarity on public data regulations, Inc42 report.
  • [Sep 6] Treating data as commons is more beneficial, says UN report, Medianama report.
  • [Sep 9] Indian Government may allow companies to sell non-personal data of its users, Inc42 report, The Economic Times report.
  • [Sep 9] Tech firms may be compelled to share public data of its users, ET Tech report.

Data Privacy and Breaches

  • [Sep 2] Chinese face-swap app Zao faces backlash over user data protection, KrAsia report; Medianama report.
  • [Sep 2] Study finds Big Data eliminates confidentiality in court judgments, Swiss Info report.
  • [Sep 4] YouTube will pay $170 million to settle claims it violated child privacy laws, CNBC report; FTC Press Release.
  • [Sep 4] Facebook will now let people opt-out of its face recognition feature, Medianama report.
  • [Sep 4] Mental health websites in Europe found sharing user data for ads, Tech Crunch report.
  • [Sep 5] A huge database of Facebook users’ phone numbers found online, Tech Crunch report.
  • [Sep 5] Twitter has temporarily disabled tweet to SMS feature, Medianama report.
  • [Sep 6] Fake apps a trap to track your device and crucial data, ET Tech report.
  • [Sep 6] 419 million Facebook users phone numbers leaked online, ET Tech report; Medianama report
  • [Sep 9] Community social media platform, LocalCircles, highlights data misuse worries, The Economic Times report.

Free Speech

  • [Sep 7] Freedom of expression is not absolute: PCI Chairman, The Hindu report.
  • [Sep 7] Chennai: Another IAS officer resigns over ‘freedom of expression’, Deccan Chronicle report.
  • [Sep 8] Justice Deepak Gupta: Law on sedition needs to be toned down if not abolished, The Wire report.

Online Content Regulation

  • [Sep 3] Government plans certification for Netflix, Amazon Prime, Other OTT Platforms, Inc42 report.
  • [Sep 4] Why Justice for Rights went to court, asking for online content to be regulated, Medianama report.
  • [Sep 4] Youtube claims new hate speech policy working, removals up 5x, Medianama report.
  • [Sep 6] MeitY may relax norms on content monitoring for social media firms, ET Tech report; Inc42 report; Entrackr report.

E-Commerce

  • [Sep 4] Offline retailers accuse Amazon and Flipkart of deep discounting, predatory pricing and undercutting, Medianama report; Entrackr report.
  • [Sep 6] Companies rely on digital certification startups to foolproof customer identity, ET Tech report.

Digital Payments and FinTech

  • [Sep 3] A sweeping reset is in the works to bring India in line with fintech’s rise, The Economic Times report.
  • [Sep 3] Insurance and lending companies in agro sector should use drones to reduce credit and insurance risks: DEA’s report on fintech, Medianama report.
  • [Sep 3] Panel recommends regulating fintech startups, RBI extends KYC deadline for e-wallet companies, TechCircle report.
  • [Sep 4] NABARD can use AI and ML to create credit scoring registry: Finance Ministry report on FinTech, Medianama report.
  • [Sep 5] RBI denies action against Paytm Payments bank over PIL allegation, Entrackr report.
  • [Sep 5] UPI entities may face market share cap, ET Tech report.
  • [Sep 6] NBFC license makes fintech startups opt for lending, ET Tech report.
  • [Sep 9] Ease access to credit history: Fintech firms, ET Markets report.

Cryptocurrencies

  • [Sep 1] Facebook hires lobbyists to boost crypto-friendly regulations in Washington, Yahoo Finance report.
  • [Sep 2] US Congress urged to regulate crypto under Bank Secrecy Act, Coin Telegraph report.
  • [Sep 2] Indian exchanges innovate as calls for positive crypto regulation escalate, Bitcoin.com report.
  • [Sep 4] Marshall Islands official explains national crypto with fixed supply, Coin Telegraph report.
  • [Sep 5] Apple thinks cryptocurrency has “long-term potential”, Quartz report.
  • [Sep 5] NSA reportedly developing quantum-resistant ‘crypto’, Coin Desk report.
  • [Sep 6] Crypto stablecoins may face bottleneck, ET Markets report.

Cybersecurity

  • [Sep 3] Google’s Android suffers sustained attacks by anti-Uyghur hackers, Forbes report.
  • [Sep 4] Firefox will not block third-party tracking and cryptomining by default for all users, Medianama report.
  • [Sep 4] Insurance companies are fueling ransomware attacks, Defense One report.
  • [Sep 5] Firms facing shortage of skilled workforce in cybersecurity: Infosys Research, The Economic Times report.
  • [Sep 5] Cybersecurity a boardroom imperative in almost 50% of global firms: Survey, Outlook report; ANI report.
  • [Sep 5] DoD unveils new cybersecurity certification model for contractors, Federal News Network report.
  • [Sep 5] Jigsaw Academy launches cybersecurity certification programme in India, DQ India report.
  • [Sep 6] Indians lead the world as Facebook Big Bug Hunters, ET Tech report.
  • [Sep 6] Australia is getting a new cybersecurity strategy, ZD Net report.
  • [Sep 9] China’s 5G, industrial internet roll-outs to fuel more demand for cybersecurity, South China Morning Post report.

Tech and National Security

  • [Sep 3] Apache copters to be inducted today, The Pioneer report.
  • [Sep 3] How AI will predict Chinese and Russian moves in the Pacific, Defense One report.
  • [Sep 3] US testing autonomous border-patrol drones, Defense One report.
  • [Sep 3] Meet the coalition pushing for ‘Cyber Peace’ rules. Defense One report.
  • [Sep 4] US wargames to try out concepts for fighting China, Russia, Defense One report.
  • [Sep 4] Southern Command hosts seminar on security challenges, Times of India report; The Indian Express report
  • [Sep 4] Russia, already India’s biggest arms supplier, in line for more, Business Standard report.
  • [Sep 4] Pentagon, NSA prepare to train AI-powered cyber defenses, Defense One report.
  • [Sep 5] Cabinet clears procurement of Akash missile system at Rs. 5500 crore, Times Now report.
  • [Sep 5] India to go ahead with $3.1 billion US deal for maritime patrol aircraft, The Economic Times report.
  • [Sep 5] DGCA certifies ‘small’ category drone for complying with ‘No-Permission, No-Takeoff’ protocol, Medianama report.
  • [Sep 5] India has never been aggressor but will not hesitate in using its strength to defend itself: Rajnath Singh, The Economic Times report.
  • [Sep 5] Panel reviewing procurement policy framework to come out with new versions of DPP, DPM by March 2020, The Economic Times report; Business Standard report; Deccan Herald report.
  • [Sep 5] Russia proposes joint development of submarines with India, The Hindu report.
  • [Sep 7] Proud of you: India tells ISRO after contact lost with Chandrayaan-2 lander, India Today report.

Tech and Elections

  • [Sep 4] ECI asks social media firms to follow voluntary code of ethics ahead of state polls: report, Medianama report.
  • [Sep 6] Congress party to reorganise its data analytics department, Medianama report.
  • [Sep 5] Why the 2020 campaigns are still soft targets for hackers, Defense One report.
  • [Sep 5] Facebook meets with FBI to discuss election security, Bloomberg report.
  • [Sep 5] Facebook is making its own AI deepfakes to head off a disinformation disaster, MIT Tech Review report.

Internal Security: J&K

  • [Sep 4] Long convoy, intel failure: Multiple lapses led to Pulwama terror attack, finds CRPF inquiry, India Today report; Kashmir Media Service report; The Wire report.
  • [Sep 4] Extension of President’s Rule in Kashmir was not delayed, MHA says in report to SC lawyer’s article, Scroll.in report.
  • [Sep 6] Landline communication restored in Kashmir Valley: Report, Medianama report.
  • [Sep 7] Kashmir’s Shia areas face curbs, all Muharram processions banned, The Quint report.
  • [Sep 7] No question of army atrocities in Kashmir as it’s only fighting terrorists: NSA Ajit Doval, India Today report.
  • [Sep 8] More than 200 militants trying to cross into Kashmir from Pakistan: Ajit Doval, Money Control report.
  • [Sep 8] ‘Such unilateral actions are futile’, says India after Pakistan blocks airspace for President Kovind, Scroll.in report; NDTV report.

Internal Security: NRC

  • [Sep 2] Contradictory voices in Assam Congress on NRC: Tarun Gogoi slams it as waste paper, party MP says historic document, India Today report.
  • [Sep 3] Why Amit Shah is silent on NRC, India Today report.
  • [Sep 7] AFSPA extended for 6 months in Assam, Deccan Herald report.
  • [Sep 7] At RSS mega meet, concerns over Hindus being left out of NRC: Sources, Financial Express report.

National Security Institutions and Legislation

  • [Sep 5] Azhar, Saeed, Dawood declared terrorists under UAPA law, Deccan Herald report; The Economic Times report.
  • [Sep 8] Home Minister says India’s national security apparatus more robust than ever, Livemint report.
  • [Sep 8] Financial safety not national security reason for women to join BSF: Study, India Today report.

Telecom/5G

  • [Sep 6] Security is an issue in 5G: NCSC Pant on Huawei, Times of India report.

More on Huawei

  • [Sep 1] Huawei believes banning it from 5G will make countries insecure, ZD Net report.
  • [Sep 2] Huawei upbeat on AI strategy for India, no word on 5G roll-out plans yet, Business Standard report.
  • [Sep 3] Huawei denies US allegations of technology theft, NDTV Gadgets 360 report; Business Insider report; The Economic Times report.
  • [Sep 3] Shocking Huawei ‘Extortion and Cyberattack’ allegations in new US legal fight, Forbes report; Livemint report, BBC News report; The Verge report
  • [Sep 3] Committed to providing the most advanced products: Huawei, ET Telecom report.
  • [Sep 4] Huawei says 5G rollout in India will be delayed by 3 years if it’s banned, Livemint report
  • [Sep 4] Trump not interested in talking Huawei with China, Tech Circle report.
  • [Sep 5] Nepal’s only billionaire enlists Huawei to transform country’s elections, Financial Times report.
  • [Sep 8] Trump gets shocking new Huawei warning – from Microsoft, Forbes report.

Emerging Tech

  • [Aug 30] Facebook is building an AI Assistant Inside Minecraft, Forbes report.
  • [Sep 3] AWS partners with IIT KGP for much needed push to India’s AI skilling, Inc42 report.
  • [Sep 3] Behind the Rise of China’s facial recognition giants, Wired report.
  • [Sep 4] Facebook won’t use facial recognition on you unless you tell it to, Quartz report.
  • [Sep 4] An AI app that turns you into a movie star has risked the privacy of millions, MIT Technology Review report.
  • [Sep 6] Police use of facial recognition is accepted by British Court, The New York Times report.
  • [Sep 6] Facebook, Microsoft announce challenge to detect deepfakes, Medianama report.
  • [Sep 6] Facial recognition tech to debut at Delhi airport’s T3 terminal; on ‘trial basis’ for next three months, Medianama report.

Internet Shutdowns

  • [Sep 3] After more than 10 weeks, internet services in towns of Rakhine and Chin restored, Medianama report.
  • [Sep 4] Bangladesh bans mobile phone services in Rohingya camps, Medianama report.

Opinions and Analyses

  • [Sep 2] Michael J Casey, Coin Desk, A crypto fix for a broken international monetary system.
  • [Sep 2] Yengkhom Jilangamba, News18 Opinion, Not a solution to immigration problem, NRC final list has only brought to surface fault lines within society.
  • [Sep 2] Samuel Bendett, Defense One, What Russian Chatbots Think About Us.
  • [Sep 2] Shivani Singh, Hindustan Times, India’s no first use policy is a legacy that must be preserved.
  • [Sep 3] Abir Roy, Financial Express, Why a comprehensive law is needed for data protection. 
  • [Sep 3] Dhirendra Kumar, The Economic Times, Aadhaar is back for mutual fund investments.
  • [Sep 3] Ashley Feng, Defense One, Welcome to the new phase of US-China tech competition.
  • [Sep 3] Nesrine Malik, The Guardian, The myth of the free speech crisis.
  • [Sep 3] Tom Wheeler and David Simpson, Brookings Institution, Why 5G requires new approaches to cybersecurity.
  • [Sep 3] Karen Roby, Tech Republic, Why cybersecurity is a big problem for small businesses.
  • [Sep 4] Wendy McElroy, Bitcoin.com, Crypto needs less regulation, not more.
  • [Sep 4] Natascha Gerlack and Elisabeth Macher, Modaq.com, US CLOUD Act’s potential impact on the GDPR. 
  • [Sep 4] Peter Kafka, Vox, The US Government isn’t ready to regulate the internet. Today’s Google fine shows why.
  • [Sep 5] Murtaza Bhatia, Firstpost, Effective cybersecurity can help in accelerating business transformation. 
  • [Sep 5] MG Devasahayam, The Tribune, Looking into human rights violations by Army.
  • [Sep 5] James Hadley, Forbes, Cybersecurity Frameworks: Not just for bits and bytes, but flesh and blood too.
  • [Sep 5] MR Subramani, Swarajya Magazine, Question at heart of TN’s ‘WhatsApp traceability case’: Are you endangering national security if you don’t link your social media account with Aadhaar? 
  • [Sep 5] Justin Sherman, Wired, Cold War Analogies are Warping Tech Policy.
  • [Sep 6] Nishtha Gautam, The Quint, Peer pressure, militant threats enforcing civil curfew in Kashmir?
  • [Sep 6] Harsh V Pant and Kartik Bommakanti, Foreign Policy, Modi reimagines the Indian military.
  • [Sep 6] Shuman Rana, Business Standard, Free speech in the crosshairs.
  • [Sep 6] David Gokhshtein, Forbes, Thoughts on American Crypto Regulation: Considering the Pros and Cons.
  • [Sep 6] Krishan Pratap Singh, NDTV Opinion, How to read Modi Government’s stand on Kashmir.
  • [Sep 7] MK Bhadrakumar, Mainstream Weekly, The Big Five on Kashmir.
  • [Sep 7] Greg Ness, Security Boulevard, The Digital Cyber Security Paradox.
  • [Sep 8] Lt. Gen. DS Hooda, Times of India, Here’s how to take forward the national security strategy.
  • [Sep 8] Smita Aggarwal, Livemint, India’s unique public digital platforms to further inclusion, empowerment. 

India’s Latest National Security Dilemma: The Huawei Ban and NAM(O) 2.0

Over two weeks after the United States imposed its ban on Huawei on suspicions of facilitating espionage on behalf of China, the newly appointed Minister of Electronics and IT, Ravi Shankar Prasad, acknowledged that there are ‘complex security concerns’ around the deployment of Huawei’s technology in India. His statement comes soon after TRAI emphasized the need to indigenize telecom infrastructure in the aftermath of the US ban on Huawei.

The Chinese tech giant has been at the centre of controversy since even before May 16, when President Trump signed an Executive Order entitled ‘Securing the Information and Communications Technology and Services Supply Chain’, declaring a national cybersecurity emergency and placing Huawei on the ‘Entity List’ of the US Department of Commerce under Supplement 4 to Part 744 of its Export Administration Regulations. This implies that US persons and corporate entities that continue to do business with Huawei would face heavy penalties, potentially including criminal sanctions. Owing to the design of export control laws in the United States, the enforcement of the ban has extraterritorial effects. According to a Reuters report, US Secretary of State Mike Pompeo warned allies of potential difficulties in sustained cooperation and data sharing with the United States if they continued to use Huawei equipment despite the ban.

Huawei in the United States

Criminal charges pending against Huawei in US courts include serious allegations of corporate espionage, bank fraud, theft of trade secrets and, most importantly, conspiracy to violate the International Emergency Economic Powers Act (50 U.S.C. 1701 et seq.) (IEEPA) through the export of telecommunications services provided by a US citizen to Iran without permission from the Office of Foreign Assets Control (OFAC). It was on the grounds of violating the IEEPA that the US successfully urged Canada to detain Huawei’s CFO, Meng Wanzhou, who is now awaiting potential extradition to the United States for prosecution for the crimes alleged against Huawei.

Some in the US national security community have even argued that this could potentially be an abuse of the President’s emergency powers under the IEEPA, the legislation that enables the US to ‘financially asphyxiate targeted countries, entities or individuals’ that pose ‘any unusual and extraordinary threat’ to US national security interests. Others, pointing to Trump’s statement that Huawei could potentially be included in a future trade deal with China, take the view that the ban is no more than a leveraging tool to extract concessions from the Chinese Government. Still others view it as a measure designed purely to protect US telecom industries from Chinese competition in the 5G race, to prevent the US from losing its edge in communications technologies.

A major reason for the rapid rise of Huawei on the global tech scene has been its competitive prices and convenient payment plans. Thanks to the ban, rival companies like Cisco, Ericsson, Nokia and Samsung do indeed stand to gain significant advantages and grab bigger market shares, but their prices so far have not been able to compete with those offered by their Chinese counterparts.

Despite this ‘emergency’, a week after President Trump signed the Executive Order, the restrictions were eased by the US Department of Commerce to give American companies a 90-day window to adapt to the new restrictions. In the time that has passed, several tech companies have severed business ties with Huawei. Google was the first to respond, cutting off Huawei’s access to its Android platform and restricting existing users’ access to future security patches and updates. Microsoft, Intel, Qualcomm, Xilinx, Broadcom, Panasonic and the British chip manufacturer ARM soon followed suit, causing serious disruptions in the global ICT supply chain, especially in Huawei’s smartphone manufacturing. However, the smartphone business is only a small part of Huawei’s overall product range. It is noteworthy that, as on date, Huawei controls 28% of the global market share in telecom equipment. In the first quarter of 2019, Huawei surpassed Apple to become the world’s second largest manufacturer of smartphones. Much to the worry of American telcos, some forecasts indicate that China is expected to represent 40% of all global 5G connections by 2025.

Several reports indicate that Huawei had long been preparing for impending restrictions from the US Government. Reportedly, it is developing its own OS, ‘Ark’, and has challenged the National Defense Authorization Act 2018, which bans US Government agencies from procuring products manufactured by Huawei or ZTE.

China’s Retaliation

Although Huawei’s founder and CEO, Ren Zhengfei, has opposed retaliation by the Chinese Government against Apple or other American tech companies, it remains to be seen how China will respond in the ongoing trade tensions with the US. In response, China has already introduced changes to its cybersecurity law to set up a mechanism that allows a higher degree of protection for its own national security interests. China has also threatened the creation of a ‘sweeping blacklist of US firms’ in retaliation. Reports indicate that China’s export of rare earth minerals to the US could be the next frontier in these ‘hostilities’.

In addition, while Chinese Defense Minister Wei Fenghe has explicitly stated that Huawei is not part of the Chinese military, several Chinese officials have rejected the US claims, alleging that the decision to blacklist Huawei is unsupported by any evidence. Unsurprisingly, though, Russia has rolled out the red carpet for Huawei, which has signed a deal to develop 5G infrastructure for the Russian telecom provider MTS.

However, the most important piece of Chinese legislation for India to consider is the Chinese intelligence law passed in 2017, which makes it obligatory for Chinese companies and other entities to share onshore and offshore data with the Government as and when called upon in the interest of national security.

Huawei in India

Some have argued that India would need to conclusively prove the allegations that Huawei assisted the Chinese government in carrying out cyber espionage before taking any concrete steps to ban it; otherwise, India risks undermining its strategic autonomy and playing into the hands of the US. However, this argument seems focused exclusively on the rapid introduction and operationalization of 5G in India and ignores India’s previous run-ins with Huawei’s technology.

Telecom companies, through the Cellular Operators Association of India, have sought clarification from the Department of Telecommunications (DoT) on its stance qua the usage of Huawei-manufactured equipment by telecom operators. Such a clarification is much needed, considering that Huawei has been kept on a see-saw since September 2018, when the US first started attempting to persuade allies to wall out Huawei in the 5G race. In India, Huawei was first excluded, then extended an invitation which was later rescinded. Huawei India’s CEO Jay Chen recently made a statement demanding a ‘level playing field’ for Huawei in the 5G trials, reiterating the request made by the Chinese Government in December of last year.

Presently, telecom operators including Airtel and Vodafone use Huawei equipment in many of their circles in India. While TRAI has highlighted the need for indigenizing telecom infrastructure, the truth of the matter is that, as on date, almost 60% of the Government’s telecom equipment, especially including that of BSNL, is supplied by the Chinese companies ZTE and Huawei. This is despite the fact that BSNL’s allegations that Huawei hacked into its networks were investigated in 2014. If the security of the existing infrastructure has already been compromised in the past, the argument requiring conclusive proof of malicious activity becomes difficult to sustain.

Huawei itself has urged the DoT for an expedited decision on its inclusion in the 5G trials, reportedly after having answered all queries posed to it by the DoT. The DoT appears divided on the issue – with one section viewing it as an issue not just of technology but also of security with geopolitical ramifications, and the other seemingly inclined towards Huawei’s inclusion to maintain competition and mitigate the risks of relying on supplies from European vendors alone.

The New Berlin Wall and India’s Posturing

At the moment, India seems to have been caught in the middle of what has been dubbed the New Cold War in tech – faced with prohibitively high prices on the one side, and a risk of Chinese cyber espionage on the other. On this point, some take the view that ‘what is cheap now may not be good in the long run’. National security choices require nations to make difficult trade-offs between economic and strategic goals and considerations, and the contours of the new ‘Great Powers’ relations are radically different from those of the era that ended with the fall of the Berlin Wall. The New York Times viewed the ban as one that is “about much more than crippling one Chinese tech giant”, and is forcing “nations to make an agonizing choice: Which side of the new Berlin Wall do they want to live on?”

In the collision of tech and trade, the foreign policy choices of Governments are now closely intertwined with the commercial interests and health of the domestic telecom and tech industries. Although it is reassuring that India’s telecom minister seems intent on taking ‘a serious look’ at the calculus of technological advantage versus security concerns before deciding on Huawei’s inclusion in the upcoming 5G trials, remedial and mitigation steps like reviving MTNL and BSNL services are measures for the long run. However, what makes India the desired location for ‘proxy wars’ in tech is the treasure trove of data that lies beneath its massive telecom subscriber base of over 1.19 billion individuals. As for the health of domestic markets, if anything, Indian telecom giants like Reliance Jio, which uses 4G equipment manufactured by Samsung, could potentially stand to gain if Huawei were to be excluded from the Indian market and the 5G trials. It remains to be seen whether such a protectionist measure, following in the footsteps of the US, would be introduced by the new Government that has returned to power on the promise of strengthening national security. A legitimate concern is the threat of retaliatory pressure tactics from the US if India fails to do so.

It is notable that India has taken some measures to avoid offending the United States’ declared policies, while the decision on Huawei remains pending. A week after the ban, India stopped importing oil from Iran as well as Venezuela to comply with US sanctions after the US ended exemptions for eight countries including India. More recently, the US revoked India’s preferential trade status under the GSP (Generalised System of Preferences) trade program, alleging that India has not “assured the US that it will provide equitable and reasonable access to its markets”. The US-China trade war presents a similar spectrum of choices to India – while the Ministry of Commerce is mulling over the imposition of ‘retaliatory tariffs’, others take the position that India should cut interest rates to take advantage of the trade war to gain a stronger foothold in both markets.

Against the backdrop of this new political economy of the cybersecurity industry, a new kind of non-alignment seems to be emerging, creating an unmistakable split in traditional alliances between NATO members. Only two members of the 'Five Eyes' intelligence-sharing alliance other than the United States, Australia and New Zealand, responded quickly by banning Huawei in their respective national jurisdictions. Some European countries, specifically the UK and Germany, while remaining mindful of the risks posed by Chinese covert activity through a tech industry that has undeniably acquired global influence, are seemingly intent on not abandoning Huawei in the design of their 5G infrastructure. Canada too, while juggling the pending extradition of Huawei's CFO to the US, appears determined to make an independent decision on the 5G question. At the moment, India's policies seem just as non-aligned as those of Germany or the United Kingdom: aimed at maintaining the free flow of investments and information while steadily moving towards indigenization of ICT and expansion of markets, instead of encouraging protectionism to curb competition.

Until such time as India can completely indigenize its telecom equipment, or alter its procurement policies across the board to exclude obvious threats to the integrity of our cybersecurity infrastructure, India's choice seems limited to a cybersecurity policy along the lines of the Nehruvian-era doctrine of non-alignment, perhaps with only a slight tilt, this time toward the United States. It would appear that the time is ripe for NaMo 2.0 to revisit the doctrine as NAM 2.0, in a manner that allows India to preserve a security alliance with one side and an economic partnership with the other, avoiding disruptions and price escalations in our ICT supply chain. In other words, the need of the hour is 'to effectively manage our global opportunities to maximize our choices' while preserving strategic autonomy.

Transparency and Diversity in the 2017 MAG Renewal

By Puneeth Nagaraj

Two days before the ongoing MAG meeting, the 2017 MAG renewal was announced. The CSCG protested the lack of civil society representatives among the new MAG members, bringing focus back to the need for MAG reform. Our report on multistakeholderism had identified the lack of transparency and geographic diversity in MAG selections. These issues remain relevant as another set of MAG meetings kicks off in Geneva.

The Multistakeholder Advisory Group (MAG) of the Internet Governance Forum (IGF) was renewed for 2017 on Monday. The renewal has attracted controversy as no civil society members were added to this year's MAG. The announcement has brought into focus a persistent criticism of the lack of transparency in the MAG nomination process. The lack of transparency and geographic diversity in the MAG was discussed in our report on multistakeholderism, and some of its findings are relevant to the 2017 renewal.

Created on the recommendation of the Working Group on Internet Governance (WGIG), the MAG is responsible for organising the annual IGF. The MAG is not a decision-making body by design. But Jeremy Malcolm (pp. 420-422) points out that the MAG effectively chooses the issues that are debated on a global stage in the course of organising the IGF. In this respect, he argues that the MAG plays an important agenda-setting role in internet governance.

MAG Nomination Process and Transparency

The make-up of the MAG is similar to that of the WGIG in that it consists of representatives of all stakeholder groups (government, private sector, civil society and technical community). MAG members are selected by the United Nations Department of Economic and Social Affairs (UN DESA) under the authority of the UN Secretary General. Nominations to the UN DESA are made through focal points from the different stakeholder groups, but applicants can also apply to the UN DESA directly.

As noted in our report (pp. 70-72), once nominations are sent to the UN DESA, there is no clarity on how members are selected to the MAG. The only available information on DESA's selection process is the set of five criteria listed on the IGF website. These include achieving geographic and gender balance and requiring that representatives have strong linkages to their stakeholder groups.

The controversy in this year's MAG renewal arose out of the lack of new civil society representation on the MAG. The Civil Society Coordination Group (CSCG), which is the focal point for civil society nominations, wrote to the IGF secretariat asking it to reconsider its decision. It pointed out that no civil society members were added to the MAG this year despite two civil society members retiring from it (members are selected for three-year terms and a third of the MAG retires each year). The letter also called on the IGF secretariat to select an additional civil society member to the MAG.

This is not the first time that MAG nominations have been controversial. In 2016, the CSCG wrote to the IGF secretariat asking for greater transparency and inclusiveness in selections to the MAG. Similarly, as discussed in our report (p. 73), an Indian civil society member nominated by the CSCG was not selected to the MAG in 2014. In both cases, the CSCG contacted the IGF secretariat asking for greater clarity on how selections were made.

Geographic Diversity

One of the findings of our report with respect to the MAG concerned the geographic diversity of the group. As mentioned above, geographic diversity is one of the stated criteria on which the UN DESA selects members to the MAG. Our report found that, on average, 8-10% of MAG members were from the United States (based on the affiliation mentioned on the IGF website). As shown in the chart below, this was the highest percentage representation of any country between 2011 and 2015.

[Chart: Membership by country as a percentage of total MAG membership, 2011-15. Source: Multistakeholderism in Action]

This trend has continued in the 2017 MAG renewal, with 4 members, or 7% of the MAG, being from the United States. No other country has more than 2 members in the current MAG. The FAQ section on MAG renewals acknowledges this disparity: it states that the MAG currently has an excess of members from the Western Europe and Others Group, and that a new selection process will attempt to make the MAG more regionally balanced. It remains to be seen whether this imbalance will be addressed in the next renewal cycle.
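The underlying arithmetic behind these shares is straightforward to reproduce. The following is a minimal sketch, assuming a hypothetical list of member-country affiliations compiled from the IGF website's MAG page; the names and counts below are illustrative placeholders, not actual MAG data:

```python
from collections import Counter

# Hypothetical affiliations compiled from the IGF website's MAG list;
# illustrative figures only, not actual MAG data.
member_countries = (
    ["United States"] * 4
    + ["India"] * 2
    + ["Germany"] * 2
    + ["Brazil", "Kenya", "Japan"]
    + ["(other countries)"] * 46  # a 57-member MAG in total
)

total = len(member_countries)
counts = Counter(member_countries)

# Share of total MAG membership per country, as plotted in the chart above.
for country, n in counts.most_common():
    print(f"{country}: {n} members ({n / total:.0%} of the MAG)")
```

On these assumed numbers, the United States' 4 members work out to roughly 7% of a 57-member MAG, consistent with the share reported above.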

The MAG nomination process thus raises questions about both the transparency of selections and the diversity within the group. However, there is very little publicly available information or communication from the UN DESA beyond the criteria listed on the IGF website. The 2017 announcement was made only a day before the IGF Open Consultations and MAG meeting were to begin in Geneva (1st March). A CSCG representative who circulated the letter believed that the issue of the non-selection of a civil society member might be taken up at the meeting.

Puneeth Nagaraj is a Project Manager at the Centre for Communication Governance at National Law University Delhi.