Emerging Framework for Gig Workers' Welfare in India Leaves Critical Questions Unanswered

By Fawaz Shaheen

The Haryana Government has recently indicated that it is going to introduce legislation establishing a welfare board for the social security of gig workers. This comes after Rajasthan in July 2023 became the first state in India, and among the first places in the world, to legislate social security measures for platform-based gig workers. Even though the law is yet to be notified, it was hailed as an important milestone in providing gig workers with a measure of social security and rights against platforms. Several labour organisations, including the largest union of gig workers in India, also welcomed it as a crucial step forward. It is interesting to note that the Social Security Code, enacted by Parliament in 2020, also contained similar provisions for the welfare of gig workers. While the scheme under the Social Security Code has yet to be notified, the Rajasthan law seems to have drawn its basic outline from the Social Security Code and fleshed it out with more detail. The proposed law in Haryana may also take a similar route, according to a public statement made by the state's Deputy Chief Minister. The ruling party in Karnataka had also promised a similar model in its manifesto during the state assembly polls last year. The model of a welfare board, with minor modifications, seems to be emerging as the accepted framework to deal with the challenges of the gig economy in India. It is therefore important to take a look at its basic contours and how it might impact the governance of digital platform-based businesses in India.

One of the most important debates surrounding gig work concerns the status of gig workers as employees. Platform companies are able to build competitive and viable business models due to the flexible nature of what is also known as 'on-demand' work. Not having to categorise their workers as employees saves them a fortune on benefits such as healthcare, provident fund and paid leave. The absence of formal employment and termination procedures also facilitates easy hiring and firing of workers for specific, time-bound tasks. This in turn allows companies to be adaptable in rapidly shifting market scenarios, a huge advantage that gives platform-based companies a definite edge over traditional businesses.

However, these same conditions make gig work extremely precarious for those who actually carry it out. The flexibility and adaptability prized by platform companies make gig work an unreliable source of regular income. This is despite the fact that many of the conditions of their work exhibit characteristics of regular employment: the centrality of their work to the platform's core business, the degree of control platforms exert over their work, both through rules and regulations and through the functioning of algorithms, and limits on their ability to take up other employment.

Emerging Legal Framework in India

Under the Social Security Code, 2020, 'gig workers' are defined as those who perform work outside of the 'traditional employer-employee relationship' (Section 2(35), Social Security Code, 2020). The Rajasthan law has taken the same essential definition and added specifically that such work is carried out as part of a contract and results in a given rate of payment, or 'piece-rate' work. This definition seeks to adopt a pragmatic approach to defining 'gig work' without getting into the debate over whether these workers are employees or independent contractors. However, by not explicitly recognising them as employees, it effectively validates the contention of platform companies that these workers are not their employees, and that their work conditions will therefore not be governed by traditional labour law principles.

Another crucial aspect is the manner in which the Rajasthan law seeks to operationalise social security schemes for gig workers. It calls for the setting up of a government-controlled Welfare Board that will have broad powers to formulate and administer schemes for gig workers. The Board will have representatives of gig workers as members, but these will be nominated by the state government and not elected by unions or workers' groups. This is again similar to the provision for the welfare of gig workers under the Social Security Code, 2020, which also envisions a National Social Security Board, consisting of members nominated by the central government, as the central agency for governing social welfare schemes for gig workers.

A number of questions have been raised with regard to enacting social security through the medium of welfare boards. For instance, the welfare board model ties the social security of gig workers to contributions made by them and their employers, instead of creating guaranteed entitlements. And by tying the welfare measures to individual transactions between the platform and the consumer, it also fails to distinguish between the kinds of work carried out on different platforms. For instance, a transaction on a food delivery or cab-hailing platform usually entails the involvement of only one gig worker, in the form of the rider or the driver. But one order on an e-commerce platform might involve several workers at different stages, from order packing and handling to transportation and delivery. If the law recognises both as a single transaction, with social security benefits tied to the contributions made by the platform on the basis of the number of transactions, gig workers on an e-commerce platform would be at a significant disadvantage compared to workers carrying out similar tasks on a food delivery platform, as the sketch below illustrates.
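
To make the arithmetic behind this concern concrete, the following minimal sketch (in Python) compares per-worker welfare contributions under a flat per-transaction cess. The cess amount, transaction volumes and worker counts are purely hypothetical assumptions chosen for illustration and are not figures drawn from the Rajasthan law or the Social Security Code.

```python
# Hypothetical illustration: a flat welfare cess levied per transaction,
# shared among all the gig workers who helped fulfil that transaction.
# The cess amount and worker counts below are illustrative assumptions,
# not rates or figures from the Rajasthan law.

CESS_PER_TRANSACTION = 5.0  # hypothetical flat contribution per transaction (in rupees)

def contribution_per_worker(transactions: int, workers_per_transaction: int) -> float:
    """Welfare contribution attributable to each worker involved in the transactions."""
    total_cess = transactions * CESS_PER_TRANSACTION
    total_worker_slots = transactions * workers_per_transaction
    return total_cess / total_worker_slots

# A food delivery or cab-hailing transaction typically involves one gig worker.
food_delivery = contribution_per_worker(transactions=1000, workers_per_transaction=1)

# A single e-commerce order may pass through several gig workers
# (packing, handling, transportation, last-mile delivery).
e_commerce = contribution_per_worker(transactions=1000, workers_per_transaction=4)

print(f"Per-worker contribution, food delivery: Rs. {food_delivery:.2f}")  # Rs. 5.00
print(f"Per-worker contribution, e-commerce:    Rs. {e_commerce:.2f}")     # Rs. 1.25
```

Under such a formula, the same per-transaction contribution is spread across more workers on the e-commerce platform, leaving each of them with a smaller share despite performing comparable work.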

Gig Work and Digital Rights

One area which has not yet received much attention is the manner in which these laws will impact the informational privacy and digital rights of gig workers. From a data protection perspective, the scheme laid out in the law raises a number of concerns, some of which are:

Registration of all workers on a state government database: The law requires all gig workers to be registered on a government-run database and assigned a unique ID. This unique ID will be used to track all the work they undertake for various platforms, and will become the basis for determining the kind of benefits they receive under any social security scheme notified by the government. The law does not specify any purpose limitation for data collected under this head. This is important for several reasons, since the data would allow anyone – including current and prospective employers – to map out the trajectory of a worker's entire employment history. The law also does not specify confidentiality requirements or limit access to the data to the Board. Without ensuring confidentiality and limiting the purposes for which it can be used, the aggregation of data concerning a worker's jobs across different platforms could place them at a significant disadvantage. This is especially relevant considering recurring concerns about unfair and non-transparent deactivation practices of employers.

Payments Monitoring and Tracking System: The law requires the setting up of a system for tracking all payments made on a platform. It is not clear why a separate payments tracking system is needed to operationalise the law, especially since all platforms are already tax-paying entities whose financial records are available to the government. This again carries significant potential for abuse of data, especially due to the lack of purpose limitations on how this data on payments can be used.

Fails the least intrusive means test: The Supreme Court in Puttaswamy laid down a clear standard of minimal intrusion for situations where private data of citizens is collected and recorded by the state for welfare and other necessary functions. This standard requires the state to find the least intrusive means of operationalising a particular scheme or programme, so that the data being collected can be minimised. In the present case, the operationalisation of the entire scheme is predicated upon the registration of workers and the tracking of payments made to them. This is precisely the opposite of the least intrusive means standard laid down by the apex court.

Conclusion

While the move to set up welfare boards for platform-based gig workers across different states represents a crucial step forward in ensuring some level of social security for a very precarious class of workers, it still leaves many important questions unanswered. Much will depend on how the welfare boards function and the kinds of welfare schemes they introduce after they are notified. But ultimately the law itself is set up to entrench gig workers' status as temporary workers whose social security will depend on external inputs from the welfare board, rather than being guaranteed by virtue of their employment. It also fails to address the imbalance of technological power between digital platforms and their workers, leaving workers vulnerable to violations of informational privacy and subject to opaque data-driven decision-making.

CCG’s Comments to the Ministry of Electronics and Information Technology on the Draft National Data Governance Framework Policy

Authors: Joanne D’Cunha and Bilal Mohamed

On 26th May 2022, the Ministry of Electronics and Information Technology (MeitY) released the Draft National Data Governance Framework Policy (NDG Policy) for feedback and public comments. CCG submitted its comments on the NDG Policy, highlighting its feedback and key concerns with the proposed data governance framework. The comments were authored by Joanne D'Cunha and Bilal Mohamed, and reviewed and edited by Jhalak M. Kakkar and Shashank Mohan.

The draft National Data Governance Framework Policy is a successor to the draft ‘India Data Accessibility and Use’ Policy, which was circulated in February 2022 for public comments and feedback. Among other objectives, the NDG policy aims to “enhance access, quality, and use of data to enable a data-led governance” and “catalyze AI and Data led research and start-up ecosystem”.

“Mountain” by Mariah Jochai is licensed under CC BY 4.0

CCG’s comments to the MeitY are divided into five parts – 

In Part I of the comments, we foreground our concerns by emphasising the need for comprehensive data protection legislation to safeguard citizens from potential privacy risks before implementing a policy around non-personal data governance.

In Part II, we focus on the NDG Policy's objectives, scope, and key terminologies. We highlight that the NDG Policy does not sufficiently define key terms and phrases such as non-personal data, anonymisation, data usage rights, Open Data Portal, Chief Data Officers (CDOs), datasets ecosystem, and ownership of data. Clear definitions would bring much needed clarity and help stakeholders appreciate the objectives and implications of the policy. They would also enhance engagement in the policy consultation process by stakeholders, including the various government departments. We also highlight that the policy does not illustrate how it will intersect and interact with other proposed data governance frameworks such as the Data Protection Bill, 2021 and the Non-Personal Data Governance Framework. We express our concerns around the NDG Policy's objective of cataloguing datasets for increased processing and sharing of data, with the aim of deploying AI more efficiently. The policy relies on creating a repository of data to further analytics, AI, and data-led research. However, it does not take into consideration that increasing access to data might not be as beneficial if the computational capacity of the relevant technologies is inadequate. Therefore, it may be more useful to place greater focus on developing computing capabilities rather than on increasing the quantum of data used.

In Part III, we focus on the privacy risks, highlighting concerns around the development and formulation of anonymisation standards given the threat of re-identification from the linkage of different datasets. This, we argue, can pose significant risks to individual privacy, especially in the absence of data protection legislation that can provide safeguards and recognise individual rights over personal data. In addition to individual privacy harms, we also point to the potential for collective harms from the use of aggregated data. To this end, we suggest the creation of frameworks that can keep up with the increased risks of re-identification posed by new and emerging technologies.

Part IV of our comments explores the institutional framework and regulatory structure of the proposed India Data Management Office (IDMO). The proposed IDMO is responsible for framing, managing, reviewing, and revising the NDG Policy. Key concerns about the IDMO's functioning pertain to the exclusion of technical experts and representatives of civil society and industry from the IDMO. There is also ambiguity regarding the technical expertise required of the Chief Data Officers of the Data Management Units of government departments and ministries, and regarding the implementation of the redressal mechanism. In this section, we also highlight the need for a framework within the Policy to define how user charges will be determined for data access. This is particularly relevant to ensure that access to datasets is not skewed and is available to all for the public good.

You can read our full submission to the ministry here.

Critiquing the Definition of Cyber Security under India’s Information Technology Act

Archit Lohani

“Security Measures” by Afsal CMK is licensed under CC BY 4.0

Introduction

As boundary-less cyberspace becomes increasingly pervasive, cyber threats continue to pose serious challenges to all nations' economic security and digital development. For example, sophisticated attacks such as the WannaCry ransomware attack in 2017 crippled hundreds of thousands of computers, with estimated damages of up to four billion dollars. As cyber security threats continue to proliferate and evolve at an unprecedented rate, incidents of doxing, distributed denial of service (DDoS), and phishing attacks are on the rise and are being offered as services for hire. The task at hand is intensified by the sheer number of cyber incidents in India. A closer look suggests that the challenge is exacerbated by an outdated framework and a lack of basic safeguards.

This post will examine one such framework, namely the definition of cybersecurity under the Information Technology Act, 2000 (IT Act).

Under Section 2(1)(nb) of the IT Act:

"cyber security" means protecting information, equipment, devices, computer, computer resource, communication device and information stored therein from unauthorised access, use, disclosure, disruption, modification or destruction;

This post contends that the Indian definitional approach adopts a predominantly technical view of cyber security and restricts effective measures to ensure cyber-resilience between governmental authorities, industry, non-governmental organisations, and academia. This piece also juxtaposes the definition against key elements of global standards found in foreign legislation and industry practices.

What is Cyber security under the IT Act?

The current definition of cyber security was adopted under the Information Technology (Amendment) Act, 2009. This amendment was hurriedly adopted in the aftermath of the 26/11 Mumbai terrorist attacks of 2008. The definition was codified to facilitate protective functions under Sections 69B and 70B of the IT Act. Section 69B enables the monitoring and collection of traffic data to enhance cyber security and to prevent intrusion and the spread of contaminants. Section 70B institutionalised the Indian Computer Emergency Response Team (CERT-In) to identify and forecast cyber security incidents, issue alerts and guidelines, coordinate cyber incident response, and further the state's cyber security imperatives. Subsequently, the evolution of various institutions that perform key functions to detect, deter, protect and adapt cyber security measures has accelerated. However, this post argues that the current definition fails to incorporate elements necessary to contemporise cyber security policy and ensure its effective implementation.

Critique of the IT Act definition

It is clear that deterrence has failed, as the volume of incidents does not appear to abate, making cyber-resilience a more realistic objective for nations to strive for. The definition under the IT Act is an old articulation of protecting the referent objects of security – "information, equipment, devices, computer, computer resource, communication device and information" – against specific events that aim to cause harm to these objects through "unauthorised access, use, disclosure, disruption, modification or destruction".

There are a few issues with this dated articulation of cyber security. First, it suffers from the problem of restrictive listing of what is being protected (the aforementioned referent objects). Second, by limiting the referent objects and events within the definition, it becomes prescriptive. Third, the definition does not capture the multiple, interwoven dimensions and inherent complexity of cyber security, which includes interactions between humans and systems. Fourth, due to the limited listing of events, similar protection is not afforded against accidental events and natural hazards to cyberspace-enabled systems (including cyber-physical systems and industrial control systems). Fifth, the definition is missing key elements: (1) it does not include the technological solutions aspect of cyber security, unlike, for instance, the International Telecommunication Union (2009) definition, which acknowledges "technologies that can be used to protect the cyber environment"; and (2) it fails to incorporate the strategies, processes, and methods that will be undertaken. With these key elements missing, the definition falls behind contemporary standards, which are addressed in the following section.

To put things in perspective, global conceptualisations of cybersecurity are undergoing a major overhaul to accommodate the increased complexity, pace, scale and interdependencies across cyberspace and information and communication technology (ICT) environments. In comparison, the definition under the IT Act has remained unchanged.

Wider conceptualisations have, however, been reflected in international and national engagements such as the National Cyber Security Policy (NCSP). For example, within its mission statement, the policy document recognises technological solution elements, as well as interactions between humans and ICTs in cyberspace, as a key rationale behind the cyber security policy.

However, differing conceptualisations across policy and legislative instruments can lead to confusion and introduce implementation challenges within cyber security regulation. For example, the 2013 CERT-In Rules rely on the IT Act's definition of cyber security and define cyber security incidents and cyber security breaches, further emphasising the narrow and technically dominant discourse centred on the confidentiality, integrity, and availability triad.

The following section examines a few other definitions to illustrate the shortcomings highlighted above.

Key elements of Cyber security

Despite a plethora of definitions, there is no universal agreement on the conceptualisation of cybersecurity globally. This has manifested in the long-drawn deliberations at various international fora.

Cybersecurity aims to counter and tackle a constantly evolving threat landscape. Although it is difficult to build consensus on a singular definition, a few key features can be agreed upon. For example, the definition must address the interdisciplinarity inherent to cyber security, its dynamic nature, and the complex, multi-level ecosystem in which cyber security exists. A multidisciplinary definition can aid authorities and organisations in gaining visibility and insight into how new technologies can affect their risk exposure. It will further ensure that such risks are suitably mitigated. To effectuate cyber-resilience, stakeholders have to navigate governance, policy, operational, technical and legal challenges.

An inclusive definition can ensure a better collective response and bring multiple stakeholders to the table. To institutionalise greater emphasis on resilience, an inclusive definition can foster cooperation between various stakeholders, rather than a punitive approach that focuses on liability and criminality. An inclusive definition can also enable a bottom-up approach to countering cyber security threats and systemic incidents across sectors, and can further CERT-In's information-sharing objectives under Section 70B of the IT Act through collaboration between stakeholders.

When it comes to the regulation of technologies that embody socio-political values, and contrary to the popular belief that technical deliberations are objective and value-neutral, such discourse (in this case, the definition) suffers from the dominance of technical perspectives. For example, the definition of cybersecurity under the National Institute of Standards and Technology (NIST) framework – "the ability to protect or defend the use of cyberspace from cyber-attacks" – directs the reader to the definitions of cyberspace and cyber-attack to cover its various elements. However, those definitions also have a predominantly technical lens.

Alternatively, definitions of cyber security would benefit from inclusive conceptions that factor in human engagement with systems and acknowledge the interrelated dimensions and inherent complexities of cybersecurity, which involve dynamic interactions between all interconnected stakeholders. An effective cybersecurity strategy entails a judicious mix of people, policies and technology, as well as a robust public-private partnership.

Cybersecurity is a broad term and often has highly variable, subjective definitions. This hinders the formulation of appropriately responsive policy and legislative action. As a benchmark, we borrow the definition of cybersecurity proposed by Craigen et al.: "the organisation and collection of resources, processes, and structures used to protect cyberspace and cyberspace-enabled systems from occurrences that misalign de jure from de facto property rights." The benefit of this articulation is that it necessitates a deeper understanding of the harms and consequences of cyber security threats and their impact. However, this definition cannot be adopted wholesale within the Indian legal framework as (a) property rights are not recognised as fundamental rights in India, and (b) it narrows the definition's application to a harms-and-consequences standard.

Most importantly, the authors identify five common elements that together form a holistic and effective approach towards defining cybersecurity. These elements, drawn from a literature review of nine cybersecurity definitions, are:

  • technological solutions
  • events
  • strategies, processes, and methods
  • human engagement; and
  • referent objects.

These elements highlight the complexity of the process, which involves interaction between humans and systems to protect digital assets, and the humans themselves, from various known and unknown risks. Simply put, any unauthorised access, use, disclosure, disruption, modification or destruction results in, at the very least, a loss of functional control over the affected computer device or resource, to the detriment of the person and/or legal entity in whom lawful ownership of the device or resource is vested. The definition codified under the IT Act only partly captures this complexity of 'cyber security' and its implications.

Conclusion

Economic interest is a core objective that necessitates cyber-resilience. Recognising the economic consequences of such attacks, rather than merely protecting enumerated resources such as computer systems, acknowledges the complexity of approaches to cybersecurity. Currently, the definition of cybersecurity is dominated by technical perspectives and disregards other disciplines that should ideally be acting in concert to address complex challenges. Cyber-resilience can be operationalised through a renewed definition; divergent approaches within India to tackling cybersecurity challenges will act as a strategic barrier to economic growth, data flows, investment and, most importantly, effective security. They will also divert resources away from more effective strategies and capacity investments. Finally, the Indian approach should evolve from the country's threat perception and the socio-technical character of the term, and should aim to bring cybersecurity stakeholders together.

Technology and National Security Law Reflection Series Paper 5: Legality of Cyber Weapons Under International Law

Siddharth Gautam*

About the Author: The author is a 2020 graduate of National Law University, Delhi. 

Editor’s note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law. In the present essay, the author reflects upon the following question: 

What are cyber weapons? Are cyber weapons subject to any regulation under contemporary rules of international law? Explain with examples.

Introducing Cyber Weapons

In simple terms, weapons are tools that harm or aim to harm the human body. In ancient times, nomads used pointed tools to hunt prey. Today's world is naturally more advanced. In conventional warfare, modern weapons include rifles, grenades, artillery, missiles, etc. But in recent years the nature of warfare has changed immeasurably with the advancement of the internet and wider information and communication technologies ("ICT"). In this realm, the methods and means of warfare are undergoing change. As internet technology develops, we observe the advent and use of cyber weapons to carry out cyber warfare.

Cyber weapons, built using technological know-how, are low-cost tools. Their prominent usage is buttressed by the wide availability of computer resources. Growth in the information technology ("IT") industry and relatively cheap human resource markets have a substantial effect on the cost of cyber weapons, which are capable of infiltrating other territories with relative ease. The aim of cyber weapons is to cause physical or psychological harm, either by threat or by material damage, using computer code or malware.

2007 Estonia Cyber Attack

For example, the Estonia–Russia conflict arose after a Soviet-era soldier memorial was shifted to the outskirts of Tallinn. There was an uproar among the Russian-speaking population over this issue. On 26th and 27th April 2007, the capital saw rioting, defacing of property and numerous arrests.

On the same Friday, cyber attacks were carried out using low-tech methods like ping floods and simple denial-of-service (DoS) attacks. Soon thereafter, on 30th April 2007, the scale and scope of the cyber attacks increased sharply. The actors used botnets to deploy large-scale distributed denial-of-service (DDoS) attacks, compromising some 85,000 computer systems and severely affecting the entire Estonian cyber and computer landscape. The incident caused widespread concern and panic across the country.

Other Types of Cyber Weapons

Another prominent type of cyber weapon is the HARM, i.e., the High-speed Anti-Radiation Missile. It is a tactical air-to-surface anti-radiation missile that targets electronic transmissions emitted from surface-to-air radar systems. These weapons are able to recognise the pulse repetition of enemy frequencies and accordingly search for a suitable target radar. Once a radar is visible and identified as hostile, the missile homes in on its antenna or transmitter and causes significant damage to these highly important targets. A prominent example of its usage is in the Syria–Israel context: Israel launched cyber attacks against the Syrian air defence system, blinding it by attacking its radar stations so that no information about incoming aircraft was displayed to their operators.

A third cyber weapon worth analysing can be contextualised via the Stuxnet worm, which sabotaged Iran's nuclear programme by manipulating the speed of its uranium enrichment centrifuges while feeding fake input signals to their operators. It is alleged that the US and Israel jointly conducted this act of cyber warfare to damage Iran's nuclear programme.

In all three of the aforementioned cases, cyber weapons were used to infiltrate systems and to turn the targets' own technology into a means of conducting cyber warfare. Other types of cyber risks emerge from semantic attacks, otherwise known as social engineering attacks. In such attacks, perpetrators amend the information stored in a computer system and produce errors without the user being aware of it. Semantic attacks specifically pertain to human interaction with information generated by a computer system, and the way that information may be interpreted or perceived by the user. These tactics can be used to extract valuable or classified information like passwords, financial details, etc.

HACKERS (PT. 2) by Ifrah Yousuf. Licensed under CC BY 4.0. From CyberVisuals.org, a project of the Hewlett Foundation Cyber Initiative.

Applicable Landscape Under International Law

The question that now attracts attention is whether international law contains any rules to regulate, minimise or stop the aforementioned attacks carried out with cyber weapons. To answer this question, we can look at a specific branch of public international law, namely International Humanitarian Law ("IHL"). IHL deals with armed conflict situations and not with cyber attacks specifically. IHL "seeks to moderate the conduct of armed conflict and to mitigate the suffering which it causes". This statement itself comprises two major principles used in the laws of war:

Jus ad bellum – the principle which determines whether countries have a right to resort to war or armed conflict; and

Jus in bello – the principle which governs the conduct of States and their soldiers while engaging in war or armed conflict.

Both principles are given effect through the Hague and Geneva Conventions, with Additional Protocol I providing means and ways as to how warfare shall be conducted. Nine other treaties help safeguard and protect victims of war in armed conflict. The protections envisaged in the Hague and Geneva Conventions apply to situations involving injury, death, or, in some cases, damage to and/or destruction of property. If we analyse this logically, cyber warfare may result in armed conflict through certain weapons, tools and techniques like Stuxnet, trojan horses, bugs, DDoS attacks, malware, HARM, etc. The use of such weapons may ultimately yield certain results. Although a computer is not a traditional weapon, its use can still fulfil conditions that attract the application of IHL provisions.

Another principle of importance is the Martens Clause. This clause provides that even in cases not covered by conventional provisions, combatants and civilians remain protected by principles of humanity and the dictates of public conscience, as derived from the established customs of international law.

The Clause found in the Preamble to the Hague Convention IV of 1907 asserts that “even in cases not explicitly covered by specific agreements, civilians and combatants remain under the protection and authority of principles of international law derived from established custom, principles of humanity, and from the dictates of public conscience.” In other words, attacks should essentially be judged on the basis of their effects, rather than the means employed in the attack being the primary factor.

Article 35 of Additional Protocol I says that "In any armed conflict, the right of the Parties to the conflict to choose methods or means of warfare is not unlimited. It is prohibited to employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering."

The above provision means that the actions of armed forces should be proportionate to the actual military advantage sought to be achieved. In simple words, "indiscriminate attacks" causing loss of civilian life and damage to civilian property that is excessive in relation to the anticipated advantage shall not be undertaken.

Conclusion

Even though the terms of engagement vis-à-vis kinetic warfare are changing, the potential harm from cyber weapons could match it. Instead of guns there are computers, and instead of bullets there is malware, bugs, DDoS attacks, etc. Part of the reason one type of weapon is being replaced by another is that there are no explicit provisions in law that outlaw cyber warfare, whether waged independently or as part of a wider war.

The principles detailed in the previous section must necessarily apply to cyber warfare because they limit the attacker's ability to cause excessive collateral damage. In the same vein, cyber weapons are sui generis, much like nuclear weapons, and yet their significance is comparable to that of traditional weapons.

Another parallel is that cyber attacks, much like traditional armed conflict, often cause unnecessary suffering and raise concerns of discrimination and proportionality. Therefore, both should be governed by the principles of IHL.

In short, if the cyber attacks produce results in the same way as kinetic attacks do, they will be subject to IHL.


*The views expressed in the blog are personal and should not be attributed to the institution.

The Supreme Court’s Pegasus Order

This blog post has been authored by Shrutanjaya Bhardwaj.

On 28th October 2021, the Supreme Court passed an order in the "Pegasus" case establishing a three-member committee of technical experts to investigate allegations of illegal surveillance by hacking into the phones of several Indian citizens, including journalists. This post analyses the Pegasus order. Analyses by others may be accessed here, here and here.

Overview

The writ petitioners alleged that the Indian Government and its agencies have been using a spyware tool called “Pegasus”—produced by an Israeli technology firm named the NSO Group—to spy on Indian citizens. As the Court notes, Pegasus can be installed on digital devices such as mobile phones, and once Pegasus infiltrates the device, “the entire control over the device is allegedly handed over to the Pegasus user who can then remotely control all the functionalities of the device.” Practically, this means the ‘Pegasus user’ (i.e., the infiltrator) has access to all data on the device (emails, texts, and calls) and can remotely activate the camera and microphone to surveil the device owner and their immediate surroundings. 

The Court records some basic facts that are instructive in understanding its final order:

  1. The NSO Group itself claims that it only sells Pegasus to governments. 
  2. In November 2019, the then-Minister of Electronics and IT acknowledged in Parliament that Pegasus had infected the devices of certain Indians. 
  3. In July 2021, reputed media houses uncovered instances of Pegasus spyware attacks on many Indians including "senior journalists, doctors, political persons, and even some Court staff".
  4. Foreign governments have since taken steps to diplomatically engage with Israel and/or internally conduct investigations to understand the issue.
  5. Despite repeated requests by the Court, the Union Government did not furnish any specific information to assist the Court’s understanding of the matter.

These facts led the Court to conclude that the petitioners’ allegations of illegal surveillance by hacking need further investigation. The Court noted that the petitioners had placed on record expert reports and there also existed a wealth of ‘cross-verified media coverage’ coupled with the reactions of foreign governments to the use of Pegasus. The Court’s order leaves open the possibility that a foreign State or perhaps a private entity may have conducted surveillance on Indians. Additionally, the Union Government’s refusal to clarify its position on the legality and use of Pegasus in Court raised the possibility that the Union Government itself may have used the spyware. As discussed below, this possibility ultimately shaped the Court’s directions and relief.  

The Pegasus order is analysed below along three lines: (i) the Court’s acknowledgement of the threat to fundamental rights, (ii) the Union Government’s submissions before the Court, and (iii) the Court’s assertion of its constitutional duty of judicial review—even in the face of sensitive considerations like national security.

Acknowledging the risks to fundamental rights

While all fundamental rights may be reasonably restricted by the State, every right has different grounds on which it may be restricted. Identifying the precise right under threat is hence an important exercise. The Court articulates three distinct rights at risk in a Pegasus attack. Two flow from the freedom of speech under Article 19(1)(a) of the Constitution and one from the right to privacy under Article 21. 

The first right, relatable to Article 19(1)(a), is journalistic freedom. The Court noted that the awareness of being spied on causes the journalist to tread carefully and think twice before speaking the truth. Additionally, when a journalist’s entire private communication is accessible to the State, the chances of undue pressure increase manifold. The Court described such surveillance as “an assault on the vital public watchdog role of the press”.

The second right, also traced to Article 19(1)(a), is the journalist's right to protect their sources. The Court treats this as a "basic condition" for the freedom of the press. "Without such protection, sources may be deterred from assisting the press in informing the public on matters of public interest," which harms the free flow of information that Article 19(1)(a) is designed to ensure. This observation and acknowledgment by the Court is significant, and it will be interesting to see how the Court's jurisprudence develops and engages with this issue.

The third right, traceable to Article 21 as interpreted in Puttaswamy, is the citizen's right to privacy (see CCG's case brief on Puttaswamy in the CCG Privacy Law Library). Surveillance and hacking are prima facie an invasion of privacy. However, the State may justify a privacy breach as a reasonable restriction on constitutional grounds if the legality, necessity, and proportionality of the State's surveillance measure are established.

Court’s response to the Government’s “conduct” before the Court

The Court devotes a significant part of the Pegasus order to discussing the Union Government's "conduct" in the litigation. The first formal response filed by the Government, characterised as a "limited affidavit", did not furnish any details about the controversy owing to an alleged "paucity of time". When the Court termed this affidavit "insufficient" and demanded a more detailed affidavit, the Solicitor General cited national security implications as the reason for not filing a comprehensive response to the surveillance allegations. This was despite repeated assurances given by both the Petitioners and the Court that no sensitive information was being sought, and that the Government need only disclose what was necessary to decide the matter at hand. Additionally, the Government did not specify the national security consequences that would arise if more details were disclosed. (The Court's response to the invocation of the national security ground on merits is discussed in the next section.)

In addition to invoking national security, the Government made three other arguments:

  1. The press reports and expert evidence were “motivated and self-serving” and thus of insufficient veracity to trigger the Court’s jurisdiction.
  2. While all technology may be misused, the use of Pegasus cannot per se be impermissible, and India had sufficient legal safeguards to guard against constitutionally impermissible surveillance.
  3. The Court need not establish a committee as the Union Government was prepared to constitute its own committee of experts to investigate the issue.

The Court noted that the nature and “sheer volume” of news reports are such that these materials “cannot be brushed aside”. The Court was unwilling to accept the other two arguments in part due to the Union Government’s broader “conduct” on the issue of Pegasus. It noted that the first reports of Pegasus use dated back to 2018 and a Union Minister had informed Parliament of the spyware’s use on Indians in 2019, yet no steps to investigate or resolve the issue had been taken until the present writ petitions had been filed. Additionally, the Court ruled that the limited documentation provided by the Government did not clarify its stand on the use of Pegasus. In this context, and owing to reasons of natural justice (discussed below), the Court opined that independent fact finding and judicial review were warranted.

Assertion of constitutional duty of judicial review

As noted above, the Union Government invoked national security as a ground to not file documentation regarding its alleged use of Pegasus. The Court acknowledged that the government is entitled to invoke this ground, and even noted that the scope of judicial review is narrow on issues of national security. However, the Court held that the mere invocation of national security is insufficient to exclude court intervention. Rather, the government must demonstrate how the information being withheld would raise national security concerns and the Court will decide whether the government’s concerns are legitimate. 

The order contains important observations on the Government’s use of the national security exception to exclude judicial scrutiny. The Court notes that such arguments are not new; and that governments have often urged constitutional courts to take a hands-off approach in matters that have a “political” facet (like those pertaining to defence and security). But the Court has previously held, and also affirmed in the Pegasus order, that it will not abstain from interfering merely because a case has a political complexion. The Court noted that it may certainly choose to defer to the Government on sensitive aspects, but there is no “omnibus prohibition” on judicial review in matters of national security. If the State wishes to withhold information from the Court, it must “plead and prove” the necessary facts to justify such withholding.

The Government had also suggested that the Court let the Government set up a committee to investigate the matter. The Supreme Court had adopted this approach in the Kashmir Internet Shutdowns case by setting up an executive-led committee to examine the validity and necessity of continuing internet shutdowns. That judgment was widely criticised (see here, here and here). However, in the present case, as the petitions alleged that the Union Government itself had used Pegasus on Indians, the Court held that allowing the Union Government to set up a committee to investigate would violate the rule against bias in inquiries. The Court quoted the age-old principle that "justice must not only be done, but also be seen to be done", and refused to allow the Government to set up its own committee. This is consistent with the Court's assertion of its constitutional obligation of judicial review in the earlier parts of the order.

Looking ahead

The terms of reference of the Committee are pointed and meaningful. The Committee is required to investigate, inter alia, (i) whether Pegasus was used to hack into phones of Indian citizens, and if so which citizens; (ii) whether the Indian Government procured and deployed Pegasus; and (iii) if the Government did use Pegasus, what law or regulatory framework the spyware was used under. All governmental agencies have been directed to cooperate with the Committee and furnish any required information.

Additionally, the Committee is to make recommendations regarding the enactment of a new surveillance law or amendment of existing law(s), improvements to India's cybersecurity systems, setting up a robust investigation and grievance-redressal mechanism for the benefit of citizens, and any ad-hoc arrangements to be made by the Supreme Court for the protection of citizens' rights pending requisite action by Parliament.

The Court has directed the Committee to carry out its investigation “expeditiously” and listed the matter again after 8 weeks. As per the Supreme Court’s website, the petitions are tentatively to be listed on 3 January 2022.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.

The Future of Democracy in the Shadow of Big and Emerging Tech: CCG Essay Series

By Shrutanjaya Bhardwaj and Sangh Rakshita

In the past few years, the interplay between technology and democracy has reached a critical juncture. The untrammelled optimism about technology has now been shadowed by rising concerns over the survival of a meaningful democratic society. With the expanding reach of technology platforms, there have been increasing concerns in democratic societies around the world about the impact of such platforms on democracy and human rights. In this context, there has been increasing focus on policy issues like the need for an antitrust framework for digital platforms, platform regulation and free speech, the challenges of fake news, the impact of misinformation on elections, the invasion of citizens' privacy due to the deployment of emerging tech, and cybersecurity. This has intensified the quest for optimal policy solutions. We, at the Centre for Communication Governance at National Law University Delhi (CCG), believe that a detailed academic exploration of the relationship between democracy, and big and emerging tech will aid our understanding of the current problems, help contextualise them and highlight potential policy and regulatory responses.

Thus, we bring to you this series of essays, written by experts in the domain, in an attempt to collate contemporary scholarly thought on some of the issues that arise in the context of the interaction of democracy, and big and emerging tech. The essay series is publicly available on the CCG website. We have also announced the release of the essay series on Twitter.

Our first essay addresses the basic but critical question: What is ‘Big Tech’? Urvashi Aneja & Angelina Chamuah present a conceptual understanding of the phrase. While ‘Big Tech’ refers to a set of companies, it is certainly not a fixed set; companies become part of this set by exhibiting four traits or “conceptual markers” and—as a corollary—would stop being identified in this category if they were to lose any of the four markers. The first marker is that the company runs a data-centric model and has massive access to consumer data which can be leveraged or exploited. The second marker is that ‘Big Tech’ companies have a vast user base and are “multi-sided platforms that demonstrate strong network effects”. The third and fourth markers are the infrastructural and civic roles of these companies respectively, i.e., they not only control critical societal infrastructure (which is often acquired through lobbying efforts and strategic mergers and acquisitions) but also operate “consumer-facing platforms” which enable them to generate consumer dependence and gain huge power over the flow of information among citizens. It is these four markers that collectively define ‘Big Tech’. [U. Aneja and A. Chamuah, What is Big Tech? Four Conceptual Markers]

Since the power held by Big Tech is not only immense but also self-reinforcing, it endangers market competition, often by hindering other players from entering the market. Should competition law respond to this threat? If yes, how? Alok P. Kumar & Manjushree R.M. explore the purpose behind competition law and find that competition law is concerned not only with consumer protection but also—as evident from a conjoint reading of Articles 14 & 39 of the Indian Constitution—with preventing the concentration of wealth and material resources in a few hands. Seen in this light, the law must strive to protect “the competitive process”. But the present legal framework is too obsolete to achieve that aim. Current understanding of concepts such as ‘relevant market’, ‘hypothetical monopolist’ and ‘abuse of dominance’ is hard to apply to Big Tech companies which operate more on data than on money. The solution, it is proposed, lies in having ex ante regulation of Big Tech rather than a system of only subsequent sanctions through a possible code of conduct created after extensive stakeholder consultations. [A.P. Kumar and Manjushree R.M., Data, Democracy and Dominance: Exploring a New Antitrust Framework for Digital Platforms]

Market dominance and data control give an even greater power to Big Tech companies, i.e., control over the flow of information among citizens. Given the vital link between democracy and flow of information, many have called for increased control over social media with a view to checking misinformation. Rahul Narayan explores what these demands might mean for free speech theory. Could it be (as some suggest) that these demands are “a sign that the erstwhile uncritical liberal devotion to free speech was just hypocrisy”? Traditional free speech theory, Narayan argues, is inadequate to deal with the misinformation problem for two reasons. First, it is premised on protecting individual liberty from the authoritarian actions by governments, “not to control a situation where baseless gossip and slander impact the very basis of society.” Second, the core assumption behind traditional theory—i.e., the possibility of an organic marketplace of ideas where falsehood can be exposed by true speech—breaks down in context of modern era misinformation campaigns. Therefore, some regulation is essential to ensure the prevalence of truth. [R. Narayan, Fake News, Free Speech and Democracy]

Jhalak M. Kakkar and Arpitha Desai examine the context of election misinformation and consider possible misinformation regulatory regimes. Appraising the ideas of self-regulation and state-imposed prohibitions, they suggest that the best way forward for democracy is to strike a balance between the two. This can be achieved if the State focuses on regulating algorithmic transparency rather than the content of the speech—social media companies must be asked to demonstrate that their algorithms do not facilitate amplification of propaganda, to move from behavioural advertising to contextual advertising, and to maintain transparency with respect to funding of political advertising on their platforms. [J.M. Kakkar and A. Desai, Voting out Election Misinformation in India: How should we regulate Big Tech?]

Much like fake news challenges the fundamentals of free speech theory, it also challenges the traditional concepts of international humanitarian law. While disinformation fuels aggression by state and non-state actors in myriad ways, it is often hard to establish liability. Shreya Bose formulates the problem as one of causation: “How could we measure the effect of psychological warfare or disinformation campaigns…?” E.g., the cause-effect relationship is critical in tackling the recruitment of youth by terrorist outfits and the ultimate execution of acts of terror. It is important also in determining liability of state actors that commit acts of aggression against other sovereign states, in exercise of what they perceive—based on received misinformation about an incoming attack—as self-defence. The author helps us make sense of this tricky terrain and argues that Big Tech could play an important role in countering propaganda warfare, just as it does in promoting it. [S. Bose, Disinformation Campaigns in the Age of Hybrid Warfare]

The last two pieces focus attention on real-life, concrete applications of technology by the state. Vrinda Bhandari highlights the use of facial recognition technology ('FRT') in law enforcement as another area where the state deploys Big Tech in the name of 'efficiency'. Current deployment of FRT is constitutionally problematic. There is no legal framework governing the use of FRT in law enforcement. Profiling of citizens as 'habitual protestors' has no rational nexus to the aim of crime prevention; rather, it chills the exercise of free speech and assembly rights. Further, FRT deployment is wholly disproportionate, not only because of the well-documented inaccuracy and bias-related problems in the technology, but also because—more fundamentally—"[t]reating all citizens as potential criminals is disproportionate and arbitrary" and "creates a risk of stigmatisation". The risk of mass real-time surveillance adds to the problem. In light of these concerns, the author suggests a complete moratorium on the use of FRT for the time being. [V. Bhandari, Facial Recognition: Why We Should Worry about the Use of Big Tech for Law Enforcement]

In the last essay of the series, Malavika Prasad presents a case study of the Pune Smart Sanitation Project, a first-of-its-kind urban sanitation programme which pursues the Smart City Mission (‘SCM’). According to the author, the structure of city governance (through Municipalities) that existed even prior to the advent of the SCM violated the constitutional principle of self-governance. This flaw was only aggravated by the SCM which effectively handed over key aspects of city governance to state corporations. The Pune Project is but a manifestation of the undemocratic nature of this governance structure—it assumes without any justification that ‘efficiency’ and ‘optimisation’ are neutral objectives that ought to be pursued. Prasad finds that in the hunt for efficiency, the design of the Pune Project provides only for collection of data pertaining to users/consumers, hence excluding the marginalised who may not get access to the system in the first place owing to existing barriers. “Efficiency is hardly a neutral objective,” says Prasad, and the state’s emphasis on efficiency over inclusion and participation reflects a problematic political choice. [M. Prasad, The IoT-loaded Smart City and its Democratic Discontents]

We hope that readers will find the essays insightful. As ever, we welcome feedback.

This series is supported by the Friedrich Naumann Foundation for Freedom (FNF) and has been published by the National Law University Delhi Press. We are thankful for their support. 

Building an AI Governance Framework for India, Part III

Embedding Principles of Privacy, Transparency and Accountability

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled "Towards Responsible AI for All" (hereafter 'NITI Aayog Working Document' or 'Working Document'). The Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a 'Responsible AI' policy in India. CCG's comments and analysis on the Working Document can be accessed here.

In our first post in the series, ‘Building an AI governance framework for India’, we discussed the legal and regulatory implications of the Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its constitutional framework, and (2) based on clearly articulated overarching ‘Principles for Responsible AI’. Part II of the series discussed specific Principles for Responsible AI – Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. We explored the constituent elements of these principles and the avenues for incorporating them into the Indian regulatory framework. 

In this final post of the series, we will discuss the remaining principles of Privacy, Transparency and Accountability. 

Principle of Privacy 

Given the diversity of AI systems, the privacy risks they pose to individuals, and to society as a whole, are also varied. These may be broadly related to:

(i) Data protection and privacy: This relates to privacy implications of the use of data by AI systems and subsequent data protection considerations which arise from this use. There are two broad aspects to think about in terms of the privacy implications from the use of data by AI systems. Firstly, AI systems must be tailored to the legal frameworks for data protection. Secondly, given that AI systems can be used to re-identify anonymised data, the mere anonymisation of data for the training of AI systems may not provide adequate levels of protection for the privacy of an individual.

a) Data protection legal frameworks: Machine learning and AI technologies have existed for decades; however, it is the explosion in the availability of data that accounts for the advancement of AI technologies in recent years. Machine learning and AI systems depend upon data for their training. Generally, the more data the system is given, the more it learns and, ultimately, the more accurate it becomes. The application of existing data protection frameworks to the use of data by AI systems may raise challenges.

In the Indian context, the Personal Data Protection Bill, 2019 (PDP Bill), currently being considered by Parliament, contains some provisions that may apply to some aspects of the use of data by AI systems. One such provision is Clause 22 of the PDP Bill, which requires data fiduciaries to incorporate the seven ‘privacy by design’ principles and embed privacy and security into the design and operation of their product and/or network. However, given that AI systems rely significantly on anonymised personal data, their use of data may not fall squarely within the regulatory domain of the PDP Bill. The PDP Bill does not apply to the regulation of anonymised data at large but the Data Protection Authority has the power to specify a code of practice for methods of de-identification and anonymisation, which will necessarily impact AI technologies’ use of data.

b) Use of AI to re-identify anonymised data: AI applications can be used to re-identify anonymised personal data. To safeguard the privacy of individuals, datasets composed of the personal data of individuals are often anonymised through a de-identification and sampling process before they are shared for the purposes of training AI systems. However, current technology makes it possible for AI systems to reverse this process of anonymisation and re-identify people, which has significant privacy implications for an individual’s personal data.
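To illustrate this risk, the following is a minimal, purely hypothetical sketch of a linkage attack: an ‘anonymised’ dataset that retains quasi-identifiers (pincode, birth year, gender) is joined with an auxiliary dataset that contains names, re-linking a sensitive attribute to identified individuals. The data, column names and library used are illustrative assumptions, not drawn from the Working Document.

```python
# Hypothetical sketch of a linkage attack: re-identification of "anonymised"
# records by joining on quasi-identifiers present in an auxiliary dataset.
import pandas as pd

# "Anonymised" training data: direct identifiers removed, quasi-identifiers retained.
anonymised = pd.DataFrame({
    "pincode":    ["110001", "110001", "560034"],
    "birth_year": [1987, 1992, 1987],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],  # sensitive attribute
})

# Auxiliary dataset with identities (e.g. a public or leaked customer list).
auxiliary = pd.DataFrame({
    "name":       ["A. Sharma", "R. Khan"],
    "pincode":    ["110001", "560034"],
    "birth_year": [1987, 1987],
    "gender":     ["F", "F"],
})

# Joining on the quasi-identifiers links the sensitive attribute back to named people.
reidentified = auxiliary.merge(anonymised, on=["pincode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```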

(ii) Impact on society: The impact of the use of AI systems on society essentially relates to broader privacy considerations that arise at a societal level due to the deployment and use of AI, including mass surveillance, psychological profiling, and the use of data to manipulate public opinion. Facial recognition surveillance is one such AI application with significant privacy implications for society as a whole. Such AI technology enables individuals to be easily tracked and identified and has the potential to significantly transform expectations of privacy and anonymity in public spaces.

Due to the varying nature of the privacy risks and implications caused by AI systems, we will have to design various regulatory mechanisms to address these concerns. It is important to put in place a reporting and investigation mechanism that collects and analyses information on privacy impacts caused by the deployment of AI systems, and on privacy incidents that occur in different contexts. The collection of this data would allow actors across the globe to identify common threads of failure and mitigate potential privacy failures arising from the deployment of AI systems.

To this end, we can draw on a mechanism that is currently in place in the context of reporting and investigating aircraft incidents, as detailed under Annex 13 to the Convention on International Civil Aviation (Chicago Convention). It lays down the procedure for investigating aviation incidents and a reporting mechanism for sharing information between countries. The aim of such an investigation is not to apportion blame or liability, but to study the cause of the accident extensively and prevent future incidents.

A similar incident investigation mechanism may be employed for AI incidents involving privacy breaches. With many countries now widely developing and deploying AI systems, such a model of incident investigation would ensure that countries can learn from each other’s experiences and deploy more privacy-secure AI systems.

Principle of Transparency

The concept of transparency is a recognised prerequisite for the realisation of ‘trustworthy AI’. The goal of transparency in ethical AI is to make sure that the functioning of the AI system and resultant outcomes are non-discriminatory, fair, and bias mitigating, and that the AI system inspires public confidence in the delivery of safe and reliable AI innovation and development. Additionally, transparency is also important in ensuring better adoption of AI technology—the more users feel that they understand the overall AI system, the more inclined and better equipped they are to use it.

The level of transparency must be tailored to its intended audience. Information about the working of an AI system should be contextualised for the various stakeholder groups interacting with and using the AI system. The Institute of Electrical and Electronics Engineers, a global professional organisation of electronic and electrical engineers, has suggested that different stakeholder groups may require varying levels of transparency. This means that groups such as users, incident investigators, and the general public would require different standards of transparency depending upon the nature of the information relevant to their use of the AI system.

Presently, many AI algorithms are black boxes: automated decisions are taken based on machine learning over training datasets, and the decision-making process is not explainable. When such AI systems produce a decision, human end users do not know how it arrived at its conclusions. This points to two major transparency problems: the public’s perception and understanding of how AI works, and how much developers themselves understand about their own AI system’s decision-making process. In many cases, developers may not know, or be able to explain, how an AI system reaches its conclusions or how it has arrived at certain solutions.

This results in a lack of transparency. Some organisations have suggested opening up AI algorithms for scrutiny and ending reliance on opaque algorithms. On the other hand, the NITI Working Document is of the view that disclosing the algorithm is not the solution and that the focus should instead be on explaining how decisions are taken by AI systems. Given the challenges around explainability discussed above, it will be important for NITI Aayog to discuss how such an approach will be operationalised in practice.
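One family of techniques sometimes used to explain model behaviour without disclosing the underlying algorithm is post-hoc feature attribution. The sketch below uses permutation importance on an openly available illustrative dataset; it is a hedged example of what ‘explaining how decisions are taken’ might look like in practice, not a method proposed by the Working Document.

```python
# Illustrative sketch: post-hoc explanation via permutation importance.
# Shuffling one input feature at a time and measuring the drop in accuracy
# indicates which features the trained model relies on most for its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # illustrative dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.3f}")  # top features driving the model's decisions
```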

While many countries and organisations are researching different techniques which may be useful in increasing the transparency of an AI system, one common suggestion that has gained traction in the last few years is the introduction of labelling mechanisms for AI systems. An example of this is Google’s proposal to use ‘Model Cards’, which are intended to clarify the scope of an AI system’s deployment and minimise its usage in contexts for which it may not be well suited.

Model cards are short documents which accompany a trained machine learning model. They enumerate the benchmarked evaluation of the working of an AI system in a variety of conditions, across different cultural, demographic, and intersectional groups which may be relevant to the intended application of the AI system. They also contain clear information on an AI system’s capabilities including the intended purpose for which it is being deployed, conditions under which it has been designed to function, expected accuracy and limitations. Adopting model cards and other similar labelling requirements in the Indian context may be a useful step towards introducing transparency into AI systems. 
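As a purely illustrative sketch, a model-card-style label could be represented as a simple structured record attached to a trained model. The field names, model and figures below are hypothetical assumptions, loosely following the elements described above (intended use, evaluation across groups, limitations); they are not drawn from Google’s Model Cards specification or the Working Document.

```python
# Hypothetical model-card-style record; all fields and figures are illustrative.
model_card = {
    "model_name": "loan-default-classifier-v1",
    "intended_use": "Pre-screening of retail loan applications; not for final decisions.",
    "training_data": "Applications from 2015-2019, urban branches only.",
    "evaluation": {
        "overall_accuracy": 0.87,
        # benchmarked performance across groups relevant to the intended application
        "accuracy_by_group": {"women": 0.83, "men": 0.88, "rural_applicants": 0.74},
    },
    "known_limitations": [
        "Lower accuracy for rural applicants due to sparse training data",
        "Not evaluated on self-employed income patterns",
    ],
    "out_of_scope_uses": ["Insurance pricing", "Employment screening"],
}

def use_is_in_scope(card: dict, proposed_use: str) -> bool:
    """A deployment gate could refuse uses the card flags as out of scope."""
    return proposed_use not in card["out_of_scope_uses"]

print(use_is_in_scope(model_card, "Insurance pricing"))  # False: unsuitable context
```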

Principle of Accountability

The Principle of Accountability aims to recognise the responsibility of the different organisations and individuals that develop, deploy and use AI systems. Accountability is about responsibility, answerability and trust. There is no one standard form of accountability; rather, it depends upon the context of the AI system and the circumstances of its deployment.

Holding individuals and entities accountable for harm caused by AI systems poses significant challenges, as AI systems generally involve multiple parties at various stages of the development process. The regulation of the adverse impacts caused by AI systems often goes beyond the existing regimes of tort law, privacy law or consumer protection law. Some degree of accountability can be achieved by enabling greater human oversight. In order to foster trust in AI and appropriately determine the party who is accountable, it is necessary to build a set of shared principles that clarify the responsibilities of each stakeholder involved with the research, development and implementation of an AI system, ranging from developers and service providers to end users.

Accountability has to be ensured at the following stages of an AI system’s lifecycle:

(i) Pre-deployment: It would be useful to implement an audit process before the AI system is deployed. A potential mechanism for implementing this could be a multi-stage audit process which is undertaken post design, but before the deployment of the AI system by the developer. This would involve scoping, mapping and testing a potential AI system before it is released to the public. This can include ensuring risk mitigation strategies for changing development environments and ensuring documentation of policies, processes and technologies used in the AI system.

Depending on the nature of the AI system and the potential for risk, regulatory guidelines can be developed prescribing the involvement of various categories of auditors, such as internal auditors, expert third parties and auditors from the relevant regulatory agency, at various stages of the audit. Such pre-deployment audits are aimed at closing the accountability gap which currently exists.

(ii) During deployment: Once the AI system has been deployed, it is important to keep auditing it in order to track the changes and evolution in the AI system over the course of its deployment. AI systems constantly learn from data and evolve to become better and more accurate. It is important that the development team continuously monitors the system to capture any errors that may arise, including inconsistencies arising from input data or design features, and addresses them promptly.
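By way of illustration only, one element of such continuous monitoring could be an automated check for drift in the input data a deployed system receives. The sketch below flags a shift in a single numeric feature relative to its training baseline; the feature, figures and threshold are hypothetical assumptions rather than a prescribed monitoring standard.

```python
# Illustrative sketch of in-deployment monitoring: flag input data drift when a
# live feature's mean moves far from its training-time baseline.
import statistics

def drift_alert(baseline, live, threshold=3.0):
    """Return True if the live mean is more than `threshold` baseline standard
    deviations away from the baseline mean (threshold is purely illustrative)."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    return abs(statistics.mean(live) - base_mean) > threshold * base_std

# Hypothetical feature: transaction amounts seen during training vs. in production.
training_amounts = [120.0, 95.0, 110.0, 130.0, 105.0, 99.0, 115.0]
production_amounts = [310.0, 280.0, 295.0, 330.0, 305.0]

if drift_alert(training_amounts, production_amounts):
    print("Input distribution has shifted; trigger a review of the deployed model.")
```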

(iii) Post-deployment: Ensuring accountability in an AI system post-deployment can be challenging. The NITI Working Document also recognised that assigning accountability for specific decisions becomes difficult in a scenario with multiple players involved in the development and deployment of an AI system. In the absence of any consequences for decisions that harm others, no one party would feel obligated to take responsibility or to act to mitigate the effects of the AI system. Additionally, the lack of accountability also makes it difficult to design grievance redressal mechanisms to address scenarios where harm has arisen from the use of AI systems. 

The Council of Europe, in its guidelines on the human rights impacts of algorithmic systems, highlighted the need for effective remedies to ensure responsibility and accountability for the protection of human rights in the context of the deployment of AI systems. A potential model for grievance redressal is the redressal mechanism suggested in the AI4People’s Ethical Framework for a Good Society report by the Atomium – European Institute for Science, Media and Democracy. The report suggests that any grievance redressal mechanism for AI systems would have to be widely accessible and include redress for harms inflicted, costs incurred, and other grievances caused by the AI system. It must demarcate a clear system of accountability for both organisations and individuals. Of the various redressal mechanisms they have suggested, two significant mechanisms are: 

(a) AI ombudsperson: This would ensure the auditing of allegedly unfair or inequitable uses of AI, reported by users or the public at large, through an accessible judicial process. 

(b) Guided process for registering a complaint: This envisions laying down a simple process, similar to filing a Right to Information request, which can be used to bring discrepancies, or faults in an AI system to the notice of the authorities.

Such mechanisms can be evolved to address the human rights concerns and harms arising from the use of AI systems in India. 

Conclusion

In early October, the Government of India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India’s vision and a roadmap for social transformation, inclusion and empowerment through Responsible AI. At the RAISE Summit, speakers underlined the need for adopting AI ethics and a human-centred approach to the deployment of AI systems. However, this conversation is still at a nascent stage and several rounds of consultations may be required to build these principles into an Indian AI governance and regulatory framework. 

As India enters into the next stage of developing and deploying AI systems, it is important to have multi-stakeholder consultations to discuss mechanisms for the adoption of principles for Responsible AI. This will enable the framing of an effective governance framework for AI in India that is firmly grounded in India’s constitutional framework. While the NITI Aayog Working Document has introduced the concept of ‘Responsible AI’ and the ethics around which AI systems may be designed, it lacks substantive discussion on these principles. Hence, in our analysis, we have explored global views and practices around these principles and suggested mechanisms appropriate for adoption in India’s governance framework for AI. Our detailed analysis of these principles can be accessed in our comments to the NITI Aayog’s Working Document Towards Responsible AI for All.

Experimenting With New Models of Data Governance – Data Trusts

This post has been authored by Shashank Mohan

India is in the midst of establishing a robust data governance framework, which will impact the rights and liabilities of all key stakeholders – the government, private entities, and citizens at large. As a parliamentary committee debates its first personal data protection legislation (‘PDPB 2019’), proposals for the regulation of non-personal data and a data empowerment and protection architecture are already underway. 

As data processing capabilities continue to evolve at a feverish pace, basic data protection regulations like the PDPB 2019 might not be sufficient to address new challenges. For example, big data analytics renders traditional notions of consent meaningless as users have no knowledge of how such algorithms behave and what determinations are made about them by such technology. 

Creative data governance models, which are aimed at reversing the power dynamics in the larger data economy, are the need of the hour. Recognising these challenges, policymakers are driving the conversation on data governance in the right direction. However, they might be missing out on crucial experiments being run in other parts of the world.

As users of digital products and services increasingly lose control over data flows, various new models of data governance are being recommended, for example data trusts, data cooperatives, and data commons. Of these, one of the most promising is the data trust. 

(For the purposes of this blog post, I’ll be using the phrase data processors as an umbrella term to cover data fiduciaries/controllers and data processors in the legal sense. The word users is meant to include all data principals/subjects.)

What are data trusts?

Though there are various definitions of data trusts, one which is helpful in understanding the concept is – ‘data trusts are intermediaries that aggregate user interests and represent them more effectively vis-à-vis data processors.’ 

To solve the information asymmetries and power imbalances between users and data processors, data trusts will act as facilitators of data flow between the two parties, but on the terms of the users. Data trusts will act in a fiduciary capacity and in the best interests of their members. They will have the requisite legal and technical knowledge to act on behalf of users. Instead of users making potentially ill-informed decisions over data processing, data trusts will make such decisions on their behalf, based on pre-decided factors such as a bar on third-party sharing, and in their best interests. For example, data trusts could be to users what mutual fund managers are to potential investors in capital markets. 

Currently, in a typical transaction in the data economy, if users wish to use a particular digital service, they have neither the knowledge to understand the possible privacy risks nor the negotiating power to change the terms. Data trusts, with a fiduciary responsibility towards users, specialised knowledge, and multiple members, might be successful in tilting the power dynamics back in favour of users. Data trusts might be relevant from the perspective of both the protection and the controlled sharing of personal as well as non-personal data. 

(MeitY’s Non-Personal Data Governance Framework introduces the concept of data trustees and data trusts into India’s larger data governance and regulatory framework. However, it applies only to the governance of ‘non-personal data’ and not personal data, as is being recommended here. CCG’s comments on MeitY’s Non-Personal Data Governance Framework can be accessed – here.)

Challenges with data trusts

Though creative solutions like data trusts seem promising in theory, they must be thoroughly tested and experimented with before wide-scale implementation. Firstly, such a new form of trusts, where the subject matter of the trust is data, is not envisaged by Indian law (see section 8 of the Indian Trusts Act, 1882, which provides for only property to be the subject matter of a trust). Current and even proposed regulatory structures don’t account for the regulation of institutions like data trusts (the non-personal data governance framework proposes data trusts, but only as data sharing institutions and not as data managers or data stewards, as being suggested here). Thus, data trusts will need to be codified into Indian law to be an operative model. 

Secondly, data processors might not embrace the notion of data trusts, as it may result in a loss of market power. Larger tech companies, which have existing stores of data on numerous users, may not be sufficiently incentivised to engage with models of data trusts. Structures will need to be built in a way that incentivises data processors to participate in such novel data governance models. 

Thirdly, the business or operational models for data trusts will need to be aligned to their members, i.e. users. Data trusts will require money to operate, but for-profit entities may not have the best interests of users in mind. Subscription-based models, whether for profit or not, might fail as users are accustomed to free services. Donation-based models might need to be monitored closely for added transparency and accountability. 

Lastly, other issues like creation of technical specifications for data sharing and security, contours of consent, and whether data trusts will help in data sharing with the government, will need to be accounted for. 

Privacy centric data governance models

At this early stage of developing data governance frameworks suited to Indian needs, policymakers are at a crucial juncture of experimenting with different models. These models must be centred around the protection and preservation of privacy rights of Indians, both from private and public entities. Privacy must also be read in its expansive definition as provided by the Supreme Court in Justice K.S. Puttaswamy vs. Union of India. The autonomy, choice, and control over informational privacy are crucial to the Supreme Court’s interpretation of privacy. 

(CCG’s privacy law database, which tracks privacy jurisprudence globally and currently contains information from India and Europe, can be accessed – here.)

Building an AI governance framework for India

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a “Working Document: Towards Responsible AI for All” (“NITI Working Document/Working Document”). The Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

The Working Document highlights the potential of Artificial Intelligence (“AI”) in the Indian context. It attempts to identify the challenges that will be faced in the adoption of AI and makes some recommendations on how to address these challenges. The Working Document emphasises the economic potential of the adoption of AI in boosting India’s annual growth rate, its potential for use in the social sector (‘AI for All’) and the potential for India to export relevant social sector products to other emerging economies (‘AI Garage’). 

However, this is not the first time that the NITI Aayog has discussed the large-scale adoption of AI in India. In 2018, the NITI Aayog released a discussion paper on the “National Strategy for Artificial Intelligence” (“National Strategy”). Building upon the National Strategy, the Working Document attempts to delineate ‘Principles for Responsible AI’ and identify relevant policy and governance recommendations. 

Any framework for the regulation of AI systems needs to be based on clear principles. The ‘Principles for Responsible AI’ identified by the Working Document include the principles of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. While the NITI Working Document introduces these principles, it does not go into any substantive details on the regulatory approach that India should adopt and what the adoption of these principles into India’s regulatory framework would entail. 

In a series of posts, we will discuss the legal and regulatory implications of the proposed Working Document and more broadly discuss the regulatory approach India should adopt to AI and the principles India should embed in it. In this first post, we map out key considerations that should be kept in mind in order to develop a comprehensive regulatory regime to govern the adoption and deployment of AI systems in India. Subsequent posts will discuss the various ‘Principles for Responsible AI’, their constituent elements and how we should think of incorporating them into the Indian regulatory framework.

Approach to building an AI regulatory framework 

While the adoption of AI has several benefits, there are several potential harms and unintended risks if the technology is not assessed adequately for its alignment with India’s constitutional principles and its impact on the safety of individuals. Depending upon the nature and scope of the deployment of an AI system, its potential risks can include the discriminatory impact on vulnerable and marginalised communities, and material harms such as the negative impact on the health and safety of individuals. In the case of deployments by the State, risks include violation of the fundamental rights to equality, privacy, freedom of assembly and association, and freedom of speech and expression. 

We highlight some of the key regulatory considerations below:

Anchoring AI regulatory principles within the constitutional framework of India

The use of AI systems has raised concerns about their potential to violate multiple rights protected under the Indian Constitution such as the right against discrimination, the right to privacy, the right to freedom of speech and expression, the right to assemble peaceably and the right to freedom of association. Any regulatory framework put in place to govern the adoption and deployment of AI technology in India will have to be in consonance with its constitutional framework. While the NITI Working Document does refer to the idea of the prevailing morality of India and its relation to constitutional morality, it does not comprehensively address the idea of framing AI principles in compliance with India’s constitutional principles.

For instance, the government is seeking to acquire facial surveillance technology, and the National Strategy discusses the use of AI-powered surveillance applications by the government to predict crowd behaviour and for crowd management. The use of AI powered surveillance systems such as these needs to be balanced with their impact on an individual’s right to freedom of speech and expression, privacy and equality. Operational challenges surrounding accuracy and fairness in these systems raise further concerns. Considering the risks posed to the privacy of individuals, the deployment of these systems by the government, if at all, should only be done in specific contexts for a particular purpose and in compliance with the principles laid down by the Supreme Court in the Puttaswamy case.

In the context of AI’s potential to exacerbate discrimination, it would be relevant to discuss the State’s use of AI systems for the sentencing of criminals and assessing recidivism. AI systems are trained on existing datasets. These datasets tend to contain historically biased, unequal and discriminatory data. We have to be cognizant of the propensity for historical biases and discrimination to be imported into AI systems and their decision making. This could further reinforce and exacerbate the existing discrimination in the criminal justice system towards marginalised and vulnerable communities, and result in a potential violation of their fundamental rights.

The National Strategy acknowledges the presence of such biases and proposes a technical approach to reduce bias. While such attempts are appreciable in their efforts to rectify the situation and yield fairer outcomes, such an approach disregards the fact that these datasets are biased because they arise from a biased, unequal and discriminatory world. As we seek to build effective regulation to govern the use and deployment of AI systems, we have to remember that these are socio-technical systems that reflect the world around us and embed the biases, inequality and discrimination inherent in the Indian society. We have to keep this broader Indian social context in mind as we design AI systems and create regulatory frameworks to govern their deployment. 

While the Working Document introduces principles for responsible AI such as equality, inclusivity and non-discrimination, and privacy and security, there needs to be substantive discussion around incorporating these principles into India’s regulatory framework in consonance with constitutionally guaranteed rights.

Regulatory Challenges in the adoption of AI in India

As India designs a regulatory framework to govern the adoption and deployment of AI systems, it is important that we keep the following in focus: 

  • Heightened threshold of responsibility for government or public sector deployment of AI systems

The EU is considering adopting a risk-based approach to the regulation of AI, with heavier regulation for high-risk AI systems. The extent of risk to factors such as safety, consumer rights and fundamental rights is assessed by looking at the sector of deployment and the intended use of the AI system. Similarly, India must consider the adoption of a higher regulatory threshold for the use of AI by at least government institutions, given its potential for impacting citizens’ rights. Government uses of AI systems that have the potential to severely impact citizens’ fundamental rights include the use of AI in the disbursal of government benefits, surveillance, law enforcement and judicial sentencing.

  • Need for overarching principles based AI regulatory framework

Different sectoral regulators are currently evolving regulations to address the specific challenges posed by AI in their sector. While it is vital to harness the domain expertise of a sectoral regulator and encourage the development of sector-specific AI regulations, such piecemeal development of AI principles can lead to fragmentation in the overall approach to regulating AI in India. Therefore, to ensure uniformity in the approach to regulating AI systems across sectors, it is crucial to put in place a horizontal overarching principles-based framework. 

  • Adaptation of sectoral regulation to effectively regulate AI

In addition to an overarching regulatory framework which forms the basis for the regulation of AI, it is equally important to envisage how this framework would work with horizontal or sector-specific laws such as consumer protection law and the applicability of product liability to various AI systems. Traditionally consumer protection and product liability regulatory frameworks have been structured around fault-based claims. However, given the challenges concerning explainability and transparency of decision making by AI systems, it may be difficult to establish the presence of defects in products and, for an individual who has suffered harm, to provide the necessary evidence in court. Hence, consumer protection laws may have to be adapted to stay relevant in the context of AI systems. Even sectoral legislation regulating the use of motor vehicles, such as the Motor Vehicles Act, 1988 would have to be modified to enable and regulate the use of autonomous vehicles and other AI transport systems. 

  • Contextualising AI systems for both their safe development and use

To ensure the effective and safe use of AI systems, they have to be designed, adapted and trained on relevant datasets depending on the context in which they will be deployed. The Working Document envisages India being the AI Garage for 40% of the world – developing AI solutions in India which can then be deployed in other emerging economies. Additionally, India will likely import AI systems developed in countries such as the US, EU and China to be deployed within the Indian context. Both scenarios involve the use of AI systems in a context distinct from the one in which they have been developed. Without effectively contextualising socio-technical systems like AI systems to the environment they are to be deployed in, there are enhanced safety, accuracy and reliability concerns. Regulatory standards and processes need to be developed in India to ascertain the safe use and deployment of AI systems that have been developed in contexts that are distinct from the ones in which they will be deployed. 

The NITI Working Document is the first step towards an informed discussion on the adoption of a regulatory framework to govern AI technology in India. However, there is a great deal of work to be done. Any regulatory framework developed by India to govern AI must balance the benefits and risks of deploying AI, diminish the risk of any harm and have a consumer protection framework in place to adequately address any harm that may arise. Besides this, the regulatory framework must ensure that the deployment and use of AI systems are in consonance with India’s constitutional scheme.

India’s Cybersecurity Budget FY 2013-14 to FY 2019-20: Analysis of Budgetary Allocations for Cybersecurity and Related Activities

This is an edited excerpt of Part V and Annexure ‘C’ of CCG’s Comments to the National Security Council Secretariat on the National Cyber Security Strategy 2020 (NCSS 2020). The full text of the Comments can be accessed here.

Note on Research Methodology

CCG compiled the data on allocations (budgeted and revised) and actual expenditure from the Demands for Grants of Ministries, as approved by Parliament and presented in the Annual Expenditure Budget of the various ministries and their respective departments that are related to cybersecurity, for FY 2013-14 to FY 2019-20. 

The departments have been identified from publicly available information represented in the organograms presented as Annexure ‘B’. We understand a ‘relevant department’ to mean those departments which are either directly related to cybersecurity and/or support the functioning of the technical and security aspects of internet governance at large.

We have then identified those budget heads under the Union Budgets for FY 2013-14 through FY 2019-20 which correspond most closely to the departments identified and highlighted in Annexure ‘B’, to calculate the total allocation to ministries for cybersecurity-related activities. We then analyse this data under four broad categories:

(I) Department Wise Allocation: The allocations to departments that are directly related to expenditure on cybersecurity are calculated under this heading. Various expenditures under the Ministry of Electronics and Information Technology (MeitY), the Department of Telecommunication (DoT), and the Ministry of Home Affairs are tabulated for this. 

Under MeitY, we have included the budget heads for

  1. Computer Emergency Response Team (CERT-IN),
  2. Centre for Development of Advanced Computing (C-DAC),
  3. Centre for Materials for Electronics and IT (C-MET),
  4. Society for Applied Microwave Electronics Engineering and Research (SAMEER),
  5. Standardization Testing and Quality Certification (STQC),
  6. Controller of Certifying Authorities (CCA), and
  7. Foreign Trade and Export Promotion and
  8. Certain components of the Digital India Initiative, namely:
  • Manpower Development,
  • National Knowledge Network,
  • Promotion of electronics and IT HW manufacturing,
  • Cybersecurity projects (which includes National Cyber Coordination centre and others),
  • Research and Development in Electronics/IT,
  • Promotion of IT/ITeS industries,
  • Promotion of Digital Payment, and
  • Pradhan Mantri Digital Saksharta Abhiyan (PMGDISHA).

Under the Ministry of Communications, our focus was only on the Department of Telecommunication. We considered the budget allocated to the following heads to arrive at the total Department budget:

  1. Telecom Regulatory Authority of India (TRAI),
  2. Human Resource Management under National Institute of Communication Finance,
  3. Wireless Planning and Coordination,
  4. Telecom Engineering Centre,
  5. Technology Development and Investment Promotion,
  6. South Asia Sub-Regional Economic Cooperation (SASEC) under Information Highway Project,
  7. Telecom Testing and Security Certification Centre,
  8. Telecom Computer Emergency Response Team,
  9. Central Equipments Identity Register (CEIR),
  10. 5G Connectivity Test Bed,
  11. Promotion of Innovation and Incubation of Future Technologies for Telecom Sector,
  12. Centre for Development of Telematics (C-DoT), and
  13. Labour, Employment and Skill Development.

Under Ministry of Home Affairs, the funds allocated for the following budget heads have been included:

  1. Education, Training and Research purposes,
  2. Criminology and Forensic Science,
  3. Modernisation of Police Forces and Crime and Criminal Tracking Network and Systems (CCTNS),
  4. Indian Cyber Crime Coordination Centre, and
  5. Technical and Economic Cooperation with Other Countries.

All these budget heads were tabulated to arrive at the total for department-wise allocation. Along with the departments mentioned under ‘Supporting Departments’, all these departments were again classified on the basis of their functions and activities, and analysed under (III).

(II) Supporting Department Wise Allocation: Certain expenditures of the Ministry of Defence, Ministry of External Affairs, Department of Telecommunication, and Ministry of Home Affairs can potentially be used for cybersecurity-related activities, but it is not possible to infer from the Demands for Grants the share of cybersecurity in the total allocation; we have therefore treated them as ‘allocations to supporting departments’. In this data, the total funds indicated may not be directly related to cybersecurity efforts, but they contribute towards the larger security and governance framework, which enables the creation of a secure cyber ecosystem. These headings are tabulated under this section.

Under Ministry of Defence, the following heads were considered to contribute towards the larger security and governance framework in cyberspace:

  1. Navy/Joint Staff,
  2. Ordnance Factories R&D,
  3. Research and Development, including the Research and Development component of R&D head,
  4. Capital Outlay on R&D, and
  5. Technology Development and Assistance for Prototype Development under Make Procedure

Under Ministry of External Affairs, we considered the following heads as important contributors:

  1. The Special Diplomatic Expenditure,
  2. Expenditure for International Cooperation,
  3. Expenditure for Technical and Economic Cooperation with other Countries, and
  4. Other Expenditure of Ministry

Under Department of Telecommunication again, there were several heads that we considered not to be directly related to cybersecurity, but they did significantly contribute towards it. These include allocations for

  1. Defence Spectrum,
  2. Capital Outlay on Telecommunication and Electronic Industries,
  3. Capital Outlay on Other Communication Services, and
  4. Universal Service Obligation Fund (USOF)

Under the Ministry of Home Affairs, the departments involved with defence, intelligence and law enforcement are also relevant to cybersecurity. We have therefore included the allocations for:

  1. Intelligence Bureau,
  2. NATGRID,
  3. Delhi Police, and
  4. Capital Outlay on Police.

(III) Activity Wise Allocation: For further analysis, we have categorized the expenditures mentioned in Department Wise Allocation into five categories, each of which has been identified as a constituent element of the three Pillars of Strategy, namely:

  1. Human Resource Development Component (Strengthen)
  2. Technical Research & Development Component, Capacity Building (Strengthen/Synergize)
  3. International Cooperation and Investment Promotion Component (Secure/Synergise)
  4. Standardisation, Quality Testing and Certification Component (Strengthen)
  5. Active Cyber Incident Response/ Defence Operations and Security Component (Secure/Strengthen)      

The totals for these categories are calculated to identify whether any trends or patterns emerge in expenditure by the ministries. Apart from the ministries covered in classifications (I) and (II), we have also included the budgets of two other heads/departments: (i) the allocation towards corporate data management under the authority of the Ministry of Corporate Affairs, which has been included in category (5) indicated above, and (ii) the allocation towards technical and economic cooperation with other countries for the Department of Economic Affairs under the Ministry of Finance, which has been included in category (3) indicated above.

(IV) Ministries’ Share over Financial Years: The total values tabulated under the department-wise and supporting-department-wise allocations are then used to calculate the share of the budget allocated to cybersecurity and related activities with respect to the total budget allocation of each ministry. The ministries taken into account, which contribute significantly to cybersecurity and related activities, are:

  1. Department of Telecommunication (under the Ministry of Communications),
  2. Ministry of Defence,
  3. Ministry of External Affairs,
  4. Ministry of Electronics and Information Technology,
  5. Ministry of Home Affairs, and
  6. Department of Science and Technology (under the Ministry of Science and Technology).

Ministry-wise Allocations and Expenditure on Cybersecurity and Related Activities FY 2013-14 to FY 2019-20

Figure 9 depicts actual expenditure (from FY 2013-14 to FY 2017-18), the Revised Expenditure (RE) for FY 2018-19 and the Budgeted Expenditure (BE) for FY 2019-20. With the exception of FY 2016-17, we can see a clear trend of increasing allocations for expenditure towards cybersecurity-related activities, especially for the DoT. It is relevant to point out that this representation also includes the expenditure on Departments playing a supporting role in cybersecurity activities, such as the IDS/Joint Staff and R&D under the Ministry of Defence (MoD), as well as the MEA’s expenditure on international technical cooperation. As the expenditure incurred on cybersecurity-related activities alone cannot be inferred from these budget heads, they have been treated as Departments playing a supporting role in cybersecurity efforts and included in the overall expenditure.

Figure 9: Ministry-wise Total Expenditure on Cybersecurity and Related Activities
FY 2013-14 to FY 2019-20

Figure 10 is a narrower subset of the expenses indicated in Figure 9. It represents the allocations to Departments in Ministries that have been entrusted with core activities that contribute towards cybersecurity operations, R&D, e-Governance and internet governance at large. These include, to name a few, the promotion of electronics and IT hardware manufacturing and other initiatives such as Digital India, C-DAC and the NCCC under MeitY; TRAI, C-DoT and the 5G test bed under the authority of the DoT; and the MHA’s expenses towards modernization of police forces, forensics, and initiatives such as the Indian Cyber Crime Coordination Centre.

Figure 10 reveals an immediate upsurge in such allocations in the period during and immediately after the formulation of the National Cyber Security Policy 2013, after which the allocations begin to dwindle in FY 2014-15. We can also note that, with the exception of FY 2015-16, actual expenditure is consistently lower than the Budgeted Expenditure allocated to all these Ministries for cybersecurity-related activities.

Figure 10: Ministry-wise Total Expenditure on Cybersecurity and Related Activities
FY 2013-14 to FY 2019-20

It is interesting to note that if we convert the absolute figures represented in Figure 10 into percentages and represent the same data set as such, a remarkably consistent pattern of burden-sharing emerges between these three Ministries (MHA, MeitY and DoT under the Ministry of Communications).

Figure 11 depicts the same allocations indicated as absolute figures in Figure 10 as percentages of the total expenditure on core cybersecurity activities. It is clear that the MHA consistently bears the bulk of expenses on cybersecurity-related activities, with a clear emphasis on cyber crimes. The remaining half appears to be divided between MeitY and DoT more or less equally. The FY 2015-16 allocations and the actual expenditure in FY 2014-15 are the only exceptions to this equal distribution.

Figure 11: Ministry-wise Total Allocation for Cybersecurity and Related Activities
FY 2013-14 to FY 2019-20
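For readers who wish to reproduce this kind of conversion, the sketch below shows the simple arithmetic behind a percentage representation such as Figure 11: each Ministry’s expenditure on core cybersecurity activities is expressed as a share of the three Ministries’ combined total for that year. The figures are hypothetical placeholders, not the actual budget data analysed above.

```python
# Illustrative conversion of absolute expenditure (hypothetical figures, in crore)
# into each Ministry's percentage share of the combined total for a given year.
expenditure = {
    "FY 2017-18": {"MHA": 1500.0, "MeitY": 820.0, "DoT": 700.0},
    "FY 2018-19": {"MHA": 1700.0, "MeitY": 850.0, "DoT": 900.0},
}

for year, by_ministry in expenditure.items():
    total = sum(by_ministry.values())
    shares = {m: round(100 * amount / total, 1) for m, amount in by_ministry.items()}
    print(year, shares)  # e.g. FY 2017-18 {'MHA': 49.7, 'MeitY': 27.2, 'DoT': 23.2}
```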

Activity-wise Allocation and Expenditure on Cybersecurity

To further analyse how these budgetary allocations are being utilized, we have re-categorized the expenditures mentioned in the Department/Ministry wise allocation into five categories, each of which has been identified as a constituent element of the three Pillars of Strategy, namely: 

  1. Human Resource Development Component (Strengthen)
  2. Technical Research and Development Component, Capacity Building (Strengthen/Synergize)
  3. International Cooperation and Investment Promotion Component (Secure/Synergise)
  4. Standardization, Quality Testing and Certification Component (Strengthen)
  5. Active Cyber Incident Response/ Cyber Defence Operations and Security Component (Secure/Strengthen)

The total expenses incurred under these allocations are calculated to identify whether any trends or patterns emerge, and which activities are being prioritized according to the actual expenditure incurred by the relevant ministries. It is important to note that none of these categories includes any expenses earmarked for cyber defence operations under the MoD, as the budget heads do not permit drawing such an inference in their current format.

In this reclassification, we have included one budget head each for two other Departments that do not figure in the data represented in Figures 9, 10 or 11. Namely, these are (a) the allocation towards corporate data management under the authority of the Ministry of Corporate Affairs, which has been included in category (5) indicated above and (b) the allocation towards technical and economic cooperation with other countries for the Department of Economic Affairs under the Ministry of Finance, which has been included in category (3) indicated above.
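As a purely illustrative sketch of this re-classification, the snippet below maps a handful of budget heads to the five activity components and totals the allocations under each. The heads chosen and the amounts are hypothetical placeholders intended only to show the mechanics, not the actual figures underlying Figures 12 and 13.

```python
# Hypothetical activity-wise re-classification: map budget heads to activity
# components and total the allocations (amounts in crore, purely illustrative).
from collections import defaultdict

head_to_component = {
    "Manpower Development": "Human Resource Development",
    "CERT-IN": "Incident Response / Security Operations",
    "STQC": "Standardisation, Testing and Certification",
    "Research and Development in Electronics/IT": "Technical R&D / Capacity Building",
    "Technical and Economic Cooperation with Other Countries": "International Cooperation",
}

allocations = [  # (budget head, hypothetical allocation)
    ("Manpower Development", 120.0),
    ("CERT-IN", 95.0),
    ("STQC", 40.0),
    ("Research and Development in Electronics/IT", 310.0),
    ("Technical and Economic Cooperation with Other Countries", 150.0),
]

totals = defaultdict(float)
for head, amount in allocations:
    totals[head_to_component[head]] += amount

for component, total in sorted(totals.items(), key=lambda t: t[1], reverse=True):
    print(f"{component}: {total:.1f}")
```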

Figure 12 represents activity-wise trends in these Ministries’ actual expenditure. The figures for FY 2018-19 and FY 2019-20 represent the RE and BE for those years, respectively. It is not surprising that the expenditure on international cooperation and investment promotion towers over all other activities, as the allocated expenses would contribute to overall cooperation efforts at the international level and the promotion of investment broadly, and not only cybersecurity. Nonetheless, these are crucial contributions to enhancing India’s cybersecurity posture at home and abroad. For a clearer analysis, we remove the indicator for expenses towards international cooperation and investment promotion in Figure 13.

Figure 12: Activity-wise Expenditure for Cyber Security
FY 2013-14 to FY 2019-20
Figure 13: Activity-wise Expenditure for Cybersecurity FY 2013-14 to FY 2019-20 (excluding international cooperation and investment promotion)

From Figure 13, we can clearly infer which of the four activities at the core of the Government’s cybersecurity efforts are being prioritized in terms of allocation of budgetary resources. Clearly, emphasis on equipment testing and certification needs to be sharpened. There is an apparent tension between the funds that are made available for active cybersecurity operations and programmes on the one hand, and investments in human resource development on the other.

We submit that in both these areas, the Government must look to the private sector to create synergies and supplement the financial resources available for these particular activities. We also recommend that the expenditure earmarked for quality testing, development of technical standards and certification should be increased, and accorded greater priority than before.

Share of Ministries’ Budget Allocated to Cybersecurity and Related Activities

If we try to contextualize the utilization of funds made available for cybersecurity-related activities against the total allocations to the relevant Ministries, there is no identifiable trend in the expenditure patterns of the MEA, MeitY and DoT. Figure 14 represents the total expenditure on cybersecurity-related activities as a percentage of the total expenses allocated to the relevant Ministry. The priority accorded to cybersecurity-related activities, measured by the share of financial resources directed towards this area, appears to fluctuate over time. The contribution of the Department of Science and Technology towards R&D in cybersecurity has been consistently low, almost negligible; this has only changed with the establishment of the National Mission on Interdisciplinary Cyber Physical Systems in FY 2018-19. The MHA’s share of expenditure on cybersecurity activities appears relatively more consistent, and could potentially be leveraged to create synergies for the rationalization of expenditure across Ministries.

Figure 14: Share of Cybersecurity-related Activities in Total Budget Allocated to Ministries

Budget for NCSS 2020?

In anticipation of the National Cyber Security Strategy 2020, which is expected to be released soon, we will be closely monitoring the Union Budget for FY 2020-21 for fresh allocations to the relevant departments indicated in our analysis. We will also be on the lookout for fresh allocations that may be relevant to various components of the NCSS 2020. Watch this space for more on India’s Cybersecurity Budget 2020, coming soon!