Comments on the draft amendments to the IT Rules (Jan 2023)

The Ministry of Electronics and Information Technology (“MeitY”) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) on January 17, 2023. The draft amendments aim to regulate online gaming, but they also seek to have intermediaries “make reasonable efforts” to cause their users not to upload or share content identified as “fake” or “false” by the Press Information Bureau (“PIB”), any Union Government department, or any authorised agency (see the proposed amendment to Rule 3(1)(b)(v)). The draft amendments in their current form raise certain concerns that we believe merit additional scrutiny.

CCG submitted comments on the proposed amendment to Rule 3(1)(b)(v), highlighting its key feedback and concerns. The comments were authored by Archit Lohani and Vasudev Devadasan and reviewed by Sachin Dhawan and Jhalak M. Kakkar. Some of the key issues raised in our comments are summarised below.

  1. Misinformation and “fake” or “false” content include both unlawful and lawful expression

The proposed amendment does not define the term “misinformation” or provide any guidance on how determinations that content is “fake” or “false” are to be arrived at. Misinformation can take various forms, and experts have identified seven subtypes of misinformation: imposter content; fabricated content; false connection; false context; manipulated content; misleading content; and satire or parody. Different subtypes of misinformation can cause different types of harm (or no harm at all) and are treated differently under the law. Misinformation or false information thus includes both lawful and unlawful speech (e.g., satire is constitutionally protected speech).

Within the broad ambit of misinformation, the draft amendment does not provide sufficient guidance to the PIB and government departments on what sort of expression is permissible and what should be restricted. The draft amendment effectively provides them with unfettered discretion to restrict both unlawful and lawful speech. When seeking to regulate misinformation, experts, platforms, and other countries have drawn up detailed definitions that take into consideration factors such as intention, form of sharing, virality, context, impact, public interest value, and public participation value. These definitions recognise the potential multiplicity of context, content, and propagation techniques. In the absence of a clear definition of misinformation specifying what types of content may be restricted, the draft amendment will restrict both unlawful speech and constitutionally protected speech, and will thus constitute an overbroad restriction on free speech.

  2. Restricting information solely on the ground that it is “false” is constitutionally impermissible

Article 19(2) of the Indian Constitution allows the government to place reasonable restrictions on free speech in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation, or incitement to an offence. The Supreme Court has ruled that these grounds are exhaustive and speech cannot be restricted for reasons beyond Article 19(2), including where the government seeks to block content online. Crucially, Article 19(2) does not permit the State to restrict speech on the ground that it is false. If the government were to restrict “false information that may imminently cause violence”, such a restriction would be permissible as it would relate to the ground of “public order” in Article 19(2). However, if enacted, the draft amendment would restrict online speech solely on the ground that it is declared “false” or “fake” by the Union Government. This amounts to a State restriction on speech for reasons beyond those outlined in Article 19(2), and would thus be unconstitutional. Restrictions on free speech must have a direct connection to the grounds outlined in Article 19(2) and must be a necessary and proportionate restriction on citizens’ rights.

  3. The amendment does not adhere to the procedures set out in Section 69A of the IT Act

The Supreme Court upheld Section 69A of the IT Act in Shreya Singhal v Union of India inter alia because it permitted the government to block online content only on grounds consistent with Article 19(2) and provided important procedural safeguards, including a notice, a hearing, and a written order of blocking that can be challenged in court. The constitutionality of the government’s power to block online content is therefore contingent on the substantive and procedural safeguards provided by Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. The proposed amendment to the Intermediary Guidelines would permit the Union Government to restrict online speech in a manner that does not adhere to these safeguards. It would permit the blocking of content on grounds beyond those specified in Article 19(2), based on a unilateral determination by the Union Government, without a specific procedure for notice, hearing, or a written order.

  4. Alternative methods to counter the spread of misinformation

Any response to misinformation on social media platforms should be based on empirical evidence of the prevalence and harms of misinformation on social media. Thus, as a first step, social media companies should be required to provide greater transparency and facilitate researcher access to data. There are alternative methods to regulate the spread of misinformation that may be more effective and preserve free expression, such as labelling or flagging misinformation. We note that there does not yet exist widespread legal and industry consensus on standards for independent fact-checking, but organisations such as the ‘International Fact-Checking Network’ (IFCN) have laid down certain principles that independent fact-checking organisations should comply with. Having platforms label content pursuant to IFCN fact-checks, and even notify users when content they have interacted with has subsequently been flagged by an IFCN fact-checker, would provide users with valuable informational context without requiring content removal.

Guest Post: The Case Against Requiring Social Media Companies to Proactively Monitor for ‘Anti-Judiciary Content’

This post is authored by Dhruv Bhatnagar

Through an order dated July 19, 2022 (“Order”), Justice G.R. Swaminathan of the Madras High Court initiated proceedings for criminal contempt against YouTuber ‘Savukku’ Shankar. The genesis of this case is a tweet in which Shankar questioned who Justice Swaminathan met before delivering a verdict quashing criminal proceedings against another content creator. Shankar’s tweet on Justice Swaminathan has been described in the Order as ‘an innuendo intended to undermine the judge’s integrity’.

In the Order, Justice Swaminathan has observed that Chief Compliance Officers (“CCOs”) of social media companies (“SMCs”) are obligated to ensure that “content scandalising judges and judiciary” is not posted on their platforms “and if posted, [is] taken down”. To contain the proliferation of ‘anti-judiciary content’ on social media, Facebook, Twitter, and YouTube have been added as parties to this case. Their CCOs have been directed to document details of complaints received against Shankar and explain whether they have considered taking proactive steps to uphold the dignity of the judiciary.

Given that users access online speech through SMCs, compelling SMCs to exercise censorial power on behalf of State authorities is not a novel development. However, suo moto action to regulate ‘anti-judiciary content’ in India may create more problems than it would solve. After briefly discussing inconsistencies in India’s criminal contempt jurisprudence, this piece highlights the legal issues with standing judicial orders directing SMCs to proactively monitor for ‘anti-judiciary content’ on their platforms. It also catalogues the practical difficulties such orders would pose for SMCs and argues against the imposition of onerous proactive moderation obligations upon them to prevent the curtailment of users’ freedom of speech.

Criminal contempt in India: Contours and Splintered Jurisprudence

The Contempt of Courts Act, 1971 (“1971 Act”) codifies contempt both as a civil and criminal offence in India. Civil contempt refers to wilful disobedience of judicial pronouncements, whereas criminal contempt is defined as act(s) that either scandalise or lower the authority of the judiciary, interfere with the due course of judicial proceedings, or obstruct the administration of justice. Both types of contempt are punishable with a fine of up to Rs. 2,000/-, imprisonment of up to six months, or both. The Supreme Court and High Courts, as courts of record, are both constitutionally (under Articles 129 and 215) and statutorily (under Section 15 of the 1971 Act) empowered to punish individuals for contempt of their own rulings.

Given that “scandalis[ing]” or “tend[ing] to scandalise” a court is a broad concept, judicial interpretation and principles constitute a crucial source for understanding the remit of this offence. However, there is little consistency on this front owing to a divergence in judicial decisions over the years, with some courts construing the offence in narrow terms and others broadly.

In 1978, Justice V.R. Krishna Iyer enunciated, inter alia, the following guidelines for exercising criminal contempt jurisdiction in S. Mulgaokar (analysed here):

  • Courts should exercise a “wise economy of use” of their contempt power and should not be prompted by “easy irritability” (¶27).
  • Courts should strike a balance between the constitutional values of free criticism and the need for a fearless judicial process while deciding contempt cases. The benefit of doubt must always be given since even fierce or exaggerated criticism is not a crime (¶28).
  • Contempt is meant to prevent obstruction of justice, not offer protection to libelled judges (¶29).
  • Judges should not be hypersensitive to criticism. Instead, they should endeavour to deflate even vulgar denunciation through “condescending indifference…” (¶32).

Later, in P.N. Duda (analysed here), the Supreme Court restricted the scope of criminal contempt only to actions having a proximate connection to the obstruction of justice. The Court found that a minister’s speech assailing its judges for being prejudiced against the poor, though opinionated, was not contemptuous since it did not impair the administration of justice.

However, subsequent judgments have not always adopted this tolerant stance. For instance, in D.C. Saxena (analysed here), the Supreme Court found that the essence of this offence was lowering the dignity of judges, and even mere imputations of partiality were contemptuous. Later, in Arundhati Roy (analysed here), the Supreme Court held that opinions capable of diminishing public confidence in the judiciary also attract contempt. Here, the Court noted that the respondent had caused public injury by creating a negative impression in the minds of the people about judicial integrity. This line of reasoning deviates from Justice Krishna Iyer’s guidelines in Mulgaokar, which had advised against using contempt merely to defend the maligned reputation of judges. Not only does this rationale allow for easier invocation of the offence of contempt, but it is also premised on a paternalistic assumption that India’s impressionable citizenry may be swayed by malicious and irrelevant vilification of the judiciary.

Given the above disparity in judicial opinions, Shankar’s guilt ultimately depends on the standards applied to determine the legality of his tweet. As per the Mulgaokar principles, Shankar’s tweet may not be contemptuous since it does not present an imminent danger of interference with the administration of justice. However, if assessed according to the Saxena or Roy standard, the tweet could be considered contemptuous simply because it imputes ulterior motives to Justice Swaminathan’s decision-making.

It is submitted that the Mulgaokar principles align more closely with the constitutional requirement that restrictions on speech be ‘reasonable’, as the principles advocate restricting only speech that constitutes a proximate threat to a permissible state aim (contempt of court) set out in Article 19(2). For this reason, as a general practice, it may be advisable for judges to consistently apply and endorse these principles while deciding criminal contempt cases.

Difficulties in proactive regulation of ‘anti-judiciary content’

Justice Swaminathan’s observation in the Order that SMCs have a ‘duty to ensure content scandalising judges is not posted, and if posted is taken down’ suggests that he expects such content to be proactively identified and removed by SMCs from their platforms. However, practically, standing judicial orders imposing such broad obligations upon SMCs would not only exceed their obligations under extant Indian law but may also lead to legal speech being taken down. These concerns are elaborated below:

Incompatibility with legal obligations:

Although the Information Technology Act, 2000 does not specifically require SMCs to proactively monitor content, an obligation of this nature has been introduced through delegated legislation in Rule 4(4) of the 2021 IT Rules. This rule requires SMCs qualifying as ‘significant social media intermediaries’ (“SSMIs”) (explained here) to, inter-alia, “endeavour to deploy” technological measures to proactively identify content depicting rape, child sexual abuse or identical content previously disabled pursuant to governmental or judicial orders. However, ‘anti-judiciary content’ is not a content category which SSMIs need to endeavour to proactively identify. Thus, any judicial directions imposing this mandate upon them would exceed the scope of their legal obligations.

Further, in Shreya Singhal (analysed here), the Supreme Court expressly required a court order determining the illegality of content to be passed before SMCs were required to remove the content. However, if proactive monitoring obligations are imposed, SMCs would have to identify and remove content on their own, without a judicial determination of legality. Such obligations would also undermine the Court’s ruling in Visakha Industries (analysed here), which advised against proactive monitoring to prevent intermediaries from becoming “super censors” and “denud[ing] the internet of it[s] unique feature [as] a democratic medium for all to publish, access and read any and all kinds of information” (¶53).

Unrealistic expectations and undesirable content moderation outcomes:

Judicial orders directing SMCs to proactively disable ‘anti-judiciary content’ essentially require them to objectively and consistently enforce standards on criminal contempt on their platforms. This may be problematic considering that the doctrine of contempt emerging from constitutional courts, where judges possess a significantly higher degree of specialised knowledge on what constitutes contempt of court, is itself ambiguous at best. Put simply, when even courts have regularly disagreed on the contours of contemptuous speech, it is unrealistic to expect SMCs to make more coherent decisions.

A major risk with delegating the burden of complex decision-making about free speech to private intermediaries is excessive content removal. Across jurisdictions, platform providers have erred on the side of caution and over-removed content when faced with potential legal risks. This is evidenced through empirical studies on the notice-takedown regime for copyright infringing content in the US and due diligence obligations for intermediaries in India.

Given their documented propensity for over-compliance, directions by Indian courts requiring SMCs to proactively take down ‘anti-judiciary content’ may incentivise the excessive removal of even permissible critique of judicial actions. This would ultimately restrict social media users’ right to free expression.

Way forward

Considering the issues outlined above, it may be advisable for the Madras High Court to refrain from imposing proactive monitoring obligations upon SMCs. Consistent with the Mulgaokar principles, judges should issue blocking directions for online contemptuous speech, in exercise of their criminal contempt jurisdiction, only against content which poses a credible threat to the obstruction of justice and not against content which they perceive to lower their reputation. Such directions should also identify specific pieces of content and not impose broad obligations on SMCs that may ultimately restrict free expression.

CCG’s Comments to the Ministry of Electronics & Information Technology on the proposed amendments to the Intermediary Guidelines 2021

On 6 June 2022, the Ministry of Electronics and Information Technology (“MeitY”) released proposed amendments to Part I and Part II of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”). CCG submitted its comments on the proposed amendments to the 2021 IT Rules, highlighting its key feedback and concerns. The comments were authored by Vasudev Devadasan and Bilal Mohamed and reviewed and edited by Jhalak M. Kakkar and Shashank Mohan.

The 2021 IT Rules were released in February last year, and Part I and II of the Guidelines set out the conditions intermediaries must satisfy to avail of legal immunity for hosting unlawful content (or ‘safe harbour’) under Section 79 of the Information Technology Act, 2000 (“IT Act”). The 2021 IT Rules have been challenged in several High Courts across the country, and the Supreme Court is currently hearing a transfer petition on whether these actions should be clubbed and heard collectively by the apex court. In the meantime, the MeitY has released the proposed amendments to the 2021 IT Rules which seek to make incremental but significant changes to the Rules.

CCG’s comments to the MeitY can be summarised as follows:

Dilution of safe harbour in contravention of Section 79(1) of the IT Act

The core intention behind providing intermediaries with safe harbour under Section 79(1) of the IT Act is to ensure that intermediaries do not restrict the free flow of information online due to the risk of being held liable for third-party content uploaded by users. The proposed amendments to Rules 3(1)(a) and 3(1)(b) of the 2021 IT Rules potentially impose an obligation on intermediaries to “cause” and “ensure” that their users do not upload unlawful content. These amendments may require intermediaries to make complex determinations on the legality of speech and cause online intermediaries to remove content that carries even the slightest risk of liability. This may result in the restriction of online speech and the corporate surveillance of Indian internet users by intermediaries. If, on the other hand, the proposed amendments are interpreted as not requiring intermediaries to actively prevent users from uploading unlawful content, we note that they may be functionally redundant, and we suggest they be dropped to avoid legal uncertainty.

Concerns with Grievance Appellate Committee

The proposed amendments envisage one or more Grievance Appellate Committees (“GAC”) that sit in appeal of intermediary determinations with respect to content. Users may appeal to a GAC against the decision of an intermediary not to remove content despite a user complaint, or alternatively, request a GAC to reinstate content that an intermediary has voluntarily removed or to lift account restrictions that an intermediary has imposed. The creation of GAC(s) may exceed the Government’s rulemaking powers under the IT Act. Further, the GAC(s) lack the necessary safeguards in their composition and operation to ensure the independence required by law of such an adjudicatory body. Such independence and impartiality may be essential, as the Union Government is responsible for appointing individuals to the GAC(s) but the Union Government or its functionaries or instrumentalities may also be a party before the GAC(s). Further, we note that the originator, the legality of whose content is at dispute before a GAC, has not expressly been granted a right to a hearing before the GAC. Finally, we note that the GAC(s) may lack the capacity to deal with the high volume of appeals against content and account restrictions. This may lead to situations where, in practice, only a small number of internet users are afforded redress by the GAC(s), leading to inequitable outcomes and discrimination amongst users.

Concerns with grievance redressal timeline

Under the proposed amendment to Rule 3(2), intermediaries must acknowledge a complaint by an internet user for the removal of content within 24 hours, and ‘act and redress’ this complaint within 72 hours. CCG’s comments note that the 72-hour timeline to address complaints proposed by the amendment to Rule 3(2) may cause online intermediaries to over-comply with content removal requests, leading to the possible take-down of legally protected speech at the behest of frivolous user complaints. Empirical studies conducted on Indian intermediaries have demonstrated that smaller intermediaries lack the capacity and resources to make complex legal determinations of whether the content complained against violates the standards set out in Rule 3(1)(b)(i)-(x), while larger intermediaries are unable to address the high volume of complaints within short timelines – leading to the mechanical takedown of content. We suggest that any requirement that online intermediaries address user complaints within short timelines could differentiate between types of content that are ex-facie (on the face of it) illegal and cause severe harm (e.g., child sexual abuse material or gratuitous violence), and other types of content where determinations of legality may require legal or judicial expertise, such as copyright or defamation.

Need for specificity in defining due diligence obligations

Rule 3(1)(m) of the proposed amendments requires intermediaries to ensure a “reasonable expectation of due diligence, privacy and transparency” to avail of safe harbour, while Rule 3(1)(n) requires intermediaries to “respect the rights accorded to the citizens under the Constitution of India.” These rules do not impose clearly ascertainable legal obligations, which may increase compliance burdens, hamper enforcement, and result in inconsistent outcomes. In the absence of specific data protection legislation, the obligation to ensure a “reasonable expectation of due diligence, privacy and transparency” is unclear. Fundamental rights obligations were drafted and developed in the context of citizen-State relations and may not be suitable for, or aptly transposed to, the relationship between intermediaries and users. Further, the content of ‘respecting fundamental rights’ under the Constitution is itself contested and open to reasonable disagreement between various State and constitutional functionaries. Requiring intermediaries to uphold such obligations will likely lead to inconsistent outcomes based on varied interpretations.

Guest Post: Evaluating MIB’s emergency blocking power under Rule 16 of the 2021 IT Rules (Part II)

This post is authored by Dhruv Bhatnagar

Part I of this two-part series examined the contours of Rule 16 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”) and the Bombay High Court’s rationale for refusing to stay the rule in the Leaflet case. This second part examines the legality and constitutionality of Rule 16. It argues that the rule’s constitutionality may be contested because it deprives impacted content publishers of a hearing when their content is restricted. It also argues that the MIB should provide information on blocking orders under Rule 16 to allow them to be challenged, both by users whose access to information is curtailed, and by publishers whose right to free expression is restricted.

Rule 16’s legality

At its core, Rule 16 is a legal provision granting discretionary authority to the government to take down content. Consistently, the Supreme Court (“SC”) has maintained that to be compliant with Article 14, discretionary authority must be backed by adequate safeguards.[1] Admittedly, Rule 16 is not entirely devoid of safeguards since it envisages an assessment of the credibility of content blocking recommendations at multiple levels (refer Part I for context). But this framework overlooks a core principle of natural justice – audi alteram partem (hear the other side) – by depriving the impacted publishers of a hearing.

In Tulsiram Patel, the SC recognised principles of natural justice as part of the guarantee under Article 14 and ruled that any law or state action abrogating these principles is susceptible to a constitutionality challenge. But the SC also found that natural justice principles are not absolute and can be curtailed under exceptional circumstances. Particularly, audi alteram partem, can be excluded in situations where the “promptitude or the urgency of taking action so demands”.

Arguably, the suspension of pre-decisional hearings under Rule 16 is justifiable considering the rule’s very purpose is to empower the Government to act with alacrity against content capable of causing immediate real-world harm. However, this rationale does not preclude the provision of a post-decisional hearing under the framework of the 2021 IT Rules. This is because, as posited by the SC in Maneka Gandhi (analysed here and here), the “audi alteram partem rule is sufficiently flexible” to address “the exigencies of myriad kinds of situations…”. Thus, a post-decisional hearing for impacted stakeholders, after the immediacy necessitating the issuance of interim blocking directions had subsided, could have been reasonably accommodated within Rule 16. Crucially, this would create a forum for the State to justify the necessity and proportionality of its speech restriction to the individuals impacted (strengthening legitimacy) and to the public at large (strengthening the rule of law and public reasoning). Finally, in the case of ex-facie illegal content, originators are unlikely to avail of post-facto hearings, mitigating concerns of a burdensome procedure.

Rule 16’s exercise by MIB

Opacity

MIB has exercised its power under Rule 16 of the 2021 IT Rules on five occasions. Collectively, it has ordered the blocking of approximately 93 YouTube channels, 6 websites, 4 Twitter accounts, and 2 Facebook accounts. Each time, MIB has announced the blocking only through press releases issued after the orders were passed, and has not disclosed the actual blocking orders.

MIB’s reluctance to publish its blocking orders renders the manner in which it exercises its power under Rule 16 opaque. Although press statements inform the public that content has been blocked, it is the blocking orders themselves that are required (under Rules 16(2) and 16(4)) to record the reasons for which the content has been blocked. As discussed above, withholding these orders limits the right to free expression of the originators of the content and denies them the ability to be heard.

Additionally, content recipients, whose right to view content and access information is curtailed through such orders, are not being made aware of the existence of these orders by the Ministry directly. Pertinently, the 2021 IT Rules appear to recognise the importance of informing users about the reasons for blocking digital content. This is evidenced by Rule 4(4), which requires ‘significant social media intermediaries’ to display a notice to users attempting to access proactively disabled content. However, in the absence of similar transparency obligations upon MIB under the 2021 IT Rules, content recipients aggrieved by the Ministry’s blocking orders may be compelled to rely on the cumbersome mechanism under the Right to Information Act, 2005 to seek the disclosure of these orders to challenge them.   

Although the 2021 IT Rules do not specifically mandate the publication of blocking orders by MIB, this obligation can be derived from the Anuradha Bhasin verdict. Here, in the context of the Telecom Suspension Rules, the SC held that any order affecting the “lives, liberty and property of people” must be published by the government, “regardless of whether the parent statute or rule prescribes the same”. The SC also held that the State should ensure the availability of governmental orders curtailing fundamental rights unless it claims specific privilege or public interest for refusing disclosure. Even then, courts will finally decide whether the State’s claims override the aggrieved litigants’ interests.

Considering the SC’s clear reasoning, MIB ought to make its blocking orders readily available in the interest of transparency, especially since a confidentiality provision restricting disclosure, akin to Rule 16 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“2009 Blocking Rules”), is absent in the 2021 IT Rules.   

Overuse

Another concerning trend is MIB’s invocation of its emergency content-blocking power as the norm rather than the exception it was meant to be. For context, the 2021 IT Rules provide a non-emergency blocking process under Rules 14 and 15, whereunder impacted publishers are provided a pre-decisional hearing before an Inter-Departmental Committee required to be constituted under Rule 13(1)(b). However, thus far, MIB has exclusively relied on its emergency power to block ostensibly problematic digital content, including fake news.

While the Bombay High Court in the Leaflet case declined to expressly stay Rule 14 (noting that the Inter-Departmental Committee was yet to be set up) (¶19), the High Court’s stay on Rule 9(3) creates a measure of ambiguity as to whether Rules 14 and 15 are currently in effect. This is because Rule 9(3) states that there shall be a government oversight mechanism to “ensure adherence to the Code of Ethics”. A key part of this mechanism is the Inter-Departmental Committee, whose role is to decide “violation[s] or contravention[s] of the Code of Ethics” (Rule 14(2)). The High Court even notes that it is “incomprehensible” how content may be taken down under Rule 14(5) for violating the Code of Ethics (¶27). Thus, despite the Bombay High Court’s refusal to stay Rule 14, it is arguable that the High Court’s stay on the operation of Rule 9(3), which prevents the ‘Code of Ethics’ from being applied against online news and curated content publishers, may logically extend to Rules 14(2) and 15. However, even if the Union were to proceed on a plain reading of the Leaflet order and infer that the Bombay High Court did not stay Rules 14 and 15, it is unclear whether the MIB has constituted the Inter-Departmental Committee to facilitate non-emergency blocking.

MeitY has also liberally invoked its emergency blocking power under Rule 9 of the 2009 Blocking Rules to disable access to content. Illustratively, in early 2021 Twitter received multiple blocking orders from MeitY, at least two of which were emergency orders, directing it to disable over 250 URLs and a thousand accounts for circulating content relating to farmers’ agitation against contentious farm laws. Commentators have also pointed out that there are almost no recorded instances of MeitY providing pre-decisional hearings to publishers under the 2009 Blocking Rules, indicating that in practice this crucial safeguard has been rendered illusory.  

Conclusion

Evidently, there is a need for the MIB to be more transparent when invoking its emergency content-blocking powers. A significant step forward in this direction would be ensuring that at least final blocking orders, which ratify emergency blocking directions, are made readily available, or at least provided to publishers/originators. Similarly, notices to any users trying to access blocked content would also enhance transparency. Crucially, these measures would reduce information asymmetry regarding the existence of blocking orders and allow a larger section of stakeholders, including the oft-neglected content recipients, the opportunity to challenge such orders before constitutional courts.

Additionally, the absence of hearings for impacted stakeholders at any stage of the emergency blocking process under Rule 16 of the 2021 IT Rules limits their right to be heard and to defend the legality of ‘at-issue’ content. Whilst the justification of urgency may be sufficient to deny a pre-decisional hearing, the procedural safeguard of a post-decisional hearing should be incorporated by MIB.

The aforesaid legal infirmities plague Rule 9 of the 2009 Blocking Rules as well, given its similarity with Rule 16 of the 2021 IT Rules. The Tanul Thakur case presents an ideal opportunity for the Delhi High Court to examine and address the limitations of these rules. Civil society organisations have for years advocated (here and here) for incorporation of a post-decisional hearing within the emergency blocking framework under the 2009 Blocking Rules too. Its adoption and diligent implementation could go a long way in upholding natural justice and mitigating the risk of arbitrary content blocking.


[1] State of Punjab v. Khan Chand, (1974) 1 SCC 549; Virendra v. The State of Punjab & Ors., AIR 1957 SC 896; State of West Bengal v. Anwar Ali, AIR 1952 SC 75.

Guest Post: Evaluating the legality of MIB’s emergency blocking power under the 2021 IT Rules (Part I)

This post is authored by Dhruv Bhatnagar

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”) were challenged before several High Courts (refer here and here) almost immediately after their promulgation. In one such challenge, initiated by the publishers of the online news portal ‘The Leaflet’, the Bombay High Court, by an order dated August 14, 2021, imposed an interim stay on the operation of Rules 9(1) and (3) of the 2021 IT Rules. Chiefly, this was done because these provisions subject online news and curated content publishers to a vaguely worded ‘code of ethics’, adherence to which would have had a ‘chilling effect’ on their freedom of speech. However, the Bombay High Court refused to stay Rule 16 of these rules, which empowers the Ministry of Information and Broadcasting (“MIB”) to direct blocking of digital content during an “emergency” where “no delay is acceptable”.

Part I of this two-part series examines the contours of Rule 16 and argues that the Bombay High Court overlooked the procedural inadequacy of this rule when refusing to stay the provision in the Leaflet case. Part II assesses the legality and constitutionality of the rule.

Overview of Rule 16

Part III of the 2021 IT Rules authorises the MIB to direct the blocking of digital content in the case of an ‘emergency’ where “no delay is acceptable”.

The MIB has correctly noted that Rule 16 is modelled after Rule 9 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“2009 Blocking Rules”) (analysed here), and confers upon the MIB similar emergency blocking powers which the Ministry of Electronics and Information Technology (“MeitY”) has possessed since 2009. Both provisions confer discretion upon authorised officers to determine what constitutes an emergency but fail to provide a hearing to impacted publishers or intermediaries at any stage.

Judicial findings on Rule 16

The Bombay High Court’s order in the Leaflet case is significant since it is the first time a constitutional court has recorded its preliminary findings on the rule’s legitimacy. Here, the Bombay High Court refused to stay Rule 16 primarily for two reasons. First, the High Court held that Rule 16 of the 2021 IT Rules is substantially similar to Rule 9 of the 2009 Blocking Rules, which is still in force. Second, the grounds upon which Rule 16 permits content blocking are coextensive with the grounds on which speech may be ‘reasonably restricted’ under Article 19(2) of the Indian Constitution. Respectfully, the plausibility of this reasoning is contestable:

Equivalence with the 2009 Blocking Rules: Section 69A of the IT Act and the 2009 Blocking Rules were previously challenged in Shreya Singhal, where both were upheld by the Supreme Court (“SC”). Establishing an equivalence between Rule 16 of the 2021 IT Rules and Rule 9 of the 2009 Blocking Rules to understand the constitutionality of the former would have been useful only if Shreya Singhal contained a meaningful analysis of Rule 9. However, the SC did not examine this rule individually; rather, it broadly upheld the constitutionality of the 2009 Blocking Rules as a whole owing to the presence of certain safeguards, including that: (a) the non-emergency process for content blocking under the 2009 Blocking Rules includes a pre-decisional hearing for identified intermediaries/originators before content is blocked; and (b) the 2009 Blocking Rules mandate the recording of reasons in blocking orders so that they may be challenged under Article 226 of the Constitution.

However, the SC did not consider that the emergency blocking framework under Rule 9 of the 2009 Blocking Rules not only allows MeitY to bypass the essential safeguard of a pre-decisional hearing to impacted stakeholders but also fails to provide them with either a written order or a post-decisional hearing. It also did not address that Rule 16 of the 2009 Blocking Rules, which mandates confidentiality of blocking requests and subsequent actions, empowers MeitY to refuse disclosure of blocking orders to impacted stakeholders thus depriving them of the opportunity to challenge such orders.

In fact, Rule 16 was cited by MeitY as a basis for denying film critic Mr. Tanul Thakur access to the blocking order by which his satirical website ‘Dowry Calculator’ was banned. Mr. Thakur challenged Rule 16 of the 2009 Blocking Rules and highlighted the secrecy with which MeitY exercises its blocking powers in a writ petition that is being heard by the Delhi High Court. Recently, through an interim order dated 11 May 2022, the Delhi High Court directed MeitY to provide Mr. Thakur with a copy of the order blocking his website and to offer him a post-decisional hearing. This is a significant development since it is the first recorded instance of such a hearing being provided to an originator under the 2009 Blocking Rules.

Thus, the Bombay High Court’s attempt in the Leaflet case to claim equivalence with Rule 9 of the 2009 Blocking Rules as a basis to defend the constitutionality of Rule 16 of the 2021 IT Rules was inapposite since Rule 9 itself was not substantively reviewed in Shreya Singhal, and its operation has since been challenged on constitutional grounds.

Procedural safeguards: Merely because Rule 16 of the 2021 IT Rules permits content blocking only under the circumstances enumerated under Article 19(2) does not automatically render it procedurally reasonable. In People’s Union of Civil Liberties (“PUCL”), the SC examined the procedural propriety of Section 5(2) of the Telegraph Act, 1885, which permits phone-tapping. Even though this provision restricts fundamental rights only on constitutionally permissible grounds, the SC found that substantive law had to be backed by adequate procedural safeguards to rule out arbitrariness. Although the SC declined to strike down Section 5(2) in PUCL, it framed interim guidelines to govern the provision’s exercise to compensate for the lack of adequate safeguards.

Since Rule 16 restricts the freedom of speech, its proportionality should be tested as part of any meaningful constitutionality analysis. To be proportionate, restrictions on fundamental rights must satisfy four prongs[1]: (a) legality – the requirement of a law pursuing a legitimate aim; (b) suitability – a rational nexus between the means adopted to restrict rights and the end of achieving this aim; (c) necessity – the proposed restrictions must be the ‘least restrictive measures’ for achieving the aim; and (d) balancing – a balance between the extent to which rights are restricted and the need to achieve the aim. Justice Kaul’s opinion in Puttaswamy (9JB) also highlights the need for procedural safeguards against the abuse of measures interfering with fundamental rights (para 70, Kaul J).

Arguably, by demonstrating the connection between Rule 16 and Article 19(2), the Bombay High Court has proven that Rule 16 potentially satisfies the ‘legality’ prong. However, even at an interim stage, before finally ascertaining Rule 16’s constitutionality by testing it against the other proportionality parameters identified above, the Bombay High Court should have considered whether the absence of procedural safeguards under this rule merited staying its operation.

For these reasons, the Bombay High Court could have ruled differently in deciding whether to stay the operation of Rule 16 in the Leaflet case. While these are important considerations at the interim stage, ultimately the larger question of constitutionality must be addressed. The second post in this series will critically examine the legality and constitutionality of Rule 16.


[1] Modern Dental College and Research Centre and Ors. v. State of Madhya Pradesh and Ors., (2016) 7 SCC 353; Justice K.S. Puttaswamy & Ors. v. Union of India (UOI) & Ors., (2019) 1 SCC 1; Anuradha Bhasin and Ors. v. Union of India (UOI) & Ors., (2020) 3 SCC 637.

Transparency reporting under the Intermediary Guidelines is a mess: Here’s how we can improve it

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) represent India’s first attempt at regulating large social media platforms, with the Guidelines creating distinct obligations for ‘Significant Social Media Intermediaries’ (“SSMIs”). While certain provisions of the Guidelines concerning SSMIs (like the traceability requirement) are currently under legal challenge, the Guidelines also introduced a less controversial requirement that SSMIs publish monthly transparency reports regarding their content moderation activities. While this reporting requirement is arguably a step in the right direction, scrutinising the actual documents published by SSMIs reveals a patchwork of inconsistent and incomplete information – suggesting that Indian regulators need to adopt a more comprehensive approach to platform transparency.

This post briefly sets out the reporting requirement under the Intermediary Guidelines before analysing the transparency reports released by SSMIs. It highlights how a focus on figures, coupled with the wide discretion granted to platforms to frame their reports, undermines the goal of meaningful transparency. The figures referred to when analysing SSMI reports pertain to the February–March 2022 reporting period, but the distinct methodologies used by each SSMI to arrive at these figures (more relevant for the present discussion) have remained broadly unchanged since reporting began in mid-2021. The post concludes by making suggestions on how the Ministry of Electronics and Information Technology (“MeitY”) can strengthen the reporting requirements under the Intermediary Guidelines.

Transparency reporting under the Intermediary Guidelines

Social media companies structure speech on their platforms through their content moderation policies and practices, which determine when content stays online and when content is taken down. Even if content is not illegal or taken down pursuant to a court or government order, platforms may still take it down for violating their terms of service (or Community Guidelines) (let us call this content ‘violative content’ for now i.e., content that violates terms of service). However, ineffective content moderation can result in violative and even harmful content remaining online or non-violative content mistakenly being taken down. Given the centrality of content moderation to online speech, the Intermediary Guidelines seek to bring some transparency to the content moderation practices of SSMIs by requiring them to publish monthly reports on their content moderation activities. Transparency reporting helps users and the government understand the decisions made by platforms with respect to online speech. Given the opacity with which social media platforms often operate, transparency reporting requirements can be an essential tool to hold platforms accountable for ineffective or discriminatory content moderation practices.  

Rule 4(1)(d) of the Intermediary Guidelines requires SSMIs to publish monthly transparency reports specifying: (i) the details of complaints received and actions taken in response; (ii) the number of “parts of information” proactively taken down using automated tools; and (iii) any other relevant information specified by the government. The Rule therefore covers both ‘reactive moderation’, where a platform responds to a user’s complaints against content, and ‘proactive moderation’, where the platform itself seeks out unwanted content even before a user reports it.

Transparency around reactive moderation helps us understand trends in user reporting and how responsive an SSMI is to user complaints, while disclosures on proactive moderation shed light on the scale and accuracy of an SSMI’s independent moderation activities. A key goal of both reporting datasets is to understand whether the platform is taking down as much harmful content as possible without accidentally also taking down non-violative content. Unfortunately, Rule 4(1)(d) merely requires SSMIs to report the number of links taken down during their content moderation (this is reiterated by the MeitY’s FAQs on the Intermediary Guidelines). The problems with an overly simplistic approach come to the fore upon an examination of the actual reports published by SSMIs.

Contents of SSMI reports – proactive moderation

Based on their latest monthly transparency reports, Twitter proactively suspended 39,588 accounts while Google used automated tools to remove 338,938 pieces of content. However, these figures only document the scale of proactive monitoring and do not provide any insight into the accuracy of the platforms’ moderation – that is, how accurately the moderation distinguishes between violative and non-violative content. The reporting also does not specify whether this content was taken down using solely automated tools, or some mix of automated tools and human review or oversight. Meta (reporting for Facebook and Instagram) reports the volume of content proactively taken down, but also provides a “Proactivity Rate”. The Proactivity Rate is defined as the percentage of content flagged proactively (before a user reported it) as a subset of all flagged content: Proactivity Rate = [proactively flagged content ÷ (proactively flagged content + user reported content)]. However, this metric is also of little use in understanding the accuracy of Meta’s automated tools. Take the following example:

Assume a platform has 100 pieces of content, of which 50 pieces violate the platform’s terms of service and 50 do not. The platform relies on both proactive monitoring through automated tools and user reporting to identify violative content. Now, if the automated tools detect 49 pieces of violative content, and a user reports 1, the platform states that: ‘49 pieces of content were taken down pursuant to proactive monitoring at a Proactivity Rate of 98%’. However, this reporting does not inform citizens or regulators: (i) whether the 49 pieces of content identified by the automated tools are in fact among the 50 pieces that violate the platform’s terms of service (or whether the tools mistakenly took down some legitimate, non-violative content); (ii) how many users saw but did not report the content that was eventually flagged by automated tools and taken down; and (iii) what level and extent of human oversight was exercised in removing content. A high proactivity rate merely indicates that automated tools flagged more content than users, which is to be expected. Simply put, numbers aren’t everything: they only disclose the scale of content moderation, not its quality.
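
To make this limitation concrete, here is a minimal Python sketch using the hypothetical figures from the example above; the two ‘precision’ scenarios are invented purely for illustration and are not drawn from any platform’s reporting. It shows that the same 98% Proactivity Rate is consistent with very different levels of accuracy:

```python
# Minimal illustrative sketch (hypothetical figures, not drawn from any
# actual transparency report): a high Proactivity Rate says nothing about
# how accurate proactive moderation is.

def proactivity_rate(proactively_flagged: int, user_reported: int) -> float:
    """Share of all flagged content that was flagged proactively."""
    return proactively_flagged / (proactively_flagged + user_reported)

def precision(correctly_flagged: int, total_flagged: int) -> float:
    """Share of proactively flagged content that genuinely violated the terms of service."""
    return correctly_flagged / total_flagged

# Example from the text: 49 items flagged proactively, 1 reported by a user.
print(f"Proactivity Rate: {proactivity_rate(49, 1):.0%}")   # 98%

# Same Proactivity Rate, very different accuracy (both scenarios hypothetical):
print(f"Scenario A precision: {precision(49, 49):.0%}")     # all 49 violative -> 100%
print(f"Scenario B precision: {precision(30, 49):.0%}")     # 19 legitimate posts removed -> 61%
```

In both scenarios the platform can truthfully report a 98% Proactivity Rate, even though in the second it has wrongly removed 19 legitimate posts; that is precisely the information the current reporting format does not capture.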

This criticism raises the question: how do you assess the quality of proactive moderation? The Santa Clara Principles represent high-level guidance on content moderation practices developed by international human rights organisations and academic experts to facilitate platform accountability with respect to users’ speech. The Principles require that platforms report: (i) when and how automated tools are used; (ii) the key criteria used by automated tools in making decisions; (iii) the confidence, accuracy, or success rate of automated tools, including in different languages; (iv) the extent of human oversight over automated tools; and (v) the outcomes of appeals against moderation decisions made by automated tools. This last requirement of reporting the outcome of appeals (how many users successfully got content reinstated after it was taken down by proactive monitoring) is a particularly useful metric as it provides an indicator of when the platforms themselves acknowledge that their proactive moderation was inaccurate. Draft legislation in Europe and the United States requires platforms to report how often proactive monitoring decisions are reversed. Mandating the reporting of even some of these elements under the Intermediary Guidelines would provide a clearer picture of the accuracy of proactive moderation.

Finally, it is relevant to note that Rule 4(4) of the Intermediary Guidelines requires that the automated tools used for the proactive monitoring of certain classes of content must be ‘reviewed for accuracy and fairness’. The desirability of such proactive monitoring aside, Rule 4(4) is not self-enforcing and does not specify who should undertake this review, how often it should be carried out, and to whom the results should be communicated.

Contents of SSMI reports – reactive moderation

Transparency reporting with respect to reactive moderation aims to understand trends in user reporting of content and a platform’s responses to user flagging of content. Rule 4(1)(d) requires platforms to disclose the “details of complaints received and actions taken thereon”. However, a perusal of SSMI reporting reveals how the broad discretion granted to SSMIs to frame their reports is undermining the usefulness of the reporting.  

Google’s transparency report has the most straightforward understanding of “complaints received”, with the platform disclosing the number of ‘complaints that relate to third-party content that is believed to violate local laws or personal rights’. In other words, where users raise a complaint against a piece of content, Google reports it (30,065 complaints in February 2022). Meta, on the other hand, only reports complaints from: (i) a specific contact form, a link for which is provided in its ‘Help Centre’; and (ii) complaints addressed to the physical post-box mail address published on the ‘Help Centre’. For February 2022, Facebook received a mere 478 complaints, of which only 43 pertained to content (inappropriate or sexual content), while 135 were from users whose accounts had been hacked, and 59 were from users who had lost access to a group or page. If 43 user reports a month against content on Facebook seems suspiciously low, it likely is – because the method of user reporting that involves the least friction for users (simply clicking on the post and reporting it directly) bypasses the specific contact form that Facebook uses to collate India complaints, and thus appears to be absent from Facebook’s transparency reporting. Most of Facebook’s 478 complaints for February have nothing to do with content on Facebook and offer little insight into how Facebook responds to user complaints against content or what types of content users report.

In contrast, Twitter’s transparency reporting expressly states that it does not include non-content related complaints (e.g., a user locked out of their account), instead limiting its transparency reporting to content-related complaints – 795 complaints for March 2022, with abuse or harassment (606), hateful conduct (97), and misinformation (33) being the top categories. However, like Facebook, Twitter also has both a ‘support form’ and allows users to report content directly by clicking on it, but it fails to specify the sources from which “complaints” are compiled for its India transparency reports. Twitter merely notes that ‘users can report grievances by the grievance mechanism by using the contact details of the Indian Grievance Officer’.

These apparent discrepancies in the number of complaints reported bear even greater scrutiny when the number of users of these platforms is factored in. Twitter (795 complaints/month) has an estimated 23 million users in India while Facebook (406 complaints/month) has an estimated 329 million users. It is reasonable to expect user complaints to scale with the number of users, but this is evidently not happening, suggesting that these platforms are using different sources and methodologies to determine what constitutes a “complaint” for the purposes of Rule 4(1)(d).
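
As a rough illustration of this mismatch, consider normalising the reported monthly complaints by the estimated user numbers quoted above (both sets of figures are approximate, and the calculation below is a back-of-the-envelope sketch rather than an authoritative statistic):

```python
# Back-of-the-envelope sketch: normalise reported monthly complaints by the
# estimated user base. Figures are the approximate estimates quoted in the
# text above, not authoritative statistics.

platforms = {
    # platform: (reported complaints per month, estimated users in millions)
    "Twitter": (795, 23),
    "Facebook": (406, 329),
}

for name, (complaints, users_millions) in platforms.items():
    rate = complaints / users_millions
    print(f"{name}: ~{rate:.1f} complaints per million users per month")

# Twitter:  ~34.6 complaints per million users per month
# Facebook: ~1.2 complaints per million users per month
```

If both platforms counted “complaints” in the same way, rates this far apart would be surprising; the gap more plausibly reflects differing collation methodologies than differing user behaviour.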

This is perhaps a useful point at which to discuss another SSMI, ShareChat. ShareChat is reported to have an estimated 160 million users, and for February 2022 the platform reported 56,81,213 user complaints (substantially more than Twitter and Facebook). These complaints are content related (e.g., hate speech, spam etc.), although with 30% of complaints merely classified as ‘Others’, there is some uncertainty as to what these complaints pertain to. ShareChat’s report states that it collates complaints from ‘reporting mechanism across the platform’. This would suggest that, unlike Facebook (and potentially Twitter), it compiles user complaint numbers from all the methods by which a user can complain against content, and not just a single form tucked away in its help centre documentation. While this may be a more holistic approach, ShareChat’s reporting suffers from other crucial deficiencies. ShareChat’s report makes no distinction between reactive and proactive moderation, merely giving a figure for content that has been taken down. This makes it hard to judge how ShareChat responded to these over 56,00,000 complaints.

Conclusion

Before concluding, it is relevant to note that no SSMI reporting discusses content that has been subjected to reduced visibility or algorithmically downranked. In the case of proactive moderation, Rule 4(1)(d) unfortunately limits itself to content that has been “removed”, although in the case of reactive moderation, reduced visibility would come within the ambit of ‘actions taken in response to complaints’ and should be reported on. Best practice would require platforms to disclose to users when and which content is subjected to reduced visibility. Rule 4(1)(d) did not form part of the draft intermediary guidelines that were subjected to public consultation in 2018, instead appearing for the first time in its current form in 2021. Ensuring broader consultation at the time of drafting may have eliminated such regulatory lacunae and resulted in a more robust framework for transparency reporting.

That said, achieving meaningful transparency reporting is a hard task. Standardising reporting procedures is a detailed and fraught process that likely requires platforms and regulators to engage in a consultative process – see this document created by Daphne Keller listing out potential problems in reporting procedures. Sample problem: “If ten users notify platforms about the same piece of content, and the platform takes it down after reviewing the first notice, is that ten successful notices, or one successful notice and nine rejected ones?” Given the scale of the regulatory and technical challenges, it is perhaps unsurprising that transparency reporting under the Intermediary Guidelines has gotten off to a rocky start. However, Rule 4(1)(d) itself offers an avenue for improvement. The Rule allows the MeitY to specify any additional information that platforms should publish in their transparency reports. In the case of proactive monitoring, requiring platforms to specify exactly how automated tools are deployed, and when content takedowns based on these tools are reversed, would be a good place to start. The MeitY must also engage with the functionality and internal procedures of SSMIs to ensure that reporting is harmonised to the extent possible. For example, a “complaint” reported by Facebook and by ShareChat should ideally have some equivalence. This requires, for a start, that MeitY consult with platforms, users, civil society, and academic experts when thinking about transparency.

Technology and National Security Law Reflection Series Paper 12 (B): Contours of Access to Internet as a Fundamental Right

Shreyasi Tripathi*

About the Author: The author is a 2021 graduate of National Law University, Delhi. She is currently working as a Research Associate with the Digital Media Content Regulatory Council.

Editor’s Note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law. Along with a companion piece by Tejaswita Kharel, the two essays bring to life a fascinating debate by offering competing responses to the following question:

Do you agree with the Supreme Court’s pronouncement in Anuradha Bhasin that access to the internet is an enabler of other rights, but not a fundamental right in and of itself? Why/why not? Assuming, for the sake of argument, that access to the internet is a fundamental right (as held by the Kerala High Court in Faheema Shirin), would the test of reasonableness of restrictions be applied differently, i.e. would this reasoning lead to a different outcome on the constitutionality (or legality) of internet shutdowns?

Both pieces were developed in the spring semester of 2020 and do not reflect knowledge of subsequent factual developments vis-à-vis COVID-19 or the ensuing pandemic.

  1. INTRODUCTION 

Although it did little to hold the government accountable for its actions in Kashmir, it would be incorrect to say that the judgment in Anuradha Bhasin v. Union of India is a complete failure. This reflection paper evaluates the lessons learnt from Anuradha Bhasin and argues in favour of recognising access to the internet as a fundamental right, especially in light of the COVID-19 pandemic.

  2. EXAMINING INDIA’S LEGAL POSITION ON RIGHT TO INTERNET 

Perhaps the greatest achievement of the Anuradha Bhasin judgement is that the Government is no longer allowed to pass confidential orders to shut down the internet in a region. Moreover, the reasons behind internet shutdown orders must not only be available for public scrutiny but must also be reviewed by a Committee. The Committee will need to scrutinise the reasons for the shutdown and benchmark them against the proportionality test. This includes evaluating the pursuit of a legitimate aim, the exploration of suitable alternatives, and the adoption of the least restrictive measure, while also making the order available for judicial review. The nature of the restriction, and its territorial and temporal scope, will be relevant factors in determining whether it is proportionate to the aim sought to be achieved. The Court also held that fundamental rights extend to the virtual space with the same protections. In this regard, the Court made certain important pronouncements on the right to freedom of speech and expression. These elements will not be discussed here as they fall outside the scope of this paper.

A few months prior, in 2019, the Kerala High Court had recognised access to the internet as a fundamental right. In its judgement in Faheema Shirin v. State of Kerala, the High Court addressed a host of possible issues that arise with a life lived online. Specifically, the High Court recognised how the internet extends individual liberty by giving people the choice to access content of their choosing, free from government control. The High Court relied on a United Nations General Assembly Resolution to note that the internet “… facilitates vast opportunities for affordable and inclusive education globally, thereby being an important tool to facilitate the promotion of the right to education…” – a fact whose importance has only grown during the pandemic. The Kerala High Court held that since the Right to Education is an integral part of the right to life and liberty enshrined under Article 21 of the Constitution, access to the internet becomes an inalienable right in and of itself. The High Court also recognised the value of the internet to the freedom of speech and expression, holding that access to the internet is protected under Art. 19(1)(a) of the Constitution and can be restricted only on grounds consistent with Art. 19(2).

  3. ARGUING IN FAVOUR OF RIGHT TO INTERNET  

During the pandemic, a major reason some of us have any semblance of freedom and normalcy in our lives is the internet. At a time when many aspects of our day-to-day lives have moved online, including education, healthcare, and shopping for essential services, the fundamental importance of the internet should not even be up for debate. The Government also uses the internet to disseminate essential information. In 2020 it deployed a contact tracing app (Aarogya Setu) which relied on the internet for its functioning. There is also a WhatsApp chatbot that provides accurate information about the pandemic. The E-Vidya Programme was launched by the Government to allow schools to become digital. In times like this, the internet is not merely one of the means to access constitutionally guaranteed services; it is the only way (emphasis added).

In this context, the right of access to the internet should be read as part of the Right to Life and Liberty under Art. 21. Internet access should therefore be subject to restrictions based only on procedures established by law. To better understand what shape such restrictions could take, lawmakers and practitioners can seek guidance from another recent addition to the list of rights read into Art. 21: the right to privacy. The proportionality test was laid down in the Puttaswamy I judgment and reiterated in Puttaswamy II (the “Aadhaar Judgement”). In the Aadhaar Judgement, when describing the proportionality standard for reasonable restrictions, the Supreme Court stated:

“…a measure restricting a right must, first, serve a legitimate goal (legitimate goal stage); it must, secondly, be a suitable means of furthering this goal (suitability or rational connection stage); thirdly, there must not be any less restrictive but equally effective alternative (necessity stage); and fourthly, the measure must not have a disproportionate impact on the right-holder (balancing stage).”

This excerpt from Puttaswamy II provides a clearly defined view of the proportionality test upheld by the Court in Anuradha Bhasin. It means that before passing an order to shut down the internet, the appropriate authority must assess whether the order pursues a goal of sufficient importance to override a constitutionally protected right; more specifically, the goal must fall within the categories of reasonable restrictions provided for in the Constitution. Next, there must be a rational connection between this goal and the means of achieving it. The appropriate authority must ensure that no alternative method can achieve this goal with just as much effectiveness, and that the method employed is the least restrictive one. Lastly, the internet shutdown must not have a disproportionate impact on the right-holder, i.e. the citizen whose right to freedom of expression or right to health is being affected by the shutdown. These reasons must be recorded in writing and be subject to judicial review.

Based on the judgment in Faheema Shirin, an argument can be made that the pandemic has further highlighted the importance of access to the internet, not created it. The Government’s reliance on going digital, with e-governance and digital payment platforms, shows an intention to usher the country into a world with a greater online presence than ever before.

  4. CONCLUSION 

People who are without access to the internet right now – people in Kashmir, who have access only to 2G internet on mobile phones, or those who do not have the socio-economic and educational means to access the internet – are suffering. Not only are they being denied access to education, but the lack of access to updated information about a disease we are still learning about could prove fatal. Given the importance of the internet in this time of crisis, and for the approaching future in which people will want to avoid crowded classrooms, marketplaces, and hospitals, access to the internet should be regarded as a fundamental right.

This is not to say that the Court’s recognition of this right alone can herald India into a new world. The recognition of the right to access the internet would only be a welcome first step towards bringing the country into the digital era. The right to access the internet should also be made a socio-economic right, which, if implemented robustly, would have far-reaching consequences such as greater social mobility, increased innovation, and the fostering of greater creativity.


*Views expressed in the blog are personal and should not be attributed to the institution.

France’s Cyber Influence Warfare Doctrine (L2I) 

By Ananya Moncourt

On 20th October 2021, the French Minister of the Armed Forces released the French Armed Forces’ Cyber Influence Warfare Doctrine (“Lutte Informatique d’influence” in French, abbreviated as L2I). The doctrine lays out a framework for “military operations conducted in the information layer of cyberspace to detect, characterize and counter attacks” and to undertake “intelligence gathering or deception operations”. In this blogpost, I highlight and analyze key features of this new doctrine for the conduct of information warfare by the French military.  

Cyberspace in this context comprises three inseparable layers: a physical layer (equipment, computer systems, other materials), a logical layer (digital data, software, data exchange flows) and an information or semantic layer (information and social interactions). The misuse of the semantic layer can be seen at work in information influence operations used to sway public opinion ahead of key elections or on matters of national importance. France experienced firsthand the perils of such operations in the Macron Leaks of 2017.

With the release of L2I ahead of France’s presidential elections in April 2022, the legitimisation of the conduct of offensive influence operations is consequential. Who conducts these information influence operations, under what legal constraints, and how they are justified in terms of identified threat groups are the questions that guide this assessment.

Over the last five years, there has been a gradual shift in France’s diplomatic posture from a defensive approach, i.e. the use of force only when necessary, to a more offensive and unhesitating preparedness to use force. The relocation of military strategy from a “peace-war-crisis continuum” to a “triptych of competition-contestation-confrontation” in L2I reflects this change clearly. With a guiding maxim to “win the war before the war”, L2I is one part of a three-pronged Strategic Vision released in November 2021. More broadly, it is the final element of a conceptual framework put forth by the military for acting in the information field: the first was the LID, a defensive computer warfare doctrine (2018), and the second the LIO, an offensive computer warfare doctrine (2019).  

Identifying Threats 

The impetus for identifying threats in the semantic layer stems from the possibility of information manipulation in cyberspace – a key component of hybrid warfare strategies today. In her speech presenting L2I to the world, Florence Parly (Minister of the Armed Forces) highlighted that “false, manipulated or subverted information is a weapon”. Threats arising from such weaponisation of information form the subtext of the doctrine, which references the authenticity with which modern technologies make it possible to create fake news (for example, deepfakes of false remarks by soldiers in operations or false speeches by politicians). These developments are seen as direct threats to the legitimacy and capacities of the French military. 

Two points about the locus of action for influence operations in L2I are significant. One, L2I operations take place within a framework strictly limited to military operations outside France’s national territory. Two, their “theatre of operations” is the information layer of cyberspace. The doctrine also explicitly identifies two threats to the French military – “organised armed groups” and “State actors”. The former includes terrorist groups and quasi-states (e.g. ISIS/Daesh) that leverage the information layer of cyberspace to fund, recruit and co-ordinate violence. The latter refers to proto-states or State actors using intermediaries whose aim is to destabilize state structures and public opinion by promoting false narratives and undertaking informational attacks.  

The Theatre of the “War before the War”: Cyberspace as a Battleground 

L2I deems cyberspace a “fertile breeding ground” for information warfare, due to the ease with which legitimacy can be gained by any individual or group within their established networks online. What merits attention, and further research, is the doctrine’s perceptive articulation of a ‘cognitive dimension’ of the information layer of cyberspace. An outcome of human-computer interactions, it is the emotional, irrational, and legitimate stimulation of people who interact in an online information environment that characterises this ‘cognitive layer’. Under the grammar of the doctrine, susceptibility to disinformation thus becomes an obvious threat in cyberspace. Achieving technological superiority and developing offensive cyber capabilities of the armed forces is presented as a straightforward goal. The doctrine further lucidly presents six characteristics of the information or cognitive layer of cyberspace:  

1] A contraction of time and space: The immediacy of information today combined with its large-scale dissemination promotes interaction and connectivity. The geographic boundaries of information, and the delays in its transmission, have faded away.  

2] Possibility of concealing sources of information: Mastery of related technologies makes it possible to conceal or falsify the origins of information. This anonymity makes the use of cyberspace conducive for purposes of influence by States or groups of individuals. 

3] Information persistence:  Information is difficult to erase in cyberspace because it can be duplicated easily or stored elsewhere. Information can therefore be reused outside of any verifiable context. 

4] Freedom of individuals: Anyone can produce and broadcast information, true or false, without any editorial control in cyberspace. This promotes an unbridled production of information.

5] Technological innovation: Continuous innovation in creation, storage and dissemination of information is a significant feature of cyberspace. 

6] A space modelled by Big Tech: Cyberspace is emerging with major digital operators who, de facto, impose their own regulations and terms. 

The characterization of cyberspace as a “deterritorialised” realm in the doctrine raises the question of whether information warfare can be governed through existing international law frameworks that are based on territorial sovereignty. Nevertheless, respect for international law in L2I is carved out in two distinct spheres. In peacetime, L2I is subject to the United Nations Charter and principles of non-interference, while during times of armed conflict the International Humanitarian Law principles of necessity, proportionality, distinction and precaution are highlighted. Further, every operation carried out under L2I is subject to political and legal constraints outlined in Rules of Operational Engagement (ROE), conceived to define the circumstances and conditions of implementation.  

It is clear that an inherent contradiction lies in L2I’s recognition of a borderless cyberspace (one that diffuses the boundaries between peacetime and wartime) and the subjection of its operations to international laws that are distinct for peacetime and wartime. While it is acknowledged that the functioning of cyberspace is premised on an “entanglement of boundaries” and that the application of legal provisions is complex, a lack of clarity on the line between the free rein to develop capabilities and the checks and balances necessary for the use of those capabilities is evident. This raises the question of what can be considered peacetime and/or wartime in the information layer of cyberspace, and whether such a distinction is relevant at all. Moreover, determining how territorial sovereignty is defined with regard to state action in this particular layer of cyberspace is an important first step towards developing regulatory guidelines for information influence operations. 

New age combatants for new age threats? 

L2I further outlines a dedicated chain of command under the apex authority of the President, followed by the Chief of Staff of the Armed Forces. The post of a General Commanding Officer has been created in recognition of cyber influence operations occurring at the confluence of offensive and defensive strategies. Further, in a multi-disciplinary approach to the development of human resources, the doctrine recognises the need for highly specialised skills across disciplines and proposes investment in a cyberwarfare troop comprising computer graphic designers, psychologists, sociologists, linguists and social media specialists.  

The pervasiveness of information in combination with the interconnectedness of our communication systems and increasingly sophisticated technology capabilities has led to evident potential for exploitation of information in cyberspace. In particular, social media has enabled nation-states to delve into the minds of people, communities and adversaries, to control and push certain narratives while marginalising other kinds of information and perspectives for power. The human mind, intertwined with open societies and networks, can be seen as an emerging battle-space of the future. 

Naturally, which groups are identified as threats and which national agencies are mandated to tackle them in cyberspace are critical questions. The degree of transparency with which these systematised influence operations, often covert, are sanctioned in a country’s legal framework also has significant geopolitical and human rights implications. This is especially important in democratic political systems, where people’s trust in institutions depends on the degree of accountability and transparency built into the institutions that undertake influence operations. 

In a parallel move, in December 2020 India’s Ministry of Defence created a new post of Director General of Information Warfare in light of hybrid warfare, social media realities and future battlefields. The scope of authority and the areas of work the office will undertake have not been detailed. As India prepares to strengthen her bilateral defence and security partnership with France, clarity on information operation strategies will improve the quality of such cooperation. As such, what the ‘theatre of operations’ and the identified threat groups will be for the Indian military are important questions that require articulation.  

The Future of Democracy in the Shadow of Big and Emerging Tech: CCG Essay Series

By Shrutanjaya Bhardwaj and Sangh Rakshita

In the past few years, the interplay between technology and democracy has reached a critical juncture. Untrammelled optimism about technology has now been shadowed by rising concerns over the survival of a meaningful democratic society. With the expanding reach of technology platforms, there have been increasing concerns in democratic societies around the world about the impact of such platforms on democracy and human rights. In this context, there has been an increasing focus on policy issues such as the need for an antitrust framework for digital platforms, platform regulation and free speech, the challenges of fake news, the impact of misinformation on elections, the invasion of citizens’ privacy due to the deployment of emerging tech, and cybersecurity. This has intensified the quest for optimal policy solutions. We, at the Centre for Communication Governance at National Law University Delhi (CCG), believe that a detailed academic exploration of the relationship between democracy, and big and emerging tech will aid our understanding of the current problems, help contextualise them, and highlight potential policy and regulatory responses.

Thus, we bring to you this series of essays—written by experts in the domain—in an attempt to collate contemporary scholarly thought on some of the issues that arise in the context of the interaction of democracy, and big and emerging tech. The essay series is publicly available on the CCG website. We have also announced the release of the essay series on Twitter.

Our first essay addresses the basic but critical question: What is ‘Big Tech’? Urvashi Aneja & Angelina Chamuah present a conceptual understanding of the phrase. While ‘Big Tech’ refers to a set of companies, it is certainly not a fixed set; companies become part of this set by exhibiting four traits or “conceptual markers” and—as a corollary—would stop being identified in this category if they were to lose any of the four markers. The first marker is that the company runs a data-centric model and has massive access to consumer data which can be leveraged or exploited. The second marker is that ‘Big Tech’ companies have a vast user base and are “multi-sided platforms that demonstrate strong network effects”. The third and fourth markers are the infrastructural and civic roles of these companies respectively, i.e., they not only control critical societal infrastructure (which is often acquired through lobbying efforts and strategic mergers and acquisitions) but also operate “consumer-facing platforms” which enable them to generate consumer dependence and gain huge power over the flow of information among citizens. It is these four markers that collectively define ‘Big Tech’. [U. Aneja and A. Chamuah, What is Big Tech? Four Conceptual Markers]

Since the power held by Big Tech is not only immense but also self-reinforcing, it endangers market competition, often by hindering other players from entering the market. Should competition law respond to this threat? If yes, how? Alok P. Kumar & Manjushree R.M. explore the purpose behind competition law and find that competition law is concerned not only with consumer protection but also—as evident from a conjoint reading of Articles 14 & 39 of the Indian Constitution—with preventing the concentration of wealth and material resources in a few hands. Seen in this light, the law must strive to protect “the competitive process”. But the present legal framework is too obsolete to achieve that aim. Current understanding of concepts such as ‘relevant market’, ‘hypothetical monopolist’ and ‘abuse of dominance’ is hard to apply to Big Tech companies which operate more on data than on money. The solution, it is proposed, lies in having ex ante regulation of Big Tech rather than a system of only subsequent sanctions through a possible code of conduct created after extensive stakeholder consultations. [A.P. Kumar and Manjushree R.M., Data, Democracy and Dominance: Exploring a New Antitrust Framework for Digital Platforms]

Market dominance and data control give an even greater power to Big Tech companies, i.e., control over the flow of information among citizens. Given the vital link between democracy and the flow of information, many have called for increased control over social media with a view to checking misinformation. Rahul Narayan explores what these demands might mean for free speech theory. Could it be (as some suggest) that these demands are “a sign that the erstwhile uncritical liberal devotion to free speech was just hypocrisy”? Traditional free speech theory, Narayan argues, is inadequate to deal with the misinformation problem for two reasons. First, it is premised on protecting individual liberty from authoritarian actions by governments, “not to control a situation where baseless gossip and slander impact the very basis of society.” Second, the core assumption behind traditional theory—i.e., the possibility of an organic marketplace of ideas where falsehood can be exposed by true speech—breaks down in the context of modern-era misinformation campaigns. Therefore, some regulation is essential to ensure the prevalence of truth. [R. Narayan, Fake News, Free Speech and Democracy]

Jhalak M. Kakkar and Arpitha Desai examine the context of election misinformation and consider possible misinformation regulatory regimes. Appraising the ideas of self-regulation and state-imposed prohibitions, they suggest that the best way forward for democracy is to strike a balance between the two. This can be achieved if the State focuses on regulating algorithmic transparency rather than the content of the speech—social media companies must be asked to demonstrate that their algorithms do not facilitate amplification of propaganda, to move from behavioural advertising to contextual advertising, and to maintain transparency with respect to funding of political advertising on their platforms. [J.M. Kakkar and A. Desai, Voting out Election Misinformation in India: How should we regulate Big Tech?]

Much like fake news challenges the fundamentals of free speech theory, it also challenges the traditional concepts of international humanitarian law. While disinformation fuels aggression by state and non-state actors in myriad ways, it is often hard to establish liability. Shreya Bose formulates the problem as one of causation: “How could we measure the effect of psychological warfare or disinformation campaigns…?” E.g., the cause-effect relationship is critical in tackling the recruitment of youth by terrorist outfits and the ultimate execution of acts of terror. It is important also in determining liability of state actors that commit acts of aggression against other sovereign states, in exercise of what they perceive—based on received misinformation about an incoming attack—as self-defence. The author helps us make sense of this tricky terrain and argues that Big Tech could play an important role in countering propaganda warfare, just as it does in promoting it. [S. Bose, Disinformation Campaigns in the Age of Hybrid Warfare]

The last two pieces focus attention on real-life, concrete applications of technology by the state. Vrinda Bhandari highlights the use of facial recognition technology (‘FRT’) in law enforcement as another area where the state deploys Big Tech in the name of ‘efficiency’. Current deployment of FRT is constitutionally problematic. There is no legal framework governing the use of FRT in law enforcement. Profiling of citizens as ‘habitual protestors’ has no rational nexus to the aim of crime prevention; rather, it chills the exercise of free speech and assembly rights. Further, FRT deployment is wholly disproportionate, not only because of the well-documented inaccuracy and bias-related problems in the technology, but also because—more fundamentally—“[t]reating all citizens as potential criminals is disproportionate and arbitrary” and “creates a risk of stigmatisation”. The risk of mass real-time surveillance adds to the problem. In light of these concerns, the author suggests a complete moratorium on the use of FRT for the time being. [V. Bhandari, Facial Recognition: Why We Should Worry the Use of Big Tech for Law Enforcement]

In the last essay of the series, Malavika Prasad presents a case study of the Pune Smart Sanitation Project, a first-of-its-kind urban sanitation programme which pursues the Smart City Mission (‘SCM’). According to the author, the structure of city governance (through Municipalities) that existed even prior to the advent of the SCM violated the constitutional principle of self-governance. This flaw was only aggravated by the SCM which effectively handed over key aspects of city governance to state corporations. The Pune Project is but a manifestation of the undemocratic nature of this governance structure—it assumes without any justification that ‘efficiency’ and ‘optimisation’ are neutral objectives that ought to be pursued. Prasad finds that in the hunt for efficiency, the design of the Pune Project provides only for collection of data pertaining to users/consumers, hence excluding the marginalised who may not get access to the system in the first place owing to existing barriers. “Efficiency is hardly a neutral objective,” says Prasad, and the state’s emphasis on efficiency over inclusion and participation reflects a problematic political choice. [M. Prasad, The IoT-loaded Smart City and its Democratic Discontents]

We hope that readers will find the essays insightful. As ever, we welcome feedback.

This series is supported by the Friedrich Naumann Foundation for Freedom (FNF) and has been published by the National Law University Delhi Press. We are thankful for their support. 

The Pegasus Hack: A Hark Back to the Wassenaar Arrangement

By Sharngan Aravindakshan

The world’s most popular messaging application, Whatsapp, recently revealed that a significant number of Indians were among the targets of Pegasus, a sophisticated spyware that operates by exploiting a vulnerability in Whatsapp’s video-calling feature. It has also come to light that Whatsapp, working with the University of Toronto’s Citizen Lab, an academic research organization focused on digital threats to civil society, has traced the source of the spyware to NSO Group, an Israeli company well known for both developing and selling hacking and surveillance technology to governments with questionable human rights records. Whatsapp’s lawsuit against NSO Group in a federal court in California also specifically alludes to NSO Group’s clients, “which include but are not limited to government agencies in the Kingdom of Bahrain, the United Arab Emirates, and Mexico as well as private entities.” The complaint filed by Whatsapp against NSO Group can be accessed here.

In this context, we examine the shortcomings of international efforts to limit or regulate the transfer or sale of advanced and sophisticated technology to governments that often use it to violate human rights, and highlight the often complex and blurred lines between the military and civil uses of these technologies by governments.

The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (WA) exists for this precise reason. Established in 1996 and voluntary/non-binding in nature,[i] its stated mission is “to contribute to regional and international security and stability, by promoting transparency and greater responsibility in transfers of conventional arms and dual-use goods and technologies, thus preventing destabilizing accumulations.”[ii] Military advancements across the globe, significant among which were the Indian and Pakistani nuclear tests, rocket tests by India and South Korea, and the use of chemical warfare during the Iran-Iraq war, were all catalysts in the formulation of this multilateral attempt to regulate the transfer of advanced technologies capable of being weaponized.[iii] With more and more incidents coming to light of authoritarian regimes utilizing advanced Western technology to violate human rights, the WA was amended to bring “intrusion software” and “IP network surveillance systems” within its ambit as well. 

Wassenaar: A General Outline

With a current membership of 42 countries (India being the latest to join, in late 2017), the WA is the successor to the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), which had been established by the Western Bloc in order to prevent weapons and technology exports to the Eastern Bloc or what was then known as the Soviet Union.[iv] However, unlike its predecessor, the WA does not target any nation-state, and its members cannot exercise any veto power over other members’ export decisions.[v] Notably, while Russia is a member, Israel and China are not.

The WA lists out the different technologies in the form of “Control Lists”, primarily consisting of the “List of Dual-Use Goods and Technologies” (the Basic List) and the “Munitions List”.[vi] The term “dual-use technology” typically refers to technology that can be used for both civilian and military purposes.[vii] The Basic List consists of ten categories:[viii]

  • Special Materials and Related Equipment (Category 1); 
  • Materials Processing (Category 2); 
  • Electronics (Category 3); 
  • Computers (Category 4); 
  • Telecommunications (Category 5, Part 1); 
  • Information Security (Category 5, Part 2); 
  • Sensors and Lasers (Category 6); 
  • Navigation and Avionics (Category 7); 
  • Marine (Category 8); 
  • Aerospace and Propulsion (Category 9). 

Additionally, the Basic List also has the Sensitive and Very Sensitive Lists which include technologies covering radiation, submarine technology, advanced radar, etc. 

An outline of the WA’s principles is provided in its Guidelines & Procedures, including the Initial Elements. Typically, participating countries enforce controls on transfers of the listed items by enacting domestic legislation requiring licenses for the export of these items, and they are also expected to ensure that exports “do not contribute to the development or enhancement of military capabilities which undermine these goals, and are not diverted to support such capabilities.”[ix]

While the Guidelines & Procedures document does not expressly proscribe the export of the specified items to non-WA countries, members are expected to notify other participants twice a year if a license under the Dual List is denied for export to any non-WA country.[x]

Amid concerns of violation of civil liberties

Unlike conventional weapons, cyberspace and information technology is a sector in which governments do not yet have a monopoly on expertise. In what can only be termed a “cyber-arms race”, it would be fair to say that most governments are even now busily acquiring technology from private companies to enhance their cyber-capacity, which includes surveillance technology for intelligence-gathering efforts. This, by itself, is plain realpolitik.

However, amid this weaponization of cyberspace, there were growing concerns that this technology was being purchased by authoritarian or repressive governments for use against their own citizens. For instance, Eagle, monitoring technology owned by Amesys (a unit of the French firm Bull SA), Boeing Co.’s internet-filtering Narus, and China’s ZTE Corp. all contributed to the surveillance efforts of Col. Gaddafi’s regime in Libya. Surveillance equipment sold by Siemens AG and maintained by Nokia Siemens Networks was used against human rights activists in Bahrain. These instances, as part of a wider pattern that came to the spotlight, galvanized the WA countries in 2013 to include “intrusion software” and “IP network surveillance systems” in the Control Lists in an attempt to limit the transfer of these technologies to known repressive regimes. 

Unexpected Consequences

The 2013 Amendment to the Control Lists was the subject of severe criticism from tech companies and civil society groups across the board. While the intention behind it was recognized as laudable, the terms “intrusion software” and “IP network surveillance system” were widely viewed as over-broad, with the unintended consequence of capturing both legitimate and illegitimate uses of technology. The problems pointed out by cybersecurity experts are manifold and result from a misunderstanding of how cybersecurity works.

The inclusion of these terms, which was meant to regulate surveillance based on computer code/programmes, also has the consequence of bringing within its ambit legitimate and often beneficial uses of these technologies, including, according to one view, even antivirus technology. Cybersecurity research and development often involves making use of “zero-day exploits”, i.e. vulnerabilities in deployed software which, when discovered and reported by a “bounty hunter”, are typically bought by the company owning the software. This helps the company immediately develop a “patch” for the reported vulnerability. These transactions are often necessarily cross-border. Experts complained that, if directly transposed into domestic law, the changes would have a chilling effect on the vital exchange of information and research in this area, posing a major hurdle for advances in cybersecurity and making cyberspace globally less safe. A prime example is Hewlett-Packard’s (HP) withdrawal from Pwn2Own, a computer hacking contest held annually at the PacSec security conference, where contestants are challenged to hack into / exploit vulnerabilities in widely used software. HP, which sponsored the event, was forced to withdraw in 2015, citing, among other reasons, the “complexity in obtaining real-time import /export licenses in countries that participate in the Wassenaar Arrangement”. The member nation in this case was Japan.

After facing fierce opposition on its home soil, the United States decided not to implement the WA amendment and instead argued for a reversal at the next Plenary session of the WA, which failed. Other jurisdictions, including the EU and Japan, have implemented the WA amendment’s export controls with varying degrees of success.

The Pegasus Hack, India and the Wassenaar

Considering that many of the Indians identified as victims of the Pegasus hack were journalists or human rights activists, many of them associated with the highly contentious Bhima Koregaon case, speculation is rife that the Indian government is among those purchasing and utilizing this kind of advanced surveillance technology to spy on its own citizens. Adding this to NSO Group’s public statement that its “sole purpose” is to “provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime”, it appears there are credible allegations that the Indian government was involved in the hack. The government’s evasiveness in responding, and its insistence that so-called “standard operating procedures” were followed, are less than reassuring.

While India’s entry to the WA as its 42nd member in 2017 has certainly elevated its status in the international arms control regime by granting it access to three of the world’s four main arms-control regimes (the others being the Nuclear Suppliers Group (NSG), the Missile Technology Control Regime (MTCR) and the Australia Group), the Pegasus hack incident and the apparent connection to the Indian government show that its commitment to the principles underlying the WA is doubtful. The purpose of including “intrusion software” and “IP network surveillance systems” in the WA’s Control Lists by way of the 2013 Amendment, whatever their unintended consequences for legitimate uses of such technology, was to prevent governmental purchases exactly like this one. Hence, even though the WA does not prohibit the purchase of surveillance technology from a non-member, the Pegasus incident is arguably still a serious detraction from India’s commitment to the WA, even if not an explicit violation.

Military Cyber-Capability Vs Law Enforcement Cyber-Capability

Given what we know so far, it appears that highly sophisticated surveillance technology has also come into the hands of local law enforcement agencies. Had it been disclosed that the Pegasus software was being utilized by a military wing against external enemies, say by the newly created Defence Cyber Agency, it would probably have caused fewer ripples. In fact, it might even have come off as reassuring evidence of the country’s advanced cyber-capabilities. However, the idea of such advanced, sophisticated technologies at the easy disposal of local law enforcement agencies is cause for worry. This is because, while traditionally the domain of the military is external, the domain of law enforcement agencies is internal, i.e. the citizenry. There is tremendous scope for misuse by such authorities, including increased targeting of minorities. The recent incident of police officials in Hyderabad randomly collecting people’s biometric data, including fingerprints, and taking their photographs only underscores this point. Even abroad, there are already ongoing efforts to limit the use of surveillance technologies by local law enforcement such as the police.

The conflation of technology use by both military and civil agencies is a problem created, at least in part, by the complex and often dual-use nature of technology. While dual-use technology is recognized by the WA, this is not a problem it is able to solve. As explained above, dual-use technology is technology that can be used for both civil and military purposes. The demands of realpolitik, the increase in cyber-terrorism, and the manifold ways in which a nation’s security can be compromised in cyberspace necessitate that any government in today’s world increase and improve its cyber-military capacity by acquiring such technology. After all, a government that acquires surveillance technology undoubtedly increases the effectiveness of its intelligence gathering and, ergo, its security efforts. But at the same time, the government also acquires the power to spy on its own citizens, which can easily cascade into more targeted violations. 

Governments must resist the impulse to turn such technology on their own citizens. In the Indian scenario, citizens have been granted a ring of protection by way of the Puttaswamy judgement, which explicitly recognizes the right to privacy as a fundamental right. Interception and surveillance by the government, while currently limited by laid-down protocols, are not regulated by any dedicated law. While there are calls for urgent legislation on the subject, few deal with the technology procurement processes involved. It has also now emerged that Chhattisgarh’s State Government has set up a panel to look into allegations that NSO officials had a meeting with the state police a few years ago. This raises questions about oversight of the relevant authorities’ public procurement processes, apart from their legal authority to actually carry out domestic surveillance by exploiting zero-day vulnerabilities. It is becoming evident that any law dealing with surveillance will need to ensure transparency and accountability in the procurement and use of the different kinds of invasive technology adopted by Central or State authorities to carry out such surveillance. 


[i] A Guide to the Wassenaar Arrangement, Daryl Kimball, Arms Control Association, December 9, 2013, https://www.armscontrol.org/factsheets/wassenaar, last accessed on November 27, 2019.

[ii] Ibid.

[iii] Data, Interrupted: Regulating Digital Surveillance Exports, Tim Maurer and Jonathan Diamond, November 24, 2015, World Politics Review.

[iv] Wassenaar Arrangement: The Case of India’s Membership, Rajeswari P. Rajagopalan and Arka Biswas, ORF Occasional Paper #92, p. 3, Observer Research Foundation, May 5, 2016, http://www.orfonline.org/wp-content/uploads/2016/05/ORF-Occasional-Paper_92.pdf, last accessed on November 27, 2019.

[v] Ibid., p. 3.

[vi] “List of Dual-Use Goods and Technologies And Munitions List,” The Wassenaar Arrangement, available at https://www.wassenaar.org/public-documents/, last accessed on November 27, 2019.

[vii] Article 2(1), Proposal for a Regulation of the European Parliament and of the Council setting up a Union regime for the control of exports, transfer, brokering, technical assistance and transit of dual-use items (recast), European Commission, September 28, 2016, http://trade.ec.europa.eu/doclib/docs/2016/september/tradoc_154976.pdf, last accessed on November 27, 2019.

[viii] Supra note vi.

[ix] Guidelines & Procedures, including the Initial Elements, The Wassenaar Arrangement, December 2016, http://www.wassenaar.org/wp-content/uploads/2016/12/Guidelines-and-procedures-including-the-Initial-Elements-2016.pdf, last accessed on November 27, 2019.

[x] Articles V(1) & (2), Guidelines & Procedures, including the Initial Elements, The Wassenaar Arrangement, December 2016, https://www.wassenaar.org/public-documents/, last accessed on November 27, 2019.