Comments on the draft amendments to the IT Rules (Jan 2023)

The Ministry of Electronics and Information Technology (“MeitY”) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) on January 17, 2023. The draft amendments aim to regulate online gaming, but also seek to have intermediaries “make reasonable efforts” to cause their users not to upload or share content identified as “fake” or “false” by the Press Information Bureau (“PIB”), any Union Government department, or an authorised agency (see the proposed amendment to Rule 3(1)(b)(v)). The draft amendments in their current form raise certain concerns that we believe merit additional scrutiny.

CCG submitted comments on the proposed amendment to Rule 3(1)(b)(v), highlighting its key feedback and concerns. The comments were authored by Archit Lohani and Vasudev Devadasan and reviewed by Sachin Dhawan and Jhalak M. Kakkar. Some of the key issues raised in our comments are summarised below.

  1. Misinformation and “fake” or “false” content include both unlawful and lawful expression

The proposed amendment does not define the term “misinformation” or provide any guidance on how determinations that content is “fake” or “false” are to be arrived at. Misinformation can include various forms of content, and experts have identified up to seven subtypes of misinformation: imposter content; fabricated content; false connection; false context; manipulated content; misleading content; and satire or parody. Different subtypes of misinformation can cause different types of harm (or no harm at all) and are treated differently under the law. Misinformation or false information thus includes both lawful and unlawful speech (e.g., satire is constitutionally protected speech).

Within the broad ambit of misinformation, the draft amendment does not provide sufficient guidance to the PIB and government departments on what sort of expression is permissible and what should be restricted. The draft amendment effectively provides them with unfettered discretion to restrict both unlawful and lawful speech. When seeking to regulate misinformation, experts, platforms, and other countries have drawn up detailed definitions that take into consideration factors such as intention, form of sharing, virality, context, impact, public interest value, and public participation value. These definitions recognize the potential multiplicity of context, content, and propagation techniques. In the absence of clarity over what types of content may be restricted based on a clear definition of misinformation, the draft amendment will restrict both unlawful speech and constitutionally protected speech. It will thus constitute an overbroad restriction on free speech.

  2. Restricting information solely on the ground that it is “false” is constitutionally impermissible

Article 19(2) of the Indian Constitution allows the government to place reasonable restrictions on free speech in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation, or incitement to an offence. The Supreme Court has ruled that these grounds are exhaustive and speech cannot be restricted for reasons beyond Article 19(2), including where the government seeks to block content online. Crucially, Article 19(2) does not permit the State to restrict speech on the ground that it is false. If the government were to restrict “false information that may imminently cause violence”, such a restriction would be permissible as it would relate to the ground of “public order” in Article 19(2). However, if enacted, the draft amendment would restrict online speech solely on the ground that it is declared “false” or “fake” by the Union Government. This amounts to a State restriction on speech for reasons beyond those outlined in Article 19(2), and would thus be unconstitutional. Restrictions on free speech must have a direct connection to the grounds outlined in Article 19(2) and must be a necessary and proportionate restriction on citizens’ rights.

  3. The amendment does not adhere to the procedures set out in Section 69A of the IT Act

The Supreme Court upheld Section 69A of the IT Act in Shreya Singhal v Union of India inter alia because it permitted government blocking of online content only on grounds consistent with Article 19(2) and provided important procedural safeguards, including a notice, a hearing, and a written order of blocking that can be challenged in court. The constitutionality of the government’s blocking power over online content is thus contingent on the substantive and procedural safeguards provided by Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. The proposed amendment to the Intermediary Guidelines would permit the Union Government to restrict online speech in a manner that does not adhere to these safeguards. It would permit the blocking of content on grounds beyond those specified in Article 19(2), based on a unilateral determination by the Union Government, without a specific procedure for notice, hearing, or a written order.

  4. Alternative methods to counter the spread of misinformation

Any response to misinformation on social media platforms should be based on empirical evidence of the prevalence and harms of misinformation on social media. Thus, as a first step, social media companies should be required to provide greater transparency and facilitate researcher access to data. There are alternative methods to regulate the spread of misinformation that may be more effective and preserve free expression, such as labelling or flagging misinformation. We note that there does not yet exist widespread legal and industry consensus on standards for independent fact-checking, but organisations such as the International Fact-Checking Network (IFCN) have laid down certain principles that independent fact-checking organisations should comply with. Having platforms label content pursuant to IFCN fact-checks, and even notify users when content they have interacted with has subsequently been flagged by an IFCN fact-checker, would provide users with valuable informational context without requiring content removal.

CCG’s Comments to the Ministry of Electronics & Information Technology on the proposed amendments to the Intermediary Guidelines 2021

On 6 June 2022, the Ministry of Electronics and Information Technology (“MeitY”) released proposed amendments to Parts I and II of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”). CCG submitted its comments on the proposed amendments to the 2021 IT Rules, highlighting its key feedback and concerns. The comments were authored by Vasudev Devadasan and Bilal Mohamed and reviewed and edited by Jhalak M. Kakkar and Shashank Mohan.

The 2021 IT Rules were released in February 2021, and Parts I and II of the Rules set out the conditions intermediaries must satisfy to avail of legal immunity (or ‘safe harbour’) for hosting unlawful content under Section 79 of the Information Technology Act, 2000 (“IT Act”). The 2021 IT Rules have been challenged in several High Courts across the country, and the Supreme Court is currently hearing a transfer petition on whether these actions should be clubbed and heard collectively by the apex court. In the meantime, MeitY has released the proposed amendments to the 2021 IT Rules, which seek to make incremental but significant changes to the Rules.

CCG’s comments to the MeitY can be summarised as follows:

Dilution of safe harbour in contravention of Section 79(1) of the IT Act

The core intention behind providing intermediaries with safe harbour under Section 79(1) of the IT Act is to ensure that intermediaries do not restrict the free flow of information online due to the risk of being held liable for third-party content uploaded by users. The proposed amendments to Rules 3(1)(a) and 3(1)(b) of the 2021 IT Rules potentially impose an obligation on intermediaries to “cause” and “ensure” that their users do not upload unlawful content. These amendments may require intermediaries to make complex determinations on the legality of speech and cause online intermediaries to remove content that carries even the slightest risk of liability. This may result in the restriction of online speech and the corporate surveillance of Indian internet users by intermediaries. If, on the other hand, the proposed amendments are interpreted as not requiring intermediaries to actively prevent users from uploading unlawful content, they may be functionally redundant, and we suggest they be dropped to avoid legal uncertainty.

Concerns with Grievance Appellate Committee

The proposed amendments envisage one or more Grievance Appellate Committees (“GAC”) that sit in appeal over intermediary determinations with respect to content. Users may appeal to a GAC against the decision of an intermediary not to remove content despite a user complaint, or alternatively, request a GAC to reinstate content that an intermediary has voluntarily removed or to lift account restrictions that an intermediary has imposed. The creation of GAC(s) may exceed the Government’s rule-making powers under the IT Act. Further, the GAC(s) lack the necessary safeguards in their composition and operation to ensure the independence required by law of such an adjudicatory body. Such independence and impartiality may be essential as the Union Government is responsible for appointing individuals to the GAC(s), while the Union Government or its functionaries or instrumentalities may also be a party before the GAC(s). Further, we note that the originator, the legality of whose content is in dispute before a GAC, has not expressly been granted a right to a hearing before the GAC. Finally, we note that the GAC(s) may lack the capacity to deal with the high volume of appeals against content and account restrictions. This may lead to situations where, in practice, only a small number of internet users are afforded redress by the GAC(s), leading to inequitable outcomes and discrimination amongst users.

Concerns with grievance redressal timeline

Under the proposed amendment to Rule 3(2), intermediaries must acknowledge a complaint by an internet user for the removal of content within 24 hours, and ‘act and redress’ this complaint within 72 hours. CCG’s comments note that the 72-hour timeline to address complaints proposed by the amendment to Rule 3(2) may cause online intermediaries to over-comply with content removal requests, leading to the possible take-down of legally protected speech in response to frivolous user complaints. Empirical studies of Indian intermediaries have demonstrated that smaller intermediaries lack the capacity and resources to make complex legal determinations of whether the content complained against violates the standards set out in Rule 3(1)(b)(i)-(x), while larger intermediaries are unable to address the high volume of complaints within short timelines, leading to the mechanical takedown of content. We suggest that any requirement that online intermediaries address user complaints within short timelines should differentiate between content that is ex facie (on the face of it) illegal and causes severe harm (e.g., child sexual abuse material or gratuitous violence), and other types of content where determinations of legality may require legal or judicial expertise, such as copyright or defamation.

Need for specificity in defining due diligence obligations

Rule 3(1)(m) of the proposed amendments requires intermediaries to ensure a “reasonable expectation of due diligence, privacy and transparency” to avail of safe harbour, while Rule 3(1)(n) requires intermediaries to “respect the rights accorded to the citizens under the Constitution of India.” These rules do not impose clearly ascertainable legal obligations, which may increase compliance burdens, hamper enforcement, and result in inconsistent outcomes. In the absence of specific data protection legislation, the obligation to ensure a “reasonable expectation of due diligence, privacy and transparency” is unclear. Fundamental rights obligations were drafted and developed in the context of citizen–State relations and may not be suitable for, or aptly transposed to, the relationship between intermediaries and users. Further, the content of ‘respecting fundamental rights’ under the Constitution is itself contested and open to reasonable disagreement between various State and constitutional functionaries. Requiring intermediaries to uphold such obligations will likely lead to inconsistent outcomes based on varied interpretations.

Understanding CERT-In’s Cybersecurity Directions, 2022

Sukanya Thapliyal

“Cyber Specialists” by Khahn Tran is licensed under CC BY 4.0

INTRODUCTION

The Indian Government is set to bring into force a widely discussed cybersecurity regulation later this month. On April 28, 2022, India’s national agency for computer incident response, the Indian Computer Emergency Response Team (CERT-In), released Directions relating to information security practices, procedures, prevention, response, and reporting of cyber incidents for a Safe & Trusted Internet. These Directions were issued under section 70B(6) of India’s Information Technology Act, 2000 (IT Act). This provision allows CERT-In to call for information and issue directions to carry out its obligations relating to:
1. facilitating the collection, analysis and dissemination of information related to cyber incidents,
2. releasing forecasts and alerts, and
3. taking emergency measures.

Under the IT Act, the new Directions are mandatory in nature, and non-compliance attracts criminal penalties, including imprisonment of up to one year. The notification states that the Directions will become effective 60 days from the date of issuance, i.e., on June 28, 2022. The Directions were later followed by a separate Frequently Asked Questions (FAQ) document, released in response to stakeholder queries and concerns.

These Directions have been introduced in response to increasing instances of cybersecurity incidents that undermine national security, public order, essential government functions, and economic development, and that pose security threats to individuals operating through cyberspace. Further, recognising that the private sector is a crucial component of the digital ecosystem, the Directions also push for closer cooperation between private organisations and government enforcement agencies. Consequently, the Directions identify the sharing of information for analysis, investigation, and coordination concerning cybersecurity incidents as one of their prime objectives.

POLICY SIGNIFICANCE OF DIRECTIONS

Presently, Indian cybersecurity policy lacks a definite form. The National Cyber Security Policy (NCSP), released in 2013, serves as an “umbrella framework for defining and guiding the actions related to security of cyberspace”. However, the policy has seen very limited implementation and has been mired in a multi-year reform effort that awaits completion. The new cybersecurity strategy is still in the works, and there is no single agency to oversee all relevant entities and hold them accountable.

Cybersecurity policymaking and governance are progressing through different government departments at the national and state levels, in silos and in a piecemeal manner. Several cybersecurity experts have also identified the lack of adequate technical skills and resource constraints as significant challenges for government bodies. The Indian cybersecurity policy landscape needs to address these existing and emerging threats and challenges by instilling appropriate security standards, efficiently implementing modern technologies, framing effective laws and security policies, and adopting multi-stakeholder approaches to cybersecurity governance.

Industry associations and lobby groups such as the US Chamber of Commerce (USCC), the US-India Business Council (USIBC), The Software Alliance (BSA), and the Information Technology Industry Council (ITI) have responded to the Directions with criticism. These organisations have stated that the Directions, in their present form, would negatively impact Indian and global enterprises and undermine cybersecurity. Moreover, the Directions were released without any public consultation and therefore lack necessary stakeholder inputs from across industry, civil society, academia, and the technical community.

The new CERT-In Directions mandate covered entities (service providers, intermediaries, data centres, body corporates, and government organisations) to comply with prescriptive requirements that include time synchronisation of ICT clocks, excessive data retention requirements, and a six-hour reporting requirement for cyber incidents, among others. The next section critically evaluates the salient features of the Directions.

SALIENT FEATURES OF THE DIRECTIONS

Time Synchronisation: Clause (i) of the Directions mandates service providers, intermediaries, data centres, body corporates, and government organisations to connect to the Network Time Protocol (NTP) servers of the National Informatics Centre (NIC) or the National Physical Laboratory (NPL), or to NTP servers traceable to these servers, for synchronisation of all their ICT system clocks. For organisations whose operations span multiple jurisdictions, the Directions allow a relaxation by permitting the use of alternative servers, provided the time source of those servers is the same as that of NPL or NIC. Several experts have described the requirement as extremely cumbersome, resource-intensive, and not in conformity with industry best practices. As per established practice, companies often base their choice of NTP servers on practicability (lower latency) and technical efficiency. Experts have also raised concerns over the technical and resource constraints of the NIC and NPL servers in managing traffic volumes, thus questioning the practical viability of the provision.
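Purely as an illustration (and not a procedure prescribed by the Directions), the short Python sketch below shows how a covered entity might monitor its clock offset against a designated NTP server. It assumes the third-party ntplib package; the hostname and tolerance are placeholders to be replaced with the NIC/NPL server and threshold an organisation actually adopts.

import ntplib  # third-party package: pip install ntplib

NTP_SERVER = "samay1.nic.in"   # placeholder; use the NIC/NPL server designated for your deployment
MAX_OFFSET_SECONDS = 1.0       # illustrative tolerance, not prescribed by the Directions

def clock_offset(server: str = NTP_SERVER) -> float:
    """Return this machine's clock offset (in seconds) relative to the NTP server."""
    client = ntplib.NTPClient()
    response = client.request(server, version=3, timeout=5)
    return response.offset

if __name__ == "__main__":
    offset = clock_offset()
    status = "within" if abs(offset) <= MAX_OFFSET_SECONDS else "outside"
    print(f"Clock offset vs {NTP_SERVER}: {offset:.3f}s ({status} tolerance)")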

Six-hour Reporting Requirement: Clause (ii) requires covered entities to mandatorily report cyber incidents within six hours of noticing such incidents or being notified about them. This imposes a stricter requirement than the Information Technology (The Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules, 2013 (CERT-In Rules), which allow covered entities to report a reportable cyber incident within “a reasonable time of occurrence or noticing the incident to have scope for timely action”. The six-hour reporting requirement is also stricter than the established norms in other jurisdictions, including the USA, EU, UK, and Australia, where reporting windows normally range from 24 to 72 hours depending upon the affected sector, the type of cyber intrusion, and the severity of the attack. The CERT-In Directions make no such distinctions in their reporting requirement. Further, the reportable cybersecurity incidents under Annexure 1 constitute an expanded list of cyber incidents (compared to those mentioned in the CERT-In Rules). These reportable cyber incidents are defined very broadly and range from unauthorised access to systems, identity theft, spoofing, and phishing attacks to data breaches and data theft. Considering that an average business entity with a digital presence engages in multiple digital activities, and that there is no segregation on the basis of the scale or severity of an incident, the Direction may be impractical to comply with and may create operational and compliance challenges for many smaller business entities covered under the Directions. Government agencies often require business entities to comply with incident/breach reporting requirements in order to understand macro cybersecurity trends, cross-cutting issues, and sectoral weaknesses. Governments should therefore design cyber incident reporting requirements tailored to sector, severity, risk, and scale of impact. Not making these distinctions can render the reporting exercise resource-intensive and futile for both affected entities and government enforcement agencies.

Maintenance of logs for 180 days for all ICT systems within India: Clause (iv) mandates covered entities to maintain logs of all their ICT systems for a period of 180 days and to store them within Indian jurisdiction. These logs may have to be provided to CERT-In while reporting a cyber incident or when otherwise directed. Several experts have raised concerns over the lack of clarity regarding the scope of this provision. The term “all ICT systems” in its present form could cover a huge trove of log information, potentially extending up to 1 terabyte a day, and the 180-day retention period far exceeds current industry practice (around 30 days). This Direction is not in line with the purpose limitation and data minimisation principles recognised widely in other jurisdictions, including under the EU’s General Data Protection Regulation (GDPR), and does not provide adequate safeguards against indiscriminate data collection that may negatively impact end users. Further, many experts have pointed out that the Direction lacks transparency and is detrimental to the privacy of users. As log information often contains personally identifiable information (PII), the provision may conflict with users’ informational privacy rights. CERT-In’s Directions are not sufficiently clear on the safeguards needed to balance law enforcement objectives with fundamental rights.
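As a minimal sketch of the retention floor described above (the Directions prescribe only the 180-day period and storage within India, not any particular tooling), a covered entity might separate logs still inside the retention window from those past it along the following lines; the directory path is hypothetical.

import time
from pathlib import Path

RETENTION_DAYS = 180                      # retention floor under Clause (iv)
LOG_DIR = Path("/var/log/ict-systems")    # hypothetical log location within Indian jurisdiction

def classify_logs(log_dir: Path = LOG_DIR, retention_days: int = RETENTION_DAYS):
    """Split log files into those still inside the retention window and those past it."""
    cutoff = time.time() - retention_days * 86400
    must_retain, past_window = [], []
    for path in log_dir.glob("*.log"):
        (must_retain if path.stat().st_mtime >= cutoff else past_window).append(path)
    return must_retain, past_window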

Strict data retention requirements for VPN and cloud service providers: Clause (v) requires “Data Centres, Virtual Private Server (VPS) providers, Cloud Service providers and Virtual Private Network Service (VPN Service) providers” to register accurate and detailed information regarding subscribers or customers hiring their services, and to maintain it for a period of five years or longer after any cancellation or withdrawal of the registration. Such information includes the name, address, and contact details of subscribers/customers hiring the services, their ownership pattern, the period of hire, and the e-mail ID, IP address, and time stamp used at the time of registration. Clause (vi) directs virtual asset service providers, virtual asset exchange providers, and custodian wallet providers to maintain all KYC records and details of all financial transactions for a five-year period. These Directions are resource-intensive and would substantially increase compliance costs for many companies. It is also important to note that bulk data retention for longer periods creates greater vulnerabilities and a larger attack surface for private, sensitive, and commercial ICT use. As India is yet to enact its data protection law, and the Directions are silent on fundamental rights safeguards, this has also led to serious privacy concerns. Further, some entities covered under this Direction, including VPS and VPN providers, are privacy- and security-enhancing services that operate on strict no-log policies. VPN services provide a secure channel for storing and sharing information by individuals and businesses. VPNs are widely used by businesses and individuals to protect themselves on unsecured public Wi-Fi networks, prevent website tracking, guard against malicious websites and government surveillance, and transfer sensitive and confidential information. While VPNs have come under fire for being used by cybercriminals and other malicious actors, a blanket requirement to maintain logs and retain extensive data goes against the very nature of the service and may render these services pointless (and even insecure) for many users. The Frequently Asked Questions (FAQs) released following the CERT-In Directions exempt enterprise/corporate VPNs from this requirement; however, the Directions still apply to VPN service providers that provide “Internet proxy like services” to general Internet subscribers/users. As a result, some of the largest VPN service providers, including NordVPN and PureVPN, have indicated the possibility of pulling their servers out of India and ceasing their operations in the country.

In a separate provision [Clause (iii)], CERT-In has also directed the service providers, intermediaries, data centers, body corporate, and government organisations to designate a point of contact to interface with CERT-In. The Directions have also asked the covered entities to provide information or any other assistance that CERT-In may require as part of cyber security mitigation actions and enhanced cyber security situational awareness.

CONCLUSION

Our ever-growing dependence on digital technology and its benefits has exposed us to several vulnerabilities. The State therefore plays a vital role in intervening through concrete and suitable policies, institutions, and digital infrastructure to protect against future cyber threats and attacks. However, the task is too vast to be handled by governments alone and requires active participation by the private sector, civil society, and academia. While the government has a broader perspective of potential threats through law enforcement and intelligence organisations and tends to perceive cybersecurity concerns through a national security lens, the commercial and fundamental rights dimensions of cybersecurity would benefit from inputs from the wider stakeholder community across the cybersecurity ecosystem.

Although India has in recent years shown some inclination towards embracing multi-stakeholder governance in cybersecurity policymaking, the CERT-In Directions point in the opposite direction. Several of the directions, such as the six-hour reporting requirement, the excessive data retention requirements, and the synchronisation of ICT clocks, indicate that the government is adopting a “command and control” approach, which may not be the most beneficial way of approaching cybersecurity issues. Further, the Directions fail to address the core issues of capacity constraints, the lack of skilled specialists, and the lack of awareness, which could be better tackled through a more collaborative approach that partners with the private sector, civil society, and academia to achieve the shared goal of cybersecurity. Multi-stakeholder approaches to policymaking have stood the test of time and have been successfully applied in a range of policy spaces, including climate change, health, food security, and sustainable economic development, among others. In cybersecurity too, effective cross-stakeholder collaboration is now recognised as key to solving difficult policy issues and producing credible and workable solutions. The government therefore needs to put in place institutions and policies that fully recognise the need for, and advantages of, multi-stakeholder approaches, without compromising accountability systems that give due consideration to security threats and safeguard citizens’ rights.

Guest Post: Evaluating MIB’s emergency blocking power under Rule 16 of the 2021 IT Rules (Part II)

This post is authored by Dhruv Bhatnagar

Part I of this two-part series examined the contours of Rule 16 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”), and the Bombay High Court’s rationale for refusing to stay the rule in the Leaflet case. This second part examines the legality and constitutionality of Rule 16. It argues that the rule’s constitutionality may be contested because it deprives impacted content publishers of a hearing when their content is restricted. It also argues that the MIB should disclose its blocking orders under Rule 16 so that they can be challenged, both by users whose access to information is curtailed and by publishers whose right to free expression is restricted.

Rule 16’s legality

At its core, Rule 16 is a legal provision granting discretionary authority to the government to take down content. The Supreme Court (“SC”) has consistently maintained that, to be compliant with Article 14, discretionary authority must be backed by adequate safeguards.[1] Admittedly, Rule 16 is not entirely devoid of safeguards, since it envisages an assessment of the credibility of content-blocking recommendations at multiple levels (refer to Part I for context). But this framework overlooks a core principle of natural justice – audi alteram partem (hear the other side) – by depriving impacted publishers of a hearing.

In Tulsiram Patel, the SC recognised the principles of natural justice as part of the guarantee under Article 14 and ruled that any law or state action abrogating these principles is susceptible to a constitutionality challenge. But the SC also found that natural justice principles are not absolute and can be curtailed under exceptional circumstances. In particular, audi alteram partem can be excluded in situations where the “promptitude or the urgency of taking action so demands”.

Arguably, the suspension of pre-decisional hearings under Rule 16 is justifiable, considering the rule’s very purpose is to empower the Government to act with alacrity against content capable of causing immediate real-world harm. However, this rationale does not preclude the provision of a post-decisional hearing under the framework of the 2021 IT Rules. This is because, as posited by the SC in Maneka Gandhi (analysed here and here), the “audi alteram partem rule is sufficiently flexible” to address “the exigencies of myriad kinds of situations…”. Thus, a post-decisional hearing to impacted stakeholders, after the immediacy necessitating the issuance of interim blocking directions has subsided, could reasonably be accommodated within Rule 16. Crucially, this would create a forum for the State to justify the necessity and proportionality of its speech restriction to the individuals impacted (strengthening legitimacy) and the public at large (strengthening the rule of law and public reasoning). Finally, in the case of ex facie illegal content, originators are unlikely to avail of post-facto hearings, mitigating concerns of a burdensome procedure.

Rule 16’s exercise by MIB

Opacity

MIB has exercised its power under Rule 16 of the 2021 IT Rules on five occasions. Collectively, it has ordered the blocking of approximately 93 YouTube channels, 6 websites, 4 Twitter accounts, and 2 Facebook accounts. Each time, MIB has announced the content blocking only through press releases issued after the orders were passed, and has not disclosed the actual blocking orders.

MIB’s reluctance to publish its blocking orders renders the manner in which it exercises its power under Rule 16 opaque. Although press statements inform the public that content has been blocked, the blocking orders themselves, which are required (under Rules 16(2) and 16(4)) to record the reasons for blocking, are not made available. As discussed above, this limits the right to free expression of the originators of the content and denies them the ability to be heard.

Additionally, content recipients, whose right to view content and access information is curtailed through such orders, are not being made aware of the existence of these orders by the Ministry directly. Pertinently, the 2021 IT Rules appear to recognise the importance of informing users about the reasons for blocking digital content. This is evidenced by Rule 4(4), which requires ‘significant social media intermediaries’ to display a notice to users attempting to access proactively disabled content. However, in the absence of similar transparency obligations upon MIB under the 2021 IT Rules, content recipients aggrieved by the Ministry’s blocking orders may be compelled to rely on the cumbersome mechanism under the Right to Information Act, 2005 to seek the disclosure of these orders to challenge them.   

Although the 2021 IT Rules do not specifically mandate the publication of blocking orders by MIB, this obligation can be derived from the Anuradha Bhasin verdict. Here, in the context of the Telecom Suspension Rules, the SC held that any order affecting the “lives, liberty and property of people” must be published by the government, “regardless of whether the parent statute or rule prescribes the same”. The SC also held that the State should ensure the availability of governmental orders curtailing fundamental rights unless it claims specific privilege or public interest for refusing disclosure. Even then, courts will finally decide whether the State’s claims override the aggrieved litigants’ interests.

Considering the SC’s clear reasoning, MIB ought to make its blocking orders readily available in the interest of transparency, especially since a confidentiality provision restricting disclosure, akin to Rule 16 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“2009 Blocking Rules”), is absent in the 2021 IT Rules.   

Overuse

Another concerning trend is MIB’s invocation of its emergency content-blocking power as the norm rather than the exception it was meant to be. For context, the 2021 IT Rules provide a non-emergency blocking process under Rules 14 and 15, whereunder impacted publishers are provided a pre-decisional hearing before an Inter-Departmental Committee required to be constituted under Rule 13(1)(b). However, thus far, MIB has exclusively relied on its emergency power to block ostensibly problematic digital content, including fake news.

While the Bombay High Court in the Leaflet case declined to expressly stay Rule 14 (noting that the Inter-Departmental Committee was yet to be set up) (¶19), the High Court’s stay on Rule 9(3) creates a measure of ambiguity as to whether Rules 14 and 15 are currently in effect. This is because Rule 9(3) states that there shall be a government oversight mechanism to “ensure adherence to the Code of Ethics”. A key part of this mechanism is the Inter-Departmental Committee, whose role is to decide “violation[s] or contravention[s] of the Code of Ethics” (Rule 14(2)). The High Court even notes that it is “incomprehensible” how content may be taken down under Rule 14(5) for violating the Code of Ethics (¶27). Thus, despite the Bombay High Court’s refusal to stay Rule 14, it is arguable that the High Court’s stay on the operation of Rule 9(3), meant to prevent the ‘Code of Ethics’ from being applied against online news and curated content publishers, may logically extend to Rules 14(2) and 15. However, even if the Union were to proceed on a plain reading of the Leaflet order and infer that the Bombay High Court did not stay Rules 14 and 15, it is unclear whether the MIB has constituted the Inter-Departmental Committee to facilitate non-emergency blocking.

MeitY has also liberally invoked its emergency blocking power under Rule 9 of the 2009 Blocking Rules to disable access to content. Illustratively, in early 2021 Twitter received multiple blocking orders from MeitY, at least two of which were emergency orders, directing it to disable over 250 URLs and a thousand accounts for circulating content relating to farmers’ agitation against contentious farm laws. Commentators have also pointed out that there are almost no recorded instances of MeitY providing pre-decisional hearings to publishers under the 2009 Blocking Rules, indicating that in practice this crucial safeguard has been rendered illusory.  

Conclusion

Evidently, there is a need for the MIB to be more transparent when invoking its emergency content-blocking powers. A significant step forward in this direction would be ensuring that at least final blocking orders, which ratify emergency blocking directions, are made readily available, or at least provided to publishers/originators. Similarly, notices to any users trying to access blocked content would also enhance transparency. Crucially, these measures would reduce information asymmetry regarding the existence of blocking orders and allow a larger section of stakeholders, including the oft-neglected content recipients, the opportunity to challenge such orders before constitutional courts.

Additionally, the absence of a hearing for impacted stakeholders at any stage of the emergency blocking process under Rule 16 of the 2021 IT Rules limits their right to be heard and to defend the legality of the ‘at-issue’ content. Whilst the justification of urgency may be sufficient to deny a pre-decisional hearing, the procedural safeguard of a post-decisional hearing should be incorporated by MIB.

The aforesaid legal infirmities plague Rule 9 of the 2009 Blocking Rules as well, given its similarity with Rule 16 of the 2021 IT Rules. The Tanul Thakur case presents an ideal opportunity for the Delhi High Court to examine and address the limitations of these rules. Civil society organisations have for years advocated (here and here) for incorporation of a post-decisional hearing within the emergency blocking framework under the 2009 Blocking Rules too. Its adoption and diligent implementation could go a long way in upholding natural justice and mitigating the risk of arbitrary content blocking.


[1] State of Punjab v. Khan Chand, (1974) 1 SCC 549; Virendra v. The State of Punjab & Ors., AIR 1957 SC 896; State of West Bengal v. Anwar Ali, AIR 1952 SC 75.

Guest Post: Evaluating the legality of MIB’s emergency blocking power under the 2021 IT Rules (Part I)

This post is authored by Dhruv Bhatnagar

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”) were challenged before several High Courts (refer here and here) almost immediately after their promulgation. In one such challenge, initiated by the publishers of the online news portal ‘The Leaflet’, the Bombay High Court, by an order dated August 14, 2021, imposed an interim stay on the operation of Rules 9(1) and (3) of the 2021 IT Rules. Chiefly, this was done because these provisions subject online news and curated content publishers to a vaguely worded ‘Code of Ethics’, adherence to which would have had a ‘chilling effect’ on their freedom of speech. However, the Bombay High Court refused to stay Rule 16 of these Rules, which empowers the Ministry of Information and Broadcasting (“MIB”) to direct blocking of digital content during an “emergency” where “no delay is acceptable”.

Part I of this two-part series examines the contours of Rule 16 and argues that the Bombay High Court overlooked the procedural inadequacy of this rule when refusing to stay the provision in the Leaflet case. Part II assesses the legality and constitutionality of the rule.

Overview of Rule 16

Part III of the 2021 IT Rules authorises the MIB to direct the blocking of digital content in the case of an ‘emergency’ where ‘no delay is acceptable’.

The MIB has correctly noted that Rule 16 is modelled after Rule 9 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“2009 Blocking Rules”) (analysed here), and confers upon the MIB similar emergency blocking powers which the Ministry of Electronics and Information Technology (“MeitY”) has possessed since 2009. Both provisions confer discretion upon authorised officers to determine what constitutes an emergency but fail to provide a hearing to impacted publishers or intermediaries at any stage.

Judicial findings on Rule 16

The Bombay High Court’s order in the Leaflet case is significant since it is the first time a constitutional court has recorded its preliminary findings on the rule’s legitimacy. Here, the Bombay High Court refused to stay Rule 16 primarily for two reasons. First, the High Court held that Rule 16 of the 2021 IT Rules is substantially similar to Rule 9 of the 2009 Blocking Rules, which is still in force. Second, the grounds upon which Rule 16 permits content blocking are coextensive with the grounds on which speech may be ‘reasonably restricted’ under Article 19(2) of the Indian Constitution. Respectfully, the plausibility of this reasoning is contestable:

Equivalence with the 2009 Blocking Rules: Section 69A of the IT Act and the 2009 Blocking Rules were previously challenged in Shreya Singhal, where both were upheld by the Supreme Court (“SC”). However, establishing an equivalence between Rule 16 of the 2021 IT Rules and Rule 9 of the 2009 Blocking Rules to assess the constitutionality of the former would have been useful only if Shreya Singhal contained a meaningful analysis of Rule 9. The SC did not examine this rule but rather broadly upheld the constitutionality of the 2009 Blocking Rules as a whole due to the presence of certain safeguards, including: (a) the non-emergency process for content blocking under the 2009 Blocking Rules includes a pre-decisional hearing to identified intermediaries/originators before content is blocked; and (b) the 2009 Blocking Rules mandate the recording of reasons in blocking orders so that they may be challenged under Article 226 of the Constitution.

However, the SC did not consider that the emergency blocking framework under Rule 9 of the 2009 Blocking Rules not only allows MeitY to bypass the essential safeguard of a pre-decisional hearing to impacted stakeholders but also fails to provide them with either a written order or a post-decisional hearing. It also did not address that Rule 16 of the 2009 Blocking Rules, which mandates confidentiality of blocking requests and subsequent actions, empowers MeitY to refuse disclosure of blocking orders to impacted stakeholders thus depriving them of the opportunity to challenge such orders.

In fact, Rule 16 was cited by MeitY as a basis for denying film critic Mr. Tanul Thakur access to the order by which his satirical website ‘Dowry Calculator’ was blocked. Mr. Thakur challenged Rule 16 of the 2009 Blocking Rules and highlighted the secrecy with which MeitY exercises its blocking powers in a writ petition that is being heard by the Delhi High Court. Recently, through an interim order dated 11 May 2022, the Delhi High Court directed MeitY to provide Mr. Thakur with a copy of the order blocking his website and to offer him a post-decisional hearing. This is a significant development, since it is the first recorded instance of such a hearing being provided to an originator under the 2009 Blocking Rules.

Thus, the Bombay High Court’s attempt in the Leaflet case to claim equivalence with Rule 9 of the 2009 Blocking Rules as a basis to defend the constitutionality of Rule 16 of the 2021 IT Rules was inapposite since Rule 9 itself was not substantively reviewed in Shreya Singhal, and its operation has since been challenged on constitutional grounds.

Procedural safeguards: Merely because Rule 16 of the 2021 IT Rules permits content blocking only under the circumstances enumerated in Article 19(2) does not automatically render it procedurally reasonable. In People’s Union for Civil Liberties (“PUCL”), the SC examined the procedural propriety of Section 5(2) of the Telegraph Act, 1885, which permits phone-tapping. Even though this provision restricts fundamental rights only on constitutionally permissible grounds, the SC found that substantive law had to be backed by adequate procedural safeguards to rule out arbitrariness. Although the SC declined to strike down Section 5(2) in PUCL, it framed interim guidelines to govern the provision’s exercise to compensate for the lack of adequate safeguards.

Since Rule 16 restricts the freedom of speech, its proportionality should be tested as part of any meaningful constitutionality analysis. To be proportionate, restrictions on fundamental rights must satisfy four prongs[1]: (a) legality – the requirement of a law with a legitimate aim; (b) suitability – a rational nexus between the means adopted to restrict rights and the end of achieving this aim; (c) necessity – the proposed restrictions must be the ‘least restrictive measures’ for achieving the aim; and (d) balancing – a balance between the extent to which rights are restricted and the need to achieve the aim. Justice Kaul’s opinion in Puttaswamy (9 JB) also highlights the need for procedural safeguards against the abuse of measures interfering with fundamental rights (para 70, Kaul J).

Arguably, by demonstrating the connection between Rule 16 and Article 19(2), the Bombay High Court has proven that Rule 16 potentially satisfies the ‘legality’ prong. However, even at an interim stage, before finally ascertaining Rule 16’s constitutionality by testing it against the other proportionality parameters identified above, the Bombay High Court should have considered whether the absence of procedural safeguards under this rule merited staying its operation.

For these reasons, the Bombay High Court could have ruled differently in deciding whether to stay the operation of Rule 16 in the Leaflet case. While these are important considerations at the interim stage, ultimately the larger question of constitutionality must be addressed. The second post in this series will critically examine the legality and constitutionality of Rule 16.


[1] Modern Dental College and Research Centre and Ors. v. State of Madhya Pradesh and Ors., (2016) 7 SCC 353; Justice K.S. Puttaswamy & Ors. v. Union of India (UOI) & Ors., (2019) 1 SCC 1; Anuradha Bhasin and Ors. v. Union of India (UOI) & Ors., (2020) 3 SCC 637.

Transparency reporting under the Intermediary Guidelines is a mess: Here’s how we can improve it

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) represent India’s first attempt at regulating large social media platforms, with the Guidelines creating distinct obligations for ‘Significant Social Media Intermediaries’ (“SSMIs”). While certain provisions of the Guidelines concerning SSMIs (like the traceability requirement) are currently under legal challenge, the Guidelines also introduced a less controversial requirement that SSMIs publish monthly transparency reports regarding their content moderation activities. While this reporting requirement is arguably a step in the right direction, scrutinising the actual documents published by SSMIs reveals a patchwork of inconsistent and incomplete information – suggesting that Indian regulators need to adopt a more comprehensive approach to platform transparency.

This post briefly sets out the reporting requirement under the Intermediary Guidelines before analysing the transparency reports released by SSMIs. It highlights how a focus on figures, coupled with the wide discretion granted to platforms to frame their reports, undermines the goal of meaningful transparency. The figures referred to when analysing SSMI reports pertain to the February–March 2022 reporting period, but the distinct methodologies used by each SSMI to arrive at these figures (more relevant for the present discussion) have remained broadly unchanged since reporting began in mid-2021. The post concludes by making suggestions on how the Ministry of Electronics and Information Technology (“MeitY”) can strengthen the reporting requirements under the Intermediary Guidelines.

Transparency reporting under the Intermediary Guidelines

Social media companies structure speech on their platforms through their content moderation policies and practices, which determine when content stays online and when content is taken down. Even if content is not illegal or taken down pursuant to a court or government order, platforms may still take it down for violating their terms of service (or Community Guidelines); let us call this ‘violative content’, i.e., content that violates the terms of service. However, ineffective content moderation can result in violative and even harmful content remaining online, or non-violative content mistakenly being taken down. Given the centrality of content moderation to online speech, the Intermediary Guidelines seek to bring some transparency to the content moderation practices of SSMIs by requiring them to publish monthly reports on their content moderation activities. Transparency reporting helps users and the government understand the decisions made by platforms with respect to online speech. Given the opacity with which social media platforms often operate, transparency reporting requirements can be an essential tool to hold platforms accountable for ineffective or discriminatory content moderation practices.

Rule 4(1)(d) of the Intermediary Guidelines requires SSMIs to publish monthly transparency reports specifying: (i) the details of complaints received, and actions taken in response, (ii) the number of “parts of information” proactively taken down using automated tools; and (iii) any other relevant information specified by the government. The Rule therefore covers both ‘reactive moderation’, where a platform responds to a user’s complaints against content, and ‘proactive moderation’, where the platform itself seeks out unwanted content even before a user reports it.

Transparency around reactive moderation helps us understand trends in user reporting and how responsive an SSMI is to user complaints, while disclosures on proactive moderation shed light on the scale and accuracy of an SSMI’s independent moderation activities. A key goal of both reporting datasets is to understand whether the platform is taking down as much harmful content as possible without accidentally also taking down non-violative content. Unfortunately, Rule 4(1)(d) merely requires SSMIs to report the number of links taken down during their content moderation (this is reiterated by MeitY’s FAQs on the Intermediary Guidelines). The problems with an overly simplistic approach come to the fore upon an examination of the actual reports published by SSMIs.

Contents of SSMI reports – proactive moderation

Based on their latest monthly transparency reports, Twitter proactively suspended 39,588 accounts, while Google used automated tools to remove 338,938 pieces of content. However, these figures only document the scale of proactive monitoring and do not provide any insight into the accuracy of the platforms’ moderation – that is, how accurately the moderation distinguishes between violative and non-violative content. The reporting also does not specify whether this content was taken down using solely automated tools, or some mix of automated tools and human review or oversight. Meta (reporting for Facebook and Instagram) reports the volume of content proactively taken down, but also provides a “Proactivity Rate”. The Proactivity Rate is defined as the percentage of content flagged proactively (before a user reported it) as a subset of all flagged content: Proactivity Rate = proactively flagged content ÷ (proactively flagged content + user-reported content). However, this metric is also of little use in understanding the accuracy of Meta’s automated tools. Take the following example:

Assume a platform has 100 pieces of content, of which 50 violate the platform’s terms of service and 50 do not. The platform relies on both proactive monitoring through automated tools and user reporting to identify violative content. Now, if the automated tools detect 49 pieces of violative content and a user reports 1, the platform states that: ‘49 pieces of content were taken down pursuant to proactive monitoring at a Proactivity Rate of 98%’. However, this reporting does not inform citizens or regulators: (i) whether the 49 pieces of content identified by the automated tools are in fact among the 50 pieces that violate the platform’s terms of service (or whether the tools mistakenly took down some legitimate, non-violative content); (ii) how many users saw but did not report the content that was eventually flagged by automated tools and taken down; and (iii) what level and extent of human oversight was exercised in removing content. A high Proactivity Rate merely indicates that automated tools flagged more content than users, which is to be expected. Simply put, numbers aren’t everything; they only disclose the scale of content moderation, not its quality.
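To make the contrast concrete, here is a minimal Python sketch of the hypothetical platform above. The ‘precision’ figure is not something SSMIs currently disclose; the 40-of-49 value is an assumed number used only to show that a 98% Proactivity Rate says nothing about how many of those flags were correct.

def proactivity_rate(proactive_flags: int, user_flags: int) -> float:
    # Share of all flagged content that was flagged by automated tools
    return proactive_flags / (proactive_flags + user_flags)

def precision(correct_flags: int, total_flags: int) -> float:
    # Share of proactive flags that actually violated the terms of service
    return correct_flags / total_flags

rate = proactivity_rate(49, 1)                      # 0.98 -> the "98% Proactivity Rate"
acc = precision(correct_flags=40, total_flags=49)   # assumed: only 40 of the 49 flags were correct
print(f"Proactivity rate: {rate:.0%}, precision of proactive flags: {acc:.0%}")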

This criticism raises the question: how does one assess the quality of proactive moderation? The Santa Clara Principles represent high-level guidance on content moderation practices developed by international human rights organisations and academic experts to facilitate platform accountability with respect to users’ speech. The Principles require that platforms report: (i) when and how automated tools are used; (ii) the key criteria used by automated tools in making decisions; (iii) the confidence, accuracy, or success rate of automated tools, including in different languages; (iv) the extent of human oversight over automated tools; and (v) the outcomes of appeals against moderation decisions made by automated tools. This last requirement of reporting the outcome of appeals (how many users successfully got content reinstated after it was taken down by proactive monitoring) is a particularly useful metric, as it provides an indicator of when platforms themselves acknowledge that their proactive moderation was inaccurate. Draft legislation in Europe and the United States requires platforms to report how often proactive monitoring decisions are reversed. Mandating the reporting of even some of these elements under the Intermediary Guidelines would provide a clearer picture of the accuracy of proactive moderation.

Finally, it is relevant to note that Rule 4(4) of the Intermediary Guidelines requires that the automated tools used for proactive monitoring of certain classes of content be ‘reviewed for accuracy and fairness’. The desirability of such proactive monitoring aside, Rule 4(4) is not self-enforcing and does not specify who should undertake this review, how often it should be carried out, and to whom the results should be communicated.

Contents of SSMI reports – reactive moderation

Transparency reporting with respect to reactive moderation aims to understand trends in user reporting of content and a platform’s responses to user flagging of content. Rule 4(1)(d) requires platforms to disclose the “details of complaints received and actions taken thereon”. However, a perusal of SSMI reporting reveals how the broad discretion granted to SSMIs to frame their reports is undermining the usefulness of the reporting.  

Google’s transparency report adopts the most straightforward understanding of “complaints received”, with the platform disclosing the number of ‘complaints that relate to third-party content that is believed to violate local laws or personal rights’. In other words, where users raise a complaint against a piece of content, Google reports it (30,065 complaints in February 2022). Meta, on the other hand, only reports complaints from: (i) a specific contact form, a link to which is provided in its ‘Help Centre’; and (ii) complaints addressed to the physical post-box address published on the ‘Help Centre’. For February 2022, Facebook received a mere 478 complaints, of which only 43 pertained to content (inappropriate or sexual content), while 135 were from users whose accounts had been hacked, and 59 were from users who had lost access to a group or page. If 43 user reports a month against content on Facebook seems suspiciously low, it likely is – because the method of reporting content that involves the least friction for users (simply clicking on the post and reporting it directly) bypasses the specific contact form that Facebook uses to collate India complaints, and thus appears to be absent from Facebook’s transparency reporting. Most of Facebook’s 478 complaints for February have nothing to do with content on Facebook and offer little insight into how Facebook responds to user complaints against content or what types of content users report.

In contrast, Twitter’s transparency reporting expressly states that it does not include non-content-related complaints (e.g., a user locked out of their account), instead limiting its transparency reporting to content-related complaints – 795 complaints for March 2022, with abuse or harassment (606), hateful conduct (97), and misinformation (33) being the top categories. However, like Facebook, Twitter also has both a ‘support form’ and allows users to report content directly by clicking on it, but it fails to specify the sources from which “complaints” are compiled for its India transparency reports. Twitter merely notes that ‘users can report grievances by the grievance mechanism by using the contact details of the Indian Grievance Officer’.

These apparent discrepancies in the number of complaints reported bear even greater scrutiny when the number of users of these platforms is factored in. Twitter (795 complaints/month) has an estimated 23 million users in India, while Facebook (406 complaints/month) has an estimated 329 million users. It is reasonable to expect user complaints to scale with the number of users, but this is evidently not happening, suggesting that these platforms are using different sources and methodologies to determine what constitutes a “complaint” for the purposes of Rule 4(1)(d). This is perhaps a useful time to discuss another SSMI, ShareChat.

ShareChat is reported to have an estimated 160 million users, and for February 2022 the platform reported 56,81,213 user complaints (substantially more than Twitter and Facebook). These complaints are content related (e.g., hate speech, spam, etc.), although with 30% of complaints merely classified as ‘Others’, there is some uncertainty as to what they pertain to. ShareChat’s report states that it collates complaints from ‘reporting mechanism across the platform’. This would suggest that, unlike Facebook (and potentially Twitter), it compiles user complaint numbers from all the methods by which a user can complain against content, and not just a single form tucked away in its help centre documentation. While this may be a more holistic approach, ShareChat’s reporting suffers from other crucial deficiencies. ShareChat’s report makes no distinction between reactive and proactive moderation, merely giving a figure for content that has been taken down. This makes it hard to judge how ShareChat responded to these more than 56,00,000 complaints.
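The scale of these discrepancies is easier to see once the reported figures are normalised by user base. The short sketch below is a back-of-the-envelope calculation using the complaint counts and user estimates cited above; it is illustrative only, and the underlying figures are the platforms’ own reported numbers and public user estimates, not independently verified.

```python
# Back-of-the-envelope normalisation of the monthly complaint figures
# cited above: (complaints per month, estimated users in India).
platforms = {
    "Twitter":   (795, 23_000_000),
    "Facebook":  (406, 329_000_000),
    "ShareChat": (5_681_213, 160_000_000),
}

for name, (complaints, users) in platforms.items():
    per_million = complaints / (users / 1_000_000)
    print(f"{name}: ~{per_million:,.1f} complaints per million users")

# Roughly 34.6 (Twitter), 1.2 (Facebook) and 35,507.6 (ShareChat) complaints
# per million users: a spread of more than four orders of magnitude that is
# hard to explain unless "complaint" means very different things under Rule 4(1)(d).
```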

Conclusion

Before concluding, it is relevant to note that no SSMI reporting discusses content that has been subjected to reduced visibility or algorithmic downranking. In the case of proactive moderation, Rule 4(1)(d) unfortunately limits itself to content that has been “removed”, although in the case of reactive moderation, reduced visibility would come within the ambit of ‘actions taken in response to complaints’ and should be reported on. Best practice would require platforms to disclose to users when, and which, content is subjected to reduced visibility. Rule 4(1)(d) did not form part of the draft intermediary guidelines that were subjected to public consultation in 2018, appearing for the first time in its current form in 2021. Broader consultation at the time of drafting may have eliminated such regulatory lacunae and resulted in a more robust framework for transparency reporting.

That said, getting meaningful transparency reporting is a hard task. Standardising reporting procedures is a detailed and fraught process that likely requires platforms and regulators to engage in a consultative process – see this document created by Daphne Keller listing potential problems in reporting procedures. Sample problem: “If ten users notify platforms about the same piece of content, and the platform takes it down after reviewing the first notice, is that ten successful notices, or one successful notice and nine rejected ones?” Given the scale of the regulatory and technical challenges, it is perhaps unsurprising that transparency reporting under the Intermediary Guidelines has gotten off to a rocky start. However, Rule 4(1)(d) itself offers an avenue for improvement. The Rule allows MeitY to specify any additional information that platforms should publish in their transparency reports. In the case of proactive monitoring, requiring platforms to specify exactly how automated tools are deployed, and when content takedowns based on these tools are reversed, would be a good place to start. MeitY must also engage with the functionality and internal procedures of SSMIs to ensure that reporting is harmonised to the extent possible. For example, a “complaint” reported by Facebook and by ShareChat should ideally have some equivalence. This requires, for a start, MeitY to consult with platforms, users, civil society, and academic experts when thinking about transparency.
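Keller’s sample problem can also be stated concretely. The toy sketch below is a purely hypothetical illustration of how the same ten notices about a single post yield either ten ‘successful’ notices or one successful and nine rejected ones, depending solely on the counting convention a platform adopts.

```python
# Ten users flag the same post; the platform removes it after the first notice.
notices = [{"content_id": "post_123", "actioned": True} for _ in range(10)]

# Convention A: every notice about content that was ultimately actioned counts as successful.
successful_a = sum(1 for n in notices if n["actioned"])                  # -> 10

# Convention B: one successful notice per actioned piece of content;
# the duplicate notices are counted as rejected (or redundant).
successful_b = len({n["content_id"] for n in notices if n["actioned"]})  # -> 1
rejected_b = len(notices) - successful_b                                 # -> 9

print(successful_a, (successful_b, rejected_b))  # 10 vs (1, 9) from identical underlying events
```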

Critiquing the Definition of Cyber Security under India’s Information Technology Act

Archit Lohani

“Security Measures” by Afsal CMK is licensed under CC BY 4.0

Introduction

As boundary-less cyberspace becomes increasingly pervasive, cyber threats continue to pose serious challenges to all nations’ economic security and digital development. For example, sophisticated attacks such as the WannaCry ransomware attack of 2017 rendered hundreds of thousands of computers unusable, with estimated damages of up to four billion dollars. As cyber security threats continue to proliferate and evolve at an unprecedented rate, incidents of doxing, distributed denial of service (DDoS), and phishing attacks are on the rise and are increasingly offered as services for hire. The task at hand is intensified by the sheer number of cyber incidents in India. A closer look suggests that the challenge is exacerbated by an outdated framework and a lack of basic safeguards.

This post examines one element of that framework, namely the definition of cyber security under the Information Technology Act, 2000 (IT Act).

Under Section 2(1)(nb) of the IT Act:

“cyber security” means protecting information, equipment, devices, computer, computer resource, communication device and information stored therein from unauthorised access, use, disclosure, disruption, modification or destruction;

This post contends that the Indian definition adopts a predominantly technical view of cyber security and thereby constrains effective measures to ensure cyber-resilience through cooperation between governmental authorities, industry, non-governmental organisations, and academia. The piece also juxtaposes the definition with key elements drawn from global standards, foreign legislation, and industry practice.

What is Cyber security under the IT Act?

The current definition of cyber security was adopted under the Information Technology (Amendment) Act, 2008, which was hurriedly passed in the aftermath of the 26/11 Mumbai terrorist attacks of 2008. The definition was codified to facilitate protective functions under Sections 69B and 70B of the IT Act. Section 69B enables the monitoring and collection of traffic data to enhance cyber security and to prevent intrusion and the spread of computer contaminants. Section 70B institutionalised the Indian Computer Emergency Response Team (CERT-In) to identify and forecast cyber incidents, issue alerts and guidelines, coordinate cyber incident response, and otherwise further the state’s cyber security imperatives. Institutions performing key functions to detect, deter, protect against, and adapt to cyber security threats have since evolved rapidly. However, this post argues that the current definition fails to incorporate elements necessary to contemporise cyber security policy and ensure its effective implementation.

Critique of the IT Act definition

It is clear that deterrence has failed, as the volume of incidents does not appear to abate, making cyber-resilience the more realistic objective for nations to strive for. The definition under the IT Act is an old articulation of protecting the referent objects of security (“information, equipment, devices, computer, computer resource, communication device and information”) against specific events that aim to cause harm to these objects through “unauthorised access, use, disclosure, disruption, modification or destruction”.

There are several issues with this dated articulation of cyber security. First, it suffers from the problem of restrictive listing of what is being protected (the aforementioned referent objects). Second, by limiting the referent objects and events within the definition, it becomes prescriptive. Third, the definition does not capture the multiple, interwoven dimensions and inherent complexity of cyber security, which includes interactions between humans and systems. Fourth, because only a limited set of events is listed, similar protection is not afforded against accidental events and natural hazards to cyberspace-enabled systems (including cyber-physical systems and industrial control systems). Fifth, the definition is missing key elements: (1) it does not cover the technological-solutions aspect of cyber security, reflected, for instance, in the International Telecommunication Union’s 2009 definition, which acknowledges “technologies that can be used to protect the cyber environment”; and (2) it fails to incorporate the strategies, processes, and methods to be undertaken. With these key elements missing, the definition falls behind contemporary standards, which are addressed in the following section.

To put things in perspective, global conceptualisations of cyber security are undergoing a major overhaul to accommodate the increased complexity, pace, scale, and interdependencies across cyberspace and information and communication technology (ICT) environments. In comparison, the definition under the IT Act has remained unchanged.

Admittedly, wider conceptualisations have been reflected in international and national engagements such as the National Cyber Security Policy (NCSP). For example, the NCSP’s mission statement recognises technological solutions, and the interactions between humans and ICTs in cyberspace, as a key rationale behind the cyber security policy.

However, differing conceptualisations across policy and legislative instruments can lead to confusion and introduce implementation challenges within cyber security regulation. For example, the 2013 CERT-In Rules rely on the IT Act’s definition of cyber security and define cyber security incidents and cyber security breaches, further entrenching the narrow, technically dominant discourse centred on the confidentiality, integrity, and availability (CIA) triad.

The following section examines a few other definitions to illustrate the shortcomings highlighted above.

Key elements of Cyber security

Despite a plethora of definitions, there is no universal agreement on how cyber security should be conceptualised. This has manifested in long-drawn deliberations at various international fora.

Cybersecurity aims to counter and tackle a constantly evolving threat landscape. Although it is difficult to build consensus on a single definition, a few key features can be agreed upon. For example, the definition must address the interdisciplinarity inherent to cyber security, its dynamic nature, and the complex, multi-level ecosystem in which cyber security exists. A multidisciplinary definition can give authorities and organisations visibility and insight into how new technologies affect their risk exposure, and can further ensure that such risks are suitably mitigated. To effectuate cyber-resilience, stakeholders have to navigate governance, policy, operational, technical, and legal challenges.

An inclusive definition can ensure a better collective response and bring multiple stakeholders to the table. By institutionalising a greater emphasis on resilience, it can foster cooperation between various stakeholders rather than a punitive approach that focuses on liability and criminality. It can also enable a bottom-up approach to countering cyber security threats and systemic incidents across sectors, and further CERT-In’s information-sharing objectives under Section 70B of the IT Act through collaboration between stakeholders.

When it comes to the regulation of technologies that embody socio-political values, and contrary to the popular belief that technical deliberations are objective and value-neutral, such discourse (in this case, the definition) suffers from the dominance of technical perspectives. For example, the National Institute of Standards and Technology (NIST) framework defines cybersecurity as “the ability to protect or defend the use of cyberspace from cyber-attacks”, and directs the reader to its definitions of cyberspace and cyber-attack to cover the term’s various elements. However, those definitions also adopt a predominantly technical lens.

Alternatively, definitions of cyber security would benefit from inclusive conceptions that factor in human engagement with systems and acknowledge the interrelated dimensions and inherent complexities of cyber security, which involve dynamic interactions between all interconnected stakeholders. An effective cybersecurity strategy entails a judicious mix of people, policies, and technology, as well as a robust public-private partnership.

Cybersecurity is a broad term and often has highly variable, subjective definitions. This hinders the formulation of appropriately responsive policy and legislative actions. As a benchmark, we borrow the definition of cybersecurity offered by Craigen et al.: “the organisation and collection of resources, processes, and structures used to protect cyberspace and cyberspace-enabled systems from occurrences that misalign de jure from de facto property rights.” The benefit of this articulation is that it necessitates a deeper understanding of the harms and consequences of cyber security threats and their impact. However, this definition cannot be adopted wholesale within the Indian legal framework because (a) property rights are not recognised as fundamental rights in India, and (b) it narrows the definition’s application to a harms-and-consequences standard.

Most importantly, the authors identify five common elements that together form a holistic and effective approach towards defining cybersecurity. These elements, drawn from a literature review of nine cybersecurity definitions, are:

  • technological solutions
  • events
  • strategies, processes, and methods
  • human engagement; and
  • referent objects.

These elements highlight the complexity of the process, which involves interactions between humans and systems to protect digital assets, and the humans themselves, from various known and unknown risks. Simply put, any unauthorised access, use, disclosure, disruption, modification or destruction results in, at the very least, a loss of functional control over the affected computer device or resource, to the detriment of the person and/or legal entity in whom its lawful ownership is vested. The definition codified under the IT Act only partly captures this complexity of ‘cyber security’ and its implications.

Conclusion

Economic interest is a core objective that necessitates cyber-resilience. Recognising the economic consequences of attacks, rather than merely protecting an enumerated list of resources such as computer systems, acknowledges the complexity of contemporary approaches to cybersecurity. Currently, the definition of cybersecurity is dominated by technical perspectives and disregards other disciplines that should ideally be acting in concert to address complex challenges. Cyber-resilience can be operationalised through a renewed definition; divergent approaches within India to tackling cybersecurity challenges will act as a strategic barrier to economic growth, data flows, investment and, most importantly, effective security, and will divert resources away from more effective strategies and capacity investments. Finally, the Indian approach should evolve from the country’s threat perception and the socio-technical character of the term, and should aim to bring cybersecurity stakeholders together.