This post is authored by Sachin Dhawan and Vignesh Shanmugam
The grievance appellate committee (‘GAC’) provision in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022 has garnered significant controversy. While it seeks to empower users to challenge the arbitrary moderation decisions of platforms, the provision itself has been criticised for being arbitrary. Lawyers, privacy advocates, technology companies, and other stakeholders have raised many concerns about the constitutional validity of the GAC, its lack of transparency and independence, and the excessive delegation of power it entails.
Governments, platforms, and other stakeholders must therefore focus on: (i) examining the systemic issues which remain unaddressed by content moderation systems; and (ii) ensuring that platforms implement adequate structural measures to effectively reduce the number of individual grievances as well as systemic issues.
The limitations of the current content moderation systems
Globally, a majority of platforms rely on an individual case-by-case approach for content moderation. Due to the limited scope of this method, platforms are unable to resolve, or even identify, several types of systemic issues. This, in turn, increases the number of content moderation cases.
To illustrate the problem, here are a few examples of systemic issues which are unaddressed by content moderation systems: (i) coordinated or periodic attacks (such as mass reporting of users/posts) which target a specific class of users (based on gender, sexuality, race, caste, religion, etc.); (ii) differing content moderation criteria in different geographical locations; and (iii) errors, biases or other issues with algorithms, programs or platform design which lead to increased flagging of users/posts for content moderation.
Considering the gravity of these systemic issues, platforms must adopt effective measures to improve the standards of content moderation and reduce the number of grievances.
Addressing the structural concerns in content moderation systems
Several legal scholars have recommended the adoption of a ‘systems thinking’ approach to address the various systemic concerns in content moderation. This approach requires platforms to implement corporate structural changes, administrative practices, and procedural accountability measures for effective content moderation and grievance redressal.
Accordingly, revising the existing content moderation frameworks in India to include the following key ‘systems thinking’ principles would ensure fairness, transparency and accountability in content moderation.
Establishing independent content moderation systems. Although platforms have designated content moderation divisions, these divisions are, in many cases, influenced by the platforms’ corporate or financial interests, advertisers’ interests, or political interests, which directly impacts the quality and validity of their content moderation practices. Hence, platforms must implement organisational restructuring measures to ensure that content moderation and grievance redressal processes are (i) solely undertaken by a separate and independent ‘rule-enforcement’ division; and (ii) not overruled or influenced by any other divisions in the corporate structure of the platforms. Additionally, platforms must designate a specific individual as the authorised officer in-charge of the rule-enforcement division. This ensures transparency and accountability from a corporate governance viewpoint.
Robust transparency measures. Across jurisdictions, there is a growing trend of governments issuing formal or informal orders to platforms, including orders to suspend or ban specific accounts, take down specific posts, etc. In addition to ensuring transparency of the internal functioning of platforms’ content moderation systems, platforms must also provide clarity on the number of measures undertaken (and other relevant details) in compliance with such governmental orders. Ensuring that platforms’ transparency reports separately disclose the frequency and total number of such measures will provide a greater level of transparency to users, and the public at large.
Aggregation and assessment of claims. As stated earlier, individual cases provide limited insight into the overall systemic issues present on the platform. Platforms can gain a greater level of insight through (i) periodic aggregation of claims received by them; and (ii) assessment of these aggregated claims for any patterns of harm or bias (for example: assessing for the presence of algorithmic/human bias against certain demographics). Doing so will illuminate algorithmic issues, design issues, unaccounted bias, or other systemic issues which would otherwise remain unidentified and unaddressed.
Annual reporting of systemic issues. In order to ensure internal enforcement of systemic reform, the rule-enforcement divisions must provide annual reports to the board of directors (or the appropriate executive authority of the platform), containing systemic issues observed, recommendations for certain systemic issues, and protective measures to be undertaken by the platforms (if any). To aid in identifying further systemic issues, the division must conduct comprehensive risk assessments on a periodic basis, and record its findings in the next annual report.
Implementation of accountability measures. As is established corporate practice for financial, accounting, and other divisions of companies, periodic quality assurance (‘QA’) and independent auditing of the rule-enforcement division will further ensure accountability and transparency.
Conclusion
Current discussions regarding content moderation regulations are primarily centred around the GAC, and the various procedural safeguards which can rectify its flaws. However, even if the GAC becomes an effectively functioning independent appellate forum, the systemic problems plaguing content moderation will remain unresolved. It is for this reason that platforms must actively adopt the structural measures suggested above. Doing so will (i) increase the quality of content moderation and internal grievance decisions; (ii) reduce the burden on appellate forums; and (iii) decrease the likelihood of governments imposing stringent content moderation regulations that undermine the free speech rights of users.
The Ministry of Electronics and Information Technology (“MeitY”) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) on January 17, 2023. The draft amendments aim to regulate online gaming, but also seek to have intermediaries “make reasonable efforts” to cause their users not to upload or share content identified as “fake” or “false” by the Press Information Bureau (“PIB”), any Union Government department or authorised agency (see the proposed amendment to Rule 3(1)(b)(v)). The draft amendments in their current form raise certain concerns that we believe merit additional scrutiny.
CCG submitted comments on the proposed amendment to Rule 3(1)(b)(v), highlighting its key feedback and concerns. The comments were authored by Archit Lohani and Vasudev Devadasan and reviewed by Sachin Dhawan and Jhalak M. Kakkar. Some of the key issues raised in our comments are summarised below.
Misinformation, “fake”, and “false” content include both unlawful and lawful expression
The proposed amendment does not define the term “misinformation” or provide any guidance on how determinations that content is “fake” or “false” are arrived at. Misinformation can include various forms of content, and experts have identified up to seven subtypes of misinformation such as: imposter content; fabricated content; false connection; false context; manipulated content; misleading content; and satire or parody. Different subtypes of misinformation can cause different types of harm (or no harm at all) and are treated differently under the law. Misinformation or false information thus includes both lawful and unlawful speech (e.g., satire is constitutionally protected speech).
Within the broad ambit of misinformation, the draft amendment does not provide sufficient guidance to the PIB and government departments on what sort of expression is permissible and what should be restricted. The draft amendment effectively provides them with unfettered discretion to restrict both unlawful and lawful speech. When seeking to regulate misinformation, experts, platforms, and other countries have drawn up detailed definitions that take into consideration factors such as intention, form of sharing, virality, context, impact, public interest value, and public participation value. These definitions recognize the potential multiplicity of context, content, and propagation techniques. In the absence of clarity over what types of content may be restricted based on a clear definition of misinformation, the draft amendment will restrict both unlawful speech and constitutionally protected speech. It will thus constitute an overbroad restriction on free speech.
Restricting information solely on the ground that it is “false” is constitutionally impermissible
Article 19(2) of the Indian Constitution allows the government to place reasonable restrictions on free speech in the interest of the sovereignty, integrity, or security of India, its friendly relations with foreign States, public order, decency or morality, or contempt of court. The Supreme Court has ruled that these grounds are exhaustive and speech cannot be restricted for reasons beyond Article 19(2), including where the government seeks to block content online. Crucially, Article 19(2) does not permit the State to restrict speech on the ground that it is false. If the government were to restrict “false information that may imminently cause violence”, such a restriction would be permissible as it would relate to the ground of “public order” in Article 19(2). However, if enacted, the draft amendment would restrict online speech solely on the ground that it is declared “false” or “fake” by the Union Government. This amounts to a State restriction on speech for reasons beyond those outlined in Article 19(2), and would thus be unconstitutional. Restrictions on free speech must have a direct connection to the grounds outlined in Article 19(2) and must be a necessary and proportionate restriction on citizens’ rights.
The amendment does not adhere to the procedures set out in Section 69A of the IT Act
The Supreme Court upheld Section 69A of the IT Act in Shreya Singhal v Union of India inter alia because it permitted the government to block online content only on grounds consistent with Article 19(2) and provided important procedural safeguards, including a notice, hearing, and written order of blocking that can be challenged in court. It is therefore evident that the constitutionality of the government’s power to block online content is contingent on the substantive and procedural safeguards provided by Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. The proposed amendment to the Intermediary Guidelines would permit the Union Government to restrict online speech in a manner that does not adhere to these safeguards. It would permit the blocking of content on grounds beyond those specified in Article 19(2), based on a unilateral determination by the Union Government, without a specific procedure for notice, hearing, or a written order.
Alternate methods to counter the spread of misinformation
Any response to misinformation on social media platforms should be based on empirical evidence on the prevalence and harms of misinformation on social media. Thus, as a first step, social media companies should be required to provide greater transparency and facilitate researcher access to data. There are alternative methods to regulate the spread of misinformation that may be more effective and preserve free expression, such as labelling or flagging misinformation. We note that there does not yet exist widespread legal and industry consensus on standards for independent fact-checking, but organisations such as the ‘International Fact-Checking Network’ (IFCN) have laid down certain principles that independent fact-checking organisations should comply with. Having platforms label content pursuant to IFCN fact checks, and even notify users when the content they have interacted with has subsequently been flagged by an IFCN fact checker would provide users with valuable informational context without requiring content removal.
The question of when intermediaries are liable, or conversely not liable, for content they host or transmit is often at the heart of regulating content on the internet. This is especially true in India, where the Government has relied almost exclusively on intermediary liability to regulate online content. With the advent of the Intermediary Guidelines 2021, and their subsequent amendment in October 2022, there has been a paradigm shift in the regulation of online intermediaries in India.
To help understand this new regulatory reality, the Centre for Communication Governance (CCG) is releasing its ‘Report on Intermediary Liability in India’ (December 2022).
This report aims to provide a comprehensive overview of the regulation of online intermediaries and their obligations with respect to unlawful content. It updates and expands on the Centre for Communication Governance’s 2015 report documenting the liability of online intermediaries to now cover the decisions in Shreya Singhal vs. Union of India and Myspace vs. Super Cassettes Industries Ltd, the Intermediary Guidelines 2021 (including the October 2022 Amendment), the E-Commerce Rules, and the IT Blocking Rules. It captures over two decades of regulatory and judicial practice on the issue of intermediary liability since the adoption of the IT Act. The report aims to provide practitioners, lawmakers and regulators, judges, and academics with valuable insights as they embark on shaping the coming decades of intermediary liability in India.
Some key insights that emerge from the report are summarised below:
Limitations of Section 79 (‘Safe Harbour’) Approach: In the cases analysed in this report, there is little judicial consistency in the application of secondary liability principles to intermediaries, including the obligations set out in the Intermediary Guidelines 2021, and monetary damages for transmitting or hosting unlawful content are almost never imposed on intermediaries. This suggests that there are significant limitations to the regulatory impact of obligations imposed on intermediaries as pre-conditions to safe harbour.
Need for clarity on content moderation and curation: The text of Section 79(2) of the IT Act grants intermediaries safe harbour provided they act as mere conduits, not interfering with the transmission of content. There exists ambiguity over whether content moderation and curation activities would cause intermediaries to violate Section 79(2) and lose safe harbour. The Intermediary Guidelines 2021 have partially remedied this ambiguity by expressly stating that voluntary content moderation will not result in an intermediary ‘interfering’ with the transmission under Section 79(2). However, ultimately amendments to the IT Act are required to provide regulatory certainty.
Intermediary status and immunity on a case-by-case basis: An entity’s classification as an intermediary is not a status that applies across all its operations (like a ‘company’ or a ‘partnership’), but rather depends on the function it performs vis-à-vis the specific electronic content it is sued in connection with. Courts should determine whether an entity is an ‘intermediary’ and whether it complied with the conditions of Section 79 in relation to the content it is being sued for. Consistently making this determination at a preliminary stage of litigation would greatly further the efficacy of Section 79’s safe harbour approach.
Concerns over GACs: While the October 2022 Amendment stipulates that two members of every GAC shall be independent, no detail is provided as to how such independence shall be secured (e.g., security of tenure and salary, oath of office, minimum judicial qualifications etc.). Such independence is vital as GAC members are appointed by the Union Government but the Union Government or its functionaries or instrumentalities may also be parties before a GAC. Further, given that the GACs are authorities ‘under the control of the Government of India’, they have an obligation to abide by the principles of natural justice, due process, and comply with the Fundamental Rights set out in the Constitution. If a GAC directs the removal of content beyond the scope of Article 19(2) of the Constitution, questions of an impermissible restriction on free expression may be raised.
Actual knowledge in 2022: The October 2022 Amendment requires intermediaries to make reasonable efforts to “cause” their users not to upload certain categories of content and ‘act on’ user complaints against content within seventy-two hours. Requiring intermediaries to remove content at the risk of losing safe harbour in circumstances other than the receipt of a court or government order prima facie violates the decision of Shreya Singhal. Further, India’s approach to notice and takedown continues to lack a system for reinstatement of content.
Uncertainty over government blocking power: Section 69A of the IT Act expressly grants the Union Government power to block content, subject to a hearing by the originator (uploader) or intermediary. However, Section 79(3)(b) of the IT Act may also be utilised to require intermediaries to take down content absent some of the safeguards provided in Section 69A. The fact that the Government has relied on both provisions in the past and that it does not voluntarily disclose blocking orders makes a robust legal analysis of the blocking power challenging.
Hearing originators when blocking: The decision in Shreya Singhal and the requirements of due process support the understanding that the originator must be notified and granted a hearing under the IT Blocking Rules prior to their content being restricted under Section 69A. However, evidence suggests that the government regularly does not provide originators with hearings, even where the originator is known to the government. Instead, the government directly communicates with intermediaries away from the public eye, raising rule of law concerns.
Issues with first originators: Both the methods proposed for ‘tracing first originators’ (hashing unique messages and affixing encrypted originator information) are easily circumvented, require significant technical changes to the architecture of messaging services, offer limited investigatory or evidentiary value, and will likely undermine the privacy and security of all users to catch a few bad actors. Given these considerations, it is unlikely that such a measure would satisfy the proportionality test laid out by current Supreme Court doctrine.
Broad and inconsistent injunctions: An analysis of injunctions against online content reveals that the contents of court orders are often sweeping, imposing vague compliance burdens on intermediaries. When issuing injunctions against online content, courts should limit blocking or removals to specific URLs. Further courts should be cognisant of the fact that intermediaries have themselves not committed any wrongdoing, and the effect of an injunction should be seen as meaningfully dissuading users from accessing content rather than an absolute prohibition.
This report was made possible by the generous support we received from National Law University Delhi. CCG would like to thank our Faculty Advisor Dr. Daniel Mathew for his continuous direction and mentorship. This report would not have been possible without the support provided by the Friedrich Naumann Foundation for Freedom, South Asia. We are grateful for comments received from the Data Governance Network and its reviewers. CCG would also like to thank Faiza Rahman and Shashank Mohan for their review and comments, and Jhalak M. Kakkar and Smitha Krishna Prasad for facilitating the report. We thank Oshika Nayak of National Law University Delhi for providing invaluable research assistance for this report. Lastly, we would also like to thank all members of CCG for the many ways in which they supported the report, in particular, the ever-present and ever-patient Suman Negi and Preeti Bhandari for their unending support for all the work we do.
On 6 June 2022, the Ministry of Electronics and Information Technology (“MeitY”) released the proposed amendments to Part I and Part II of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”). CCG submitted its comments on the proposed amendments to the 2021 IT Rules, highlighting its key feedback and concerns. The comments were authored by Vasudev Devadasan and Bilal Mohamed and reviewed and edited by Jhalak M Kakkar and Shashank Mohan.
The 2021 IT Rules were released in February last year, and Part I and II of the Guidelines set out the conditions intermediaries must satisfy to avail of legal immunity for hosting unlawful content (or ‘safe harbour’) under Section 79 of the Information Technology Act, 2000 (“IT Act”). The 2021 IT Rules have been challenged in several High Courts across the country, and the Supreme Court is currently hearing a transfer petition on whether these actions should be clubbed and heard collectively by the apex court. In the meantime, the MeitY has released the proposed amendments to the 2021 IT Rules which seek to make incremental but significant changes to the Rules.
CCG’s comments to the MeitY can be summarised as follows:
Dilution of safe harbour in contravention of Section 79(1) of the IT Act
The core intention behind providing intermediaries with safe harbour under Section 79(1) of the IT Act is to ensure that intermediaries do not restrict the free flow of information online due to the risk of being held liable for the third-party content uploaded by users. The proposed amendments to Rules 3(1)(a) and 3(1)(b) of the 2021 IT Rules potentially impose an obligation on intermediaries to “cause” and “ensure” that their users do not upload unlawful content. These amendments may require intermediaries to make complex determinations on the legality of speech and cause online intermediaries to remove content that carries even the slightest risk of liability. This may result in the restriction of online speech and the corporate surveillance of Indian internet users by intermediaries. If, however, the proposed amendments are interpreted as not requiring intermediaries to actively prevent users from uploading unlawful content, they may be functionally redundant, and we suggest they be dropped to avoid legal uncertainty.
Concerns with Grievance Appellate Committee
The proposed amendments envisage one or more Grievance Appellate Committees (“GAC”) that sit in appeal of intermediary determinations with respect to content. Users may appeal to a GAC against the decision of an intermediary to not remove content despite a user complaint, or alternatively, request a GAC to reinstate content that an intermediary has voluntarily removed or lift account restrictions that an intermediary has imposed. The creation of GAC(s) may exceed the Government’s rule-making powers under the IT Act. Further, the GAC(s) lack the necessary safeguards in their composition and operation to ensure the independence required by law of such an adjudicatory body. Such independence and impartiality may be essential as the Union Government is responsible for appointing individuals to the GAC(s) but the Union Government or its functionaries or instrumentalities may also be a party before the GAC(s). Further, we note that the originator, the legality of whose content is at dispute before a GAC, has not expressly been granted a right to hearing before the GAC. Finally, we note that the GAC(s) may lack the capacity to deal with the high volume of appeals against content and account restrictions. This may lead to situations where, in practice, only a small number of internet users are afforded redress by the GAC(s), leading to inequitable outcomes and discrimination amongst users.
Concerns with grievance redressal timeline
Under the proposed amendment to Rule 3(2), intermediaries must acknowledge a complaint by an internet user for the removal of content within 24 hours, and ‘act and redress’ this complaint within 72 hours. CCG’s comments note that the 72-hour timeline to address complaints proposed by the amendment to Rule 3(2) may cause online intermediaries to over-comply with content removal requests, leading to the possible take-down of legally protected speech in response to frivolous user complaints. Empirical studies conducted on Indian intermediaries have demonstrated that smaller intermediaries lack the capacity and resources to make complex legal determinations of whether the content complained against violates the standards set out in Rule 3(1)(b)(i)-(x), while larger intermediaries are unable to address the high volume of complaints within short timelines – leading to the mechanical takedown of content. We suggest that any requirement that online intermediaries address user complaints within short timelines could differentiate between types of content that are ex-facie (on the face of it) illegal and cause severe harm (e.g., child sexual abuse material or gratuitous violence), and other types of content where determinations of legality may require legal or judicial expertise, such as copyright or defamation.
Need for specificity in defining due diligence obligations
Rule 3(1)(m) of the proposed amendments requires intermediaries to ensure a “reasonable expectation of due diligence, privacy and transparency” to avail of safe harbour, while Rule 3(1)(n) requires intermediaries to “respect the rights accorded to the citizens under the Constitution of India.” These rules do not impose clearly ascertainable legal obligations, which may increase compliance burdens, hamper enforcement, and result in inconsistent outcomes. In the absence of specific data protection legislation, the obligation to ensure a “reasonable expectation of due diligence, privacy and transparency” is unclear. Fundamental rights obligations were drafted and developed in the context of citizen-State relations and may not be suitable for, or aptly transposed to, the relations between intermediaries and users. Further, the content of ‘respecting Fundamental Rights’ under the Constitution is itself contested and open to reasonable disagreement between various State and constitutional functionaries. Requiring intermediaries to uphold such obligations will likely lead to inconsistent outcomes based on varied interpretations.
Part I of this two part-series examined the contours of Rule 16 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”), and the Bombay High Court’s rationale for refusing to stay the rule in the Leaflet case. This second part examines the legality and constitutionality of Rule 16. It argues that the rule’s constitutionality may be contested because it deprives impacted content publishers of a hearing when their content is restricted. It also argues that the MIB should provide information on blocking orders under Rule 16 to allow them to be challenged, both by users whose access to information is curtailed, and by publishers whose right to free expression is restricted.
Rule 16’s legality
At its core, Rule 16 is a legal provision granting discretionary authority to the government to take down content. Consistently, the Supreme Court (“SC”) has maintained that to be compliant with Article 14, discretionary authority must be backed by adequate safeguards.[1] Admittedly, Rule 16 is not entirely devoid of safeguards since it envisages an assessment of the credibility of content blocking recommendations at multiple levels (refer Part I for context). But this framework overlooks a core principle of natural justice – audi alteram partem (hear the other side) – by depriving the impacted publishers of a hearing.
In Tulsiram Patel, the SC recognised principles of natural justice as part of the guarantee under Article 14 and ruled that any law or state action abrogating these principles is susceptible to a constitutionality challenge. But the SC also found that natural justice principles are not absolute and can be curtailed under exceptional circumstances. Particularly, audi alteram partem, can be excluded in situations where the “promptitude or the urgency of taking action so demands”.
Arguably, the suspension of pre-decisional hearings under Rule 16 is justifiable considering the rule’s very purpose is to empower the Government to act with alacrity against content capable of causing immediate real-world harm. However, this rationale does not preclude the provision of a post-decisional hearing under the framework of the 2021 IT Rules. This is because, as posited by the SC in Maneka Gandhi (analysed here and here), the “audi alteram partem rule is sufficiently flexible” to address “the exigencies of myriad kinds of situations…”. Thus, a post-decisional hearing for impacted stakeholders, after the immediacy necessitating the issuance of interim blocking directions has subsided, could have been reasonably accommodated within Rule 16. Crucially, this would create a forum for the State to justify the necessity and proportionality of its speech restriction to the individuals impacted (strengthening legitimacy) and the public at large (strengthening the rule of law and public reasoning). Finally, in the case of ex-facie illegal content, originators are unlikely to avail of post-facto hearings, mitigating concerns of a burdensome procedure.
Rule 16’s exercise by MIB
Opacity
MIB has exercised its power under Rule 16 of the 2021 IT Rules on five occasions. Collectively, it has ordered the blocking of approximately 93 YouTube channels, 6 websites, 4 Twitter accounts, and 2 Facebook accounts. Each time, MIB has announced content blocking only through press releases after the orders were passed but has not disclosed the actual blocking orders.
MIB’s reluctance to publish its blocking orders renders the manner in which it exercises its power under Rule 16 opaque. Although press statements inform the public that content has been blocked, it is the blocking orders (under Rule 16(2) and Rule 16(4)) that are required to record the reasons for which the content has been blocked, and these remain undisclosed. As discussed above, this limits the right to free expression of the originators of the content and denies them the ability to be heard.
Additionally, content recipients, whose right to view content and access information is curtailed through such orders, are not being made aware of the existence of these orders by the Ministry directly. Pertinently, the 2021 IT Rules appear to recognise the importance of informing users about the reasons for blocking digital content. This is evidenced by Rule 4(4), which requires ‘significant social media intermediaries’ to display a notice to users attempting to access proactively disabled content. However, in the absence of similar transparency obligations upon MIB under the 2021 IT Rules, content recipients aggrieved by the Ministry’s blocking orders may be compelled to rely on the cumbersome mechanism under the Right to Information Act, 2005 to seek the disclosure of these orders to challenge them.
Although the 2021 IT Rules do not specifically mandate the publication of blocking orders by MIB, this obligation can be derived from the Anuradha Bhasin verdict. Here, in the context of the Telecom Suspension Rules, the SC held that any order affecting the “lives, liberty and property of people” must be published by the government, “regardless of whether the parent statute or rule prescribes the same”. The SC also held that the State should ensure the availability of governmental orders curtailing fundamental rights unless it claims specific privilege or public interest for refusing disclosure. Even then, courts will finally decide whether the State’s claims override the aggrieved litigants’ interests.
Considering the SC’s clear reasoning, MIB ought to make its blocking orders readily available in the interest of transparency, especially since a confidentiality provision restricting disclosure, akin to Rule 16 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“2009 Blocking Rules”), is absent in the 2021 IT Rules.
Overuse
Another concerning trend is MIB’s invocation of its emergency content-blocking power as the norm rather than the exception it was meant to be. For context, the 2021 IT Rules provide a non-emergency blocking process under Rules 14 and 15, whereunder impacted publishers are provided a pre-decisional hearing before an Inter-Departmental Committee required to be constituted under Rule 13(1)(b). However, thus far, MIB has exclusively relied on its emergency power to block ostensibly problematic digital content, including fake news.
While the Bombay High Court in the Leaflet case declined to expressly stay Rule 14 (noting that the Inter-Departmental Committee was yet to be set up) (¶19), the High Court’s stay on Rule 9(3) creates a measure of ambiguity as to whether Rules 14 and 15 are currently in effect. This is because Rule 9(3) states that there shall be a government oversight mechanism to “ensure adherence to the Code of Ethics”. A key part of this mechanism is the Inter-Departmental Committee, whose role is to decide “violation[s] or contravention[s] of the Code of Ethics” (Rule 14(2)). The High Court even notes that it is “incomprehensible” how content may be taken down under Rule 14(5) for violating the Code of Ethics (¶27). Thus, despite the Bombay High Court’s refusal to stay Rule 14, it is arguable that the High Court’s stay on the operation of Rule 9(3), which prevents the ‘Code of Ethics’ from being applied against online news and curated content publishers, may logically extend to Rules 14(2) and 15. However, even if the Union were to proceed on a plain reading of the Leaflet order and infer that the Bombay High Court did not stay Rules 14 and 15, it is unclear whether the MIB has constituted the Inter-Departmental Committee to facilitate non-emergency blocking.
MeitY has also liberally invoked its emergency blocking power under Rule 9 of the 2009 Blocking Rules to disable access to content. Illustratively, in early 2021 Twitter received multiple blocking orders from MeitY, at least two of which were emergency orders, directing it to disable over 250 URLs and a thousand accounts for circulating content relating to farmers’ agitation against contentious farm laws. Commentators have also pointed out that there are almost no recorded instances of MeitY providing pre-decisional hearings to publishers under the 2009 Blocking Rules, indicating that in practice this crucial safeguard has been rendered illusory.
Conclusion
Evidently, there is a need for the MIB to be more transparent when invoking its emergency content-blocking powers. A significant step forward in this direction would be ensuring that at least final blocking orders, which ratify emergency blocking directions, are made readily available, or at least provided to publishers/originators. Similarly, notices to any users trying to access blocked content would also enhance transparency. Crucially, these measures would reduce information asymmetry regarding the existence of blocking orders and allow a larger section of stakeholders, including the oft-neglected content recipients, the opportunity to challenge such orders before constitutional courts.
Additionally, the absence of hearings for impacted stakeholders at any stage of the emergency blocking process under Rule 16 of the 2021 IT Rules limits their right to be heard and to defend the legality of ‘at-issue’ content. Whilst the justification of urgency may be sufficient to deny a pre-decisional hearing, the procedural safeguard of a post-decisional hearing should be incorporated by MIB.
The aforesaid legal infirmities plague Rule 9 of the 2009 Blocking Rules as well, given its similarity with Rule 16 of the 2021 IT Rules. The Tanul Thakur case presents an ideal opportunity for the Delhi High Court to examine and address the limitations of these rules. Civil society organisations have for years advocated (here and here) for incorporation of a post-decisional hearing within the emergency blocking framework under the 2009 Blocking Rules too. Its adoption and diligent implementation could go a long way in upholding natural justice and mitigating the risk of arbitrary content blocking.
[1] State of Punjab v. Khan Chand, (1974) 1 SCC 549; Virendra v. The State of Punjab & Ors., AIR 1957 SC 896; State of West Bengal v. Anwar Ali, AIR 1952 SC 75.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”) were challenged before several High Courts (refer here and here) almost immediately after their promulgation. In one such challenge, initiated by the publishers of the online news portal ‘The Leaflet’, the Bombay High Court, by an order dated August 14, 2021, imposed an interim stay on the operation of Rules 9(1) and (3) of the 2021 IT Rules. Chiefly, this was done because these provisions subject online news and curated content publishers to a vaguely worded ‘code of ethics’, adherence to which would have had a ‘chilling effect’ on their freedom of speech. However, the Bombay High Court refused to stay Rule 16 of these rules, which empowers the Ministry of Information and Broadcasting (“MIB”) to direct blocking of digital content during an “emergency” where “no delay is acceptable”.
Part I of this two-part series examines the contours of Rule 16 and argues that the Bombay High Court overlooked the procedural inadequacy of this rule when refusing to stay the provision in the Leaflet case. Part II assesses the legality and constitutionality of the rule.
Overview of Rule 16
Part III of the 2021 IT Rules authorises the MIB to direct the blocking of digital content in case of an ‘emergency’ where ‘no delay is acceptable’.
The MIB has correctly noted that Rule 16 is modelled after Rule 9 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“2009 Blocking Rules”) (analysed here), and confers upon the MIB similar emergency blocking powers which the Ministry of Electronics and Information Technology (“MeitY”) has possessed since 2009. Both provisions confer discretion upon authorised officers to determine what constitutes an emergency but fail to provide a hearing to impacted publishers or intermediaries at any stage.
Judicial findings on Rule 16
The Bombay High Court’s order in the Leaflet case is significant since it is the first time a constitutional court has recorded its preliminary findings on the rule’s legitimacy. Here, the Bombay High Court refused to stay Rule 16 primarily for two reasons. First, the High Court held that Rule 16 of the 2021 IT Rules is substantially similar to Rule 9 of the 2009 Blocking Rules, which is still in force. Second, the grounds upon which Rule 16 permits content blocking are coextensive with the grounds on which speech may be ‘reasonably restricted’ under Article 19(2) of the Indian Constitution. Respectfully, the plausibility of this reasoning is contestable:
Equivalence with the 2009 Blocking Rules: Section 69A of the IT Act and the 2009 Blocking Rules were previously challenged in Shreya Singhal, where both were upheld by the Supreme Court (“SC”). Establishing an equivalence between Rule 16 of the 2021 IT Rules and Rule 9 of the 2009 Blocking Rules to understand the constitutionality of the former would have been useful only if Shreya Singhal contained a meaningful analysis of Rule 9. However, the SC did not examine this rule but rather broadly upheld the constitutionality of the 2009 Blocking Rules as a whole due to the presence of certain safeguards, including: (a) the non-emergency process for content blocking under the 2009 Blocking Rules includes a pre-decisional hearing to identified intermediaries/originators before content is blocked; and (b) the 2009 Blocking Rules mandate the recording of reasons in blocking orders so that they may be challenged under Article 226 of the Constitution.
However, the SC did not consider that the emergency blocking framework under Rule 9 of the 2009 Blocking Rules not only allows MeitY to bypass the essential safeguard of a pre-decisional hearing to impacted stakeholders but also fails to provide them with either a written order or a post-decisional hearing. It also did not address that Rule 16 of the 2009 Blocking Rules, which mandates confidentiality of blocking requests and subsequent actions, empowers MeitY to refuse disclosure of blocking orders to impacted stakeholders thus depriving them of the opportunity to challenge such orders.
Thus, the Bombay High Court’s attempt in the Leaflet case to claim equivalence with Rule 9 of the 2009 Blocking Rules as a basis to defend the constitutionality of Rule 16 of the 2021 IT Rules was inapposite since Rule 9 itself was not substantively reviewed in Shreya Singhal, and its operation has since been challenged on constitutional grounds.
Procedural safeguards: Merely because Rule 16 of the 2021 IT Rules permits content blocking only under the circumstances enumerated under Article 19(2) does not automatically render it procedurally reasonable. In People’s Union of Civil Liberties (“PUCL”), the SC examined the procedural propriety of Section 5(2) of the Telegraph Act, 1885, which permits phone-tapping. Even though this provision restricts fundamental rights only on constitutionally permissible grounds, the SC found that substantive law had to be backed by adequate procedural safeguards to rule out arbitrariness. Although the SC declined to strike down Section 5(2) in PUCL, it framed interim guidelines to govern the provision’s exercise to compensate for the lack of adequate safeguards.
Since Rule 16 restricts the freedom of speech, its proportionality should be tested as part of any meaningful constitutionality analysis. To be proportionate, restrictions on fundamental rights must satisfy four prongs[1]: (a) legality – the requirement of a law having a legitimate aim; (b) suitability – a rational nexus between the means adopted to restrict rights and the end of achieving this aim; (c) necessity – proposed restrictions must be the ‘least restrictive measures’ for achieving the aim; and (d) balancing – balance between the extent to which rights are restricted and the need to achieve the aim. Justice Kaul’s opinion in Puttaswamy (9JB) also highlights the need for procedural safeguards against the abuse of measures interfering with fundamental rights (para 70, Kaul J).
Arguably, by demonstrating the connection between Rule 16 and Article 19(2), the Bombay High Court has proven that Rule 16 potentially satisfies the ‘legality’ prong. However, even at an interim stage, before finally ascertaining Rule 16’s constitutionality by testing it against the other proportionality parameters identified above, the Bombay High Court should have considered whether the absence of procedural safeguards under this rule merited staying its operation.
For these reasons, the Bombay High Court could have ruled differently in deciding whether to stay the operation of Rule 16 in the Leaflet case. While these are important considerations at the interim stage, ultimately the larger question of constitutionality must be addressed. The second post in this series will critically examine the legality and constitutionality of Rule 16.
[1] Modern Dental College and Research Centre and Ors. v. State of Madhya Pradesh and Ors., (2016) 7 SCC 353; Justice K.S. Puttaswamy & Ors. v. Union of India (UOI) & Ors., (2019) 1 SCC 1; Anuradha Bhasin and Ors. v. Union of India (UOI) & Ors., (2020) 3 SCC 637.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) represents India’s first attempt at regulating large social media platforms, with the Guidelines creating distinct obligations for ‘Significant Social Media Intermediaries’ (“SSMIs”). While certain provisions of the Guidelines concerning SSMIs (like the traceability requirement) are currently under legal challenge, the Guidelines also introduced a less controversial requirement that SSMIs publish monthly transparency reports regarding their content moderation activities. While this reporting requirement is arguably a step in the right direction, scrutinising the actual documents published by SSMIs reveals a patchwork of inconsistent and incomplete information – suggesting that Indian regulators need to adopt a more comprehensive approach to platform transparency.
This post briefly sets out the reporting requirement under the Intermediary Guidelines before analysing the transparency reports released by SSMIs. It highlights how a focus on figures, coupled with the wide discretion granted to platforms to frame their reports, undermines the goal of meaningful transparency. The figures referred to when analysing SSMI reports pertain to the February–March 2022 reporting period, but the distinct methodologies each SSMI uses to arrive at these figures (more relevant for the present discussion) have remained broadly unchanged since reporting began in mid-2021. The post concludes by making suggestions on how the Ministry of Electronics and Information Technology (“MeitY”) can strengthen the reporting requirements under the Intermediary Guidelines.
Transparency reporting under the Intermediary Guidelines
Social media companies structure speech on their platforms through their content moderation policies and practices, which determine when content stays online and when content is taken down. Even if content is not illegal or taken down pursuant to a court or government order, platforms may still take it down for violating their terms of service (or Community Guidelines) (let us call this content ‘violative content’ for now i.e., content that violates terms of service). However, ineffective content moderation can result in violative and even harmful content remaining online or non-violative content mistakenly being taken down. Given the centrality of content moderation to online speech, the Intermediary Guidelines seek to bring some transparency to the content moderation practices of SSMIs by requiring them to publish monthly reports on their content moderation activities. Transparency reporting helps users and the government understand the decisions made by platforms with respect to online speech. Given the opacity with which social media platforms often operate, transparency reporting requirements can be an essential tool to hold platforms accountable for ineffective or discriminatory content moderation practices.
Rule 4(1)(d) of the Intermediary Guidelines requires SSMIs to publish monthly transparency reports specifying: (i) the details of complaints received and actions taken in response; (ii) the number of “parts of information” proactively taken down using automated tools; and (iii) any other relevant information specified by the government. The Rule therefore covers both ‘reactive moderation’, where a platform responds to a user’s complaints against content, and ‘proactive moderation’, where the platform itself seeks out unwanted content even before a user reports it.
Transparency around reactive moderation helps us understand trends in user reporting and how responsive an SSMI is to user complaints, while disclosures on proactive moderation shed light on the scale and accuracy of an SSMI’s independent moderation activities. A key goal of both reporting datasets is to understand whether the platform is taking down as much harmful content as possible without accidentally also taking down non-violative content. Unfortunately, Rule 4(1)(d) merely requires SSMIs to report the number of links taken down during their content moderation (this is reiterated by the MeitY’s FAQs on the Intermediary Guidelines). The problems with an overly simplistic approach come to the fore upon an examination of the actual reports published by SSMIs.
Contents of SSMI reports – proactive moderation
Based on their latest monthly transparency reports, Twitter proactively suspended 39,588 accounts while Google used automated tools to remove 338,938 pieces of content. However, these figures only document the scale of proactive monitoring and do not provide any insight into the accuracy of the platforms’ moderation – that is, how accurately the moderation distinguishes between violative and non-violative content. The reporting also does not specify whether this content was taken down using solely automated tools, or some mix of automated tools and human review or oversight. Meta (reporting for Facebook and Instagram) reports the volume of content proactively taken down, but also provides a “Proactivity Rate”. The Proactivity Rate is defined as the percentage of content flagged proactively (before a user reported it) as a subset of all flagged content. Proactivity Rate = [proactively flagged content ÷ (proactively flagged content + user reported content)]. However, this metric is also of little use in understanding the accuracy of Meta’s automated tools. Take the following example:
Assume a platform has 100 pieces of content, of which 50 pieces violate the platform’s terms of service and 50 do not. The platform relies on both proactive monitoring through automated tools and user reporting to identify violative content. Now, if the automated tools detect 49 pieces of violative content, and a user reports 1, the platform states that: ‘49 pieces of content were taken down pursuant to proactive monitoring at a Proactivity Rate of 98%’. However, this reporting does not inform citizens or regulators: (i) whether the 49 pieces of content identified by the automated tools are in fact among the 50 pieces that violate the platform’s terms of service (or whether the tools mistakenly took down some legitimate, non-violative content); (ii) how many users saw but did not report the content that was eventually flagged by automated tools and taken down; and (iii) what level and extent of human oversight was exercised in removing content. A high proactivity rate merely indicates that automated tools flagged more content than users, which is to be expected. Simply put, numbers aren’t everything: they only disclose the scale of content moderation, not its quality.
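To make this concrete, here is a minimal sketch in Python, using purely hypothetical numbers mirroring the example above, of how a high Proactivity Rate can sit alongside mediocre accuracy; the precision figure it computes is exactly the kind of disclosure missing from current SSMI reports.

```python
# Hypothetical illustration only: the counts below mirror the 100-piece example
# above and are not drawn from any SSMI's actual report.

def proactivity_rate(proactive_flags: int, user_reports: int) -> float:
    """Share of all flagged content that was flagged proactively."""
    return proactive_flags / (proactive_flags + user_reports)

def precision(correctly_flagged: int, total_flagged: int) -> float:
    """Share of flagged content that genuinely violated the terms of service."""
    return correctly_flagged / total_flagged

# Automated tools flag 49 items and a user reports 1; suppose only 30 of the
# 49 machine-flagged items actually violated the platform's rules.
rate = proactivity_rate(proactive_flags=49, user_reports=1)
acc = precision(correctly_flagged=30, total_flagged=49)

print(f"Proactivity Rate: {rate:.0%}")             # 98% -- the figure platforms report
print(f"Precision of proactive flags: {acc:.0%}")  # 61% -- the figure reports omit
```

In this sketch the headline Proactivity Rate looks impressive even though nearly two in five proactive removals were mistaken, which is precisely why scale-only metrics say so little about quality.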
This criticism raises the question: how does one assess the quality of proactive moderation? The Santa Clara Principles represent high-level guidance on content moderation practices developed by international human rights organisations and academic experts to facilitate platform accountability with respect to users’ speech. The Principles require that platforms report: (i) when and how automated tools are used; (ii) the key criteria used by automated tools in making decisions; (iii) the confidence, accuracy, or success rate of automated tools, including in different languages; (iv) the extent of human oversight over automated tools; and (v) the outcomes of appeals against moderation decisions made by automated tools. This last requirement of reporting the outcome of appeals (how many users successfully got content reinstated after it was taken down by proactive monitoring) is a particularly useful metric, as it indicates when the platforms themselves acknowledge that their proactive moderation was inaccurate. Draft legislation in Europe and the United States requires platforms to report how often proactive monitoring decisions are reversed. Mandating the reporting of even some of these elements under the Intermediary Guidelines would provide a clearer picture of the accuracy of proactive moderation.
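For illustration only, the elements listed above could translate into a disclosure record along the following lines; every field name in this sketch is an assumption made for the example, not any platform’s or regulator’s actual reporting format.

```python
from dataclasses import dataclass

@dataclass
class ProactiveModerationDisclosure:
    """Hypothetical per-period disclosure loosely following the Santa Clara elements."""
    period: str                  # reporting period, e.g. "2022-02"
    items_actioned: int          # content removed via proactive detection
    automated_only: int          # removed with no human review
    human_reviewed: int          # removed after human oversight
    detection_criteria: list     # key signals the automated tools rely on
    accuracy_by_language: dict   # measured precision per language
    appeals_received: int        # appeals against proactive removals
    appeals_upheld: int          # removals reversed on appeal

    @property
    def reversal_rate(self) -> float:
        """Share of appealed proactive removals that the platform itself reversed."""
        return self.appeals_upheld / self.appeals_received if self.appeals_received else 0.0

# Example with made-up figures:
report = ProactiveModerationDisclosure(
    period="2022-02", items_actioned=1000, automated_only=700, human_reviewed=300,
    detection_criteria=["hash matching", "text classifiers"],
    accuracy_by_language={"en": 0.93, "hi": 0.81},
    appeals_received=50, appeals_upheld=12,
)
print(f"Reversal rate on appeal: {report.reversal_rate:.0%}")  # 24%
```

Even a simple structure of this kind would let readers compare scale (items actioned) against quality signals (per-language accuracy and the reversal rate), which the current link-count reporting cannot do.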
Finally, it is relevant to note that Rule 4(4) of the Intermediary Guidelines requires that the automated tools for proactive monitoring of certain classes of content must be ‘reviewed for accuracy and fairness’. The desirability of such proactive monitoring aside, Rule 4(4) is not self-enforcing and does not specify who should undertake this review, how often it should be carried out, and to whom the results should be communicated.
Contents of SSMI reports – reactive moderation
Transparency reporting with respect to reactive moderation aims to understand trends in user reporting of content and a platform’s responses to user flagging of content. Rule 4(1)(d) requires platforms to disclose the “details of complaints received and actions taken thereon”. However, a perusal of SSMI reporting reveals how the broad discretion granted to SSMIs to frame their reports is undermining the usefulness of the reporting.
Google’s transparency report has the most straightforward understanding of “complaints received”, with the platform disclosing the number of ‘complaints that relate to third-party content that is believed to violate local laws or personal rights’. In other words, where users raise a complaint against a piece of content, Google reports it (30,065 complaints in February 2022). Meta, on the other hand, only reports complaints from: (i) a specific contact form, a link for which is provided in its ‘Help Centre’; and (ii) complaints addressed to the physical post-box mail address published on the ‘Help Centre’. For February 2022, Facebook received a mere 478 complaints, of which only 43 pertained to content (inappropriate or sexual content), while 135 were from users whose accounts had been hacked, and 59 were from users who had lost access to a group or page. If 43 user reports a month against content on Facebook seems suspiciously low, it likely is – because the method of reporting content that involves the least friction for users (simply clicking on the post and reporting it directly) bypasses the specific contact form that Facebook uses to collate India complaints, and thus appears to be absent from Facebook’s transparency reporting. Most of Facebook’s 478 complaints for February have nothing to do with content on Facebook and offer little insight into how Facebook responds to user complaints against content or what types of content users report.
In contrast, Twitter’s transparency reporting expressly states that it does not include non-content related complaints (e.g., a user locked out of their account), instead limiting its transparency reporting to content related complaints – 795 complaints for March 2022, the top categories being abuse or harassment (606), hateful conduct (97), and misinformation (33). However, like Facebook, Twitter also has both a ‘support form’ and allows users to report content directly by clicking on it, but fails to specify the sources from which “complaints” are compiled for its India transparency reports. Twitter merely notes that ‘users can report grievances by the grievance mechanism by using the contact details of the Indian Grievance Officer’.
These apparent discrepancies in the number of complaints reported bear even greater scrutiny when the number of users of these platforms is factored in. Twitter (795 complaints/month) has an estimated 23 million users in India while Facebook (406 complaints/month) has an estimated 329 million users. It is reasonable to expect user complaints to scale with the number of users, but this is evidently not happening, suggesting that these platforms are using different sources and methodologies to determine what constitutes a “complaint” for the purposes of Rule 4(1)(d). This is perhaps a useful time to discuss another SSMI, ShareChat.
ShareChat is reported to have an estimated 160 million users, and for February 2022 the platform reported 56,81,213 user complaints (substantially more than Twitter and Facebook). These complaints are content related (e.g., hate speech, spam etc.), although with 30% of complaints merely classified as ‘Others’, there is some uncertainty as to what these complaints pertain to. ShareChat’s report states that it collates complaints from ‘reporting mechanism across the platform’. This would suggest that, unlike Facebook (and potentially Twitter), it compiles user complaint numbers from all the ways a user can complain against content, and not just a single form tucked away in its help centre documentation. While this may be a more holistic approach, ShareChat’s reporting suffers from other crucial deficiencies. ShareChat’s report makes no distinction between reactive and proactive moderation, merely giving a figure for content that has been taken down. This makes it hard to judge how ShareChat responded to these more than 56,00,000 complaints.
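The scale of this discrepancy is easiest to see with a back-of-the-envelope calculation, sketched below in Python using the complaint counts and user estimates quoted in this post; the per-million figures are indicative only, since, as discussed, each platform appears to count “complaints” differently.

```python
# Complaint counts are the monthly figures quoted above; user bases are the
# estimates cited in this post.
platforms = {
    "Twitter": (795, 23_000_000),
    "Facebook": (406, 329_000_000),
    "ShareChat": (5_681_213, 160_000_000),
}

for name, (complaints, users) in platforms.items():
    per_million = complaints / users * 1_000_000
    print(f"{name}: {per_million:,.1f} complaints per million users")

# Approximate output:
# Twitter: 34.6 complaints per million users
# Facebook: 1.2 complaints per million users
# ShareChat: 35,507.6 complaints per million users
```

A spread of four orders of magnitude across platforms of broadly comparable scale is a strong signal that the reports are not measuring the same thing.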
Conclusion
Before concluding, it is relevant to note that no SSMI’s reporting discusses content that has been subjected to reduced visibility or algorithmic downranking. In the case of proactive moderation, Rule 4(1)(d) unfortunately limits itself to content that has been “removed”, although in the case of reactive moderation, reduced visibility would come within the ambit of ‘actions taken in response to complaints’ and should be reported on. Best practice would require platforms to disclose to users when, and what, content is subjected to reduced visibility. Rule 4(1)(d) did not form part of the draft intermediary guidelines that were subjected to public consultation in 2018, appearing for the first time in its current form in 2021. Broader consultation at the drafting stage may have eliminated such regulatory lacunae and resulted in a more robust framework for transparency reporting.
That said, achieving meaningful transparency reporting is a hard task. Standardising reporting procedures is a detailed and fraught process that likely requires platforms and regulators to engage in a consultative process (see this document created by Daphne Keller listing potential problems in reporting procedures). A sample problem: “If ten users notify platforms about the same piece of content, and the platform takes it down after reviewing the first notice, is that ten successful notices, or one successful notice and nine rejected ones?” Given the scale of the regulatory and technical challenges, it is perhaps unsurprising that transparency reporting under the Intermediary Guidelines has gotten off to a rocky start. However, Rule 4(1)(d) itself offers an avenue for improvement: it allows MeitY to specify any additional information that platforms should publish in their transparency reports. In the case of proactive monitoring, requiring platforms to specify exactly how automated tools are deployed, and when content takedowns based on these tools are reversed, would be a good place to start. MeitY must also engage with the functionality and internal procedures of SSMIs to ensure that reporting is harmonised to the extent possible; for example, a “complaint” reported by Facebook and one reported by ShareChat should ideally have some equivalence. This requires, for a start, that MeitY consult with platforms, users, civil society, and academic experts when thinking about transparency.
On 25 February 2021, the Central Government notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘2021 Rules’). These Rules have been the subject of much controversy, as social media intermediaries and media houses have challenged them in various High Courts across the country. The Bombay High Court in AGIJ Promotion of Nineteenonea Media v Union of India stayed the operation of Rule 9(1) and Rule 9(3), the former provision mandating adherence to the ‘Code of Ethics’ and the latter creating a three-tiered structure to regulate online curated content. The High Court held that these rules contravened Article 19(1)(a) of the Constitution and transgressed the rule-making power delegated by the Information Technology Act, 2000 (‘IT Act’). This was affirmed by the Madras High Court in Digital News Publishers Association v Union of India, which noted that the order passed by the Bombay High Court had a pan-India effect.
While the Information Technology (Intermediaries Guidelines) Rules, 2011 applied solely to intermediaries, the 2021 Rules cover both intermediaries and publishers of digital content, including OTT platforms (which fall under ‘publishers of online curated content’). At the outset, the departure from existing legislations such as the Cinematograph Act, 1952, or the Cable Television Networks (Regulation) Act, 1995, in favour of the IT Act to regulate publishers of film and television content is curious. The aforementioned Bombay High Court judgement addressed this, observing that fields which stood occupied by independent legislations could not possibly be brought within the purview of the 2021 Rules.
The regulation of OTT platforms assumes particular significance given recent controversies concerning web series that allegedly contain objectionable content or offend religious beliefs. For instance, FIRs were lodged against the makers of the web series Tandav, which led to Amazon Prime Video’s India head moving the Supreme Court for protection against arrest. Similarly, Netflix’s A Suitable Boy also triggered a police case after a political leader found objectionable a scene in which the protagonist kisses a Muslim boy at a Hindu temple. FIRs have also been registered against the makers and producers of Mirzapur for offending religious beliefs, and a petition has been filed before the Supreme Court for portraying the Uttar Pradesh district of Mirzapur in a negative manner.
This blog will first set out how the 2021 Rules are applicable to OTT platforms. Second, it will examine whether the regulatory mechanisms conceived by the 2021 Rules provide unduly broad censorial powers to the Central Government, potentially threatening free speech and expression guaranteed by the Indian Constitution.
The 2021 Rules and OTT Platforms
In February 2019, the Ministry of Electronics and Information Technology (‘MeitY’) told the Delhi High Court that the IT Act already provided stringent provisions for website blocking (under Section 69A) in case of illegal content on OTT platforms, and that therefore no mandamus could be issued to the Centre to frame general guidelines or separate provisions for OTT content. However, in February 2021, amidst rising controversies revolving around various shows, the Centre notified the 2021 Rules, Part III of which is titled “Code of Ethics and Procedure and Safeguard in Relation to Digital/Online Media”.
Rule 2(u) of the 2021 Rules defines “publisher of online curated content” as any publisher who makes available to users, on demand, audio-visual content (that is owned or licensed by the publisher) via a computer resource over the internet. OTT platforms such as Netflix, Amazon Prime Video, and Disney+Hotstar squarely fall within the ambit of such ‘publishers of online curated content’. Under Rule 8(2) of the 2021 Rules, such publishers are bound by Part III of the 2021 Rules, while Rule 9 requires such publishers to adhere to the ‘Code of Ethics’ found in the Appendix to the 2021 Rules. This Code lays down five broad principles, ranging from age classification of content to exercising due caution and discretion while depicting India’s multi-cultural background.
Perhaps the most salient feature of Part III is its three-tier structure for redressal of grievances against content, which is applicable to both publishers of news and current affairs and publishers of online curated content. Any complaint that a publisher’s content violates the Code of Ethics, or that the publisher is in breach of any rule in Part III of the 2021 Rules, is addressed through the following structure: (i) Level I – self-regulation by the publisher, which must appoint a Grievance Officer to address complaints; (ii) Level II – self-regulation by a self-regulating body of publishers; and (iii) Level III – an oversight mechanism operated by the Central Government.
Beyond the 2021 Rules, the Ministry of Information & Broadcasting (‘MIB’) will also establish an ‘Online Grievance Portal’ where any person who objects to a publisher’s content can register their grievance. The grievance will be electronically directed to the publisher, the Ministry, as well as the self-regulating body.
The impact of the 2021 Rules
Films released in theatres in India are subject to pre-certification by the Central Board of Film Certification (‘CBFC’) under the Cinematograph Act, 1952, and television programmes are governed by the Cable Television Networks (Regulation) Act, 1995. OTT platforms, however, have till now escaped the scrutiny of the law due to an absence of clarity as to which Ministry would regulate them, i.e., MeitY or the MIB. The matter was resolved in November 2020, when the Government of India (Allocation of Business) Rules, 1961 were amended to bring “Films and Audio-Visual programmes made available by online content providers” within the ambit of the MIB.
Overregulation and independent regulatory bodies
The 2021 Rules pose a danger of overregulation vis-a-vis OTT platforms; they promote self-censorship and potentially increase government oversight over digital content. Beginning with the second tier of the mechanism established by the 2021 Rules: it requires the setting up of a self-regulatory body headed by a retired Supreme Court or High Court judge, or an independent eminent person from the field of media, broadcasting, entertainment, child rights, human rights or such other field; the members of this body, not exceeding six, are experts from various fields. Rule 12(3) requires the self-regulating body, once constituted, to register itself with the MIB. However, this registration is predicated upon the subjective satisfaction of the MIB that the body has been constituted according to Rule 12(2) and has agreed to perform the functions laid down in sub-rules (4) and (5). This effectively hinders the independence of the body, as the Rules fail to circumscribe the discretion the MIB may exercise in refusing registration.
This self-regulating body can sit in appeal as well as issue guidance or advisories to publishers, including requiring them to issue apologies or include warning cards. However, decisions on whether content needs to be deleted or modified, or instances where a publisher fails to comply with the body’s guidance or advisories, are to be referred to the Oversight Mechanism under Rule 13 [Rules 12(5)(e) and 12(7)].
Additional concerns arise at Level III – the Oversight Mechanism under Rule 13. This mechanism requires the MIB to form an Inter-Departmental Committee (‘IDC’) consisting of representatives from various other Ministries, with an Authorised Officer appointed by the MIB as its Chairperson. Rule 14(2) stipulates that the Committee shall meet periodically to hear complaints arising out of grievances with respect to decisions taken at Level I or II, or complaints referred to it directly by the MIB. This may pose certain challenges: the IDC, which is constituted and chaired by the MIB and consists of officials from other Ministries, will effectively preside over complaints referred to it by the MIB itself. Furthermore, the recommendations of the IDC are made to the MIB for the issuance of appropriate orders and directions for compliance. This has the potential to create a conflict of interest, and it violates the principle of natural justice that one cannot be a judge in one’s own cause.
A bare perusal of the functions of Level II and Level III shows that the powers bestowed upon the self-regulating body and the IDC overlap to a great extent. The self-regulating body may be rendered irrelevant, as decisions regarding the modification or removal of content, or punishment of a publisher for failure to comply, rest with the IDC. Since the IDC is constituted by the MIB and its recommendations are referred back to the MIB for the issuance of orders to publishers, for all intents and purposes the Central Government has the final say on the online content that OTT platforms can publish. This may make publishers wary and could have a chilling effect on freedom of speech and expression, as content unfavourable to or critical of the government in power may be referred to the IDC/MIB and blocked.
The IDC has considerable discretion in its position as an Appellate Authority. More importantly, Rule 16, which allows the Authorised Officer to block content under Section 69A of the IT Act in any case of emergency, has clear potential for misuse. To confer upon one individual appointed by the MIB the power to block content, without providing the publisher an opportunity of hearing, is excessive and lacks sufficient procedural safeguards; an issue that was glossed over by the Supreme Court while upholding the constitutionality of Section 69A and the Information Technology (Blocking Rules), 2009, in Shreya Singhal v Union of India.
In Hiralal M. Shah v The Central Board of Film Certification, Bombay, an order of the Joint Secretary to the Government of India directing that a Marathi feature film not be certified for public exhibition was challenged. The Bombay High Court held that the Joint Secretary was neither qualified to judge the effects of the film on the public nor experienced in the examination of films. The High Court observed that allowing a bureaucrat to sit in judgement over the film would make “a mockery of the substantive right of appeal conferred on the producer”. According to the Court, it was difficult to comprehend why an informed decision by an expert body, i.e. the Film Certification Appellate Tribunal constituted under the Cinematograph Act, 1952, should be replaced with the moral standards of a bureaucrat. A similar mechanism for regulation is being constructed by way of the 2021 Rules.
The three-tier mechanism stipulated by the 2021 Rules also raises the question of why OTT platforms need to be regulated under the IT Act in the first place. If regulation is required, then instead of adverting to the IT Act or the Cinematograph Act, 1952 (which regulates traditional media), the regulatory system envisaged under the Cinematograph Act could be emulated to some extent in a separate legislation solely governing OTT platforms. While the Cinematograph Act may be inadequate for regulating new media, the current IT Rules stretch the boundaries of the rule-making power delegated under the IT Act by delving into an area of regulation that the Act does not permit.
The 2021 Rules are subordinate legislation, and it remains contested whether Part III could have been promulgated using the rule-making power conferred on the Central Government under the IT Act. In State of Tamil Nadu v P. Krishnamoorthy, the Supreme Court held that delegated legislation could be challenged for failure to conform to the statute under which it was made, for exceeding the limits of authority conferred by the enabling Act, or for manifest arbitrariness or unreasonableness (to an extent where the Court may say that the legislature never intended to give authority to make such rules). When such broad and arbitrary powers, capable of restricting fundamental rights under Articles 19(1)(a) and 19(1)(g), are conferred on entities, they should stem from a parent Act that lays down the objective and purpose driving such regulation. The IT Act only regulates content to the extent of specific offences under Sections 66F, 67, 67A, 67B, etc., which are to be judicially assessed, while Section 79 lays down guidelines that intermediaries must follow to avail of safe harbour. By introducing a distinct class of entities that must adhere to “digital media ethics” and constitute their own regulatory bodies, the 2021 Rules are prima facie an instance of overreach.
Are the IT Rules Violative of the Constitutional Rights of Free Speech and Expression?
The three-tier mechanism under the 2021 Rules may have a chilling effect on creators and producers, who may be disincentivised from publishing and distributing content that could be considered offensive by even a small section of society. For example, even in the absence of the 2021 Rules, the makers of Tandav agreed to make voluntary cuts and tendered an apology. Similarly, despite the partial stay of the 2021 Rules by the High Courts of Bombay and Madras, OTT platforms have stated that they will play it safe and exercise restraint over potentially controversial content. After the 2021 Rules, criticism that offends the sensibilities of an individual could potentially result in a grievance under Part III, ultimately leading to content being restricted.
In addition to this, the Code of Ethics appended to Part III states that a publisher shall “exercise due caution and discretion” in relation to content featuring the activities, beliefs, practices, or views of any racial or religious group. This higher degree of responsibility, which is ambiguous, may restrict the artistic expression of OTT Platforms. In Shreya Singhal v Union of India, the Supreme Court struck down Section 66A of the IT Act, holding that “where no reasonable standards are laid down to define guilt in a section which creates an offence and where no clear guidance is given to either law abiding citizens or to authorities and courts, a section which creates an offence and which is vague must be struck down as being arbitrary and unreasonable”. By stating that the Constitution did not permit the legislature “to set a net large enough to catch all possible offenders and leave it to the Court to step in and decide who could be held guilty”, the Supreme Court decisively ruled that a law which is vague would be void. Although a breach of the 2021 Rules does not have penal consequences, the Code of Ethics utilises open-ended, broad language whose interpretation could confer excessive discretion on the IDC in deciding what content to remove.
Under India’s constitutional structure, free expression can only be limited to the extent prescribed by Article 19(2), and courts scrutinise restrictions on expression stringently given the centrality of free speech and expression to the continued maintenance of constitutional democracy. In S. Rangarajan v P. Jagjivan Ram, the Supreme Court observed that the medium of a movie was a legitimate mode of addressing issues of general concern. Further, the producer had the right to ‘think out’ and project his own message despite the disapproval of others; “it is a part of democratic give-and-take to which no one could complain. The State cannot prevent open discussion and open expression, however hateful to its policies”. The Apex Court further stated that it was the duty of the State to protect the freedom of expression. In K.A. Abbas v Union of India, the Supreme Court upheld the constitutionality of censorship under the Cinematograph Act, but cautioned that censorship could only be in the interest of society; if it ventured beyond this arena, it could be questioned on the ground that a legitimate power was being misused.
In the aforementioned cases, the courts, while upholding censorship guidelines, acknowledged that these had to be grounded within the four corners of Article 19(2), and that the standard for censorship had to be that of an ordinary individual of common sense and prudence, not that of a hypersensitive individual. In recent times, however, there have been regular outcries against films and web series that may offend the sensitivities of certain sections of the public. It must be noted that the Government also has a duty to protect the speakers of unpopular opinions, and restrictions on the freedom of speech must only be a last resort when the grounds provided for in Article 19(2) (e.g., public order or the security of the State) are at stake. Such an approach would help allay the concerns of publishers who may otherwise either refrain from creating potentially controversial content or remove or modify scenes.
Conclusion
A mechanism that risks the overregulation of content on OTT platforms, and grants significant discretion to the Ministry through the formation of the IDC, has the potential to dilute constitutional rights. Further, with India’s burgeoning influence as a producer of cultural content, such a rigid and subjective manner of regulation inhibits artistic expression and may have a chilling effect on the exercise of free speech and expression. The publishing of content on OTT platforms differs from traditional broadcasting in the way it is made available to the public: streaming is based on an ‘on-demand’ principle, where viewers actively choose the content they wish to consume, and it may therefore require specialised regulation. A balanced approach to the regulation of OTT platforms should be adopted, one that adheres to the values embedded in the Constitution as well as the guidelines envisioned by the Supreme Court in the judgements discussed above.
This blog was written with the support of the Friedrich Naumann Foundation for Freedom.