High Court of Delhi cites CCG’s Working Paper on Tackling Non-Consensual Intimate Images

In December 2022, CCG held a roundtable discussion on addressing the dissemination of non-consensual intimate images (“NCII”) online, and in January 2023 it published a working paper titled “Tackling the dissemination and redistribution of NCII”. We are thrilled to note that the conceptual frameworks in our Working Paper have been favourably cited and relied on by the High Court of Delhi in Mrs. X v Union of India W.P. (Cri) 1505 of 2021 (High Court of Delhi, 26 April 2023).

We acknowledge the High Court’s detailed approach to addressing the issue of the online circulation of NCII and note that several of the considerations flagged in our Working Paper have been recognised by the High Court. While the High Court has clearly recognised the free speech risks of imposing overbroad monitoring mandates on online intermediaries, we note with concern that some key safeguards we had identified in our Working Paper regarding the independence and accountability of technologically-facilitated removal tools have not been included in the High Court’s final directions.

CCG’s Working Paper 

A key issue in curbing the spread of NCII is that it is often hosted on ‘rogue’ websites that have no recognised grievance officers or active complaint mechanisms. Thus, individuals are often compelled to approach courts to obtain orders directing Internet Service Providers (“ISPs”) to block the URLs hosting their NCII. However, even after URLs are blocked, the same content may resurface at different locations, effectively requiring individuals to continually re-approach courts with new URLs. Our Working Paper acknowledged that this situation imposed undue burdens on victims of NCII abuse, but also argued against a proactive monitoring mandate requiring internet intermediaries to scan for NCII content. We noted that such mandates create free speech risks, as they typically lead to more content removal, not better content removal, and may ultimately restrict lawful expression. Moreover, given the limited technological and operational transparency surrounding proactive monitoring and automated filtering, the effectiveness and quality of such operations are hard for external stakeholders and regulators to assess.

Instead, our Working Paper proposed a multi-stakeholder regulatory solution that relied on the targeted removal of repeat NCII content using hash-matching technology. Hash-matching technology would assign reported NCII content a discrete hash (stored in a secure database) and then check the hashes of newly encountered content against this database of known NCII. This would allow for the rapid identification (by comparing hashes) and removal of re-uploads of previously reported NCII content. Our Working Paper recommended the creation of an independent body to maintain such a hash database of known NCII content. Thus, once NCII was reported and hashed the first time by an intermediary, it would be added to the independent body’s database, and if it was detected again at different locations, it could be rapidly removed without requiring court intervention.
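To make the mechanics concrete, below is a minimal sketch of how matching against a shared hash database might work. This is our own illustration, not code from the Working Paper or any deployed system: it uses an exact cryptographic hash (SHA-256) for simplicity, whereas real NCII tools typically rely on perceptual hashes that can match images even after minor alterations, and the database itself would be maintained by the proposed independent body rather than by individual platforms.

```python
import hashlib

# Illustrative only: a real NCII hash database would be maintained by an
# independent body and would typically use perceptual hashing rather than
# an exact cryptographic hash such as SHA-256.
known_ncii_hashes: set[str] = set()

def hash_content(content: bytes) -> str:
    """Compute a deterministic fingerprint of an uploaded file."""
    return hashlib.sha256(content).hexdigest()

def report_ncii(content: bytes) -> str:
    """On a verified report, add the content's hash to the shared database."""
    digest = hash_content(content)
    known_ncii_hashes.add(digest)
    return digest

def is_known_ncii(content: bytes) -> bool:
    """Check a new upload against hashes of previously reported NCII."""
    return hash_content(content) in known_ncii_hashes

# Example flow: content reported once is recognised if re-uploaded verbatim.
original = b"<bytes of the reported image>"
report_ncii(original)
print(is_known_ncii(original))            # True: matches a known hash
print(is_known_ncii(b"<other content>"))  # False: unrelated content is untouched
```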

This approach also minimises free speech risks, as content would only be removed if it matched known NCII content, and the independent body would conduct rigorous checks to ensure that only NCII content was added to the database. Companies such as Meta, TikTok, and Bumble are already adopting hash-matching technologies to deal with NCII, and more broadly, hash-matching technology has been used to combat child sexual abuse material for over a decade. Since such an approach would potentially require legal and regulatory changes to the existing rules under the Information Technology Act, 2000, our Working Paper also suggested a short-term solution using a token system. We recommended that all large digital platforms adopt a token-based approach to allow for the quick removal of previously removed or de-indexed content, with minimal human intervention.

Moreover, the long-term approach proposed in the Working Paper would also significantly reduce the administrative burden of seeking the removal of NCII for victims. It does so by: (a) reducing the time, cost, and effort they have to expend by going to court to remove or block access to NCII (since the independent body could work with the DoT to direct ISPs to block access to specific web pages containing NCII); (b) not requiring victims to re-approach courts for blocking already-identified NCII, particularly if the independent body is allowed to search for, or use a web crawler to proactively detect copies of previously hashed NCII; and (c) providing administrative, legal, and social support to victims.

The High Court’s decision 

In X v Union of India, the High Court was faced with a writ petition filed by a victim of NCII abuse, whose pictures and videos had been posted on various pornographic websites and YouTube without her consent. The Petitioner sought the blocking of the URLs where her NCII was located and the removal of the videos from YouTube. A key claim of the Petitioner was that even after content was blocked pursuant to court orders and directions by the government, the offending material was consistently being re-uploaded at new locations on the internet, and was searchable using specific keywords on popular online search engines. 

Although the originator who was posting this NCII was apprehended during the hearings, the High Court saw fit to examine the obligations of intermediaries, in particular search engines, in responding to user complaints about NCII. The High Court’s focus on search engines can be attributed to the fact that NCII is often hosted on independent ‘rogue’ websites that are unresponsive to user complaints, and that individuals often use search engines to locate such content. This may be contrasted with social media platforms, which have reporting structures for NCII content and are typically more responsive. Thus, the two mechanisms available to tackle the distribution of NCII on ‘rogue’ websites are to have ISPs disable access to specific URLs and/or to have search engines de-index the relevant URLs. However, ISPs have little or no ability to detect unlawful content and do not typically respond to complaints by users, instead coordinating directly with state authorities.

In fact, the High Court expressly cited CCG’s Working Paper to recognise this diversity in intermediary functionality, noting that “[CCG’s] paper espouses that due to the heterogenous nature of intermediaries, mandating a single approach for removal of NCII content might prove to be ineffective.” We believe this is a crucial observation as previous court decisions have imposed broad monitoring obligations on all intermediaries, even when they possess little or no control over content on their networks (see WP (Cri) 1082 of 2020, High Court of Delhi, 20 April 2021). Recognising the different functionality offered by different intermediaries allowed the High Court to identify de-indexing of URLs as an important remedy for tackling NCII, with the Court noting that, “[search engines] can de-index specific URLs that can render the said content impossible to find due to the billions of webpages available on the internet and, consequently, reduce traffic to the said website significantly.”

However, this would nevertheless be a temporary solution, since victims would still be required to repeatedly approach search engines to de-index each instance of NCII hosted on different websites. To address this issue, the long-term solution proposed in the Working Paper relies on a multi-stakeholder approach built around an independently maintained hash database of NCII content. The independent body maintaining the database would work with platforms, law enforcement, and the government to take down copies of identified NCII content, thereby reducing the burden on victims.

The High Court also adopted some aspects of the Working Paper’s short-term recommendations for the swift removal of NCII. The Working Paper recommended that platforms voluntarily use a token or digital identifier-based approach to allow for the quick removal of previously removed content. Complainants, who would be assigned a unique token upon the initial takedown of NCII, could submit URLs of any copies of the NCII along with the token. The search engine or platform would thereafter only need to check whether the URL contains the same content as the identified NCII linked to the token. The Court, in its order, requires search engines to adopt a similar token-based approach to “ensure that the de-indexed content does not resurface (¶61),” and notes that search engines “cannot insist on requiring the specific URLs from the victim for the purpose of removing access to the content that has already been ordered to be taken down (¶61)”. However, the judgment does not clarify whether this means that search engines are required to disable access to copies of identified NCII without the complainant identifying where they have been uploaded, and, if so, how search engines will remove the repeat instances of identified NCII. The order only states that it is the responsibility of search engines to use tools that already exist to ensure that access to offending content is immediately removed.
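As an illustration of how such a token-based workflow could operate, consider the sketch below. This is a simplified reading of the Working Paper’s short-term recommendation rather than the mechanism the Court has ordered; the in-memory stores, the exact-hash comparison, and the function names are all assumptions made for the example.

```python
import hashlib
import secrets

# Hypothetical in-memory stores; a real platform would persist these.
tokens: dict[str, str] = {}     # token -> hash of the NCII removed at the first takedown
removed_urls: set[str] = set()

def issue_token(ncii_bytes: bytes) -> str:
    """On the first takedown, record the content's hash and give the complainant a token."""
    token = secrets.token_urlsafe(16)
    tokens[token] = hashlib.sha256(ncii_bytes).hexdigest()
    return token

def request_removal(token: str, url: str, fetched_bytes: bytes) -> bool:
    """The complainant submits a new URL plus their token; the platform only
    checks whether the content at that URL matches the NCII linked to the token."""
    if token not in tokens:
        return False
    if hashlib.sha256(fetched_bytes).hexdigest() == tokens[token]:
        removed_urls.add(url)  # de-index / remove without a fresh court or government order
        return True
    return False

# Usage: the first takedown issues a token; later copies are removed on a simple match.
t = issue_token(b"<reported image bytes>")
print(request_removal(t, "https://example.com/copy1", b"<reported image bytes>"))  # True
print(request_removal(t, "https://example.com/other", b"<different content>"))     # False
```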

More broadly, the Court agreed with our stand that proactive filtering mandates against NCII may harm free speech, noting that “The working paper published by CCG records the risk that overbroad directions may pose (¶56)”, and further holding that “any directions that necessitates pro-active filtering on the part of intermediaries may have a negative impact on the right to free speech. No matter the intention of deployment of such technology, its application may lead to consequences that are far worse and dictatorial. (¶54)” We applaud the High Court’s recognition that general filtering mandates against unlawful content may significantly harm free speech.

Final directions by the court

The High Court acknowledged the use of hash-matching technology in combating NCII as deployed by Meta’s ‘Stop NCII’ program (www.stopncii.org) and explained how such technology “can be used by the victim to create a unique fingerprint of the offending image which is stored in the database to prevent re-uploads” (¶53). As noted above, our Working Paper also recognised the benefits of hash-matching technology in combating NCII. However, we also noted that such technology is open to abuse and must therefore be operationalised in a manner that is publicly transparent and accountable.

In its judgment, the Court issued numerous directions and recommendations to the Ministry of Electronics and Information Technology (MeitY), the Delhi Police, and search engines to address the challenge of circulation of NCII online. Importantly, it noted that the definition of NCII must include sexual content intended for “private and confidential relationships,” in addition to sexual content obtained without the consent of the relevant individual. This is significant as it expands the scope of illegal NCII content to include instances where images or other content have been taken with consent, but have thereafter been published or circulated without the consent of the relevant individual. NCII content may often be generated within the private realm of relationships, but subsequently illegally shared online.

The High Court framed its final directions by noting that “it is not justifiable, morally or otherwise, to suggest that an NCII abuse victim will have to constantly subject themselves to trauma by having to scour the internet for NCII content relating to them and having to approach authorities again and again (¶57).” To prevent this outcome, the Court issued the following directions: 

  1. Where NCII has been disseminated, individuals can approach the Grievance Officer of the relevant intermediary or the Online Cybercrime Reporting Portal (www.cybercrime.gov.in) and file a formal complaint for the removal of the content. The Cybercrime Portal must specifically display the various redressal mechanisms that can be accessed to prevent the further dissemination of NCII; 
  2. Upon receipt of a complaint of NCII, the police must immediately register a formal complaint in relation to Section 66E of the IT Act (punishing NCII) and seek to apprehend the primary wrongdoer (originator); 
  3. Individuals can also approach the court and file a petition identifying the NCII content and the URLs where it is located, allowing the court to make an ex-facie determination of its illegality; 
  4. Where a user complains to a search engine against NCII content under Rule 3(2)(b) of the Intermediary Guidelines, the search engine must employ hash-matching technology to ensure that future webpages with identical NCII content are also de-indexed and that the complained-against content does not resurface. The Court held that users should be able to directly re-approach search engines to seek de-indexing of new URLs containing previously de-indexed content without having to obtain subsequent court or government orders;
  5. A fully-functional helpline available 24/7 must be devised for reporting NCII content. It must be staffed by individuals who are sensitised about the nature of NCII content and would not shame victims, and must direct victims to organisations that would provide social and legal support. Our Working Paper proposed a similar approach, where the independent body would work with organisations that would provide social, legal, and administrative support to victims of NCII;
  6. When a victim obtains a takedown order for NCII, search engines must use a token/digital identifier to de-index content and ensure that it does not resurface. Search engines also cannot insist on requiring specific URLs for removing access to content ordered to be taken down. Though our Working Paper recommended the use of a similar system, to mitigate the risks of proactive monitoring we suggested that: (a) this could be a voluntary system adopted by digital platforms to quickly remove identified NCII; and (b) complainants would submit URLs of copies of identified NCII along with the identifier, so that the platform would only need to check whether the URL contains the same content linked to the token before removing access; and
  7. MeitY may develop a “trusted third-party encrypted platform” in collaboration with search engines for registering NCII content, and use hash-matching to remove identified NCII content. This is similar to the long-term recommendation in the Working Paper, where we recommended that an independent body be set up to maintain such a database and work with the State and platforms to remove identified NCII content. We also recommended various safeguards to ensure that only NCII content was added to the database.

Conclusion 

The need for repeated court orders to curtail the spread of NCII content represents a classic ‘whack-a-mole’ dilemma, and we applaud the High Court’s acknowledgement of, and nuanced engagement with, this issue. In particular, the High Court recognises the significant mental distress and social stigma that the dissemination of one’s NCII can cause, and attempts to reduce the burdens on victims of NCII abuse by ensuring that they do not have to continually identify, and seek the de-indexing of, new URLs hosting their NCII. The use of hash-matching technology is significantly preferable to broad proactive monitoring mandates.

However, our Working Paper also noted that it was of paramount importance to ensure that only NCII content was added to any proposed hash database, so that lawful content was not accidentally added to the database and continually removed every time it resurfaced. To ensure this, our Working Paper proposed several important institutional safeguards, including: (i) setting up an independent body to maintain the hash database; (ii) having multiple experts vet each piece of NCII content that was added to the database; (iii) requiring a judicial determination where NCII content had public interest implications (e.g., it involved a public figure); (iv) ensuring that the independent body provides regular transparency reports and conducts audits of the hash database; and (v) imposing sanctions on the key functionaries of the independent body if the hash database was found to include lawful content.

We believe that where hash-databases (or any technological solutions) are utilised to prevent the re-uploading of unlawful content, these strong institutional safeguards are essential to ensure the public accountability of such databases. Absent this public accountability, it is hard to ascertain the effectiveness of such solutions, allowing large technology companies to comply with such mandates on their own terms. While the High Court did not substantively engage with these institutional mechanisms outlined in our Working Paper, we believe that the adoption of the upcoming Digital India Bill represents an excellent opportunity to consider these issues and further our discussion on combating NCII.

Report on Intermediary Liability in India

The question of when intermediaries are liable, or conversely not liable, for content they host or transmit is often at the heart of regulating content on the internet. This is especially true in India, where the Government has relied almost exclusively on intermediary liability to regulate online content. With the advent of the Intermediary Guidelines 2021, and their subsequent amendment in October 2022, there has been a paradigm shift in the regulation of online intermediaries in India. 

To help understand this new regulatory reality, the Centre for Communication Governance (CCG) is releasing its ‘Report on Intermediary Liability in India’ (December 2022).

This report aims to provide a comprehensive overview of the regulation of online intermediaries and their obligations with respect to unlawful content. It updates and expands on the Centre for Communication Governance’s 2015 report documenting the liability of online intermediaries to now cover the decisions in Shreya Singhal vs. Union of India and Myspace vs. Super Cassettes Industries Ltd, the Intermediary Guidelines 2021 (including the October 2022 Amendment), the E-Commerce Rules, and the IT Blocking Rules. It captures over two decades of regulatory and judicial practice on the issue of intermediary liability since the adoption of the IT Act. The report aims to provide practitioners, lawmakers and regulators, judges, and academics with valuable insights as they embark on shaping the coming decades of intermediary liability in India.

Some key insights that emerge from the report are summarised below:

Limitations of Section 79 (‘Safe Harbour’) Approach: In the cases analysed in this report, there is little judicial consistency in the application of secondary liability principles to intermediaries, including the obligations set out in the Intermediary Guidelines 2021, and monetary damages for transmitting or hosting unlawful content are almost never imposed on intermediaries. This suggests that there are significant limitations to the regulatory impact of obligations imposed on intermediaries as pre-conditions to safe harbour.

Need for clarity on content moderation and curation: The text of Section 79(2) of the IT Act grants intermediaries safe harbour provided they act as mere conduits, not interfering with the transmission of content. There exists ambiguity over whether content moderation and curation activities would cause intermediaries to violate Section 79(2) and lose safe harbour. The Intermediary Guidelines 2021 have partially remedied this ambiguity by expressly stating that voluntary content moderation will not result in an intermediary ‘interfering’ with the transmission under Section 79(2). However, ultimately amendments to the IT Act are required to provide regulatory certainty.

Intermediary status and immunity on a case-by-case basis: An entity’s classification as an intermediary is not a status that applies across all its operations (like ‘company’ or ‘partnership’ status), but rather depends on the function it performs vis-à-vis the specific electronic content it is sued in connection with. Courts should determine whether an entity is an ‘intermediary’, and whether it complied with the conditions of Section 79, in relation to the content it is being sued over. Consistently making this determination at a preliminary stage of litigation would greatly further the efficacy of Section 79’s safe harbour approach.

Concerns over GACs: While the October 2022 Amendment stipulates that two members of every Grievance Appellate Committee (“GAC”) shall be independent, no detail is provided as to how such independence shall be secured (e.g., security of tenure and salary, oath of office, minimum judicial qualifications, etc.). Such independence is vital as GAC members are appointed by the Union Government, but the Union Government or its functionaries or instrumentalities may also be parties before a GAC. Further, given that GACs are authorities ‘under the control of the Government of India’, they have an obligation to abide by the principles of natural justice and due process, and to comply with the Fundamental Rights set out in the Constitution. If a GAC directs the removal of content beyond the scope of Article 19(2) of the Constitution, questions of an impermissible restriction on free expression may be raised.

Actual knowledge in 2022: The October 2022 Amendment requires intermediaries to make reasonable efforts to “cause” their users not to upload certain categories of content and ‘act on’ user complaints against content within seventy-two hours. Requiring intermediaries to remove content at the risk of losing safe harbour in circumstances other than the receipt of a court or government order prima facie violates the decision of Shreya Singhal. Further, India’s approach to notice and takedown continues to lack a system for reinstatement of content.  

Uncertainty over government blocking power: Section 69A of the IT Act expressly grants the Union Government power to block content, subject to a hearing by the originator (uploader) or intermediary. However, Section 79(3)(b) of the IT Act may also be utilised to require intermediaries to take down content absent some of the safeguards provided in Section 69A. The fact that the Government has relied on both provisions in the past and that it does not voluntarily disclose blocking orders makes a robust legal analysis of the blocking power challenging.

Hearing originators when blocking: The decision in Shreya Singhal and the requirements of due process support the understanding that the originator must be notified and granted a hearing under the IT Blocking Rules prior to their content being restricted under Section 69A. However, evidence suggests that the government regularly does not provide originators with hearings, even where the originator is known to the government. Instead, the government directly communicates with intermediaries away from the public eye, raising rule of law concerns.

Issues with first originators: Both the methods proposed for ‘tracing first originators’ (hashing unique messages and affixing encrypted originator information) are easily circumvented, require significant technical changes to the architecture of messaging services, offer limited investigatory or evidentiary value, and will likely undermine the privacy and security of all users to catch a few bad actors. Given these considerations, it is unlikely that such a measure would satisfy the proportionality test laid out by current Supreme Court doctrine.

Broad and inconsistent injunctions: An analysis of injunctions against online content reveals that the contents of court orders are often sweeping, imposing vague compliance burdens on intermediaries. When issuing injunctions against online content, courts should limit blocking or removals to specific URLs. Further, courts should be cognisant of the fact that intermediaries have themselves not committed any wrongdoing, and the effect of an injunction should be seen as meaningfully dissuading users from accessing content rather than as an absolute prohibition.

This report was made possible by the generous support we received from National Law University Delhi. CCG would like to thank our Faculty Advisor Dr. Daniel Mathew for his continuous direction and mentorship. This report would not be possible without the support provided by the Friedrich Naumann Foundation for Freedom, South Asia. We are grateful for comments received from the Data Governance Network and its reviewers. CCG would also like to thank Faiza Rahman and Shashank Mohan for their review and comments, and Jhalak M. Kakkar and Smitha Krishna Prasad for facilitating the report. We thank Oshika Nayak of National Law University Delhi for providing invaluable research assistance for this report. Lastly, we would also like to thank all members of CCG for the many ways in which they supported the report, in particular, the ever-present and ever-patient Suman Negi and Preeti Bhandari for the unending support for all the work we do.

Guest Post: A Positive Obligation to Ensure and Promote Media Diversity

This post was authored by: Aishvarya Rajesh

A positive obligation with respect to a human right is one that requires States to put into effect both preventive measures against violations (through appropriate legislative, judicial, or administrative measures) and remedial measures (access to judicial redress once violations have occurred). This piece examines whether ensuring media diversity can be considered a positive obligation on States under Article 19 of the International Covenant on Civil and Political Rights (“ICCPR”), and if so, what the scope and nature of this obligation are.

Positive obligation on States to create a favourable environment for sharing diverse views  

The right to freedom of speech and expression enshrined under Article 19 of the ICCPR forms the cornerstone of democratic societies. It, along with its corollary, the freedom of opinion, is vital for the full development of a person and for true participation in public debate. The European Court of Human Rights (“ECtHR”), in its landmark decision in Dink v. Turkey, interpreted the right to freedom of expression to include a positive obligation on States to ensure the effective protection of free expression from wrongful interference by private/non-state actors, and for the State itself to create “an enabling environment by allowing for everyone to take part in public debate and express their thoughts and opinions” (¶137). The Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has also acknowledged an increasing recognition that States have positive regulatory obligations to promote free speech and expression in online spaces too. The Joint Declaration on Diversity of 2007, a document prepared by several eminent jurists appointed as Representatives or Rapporteurs by the UN, OSCE, OAS, and ACHPR, has similarly identified States’ positive obligation to regulate private actors so as to promote diversity in the media and prevent the undue concentration of media ownership.

The requirement for media diversity as a positive obligation on States may also be seen as emanating from interpretations of different international instruments read together, an outcome that has also been reflected in the decisions of different human rights bodies. For instance, a conjunctive reading of Art. 19 and Art. 2 of the ICCPR (as with the parallel provisions in the UDHR and regional human rights instruments) can be interpreted to show the positive obligation on States to promote media diversity. This interpretation has been endorsed by the Inter-American Commission on Human Rights in, inter alia, Baruch Ivcher Bronstein v. Peru (2001), which opined that “…consequently, it is vital that [media] can gather the most diverse information and opinions” (¶149); and by the ECtHR in Informationsverein Lentia and Others v. Austria (1993), noting that “…Such an undertaking cannot be successfully accomplished unless it is grounded in the principle of pluralism, of which the State is the ultimate guarantor…” (¶38).

The positive obligation includes within its ambit an obligation to prevent undue concentration within media eco-systems

A positive obligation on the State to foster an environment where a diversity of ideas and opinions (media diversity) is available to the public can entail a very wide array of obligations. For instance, it raises questions about the extent and scope of this obligation in the regulation of social media intermediaries, which have managed to accumulate significant control within the online media space. This sort of control could be seen as giving them the ability to behave in a near-monopolistic manner. The Centre for Law & Democracy, in February 2022, made submissions on the Practical Application of the Guiding Principles on Business and Human Rights to the Activities of Technology Companies, in which it opined, inter alia, that States can be obligated to undertake measures to promote diversity in an online space that has seen high market concentration by large social media companies.

Concentration within media eco-systems is antithetical to the idea of media diversity

Given that a positive obligation to promote media diversity exists, a necessary corollary is the need to prevent undue concentration within media eco-systems. According to UNESCO, undue concentration in media arises when one corporate body (or individual) “exercises overall control over an important part of an overall media market”. Such concentration hinders the ability of people to receive information from multiple sources, which is crucial for the true exercise of the freedom of speech. This is because a media monopoly can cloud the ‘marketplace of ideas’ and, according to the Special Rapporteur for Freedom of Expression, “leads to the uniformity of the content that they produce or disseminate”. Furthermore, according to UNESCO, a media monopoly poses a threat not just to the freedom of expression but, by extension, also to democracy, as it hinders the ability of the media to reflect the variety of opinions and ideas generated in society as a whole.

Obligation to monitor and restrict M&As in the media space

In 2007, the Joint Declaration on Diversity in Broadcasting (by the Special Rapporteurs of the UN, OAS, and ACHPR and the OSCE Representative on Freedom of the Media) emphasised the requirement to put in place anti-monopoly rules (both horizontal and vertical), including ‘stringent requirements’ of transparency enforced through active monitoring. This also covered the need to prevent powerful combinations resulting from merger activity in the media space. The Committee of Ministers of the Council of Europe has emphasised the need for licensing to be made contingent on media platform owners acting in harmony with the requirement to ensure media diversity. UNESCO’s Media Development Indicators also acknowledge that States are required to prevent monopolies or oligopolies and must take this into account during the grant or renewal of licences. The measures that States were required to take to promote media diversity and prevent monopoly were called ‘special measures’ (in the Joint Declaration on the Protection of Freedom of Expression and Diversity in the Digital Terrestrial Transition), going beyond those already existing in commercial sectors, which indicates a recognition of the need to secure media pluralism, inter alia, through ensuring competitiveness in the space.

Conclusion

A State’s positive obligations under the right to free speech and expression can be viewed as emanating directly from treaty obligations, and have also been widely recognised in a multitude of judicial decisions and by eminent jurists. Acknowledging these as sources of international law under Articles 38(1)(a) and 38(1)(d) of the ICJ Statute, we can argue that a State’s positive obligations under Art. 19 of the ICCPR and analogous free speech protections under international law must also include within their ambit obligations to ensure media diversity. This includes the protection of the rights of both the speaker and the audience under the right to freedom of speech and expression. Some ways in which this can be ensured are: allocating funds specifically for public interest content and other at-risk sectors; establishing holistic and functional market-concentration monitoring systems; and delegating, through co-regulation or self-regulation, part of the State’s positive obligation directly to media platforms themselves to ensure diversity in their operations. The measures undertaken must be carefully designed, should fulfil the aims of promoting diversity and avoiding monopolistic behaviour, and should not put the independence of the media at risk.

Transparency reporting under the Intermediary Guidelines is a mess: Here’s how we can improve it

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) represent India’s first attempt at regulating large social media platforms, with the Guidelines creating distinct obligations for ‘Significant Social Media Intermediaries’ (“SSMIs”). While certain provisions of the Guidelines concerning SSMIs (like the traceability requirement) are currently under legal challenge, the Guidelines also introduced a less controversial requirement that SSMIs publish monthly transparency reports regarding their content moderation activities. While this reporting requirement is arguably a step in the right direction, scrutinising the actual documents published by SSMIs reveals a patchwork of inconsistent and incomplete information – suggesting that Indian regulators need to adopt a more comprehensive approach to platform transparency.

This post briefly sets out the reporting requirement under the Intermediary Guidelines before analysing the transparency reports released by SSMIs. It highlights how a focus on figures, coupled with the wide discretion granted to platforms to frame their reports, undermines the goal of meaningful transparency. The figures referred to when analysing SSMI reports pertain to the February–March 2022 reporting period, but the distinct methodologies each SSMI uses to arrive at these figures (more relevant for the present discussion) have remained broadly unchanged since reporting began in mid-2021. The post concludes by making suggestions on how the Ministry of Electronics and Information Technology (“MeitY”) can strengthen the reporting requirements under the Intermediary Guidelines.

Transparency reporting under the Intermediary Guidelines

Social media companies structure speech on their platforms through their content moderation policies and practices, which determine when content stays online and when content is taken down. Even if content is not illegal or taken down pursuant to a court or government order, platforms may still take it down for violating their terms of service (or Community Guidelines) (let us call this ‘violative content’ for now, i.e., content that violates the terms of service). However, ineffective content moderation can result in violative and even harmful content remaining online or non-violative content mistakenly being taken down. Given the centrality of content moderation to online speech, the Intermediary Guidelines seek to bring some transparency to the content moderation practices of SSMIs by requiring them to publish monthly reports on their content moderation activities. Transparency reporting helps users and the government understand the decisions made by platforms with respect to online speech. Given the opacity with which social media platforms often operate, transparency reporting requirements can be an essential tool to hold platforms accountable for ineffective or discriminatory content moderation practices.

Rule 4(1)(d) of the Intermediary Guidelines requires SSMIs to publish monthly transparency reports specifying: (i) the details of complaints received and the actions taken in response; (ii) the number of “parts of information” proactively taken down using automated tools; and (iii) any other relevant information specified by the government. The Rule therefore covers both ‘reactive moderation’, where a platform responds to a user’s complaints against content, and ‘proactive moderation’, where the platform itself seeks out unwanted content even before a user reports it.

Transparency around reactive moderation helps us understand trends in user reporting and how responsive an SSMI is to user complaints, while disclosures on proactive moderation shed light on the scale and accuracy of an SSMI’s independent moderation activities. A key goal of both reporting datasets is to understand whether the platform is taking down as much harmful content as possible without accidentally also taking down non-violative content. Unfortunately, Rule 4(1)(d) merely requires SSMIs to report the number of links taken down during their content moderation (this is reiterated in MeitY’s FAQs on the Intermediary Guidelines). The problems with an overly simplistic approach come to the fore upon an examination of the actual reports published by SSMIs.

Contents of SSMI reports – proactive moderation

Based on their latest monthly transparency reports, Twitter proactively suspended 39,588 accounts while Google used automated tools to remove 338,938 pieces of content. However, these figures only document the scale of proactive monitoring and provide no insight into the accuracy of the platforms’ moderation – how accurately it distinguishes between violative and non-violative content. The reporting also does not specify whether this content was taken down using solely automated tools, or some mix of automated tools and human review or oversight. Meta (reporting for Facebook and Instagram) reports the volume of content proactively taken down, but also provides a “Proactivity Rate”. The Proactivity Rate is defined as the percentage of content flagged proactively (before a user reported it) as a proportion of all flagged content: Proactivity Rate = proactively flagged content ÷ (proactively flagged content + user-reported content). However, this metric is also of little use in understanding the accuracy of Meta’s automated tools. Take the following example:

Assume a platform hosts 100 pieces of content, of which 50 violate the platform’s terms of service and 50 do not. The platform relies on both proactive monitoring through automated tools and user reporting to identify violative content. Now, if the automated tools detect 49 pieces of violative content and a user reports 1, the platform states that: ‘49 pieces of content were taken down pursuant to proactive monitoring at a Proactivity Rate of 98%’. However, this reporting does not inform citizens or regulators: (i) whether the 49 pieces of content identified by the automated tools are in fact among the 50 pieces that violate the platform’s terms of service (or whether the tools mistakenly took down some legitimate, non-violative content); (ii) how many users saw but did not report the content that was eventually flagged by automated tools and taken down; and (iii) what level and extent of human oversight was exercised in removing content. A high proactivity rate merely indicates that automated tools flagged more content than users, which is to be expected. Simply put, numbers aren’t everything: they disclose only the scale of content moderation, not its quality.
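To make the gap concrete, the short calculation below contrasts the Proactivity Rate from the example above with a precision-style measure of accuracy. The proactivity figures simply restate the hypothetical in the text; the precision figure (40 of the 49 automated removals assumed to be genuinely violative) is our own invented number, included only to show what the published reports leave invisible.

```python
# Hypothetical figures from the example above.
proactively_flagged = 49   # items removed by automated tools
user_reported = 1          # items removed after a user complaint

proactivity_rate = proactively_flagged / (proactively_flagged + user_reported)
print(f"Proactivity Rate: {proactivity_rate:.0%}")  # 98% -- what the report discloses

# What the report does not disclose: how many of the 49 removals were correct.
# Assume (purely hypothetically) that only 40 of them actually violated the terms of service.
correctly_removed = 40
precision = correctly_removed / proactively_flagged
print(f"Precision of automated removals: {precision:.0%}")  # ~82% -- invisible in the report
```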

This criticism raises the question: how do you assess the quality of proactive moderation? The Santa Clara Principles represent high-level guidance on content moderation practices developed by international human rights organisations and academic experts to facilitate platform accountability with respect to users’ speech. The Principles require that platforms report: (i) when and how automated tools are used; (ii) the key criteria used by automated tools in making decisions; (iii) the confidence, accuracy, or success rate of automated tools, including in different languages; (iv) the extent of human oversight over automated tools; and (v) the outcomes of appeals against moderation decisions made by automated tools. This last requirement of reporting the outcome of appeals (how many users successfully got content reinstated after it was taken down by proactive monitoring) is a particularly useful metric, as it indicates when platforms themselves acknowledge that their proactive moderation was inaccurate. Draft legislation in Europe and the United States requires platforms to report how often proactive monitoring decisions are reversed. Mandating the reporting of even some of these elements under the Intermediary Guidelines would provide a clearer picture of the accuracy of proactive moderation.

Finally, it is relevant to note that Rule 4(4) of the Intermediary Guidelines requires that the automated tools for proactive monitoring of certain classes of content must be ‘reviewed for accuracy and fairness’. The desirability of such proactive monitoring aside, Rule 4(4) is not self-enforcing and does not specify who should undertake this review, how often it should be carried out, or to whom the results should be communicated.

Contents of SSMI reports – reactive moderation

Transparency reporting with respect to reactive moderation aims to understand trends in user reporting of content and a platform’s responses to user flagging of content. Rule 4(1)(d) requires platforms to disclose the “details of complaints received and actions taken thereon”. However, a perusal of SSMI reporting reveals how the broad discretion granted to SSMIs to frame their reports is undermining the usefulness of the reporting.  

Google’s transparency report has the most straightforward understanding of “complaints received”, with the platform disclosing the number of ‘complaints that relate to third-party content that is believed to violate local laws or personal rights’. In other words, where users raise a complaint against a piece of content, Google reports it (30,065 complaints in February 2022). Meta, on the other hand, only reports complaints from: (i) a specific contact form, a link to which is provided in its ‘Help Centre’; and (ii) complaints addressed to the physical post-box mail address published on the ‘Help Centre’. For February 2022, Facebook received a mere 478 complaints, of which only 43 pertained to content (inappropriate or sexual content), while 135 were from users whose accounts had been hacked, and 59 were from users who had lost access to a group or page. If 43 user reports a month against content on Facebook seems suspiciously low, it likely is – the method of reporting content that involves the least friction for users (simply clicking on the post and reporting it directly) bypasses the specific contact form that Facebook uses to collate Indian complaints, and thus appears to be absent from Facebook’s transparency reporting. Most of Facebook’s 478 complaints for February have nothing to do with content on Facebook and offer little insight into how Facebook responds to user complaints against content or what types of content users report.

In contrast, Twitter’s transparency reporting expressly states that it does not include non-content-related complaints (e.g., a user locked out of their account), instead limiting itself to content-related complaints – 795 complaints for March 2022, the top categories being 606 complaints of abuse or harassment, 97 of hateful conduct, and 33 of misinformation. However, like Facebook, Twitter also has both a ‘support form’ and allows users to report content directly by clicking on it, but it fails to specify the sources from which “complaints” are compiled for its India transparency reports. Twitter merely notes that ‘users can report grievances by the grievance mechanism by using the contact details of the Indian Grievance Officer’.

These apparent discrepancies in the number of complaints reported bear even greater scrutiny when the number of users of these platforms is factored in. Twitter (795 complaints/month) has an estimated 23 million users in India, while Facebook (406 complaints/month) has an estimated 329 million users. It is reasonable to expect user complaints to scale with the number of users, but this is evidently not happening, suggesting that these platforms are using different sources and methodologies to determine what constitutes a “complaint” for the purposes of Rule 4(1)(d). This is perhaps a useful time to discuss another SSMI, ShareChat.

ShareChat is reported to have an estimated 160 million users, and for February 2022 the platform reported 56,81,213 user complaints (substantially more than Twitter and Facebook). These complaints are content-related (e.g., hate speech, spam, etc.), although with 30% of complaints merely classified as ‘Others’, there is some uncertainty as to what these complaints pertain to. ShareChat’s report states that it collates complaints from ‘reporting mechanism across the platform’. This would suggest that, unlike Facebook (and potentially Twitter), it compiles user complaint numbers from all the methods by which a user can complain against content, and not just a single form tucked away in its help centre documentation. While this may be a more holistic approach, ShareChat’s reporting suffers from other crucial deficiencies. ShareChat’s report makes no distinction between reactive and proactive moderation, merely giving a figure for content that has been taken down. This makes it hard to judge how ShareChat responded to these more than 56,00,000 complaints.
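As a back-of-the-envelope illustration of why these complaint counts are so hard to compare, the snippet below normalises them by each platform’s estimated user base. The complaint counts and user estimates are those cited in this post; the normalisation (and the choice of ‘per million users’ as the unit) is our own and is intended only to show the scale of the divergence.

```python
# Figures as cited in this post: (monthly complaints, estimated Indian users in millions).
platforms = {
    "Twitter": (795, 23),
    "Facebook": (406, 329),
    "ShareChat": (5_681_213, 160),  # 56,81,213 in Indian digit grouping
}

for name, (complaints, users_millions) in platforms.items():
    rate = complaints / users_millions
    print(f"{name}: ~{rate:,.1f} complaints per million users per month")

# Expected output (approximately):
#   Twitter: ~34.6 complaints per million users per month
#   Facebook: ~1.2 complaints per million users per month
#   ShareChat: ~35,507.6 complaints per million users per month
# Divergences of several orders of magnitude suggest each platform is counting
# something different when it reports a "complaint" under Rule 4(1)(d).
```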

Conclusion

Before concluding, it is relevant to note that no SSMI reporting discusses content that has been subjected to reduced visibility or algorithmically downranked. In the case of proactive moderation, Rule 4(1)(d) unfortunately limits itself to content that has been “removed”, although in the case of reactive moderation, reduced visibility would come within the ambit of ‘actions taken in response to complaints’ and should be reported on. Best practice would require platforms to disclose to users when and what content is subjected to reduced visibility. Rule 4(1)(d) did not form part of the draft intermediary guidelines that were subjected to public consultation in 2018, appearing for the first time in its current form in 2021. Broader consultation at the drafting stage may have eliminated such regulatory lacunae and produced a more robust framework for transparency reporting.

That said, achieving meaningful transparency reporting is a hard task. Standardising reporting procedures is a detailed and fraught process that likely requires platforms and regulators to engage in a consultative process – see this document created by Daphne Keller listing potential problems in reporting procedures. Sample problem: “If ten users notify platforms about the same piece of content, and the platform takes it down after reviewing the first notice, is that ten successful notices, or one successful notice and nine rejected ones?” Given the scale of the regulatory and technical challenges, it is perhaps unsurprising that transparency reporting under the Intermediary Guidelines has gotten off to a rocky start. However, Rule 4(1)(d) itself offers an avenue for improvement. The Rule allows MeitY to specify any additional information that platforms should publish in their transparency reports. In the case of proactive monitoring, requiring platforms to specify exactly how automated tools are deployed, and when content takedowns based on these tools are reversed, would be a good place to start. MeitY must also engage with the functionality and internal procedures of SSMIs to ensure that reporting is harmonised to the extent possible. For example, a “complaint” reported by Facebook and by ShareChat should ideally have some equivalence. This requires, for a start, MeitY to consult with platforms, users, civil society, and academic experts when thinking about transparency.