High Court of Delhi cites CCG’s Working Paper on Tackling Non-Consensual Intimate Images

In December 2022, CCG held a roundtable discussion on addressing the dissemination of non-consensual intimate images (“NCII”) online, and in January 2023 it published a working paper titled “Tackling the dissemination and redistribution of NCII”. We are thrilled to note that the conceptual frameworks in our Working Paper have been favourably cited and relied on by the High Court of Delhi in Mrs. X v Union of India, W.P. (Cri) 1505 of 2021 (High Court of Delhi, 26 April 2023).

We acknowledge the High Court’s detailed approach to addressing the online circulation of NCII and note that several of the considerations flagged in our Working Paper have been recognised by the High Court. While the High Court has clearly recognised the free speech risks of imposing overbroad monitoring mandates on online intermediaries, we note with concern that some key safeguards we had identified in our Working Paper regarding the independence and accountability of technologically facilitated removal tools have not been included in the High Court’s final directions.

CCG’s Working Paper 

A key issue in curbing the spread of NCII is that it is often hosted on ‘rogue’ websites that have no recognised grievance officers or active complaint mechanisms. Individuals are therefore often compelled to approach courts to obtain orders directing Internet Service Providers (“ISPs”) to block the URLs hosting their NCII. However, even after URLs are blocked, the same content may resurface at different locations, effectively requiring individuals to continually re-approach courts with new URLs. Our Working Paper acknowledged that this situation imposed undue burdens on victims of NCII abuse, but also argued against a proactive monitoring mandate requiring internet intermediaries to scan for NCII content. We noted that such proactive monitoring mandates create free speech risks: they typically lead to more content removal, not better content removal, and run the risk of ultimately restricting lawful expression. Moreover, given the limited technological and operational transparency surrounding proactive monitoring and automated filtering, the effectiveness and quality of such operations are hard for external stakeholders and regulators to assess.

Instead, our Working Paper proposed a multi-stakeholder regulatory solution that relied on the targeted removal of repeat NCII content using hash-matching technology. Hash-matching technology would assign reported NCII content a discrete hash (stored in a secure database) and then check the hashes of newly uploaded content against the hashes of known NCII content. This would allow for rapid identification (by comparing hashes) and removal of content where previously reported NCII is re-uploaded. Our Working Paper recommended the creation of an independent body to maintain such a hash database of known NCII content. Thus, once NCII was reported and hashed the first time by an intermediary, it would be added to the independent body’s database, and if it was detected again at different locations, it could be rapidly removed without requiring court intervention.
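To make the mechanics concrete, here is a minimal sketch (in Python) of how an upload could be checked against such a database. It is not drawn from the Working Paper or the judgment: it assumes exact cryptographic hashes for simplicity, whereas deployed systems such as StopNCII rely on perceptual hashing so that re-encoded or lightly edited copies still match, and every name in it (KNOWN_NCII_HASHES, register_reported_ncii, is_known_ncii) is illustrative.

```python
import hashlib

# Illustrative, in-memory stand-in for the independent body's secure hash database.
KNOWN_NCII_HASHES: set[str] = set()

def hash_content(content: bytes) -> str:
    """Compute a discrete fingerprint for a piece of content.
    A deployed system would likely use a perceptual hash (e.g. PDQ or PhotoDNA)
    so that re-encoded or lightly edited copies still match; SHA-256 is used
    here only to keep the sketch self-contained."""
    return hashlib.sha256(content).hexdigest()

def register_reported_ncii(content: bytes) -> str:
    """Add vetted NCII content to the hash database (after human review)."""
    digest = hash_content(content)
    KNOWN_NCII_HASHES.add(digest)
    return digest

def is_known_ncii(new_upload: bytes) -> bool:
    """Check a new upload against the database of previously reported NCII."""
    return hash_content(new_upload) in KNOWN_NCII_HASHES
```

The important property, for free speech purposes, is that content is only acted upon if it matches a hash that has already been vetted as NCII, rather than being filtered against general-purpose rules.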

This approach also minimises free speech risks, as content would only be removed if it matched known NCII content, and the independent body would conduct rigorous checks to ensure that only NCII content was added to the database. Companies such as Meta, TikTok, and Bumble are already adopting hash-matching technologies to deal with NCII, and more broadly, hash-matching technology has been used to combat child sexual abuse material for over a decade. Since such an approach would potentially require legal and regulatory changes to the existing rules under the Information Technology Act, 2000, our Working Paper also suggested a short-term solution using a token system. We recommended that all large digital platforms adopt a token-based approach to allow for the quick removal of previously removed or de-indexed content, with minimal human intervention.

Moreover, the long-term approach proposed in the Working Paper would also significantly reduce the administrative burden that victims bear in seeking the removal of NCII. It does so by: (a) reducing the time, cost, and effort they have to expend in going to court to remove or block access to NCII (since the independent body could work with the Department of Telecommunications (“DoT”) to direct ISPs to block access to specific web pages containing NCII); (b) not requiring victims to re-approach courts for blocking already-identified NCII, particularly if the independent body is allowed to search for, or use a web crawler to proactively detect, copies of previously hashed NCII; and (c) providing administrative, legal, and social support to victims.

The High Court’s decision 

In X v Union of India, the High Court was faced with a writ petition filed by a victim of NCII abuse, whose pictures and videos had been posted on various pornographic websites and YouTube without her consent. The Petitioner sought the blocking of the URLs where her NCII was located and the removal of the videos from YouTube. A key claim of the Petitioner was that even after content was blocked pursuant to court orders and directions by the government, the offending material was consistently being re-uploaded at new locations on the internet, and was searchable using specific keywords on popular online search engines. 

Although the originator who was posting this NCII was apprehended during the hearings, the High Court saw fit to examine the obligations of intermediaries, in particular search engines, in responding to user complaints about NCII. The High Court’s focus on search engines can be attributed to the fact that NCII is often hosted on independent ‘rogue’ websites that are unresponsive to user complaints, and that individuals often use search engines to locate such content. This may be contrasted with social media platforms, which have reporting structures for NCII content and are typically more responsive. Thus, the two mechanisms available to tackle the distribution of NCII on ‘rogue’ websites are to have ISPs disable access to specific URLs and/or to have search engines de-index the relevant URLs. However, ISPs have little or no ability to detect unlawful content and do not typically respond to complaints by users, instead coordinating directly with state authorities.

In fact, the High Court expressly cited CCG’s Working Paper to recognise this diversity in intermediary functionality, noting that “[CCG’s] paper espouses that due to the heterogenous nature of intermediaries, mandating a single approach for removal of NCII content might prove to be ineffective.” We believe this is a crucial observation, as previous court decisions have imposed broad monitoring obligations on all intermediaries, even when they possess little or no control over content on their networks (see W.P. (Cri) 1082 of 2020, High Court of Delhi, 20 April 2021). Recognising the different functionality offered by different intermediaries allowed the High Court to identify de-indexing of URLs as an important remedy for tackling NCII, with the Court noting that “[search engines] can de-index specific URLs that can render the said content impossible to find due to the billions of webpages available on the internet and, consequently, reduce traffic to the said website significantly.”

However, de-indexing alone would be a temporary solution, since victims would still be required to repeatedly approach search engines to de-index each instance of NCII hosted on different websites. To address this issue, the long-term solution proposed in the Working Paper is a multi-stakeholder approach built around an independently maintained hash database of NCII content. The independent body maintaining the database would work with platforms, law enforcement, and the government to take down copies of identified NCII content, thereby reducing the burden on victims.

The High Court also adopted some aspects of the Working Paper’s short-term recommendations for the swift removal of NCII. The Working Paper recommended that platforms voluntarily use a token or digital identifier-based approach to allow for the quick removal of previously removed content. Complainants, who would be assigned a unique token upon the initial takedown of NCII, could submit URLs of any copies of the NCII along with the token. The search engine or platform would thereafter only need to check whether the URL contains the same content as the identified NCII linked to the token. The Court, in its order, requires search engines to adopt a similar token-based approach to “ensure that the de-indexed content does not resurface (¶61),” and notes that search engines “cannot insist on requiring the specific URLs from the victim for the purpose of removing access to the content that has already been ordered to be taken down (¶61)”. However, the judgment does not clarify whether this means that search engines are required to disable access to copies of identified NCII without the complainant identifying where they have been uploaded, and if so, how search engines will remove the repeat instances of identified NCII. The order only states that it is the responsibility of search engines to use tools that already exist to ensure that access to offending content is immediately removed.
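As we read this recommendation, the token workflow can be summarised in a short sketch. The code below is purely illustrative and assumes an exact content match; the registry, the fetch_content helper, and the function names are hypothetical, and a real deployment would need secure storage, token expiry, and a human review path.

```python
import hashlib
import uuid
from typing import Callable

# Illustrative registry: token -> hash of the content removed when the token was issued.
TOKEN_REGISTRY: dict[str, str] = {}

def issue_token(removed_content: bytes) -> str:
    """Give the complainant a unique token when their NCII is first taken down."""
    token = uuid.uuid4().hex
    TOKEN_REGISTRY[token] = hashlib.sha256(removed_content).hexdigest()
    return token

def handle_repeat_report(token: str, reported_url: str,
                         fetch_content: Callable[[str], bytes]) -> bool:
    """Handle a repeat report: the complainant submits a new URL plus their token,
    and the platform only checks that the content at that URL matches the content
    already linked to the token before de-indexing or removing it."""
    expected_hash = TOKEN_REGISTRY.get(token)
    if expected_hash is None:
        return False  # unknown or expired token
    content = fetch_content(reported_url)  # hypothetical helper supplied by the platform
    if hashlib.sha256(content).hexdigest() == expected_hash:
        # De-index / remove the URL here (platform-specific action).
        return True
    return False
```

On this design, the complainant still supplies the URL of the re-uploaded copy, so the platform never has to scan the open web on its own; that is precisely the point on which the Court’s direction is less clear.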

More broadly, the Court agreed with our position that proactive filtering mandates against NCII may harm free speech, noting that “The working paper published by CCG records the risk that overbroad directions may pose (¶56)”, and further holding that “any directions that necessitates pro-active filtering on the part of intermediaries may have a negative impact on the right to free speech. No matter the intention of deployment of such technology, its application may lead to consequences that are far worse and dictatorial. (¶54)” We applaud the High Court’s recognition that general filtering mandates against unlawful content may significantly harm free speech.

Final directions by the court

The High Court acknowledged the use of hash-matching technology in combating NCII as deployed by Meta’s ‘Stop NCII’ program (www.stopncii.org) and explained how such technology “can be used by the victim to create a unique fingerprint of the offending image which is stored in the database to prevent re-uploads (¶53)”. As noted above, our Working Paper also recognised the benefits of hash-matching technology in combating NCII. However, we also noted that such technology is open to abuse and thus must be operationalised in a manner that is publicly transparent and accountable.

In its judgment, the Court issued numerous directions and recommendations to the Ministry of Electronics and Information Technology (MeitY), the Delhi Police, and search engines to address the challenge of the circulation of NCII online. Importantly, it noted that the definition of NCII must include sexual content intended for “private and confidential relationships,” in addition to sexual content obtained without the consent of the relevant individual. This is significant, as it expands the scope of illegal NCII content to include instances where images or other content were taken with consent but were thereafter published or circulated without the consent of the relevant individual. NCII is often generated within the private realm of a relationship and only subsequently shared online unlawfully.

The High Court framed its final directions by noting that “it is not justifiable, morally or otherwise, to suggest that an NCII abuse victim will have to constantly subject themselves to trauma by having to scour the internet for NCII content relating to them and having to approach authorities again and again (¶57).” To prevent this outcome, the Court issued the following directions: 

  1. Where NCII has been disseminated, individuals can approach the Grievance Officer of the relevant intermediary or the Online Cybercrime Reporting Portal (www.cybercrime.gov.in) and file a formal complaint for the removal of the content. The Cybercrime Portal must specifically display the various redressal mechanisms that can be accessed to prevent the further dissemination of NCII; 
  2. Upon receipt of a complaint of NCII, the police must immediately register a formal complaint in relation to Section 66E of the IT Act (punishing NCII) and seek to apprehend the primary wrongdoer (originator); 
  3. Individuals can also approach the court and file a petition identifying the NCII content and the URLs where it is located, allowing the court to make an ex-facie determination of its illegality; 
  4. Where a user complains about NCII content to a search engine under Rule 3(2)(b) of the Intermediary Guidelines, the search engine must employ hash-matching technology so that future webpages with identical NCII content are also de-indexed and the complained-against content does not re-surface. The Court held that users should be able to directly re-approach search engines to seek de-indexing of new URLs containing previously de-indexed content without having to obtain subsequent court or government orders;
  5. A fully-functional helpline available 24/7 must be devised for reporting NCII content. It must be staffed by individuals who are sensitised about the nature of NCII content and would not shame victims, and must direct victims to organisations that would provide social and legal support. Our Working Paper proposed a similar approach, where the independent body would work with organisations that would provide social, legal, and administrative support to victims of NCII;
  6. When a victim obtains a takedown order for NCII, search engines must use a token/digital identifier to de-index content and ensure that it does not resurface. Search engines also cannot insist on requiring specific URLs for removing access to content ordered to be taken down. Though our Working Paper recommended the use of a similar system, to mitigate the risks of proactive monitoring we suggested that (a) this could be a voluntary system adopted by digital platforms to quickly remove identified NCII, and (b) complainants would submit URLs of copies of identified NCII along with the identifier, so that the platform would only need to check whether the URL contains the same content linked to the token before removing access; and
  7. MeitY may develop a “trusted third-party encrypted platform” in collaboration with search engines for registering NCII content, and use hash-matching to remove identified NCII content. This is similar to the long-term recommendation in the Working Paper, where we recommended that an independent body be set up to maintain such a database and work with the State and platforms to remove identified NCII content. We also recommended various safeguards to ensure that only NCII content was added to the database.

Conclusion 

The need for repeated court orders to curtail the spread of NCII content represents a classic ‘whack-a-mole’ dilemma, and we applaud the High Court’s acknowledgement of and nuanced engagement with this issue. In particular, the High Court recognises the significant mental distress and social stigma that the dissemination of one’s NCII can cause, and attempts to reduce the burdens on victims of NCII abuse by ensuring that they do not have to continually identify new URLs hosting their NCII and seek their de-indexing. The use of hash-matching technology is significantly preferable to broad proactive monitoring mandates.

However, our Working Paper also noted that it was of paramount importance to ensure that only NCII content was added to any proposed hash database, so that lawful content was not accidentally added to the database and continually removed every time it resurfaced. To this end, our Working Paper proposed several important institutional safeguards, including: (i) setting up an independent body to maintain the hash database; (ii) having multiple experts vet each piece of NCII content added to the database; (iii) requiring a judicial determination where NCII content had public interest implications (e.g., it involved a public figure); (iv) ensuring that the independent body provides regular transparency reports and conducts audits of the hash database; and (v) imposing sanctions on the key functionaries of the independent body if the hash database was found to include lawful content.

We believe that where hash databases (or any technological solutions) are utilised to prevent the re-uploading of unlawful content, these strong institutional safeguards are essential to ensure the public accountability of such databases. Absent this public accountability, it is hard to ascertain the effectiveness of such solutions, allowing large technology companies to comply with such mandates on their own terms. While the High Court did not substantively engage with these institutional mechanisms outlined in our Working Paper, we believe that the upcoming Digital India Bill represents an excellent opportunity to consider these issues and further the discussion on combating NCII.

Re-thinking content moderation: structural solutions beyond the GAC

This post is authored by Sachin Dhawan and Vignesh Shanmugam.

The grievance appellate committee (‘GAC’) provision in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022 has attracted significant controversy. While it seeks to empower users to challenge the arbitrary moderation decisions of platforms, the provision itself has been criticised as arbitrary. Lawyers, privacy advocates, technology companies, and other stakeholders have raised many concerns about the constitutional validity of the GAC, its lack of transparency and independence, and the excessive delegation of power it entails.

Although these continuing discussions on the GAC are necessary, they do not address the main concerns plaguing content moderation today. Even if sufficient legal and procedural safeguards are incorporated, the GAC will still be incapable of resolving the systemic issues in content moderation. This fundamental limitation persists because “governing content moderation by trying to regulate individual decisions is [like] using a teaspoon to remove water from a sinking ship”.  

Governments, platforms, and other stakeholders must therefore focus on: (i) examining the systemic issues which remain unaddressed by content moderation systems; and (ii) ensuring that platforms implement adequate structural measures to effectively reduce the number of individual grievances as well as systemic issues.

The limitations of current content moderation systems

Globally, a majority of platforms rely on an individual case-by-case approach for content moderation. Due to the limited scope of this method, platforms are unable to resolve, or even identify, several types of systemic issues. This, in turn, increases the number of content moderation cases.

To illustrate the problem, here are a few examples of systemic issues which are unaddressed by content moderation systems: (i) coordinated or periodic attacks (such as mass reporting of users/posts) which target a specific class of users (based on gender, sexuality, race, caste, religion, etc.); (ii) differing content moderation criteria in different geographical locations; and (iii) errors, biases or other issues with algorithms, programs or platform design which lead to increased flagging of users/posts for content moderation.

Considering the gravity of these systemic issues, platforms must adopt effective measures to improve the standards of content moderation and reduce the number of grievances.

Addressing the structural concerns in content moderation systems

Several legal scholars have recommended the adoption of a ‘systems thinking’ approach to address the various systemic concerns in content moderation. This approach requires platforms to implement corporate structural changes, administrative practices, and procedural accountability measures for effective content moderation and grievance redressal. 

Accordingly, revising the existing content moderation frameworks in India to include the following key ‘systems thinking’ principles would help ensure fairness, transparency, and accountability in content moderation.

  • Establishing independent content moderation systems. Although platforms have designated content moderation divisions, these divisions are, in many cases, influenced by the platforms’ corporate or financial interests, advertisers’ interests, or political interests, which directly impacts the quality and validity of their content moderation practices. Hence, platforms must implement organisational restructuring measures to ensure that content moderation and grievance redressal processes are (i) solely undertaken by a separate and independent ‘rule-enforcement’ division; and (ii) not overruled or influenced by any other division in the corporate structure of the platform. Additionally, platforms must designate a specific individual as the authorised officer in charge of the rule-enforcement division. This ensures transparency and accountability from a corporate governance viewpoint.
  • Robust transparency measures. Across jurisdictions, there is a growing trend of governments issuing formal or informal orders to platforms, including orders to suspend or ban specific accounts, take down specific posts, etc. In addition to ensuring transparency of the internal functioning of platforms’ content moderation systems, platforms must also provide clarity on the number of measures undertaken (and other relevant details) in compliance with such governmental orders. Ensuring that platforms’ transparency reports separately disclose the frequency and total number of such measures will provide a greater level of transparency to users, and the public at large.
  • Aggregation and assessment of claims. As stated earlier, individual cases provide limited insight into the overall systemic issues present on a platform. Platforms can gain a greater level of insight through (i) periodic aggregation of the claims received by them; and (ii) assessment of these aggregated claims for any patterns of harm or bias (for example, assessing for the presence of algorithmic or human bias against certain demographics); a minimal illustrative sketch of such an assessment follows this list. Doing so will illuminate algorithmic issues, design issues, unaccounted-for bias, or other systemic issues which would otherwise remain unidentified and unaddressed.
  • Annual reporting of systemic issues. To ensure internal enforcement of systemic reform, the rule-enforcement division must provide annual reports to the board of directors (or the appropriate executive authority of the platform) containing the systemic issues observed, recommendations for addressing them, and any protective measures to be undertaken by the platform. To aid in identifying further systemic issues, the division must conduct comprehensive risk assessments on a periodic basis and record its findings in the next annual report.
  • Implementation of accountability measures. As is established corporate practice for financial, accounting, and other divisions of companies, periodic quality assurance (‘QA’) and independent auditing of the rule-enforcement division will further ensure accountability and transparency.
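As flagged above, the aggregation-and-assessment step can be illustrated with a brief sketch. The code below is a hypothetical example of one such check (comparing takedown rates across demographic groups); the record format, field names, and threshold are assumptions for illustration, not a prescription for how platforms must conduct these assessments.

```python
from collections import Counter

def takedown_rate_by_group(complaints: list[dict]) -> dict[str, float]:
    """Aggregate individual moderation complaints and compute, per demographic
    group, the share of complaints that ended in a takedown. The record format
    and the field names ('group', 'actioned') are purely illustrative."""
    totals, actioned = Counter(), Counter()
    for complaint in complaints:
        totals[complaint["group"]] += 1
        if complaint["actioned"]:
            actioned[complaint["group"]] += 1
    return {group: actioned[group] / totals[group] for group in totals}

def flag_disparities(rates: dict[str, float], threshold: float = 0.2) -> list[str]:
    """Surface groups whose takedown rate departs from the overall average by more
    than the chosen threshold, as candidates for manual bias review."""
    if not rates:
        return []
    average = sum(rates.values()) / len(rates)
    return [group for group, rate in rates.items() if abs(rate - average) > threshold]
```

In practice, such checks would feed into the periodic risk assessments and annual reports described above, rather than operating as a standalone tool.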

Conclusion

Current discussions regarding content moderation regulations are primarily centred around the GAC, and the various procedural safeguards which can rectify its flaws. However, even if the GAC becomes an effectively functioning independent appellate forum, the systemic problems plaguing content moderation will remain unresolved. It is for this reason that platforms must actively adopt the structural measures suggested above. Doing so will (i) increase the quality of content moderation and internal grievance decisions; (ii) reduce the burden on appellate forums; and (iii) decrease the likelihood of governments imposing stringent content moderation regulations that undermine the free speech rights of users.