In December 2022, CCG held a roundtable discussion on addressing the dissemination of non-consensual intimate images (“NCII”) online, and in January 2023 it published a working paper titled “Tackling the dissemination and redistribution of NCII”. We are thrilled to note that the conceptual frameworks in our Working Paper have been favourably cited and relied on by the High Court of Delhi in Mrs. X v Union of India, W.P. (Cri) 1505 of 2021 (High Court of Delhi, 26 April, 2023).
We acknowledge the High Court’s detailed approach to addressing the online circulation of NCII and note that several of the considerations flagged in our Working Paper have been recognised by the High Court. While the High Court has clearly recognised the free speech risks of imposing overbroad monitoring mandates on online intermediaries, we note with concern that some key safeguards we had identified in our Working Paper regarding the independence and accountability of technologically-facilitated removal tools have not been included in the High Court’s final directions.
CCG’s Working Paper
A key issue in curbing the spread of NCII is that it is often hosted on ‘rogue’ websites that have no recognised grievance officers or active complaint mechanisms. Thus, individuals are often compelled to approach courts to obtain orders directing Internet Service Providers (“ISPs”) to block the URLs hosting their NCII. However, even after URLs are blocked, the same content may resurface at different locations, effectively requiring individuals to continually re-approach courts with new URLs. Our Working Paper acknowledged that this situation imposes undue burdens on victims of NCII abuse, but also argued against imposing a proactive monitoring mandate that would require internet intermediaries to scan for NCII content. We noted that such proactive monitoring mandates create free speech risks, as they typically lead to more content removal, not better content removal, and run the risk of ultimately restricting lawful expression. Moreover, given the limited technological and operational transparency surrounding proactive monitoring and automated filtering, the effectiveness and quality of such operations are hard for external stakeholders and regulators to assess.
Instead, our Working Paper proposed a multi-stakeholder regulatory solution that relied on the targeted removal of repeat NCII content using hash-matching technology. Hash-matching technology assigns reported NCII content a unique hash (stored in a secure database) and then checks the hash of newly uploaded content against the hashes of known NCII content. This allows for rapid identification (by comparing hashes) and removal of content where previously reported NCII is re-uploaded, as the sketch below illustrates. Our Working Paper recommended the creation of an independent body to maintain such a hash database of known NCII content. Thus, once NCII was reported and hashed the first time by an intermediary, it would be added to the independent body’s database, and if it was detected again at different locations, it could be rapidly removed without requiring court intervention.
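To make the mechanism concrete, here is a minimal illustrative sketch in Python of exact hash-matching using SHA-256. All names in it (HashDatabase, add, matches) are hypothetical and are not drawn from the Working Paper or the judgment. Note that a cryptographic hash such as SHA-256 only matches byte-identical files; production systems such as Microsoft’s PhotoDNA or Meta’s PDQ use perceptual hashes, which also match copies that have been resized or re-encoded.

```python
import hashlib

class HashDatabase:
    """Illustrative store of fingerprints of previously verified NCII content."""

    def __init__(self):
        self._known_hashes: set[str] = set()

    def add(self, content: bytes) -> str:
        """Hash verified NCII content and record only the fingerprint,
        never the content itself."""
        digest = hashlib.sha256(content).hexdigest()
        self._known_hashes.add(digest)
        return digest

    def matches(self, content: bytes) -> bool:
        """Check newly uploaded content against the known-NCII fingerprints."""
        return hashlib.sha256(content).hexdigest() in self._known_hashes

# Once content has been verified and hashed, an identical re-upload can be
# detected by comparing hashes alone, without further human review.
db = HashDatabase()
db.add(b"bytes of a verified NCII image")
print(db.matches(b"bytes of a verified NCII image"))  # True: re-upload detected
print(db.matches(b"bytes of some other image"))       # False: no match
```

The design choice worth noting is that only hashes, and never the underlying images, need to be stored or shared, which is what allows an independent body to maintain the database without itself holding sensitive content.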
This approach also minimises free speech risks, as content would only be removed if it matched known NCII content, and the independent body would conduct rigorous checks to ensure that only NCII content was added to the database. Companies such as Meta, TikTok, and Bumble have already adopted hash-matching technologies to deal with NCII, and more broadly, hash-matching technology has been used to combat child sexual abuse material for over a decade. Since such an approach would potentially require legal and regulatory changes to the existing rules under the Information Technology Act, 2000, our Working Paper also suggested a short-term solution using a token system. We recommended that all large digital platforms adopt a token-based approach to allow for the quick removal of previously removed or de-indexed content, with minimal human intervention.
Moreover, the long-term approach proposed in the Working Paper would also significantly reduce the administrative burden that victims face in seeking the removal of NCII. It does so by: (a) reducing the time, cost, and effort they must expend in going to court to remove or block access to NCII (since the independent body could work with the Department of Telecommunications (“DoT”) to direct ISPs to block access to specific web pages containing NCII); (b) not requiring victims to re-approach courts for blocking already-identified NCII, particularly if the independent body is allowed to search for, or use a web crawler to proactively detect, copies of previously hashed NCII (see the sketch below); and (c) providing administrative, legal, and social support to victims.
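Building on the hypothetical hash database sketched above, the following sketch, offered purely for illustration, shows how a crawler run by such an independent body might flag re-uploads: it fetches candidate URLs and compares each page’s content hash against the known-NCII set. The function name and the use of the third-party requests library are our assumptions; a real deployment would also need to extract and hash individual images, use perceptual rather than exact matching, and operate under appropriate legal authorisation.

```python
import hashlib

import requests  # third-party HTTP client, assumed installed (pip install requests)

def crawl_and_flag(candidate_urls: list[str], known_hashes: set[str]) -> list[str]:
    """Illustrative crawler: fetch each candidate URL and flag those whose
    content matches a previously recorded NCII fingerprint."""
    flagged = []
    for url in candidate_urls:
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
        except requests.RequestException:
            continue  # unreachable pages are skipped, not flagged
        if hashlib.sha256(response.content).hexdigest() in known_hashes:
            flagged.append(url)  # candidate for ISP blocking or de-indexing
    return flagged
```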
The High Court’s decision
In X v Union of India, the High Court was faced with a writ petition filed by a victim of NCII abuse, whose pictures and videos had been posted on various pornographic websites and YouTube without her consent. The Petitioner sought the blocking of the URLs where her NCII was located and the removal of the videos from YouTube. A key claim of the Petitioner was that even after content was blocked pursuant to court orders and directions by the government, the offending material was consistently being re-uploaded at new locations on the internet, and was searchable using specific keywords on popular online search engines.
Although the originator who posted this NCII was apprehended during the hearings, the High Court saw it fit to examine the obligations of intermediaries, in particular search engines, in responding to user complaints about NCII. The High Court’s focus on search engines can be attributed to the fact that NCII is often hosted on independent ‘rogue’ websites that are unresponsive to user complaints, and that individuals often use search engines to locate such content. This may be contrasted with social media platforms, which have reporting structures for NCII content and are typically more responsive. Thus, the two mechanisms available to tackle the distribution of NCII on ‘rogue’ websites are to have ISPs disable access to specific URLs and/or to have search engines de-index the relevant URLs. However, ISPs have little or no ability to detect unlawful content and do not typically respond to complaints by users, instead coordinating directly with state authorities.
In fact, the High Court expressly cited CCG’s Working Paper to recognise this diversity in intermediary functionality, noting that “[CCG’s] paper espouses that due to the heterogenous nature of intermediaries, mandating a single approach for removal of NCII content might prove to be ineffective.” We believe this is a crucial observation, as previous court decisions have imposed broad monitoring obligations on all intermediaries, even when they possess little or no control over content on their networks (see WP (Cri) 1082 of 2020, High Court of Delhi, 20 April 2021). Recognising the different functionality offered by different intermediaries allowed the High Court to identify de-indexing of URLs as an important remedy for tackling NCII, with the Court noting that “[search engines] can de-index specific URLs that can render the said content impossible to find due to the billions of webpages available on the internet and, consequently, reduce traffic to the said website significantly.”
However, this would nevertheless be a temporary solution, since victims would still be required to repeatedly approach search engines to de-index each instance of NCII hosted on different websites. To address this issue, the long-term solution proposed in the Working Paper is a multi-stakeholder approach built around an independently maintained hash database of NCII content. The independent body maintaining the database would work with platforms, law enforcement, and the government to take down copies of identified NCII content, thereby reducing the burden on victims.
The High Court also adopted some aspects of the Working Paper’s short-term recommendations for the swift removal of NCII. The Working Paper recommended that platforms voluntarily use a token or digital identifier-based approach to allow for the quick removal of previously removed content. Complainants, who would be assigned a unique token upon the initial takedown of NCII, could submit URLs of any copies of the NCII along with the token. The search engine or platform would thereafter only need to check whether the URL contains the same content as the identified NCII linked to the token (a simplified sketch of such a token workflow appears below). The Court, in its order, requires search engines to adopt a similar token-based approach to “ensure that the de-indexed content does not resurface (¶61),” and notes that search engines “cannot insist on requiring the specific URLs from the victim for the purpose of removing access to the content that has already been ordered to be taken down (¶61)”. However, the judgment does not clarify whether this means that search engines are required to disable access to copies of identified NCII without the complainant identifying where they have been uploaded, and if so, how search engines are to remove repeat instances of identified NCII. The order only states that it is the responsibility of search engines to use tools that already exist to ensure that access to offending content is immediately removed.
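The following is a minimal sketch, under our own assumptions, of the token workflow the Working Paper describes: a token issued at the initial takedown is linked to the fingerprint of the removed content, so a later resubmission only requires comparing hashes rather than a fresh human review. The names (issue_token, verify_resubmission, TOKENS) are purely illustrative and do not come from the judgment or the Working Paper.

```python
import hashlib
import secrets

# Illustrative token registry, mapping each token to the fingerprint of the
# NCII content removed in the initial takedown.
TOKENS: dict[str, str] = {}

def issue_token(removed_content: bytes) -> str:
    """On the initial takedown, link a fresh complainant token to the
    fingerprint of the removed content."""
    token = secrets.token_urlsafe(16)
    TOKENS[token] = hashlib.sha256(removed_content).hexdigest()
    return token

def verify_resubmission(token: str, content_at_new_url: bytes) -> bool:
    """When the complainant later submits a new URL together with their token,
    the platform only needs to check that the content at that URL matches the
    fingerprint linked to the token."""
    expected = TOKENS.get(token)
    if expected is None:
        return False  # unknown token: no prior takedown on record
    return hashlib.sha256(content_at_new_url).hexdigest() == expected
```

The point of the token, on this sketch, is that the complainant never has to re-establish the illegality of the content: the token stands in for the earlier determination, and the platform’s only remaining task is a mechanical hash comparison.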
More broadly, the Court agreed with our stand that proactive filtering mandates against NCII may harm free speech, noting that “The working paper published by CCG records the risk that overbroad directions may pose (¶56)”, and further holding that “any directions that necessitates pro-active filtering on the part of intermediaries may have a negative impact on the right to free speech. No matter the intention of deployment of such technology, its application may lead to consequences that are far worse and dictatorial. (¶54)” We applaud the High Court’s recognition that general filtering mandates against unlawful content may significantly harm free speech.
Final directions by the court
The High Court acknowledged the use of hash-matching technology in combating NCII as deployed by Meta’s ‘Stop NCII’ program (www.stopncii.org) and explained how such technology “can be used by the victim to create a unique fingerprint of the offending image which is stored in the database to prevent re-uploads” (¶53). As noted above, our Working Paper also recognised the benefits of hash-matching technology in combating NCII. However, we also noted that such technology is open to abuse and must therefore be operationalised in a manner that is publicly transparent and accountable.
In its judgment, the Court issued numerous directions and recommendations to the Ministry of Electronics and Information Technology (MeitY), the Delhi Police, and search engines to address the challenge of circulation of NCII online. Importantly, it noted that the definition of NCII must include sexual content intended for “private and confidential relationships,” in addition to sexual content obtained without the consent of the relevant individual. This is significant as it expands the scope of illegal NCII content to include instances where images or other content have been taken with consent, but have thereafter been published or circulated without the consent of the relevant individual. NCII content may often be generated within the private realm of relationships, but subsequently illegally shared online.
The High Court framed its final directions by noting that “it is not justifiable, morally or otherwise, to suggest that an NCII abuse victim will have to constantly subject themselves to trauma by having to scour the internet for NCII content relating to them and having to approach authorities again and again (¶57).” To prevent this outcome, the Court issued the following directions:
- Where NCII has been disseminated, individuals can approach the Grievance Officer of the relevant intermediary or the Online Cybercrime Reporting Portal (www.cybercrime.gov.in) and file a formal complaint for the removal of the content. The Cybercrime Portal must specifically display the various redressal mechanisms that can be accessed to prevent the further dissemination of NCII;
- Upon receipt of a complaint of NCII, the police must immediately register a formal complaint in relation to Section 66E of the IT Act (punishing NCII) and seek to apprehend the primary wrongdoer (originator);
- Individuals can also approach the court and file a petition identifying the NCII content and the URLs where it is located, allowing the court to make an ex facie determination of its illegality;
- Where a user complains to a search engine against NCII content under Rule 3(2)(b) of the Intermediary Guidelines, the search engine must employ hash-matching technology so that future webpages with identical NCII content are also de-indexed and the complained-of content does not resurface. The Court held that users should be able to directly re-approach search engines to seek de-indexing of new URLs containing previously de-indexed content without having to obtain subsequent court or government orders;
- A fully-functional helpline available 24/7 must be devised for reporting NCII content. It must be staffed by individuals who are sensitised about the nature of NCII content and would not shame victims, and must direct victims to organisations that would provide social and legal support. Our Working Paper proposed a similar approach, where the independent body would work with organisations that would provide social, legal, and administrative support to victims of NCII;
- When a victim obtains a takedown order for NCII, search engines must use a token/digital identifier to de-index content and ensure that it does not resurface. Search engines also cannot insist on requiring specific URLs for removing access to content ordered to be taken down. Though our Working Paper recommended the use of a similar system, to mitigate the risks of proactive monitoring we suggested that (a) this could be a voluntary system adopted by digital platforms to quickly remove identified NCII, and (b) complainants would submit URLs of copies of identified NCII along with the identifier, so that the platform would only need to check whether the URL contains the same content linked to the token before removing access; and
- MeitY may develop a “trusted third-party encrypted platform” in collaboration with search engines for registering NCII content, and use hash-matching to remove identified NCII content. This is similar to the long-term recommendation in the Working Paper, in which we recommended that an independent body be set up to maintain such a database and work with the State and platforms to remove identified NCII content. We also recommended various safeguards to ensure that only NCII content was added to the database.
Conclusion
The need for repeated court orders to curtail the spread of NCII content represents a classic ‘whack-a-mole’ dilemma, and we applaud the High Court’s acknowledgement of and nuanced engagement with this issue. In particular, the High Court recognises the significant mental distress and social stigma that the dissemination of one’s NCII can cause, and attempts to reduce the burdens on victims of NCII abuse by ensuring that they do not have to continually identify and seek the de-indexing of new URLs hosting their NCII. The use of hash-matching technology is significantly preferable to broad proactive monitoring mandates.
However, our Working Paper also noted that it was of paramount importance to ensure that only NCII content was added to any proposed hash database, so that lawful content was not accidentally added to the database and continually removed every time it resurfaced. To ensure this, our Working Paper proposed several important institutional safeguards, including: (i) setting up an independent body to maintain the hash database; (ii) having multiple experts vet each piece of NCII content added to the database; (iii) requiring a judicial determination where NCII content had public interest implications (e.g., where it involved a public figure); (iv) ensuring that the independent body publishes regular transparency reports and conducts audits of the hash database; and (v) imposing sanctions on the key functionaries of the independent body if the hash database was found to include lawful content.
We believe that where hash databases (or any technological solutions) are utilised to prevent the re-uploading of unlawful content, these strong institutional safeguards are essential to ensure the public accountability of such databases. Absent this public accountability, it is hard to ascertain the effectiveness of such solutions, allowing large technology companies to comply with such mandates on their own terms. While the High Court did not substantively engage with the institutional mechanisms outlined in our Working Paper, we believe that the adoption of the upcoming Digital India Bill represents an excellent opportunity to consider these issues and further the discussion on combating NCII.