Legal identity and data protection in modern societies

By Risa Arai and Aishwarya Giridhar

In an increasingly interconnected digital age, legal identity plays a pivotal role in how we navigate societal structures and access basic rights and services. Legal identity is a recognized and validated assertion of one’s personal existence within the legal framework of a country. Just as vital is the overarching responsibility to protect the data that substantiates this identity: data breaches, unauthorized access, and misuse can have dire consequences, not just technologically but also socio-economically and politically. In a new report, UNDP and the Centre for Communication Governance at National Law University Delhi delve into the crucial nature of legal identity, why it is a linchpin of modern society, and the paramount importance of robust data protection for legal identity systems to safeguard everyone’s rights and freedoms.

Several international instruments, such as the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights, affirm every person’s right to a legal identity. Target 16.9 of the Sustainable Development Goals seeks to “provide legal identity for all, including birth registration, by 2030”. Legal identity ensures that individuals are recognized by the law and can avail themselves of the rights and benefits available to them; it also enables states to improve public policy and deliver better welfare services. 

Granting legal identity requires extensive personal data: names, birth dates, gender, and demographic information like residential addresses. Countries such as Estonia, Mexico, India, and Iceland have adopted advanced digital technologies, often incorporating biometric data such as fingerprints and iris scans. As governments build digital public infrastructures with digital legal identity as the foundation, inter-institutional data sharing grows. While these systems boost efficiency and service convenience, they also introduce challenges: they can increase privacy risks, elevating the threat of data breaches and unwarranted surveillance.

One of the primary mechanisms to mitigate privacy risks is to institute robust data privacy laws. 

As states roll out legal identity systems, it’s crucial to implement robust data protection laws that address the unique challenges these systems pose. UNDP and the Centre for Communication Governance at National Law University Delhi recently co-authored Drafting Data Protection Legislation: A Study of Regional Frameworks. The report is designed to equip countries with the necessary tools and context to draft privacy-protecting domestic data protection legislation. Drawing from global and regional regulations, academic insights, and domestic laws, it offers a detailed overview of essential data protection principles and encourages governments to consider the following aspects:

Definitions of key terms: Clear definitions of personal data and its sub-categories can help clarify the scope of applicability of a data protection framework and reduce ambiguity in interpretation. An inclusive definition of ‘personal data’ may ensure more comprehensive protection under the relevant framework.

Data protection principles: The core concepts underpinning all data protection frameworks have been developed over many decades. They place the individual at the centre of data protection and cover all stages of the life cycle of data, from before it is collected, to how it is used and stored. These principles are meant to hold those collecting and processing data accountable for their use of data, and to ensure that data processing takes place in a privacy-preserving manner.

Rights of data subjects: Rights are a core component of data protection frameworks because they empower data subjects to take control of their data and allow them to obtain redressal for privacy harms. Some rights are essential – for example, allowing individuals to confirm whether a data controller has collected and processed data relating to them. Other rights empower data subjects to take necessary actions once data has been processed, such as rectifying information or restricting data processing in certain circumstances. 

Children’s data: Children may be particularly vulnerable to risks from both governmental and private use of their personal data, particularly as education and other services increasingly shift online. It is important to consider age of consent requirements, age verification methods, varying levels of cognitive development, differing cultural contexts, and socio-economic backgrounds in framing regulations that protect children’s data.

Cross-border data flows: States and businesses often have a legitimate need to share data across national borders for economic and logistical purposes. This must be balanced with protecting the privacy and security of their citizens’ data. It is important for frameworks to institute mechanisms that hold those processing and using the data accountable.

The regulatory and enforcement structure: The enforcement of any data protection legislation depends in large measure on its regulatory structure. Though there are likely to be differences in design based on national and local requirements, frameworks must ensure that regulators are able to function independently and are transparent and accountable. They should also be empowered to take appropriate action when there is non-compliance.

Exemptions: While many frameworks commonly provide states with exemptions from data protection obligations for specific reasons like national security or law and order, it’s vital that these exemptions adhere to set standards. They should align with international human rights norms and be narrowly defined, ensuring they’re proportional to their goals.

We’ve crafted this guide as a comprehensive tool on data protection, catering to practitioners from international organizations and Member States to civil society, academics, and beyond. We hope it will help create robust frameworks that safeguard privacy and other human rights while supporting social and policy goals. As the digital age unfolds, our hope is that this guide serves as both a reference and a foundation for those at the forefront of safeguarding our digital futures and our rights to a legal identity.

*This post was first published as a blog by The UNDP on October 25, 2023. It has been cross-posted with the authors’ permission.

Navigating the Indian Data Protection Law: Part I – Duties of Data Principals and the Impact on Individual Rights

Editor’s note: This blog is a part of our ongoing Data Protection Blog Series, titled Navigating the Indian Data Protection Law. This series will be updated regularly, and will explore the practical implications and shortcomings of the Digital Personal Data Protection Act, 2023, and where appropriate, suggest suitable safeguards that can be implemented to further protect the rights of the data principals.

For a detailed analysis of the Indian data protection legislation, the comprehensive comments provided by the Centre for Communication Governance on the 2022 DPDP Bill and the 2018 DPDP Bill can be accessed here. For a detailed comparison between the provisions of the DPDP Act and the 2022 Bill, our comparative tracker can be accessed here. Moreover, we have also provided an in-depth analysis of individuals’ rights under the DPDP Act in the Data Protection 101 episode of our CCG Tech Podcast.

The Digital Personal Data Protection Act, 2023 (“Act”) seeks to balance the right of individuals to protect their data and the need for processing of such data. However, there are concerns regarding whether this balance is truly being struck. This post is the first part of a two-part blog analysing Section 15 of the Act which provides for duties of data principals. Part I will explain how the inclusion of provision on duties of data principals may affect the rights of individuals. 

Unpacking Duties

Section 15 of the Act provides that data principals (a) must comply with all applicable laws while exercising their rights under the Act, (b) must not impersonate another person while providing their personal data, (c) must not suppress material information while providing personal data for documents issued by the State, (d) must not register a false or frivolous grievance with a data fiduciary or the Data Protection Board (Board), and (e) must furnish verifiably authentic information while exercising their right to correction or erasure of data. While prima facie the goals sought to be achieved by Section 15 seem reasonable, the high monetary penalty and vague terminology leave room for arbitrary enforcement. Under the Act, a data principal’s failure to observe these duties may lead to a penalty of up to INR 10,000.

The imposition of duties and penalties upon data principals may act as an impediment to filing complaints with the Board and to exercising data principal rights under the Act. While the addition of an enforceable provision on duties is in itself concerning, specific provisions need to be examined closely to see how they may contradict the goals of data protection. 

Compliance with all applicable laws

Section 15(a) of the Act requires that individuals comply with the provisions of all applicable laws in India while exercising their rights under the Act. The scope of this duty is overbroad and vague. It does not give data principals sufficient clarity as to which laws they must not contravene to fulfil this duty. For example, would an outstanding traffic violation or an unrelated procedural non-compliance under the Companies Act be counted as non-compliance with ‘all applicable laws’ in India, triggering non-compliance under the Act? The Act fails to provide sufficient guidance for the interpretation of this broad terminology. 

Section 15(a) also fails to explain the link between non-compliance with ‘all laws’ and the exercise of data protection specific rights under the Act. Hence, the vagueness of this particular sub-section renders it ineffectual in fulfilling its intended objectives and leaves scope for arbitrary penalisation of individuals. In the event that an individual has failed to comply with any applicable law, irrespective of their intention, a monetary penalty of up to INR 10,000 may be imposed. Such punitive action may deter individuals from exercising their rights under the Act. 

False Grievances

In addition to Section 15(a), Section 15(d) provides that a data principal must not register a false or frivolous grievance or complaint with a data fiduciary or the Board. It fails to specify the factors for determination of when a complaint is “false or frivolous”. These broad and vague terms may disincentivise data principals from filing complaints, especially due to the high financial penalty attached to the failure to fulfill such duties.  

Due to low digital literacy, there are existing concerns about the ability of ordinary data principals to access the Board. Section 15(d) exacerbates such issues by failing to consider the power imbalances that exist between data principals and data fiduciaries. 

A data principal who believes that their personal data has been breached or their rights have been violated may approach the data fiduciary with a complaint and may ultimately be penalised for filing that grievance. Even if harm has occurred, or the complaint has been filed in good faith, the Board may impose penalties for frivolity. The only recourse the data principal will then have is to file a writ with an appropriate High Court, which may be a more expensive and time-consuming affair than the maximum penalty attached to non-adherence to Section 15. 

Impersonation & Suppressing of Information

The remaining provisions, which bar data principals from impersonating another person and from suppressing information while applying for any document issued by the State, also suffer from vagueness and a lack of clarity. For example, could an individual be penalised for creating parody or satirical social media accounts related to celebrities? Does this provision intend to bar individuals from creating fan accounts on social media? Currently, there is no clarity on these questions. Hence, in interpreting Section 15(b) on impersonation, it is necessary to consider factors such as the intention of the individual in question, the resulting harm, and the safeguards in place for freedom of speech and expression. 

Similarly, intention must be considered before penalising any individual for non-observance of Section 15(c), which provides that a data principal must not suppress any material information from the State. Due to the broad language of this provision, a user may be penalised even when incorrect information has been provided inadvertently. Such instances should specifically be excluded from the scope of the provision. For instance, individuals may not have correct information on their age or may have the wrong age recorded on the official documents they use to access senior citizen benefits. Would such situations be considered a breach of their duties? The provision fails to clarify.

Note: This is Part I of a two-part blog on the duties of data principals under the Act, from our ongoing Data Protection Blog Series.

Navigating the Indian Data Protection Law: The overlooked rights of persons with disabilities under the DPDP Act

By Tithi Neogi

Editor’s note: This blog is a part of our ongoing Data Protection Blog Series, titled Navigating the Indian Data Protection Law. This series will be updated regularly, and will explore the practical implications and shortcomings of the Digital Personal Data Protection Act, 2023 (“DPDP Act”), and where appropriate, suggest suitable safeguards that can be implemented to further protect the rights of the data principals. For a detailed analysis of the Indian data protection legislation, the comprehensive comments provided by the Centre for Communication Governance on the 2022 DPDP Bill and the 2018 DPDP Bill can be accessed here. For a detailed comparison between the provisions of the DPDP Act and the 2022 Bill, our comparative tracker can be accessed here. Moreover, we have also provided an in-depth analysis of individuals’ rights under the DPDP Act in the Data Protection 101 episode of our CCG Tech Podcast.

The recently passed DPDP Act has generated a great deal of commentary, both positive and negative. But this commentary overlooks one glaring omission: the DPDP Act fails to adequately safeguard the rights of persons with disabilities. 

This post highlights the key problems associated with the DPDP Act’s provisions on persons with disabilities, specifically – (1) the incompatibility of the DPDP Act with the Rights of Persons with Disabilities Act, 2016 (“Disability Act”), and (2) the reduced protection for persons with disabilities as data principals in the DPDP Act as compared to the Personal Data Protection Bill, 2019 (“2019 Bill”). 

The post concludes with solutions for how these problems can be addressed. Specifically, these include – (1) new rules to specify which categories of persons with disabilities need a lawful guardian to act as data principal on their behalf, and (2) safeguards to prevent lawful guardians from exerting undue influence on persons with disabilities. These measures will make the DPDP Act more compatible with the needs of persons with disabilities. 

Inconsistency between the DPDP Act and the Disability Act

The lack of robust safeguards and dilution of consent of persons with disabilities in the DPDP Act will likely result in the loss of autonomy that they enjoy under the Disability Act. These concerns are explored further below – 

(a) Overbroad Definition of Data Principal: In the DPDP Act, the definition of ‘data principal’ for a person with disability includes their lawful guardian acting on their behalf. The blanket nature in which lawful guardians are treated as data principals for all persons with disabilities is confusing. It lacks much needed specificity and nuance. 

Such a definition sheds no light on whether the inclusion of lawful guardian applies to all persons with disabilities, or only for persons with benchmark disabilities/persons with disability having high support needs, as categorized in the Disability Act. A literal interpretation of the definition of data principal in the DPDP Act suggests that all persons with disabilities are incapable of making legally binding decisions and need a representative data principal. The Disability Act, however, clearly categorizes disabilities on the basis of capacity to make legally binding decisions.

(b) Unclear Nature of Guardianship: The DPDP Act fails to clarify whether the nature of guardianship for a guardian acting as a data principal is limited or perpetual guardianship. 

The Disability Act, on the other hand, is much clearer. It provides for limited guardianship (under Section 14), which is a joint decision-making system that relies on mutual understanding and trust between the person with disability and their lawful guardian. Limited guardianship operates for a specific purpose within a specific period. The will of the person with disability is of paramount importance in this arrangement. As per the Disability Act, any guardian appointed for a person with disability under any law is deemed to function as a limited guardian. Thus, a lawful guardian enjoying the same rights as a data principal under the DPDP Act exceeds the powers bestowed upon them by the Disability Act.

(c) No Distinction between Child and Person with Disability: Section 9 of the DPDP Act lays down provisions for processing of personal data of children and persons with disability. The manner of processing data for children and persons with disability is the same, i.e. by obtaining verifiable consent of parent/lawful guardian. 

The lack of a distinction in processing of data of children and persons with disabilities results in equating the ability to consent of a person with disability to that of a child. This inevitably takes away the autonomy of a person with disability. It also equates the status of a lawful guardian of a person with disability to that of a parent, which contradicts the fiduciary nature of relationship between the person with disability and their limited guardian, as provided in the Disability Act.

Section 13 of the Disability Act holds the government accountable for ensuring that persons with disabilities enjoy legal capacity on an equal basis with others in all aspects of life. The DPDP Act fails to recognise persons with disabilities as equal in capacity to persons without disabilities.

(d) Difficulty in Pursuing Grievance Redressal: Persons with disabilities might face challenges relating to grievance redressal under the DPDP Act. By potentially imposing lawful guardians as data principals on all persons with disabilities, the DPDP Act creates a significant hurdle for them. This is because it might mean that they – persons with disabilities – cannot register a grievance on their own. If this is so, then having to go through a lawful guardian to register a grievance will likely deter them from reporting a data breach to the relevant authorities, and it makes the process of registering a grievance more cumbersome. This denudes the right of a person with disability to access justice, which is otherwise guaranteed under the Disability Act (under Section 12).

Safeguards in the DPDP Act versus the 2019 Bill

The DPDP Act has provisions specific to persons with disability, unlike the 2019 Bill. However, these provisions are insufficient to protect the interests of persons with disabilities.

Moreover, the 2019 Bill did not differentiate between data principals on the basis of disability. Persons with disabilities enjoyed the same rights as data principals as persons without disabilities. Further, the categorization of certain data as sensitive personal data, including health data and genetic data, and the specific safeguards for such data in the 2019 Bill were better safety nets for the privacy rights of persons with disabilities. The identification of a separate category of sensitive personal data and related safeguards reduced the risk of misuse of the data of persons with disabilities. The absence of the category of sensitive personal data, and the dilution of the rights of persons with disabilities as data principals in the DPDP Act, could pose a significant risk to the data of persons with disabilities.

Recommendations and Conclusion

The DPDP Act lacks clarity on whether all or only some persons with disabilities require a lawful guardian. Consequently, regulations in this regard, specifying the sub-category of persons with disabilities for whom limited guardians will act as data principals, need to be framed. This needs to be done in line with the Schedule on Specified Disabilities given in the Disability Act. 

Secondly, rules are needed to safeguard the interests of a person with disability in case of a conflict of interest between them and their lawful guardian. A consultation mechanism whereby the lawful guardian is restricted from consenting on the behalf of a person with disability without consulting them first, or by exerting undue influence on the person with disability, needs to be framed. Section 13(5) of the Disability Act specifically mandates that persons providing support to persons with disability must (a) refrain from exerting undue influence and (b) respect the autonomy, dignity and privacy of a person with disability.

Thirdly, identifying health data as sensitive personal data and providing additional safeguards for such data is crucial to prevent misuse of data of persons with disabilities. 

Navigating the Indian Data Protection Law: Disproportionate state access to personal data

By Shobhit S.

Editor’s note: This blog is a part of our ongoing Data Protection Blog Series, titled Navigating the Indian Data Protection Law. This series will be updated regularly, and will explore the practical implications and shortcomings of the Digital Personal Data Protection Act, 2023 (“DPDP Act”), and where appropriate, suggest suitable safeguards that can be implemented to further protect the rights of the data principals. For a detailed analysis of the Indian data protection legislation, the comprehensive comments provided by the Centre for Communication Governance on the 2022 DPDP Bill and the 2018 DPDP Bill can be accessed here. For a detailed comparison between the provisions of the DPDP Act and the 2022 Bill, our comparative tracker can be accessed here. Moreover, we have also provided an in-depth analysis of individuals’ rights under the DPDP Act in the Data Protection 101 episode of our CCG Tech Podcast.

A brief genesis of the DPDP Act

On August 11, 2023, after a protracted pre-legislative saga involving five drafts in six years, India’s first personal data protection regime, the DPDP Act, was enacted. 

The Aadhaar project, involving the systematic accumulation of citizens’ personal data by the state, sparked public discourse surrounding legal protection of informational privacy. Prompted initially to consider the legality of Aadhaar, the Supreme Court emphatically reaffirmed the fundamental right to privacy in Indian constitutional jurisprudence, in its hallowed decision in August 2017 (“Puttaswamy I”). 

Recognising privacy as an innate aspect of human dignity and autonomy, the Court held that any interference with it must be justified against the touchstone of ‘proportionality’. According to the majority opinion, such interference must be: (i) sanctioned by law; (ii) in furtherance of a legitimate purpose; (iii) proportionate in extent to the purpose sought to be achieved; and (iv) accompanied by procedural guarantees against abuse. While inconsistencies have been noted (here and here) in the application of the proportionality standard by the Court, it has become central to fundamental rights adjudication, signalling a putative shift from “a culture of authority to a culture of justification”. 

The Court identified ‘informational autonomy’, i.e., an individual’s control over the dissemination of information personal to them, as a key facet of their privacy. Accordingly, it expressed the expectation that the state would institute a robust data protection framework, aligned with principles enunciated by it. This expectation was restated with added urgency in September 2018, in the Court’s judgement upholding Aadhaar (“Puttaswamy II”). In fact, the judgement manifestly rested on the expectation that the state would imminently enact such a framework, following the Srikrishna Committee’s report in July 2018.

Viewed thus, the DPDP Act represents a long-awaited response to the Court’s directions to protect individuals’ informational privacy, both against the state and private entities. However, it is striking for its disavowal of proportionality (and all its judicial constructions), in its application to the state as a data fiduciary. 

Broad grounds for confidential processing of personal data

Under the DPDP Act, the state can process personal data confidentially, i.e., without the individual’s consent or knowledge (prior or subsequent), towards any state function under any extant law (Section 7(c)). Even without a legally-provided function, it can do so in the ambiguous interests of national sovereignty, integrity or security (Section 7(c)). It can use data collected in any other context by any of its notified agencies, to purportedly facilitate the issuance of benefits, subsidies, certifications, licenses, or permits (Section 7(b)).1 Moreover, it can retain any data in perpetuity, even where the original purpose stands served (Section 17(4)). 

The breadth and malleability of the grounds on which the state can confidentially process personal data invite the possibility of “function creep” – they enable the state to use personal data collected for a specific purpose towards any other, without the individual’s knowledge and without any other mechanism for accountability. They also magnify recognised privacy risks associated with the integration of personal datasets at scale and with the profiling of citizens by the state. Notably, the power to process data confidentially can be exercised by any “instrumentality of the state” – an expression interpreted liberally by the Court to include even entities such as statutory corporations. While this broad interpretation has generally aided in the invocation of fundamental rights against these ‘instrumentalities’, it empowers them to collect and use personal data in complete opacity under the DPDP Act. 

Admittedly, there are circumstances in which alerting an individual before or upon processing their data may be counterproductive, say, where such processing is to respond to an imminent threat to public security. Nevertheless, since confidential use of individuals’ personal data interferes with their privacy, any grounds for such use must be proportional to the legislative aim sought to be achieved. But in enabling practically limitless and unaccountable processing by the executive, the DPDP Act sidesteps any such consideration. 

A law more cognisant of proportionality would, first, narrowly define the constitutionally permissible ends that the state may pursue via confidential processing. It would require that confidentiality have a rational nexus with the ends and be functionally suitable to achieve them – this would arguably preclude confidential use of personal data for provision of public services, where public scrutiny is particularly crucial. Further, it would require the state to consider alternative means, which are less intrusive, to achieve such ends. For example, if the state envisages confidentially processing citizens’ biometric data for delivery of benefits, the law would require it to consider whether verification of beneficiaries for such delivery can be undertaken using less sensitive forms of personal data. 

Where (and only where) alternative means are not available or feasible, the law would provide narrow grounds for confidential processing, considering the importance of the desired ends and only to the extent necessary to achieve them. In such cases, it would include procedural safeguards to protect individuals’ privacy against arbitrary state interference, per Puttaswamy I. Illustratively, such safeguards could include a requirement to report instances of confidential processing to an independent authority, or to erase personal data after the underlying purpose is served.

Blanket exemption for notified agencies 

In addition to provisions that enable the state to confidentially use personal data for vague purposes, the DPDP Act allows exemption of certain state agencies and instrumentalities from all obligations under it. The executive can notify any such entity, in the interests of national sovereignty and integrity, security of the state, friendly relations with foreign states, maintenance of public order or preventing incitement to any cognisable offence relating to any of these (Section 17(2)(a)). 

Much like the provisions enabling opaque data processing, this exemption (unlike the 2019 Bill (Clause 35) and the 2018 Bill (Clause 43)) does not evince any attempt to balance the state’s powers with the legislative interests sought to be guarded. It enumerates ill-defined interests to enable privacy incursions, without requiring the state to demonstrate any particular threat to the stated interest. It does not provide any legislative guidance on the nature of the exempted agencies that may be notified. Further, it empowers such entities to process data without upholding any of the other duties that ordinarily attach to data fiduciaries. These include duties integral to secure data processing (and wholly unrelated to the interests sought to be protected), such as the duty to institute security measures to prevent breaches (Section 8(5)) and to protect children’s data (Section 9). In allowing such carte blanche, Section 17 effectively discharges notified entities from their fiduciary relationship with data principals – a relationship considered intrinsic to the processing of an individual’s personal data.

Concluding remarks

The analysis above points to the ways in which the DPDP Act fails to meaningfully protect individuals’ informational privacy against the state.2 Styled as a data protection framework, the Act affirmatively facilitates disproportionate encroachment into the private realm, and dubious surveillance measures akin to those struck down by the Court in 1962 (Kharak Singh v State of UP).

It is in recognition of such legally-enabled abuse that the Court recently emphasised the requirement of ‘sufficient safeguards’, in assessing the proportionality of any law (Ramesh Chandra v. State of UP). The decision provides a sound basis for challenging laws that invite even the possibility of abuse, where actual instances cannot (yet) be demonstrated. As Bhatia notes, it acknowledges that abuse usually “takes place not in open contravention of the law, but under the cover of a law that leaves wide discretion for executive action within its interstices”. Hopefully, this exposition of proportionality would assist courts in (at least) reading down the DPDP Act and thereby, reducing the risk of abuse embedded in it.

______________________

1 Pertinently, Section 7(b) does provide scope for further standards for processing of such data, to be notified under a policy issued by the Central Government. Further, Section 40(1)(e) allows the Central Government to notify specific subsidies, benefits, services, certificates, licences or permits for the provision of which personal data may be processed under Section 7(b).

2 As we have argued here and here, these concerns are exacerbated by the lack of regulatory powers and the lack of independence of the statutory data protection authority.

Navigating the Indian Data Protection Law: Highlighting the Key Misses in the DPDP Act

By Vignesh Shanmugam 

Editor’s note: This is the first blog in CCG’s Data Protection Blog Series, titled Navigating the Indian Data Protection Law. This series will be updated regularly, and will explore the practical implications and shortcomings of the DPDP Act, and suggest suitable safeguards that can be implemented to further protect the rights of the data principals, where appropriate. For a detailed analysis of the Indian data protection legislation, the comprehensive comments provided by the Centre for Communication Governance on the 2022 DPDP Bill and the 2018 DPDP Bill can be accessed here. For a detailed comparison between the provisions of the DPDP Act and the 2022 Bill, our comparative tracker can be accessed here. Moreover, we have also provided an in-depth analysis of individuals’ rights under the DPDP Act in the Data Protection 101 episode of our CCG Tech Podcast.

The Digital Personal Data Protection Act (“DPDP Act”) is India’s first personal data protection legislation. It is the result of over six years of deliberations, public discussions, and recommendations from the judiciary, parliamentary committees, legal and industry experts, and civil society members on the need for a data protection framework.

Several stakeholders had also proposed recommendations to the previous iterations of the personal data protection bills (namely the “2018 Bill”, “2019 Bill”, and “2022 Bill”). Although the DPDP Act retains the broad framework of the 2022 Bill, it has made notable changes to certain provisions. Since it has been enacted into law, it is important to highlight the key data protection principles and safeguards that are missing in the DPDP Act. 

The absence of these provisions directly affects the rights of data principals and must be addressed, either through subsequent amendments to the DPDP Act or through delegated legislation. We have listed below a few crucial concepts and provisions that ought to be included in a data protection legislation to effectively protect data principals’ rights.

  • Additional safeguards for sensitive personal data: 

The term ‘sensitive personal data’ is missing from the DPDP Act. This term refers to certain ‘sensitive’ categories of personal data, such as biometric data, health data, and data recording a person’s religion, caste, sex, and sexual orientation. Various data protection frameworks provide additional safeguards for the processing of sensitive personal data, as any misuse of such data would pose significant risks to the data principals. 

Section 10(1)(a) of the DPDP Act provides ‘sensitivity of the data processed’ as a determining factor for notifying certain data fiduciaries as significant data fiduciaries. However, the Act does not provide any procedures for assessing the ‘sensitivity of data’, thereby making this provision ambiguous and unclear. 

The DPDP Act also permits the processing of personal data for certain legitimate uses without seeking explicit consent from the data principals (under Section 7). It further provides wide exemptions to the Central Government, instrumentalities of the State, and certain data fiduciaries from the applicability of various provisions of the Act (under Section 17).

Therefore, the DPDP Act must provide additional safeguards for the processing of sensitive personal data, to protect such data from being misused under the wide exemptions present in the Act. The DPDP Act should incorporate appropriate provisions to balance the rights of data principals against the exemptions provided to the State, significant data fiduciaries, and other entities.

  • Definition of ‘harm’:

Notably, the DPDP Act has removed the definition of ‘harm’, and all references to it in the Act. Without an explicit definition, the data protection board and the courts would have a higher burden to provide the scope and definition of ‘harm’ when adjudicating data protection infractions. 

In Justice K. S. Puttaswamy vs. Union of India, the Supreme Court reaffirmed the right to privacy as a fundamental right which is inextricably linked to various aspects of a person’s life, including their dignity, bodily integrity, and decisional autonomy. The DPDP Act should similarly adopt an expansive definition of harm, which protects the different aspects of privacy that are recognised by the Supreme Court. 

There are several instances of data fiduciaries processing personal data based on uninformed consent, or using manipulative and deceptive tools or ‘dark patterns’ to dilute a data principal’s authority. Without a clear definition of ‘harm’, many data principals will be unable to recognise these harms arising out of privacy violations. A clear definition would also assist the data protection board and the courts in determining the quantum of ‘harm’ and the quantum of penalties in such cases in a more uniform manner.

  • Provisions on data anonymisation and re-identification:

The DPDP Act does not have any provisions relating to anonymisation of personal data and re-identification of anonymised data. Although these provisions may be incorporated into the proposed Digital India Act, it is essential for them to also be included in the DPDP Act. 

Artificial Intelligence and data processing systems are currently capable of processing and linking separate datasets containing anonymised data and/or personal data. Using these methods, AI systems have re-identified individual users or user groups from anonymised datasets (such as datasets from social media, e-commerce, and ride-hailing platforms). This has led to significant data breaches, including the identification of the data principals’ age, race, religion, sex, sexual orientation, and other personal attributes from anonymised data.
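
The mechanics of such re-identification are worth illustrating. Below is a minimal, hedged sketch (in Python with pandas) of a simple linkage attack, in which an “anonymised” dataset is joined with a publicly available dataset on shared quasi-identifiers; all datasets, values, and column names here are hypothetical and chosen only for illustration, not drawn from any real breach.

```python
# A minimal, illustrative sketch of a linkage ("re-identification") attack.
# All data values and column names below are hypothetical.
import pandas as pd

# An "anonymised" dataset: direct identifiers removed, but quasi-identifiers
# (postal code, birth year, gender) retained alongside a sensitive attribute.
anonymised = pd.DataFrame({
    "zip_code":   ["110001", "110001", "560034"],
    "birth_year": [1961, 1988, 1975],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["diabetes", "hypertension", "asthma"],  # sensitive attribute
})

# A separate, publicly available dataset (e.g. a leaked customer list or
# social profile dump) that still carries names with the same quasi-identifiers.
public = pd.DataFrame({
    "name":       ["A. Sharma", "B. Rao", "C. Iyer"],
    "zip_code":   ["110001", "110001", "560034"],
    "birth_year": [1961, 1988, 1975],
    "gender":     ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to the sensitive
# attribute, defeating the "anonymisation" of the first dataset.
reidentified = anonymised.merge(public, on=["zip_code", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

The sketch shows why removing direct identifiers alone is not enough: as long as quasi-identifiers survive in the released data, anyone holding an auxiliary dataset can perform the join above, which is why legal safeguards against re-identification matter.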

In light of these concerns, it is crucial for the DPDP Act to include provisions relating to data anonymisation and re-identification. The Act must include additional safeguards for data principals against the re-identification of anonymised data, and provide redressal for any privacy harms arising from de-anonymised data.

  • Provisions on data portability:

There have been indications that provisions relating to data portability would be incorporated into the proposed Digital India Act. However, it is crucial for the data principals’ right to data portability to be recognised under the DPDP Act as well. This right allows data principals to obtain their personal data from data fiduciaries and use it for their own purposes. It also enables them to transfer their personal data from one data fiduciary to another (subject to certain restrictions and the feasibility of transferring the data). The right to data portability further promotes the interoperability of the personal data being processed or stored. By not recognising this right, the DPDP Act restricts the autonomy of data principals and their ability to access their own personal data and choose the manner in which it is processed.

Conclusion

The preamble of the DPDP Act states that it recognises “the right of individuals to protect their personal data”. This intent can be fulfilled by ensuring that the DPDP Act recognises certain key data protection principles, as well as the various aspects of the right to privacy reaffirmed by the Supreme Court in Puttaswamy.

By recognising sensitive personal data and anonymised data, the DPDP Act will minimise the risks of misuse of personal data. By providing a definition of ‘harm’ consistent with the Supreme Court’s jurisprudence, the DPDP Act will reduce the burden on the data protection board and the courts in defining it on a case-to-case basis. Finally, the right to data portability will further empower the data principals with the agency and control to seamlessly migrate their personal data from one digital service to another without losing their records in the process.

Hence, it is essential for the legislators to introduce amendments to the DPDP Act or formulate appropriate rules under the Act to protect the rights of the data principals against the concerns highlighted above.

Navigating the Indian Data Protection Law: A Comparative Tracker for the Digital Personal Data Protection Act

On 11 August 2023, the Indian Parliament enacted the Digital Personal Data Protection Act, 2023 (“2023 Act”). The 2023 Act has retained the general framework of the Digital Personal Data Protection Bill, 2022 (“2022 Bill”). However, there are some notable and crucial changes between the 2023 Act and the 2022 Bill.

Last year, we prepared a detailed tracker to record the changes made in the 2022 Bill against the earlier iteration of the Bill (“2019 Bill”). This served as a helpful reference and a quick guide for analysing the two Bills, and comparing the differences. The tracker comparing the 2022 Bill and the 2019 Bill can be accessed here.

We have now prepared a comparative tracker to record the differences between the 2023 Act and the 2022 Bill. Similar to the previous tracker, we have analysed each clause and sub-clause of the 2023 Act and compared it to the corresponding provisions of the 2022 Bill. We have provided the full text of the provisions of the 2023 Act and the 2022 Bill (highlighting the differences) and a brief summary of changes under the 2023 Act. We have also provided additional notes on the relevant changes from the 2019 Bill where necessary.

In addition to the comparative tracker, we will also be publishing a series of blogs on Navigating the Indian Data Protection Law. Since the 2023 Act has been enacted into law, this blog series will explore the practical implications and shortcomings of the Act, and suggest suitable safeguards that can be implemented to further protect the rights of the data principals.

The first blog in this series will highlight certain key data protection principles and provisions that are absent in the 2023 Act, and briefly explain the importance of incorporating them, either through amendments or by way of delegated legislation.

The updated tracker can be accessed here.

(The comparative tracker was compiled by Ananya Moncourt, Tejaswita Kharel, and Vignesh Shanmugam, and edited by Shobhit.)

Big Tech vs News Publishers: A Rights-Based Perspective

Angelina Dash

As news consumption shifts online, a codependent relationship has developed between news publishers and Big Tech giants like Google and Meta. This shift has been accompanied by many users relying on online platforms like Facebook for news. As a result, Big Tech acts as an intermediary that generates traffic for news websites, while taking a portion of ad revenue in exchange. The problem arises when Google allegedly takes a higher percentage of the ad revenue generated (the extent of which is uncertain due to an opaque adtech ecosystem). A common response to this has been to mandate ad revenue sharing between Big Tech giants and news publishers. The most recent instance is the Canadian Online News Act (“the law”), which requires Big Tech giants to pay news publishers for news content published on their platforms. Here’s why this may not be the best solution. 

The Canadian Tussle

Google has recently been in the news for pulling its Google News service from Canada in response to the law. Prior to this law, several news publishers had filed lawsuits against Google on multiple grounds. These grounds included Google’s monopoly in the adtech sector and the ad revenue gap arising out of this monopoly. Additionally, copyright concerns arose over the use of snippets of text from the linked article within the search results, due to which news publishers lost traffic to their websites. 

Consequently, the law was enacted in Canada. Of late, this has been a common thread across countries, with similar iterations in Australia and France. It is currently being considered in India as well. 

What are the implications of this?

The primary issue that arises is free speech being hindered by the law itself. The Canadian law lays down a bargaining process for tech giants and news publishers to arrive at a fair revenue-sharing model. However, such a negotiation would be riddled with power imbalances in a scenario where an unsatisfied tech giant may simply threaten to pull its services from the country. Such negotiations would result in smaller publishers with asymmetrical bargaining power scrambling to secure a deal regardless of how equitable it is. Additionally, tech giants may prefer deals with established publishing houses that have better funding or stronger political affiliations. Since such preferential treatment may lead to skewed news reporting, the law essentially makes tech giants the final arbiter not only of which news publishers are heard, but also of what news is worthy of being heard. 

The Canadian law aims to bridge this gap through non-discrimination provisions, as well as mediation and arbitration as a backstop. However, a lack of transparency impedes the efficacy of these provisions. The law protects against disclosure of confidential information. This does not make the entire negotiation opaque by default. However, parties have the liberty to designate certain aspects of the negotiation as confidential information which would not be disclosed to the public, including other news publishers. This makes it virtually impossible for other news publishers to establish whether discrimination or preferential treatment has taken place when they themselves enter the bargaining process. 

Moreover, the informational diversity and free speech restricted through the provisions of such laws are further limited when tech giants retaliate and “pull their services” from the country. What this effectively means is that when a user looks for a news item through the search engine or within a platform, they will only be shown links from other countries’ news websites and not from Canada. The only way they can access Canadian news sites is if they type the URL directly, or access the pages through the news publisher’s website, app, or online subscriptions.

Tech giants pulling their services from a country not only impacts the users’ right to know but is also detrimental for news publishers themselves. This was the case in Germany, where a German publisher pulled its content from Google News services but had to rejoin due to a reduction in traffic generation. Moreover, fewer people today are willing to pay for online subscriptions, sounding the death knell for smaller news publishers who rely extensively on subscriptions for funding. 

What does this mean for India? 

A hot-button issue like ad revenue sharing is also being considered under the broader framework of the upcoming Digital India Bill. This comes in the wake of the pleas filed with the Competition Commission of India (“CCI”) against Google by the Digital News Publishers Association and others, on grounds of ad market monopoly, inadequate remuneration, and a failure to disclose ad revenue data and the basis for deciding the quantum of revenue distribution. This has resulted in the CCI ordering a probe into the matter. Additionally, the Australian MP Paul Fletcher recently lauded the Australian model, confident that such a model would be equally, if not more, successful in the Indian market. This is because the size of the Indian population would give the Indian market better bargaining power while negotiating with tech giants. 

If such a law is enacted in India and Google then retaliates by pulling its services out of the country, free speech would similarly be hindered, and curbing misinformation would be virtually impossible. Not only would this impede access to private, non-partisan fact-checkers, but even in a scenario where the Press Information Bureau remains the official fact-checker, people would simply have fewer avenues to verify the news they receive. This is especially significant in the backdrop of the 2024 general elections in India. For a country dealing with voter misinformation challenges, access to local news which is equal parts trustworthy, linguistically accessible, and accurate becomes imperative to uphold democracy. 

Even if the above scenario is avoided by ensuring that tech giants continue their services in India through bargaining and negotiating “equitable” deals, what will inevitably occur is what has already ensued elsewhere – the same backroom lobbying and smaller news publishers in jeopardy – unless adequate legislative and policy safeguards are put in place. 

A Way Forward

Google itself has come out with measures like the Google News Initiative grants and Google News Showcase partnerships, which assist news publishers in growing ad revenue and fighting misinformation. However, these only perpetuate many of the pre-existing gaps, including the confidentiality of the terms of such agreements. 

At this juncture, in the absence of definitive solutions, there are certain standards that India must adhere to while resolving the tussle between Big Tech and news publishers. These standards include transparency, sustainability, free speech, equity in compensation, informational diversity, and upholding the interests of smaller publishers. These were some of the goals behind a law envisioned by New Zealand. Collective bargaining is also a viable option, with open communication allowing several news publishers to jointly leverage better deals with tech giants. 

Any such law, if enacted in India, must ensure that transparency is embedded into it in a manner similar to the Digital Services Act and the Digital Markets Act, particularly in terms of accountability for the adtech sector and revenue distribution data, as well as the commercial negotiations between news publishers and tech giants. Such a disclosure mechanism could take the form of legislation mandating some form of transparency output that is made publicly available after the bargaining process. This could include agreed-upon metrics comprising the broader terms of the negotiation process and, where possible, the division of revenue-sharing percentages. Timelines can also be mutually decided upon by regulators, news publishers and online platforms to ensure these outputs are disclosed on a regular basis, whether annually or semi-annually. Some of these aspects of transparency have been covered by the Canadian law. 

Undoubtedly, any such law will need to be contextualised to the Indian backdrop, with emphasis on the dual rights of the freedom of speech of news outlets and the readers’ right to know and receive information. It must also provide clarity on what encompasses the term “news publisher”, because the broad ambit of the term could result in individual content creators being negatively impacted. The law on fair and equitable compensation to news publishers in the digital ecosystem is still at a nascent stage globally. As India envisages its own variant of such a law, lessons in what to do and, more importantly, what not to do, must both be learnt from Canada and other jurisdictions.

The United Nations Ad-hoc Committee for Development of an International Cybercrime Convention: Overview and Key Observations from Week II of the Fifth Substantive Session

Sukanya Thapliyal

In Part I of the two-part blog series, we briefed our readers on the developments that took place in the first week of the Fifth Session of the Ad-Hoc Committee. In Part II of the series, we aim to capture the key discussion on provisions on (i) technical assistance, (ii) preventative measures, (iii) final provisions and (iv) the Preamble.

  1. Provisions on Technical Assistance:

The Chapter on Technical Assistance lists provisions including the general principles of technical assistance and provisions setting out its scope (training and technical assistance, exchange of information, and implementation of the Convention through economic development and technical assistance). The provisions listed under this Chapter highlight the importance of technical assistance and capacity building for developing countries. Further, the provisions also place obligations and responsibilities on State Parties to initiate, develop and implement the widest measure of technical assistance and capacity-building, including material support, training, and the mutual exchange of relevant experience and specialised knowledge, among others. 

All of the Member Countries and non-member observer States were in agreement on the importance of the Chapter on technical assistance as an essential tool in combating and countering cybercrime. Technical assistance and capacity building help develop resources, institutional capacity, policies and programmes that mitigate and prevent cybercrime. A number of developing countries, including Iran, China, Nigeria and South Africa, provided suggestions such as the inclusion of “transfer of technology” and “technical assistance” in the existing text of the provisions in order to effectively broaden the scope of the chapter. 

On the other hand, several developed countries, including the United Kingdom, Germany, Japan, Norway, and Australia emphasised that provisions relating to technical assistance and capacity building should be voluntary in nature and should avoid an overly prescriptive approach. It should rather be based on mutual trust, be demand-driven, and correspond to nationally identified needs and priorities. These State Parties accordingly provided alternative provisions on similar lines for the said Chapter for the consideration of Member Countries and the Chair. 

The fifth session of the Ad-Hoc Committee witnessed advanced discussions on technical assistance. Previously, technical assistance was discussed in the third session of the Ad-Hoc Committee, where discussions primarily revolved around the submissions and proposals from the Member Countries and non-member observer States. The Consolidated Negotiating Document (CND) presented ahead of the fifth session was well articulated and neatly organised into various provisions outlining the scope and mechanisms for technical assistance and capacity building to meet the objectives of the Convention.

  2. Provisions on Preventative Measures

The provisions charted out under the Chapter on Preventative Measures (Articles 91 to 93 of the CND) include general provisions on prevention, the establishment of authorities responsible for preventing and combating cybercrime, and the prevention and detection of transfers of proceeds of cybercrime. The chapter underscores the role of effective preventative measures and the substantial impact of these measures in attaining the objectives of the proposed convention and reducing the immeasurable financial losses incurred by States due to cybercrime. 

A majority of State Parties signalled their support for the inclusion of the chapter on Preventative Measures. In addition, non-member observer States and Member States including the European Union, the Netherlands, the United Kingdom, Australia, New Zealand, Canada and the United States of America made interesting proposals on building effective and coordinated policies for the prevention of cybercrime. These Member Countries argued in favour of broadening the current understanding of the term “vulnerable groups”, the inclusion of references to international human rights, and developing, facilitating and promoting programmes and activities to discourage persons at risk of committing cybercrime.  

There were interesting proposals aimed at strengthening cooperation between law enforcement agencies and relevant entities (the private sector, academia, non-governmental organizations and the general public) to counter gender-based violence and mitigate the dissemination of child sexual abuse and exploitation material online. The Member Countries also supported the proposal for Offender Prevention Programmes aimed at preventing (repeated) criminal behaviour among (potential) offenders of cyber-dependent crime.

Member Countries such as China argued in favour of the inclusion of classified, tiered measures to provide multi-level protection schemes for cybersecurity. They also called for legislative and other measures to require service providers in their respective territories to take active preventive and technical measures. 

The discussions undertaken in the fifth session of the Ad-Hoc Committee were based on the text provided under the CND in the form of concrete provisions, wherein various participants provided their detailed submissions on the text. The session also witnessed new proposals from different Member Countries, such as multi-level protection schemes for cybersecurity, a 24/7 network, and preventive monitoring to detect, suppress and investigate crimes in a timely manner.

  1. Final Provisions

The Chapter on Final Provisions (Articles 96 to 103 of the CND) listed crucial provisions, namely the implementation of the Convention, its relationship with protocols, the settlement of disputes concerning the interpretation or implementation of the Convention, and the signature, ratification, acceptance, approval of, and accession to the Convention. The CND also included provisions relating to the Convention's entry into force and the procedure for its amendment.

Member States and non-member observer States unanimously recognised the importance of the provisions listed under the Chapter on Final Provisions. Non-member observer States and Member Countries, including the United States of America, Singapore, the European Union, and others, emphasised that the provisions listed under the CND should conform with existing legal instruments and regional conventions.

Member Countries such as China and Russia also recognised the importance of existing legal frameworks. However, these countries reminded the State Parties that comprehensiveness and universality are the twin goals of this Convention, and therefore stressed the need for a “harmonious” or “mutually reinforcing” approach towards existing instruments.

Besides this, the Member States expressed divergent opinions on the minimum number of ratifications required for the Convention to come into force. Member Countries including the United States, Norway, New Zealand, Singapore, and Canada opted for at least 90 ratifications. Member Countries including Russia, Egypt, China, Brazil, India, and Nigeria supported 30 ratifications. Japan, the United Kingdom, the European Union, Ghana, and others considered 40 to 50 ratifications reasonable for the proposed Convention to come into force.

The Member Countries supporting a higher ratification threshold submitted that the support of a large number of Member States is indispensable for the success of the prospective Convention. On the other hand, the Member Countries supporting 30 ratifications emphasised the urgency of action against cybercrime and therefore favoured a lower threshold so that the Convention could come into force as soon as possible.

Aside from this, Member Countries such as Mexico floated a proposal to devise and incorporate Technical Annexes to ensure that the Convention adapts and responds adequately to new and emerging challenges. The proposal garnered significant support from other State Parties. 

  1. Preamble of the Convention

The CND tabled for the fifth session also featured a draft Preamble for the Convention. Member Countries and non-member observer States unanimously agreed on the inclusion of a Preamble in the prospective Convention, maintaining that the Preamble is an integral part of the Convention and sets out its purpose and intention. 

At the same time, several Member Countries stated that the draft Preamble provided under the CND could be improved further to bring greater clarity, and accordingly offered a wide range of suggestions to that end. 

Delegations such as CARICOM, Norway, the Dominican Republic, Kenya, and Brazil suggested that the Preamble should highlight the challenges and opportunities (including the negative economic and social implications) that countries face with regard to information and communications technologies. Member States including Mexico, New Zealand, Singapore, and others proposed that the Preamble of the CND refer to the promotion of an open, secure, stable, accessible, and peaceful cyberspace and to the application of international law and human rights. 

Additionally, Member States suggested that denying safe havens to those who engage in cybercrime, prosecuting cybercrimes, international cooperation, the collection and sharing of evidence, recovering and returning the proceeds of cybercrime, and technical assistance and capacity building be included as key objectives of the Convention. The Member States also recognised the seriousness of the use of information and communications technologies to perpetrate violence against women, girls, and children, and consequently called for the inclusion of these concerns in the Preamble of the prospective Convention. 

Way Forward 

The intensive discussions between the Chair, Member States, and non-member observer States on the various agenda items culminated in the text of the CND being revised. The views expressed will be taken into consideration by the Chair in developing a more advanced draft text of the Convention, in accordance with the road map and mode of work for the Committee adopted at its first session (A/AC.291/7, annex II).

High Court of Delhi cites CCG’s Working Paper on Tackling Non-Consensual Intimate Images

In December 2022, CCG held a roundtable discussion on addressing the dissemination of non-consensual intimate images (“NCII”) online, and in January 2023 it published a working paper titled “Tackling the dissemination and redistribution of NCII”. We are thrilled to note that the conceptual frameworks in our Working Paper have been favourably cited and relied on by the High Court of Delhi in Mrs. X v Union of India, W.P. (Cri) 1505 of 2021 (High Court of Delhi, 26 April 2023).

We acknowledge the High Court’s detailed approach in addressing the issue of the online circulation of NCII and note that several of the considerations flagged in our Working Paper have been recognised by the High Court. While the High Court has clearly recognised the free speech risks with imposing overbroad monitoring mandates on online intermediaries, we note with concern that some key safeguards we had identified in our Working Paper regarding the independence and accountability of technologically-facilitated removal tools have not been included in the High Court’s final directions. 

CCG’s Working Paper 

A key issue in curbing the spread of NCII is that it is often hosted on ‘rogue’ websites that have no recognised grievance officers or active complaint mechanisms. Thus, individuals are often compelled to approach courts to obtain orders directing Internet Service Providers (“ISPs”) to block the URLs hosting their NCII. However, even after URLs are blocked, the same content may resurface at different locations, effectively requiring individuals to continually re-approach courts with new URLs. Our Working Paper acknowledged that this situation imposed undue burdens on victims of NCII abuse, but also argued against a proactive monitoring mandate for scanning of NCII content by internet intermediaries. We noted that such proactive monitoring mandates create free speech risks, as they typically lead to more content removal but not better content removal and run the risk of ultimately restricting lawful expression. Moreover, given the limited technological and operational transparency surrounding proactive monitoring/automated filtering, the effectiveness and quality of such operations are hard for external stakeholders and regulators to assess. 

Instead, our Working Paper proposed a multi-stakeholder regulatory solution that relied on the targeted removal of repeat NCII content using hash-matching technology. Hash-matching technology would ascribe reported NCII content a discrete hash (stored in a secure database) and then check the hash of new content against known NCII content. This would allow for rapid identification (by comparing hashes) and removal of content where previously reported NCII content is re-uploaded. Our Working Paper recommended the creation of an independent body to maintain such a hash database of known NCII content. Thus, once NCII was reported and hashed the first time by an intermediary, it would be added to the independent body’s database, and if it was detected again at different locations, it could be rapidly removed without requiring court intervention. 
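To make the mechanics concrete, the sketch below illustrates the basic hash-and-match loop in Python. It is a simplified illustration rather than the Working Paper's specification: it uses an exact cryptographic hash (SHA-256) and an in-memory set, whereas real deployments typically rely on perceptual hashes that survive resizing or re-encoding and on a secured, independently governed database; the class and function names are hypothetical.

```python
import hashlib


class HashDatabase:
    """Stands in for an independently maintained database of known NCII hashes."""

    def __init__(self):
        self._known_hashes = set()

    def add(self, content: bytes) -> str:
        """Hash reported content and record it as known NCII."""
        digest = hashlib.sha256(content).hexdigest()
        self._known_hashes.add(digest)
        return digest

    def is_known(self, content: bytes) -> bool:
        """Check newly uploaded content against the known-NCII hashes."""
        return hashlib.sha256(content).hexdigest() in self._known_hashes


# Example usage: content reported once is added to the database; any
# byte-identical re-upload is then flagged without further court intervention.
db = HashDatabase()
reported_image = b"...bytes of the reported image..."
db.add(reported_image)

new_upload = b"...bytes of a later upload..."
if db.is_known(new_upload):
    print("Matches previously reported NCII; queue for removal or de-indexing.")
else:
    print("No match against the known-NCII database.")
```

The point of the sketch is that matching against a curated database is a narrow check, which is what distinguishes this approach from open-ended proactive scanning of all content.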

This approach also minimises free speech risks as content would only be removed if it matched known NCII content, and the independent body would conduct rigorous checks to ensure that only NCII content was added to the database. Companies such as Meta, TikTok, and Bumble are already adopting hash-matching technologies to deal with NCII, and more broadly, hash-matching technology has been used to combat child sexual abuse material for over a decade. Since such an approach would potentially require legal and regulatory changes to the existing rules under the Information Technology Act, 2000, our Working Paper also suggested a short-term solution using a token system. We recommended that all large digital platforms adopt a token-based approach to allow for the quick removal of previously removed or de-indexed content, with minimal human intervention. 

Moreover, the long-term approach proposed in the Working Paper would also significantly reduce the administrative burden on victims seeking the removal of NCII. It does so by: (a) reducing the time, cost, and effort they must expend going to court to remove or block access to NCII (since the independent body could work with the DoT to direct ISPs to block access to specific web pages containing NCII); (b) not requiring victims to re-approach courts for blocking already-identified NCII, particularly if the independent body is allowed to search for, or use a web crawler to proactively detect, copies of previously hashed NCII; and (c) providing administrative, legal, and social support to victims.

The High Court’s decision 

In X v Union of India, the High Court was faced with a writ petition filed by a victim of NCII abuse, whose pictures and videos had been posted on various pornographic websites and YouTube without her consent. The Petitioner sought the blocking of the URLs where her NCII was located and the removal of the videos from YouTube. A key claim of the Petitioner was that even after content was blocked pursuant to court orders and directions by the government, the offending material was consistently being re-uploaded at new locations on the internet, and was searchable using specific keywords on popular online search engines. 

Although the originator who had posted the NCII was apprehended during the hearings, the High Court saw fit to examine the obligations of intermediaries, in particular search engines, in responding to user complaints about NCII. The High Court's focus on search engines can be attributed to the fact that NCII is often hosted on independent ‘rogue’ websites that are unresponsive to user complaints, and that individuals often use search engines to locate such content. This may be contrasted with social media platforms, which have reporting structures for NCII content and are typically more responsive. Thus, the two mechanisms available to tackle the distribution of NCII on ‘rogue’ websites are to have ISPs disable access to specific URLs and/or have search engines de-index the relevant URLs. However, ISPs have little or no ability to detect unlawful content and do not typically respond to complaints by users, instead coordinating directly with state authorities. 

In fact, the High Court expressly cited CCG’s Working Paper to recognise this diversity in intermediary functionality, noting that “[CCG’s] paper espouses that due to the heterogenous nature of intermediaries, mandating a single approach for removal of NCII content might prove to be ineffective.” We believe this is a crucial observation as previous court decisions have imposed broad monitoring obligations on all intermediaries, even when they possess little or no control over content on their networks (See WP (Cri) 1082 of 2020 High Court of Delhi, 20 April 2021). Recognising the different functionality offered by different intermediaries allowed the High Court to identify de-indexing of URLs as an important remedy for tackling  NCII, with the Court noting that, “[search engines] can de-index specific URLs that can render the said content impossible to find due to the billions of webpages available on the internet and, consequently, reduce traffic to the said website significantly.” 

However, this would nevertheless be a temporary solution, since victims would still be required to repeatedly approach search engines to de-index each instance of NCII hosted on different websites. To address this issue, the long-term solution proposed in the Working Paper adopts a multi-stakeholder approach built around an independently maintained hash database of NCII content. The independent body maintaining the database would work with platforms, law enforcement, and the government to take down copies of identified NCII content, thereby reducing the burden on victims.

The High Court also adopted some aspects of the Working Paper’s short-term recommendations for the swift removal of NCII. The Working Paper recommended that platforms voluntarily use a token or digital identifier-based approach to allow for the quick removal of previously removed content. Complainants, who would be assigned a unique token upon the initial takedown of NCII, could submit URLs of any copies of the NCII along with the token. The search engine or platform would thereafter only need to check whether the URL contains the same content as the identified NCII linked to the token. The Court, in its order, requires search engines to adopt a similar token-based approach to “ensure that the de-indexed content does not resurface (¶61),” and notes that search engines “cannot insist on requiring the specific URLs from the victim for the purpose of removing access to the content that has already been ordered to be taken down (¶61)”. However, the judgment does not clarify if this means that search engines are required to disable access to copies of identified NCII without the complainant identifying where they have been uploaded, and if so, then how search engines will remove the repeat instances of identified NCII. The order only states that it is the responsibility of search engines to use tools that already exist to ensure that access to offending content is immediately removed. 
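As an illustration of how lightweight such a token check could be, here is a minimal Python sketch of the flow described above: a token is issued when content is first taken down, and a later report is accepted only if the reported content matches the content linked to that token. The names, the exact SHA-256 comparison, and the in-memory token store are assumptions made for illustration; they are not drawn from the Court's order or the Working Paper.

```python
import hashlib
import secrets

# token -> SHA-256 hash of the content removed after the initial complaint (illustrative only)
tokens: dict[str, str] = {}


def issue_token(removed_content: bytes) -> str:
    """Issue a token to the complainant at the time of the initial takedown."""
    token = secrets.token_urlsafe(16)
    tokens[token] = hashlib.sha256(removed_content).hexdigest()
    return token


def matches_token(token: str, reported_content: bytes) -> bool:
    """Check whether later-reported content matches the content tied to the token."""
    expected = tokens.get(token)
    return expected is not None and hashlib.sha256(reported_content).hexdigest() == expected


# Example usage: the same content resurfacing matches; unrelated content does not.
original = b"...bytes of the content removed after the first complaint..."
t = issue_token(original)
print(matches_token(t, original))          # True: re-upload of the same content
print(matches_token(t, b"other content"))  # False: unrelated content stays up
```

The design choice this captures is that the platform never has to judge new content afresh; it only has to confirm that a report refers to something it has already determined to be NCII.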

More broadly, the Court agreed with our position that proactive filtering mandates against NCII may harm free speech, noting that “The working paper published by CCG records the risk that overbroad directions may pose (¶56)”, and further holding that “any directions that necessitates pro-active filtering on the part of intermediaries may have a negative impact on the right to free speech. No matter the intention of deployment of such technology, its application may lead to consequences that are far worse and dictatorial. (¶54)” We applaud the High Court's recognition that general filtering mandates against unlawful content may significantly harm free speech. 

Final directions by the court

The High Court acknowledged the use of hash-matching technology in combating NCII as deployed by Meta's ‘Stop NCII’ program (www.stopncii.org), explaining how such technology “can be used by the victim to create a unique fingerprint of the offending image which is stored in the database to prevent re-uploads (¶53)”. As noted above, our Working Paper also recognised the benefits of hash-matching technology in combating NCII. However, we also noted that such technology has scope for abuse and must therefore be operationalised in a manner that is publicly transparent and accountable. 

In its judgment, the Court issued numerous directions and recommendations to the Ministry of Electronics and Information Technology (MeitY), the Delhi Police, and search engines to address the challenge of circulation of NCII online. Importantly, it noted that the definition of NCII must include sexual content intended for “private and confidential relationships,” in addition to sexual content obtained without the consent of the relevant individual. This is significant as it expands the scope of illegal NCII content to include instances where images or other content have been taken with consent, but have thereafter been published or circulated without the consent of the relevant individual. NCII content may often be generated within the private realm of relationships, but subsequently illegally shared online.

The High Court framed its final directions by noting that “it is not justifiable, morally or otherwise, to suggest that an NCII abuse victim will have to constantly subject themselves to trauma by having to scour the internet for NCII content relating to them and having to approach authorities again and again (¶57).” To prevent this outcome, the Court issued the following directions: 

  1. Where NCII has been disseminated, individuals can approach the Grievance Officer of the relevant intermediary or the Online Cybercrime Reporting Portal (www.cybercrime.gov.in) and file a formal complaint for the removal of the content. The Cybercrime Portal must specifically display the various redressal mechanisms that can be accessed to prevent the further dissemination of NCII; 
  2. Upon receipt of a complaint of NCII, the police must immediately register a formal complaint in relation to Section 66E of the IT Act (punishing NCII) and seek to apprehend the primary wrongdoer (originator); 
  3. Individuals can also approach the court and file a petition identifying the NCII content and the URLs where it is located, allowing the court to make an ex-facie determination of its illegality; 
  4. Where a user complains to a search engine about NCII content under Rule 3(2)(b) of the Intermediary Guidelines, the search engine must employ hash-matching technology to ensure that future webpages with identical NCII content are also de-indexed, so that the complained-against content does not resurface. The Court held that users should be able to directly re-approach search engines to seek de-indexing of new URLs containing previously de-indexed content without having to obtain subsequent court or government orders;
  5. A fully-functional helpline available 24/7 must be devised for reporting NCII content. It must be staffed by individuals who are sensitised about the nature of NCII content and would not shame victims, and must direct victims to organisations that would provide social and legal support. Our Working Paper proposed a similar approach, where the independent body would work with organisations that would provide social, legal, and administrative support to victims of NCII;
  6. When a victim obtains a takedown order for NCII, search engines must use a token/digital identifier to de-index the content and ensure that it does not resurface. Search engines also cannot insist on requiring specific URLs for removing access to content ordered to be taken down. Though our Working Paper recommended the use of a similar system, to mitigate the risks of proactive monitoring we suggested that (a) this could be a voluntary system adopted by digital platforms to quickly remove identified NCII, and (b) complainants would submit URLs of copies of identified NCII along with the identifier, so that the platform would only need to check whether the URL contains the same content linked to the token in order to remove access; and
  7. MeitY may develop a “trusted third-party encrypted platform” in collaboration with search engines for registering NCII content, and use hash-matching to remove identified NCII content. This is similar to the long-term recommendation in the Working Paper, where we recommended that an independent body be set up to maintain such a database and work with the State and platforms to remove identified NCII content. We also recommended various safeguards to ensure that only NCII content was added to the database.

Conclusion 

Repeated court orders to curtail the spread of NCII content represent a classic ‘whack-a-mole’ dilemma, and we applaud the High Court's acknowledgement of, and nuanced engagement with, this issue. In particular, the High Court recognises the significant mental distress and social stigma that the dissemination of one's NCII can cause, and attempts to reduce the burdens on victims of NCII abuse by ensuring that they do not have to continually identify and ensure the de-indexing of new URLs hosting their NCII. The use of hash-matching technology is significantly preferable to broad proactive monitoring mandates.

However, our Working Paper also noted that it was of paramount importance that only NCII content be added to any proposed hash database, so that lawful content was not accidentally added and continually removed every time it resurfaced. To this end, our Working Paper proposed several important institutional safeguards, including: (i) setting up an independent body to maintain the hash database; (ii) having multiple experts vet each piece of NCII content added to the database; (iii) requiring a judicial determination where NCII content had public interest implications (e.g., where it involved a public figure); (iv) ensuring that the independent body provides regular transparency reports and conducts audits of the hash database; and (v) imposing sanctions on the key functionaries of the independent body if the hash database was found to include lawful content. 

We believe that where hash-databases (or any technological solutions) are utilised to prevent the re-uploading of unlawful content, these strong institutional safeguards are essential to ensure the public accountability of such databases. Absent this public accountability, it is hard to ascertain the effectiveness of such solutions, allowing large technology companies to comply with such mandates on their own terms. While the High Court did not substantively engage with these institutional mechanisms outlined in our Working Paper, we believe that the adoption of the upcoming Digital India Bill represents an excellent opportunity to consider these issues and further our discussion on combating NCII.