The United Nations Ad-hoc Committee for Development of an International Cybercrime Convention: Overview and Key Observations from the Fourth Substantive Session

Sukanya Thapliyal

  1. Background / Overview 

Last month, the Centre for Communication Governance at National Law University Delhi had the opportunity to participate as a stakeholder in the Fourth Session of the United Nations Ad-hoc Committee, tasked to elaborate a comprehensive international convention on countering the use of information and communications technologies (ICTs) for criminal purposes (“the Ad Hoc Committee”). 

The open-ended Ad-hoc Committee is an intergovernmental committee of experts representative of all regions. It was established by UN General Assembly Resolution 74/247 under the Third Committee of the UN General Assembly. The Committee was originally proposed by the Russian Federation and 17 co-sponsors in 2019. The UN Ad-hoc Committee is mandated to provide a draft of the convention to the General Assembly at its seventy-eighth session in 2023 (UNGA Resolution 75/282). 

The three previous sessions of the Ad Hoc Committee witnessed the exchange of general views of the Member States on the scope and objectives of the comprehensive convention, and agreement on the structure of the convention. This was followed by themed discussions and a first reading of the provisions on criminalisation, procedural measures and law enforcement, international cooperation, technical assistance, and preventive measures, among others. (We had previously covered the proceedings from the First Session of the Ad-Hoc Committee here.)

The fourth session of the Ad Hoc Committee was marked by a significant development – the preparation of a Consolidated Negotiating Document (CND) to facilitate the remainder of the negotiation process. The CND was prepared by the Chair of the Ad Hoc Committee keeping in mind the various views, proposals, and submissions made by the Member States at previous sessions of the Committee. It is also based on existing international instruments and efforts at the national, regional, and international levels to combat the use of information and communications technologies (ICTs) for criminal purposes. 

As per the road map and mode of work approved at its first session (A/AC.291/7, annex II), the Ad Hoc Committee conducted at its fourth session the second reading of the provisions of the convention on criminalisation, the general provisions, and the provisions on procedural measures and law enforcement. The proceedings during the fourth session therefore involved detailed discussions of these provisions among the Chair, Member States, Observer States, and other multi-stakeholder groups. 

Over this two-part blog series, we aim to provide our readers with a brief overview of, and our observations from, the discussions during the fourth substantive session of the Ad-hoc Committee. Part I of the blog (i) discusses the methodology employed in the Ad-Hoc Committee discussions and (ii) captures the consultations and developments from the second reading of the provisions on criminalisation of offences under the proposed convention. We also attempt to familiarise readers with the emerging points of convergence and divergence among Member States and their implications for the future negotiation process. 

In Part II of the blog series, we will lay out the discussions and exchanges on (i) the general provisions and (ii) the provisions on procedural measures and law enforcement. 

  2. Methodology used for Conducting the Fourth Session of the Ad-Hoc Committee

The text-based negotiations at the Fourth Session proceeded in two rounds. 

Round 1: The first round of discussions allowed the participants to share concise, substantive comments and views. Provisions on which there was broad agreement proceeded to Round 2. Other provisions were subject to a co-facilitated informal negotiation process. Co-facilitators that spearheaded the informal negotiations reported orally to the Chair and the Secretariat. 

Round 2: Member States engaged in detailed deliberations on the wording of each of the provisions that enjoyed broad agreement. 

  3. Provisions on Criminalization (Agenda Item 4)

The chapter on “provisions on criminalization” includes a wide range of criminal offences under consideration for inclusion in the Cybercrime Convention. Chapter 2 of the CND features 33 articles grouped into 11 clusters:

  1. Cluster 1: offences relating to illegal access, illegal interference, interference with computer systems/ICT systems, and the misuse of devices, which jeopardise the confidentiality, integrity and availability of systems, data or information;
  2. Cluster 2: offences involving computer- or ICT-related forgery, fraud, theft and the illicit use of electronic payment systems;
  3. Cluster 3: offences related to the violation of personal information;
  4. Cluster 4: offences related to the infringement of copyright;
  5. Cluster 5: offences related to online child sexual abuse or exploitation material;
  6. Cluster 6: offences related to the involvement of minors in the commission of illegal acts, and the encouragement of or coercion to suicide;
  7. Cluster 7: offences related to sexual extortion and the non-consensual dissemination of intimate images;
  8. Cluster 8: offences related to incitement to subversive or armed activities, and extremism-related offences;
  9. Cluster 9: terrorism-related offences and offences related to the distribution of narcotic drugs and psychotropic substances, arms trafficking, and the distribution of counterfeit medicines;
  10. Cluster 10: offences related to money laundering, obstruction of justice and other matters (based on the language of the United Nations Convention against Corruption (UNCAC) and the United Nations Convention against Transnational Organized Crime (UNTOC));
  11. Cluster 11: provisions relating to the liability of legal persons, prosecution, adjudication and sanctions. 

Round 1 Discussions 

  1. Points of Agreement (taken to the second round) 

The first round of discussions on the provisions related to criminalisation witnessed broad agreement on the inclusion of provisions falling under Clusters 1, 2, 5, 7, 10 and 11. Member States, Observer States and other parties including the EU, Austria, Jamaica (on behalf of CARICOM), India, the USA, Japan, Malaysia, and the UK strongly supported the inclusion of the offences listed under Cluster 1, as these form part of the core cybercrimes recognised and uniformly understood by a majority of countries. 

A large number of the participating Member States were also in favour of including the narrow set of cyber-enabled offences falling under Clusters 5 and 7. They contended that these offences are of grave concern to the majority of countries and that the involvement of computer systems significantly adds to their scale, scope and severity. 

Several countries, such as India, Jamaica (on behalf of CARICOM), Japan and Singapore, broadly agreed on the offences listed under Clusters 10 and 11. These countries expressed some reservations concerning the provisions on the liability of legal persons (Article 35), contending that such provisions should be left to the domestic laws of Member States. 

  2. Points of Disagreement (subject to Co-facilitated Informal Negotiations)

There was strong disagreement on the inclusion of the provisions falling under Clusters 3, 4, 6, 8 and 9. The EU, along with Japan, Australia, the USA, Jamaica (on behalf of CARICOM), and others, objected to the inclusion of these cyber-enabled crimes under the Convention. They stated that such offences (i) lack adequate clarity and uniformity across countries, (ii) pose a serious threat of misuse by the authorities, and (iii) present an insurmountable barrier to building consensus, as Member States have exhibited divergent views on them. Countries also stated that some of these provisions (Cluster 9: terrorism-related offences) are already covered under other international instruments, and that their inclusion risks misalignment with other international laws that already govern those areas.

  3. Co-Facilitated Informal Round

The Chair divided the provisions falling under Clusters 3, 4, 6, 8 and 9 into two groups for the co-facilitated informal negotiations. Clusters 3, 4 and 6 were placed in Group 1, under the leadership of Ms. Briony Daley Whitworth (Australia) and Ms. Platima Atthakor (Thailand). Clusters 8 and 9 were placed in Group 2, under the leadership of Ambassador Mohamed Hamdy Elmolla (Egypt) and Ambassador Engelbert Theuermann (Austria). 

Group 1: During the informal sessions for Clusters 3, 4 and 6, the co-facilitators encouraged Member States to provide suggestions, views and comments on the provisions under consideration. The positions of Member States remained considerably divergent. Consequently, the co-facilitators decided to continue their work with interested Member States during the intersessional period after the fourth session.

Group 2: Similarly, for Clusters 8 and 9, the co-facilitators, along with interested Member States, engaged in constructive discussions. Member States expressed divergent views on the provisions falling under Clusters 8 and 9, ranging from proposals for deletion to proposals for the strengthening and expansion of the provisions. Additional proposals were also made in the following areas: a provision enabling future protocols to the Convention, the inclusion of the concept of serious crimes, and a broad scope of cooperation extending beyond the offences criminalised under the Convention. The co-facilitators emphasised the need for future work to forge consensus and make progress towards the finalisation of the Convention. 

Round 2 Discussions: 

Subsequently, the second round witnessed intensive deliberations amongst the participating Member States and Observer States. The discussions explored the possibility of adding provisions on issues such as the infringement of website design, unlawful interference with critical information infrastructure, theft with the use of information and communications technologies, and the dissemination of false information, among others. 

Conclusion:

Since the First Session of the Ad-Hoc Committee, the scope of the convention has remained an open-ended question. Member Countries have put forth a wide range of cyber-dependent and cyber-enabled offences for inclusion in the Convention.  Cyber-dependent offences, along with a narrow set of cyber-enabled crimes (such as online child sexual abuse or exploitation material, sexual extortion, and non-consensual dissemination of intimate images), have garnered broad support. Other cyber-enabled crimes (terrorism-related offences, arms trafficking, distribution of counterfeit medicines, extremism-related offences) have witnessed divergences, and their inclusion is currently being discussed at length. Countries must agree on the scope of the Convention if they want to make headway in the negotiation process. 

(The Ad-Hoc Committee is likely to take these discussions forward at its sixth session, scheduled for 21 August – 1 September 2023.)

Re-thinking content moderation: structural solutions beyond the GAC

This post is authored by Sachin Dhawan and Vignesh Shanmugam

The grievance appellate committee (‘GAC’) provision in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022 has attracted significant controversy. While it seeks to empower users to challenge the arbitrary moderation decisions of platforms, the provision itself has been criticised for being arbitrary. Lawyers, privacy advocates, technology companies, and other stakeholders have raised many concerns about the constitutional validity of the GAC, its lack of transparency and independence, and its excessive delegated power.

Although these continuing discussions on the GAC are necessary, they do not address the main concerns plaguing content moderation today. Even if sufficient legal and procedural safeguards are incorporated, the GAC will still be incapable of resolving the systemic issues in content moderation. This fundamental limitation persists because “governing content moderation by trying to regulate individual decisions is [like] using a teaspoon to remove water from a sinking ship”.  

Governments, platforms, and other stakeholders must therefore focus on: (i) examining the systemic issues which remain unaddressed by content moderation systems; and (ii) ensuring that platforms implement adequate structural measures to effectively reduce the number of individual grievances as well as systemic issues.

The limitations of the current content moderation systems

Globally, a majority of platforms rely on an individual case-by-case approach for content moderation. Due to the limited scope of this method, platforms are unable to resolve, or even identify, several types of systemic issues. This, in turn, increases the number of content moderation cases.

To illustrate the problem, here are a few examples of systemic issues which are unaddressed by content moderation systems: (i) coordinated or periodic attacks (such as mass reporting of users/posts) which target a specific class of users (based on gender, sexuality, race, caste, religion, etc.); (ii) differing content moderation criteria in different geographical locations; and (iii) errors, biases or other issues with algorithms, programs or platform design which lead to increased flagging of users/posts for content moderation.

Considering the gravity of these systemic issues, platforms must adopt effective measures to improve the standards of content moderation and reduce the number of grievances.

Addressing the structural concerns in content moderation systems

Several legal scholars have recommended the adoption of a ‘systems thinking’ approach to address the various systemic concerns in content moderation. This approach requires platforms to implement corporate structural changes, administrative practices, and procedural accountability measures for effective content moderation and grievance redressal. 

Accordingly, revising the existing content moderation frameworks in India to include the following key ‘systems thinking’ principles would ensure fairness, transparency and accountability in content moderation.

  • Establishing independent content moderation systems. Although platforms have designated content moderation divisions, these divisions are, in many cases, influenced by the platforms’ corporate or financial interests, advertisers’ interests, or political interests, which directly impacts the quality and validity of their content moderation practices. Hence, platforms must implement organisational restructuring measures to ensure that content moderation and grievance redressal processes are (i) solely undertaken by a separate and independent ‘rule-enforcement’ division; and (ii) not overruled or influenced by any other divisions in the corporate structure of the platforms. Additionally, platforms must designate a specific individual as the authorised officer in-charge of the rule-enforcement division. This ensures transparency and accountability from a corporate governance viewpoint. 
  • Robust transparency measures. Across jurisdictions, there is a growing trend of governments issuing formal or informal orders to platforms, including orders to suspend or ban specific accounts, take down specific posts, etc. In addition to ensuring transparency of the internal functioning of platforms’ content moderation systems, platforms must also provide clarity on the number of measures undertaken (and other relevant details) in compliance with such governmental orders. Ensuring that platforms’ transparency reports separately disclose the frequency and total number of such measures will provide a greater level of transparency to users, and the public at large.
  • Aggregation and assessment of claims. As stated earlier, individual cases provide limited insight into the overall systemic issues present on the platform. Platforms can gain a greater level of insight through (i) periodic aggregation of the claims received by them; and (ii) assessment of these aggregated claims for any patterns of harm or bias (for example, assessing for the presence of algorithmic or human bias against certain demographics), as illustrated in the sketch after this list. Doing so will illuminate algorithmic issues, design issues, unaccounted bias, or other systemic issues which would otherwise remain unidentified and unaddressed.
  • Annual reporting of systemic issues. In order to ensure internal enforcement of systemic reform, the rule-enforcement divisions must provide annual reports to the board of directors (or the appropriate executive authority of the platform), containing systemic issues observed, recommendations for certain systemic issues, and protective measures to be undertaken by the platforms (if any). To aid in identifying further systemic issues, the division must conduct comprehensive risk assessments on a periodic basis, and record its findings in the next annual report.
  • Implementation of accountability measures. As is established corporate practice for financial, accounting, and other divisions of companies, periodic quality assurance (‘QA’) and independent auditing of the rule-enforcement division will further ensure accountability and transparency.
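
To make the claim-aggregation step above more concrete, the snippet below is a minimal, purely illustrative sketch (not drawn from any platform's actual systems; every field name and threshold is hypothetical) of how periodically aggregated moderation claims could be screened for disparities across user groups.

```python
from collections import defaultdict

# Hypothetical aggregated claim records: each entry notes the affected user's
# group (where such data is lawfully available) and whether the content was flagged.
claims = [
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},
    # ...aggregated periodically from the moderation and grievance pipeline
]

def flag_rates_by_group(records):
    """Aggregate claims and compute the share of flagged items per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        flagged[record["group"]] += int(record["flagged"])
    return {group: flagged[group] / totals[group] for group in totals}

def disparity_alerts(rates, threshold=1.5):
    """Return groups whose flag rate exceeds the overall mean by a (hypothetical)
    threshold ratio, i.e. patterns that warrant a closer systemic audit."""
    mean_rate = sum(rates.values()) / len(rates)
    return [g for g, rate in rates.items() if mean_rate and rate / mean_rate > threshold]

rates = flag_rates_by_group(claims)
print(rates, disparity_alerts(rates))
```

A real assessment would require far richer data (content category, reporter identity, appeal outcomes) and more careful statistics, but even this simple aggregate view can surface patterns that remain invisible when each grievance is handled in isolation.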

Conclusion

Current discussions regarding content moderation regulations are primarily centred around the GAC, and the various procedural safeguards which can rectify its flaws. However, even if the GAC  becomes an effectively functioning independent appellate forum, the systemic problems plaguing content moderation will remain unresolved. It is for this reason that platforms must actively adopt the structural measures suggested above. Doing so will (i) increase the quality of content moderation and internal grievance decisions; (ii) reduce the burden on appellate forums; and (iii) decrease the likelihood of governments imposing stringent content moderation regulations that undermine  the free speech rights of users.

Guest Post: Vinit Kumar v CBI: Admissibility of evidence and the right to privacy

This post was authored by Sama Zehra

The Bombay High Court (‘HC’) in Vinit Kumar v CBI was faced with a situation familiar to the constitutional courts in India. The HC was called upon to decide whether telephone recordings obtained in contravention of section 5(2) of the Telegraph Act, 1885 (‘Act’) would be admissible in a criminal trial against the accused. Before delving into the reasoning of the HC, it will be instructive to refer to the facts of the case and an overview of India’s interception regime. 

Section 5(2) of the Telegraph Act permits interception (or ‘phone tapping’) done in accordance with a “procedure established by law” and lays down two conditions under which such orders may be passed: the occurrence of a “public emergency” or the interests of “public safety”. Moreover, the order must be “necessary” for reasons related to the security of the state, friendly relations with other states, sovereignty, or preventing the commission of an offence. The Apex Court in PUCL v. UOI (‘PUCL’) held that telephone tapping without following the appropriate safeguards and legal process would infringe an individual’s right to privacy. Accordingly, procedural safeguards, in addition to those under section 5(2) of the Act, were laid down, and were eventually incorporated in the Telegraph Rules, 1951 (‘Telegraph Rules’). These include: first, such orders may only be issued by the Home Secretaries of the Central and State governments in times of emergency; second, such an order shall be passed only when necessary, and the authority passing it shall maintain a detailed record of the intercepted communication and the procedure followed; third, the order shall cease to be effective within two months, unless renewed; and lastly, the intercepted material shall be used only for purposes deemed necessary under the Act.

In the Vinit Kumar case, three interception orders directing the interception of the petitioner’s telephone calls were issued during a bribery-related investigation. These were challenged as being ultra vires section 5(2) of the Act, non-compliant with the Telegraph Rules, and in violation of the fundamental rights guaranteed under Part III of the Indian Constitution. 

The HC quashed the said orders by holding that: 

First, the right to privacy includes telephone conversations made in the privacy of one’s home or office; telephone tapping would therefore impermissibly infringe the interceptee’s Article 21 rights unless it is conducted under the procedure established by law (in this case, the law laid down in PUCL and the Telegraph Rules). Second, the impugned orders were in contravention of the procedural guidelines laid down for the protection of the right to privacy by the Supreme Court in PUCL, section 5 of the Act and Rule 419A of the Telegraph Rules. Third (and crucially), evidence obtained through an infringement of the right to privacy would be inadmissible in a court of law. 

This blog analyses the third aspect of the HC judgment and argues that the approach of the HC reflects a true reading of the decision of the SC in K.S. Puttaswamy v UoI (‘Puttaswamy’) and ushers in a new regime of the right to privacy for accused persons. While doing so, the author critically examines previous decisions in which courts have held evidence collected through processes that infringe the fundamental rights of the accused to be admissible.

Correct Reading of Privacy Doctrine and Puttaswamy Development

Based on the decisions of the SC in State v Navjot Sandhu and Umesh Kumar v State, the current legal position would appear to be that illegally obtained evidence is admissible in courts as long as it is relevant. Consequently, as Vrinda Bhandari and Karan Lahiri have argued, the State is placed in a position where it is incentivised to access the private information of an accused in a manner which may not be legally permissible. There are no adverse legal consequences for illegally obtaining evidence, only prosecutorial benefits. This is reflected in the decisions concerning the admissibility of recordings of telephonic conversations made without the knowledge of the accused. The rule regarding the admissibility of illegally collected evidence stems from a handful of cases; however, it is submitted that the rule has a crumbling precedential basis. 

A good starting point is the Supreme Court’s decision in RM Malkani v State (‘Malkani’). It was held that telephone recordings made without the knowledge of the accused would be admissible in evidence as long as they were not obtained by coercion or compulsion. The Court had negligible analysis to offer insofar as the right to privacy of an individual is concerned. However, this decision dates back to the pre-PUCL and pre-Puttaswamy era, when the right to privacy (especially vis-à-vis telephonic conversations) was not recognised as a fundamental right. Hence, it becomes imperative to question the continued relevance and correctness of this decision in light of new developments in our understanding of fundamental rights under the Constitution. Moreover, Malkani relied on Kharak Singh v. State of U.P., which was explicitly overruled by Puttaswamy. This also casts doubt on other cases which relied on the reasoning in Kharak Singh on the issue of privacy. 

In Vinit Kumar, the HC rejected the approach adopted in Malkani and Kharak Singh. Affirming the right to privacy as a fundamental right, and relying on the requirements of ‘public emergency’ and ‘public safety’, the HC observed that the respondents failed to justify any ingredients of “risk to the people at large or interest of the public safety, for having taken resort to the telephonic tapping by invading the right to privacy” (¶ 19). It emphasised the need to adhere to the procedural safeguards provided in the Act, the Telegraph Rules, and the PUCL judgment, so as to ensure that the infringement of the right to privacy in a particular case meets the standards of proportionality laid down in Puttaswamy. Crucially, the HC went a step further to hold that, since the infringement of the right to privacy was not in accordance with the procedure established by law, the intercepted messages ought to be destroyed and not used as evidence at trial, as they are sourced from an infringement of the fundamental right to life (¶ 22). 

Thus, we can see an adherence to the new constitutional doctrines espoused by the Supreme Court: the HC emphatically rejected the now-overruled reasoning of Kharak Singh v State as far as the right to privacy is concerned, and refused to apply Malkani and Dharambir Khattar v UoI, whose ratios flow from Kharak Singh’s non-recognition of a right to privacy. The HC held that such judgments have been overruled by Puttaswamy (to the extent that they do not recognise the right to privacy as a fundamental right). Furthermore, it was also held that these cases involved no examination of the law on the touchstone of the principles of proportionality and legitimacy laid down in Puttaswamy (¶ 37). The HC circumvented the issue of ‘relevancy’ by distinguishing between ‘illegally collected evidence’ and ‘unconstitutionally collected evidence’, ruling that the latter is inadmissible as it would lead to the erosion of fundamental rights at the convenience of the State’s investigatory arm. 

The HC judgment is, therefore, an important landmark with respect to the admissibility of evidence obtained in violation of fundamental rights. However, in the absence of a clear Supreme Court judgment on the issue, the rights of the Indian citizenry remain susceptible to differences in the approaches taken by other HCs. A case in point is the Delhi HC judgment in Deepti Kapur v. Kunal Julka, wherein a video recording of the wife’s conversation with her friend, captured by the CCTV camera in her room, was admitted in evidence despite the arguments raised regarding the infringement of the right to privacy. Thus, the exact application of a bar on evidence collected through privacy-infringing measures in different contexts will need to be developed on a case-by-case basis. 

Conclusion

The Bombay HC judgment correctly traces the evolution of the right to privacy debate in Indian jurisprudence. It is based on the transformative vision of the Puttaswamy judgment and an appropriate application of precedent to the case at hand. It symbolises a true deference to the Constitution by protecting the citizenry from state surveillance and potential abuses of power. Especially in the current electronic era, where personal information can be extracted through unconstitutional means, the Vinit Kumar judgment affirms the importance of procedural due process under the fundamental rights regime in India. 

Guest Post: The inclusion of “OTT” services in the Indian Telecommunications Bill 2022

This post is authored by Chiranjeev Singh

The Department of Telecommunications (“DoT”) released a draft of the Indian Telecommunications Bill 2022 (“the Bill”) on 21 September 2022. It seeks to replace the Indian Telegraph Act 1885 (“the Telegraph Act”), among others, and provide a modern framework for regulating telecommunications. One of the significant changes proposed by the Bill is the inclusion of over-the-top (“OTT”) communication services within the scope of regulation. This inclusion would be a paradigm shift in the Indian telecom law regime, as non-spectrum services will now require a license. This article questions the rationale for including OTT services and critiques the formulation of this inclusion in the Bill.  

The Telegraph Act, which is the extant law, was enacted to deal with Telegraphs and Telephone Exchanges. Section 4 of the Telegraph Act states that a license is required to establish, operate and use a “Telegraph”, which is defined as any apparatus used for the transmission and emission of signals. These technologies have long been abandoned. To make this framework applicable to the modern forms of telecommunications, services provided by a “Telegraph” have been interpreted to include Access Services (voice calling, messages), Carrier Services (long distance communication) and Internet Access Services (providing access to the internet) among others. The entities which wish to provide these services (Telecom Service Providers or “TSPs”) have to obtain a license from the DoT, and abide by the conditions prescribed therein. Additionally, licensed entities must abide by regulations under the Telegraph Act, such as interconnection obligations and contributions to the Universal Service Obligation Fund.

Licensed services typically require the use of spectrum. Even though spectrum is non-depletable, it is limited in nature. This is because the transmission of information can only take place in specific bands of spectrum, depending on the nature of the information. Further, the same part of the spectrum cannot be used by multiple persons at the same time within the same geographical area. Interference between multiple signals over the same frequency can significantly worsen quality and reliability. In other words, spectrum is a scarce, rivalrous natural resource.

As is the case with other such resources, it becomes a duty upon the State to allocate the use of spectrum in a way which maximises the efficient use of the resource, maintains the quality of service and ensures that the public gets the benefit of such services to the fullest. Therefore, the State is justified in regulating the TSP’s economic exploitation of an exclusive natural resource, through licensing, national frequency allocation plans and other means.

At the same time, there has been a rise of another class of services, which are provided over the top of existing network infrastructure. That is to say, entities are providing communication services over the internet, which is itself provided by a TSP. For example, WhatsApp is a platform that allows users to make video and audio calls over the internet. While it provides a service which may seem similar to a traditional phone call, it requires neither spectrum use nor a license under the Telegraph Act, which a TSP would require.

Presently, OTT services, which act as intermediaries (i.e., they receive, store or transmit electronic records on behalf of others), are covered under the Information Technology Act 2000 (“IT Act”). The regulation of intermediaries under the IT Act involves content moderation, data privacy, lawful interception mechanisms and safe harbour protections.

The Bill proposes to include OTT communication services under the telecom law regime. It seeks to regulate how communication takes place on these platforms, and intends to subject the providers of these services to regulations at par with TSPs.

The Bill introduces the term “Telecommunication Services”, which is defined as a service of any description given to a user through telecommunication. It includes, “voice, … internet and broadcast services, … internet based communication services, …  OTT communication services …” among other things. Section 3 of the Bill further states that it is the exclusive privilege of the Central Government to provide “Telecommunication Services”, and a license is required by an entity to operate the same.

A cursory look at the definition makes it evident that it is all-encompassing in nature. It includes services which utilise spectrum and those that do not. In fact, one can say that it covers virtually all digital communication services.

There are a number of issues with such an approach. At the outset, the regulatory justification for licensing spectrum-based services does not exist for OTT services. The internet, unlike spectrum, is an abundant, non-rivalrous resource which is not owned by the State. Thus, the scarcity of resources is not a concern here and the case for a licensing model is not made out. The issues pertaining to the handling of sensitive personal information are presently addressed by the IT Act, and will be better addressed by the upcoming personal data protection regime. Concerns regarding quality of service also do not stand. The OTT environment is extremely competitive, and consumers have a wide range of options. As a result, consumers readily switch to higher-quality services.

The inclusion of OTT services in the telecom law regime also brings up the issue of incompatible compliance regimes. As highlighted before, OTT services are regulated by the IT Act as well. One area of dual regulation would be the lawful interception mechanism. Under the Bill as well as the Telegraph Act, there needs to be a public emergency for the State to make an interception order, which is not the case under the IT Act. Furthermore, the IT Act also recognises investigation of an offence as a ground for interception, which the Bill and the Telegraph Act do not. In such a situation, OTT services will be subjected to contradictory regulations.  

Several TSPs have welcomed this inclusion as it espouses the principle of “same services, same rules”. The argument is that while TSPs have to comply with several requirements and incur several costs (spectrum usage charges, license fees, etc.), none of these rules apply to entities providing OTT services which are the “same” as spectrum-based services. This reasoning is unfounded because the two kinds of services are based on different technologies, operate on different network layers and have different economic models.

OTT services are dependent on the networks provided by TSPs, and cannot function without them. Additionally, subjecting OTT services to similar requirements would violate the net-neutrality principle: certain websites which provide OTT services would face greater barriers, in the form of licensing and other such compliances, than other websites on the internet. This differential treatment does not arise when TSPs are required to have a license. In essence, functional similarity should not be the only guiding factor for regulating the two at parity.

In addition, the formulation of “Telecommunication Services” within the Bill is questionable. The definition of OTT services for the purposes of telecom law is disputed globally. Questions such as what is meant by communication, what exactly is covered under OTT communication services, and whether communication must be a predominant function of a service for it to be included, invite considerable debate, as the answers can drastically alter the scope of regulation. However, the Bill makes no effort to provide any clarity on these issues.

All these gaps make the definition vague. A law is termed vague when it provides no reasonable opportunity to understand, with any certainty, what conduct is regulated. The result is that it delegates to law enforcers and judges the job of determining the positive content of the regulated act to an impermissible extent. On these grounds, the Supreme Court has previously struck down provisions relating to the licensing powers of the executive concerning gold dealers under the Gold (Control) Act 1968, stating that the grounds for the grant of a license were uncertain and vague.

Presently, the Bill makes it an offence to provide Telecommunication Services without a license. However, given the sheer variety of OTT communication services that exist, service providers would simply not know whether they require a license to function in India. Moreover, the Bill is silent on the grounds on which a license may be granted, which is left to the rule-making powers of the Central Government. Thus, the constitutional validity of such a provision is suspect.

To conclude, the Telecommunications Bill 2022 leaves us with more questions than answers. It seeks to bring in a radical new individual licensing regime for OTT communication services without clarifying what such services constitute or why such a significant change is required. As the discussion shows, the definition is vague in scope and can potentially cover everything on the internet. Further, there is no basis for treating services that utilise spectrum the same as OTT services. Policymakers need to address these concerns while revisiting the draft.

Comments on the draft amendments to the IT Rules (Jan 2023)

The Ministry of Electronics and Information Technology (“MeitY”) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) on January 17, 2023. The draft amendments aim to regulate online gaming, but also seek to have intermediaries “make reasonable efforts” to cause their users not to upload or share content identified as “fake” or “false” by the Press Information Bureau (“PIB”), any Union Government department or authorised agency (See proposed amendment to Rule 3(1)(b)(v).) The draft amendments in their current form raise certain concerns that we believe merit additional scrutiny.  

CCG submitted comments on the proposed amendment to Rule 3(1)(b)(v), highlighting its key feedback and concerns. The comments were authored by Archit Lohani and Vasudev Devadasan and reviewed by Sachin Dhawan and Jhalak M. Kakkar. Some of the key issues raised in our comments are summarised below.

  1. Misinformation and “fake” or “false” information include both unlawful and lawful expression

The proposed amendment does not define the term “misinformation” or provide any guidance on how determinations that content is “fake” or “false” are arrived at. Misinformation can include various forms of content, and experts have identified up to seven subtypes of misinformation such as: imposter content; fabricated content; false connection; false context; manipulated content; misleading content; and satire or parody. Different subtypes of misinformation can cause different types of harm (or no harm at all) and are treated differently under the law. Misinformation or false information thus includes both lawful and unlawful speech (e.g., satire is constitutionally protected speech).  

Within the broad ambit of misinformation, the draft amendment does not provide sufficient guidance to the PIB and government departments on what sort of expression is permissible and what should be restricted. The draft amendment effectively provides them with unfettered discretion to restrict both unlawful and lawful speech. When seeking to regulate misinformation, experts, platforms, and other countries have drawn up detailed definitions that take into consideration factors such as intention, form of sharing, virality, context, impact, public interest value, and public participation value. These definitions recognize the potential multiplicity of context, content, and propagation techniques. In the absence of clarity over what types of content may be restricted based on a clear definition of misinformation, the draft amendment will restrict both unlawful speech and constitutionally protected speech. It will thus constitute an overbroad restriction on free speech.

  2. Restricting information solely on the ground that it is “false” is constitutionally impermissible

Article 19(2) of the Indian Constitution allows the government to place reasonable restrictions on free speech in the interest of the sovereignty, integrity, or security of India, its friendly relations with foreign States, public order, decency or morality, or contempt of court. The Supreme Court has ruled that these grounds are exhaustive and speech cannot be restricted for reasons beyond Article 19(2), including where the government seeks to block content online. Crucially, Article 19(2) does not permit the State to restrict speech on the ground that it is false. If the government were to restrict “false information that may imminently cause violence”, such a restriction would be permissible as it would relate to the ground of “public order” in Article 19(2). However, if enacted, the draft amendment would restrict online speech solely on the ground that it is declared “false” or “fake” by the Union Government. This amounts to a State restriction on speech for reasons beyond those outlined in Article 19(2), and would thus be unconstitutional. Restrictions on free speech must have a direct connection to the grounds outlined in Article 19(2) and must be a necessary and proportionate restriction on citizens’ rights.

  3. The amendment does not adhere to the procedures set out in Section 69A of the IT Act

The Supreme Court upheld Section 69A of the IT Act in Shreya Singhal v Union of India inter alia because it permitted the government blocking of online content only on grounds consistent with Article 19(2) and provided important procedural safeguards, including a notice, hearing, and written order of blocking that can be challenged in court. Therefore, it is evident that the constitutionality of the government’s blocking power over online content is contingent on the substantive and procedural safeguards provided by Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. The proposed amendment to the Intermediary Guidelines would permit the Union Government to restrict online speech in a manner that does not adhere to these safeguards. It would permit the blocking of content on grounds beyond those specified in Article 19(2), based on a unilateral determination by the Union Government, without a specific procedure for notice, hearing, or a written order.

  4. Alternate methods to counter the spread of misinformation

Any response to misinformation on social media platforms should be based on empirical evidence on the prevalence and harms of misinformation on social media. Thus, as a first step, social media companies should be required to provide greater transparency and facilitate researcher access to data. There are alternative methods to regulate the spread of misinformation that may be more effective and preserve free expression, such as labelling or flagging misinformation. We note that there does not yet exist widespread legal and industry consensus on standards for independent fact-checking, but organisations such as the ‘International Fact-Checking Network’ (IFCN) have laid down certain principles that independent fact-checking organisations should comply with. Having platforms label content pursuant to IFCN fact checks, and even notify users when the content they have interacted with has subsequently been flagged by an IFCN fact checker would provide users with valuable informational context without requiring content removal.
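
As a rough illustration of the label-and-notify approach described above (this sketch does not use any actual platform or IFCN API; every name, field, and message shown is hypothetical), a platform could attach a fact-check label to content and retroactively notify users who interacted with it, rather than removing it:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    interactions: set[str] = field(default_factory=set)   # ids of users who viewed/shared the post
    label: str | None = None                               # e.g. "Fact check: missing context"
    fact_check_url: str | None = None                      # link to the published fact check

def apply_fact_check(post: Post, verdict: str, url: str, notify) -> None:
    """Attach a fact-check label to the post and notify users who already
    interacted with it, instead of removing the content outright."""
    post.label = f"Fact check: {verdict}"
    post.fact_check_url = url
    for user_id in post.interactions:
        notify(user_id, f"A post you interacted with ({post.post_id}) was reviewed by "
                        f"an independent fact-checker: {verdict}. Details: {url}")

# Example usage with a stand-in notification function
post = Post("p123", interactions={"u1", "u2"})
apply_fact_check(post, "missing context", "https://example.org/fact-check/p123",
                 notify=lambda uid, msg: print(uid, msg))
```

The design choice here mirrors the argument in the text: the content stays accessible, while the label and notification supply informational context to users who have already encountered it.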

Guest Post: The Data Producer’s Right and its Limitations

This guest post was authored by Ishita Khanna.

Information privacy theorists have argued that data is ‘quasi-currency’ in the age of information technology. The economic value of data incentivizes companies to collect it, and big data also has the potential to increase productivity, improve governance and thus benefit consumers and citizens. Analysing data can extract significant value for purposes such as cost savings, improved processes, a better understanding of behaviour, and highly tailored products.

In this data-driven economy, the data generated and collected by machines and human beings possesses tremendous value. Machine-generated data is data created through the use of computer applications, processes and services, or through sensors which process information received from software, equipment or machinery, whether real or virtual. An interesting example of machine-generated data is provided by the automobile industry. Sensors installed in cars generate data for traffic prediction, location-based searches, safety warnings, autonomous driving, and entertainment services. Analysis of this data can yield monetizable insights, in the form of revelations for vehicle design or through the sale of access to the insurance industry. In this way, data precedes information, which precedes knowledge, which precedes understanding. Thus, raw data is crucial in the generation of value.

It was in response to this phenomenon that the European Commission in 2017 proposed the creation of a ‘data producer’s right’ (DPR), which would protect anonymized, non-personal, machine-generated industrial data against the world, i.e., a novel property right in data. This would create, in favour of the data producer, ‘a right to use and authorize the use of machine generated data’. Another dominant reason inspiring the call for the creation of a novel property right in data stems from the fear that American companies are misappropriating valuable European assets. The use of European news by Google led to an initiative for a neighbouring right for news publishers in the EU, which furthered the call for a data producer’s right. Similarly, the introduction of the sui generis database producer’s right in Europe in 1996 was borne out of the fear of domination of Europe’s markets by the US database industry.

This post critically examines the background, stated aims, subject matter and scope of the data producer’s right. It studies the inter-relationship between the existing intellectual property regimes and a property right in data, to analyse if and how this new right would affect those regimes. Towards the end, it offers recommendations on alternative models that could be adopted for the protection of non-personal data.

Do we need a new IP right for machine generated data?

It has been contended that the existing IPR regimes, as well as civil law, contract law and trade secret protection, do not offer the requisite protection to machine-generated non-personal data since they do not create an ex-ante right in rem; hence, raw data would not be protected from misappropriation by third parties and a market for the licensing of data would not emerge. Copyright law only protects acts of authorship or compilations of data that are a consequence of creative arrangement or selection. Further, the sui generis database right only extends to data structured in a database. Hence the argument for the introduction of a right in machine-generated raw data.

The EU DPR was envisaged as a novel type of intellectual property right, as a means to an end: to make data accessible. However, building new property fences seems paradoxical to the idea of increasing access to data. The answer to this lies in the recognition that ‘property is an institution for organising the use of resources in society’. The stable legal entitlements that come with a property right incentivise the development of a valuable resource by consolidating both risks and benefits in right-holders, and also stimulate the use and trade of data. The DPR was conceived of as a right in rem, i.e., ‘enforceable against the world independent of contractual relations’, including an exclusive right in the data producer to use certain types of data and license their usage, thus embodying the essential features of the right of ownership of property. The hope was that infusing machine-generated non-personal data with property rights would lead to the creation of a stable and safe licensing marketplace for the data.

Why is data such a challenging subject for IP Law?

However, the DPR would extensively overlap with copyright and sui generis database rights in productions made using digital machines, which could lead to numerous competing ownership claims. For instance, the aggregate stock market data in a financial database would be the subject matter of protection of both the data producer’s right and the sui generis database right. Further, the DPR could trump the statutory limitations laid down under the existing IPR regimes and the database right, thus limiting their scope of protection. At present, users in the European Union are allowed to copy data from databases for the purpose of non-commercial research. The DPR would infringe on such freedoms allowed to users unless it includes within its ambit all such relevant exceptions.

Another objection to a property right in data lies in its inherent lack of legal certainty and stability with respect to its scope, subject matter, and ownership, which are essential for it to be considered a full-fledged IP right enforceable against the world. A property right in data would also severely infringe on the freedom of expression and information by curtailing access to data for scientists, research institutions and journalists, particularly with respect to text and data mining. This freedom has been acknowledged in Article 13 of the EU Charter, which stresses the free flow of data in the arts, scientific research and academic freedom.

Thus, a data producer’s right would encroach upon the central tenet of the IPR system which regards data as ‘free air for common use’ and only offers protection to creative and innovative inventions. The dynamic and fluid nature of raw data makes it difficult to classify as subject matter of a full-fledged intellectual property right. The database right raised a similar objection. However, the definition of ‘database’ and the requirement of a certain threshold of investment created at least some stability in the scope and subject matter of the right, unlike the DPR.

It is also important to understand why the property logic for data protection has failed. One reason is the lack of success of the closely analogous sui generis database right in promoting investment in, and incentivizing the formulation of, databases in the EU database industry. Another reason is the inclination towards opening data, or making it accessible for both commercial and non-commercial re-use, thus doing away with the exclusivity requirement. Hence, there currently exist no potent economic justifications for the creation of a DPR. Instead, data producers can protect their data using contract law, trade secret law and technological protection mechanisms.

Thus, it can be concluded that a novel IP right should only be introduced after thorough, evidence-based economic research establishing a real need for the right, and not spontaneously. However, this alone will not suffice; it must be accompanied by a methodical legal analysis of the scope and subject matter of the new right, as well as its inter-relationship with the existing IPR regime.

New Data Protection Law: It cements the power imbalances in the data economy

By Shashank Mohan*

The Indian government has clarified that its latest attempt at drafting a robust data protection law is predicated on it being a plain, simple statute to read and comprehend. Although simplicity in law is a laudable goal, the proverbial devil is in the details, or in this case, the lack thereof. The Bill, which is in its fourth iteration, creates significant obstructions in the path of grievance redressal for a data principal (the user) seeking to remedy privacy harms and request adequate compensation. It further cements the power imbalances in the data economy between users and data-processing entities. I explain below.

First, the Bill introduces the concept of “duties of data principals”. It lays down various responsibilities placed on users: to obey all applicable laws of the land, not to register false or frivolous complaints, and not to furnish false information. An explanatory note released alongside the Bill explains that these duties have been inserted to ensure that there is no ‘misuse of rights’. It is pertinent to understand that the goal of a data protection law is to protect the privacy rights of citizens against data-processing entities and to lay down remedies for privacy harms. It should acknowledge the existing power imbalances between users and those who process or use their data, which heighten the risk of privacy loss. Users should not bear any responsibility in a law that primarily recognises the propensity of privacy harm towards them. To enforce such duties, the Bill empowers a Data Protection Board (DPB), a quasi-adjudicatory authority, to “take action” against users and impose penalties of up to Rs 10,000.

Second, a striking aspect of the Bill is how burdensome it is for users to file a complaint with the DPB. Once a complaint is filed, the DPB has the power to “close” proceedings at the preliminary stage for insufficient grounds. The Bill does not define what it envisions as insufficient grounds, or, for that matter, any bases on which complaints could be filed or rejected. It simply states that the function of the DPB is to determine non-compliance with the Bill’s provisions and impose requisite penalties. Even if the inquiry proceeds, the DPB can, at any stage, conclude that a complaint is devoid of merit and issue a warning or impose costs on the complainant. The Bill fails to lay down any guidelines for the DPB to assess such cases and does not make it clear whether these costs will be capped at Rs 10,000.

Finally, what happens where the DPB concludes that there has been a transgression by a processing entity resulting in privacy harm to a user? The Bill states that it can only impose penalties where it has found such a transgression to be “significant” in nature. Predictably, the Bill does not provide guidance on how the “significance” of non-compliance is to be judged by the DPB. This is critical, as a plain reading of the Bill makes it clear that the DPB has no power to impose penalties where non-compliance is positively determined but found to be “non-significant”.

These powers would give the DPB, which is wholly controlled by the central government, substantial discretion in closing and concluding complaints against data-processing entities. Considering that users would be disproportionately burdened, both financially and logistically, in filing complaints against data-processing entities, these new conditions that the Bill proposes will only add to their woes. The Bill, by design, disincentivises users from filing complaints to remedy privacy harm. Users will be at a critical disadvantage in proceedings before the DPB, as they have to adhere to vague duties, meet multiple unclear and uncertain conditions to obtain a positive determination, and even then, may not receive suitable redressal. Considering that there is no provision for awarding compensation to users in the Bill, it may be impractical for users to file complaints against data-processing entities, seriously limiting their right to seek redressal under the Bill.

Larger questions of the DPB’s independence aside, the Bill does little to provide it with the tools to impose requisite penalties and provide meaningful compensation. A law is only as strong as its enforcement. This strikes at the heart of individuals’ right to privacy and their ability to realise informational autonomy and self-determination.

There are certain pointed changes that the Bill could incorporate to address these challenges.

First, remove the duties, since the primary goal of a data protection Bill is to protect the privacy of individuals. Second, empower the DPB to compensate users in cases of non-compliance; this will incentivise them to file complaints and provide meaningful redressal. Third, “significance” should not be a precondition for the imposition of penalties; the DPB must be able to determine penalties on the merits of the complaint, without a requirement to determine significance. Fourth, as a corollary to the previous point, the DPB should not be able to impose costs, sanctions, or obligations on users in any situation.

Until such challenges are addressed, and the practical circumstances of users are accounted for, meaningful data protection for Indian citizens cannot be a reality.

*This article was first published on The Indian Express on December 28, 2023. It has been cross-posted with the author’s permission.

Report on Intermediary Liability in India

The question of when intermediaries are liable, or conversely not liable, for content they host or transmit is often at the heart of regulating content on the internet. This is especially true in India, where the Government has relied almost exclusively on intermediary liability to regulate online content. With the advent of the Intermediary Guidelines 2021, and their subsequent amendment in October 2022, there has been a paradigm shift in the regulation of online intermediaries in India. 

To help understand this new regulatory reality, the Centre for Communication Governance (CCG) is releasing its ‘Report on Intermediary Liability in India’ (December 2022).

This report aims to provide a comprehensive overview of the regulation of online intermediaries and their obligations with respect to unlawful content. It updates and expands on the Centre for Communication Governance’s 2015 report documenting the liability of online intermediaries to now cover the decisions in Shreya Singhal vs. Union of India and Myspace vs. Super Cassettes Industries Ltd, the Intermediary Guidelines 2021 (including the October 2022 Amendment), the E-Commerce Rules, and the IT Blocking Rules. It captures over two decades of regulatory and judicial practice on the issue of intermediary liability since the adoption of the IT Act. The report aims to provide practitioners, lawmakers, regulators, judges, and academics with valuable insights as they embark on shaping the coming decades of intermediary liability in India.

Some key insights that emerge from the report are summarised below:

Limitations of Section 79 (‘Safe Harbour’) Approach: In the cases analysed in this report, there is little judicial consistency in the application of secondary liability principles to intermediaries, including the obligations set out in the Intermediary Guidelines 2021, and monetary damages for transmitting or hosting unlawful content are almost never imposed on intermediaries. This suggests that there are significant limitations to the regulatory impact of obligations imposed on intermediaries as pre-conditions to safe harbour.

Need for clarity on content moderation and curation: The text of Section 79(2) of the IT Act grants intermediaries safe harbour provided they act as mere conduits, not interfering with the transmission of content. There exists ambiguity over whether content moderation and curation activities would cause intermediaries to violate Section 79(2) and lose safe harbour. The Intermediary Guidelines 2021 have partially remedied this ambiguity by expressly stating that voluntary content moderation will not result in an intermediary ‘interfering’ with the transmission under Section 79(2). However, ultimately amendments to the IT Act are required to provide regulatory certainty.

Intermediary status and immunity on a case-by-case basis: An entity’s classification as an intermediary is not a status that applies across all its operations (like a ‘company’ or a ‘partnership’), but rather depends on the function it is performing vis-à-vis the specific electronic content it is sued in connection with. Courts should determine whether an entity is an ‘intermediary’, and whether it complied with the conditions of Section 79, in relation to the content it is being sued for. Consistently making this determination at a preliminary stage of litigation would greatly further the efficacy of Section 79’s safe harbour approach.

Concerns over GACs: While the October 2022 Amendment stipulates that two members of every Grievance Appellate Committee (GAC) shall be independent, no detail is provided as to how such independence shall be secured (e.g., security of tenure and salary, oath of office, minimum judicial qualifications, etc.). Such independence is vital, as GAC members are appointed by the Union Government, yet the Union Government or its functionaries or instrumentalities may also be parties before a GAC. Further, given that the GACs are authorities ‘under the control of the Government of India’, they have an obligation to abide by the principles of natural justice and due process, and to comply with the Fundamental Rights set out in the Constitution. If a GAC directs the removal of content beyond the scope of Article 19(2) of the Constitution, questions of an impermissible restriction on free expression may be raised.

Actual knowledge in 2022: The October 2022 Amendment requires intermediaries to make reasonable efforts to “cause” their users not to upload certain categories of content and ‘act on’ user complaints against content within seventy-two hours. Requiring intermediaries to remove content at the risk of losing safe harbour in circumstances other than the receipt of a court or government order prima facie violates the decision of Shreya Singhal. Further, India’s approach to notice and takedown continues to lack a system for reinstatement of content.  

Uncertainty over government blocking power: Section 69A of the IT Act expressly grants the Union Government power to block content, subject to a hearing by the originator (uploader) or intermediary. However, Section 79(3)(b) of the IT Act may also be utilised to require intermediaries to take down content absent some of the safeguards provided in Section 69A. The fact that the Government has relied on both provisions in the past and that it does not voluntarily disclose blocking orders makes a robust legal analysis of the blocking power challenging.

Hearing originators when blocking: The decision in Shreya Singhal and the requirements of due process support the understanding that the originator must be notified and granted a hearing under the IT Blocking Rules prior to their content being restricted under Section 69A. However, evidence suggests that the government regularly does not provide originators with hearings, even where the originator is known to the government. Instead, the government directly communicates with intermediaries away from the public eye, raising rule of law concerns.

Issues with first originators: Both the methods proposed for ‘tracing first originators’ (hashing unique messages and affixing encrypted originator information) are easily circumvented, require significant technical changes to the architecture of messaging services, offer limited investigatory or evidentiary value, and will likely undermine the privacy and security of all users to catch a few bad actors. Given these considerations, it is unlikely that such a measure would satisfy the proportionality test laid out by current Supreme Court doctrine.
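To see why the hashing approach is so easily defeated, consider a minimal illustration (our own sketch, which assumes a SHA-256 digest over the message text; it is not drawn from the report or from any proposed implementation). Any trivial edit to a forwarded copy, even a single added space, produces an entirely unrelated digest, so the edited copy would register as a “new” first-originated message:

```python
# Illustrative sketch: hashing unique messages to trace "first originators"
# breaks as soon as a message is trivially edited before forwarding.
import hashlib

def message_hash(text: str) -> str:
    """Return a SHA-256 digest of the message text (assumed tracing key)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "Forwarded: please share widely"
edited = "Forwarded: please share widely "  # one trailing space added

print(message_hash(original))
print(message_hash(edited))
# The two digests bear no relationship to each other, so the edited copy
# cannot be linked back to the original sender via the hash alone.
```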

Broad and inconsistent injunctions: An analysis of injunctions against online content reveals that the contents of court orders are often sweeping, imposing vague compliance burdens on intermediaries. When issuing injunctions against online content, courts should limit blocking or removals to specific URLs. Further, courts should be cognisant of the fact that intermediaries have themselves not committed any wrongdoing, and the effect of an injunction should be seen as meaningfully dissuading users from accessing content rather than as an absolute prohibition.

This report was made possible by the generous support we received from National Law University Delhi. CCG would like to thank our Faculty Advisor Dr. Daniel Mathew for his continuous direction and mentorship. This report would not be possible without the support provided by the Friedrich Naumann Foundation for Freedom, South Asia. We are grateful for comments received from the Data Governance Network and its reviewers. CCG would also like to thank Faiza Rahman and Shashank Mohan for their review and comments, and Jhalak M. Kakkar and Smitha Krishna Prasad for facilitating the report. We thank Oshika Nayak of National Law University Delhi for providing invaluable research assistance for this report. Lastly, we would also like to thank all members of CCG for the many ways in which they supported the report, in particular, the ever-present and ever-patient Suman Negi and Preeti Bhandari for the unending support for all the work we do.

Examining ‘Deemed Consent’ for Credit-Scoring under India’s Draft Data Protection Law

By Shobhit Shukla

On November 22, 2022, the Ministry of Electronics and Information Technology released India’s draft data protection law, the Digital Personal Data Protection Bill, 2022 (‘Bill’).* The Bill sets out certain situations in which seeking an individual’s consent for processing of their personal data is “impracticable or inadvisable due to pressing concerns”. In such situations, the individual’s consent is assumed; further, they are not required to be notified of such processing. One such situation is processing in the ‘public interest’. The Bill also illustrates certain public-interest purposes and, notably, includes ‘credit-scoring’ as one such purpose in Clause 8(8)(d). Put simply, the Bill allows an individual’s personal data to be processed non-consensually, and without any notice to them, where such processing is for credit-scoring.

Evolution of credit-scoring in India

Credit-scoring is a process by which a lender (or its agent) assesses an individual’s creditworthiness, i.e., their notional capacity to repay their prospective debt, as represented by a numerical credit score. Until recently, lenders in India relied largely on credit scores generated by credit information companies (‘CICs’), licensed by the Reserve Bank of India (‘RBI’) under the Credit Information Companies (Regulation) Act, 2005 (‘CIC Act’). CICs collect and process ‘credit information’, as defined under the CIC Act, to generate such scores. For an individual, such information comprises chiefly the details of their outstanding loans and their history of repayment and defaults. However, with the expansion of digital footprints and advancements in automated processing, the range of datasets deployed to generate credit scores has expanded significantly. Lenders are increasingly using credit scores generated algorithmically by third-party service-providers. Such agents aggregate and process a wide variety of alternative datasets relating to an individual, alongside credit information – these may include the individual’s employment history, social media activity, and web browsing history. This allows them to build a highly data-intensive credit profile of (and assign a more granular credit score to) the individual, to assist lenders in deciding whether to extend credit. Not only does this enable lenders to make notionally better-informed decisions, it also allows them to assess and extend credit to individuals with meagre or no prior access to formal credit.
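A purely illustrative sketch of such data-intensive scoring is set out below (the feature names, weights, and score range are hypothetical and not drawn from any actual CIC or lender model): traditional credit information and alternative data are simply folded into a single numerical score.

```python
# Hypothetical, simplified credit-scorer combining traditional credit
# information with "alternative" data; weights and ranges are illustrative only.
from dataclasses import dataclass

@dataclass
class Profile:
    repayment_defaults: int      # traditional credit information
    months_employed: int         # alternative data: employment history
    social_media_posts: int      # alternative data: online activity
    shopping_sites_visited: int  # alternative data: browsing history

def credit_score(p: Profile) -> int:
    """Fold the profile into a notional score between 300 and 900."""
    score = 700
    score -= 80 * p.repayment_defaults            # past defaults weigh heavily
    score += min(p.months_employed, 60)           # capped employment bonus
    score += 10 if p.social_media_posts > 100 else 0
    score -= 20 if p.shopping_sites_visited > 50 else 0
    return max(300, min(900, score))

print(credit_score(Profile(1, 24, 250, 10)))  # 654
```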

While neither the Bill nor its explanatory note explain why credit-scoring constitutes a public-interest ground for non-consensual processing, it may be viewed as an attempt to remove the procedural burden associated with notice-and-consent. In the context of credit-scoring, if lenders (or their agents) are required to provide notice and seek consent at each instance to process the numerous streams of an individual’s personal data, the procedural costs may disincentivise them from accessing certain data-streams. Consequently, with limited data to assess credit-risk, lenders may adopt a risk-averse approach and avoid extending credit to certain sections of individuals. Alternatively, they may decide to extend credit despite the supposed inadequacy of personal data, thereby exposing themselves to higher risk of repayment defaults. While the former approach would be inimical to financial inclusion, the latter could possibly result in accumulation of bad loans on lenders’ balance sheets. Thus, encouraging data-intensive credit-scoring (for better-informed credit-decisions and/or for widening access to credit) may conceivably be viewed as a legitimate public interest.

However, in this post, I contend that even if this were to be accepted, a complete exemption from notice-and-consent for credit-scoring poses a disproportionate risk to individuals’ right to privacy and data protection. The efficacy of notice-and-consent in enhancing informational autonomy remains debatable; however, a complete exemption from the requirement, without any accompanying safeguards, ignores specific concerns associated with credit-scoring.

Deemed consent for credit-scoring: Understanding the risks

First, the provision allows non-consensual processing of all forms of personal data, regardless of any correlation between such data and creditworthiness. In effect, this would encourage lenders to leverage the widest possible range of personal datasets. As research has demonstrated, the deployment of disparate datasets increases incidences of inaccuracy as well as of spurious connections between the data-input and the output. In credit-scoring, an algorithm trained on historical data may conclude, for instance, that borrowers from a certain social background are likelier to default on repayment. Credit scores generated from such fallacious and/or unverifiable conclusions can embed systemic disadvantages into future credit-decisions and deepen the exclusion of vulnerable groups. The exemption from notice-and-consent would only increase the likelihood of such exclusion, because individuals would have no knowledge of the data-inputs used or the algorithm by which those data-inputs were processed, and consequently no recourse against any credit-decisions arrived at via such processing.
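The risk of such proxy-driven exclusion can be made concrete with a small synthetic example (hypothetical data and a deliberately naive scoring rule, not any real model): a scorer that leans on group-level default rates penalises a first-time borrower for where they live rather than for anything about their own conduct.

```python
# Illustrative sketch: a naive scorer that uses group-level default rates
# from (synthetic) historical data embeds the group's history into the
# score of every new applicant from that group.
from collections import defaultdict

# Synthetic historical records of (pincode, defaulted) pairs.
history = [("110001", 0), ("110001", 0), ("110001", 1),
           ("560099", 1), ("560099", 1), ("560099", 0)]

defaults_by_pincode = defaultdict(list)
for pincode, defaulted in history:
    defaults_by_pincode[pincode].append(defaulted)

def naive_risk(pincode: str) -> float:
    """Use the group-level default rate as an individual's risk score."""
    past = defaults_by_pincode.get(pincode, [0])
    return sum(past) / len(past)

# A first-time borrower from 560099 is scored as high-risk purely because of
# where they live, not because of anything about their own repayment conduct.
print(round(naive_risk("560099"), 2))  # 0.67
print(round(naive_risk("110001"), 2))  # 0.33
```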

Second, the provision allows any entity to non-consensually process personal data for credit-scoring. Notably, CICs are specifically licensed by the RBI to, inter alia, undertake credit-scoring. Additionally, in November 2021, the RBI amended the Credit Information Companies Regulations, 2006, to provide an avenue for entities (other than CICs) to register with any CIC, subject to the fulfilment of certain eligibility criteria, and to consequently access and process credit information for lenders. By allowing any entity to process personal data (including credit information) for credit-scoring, the Bill appears to undercut the RBI’s attempt to limit the processing of credit information to entities under its purview.

Third, the provision allows non-consensual processing of personal data for credit-scoring at any point in time. A plain reading suggests that such processing may be undertaken even before the individual has expressed any intention to avail of credit. Effectively, this would give entities free rein to pre-emptively mine troves of an individual’s personal data. Such data could then be processed to profile the individual and behaviourally target them with customised advertisements for credit products. Clearly, such targeted advertising, without any intimation to the individual and without any opt-out, would militate against the individual’s right to informational self-determination. Further, as an RBI-constituted Working Group has noted, targeted advertising of credit products can promote irresponsible borrowing by individuals, leading them into debt entrapment. At scale, predatory lending enabled by targeted advertisements could perpetuate unsustainable credit and pose concerns for economic stability.

Alternatives for stronger privacy-protection in credit-scoring

The above arguments demonstrate that a complete exemption from notice-and-consent for the processing of personal data for credit-scoring threatens individual rights disproportionately. Moreover, the exemption may undermine the very objectives that policymakers may be attempting to fulfil through it. Thus, Clause 8(8)(d) of the Bill requires serious reconsideration.

First, I contend that Clause 8(8)(d) may simply be deleted before the Bill is enacted into law. In view of the CIC Act, CICs and other entities authorised by the RBI under that Act would, notwithstanding the deletion of the provision, continue to be able to access and process credit information relating to individuals without their consent – such processing would remain subject to the safeguards contained in the CIC Act, including the right of the individual to obtain a copy of such credit information from the lender.

Alternatively, the provision may be suitably modified to limit the exemption from notice-and-consent to certain forms of personal data. Such personal data may be limited to ‘credit information’ (as defined under the CIC Act) or ‘financial data’ (as may be defined in the Bill before its enactment) – resultantly, only the processing of such data for credit-scoring would be exempt from notice-and-consent. The non-consensual processing of these forms of data (as opposed to all personal data), which carry logically intuitive correlations with creditworthiness, would arguably correspond more closely to the individual’s reasonable expectations in the context of credit-scoring. An appropriate delineation of this nature would provide transparency in processing and also minimise the scope for fallacious and/or discriminatory correlations between data-inputs and creditworthiness.

Finally, as a third alternative, Clause 8(8)(d) may be modified to empower a specialised regulatory authority to notify credit-scoring as a purpose for non-consensual processing of data, but within certain limitations. Such limitations could relate to the processing of certain forms of personal data (as suggested above) and/or to certain kinds of entities specifically authorised to undertake such processing. This position would resemble proposals under previous versions of India’s draft data protection law, i.e. the Personal Data Protection Bill, 2019 and the Personal Data Protection Bill, 2018 – both drafts required any exemption from notice-and-consent to be notified through regulations. Further, such notification was required to be preceded by a consideration of, inter alia, individuals’ reasonable expectations in the context of the processing. In addition to this balancing exercise, the Bill may be modified to require the regulatory authority to consult the RBI before notifying any exemption for credit-scoring. Such consultation would facilitate harmonisation between data protection law and sectoral regulation of financial data.

*For our complete comments on the Digital Personal Data Protection Bill, 2022, please click here – https://bit.ly/3WBdzXg

Censoring the Critics: The Need to Balance the Right to Erasure and Freedom of Speech

Clause 13(2)(d) of the Digital Personal Data Protection Bill, 2022 (“DPDP Bill”) provides for the right to erasure of personal data, i.e., “…any data about an individual who is identifiable by or in relation to such data”. The clause states that a data principal has the right to erasure of personal data as per applicable laws and as prescribed, and that such erasure shall take place after the data fiduciary receives a request for erasure. The precondition for erasure is that the personal data must no longer be necessary for the purpose for which it was processed and must not be necessary for any legal purpose either.

This is in many ways a salutary provision. Data principals should have control over their data, which includes the right to correct and erase it. This is especially important because it protects individuals from the negative impacts of the widespread availability of personal data on the internet. In today’s digital age, it is easier than ever for personal data to be collected, shared, and used in ways that are harmful or damaging to individuals. The right to erasure helps counter these negative impacts by giving individuals the power to control their own personal information and to have it removed from the internet if they so choose.

However, this provision can negatively impact several other fundamental rights such as the freedom of speech and right to information, especially when it is abused by powerful figures to silence criticism. For example, if an investigative journalist were to write an article in which they bring to light a government official’s corrupt deeds, the said official would be able to request the data fiduciary to erase such data since they are identifiable by it or are related to it. 

This article will seek to address such concerns in two ways. First, it will delve into the safeguards that can be included in the text of Clause 13(2)(d) to ensure that there is an appropriate balance between free speech and privacy. Second, it will recommend that the arbiter of this balance should be an independent authority and not data fiduciaries. 

(1) Safeguards 

Clause 13(2)(d) is heavily tilted in favour of the privacy interests of the data principal. It does not require data fiduciaries to take into account any other considerations that might have a bearing on the data principal’s erasure request. In order to prevent privacy interests from undermining other rights, the clause should be amended to include various safeguards.

In particular, the clause should require data fiduciaries to consider the free speech rights of other individuals who might be affected by an erasure request. As indicated earlier, journalists may find it difficult to publish critical commentary on powerful public figures if their work is subject to easy erasure. There are also artistic, literary and research purposes for which personal data might be used by other individuals. These are valid uses of personal data that should not be negated simply because of an erasure request. 

Data fiduciaries can also be made to consider the following factors, through subordinate legislation, to harmonise free speech and privacy: (a) the role of the data principal in public life, (b) the sensitivity of the personal data sought to be erased, (c) the purpose of processing, (d) the public nature of the data, and (e) the relevance of the personal data to the public. Incorporating such safeguards will help ensure that data fiduciaries appropriately balance the right to privacy and the right to free speech when they receive erasure requests.

Further, a clearly laid out process for grievance redressal should also be codified. Currently, Clause 13(2)(d) does not provide for an appeal mechanism for erasure requests that have been rejected by data fiduciaries. The clause should explicitly provide that in case the data principal wants to contest the rejection of their erasure request, they can file a complaint with the Data Protection Board (DPB). 

(2) Independent Authority 

In addition to lacking sufficient safeguards, Clause 13(2)(d) puts the onus on data fiduciaries to decide the validity of erasure requests. Various jurisdictions, including the United Kingdom, Spain, and other European Union member states, use this framework. However, giving decision-making power directly to data fiduciaries will have a chilling effect on speech.

This is because they will tend to comply mechanically with erasure requests in order to escape liability for non-compliance. Data fiduciaries lack the bandwidth needed to properly assess the validity of erasure claims. They are, for the most part, private businesses with no obligation or commitment to uphold the rights and freedoms of citizens, especially if doing so entails the expenditure of significant resources.

Consequently, there is a need for a different framework. Clause 13(2)(d) should be amended to provide for the creation of an independent authority which will decide the validity of erasure requests. Such a body should be staffed with free speech and privacy experts who have the incentive and the capability to balance competing privacy and speech considerations. 

Conclusion 

The discussion above shows that the right to erasure provision of the Digital Personal Data Protection Bill, 2022 fails to strike a sound balance between privacy and free speech. To achieve such a balance, Clause 13(2)(d) should be amended to incorporate various safeguards. Furthermore, an independent authority, not data fiduciaries, should decide the validity of erasure requests.