The Voluntary Undertaking Provision: A Flawed Endeavor

This post is authored by Tejaswita Kharel*

In order to ease the enforcement process for data protection laws, various jurisdictions such as Singapore and Australia have incorporated voluntary undertaking provisions. Such a provision encourages organisations to self-regulate and adopt accountable practices. It is also believed that incorporating such a provision in data protection compliance frameworks will help build a collaborative relationship between data protection boards and data fiduciaries.

India has also recently taken a step in this direction. Clause 24 of the Digital Personal Data Protection Bill, 2022 [“Bill”] provides that the Data Protection Board [“Board”] may accept a voluntary undertaking at any stage, and that the acceptance of such an undertaking by the Board would constitute a bar to proceedings.

However, while voluntary undertaking provisions may work elsewhere, Clause 24 should be removed from the Bill for the following reasons: 

1] Excessive Scope of Voluntary Undertaking Provision

The voluntary undertaking regime in Singapore clearly provides that the request to invoke a voluntary undertaking process must be made “soon after the [breach] incident is known”. By contrast, the voluntary undertaking provision in the Bill states that an undertaking can be given at “any stage”, including before a breach has even taken place. This allows data fiduciaries to delay compliance and postpone the implementation of important provisions of the Bill.

For example, Clause 9(4) of the Bill provides that “Every Data Fiduciary and Data Processor shall protect personal data in its possession or under its control by taking reasonable security safeguards to prevent personal data breach.” A fiduciary could offer a voluntary undertaking stating that it will comply with this clause after a period of six months, during which time multiple breaches could occur. The scope of the voluntary undertaking clause in the Bill is thus excessively broad and is likely to give data fiduciaries too much leeway to circumvent the law and violate the rights of data principals.

2] Lack of Regulatory Standards for Voluntary Undertaking

Additionally, there is no set standard for what a voluntary undertaking offer is supposed to contain. While Clause 24 states that a voluntary undertaking may include an “undertaking to take specified action within a specified time, an undertaking to refrain from taking specified action, and an undertaking to publicise the voluntary undertaking”, these requirements are not specific enough to ensure that fiduciaries will adequately comply with the provisions of the Bill. Unlike in Singapore, data fiduciaries are under no requirement to provide an in-depth remediation plan.

3] Excessive Discretion of the Board 

Clause 24 merely says that the Board “may” accept voluntary undertakings. While it is clear that the Board has the discretion to decide whether it is appropriate to accept an undertaking, it is necessary to have standards for the acceptance or rejection of such undertakings in order to reduce the possibility of arbitrariness and misuse of the voluntary undertaking regime.

Hence, while it is important to ensure that the compliance burden on data fiduciaries is not so heavy as to impede effective implementation of the Bill, the current voluntary undertaking provision acts as a loophole that will allow fiduciaries to circumvent formal proceedings and exempt themselves from liability under the Bill.

Conclusion

The voluntary undertaking provision in the Bill should be removed. It provides too much leeway to fiduciaries to submit voluntary undertakings that will exempt them from application of key provisions of the Bill. Moreover, it fails to constrain the Board from accepting such offers. 

In addition, several clauses of the Bill adequately provide for flexibility in case of non-compliance. Clause 25(2) ensures that data fiduciaries are not penalised excessively and Clause 21(11) ensures that they are not punished for non-significant non-compliance. 

The benefit of a voluntary undertaking system is that data fiduciaries will aid the Board in understanding the technological difficulties and processes involved in the regulation of data protection. However, this understanding can also be achieved through regular and active discussions with stakeholders. This is the direction in which countries like the United Kingdom are also moving.

*Tejaswita is a Research Analyst at the Centre for Communication Governance.

New Data Protection Law: It cements the power imbalances in the data economy

By Shashank Mohan*

The Indian government has clarified that its latest attempt at drafting a robust data protection law is predicated on it being a plain, simple statute to read and comprehend. Although the simplicity of law is a laudable goal, the proverbial devil is in the details, or in this case, in the lack thereof. The Bill, which is in its fourth life, creates significant obstructions in the path of grievance redressal for a data principal (the user) seeking to remedy privacy harms and request adequate compensation. It further cements the power imbalances in the data economy between users and data-processing entities. I explain below.

First, the Bill introduces the concept of “duties of data principals”. It places various responsibilities on users: to obey all applicable laws of the land, not to register false or frivolous complaints, and not to furnish false information. An explanatory note released alongside the Bill explains that these duties have been inserted to ensure that there is no ‘misuse of rights’. It is pertinent to understand that the goal of a data protection law is to protect the privacy rights of citizens against data-processing entities and to lay down remedies for privacy harms. It should acknowledge the existing power imbalances between users and those who process/use their data, imbalances which heighten the risk of privacy loss. Users should not bear any responsibility in a law that primarily recognises the propensity of privacy harm towards them. To enforce such duties, the Bill empowers a Data Protection Board (DPB), a quasi-adjudicatory authority, to “take action” against users and impose penalties of up to Rs 10,000.

Second, a striking aspect of the Bill is how burdensome it is for users to file a complaint with the DPB. Once a complaint is filed, the DPB has the power to “close” proceedings on insufficient grounds at the preliminary stage. The Bill does not define what it envisions as insufficient grounds, or, for that matter, any bases on which complaints could be filed or rejected. It simply states that the function of the DPB would be to determine non-compliance with the Bill’s provisions and impose requisite penalties. Even if the inquiry proceeds, the DPB can, at any stage, conclude that a complaint is devoid of merit and issue a warning or impose costs on the complainant. The Bill fails to lay down any guidelines for the DPB to assess such cases and does not make clear whether these costs will be capped at Rs 10,000.

Finally, what happens in cases where the DPB concludes that there has been a transgression by a processing entity resulting in privacy harm to a user? The Bill states that it can only impose penalties where it has found such a transgression to be “significant” in nature. Predictably, the Bill does not provide guidance on how the “significance” of non-compliance is to be judged by the DPB. This is critical, as a plain reading of the Bill makes clear that the DPB has no power to impose penalties where non-compliance is positively determined but found to be “non-significant”.

These powers would give the DPB, which is wholly controlled by the central government, substantial discretion in closing and concluding complaints against data-processing entities. Considering that users would be disproportionately burdened, both financially and logistically, in filing complaints against data-processing entities, the new conditions that the Bill proposes will only add to their woes. The Bill, by design, disincentivises users from filing complaints to remedy privacy harm. Users will be at a critical disadvantage in proceedings before the DPB, as they have to adhere to vague duties, meet multiple unclear and uncertain conditions to obtain a positive determination, and, even then, may not receive suitable redressal. Given that there is no provision for awarding compensation to users in the Bill, it may be impractical for users to file complaints against data-processing entities, seriously limiting their right to seek redressal under the Bill.

Larger questions of the DPB’s independence aside, the Bill does little to provide it with the tools to impose requisite penalties and provide meaningful compensation. A law is only as strong as its enforcement. This strikes at the heart of individuals’ right to privacy and their ability to realise informational autonomy and self-determination.

There are certain pointed changes that the Bill could incorporate to address these challenges.

First, remove duties, since the primary goal of a data protection Bill is to protect the privacy of individuals. Second, empower the DPB to compensate users in cases of non-compliance; this will incentivise them to file complaints and provide meaningful redressal. Third, “significance” should not be a pre-condition for the imposition of penalties; the DPB must, on the merits of the complaint, be able to determine penalties without a requirement to determine significance. And fourth, as a corollary to the previous point, the DPB should not be able to impose costs, sanctions, or obligations on users in any situation.

Until such challenges are addressed, and the practical circumstances of users are accounted for, meaningful data protection for Indian citizens cannot be a reality.

*This article was first published on The Indian Express on December 28, 2023. It has been cross-posted with the author’s permission.

Examining ‘Deemed Consent’ for Credit-Scoring under India’s Draft Data Protection Law

By Shobhit Shukla

On November 18, 2022, the Ministry of Electronics and Information Technology released India’s draft data protection law, the Digital Personal Data Protection Bill, 2022 (‘Bill’).* The Bill sets out certain situations in which seeking an individual’s consent for the processing of their personal data is “impracticable or inadvisable due to pressing concerns”. In such situations, the individual’s consent is assumed; further, they are not required to be notified of such processing. One such situation is processing in the ‘public interest’. The Bill also illustrates certain public-interest purposes and, notably, includes ‘credit-scoring’ as one such purpose in Clause 8(8)(d). Put simply, the Bill allows an individual’s personal data to be processed non-consensually, and without any notice to them, where such processing is for credit-scoring.

Evolution of credit-scoring in India

Credit-scoring is a process by which a lender (or its agent) assesses an individual’s creditworthiness, i.e., their notional capacity to repay prospective debt, as represented by a numerical credit score. Until recently, lenders in India relied largely on credit scores generated by credit information companies (‘CICs’), licensed by the Reserve Bank of India (‘RBI’) under the Credit Information Companies (Regulation) Act, 2005 (‘CIC Act’). CICs collect and process ‘credit information’, as defined under the CIC Act, to generate such scores. Such information, for an individual, consists chiefly of the details of their outstanding loans and their history of repayment/defaults. However, with the expansion of digital footprints and advancements in automated processing, the range of datasets deployed to generate credit scores has expanded significantly. Lenders are increasingly using credit scores generated algorithmically by third-party service-providers. Such agents aggregate and process a wide variety of alternative datasets relating to an individual, alongside credit information – these may include the individual’s employment history, social media activity, and web browsing history. This allows them to build a highly data-intensive credit profile of (and assign a more granular credit score to) the individual, to assist lenders in deciding whether to extend credit. This enables lenders not only to make notionally better-informed decisions, but also to assess and extend credit to individuals with meagre or no prior access to formal credit.

While neither the Bill nor its explanatory note explains why credit-scoring constitutes a public-interest ground for non-consensual processing, it may be viewed as an attempt to remove the procedural burden associated with notice-and-consent. In the context of credit-scoring, if lenders (or their agents) are required to provide notice and seek consent at each instance of processing the numerous streams of an individual’s personal data, the procedural costs may disincentivise them from accessing certain data-streams. Consequently, with limited data to assess credit-risk, lenders may adopt a risk-averse approach and avoid extending credit to certain sections of individuals. Alternatively, they may decide to extend credit despite the supposed inadequacy of personal data, thereby exposing themselves to a higher risk of repayment defaults. While the former approach would be inimical to financial inclusion, the latter could result in the accumulation of bad loans on lenders’ balance sheets. Thus, encouraging data-intensive credit-scoring (for better-informed credit-decisions and/or for widening access to credit) may conceivably be viewed as a legitimate public interest.

However, in this post, I contend that even if this were to be accepted, a complete exemption from notice-and-consent for credit-scoring poses a disproportionate risk to individuals’ rights to privacy and data protection. The efficacy of notice-and-consent in enhancing informational autonomy remains debatable; however, a complete exemption from the requirement, without any accompanying safeguards, ignores specific concerns associated with credit-scoring.

Deemed consent for credit-scoring: Understanding the risks

First, the provision allows non-consensual processing of all forms of personal data, regardless of any correlation between such data and creditworthiness. In effect, this would encourage lenders to leverage the widest possible range of personal datasets. As research has demonstrated, the deployment of disparate datasets increases incidences of inaccuracy as well as of spurious connections between the data-input and the output. In credit-scoring, the historical data on which the underlying algorithm is trained may suggest, for instance, that borrowers from a certain social background are likelier to default on repayment. Credit-scores generated from such fallacious and/or unverifiable conclusions can embed systemic disadvantages into future credit-decisions and deepen the exclusion of vulnerable groups. The exemption from notice-and-consent would only increase the likelihood of such exclusion, because individuals would have no knowledge of the data-inputs used, or of the algorithm by which such data-inputs were processed, and consequently no recourse against any credit-decisions arrived at via such processing.
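
To make this concern concrete, here is a minimal, hypothetical sketch in Python (all feature names, data, and the model are invented for illustration and do not describe any actual credit-scoring system): a toy scorer trained on historically biased repayment records learns to penalise a proxy for social background, even though the proxy has no causal link to repayment.

```python
# Hypothetical illustration only: a toy credit scorer trained on synthetic,
# historically biased data. Nothing here reflects a real lender's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features: income (genuinely relevant to repayment) and a
# "pin-code group" that merely proxies for social background.
income = rng.normal(50, 15, n)
pin_code_group = rng.integers(0, 2, n)

# True repayment depends on income alone...
true_repay = (income + rng.normal(0, 10, n)) > 45
# ...but past lending practice disproportionately recorded group 1 as
# defaulting, so the historical labels the model learns from are biased.
recorded_repay = true_repay & ~((pin_code_group == 1) & (rng.random(n) < 0.3))

X = np.column_stack([income, pin_code_group])
model = LogisticRegression(max_iter=1000).fit(X, recorded_repay)

print("weight on income:        ", round(model.coef_[0][0], 3))
print("weight on pin-code group:", round(model.coef_[0][1], 3))
# The learned weight on pin_code_group is negative: membership in group 1
# depresses the score despite having no causal link to repayment.
```

Because the exemption removes both notice and consent, the individual never learns that such a proxy was among the data-inputs, let alone how it was weighted, and so has no practical recourse against a score depressed by it.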

Second, the provision allows any entity to non-consensually process personal data for credit-scoring. Notably, CICs are specifically licensed by the RBI to, inter alia, undertake credit-scoring. Additionally, in November 2021, the RBI amended the Credit Information Companies Regulations, 2006, to provide an avenue for entities (other than CICs) to register with any CIC, subject to the fulfilment of certain eligibility criteria, and to consequently access and process credit information for lenders. By allowing any entity to process personal data (including credit information) for credit-scoring, the Bill appears to undercut the RBI’s attempt to limit the processing of credit information to entities under its purview.

Third, the provision allows non-consensual processing of personal data for credit-scoring at any instance. A plain reading suggests that such processing may be undertaken even before the individual has expressed any intention to avail credit. Effectively, this would give entities free rein to pre-emptively mine troves of an individual’s personal data. Such data could then be processed to profile the individual and behaviourally target them with customised advertisements for credit products. Clearly, such targeted advertising, without any intimation to the individual and without any opt-out, would militate against the individual’s right to informational self-determination. Further, as an RBI-constituted Working Group has noted, targeted advertising of credit products can promote irresponsible borrowing by individuals, leading them into debt entrapment. At scale, predatory lending enabled by targeted advertisements could perpetuate unsustainable credit and pose risks to economic stability.

Alternatives for stronger privacy-protection in credit-scoring

The above arguments demonstrate that the complete exemption from notice-and-consent for the processing of personal data for credit-scoring threatens individual rights disproportionately. Moreover, it may undermine the very objectives that policymakers may be attempting to fulfil through the exemption. Thus, Clause 8(8)(d) of the Bill requires serious reconsideration.

First, I contend that Clause 8(8)(d) may simply be deleted before the Bill is enacted into law. In view of the CIC Act, CICs and other entities authorised by the RBI under the CIC Act shall, notwithstanding the deletion of the provision, continue to be able to access and process credit information relating to individuals without their consent – such processing shall remain subject to the safeguards contained in the CIC Act, including the right of the individual to obtain a copy of such credit information from the lender.

Alternatively, the provision may be suitably modified to limit the exemption from notice-and-consent to certain forms of personal data. Such personal data may be limited to ‘credit information’ (as defined under the CIC Act) or ‘financial data’ (as may be defined in the Bill before its enactment) – resultantly, the processing of such data for credit-scoring would not require compliance with notice-and-consent. The non-consensual processing of such forms of data (as opposed to all personal data), which carry logically intuitive correlations with creditworthiness, would arguably correspond more closely to the individual’s reasonable expectations in the context of credit-scoring. An appropriate delineation of this nature would provide transparency in processing and also minimise the scope for fallacious and/or discriminatory correlations between data-inputs and creditworthiness.

Finally, as a third alternative, Clause 8(8)(d) may be modified to empower a specialised regulatory authority to notify credit-scoring as a purpose for non-consensual processing of data, but within certain limitations. Such limitations could relate to the processing of certain forms of personal data (as suggested above) and/or to certain kinds of entities specifically authorised to undertake such processing. This position would resemble proposals under previous versions of India’s draft data protection law, i.e. the Personal Data Protection Bill, 2019 and the Personal Data Protection Bill, 2018 – both draft legislations required any exemption from notice-and-consent to be notified by regulations. Further, such notification was required to be preceded by a consideration of, inter alia, individuals’ reasonable expectations in the context of the processing. In addition to this balancing exercise, the Bill may be modified to require the regulatory authority to consult with the RBI, before notifying any exemption for credit-scoring. Such consultation would facilitate harmonisation between data protection law and sectoral regulation surrounding financial data.

*For our complete comments on the Digital Personal Data Protection Bill, 2022, please click here – https://bit.ly/3WBdzXg

Censoring the Critics: The Need to Balance the Right to Erasure and Freedom of Speech

Clause 13(2)(d) of the Digital Personal Data Protection Bill, 2022 (“DPDP Bill”) provides for the right to erasure of personal data, i.e. “…any data about an individual who is identifiable by or in relation to such data”. The clause states that a data principal has the right to erasure of personal data in accordance with applicable laws and as prescribed. It further provides that such erasure shall take place after the data fiduciary receives a request for erasure. The precondition for erasure is that the personal data must no longer be necessary for the purpose for which it was processed, and must not be necessary for any legal purpose either.

This is in many ways a salutary provision. Data principals should have control over their data, which includes the right to correct and erase data. This is especially important since it protects individuals from the negative impacts of the widespread availability of personal data on the internet. In today’s digital age, it is easier than ever for personal data to be collected, shared, and used in ways that are harmful or damaging to individuals. The right to erasure aids in countering these negative impacts by giving individuals the power to control their own personal information and to have it removed from the internet if they so choose.

However, this provision can negatively impact several other fundamental rights, such as the freedom of speech and the right to information, especially when it is abused by powerful figures to silence criticism. For example, if an investigative journalist were to write an article bringing to light a government official’s corrupt deeds, the official could request the data fiduciary to erase such data, since the official is identifiable by or related to it.

This article will seek to address such concerns in two ways. First, it will delve into the safeguards that can be included in the text of Clause 13(2)(d) to ensure that there is an appropriate balance between free speech and privacy. Second, it will recommend that the arbiter of this balance should be an independent authority and not data fiduciaries. 

(1) Safeguards 

Clause 13(2)(d) is heavily tilted in favour of the privacy interests of the data principal. It does not require data fiduciaries to take into account any other considerations that might have a bearing on the data principal’s erasure request. In order to prevent privacy interests from undermining other rights, the clause should be amended to include various safeguards.

In particular, the clause should require data fiduciaries to consider the free speech rights of other individuals who might be affected by an erasure request. As indicated earlier, journalists may find it difficult to publish critical commentary on powerful public figures if their work is subject to easy erasure. There are also artistic, literary and research purposes for which personal data might be used by other individuals. These are valid uses of personal data that should not be negated simply because of an erasure request. 

Data fiduciaries can also be made to consider the following factors through subordinate legislation to harmonise free speech and privacy: (a) the role of the data principal in public life, (b) the sensitivity of the personal data sought to be erased, (c) the purpose of processing, (d) the public nature of the data, and (e) the relevance of the personal data to the public. Incorporating such safeguards will help ensure that data fiduciaries appropriately balance the right to privacy and the right to speech when they receive erasure requests.

Further, a clearly laid out process for grievance redressal should also be codified. Currently, Clause 13(2)(d) does not provide for an appeal mechanism for erasure requests that have been rejected by data fiduciaries. The clause should explicitly provide that in case the data principal wants to contest the rejection of their erasure request, they can file a complaint with the Data Protection Board (DPB). 

(2) Independent Authority 

In addition to lacking sufficient safeguards, Clause 13(2)(d) puts the onus on data fiduciaries to decide the validity of erasure requests. Various jurisdictions, including the United Kingdom, Spain, and other European Union member states, use this framework. However, giving decision-making power directly to data fiduciaries will have a chilling effect on speech.

This is because they will tend to comply mechanically with erasure requests in order to escape liability for non-compliance. Data fiduciaries lack the bandwidth needed to properly assess the validity of erasure claims. They are, for the most part, private businesses with no obligation or commitment to uphold the rights and freedoms of citizens, especially if doing so entails the expenditure of significant resources.

Consequently, there is a need for a different framework. Clause 13(2)(d) should be amended to provide for the creation of an independent authority which will decide the validity of erasure requests. Such a body should be staffed with free speech and privacy experts who have the incentive and the capability to balance competing privacy and speech considerations. 

Conclusion 

The discussion above shows that the right to erasure provision of the Digital Personal Data Protection Bill, 2022 fails to strike a sound balance between privacy and free speech. To achieve such a balance, Clause 13(2)(d) should be amended to incorporate various safeguards. Furthermore, an independent authority, not data fiduciaries, should decide the validity of erasure requests.

Comments on the Digital Personal Data Protection Bill, 2022*

The Ministry of Electronics and Information Technology (MeitY) released a new data protection bill on November 18, 2022. This is the fourth iteration of a data protection bill in India after the Puttaswamy judgment in 2017. We appreciate the efforts of MeitY in drafting a clear and accessible Digital Personal Data Protection Bill, 2022 (‘DPDP Bill’). However, the proposed framework does not provide comprehensive protection of privacy rights and redressal of harms for data principals. In many instances, the DPDP Bill dilutes or misses out on safeguards and protections that were present in earlier versions of the Bill. 

Our submission to MeitY, as part of the public consultation on the DPDP Bill, highlights the following five fundamental concerns with the DPDP Bill: (i) failure to safeguard the privacy of data principals, (ii) absence of key data protection principles, (iii) excessive reliance on delegated legislation, (iv) lack of independence and regulatory powers for the Data Protection Board of India (‘DPBI’), and (v) imposition of onerous burdens on data principals.

1. Fails to safeguard the privacy of data principals: We would like to highlight that the DPDP Bill does not place the right to privacy of the data principal at the centre of its objectives. The Supreme Court in Puttaswamy has explicitly recognised the right to privacy of individuals. Consequently, a data protection law should be rights-centric and drafted with the intention of protecting privacy and empowering individuals to meaningfully exercise this right. Instead, by focusing on removing barriers from data processing activities, the Bill dilutes the rights of data principals, limits the understanding of harm, and eases the obligations of data fiduciaries. For example, the preamble focuses on processing data in a manner that recognises the right to data protection alongside the need to lawfully process data. This is a significant departure from previous iterations of the Bill. To clarify its intent, the Bill should go further and explicitly state that (i) the right to privacy is a fundamental right in India and (ii) it seeks to protect the informational privacy of individuals.

2. Absence of key data protection principles: Universally recognised data protection principles such as collection limitation, purpose limitation, and openness are not adequately reflected in the framework of the DPDP Bill. To ensure a privacy-focused data protection regulation, it is crucial for the Bill to be grounded in strong principles that enshrine the rights of individuals. The A.P. Shah Committee in 2012 examined numerous international privacy practices and principles and recommended the incorporation of national privacy principles in any privacy legislation. These principles have also been referred to by the Supreme Court in Puttaswamy. The explanatory note accompanying the Bill features a few of these principles, but it is important for all of them to be included within the data protection legislation itself. We thus recommend that core data protection principles be incorporated in the text of the Bill.

3. Excessive reliance on delegated legislation: Delegating rule-making power to the government is necessary to provide flexibility, accommodate future circumstances, and prevent laws from becoming obsolete. However, the DPDP Bill does so without providing the government any legislative guidance or criteria for the framing of such delegated legislation. Having guidelines set out precisely in the text of the Bill will help not only data principals but also the government in exercising its rule-making power. This ties in with the government’s goal of ensuring the comprehensibility of the law for citizens, as expressed in the explanatory note released alongside the Bill. Therefore, the DPDP Bill must articulate foundational principles, safeguards, and criteria to guide the framing of delegated legislation within the text of the Bill.

4. Lack of independence and regulatory powers for the DPBI: The DPDP Bill envisions the Data Protection Board of India (‘DPBI’) as only a quasi-judicial body. This is a departure from previous versions of the Bill, where a regulatory authority was envisaged. Data protection is a technical subject that necessitates the establishment of an expert regulator composed of individuals with the necessary expertise and regulatory capacity to exercise various regulatory powers. To ensure effective data protection, it may be crucial for the DPBI to have regulatory powers, especially over subject areas such as determining the grounds for non-consensual processing of personal data.

The DPBI’s functions are further diminished by its lack of independence. The government will determine many aspects of its operation and functioning such as the appointment and removal of its members. As a result, the DPBI may not be well positioned to take decisions which are independent of government considerations.

We recommend that the Bill establish a board with independent regulatory and adjudicatory powers. Such a body will be well positioned to serve the best interests of data principals.

5. Imposition of onerous burdens on data principals: Many provisions of the DPDP Bill discourage and disincentivise data principals from exercising their rights. For instance, data principals are burdened with unforeseen consequences while exercising their basic right to withdraw consent for processing data. Many data principals who would otherwise have exercised this choice will now refrain from doing so, due to the uncertainty about what the ‘consequences’ of such withdrawal could entail.

Additionally, before proceeding with their grievances against data fiduciaries, data principals have to jump through several hoops. They have to ensure that they are fulfilling several duties or risk paying a hefty penalty. The DPBI may at various stages dismiss complaints for insufficient grounds or lack of merit. Even when a data principal is successful in establishing a valid claim, the DPBI cannot impose penalties on a data fiduciary without establishing that the non-compliance is ‘significant’ in nature. Nor does the Bill provide the DPBI with powers to award any compensation to data principals. A data protection regulation whose enforcement mechanism disadvantages data principals in these ways will not be able to effectively fulfil its objectives. Additionally, the Bill specifically imposes duties and penalties on data principals, which will only further hinder them from exercising their rights. We recommend that (i) the duties of data principals be removed from the Bill, (ii) the DPBI be able to impose penalties even for non-significant non-compliance, and (iii) the DPBI be empowered to provide sufficient compensation to data principals.

*(Due to character limits on the innovateindia.mygov.in website, we were able to provide only partial submissions to MeitY. For our complete comments on the Digital Personal Data Protection Bill, 2022, please click here – https://bit.ly/3WBdzXg)

On the Exclusion of Regulatory Sandbox Provisions from Data Protection Law

On November 18, 2022, the Ministry of Electronics & Information Technology (‘MeitY’) released the new Digital Personal Data Protection Bill, 2022 (‘2022 Bill’) as the proposed governing legislation for personal data. Prior to the 2022 Bill, the Personal Data Protection Bill, 2019 (‘2019 Bill’) was the proposed legislation to govern personal data and protect data privacy. The 2019 Bill was withdrawn during the Monsoon session of Parliament in August 2022, after receiving significant amendments and recommendations from the Joint Committee of Parliament in 2021.

The 2022 Bill has removed several provisions from the 2019 Bill, one of which pertains to the creation of a regulatory sandbox for encouraging innovation in artificial intelligence, machine-learning, or any other emerging technologies (under Clause 40 of the 2019 Bill). While some experts have criticised the 2022 Bill for not retaining this provision, I contend that the removal of the regulatory sandbox provision is a positive aspect of the 2022 Bill. In general, regulatory sandbox provisions should not be incorporated into data protection laws for the following reasons: 

  1. The limited scope and purpose of data protection legislation

Data protection laws are drafted with the specific purpose of protecting the personal data of individuals, creating a framework to process personal data, and laying down specific rights and responsibilities for data fiduciaries/processors. Although firms participating in a sandbox may process personal data, the functions of sandboxes are more expansive than regulating personal data processing. The primary purpose of regulatory sandboxes is to create isolated, controlled environments for the live testing, development, and restricted, time-bound release of innovations. Sandboxes are also set up to help regulatory authorities monitor and form adaptive regulations for these innovative technologies, as they are either partially or completely outside the purview of existing legislations.

Since the scope of regulatory sandboxes is broader than that of data protection legislations, it is insufficient for a sandbox provision to be included in a data protection legislation with limited compliances and exemptions from the provisions of such legislation. A separate legislation is required to regulate such emerging technologies.

The regulatory sandbox framework under the European Union’s Proposed Artificial Intelligence Act, 2021 (‘EU AI Act’), as well as the regulatory sandboxes established by SEBI, RBI, and other authorities in India, demonstrate this clearly. These frameworks are established separately from existing legislations, and provide a specific scope and purpose for the sandbox in a clear and detailed manner.

  2. The limited expertise and conflicting mandate of a data protection authority

Data protection authorities (‘DPAs’) are appointed to protect the rights of data principals. They lack the necessary expertise over emerging technologies to also function as the supervisory authority for a regulatory sandbox. Hence, a regulatory sandbox is required to be monitored and supervised by a separate authority which has expertise over the specific areas for which the sandbox is created.

Moreover, it is not sufficient to merely constitute a separate authority for sandboxes within a data protection law. Since the supervisory authority for sandboxes is required to privilege innovation and the development of technologies over the strict protection of personal data, the functions of this authority will directly conflict with those of the DPA. Therefore, the regulatory sandbox framework is required to be incorporated in a separate legislation altogether.

  3. Sector-specific compliance provisions for regulatory sandboxes

The desire to regulate artificial intelligence and emerging technologies under a data protection legislation is understandable, as these technologies process personal data. However, it is to be noted that AI systems and other emerging technologies also process non-personal data and anonymised data. 

The regulatory sandbox for these technologies is thus not only subject to the principles of data protection law, but is in fact a nexus of information technology law, anti-discrimination law, consumer protection law, e-commerce law, and other applicable laws. Accordingly, the framework for the regulatory sandbox cannot be placed within a data protection legislation or subordinate rules to such a legislation. It has to be regulated under a separate framework which ensures that all the relevant laws are taken into account, and that the safeguards are not limited to personal data safeguards.

Since the exemptions, mitigation of risks, and compliance for the different emerging technologies are to be specifically tailored to those technologies (across various laws), the regulatory mechanism for the same cannot be provided in a data protection legislation. 

Conclusion

The above arguments establish the basis for not incorporating sandbox provisions within a data protection legislation. Regulatory sandboxes, based on their framework alone, do not belong in a data protection legislation. The innovation-centric mandate of the sandbox framework and the functions of the supervisory authority conflict with the core principles of data protection law and the primary functions of DPAs. The limited scope of data protection law, coupled with the lack of expertise of DPAs, decisively establishes the incongruence between the regulatory sandbox provision and data protection legislations.

Commentators who critique the exclusion of the sandbox provision from the 2022 Bill are right to be concerned about rapid developments in artificial intelligence and other emerging technologies. But it would be far more prudent for them to recommend that the Central government set up an expert committee to analyse these developments and prepare a separate framework for the sector. Such a framework can comprehensively account for the various mechanisms (beyond data protection) required to govern these emerging technologies.

Guest Post: Proportionality concerns with the Criminal Procedure (Identification) Act

This post is authored by Ishita Tulsyan and Navdha Sharma.

The introduction of the Criminal Procedure (Identification) Act, 2022 (‘the Identification Act’) raised several surveillance and privacy concerns. Replacing the Identification of Prisoners Act, 1920 (‘the Old Prisoners Act’), it attempts to modernise the process of identifying persons involved in criminal allegations in order to expedite and enhance criminal investigations. This is accomplished by expanding the types of ‘measurements’ that can be obtained (i.e., the data that can be collected), expanding the persons from whom measurements may be collected, and providing for the storage of the said data for a period of 75 years.

The Identification Act permits the collection of measurements for an expansive set of categories and increases the persons whose measurements can be collected. Section 2(1)(b) of the Identification Act defines “measurements”. While the Old Prisoners Act authorised only the collection of measurements such as finger-impressions and foot-impressions, the Identification Act now includes within its ambit “finger-impressions, palm-print impressions, foot-print impressions, photographs, iris and retina scan, physical, biological samples and their analysis, behavioural attributes including signatures, handwriting,” in addition to any other examination mentioned in Sections 53 and 53A of the Code of Criminal Procedure, 1973. This represents a significant expansion in the type of data collected from individuals.

Under the Old Prisoners Act, measurements could only be taken from persons who were convicted, or from those arrested in connection with an offence punishable by rigorous imprisonment of more than one year. However, under the Identification Act, measurements can be taken from all convicted and arrested persons, without any minimum threshold for those not convicted. Further, measurements can be taken from individuals under preventive detention as per Section 3(c). Thus, the new Act both introduces a wide range of new measurements and permits those measurements to be taken from far more people than under the Old Prisoners Act.

In this blog, the authors analyse the constitutionality of the Identification Act by examining whether the collection and storage of measurements satisfy the proportionality test for privacy-infringing measures set out in Justice K. S. Puttaswamy v Union of India (five-judge bench) (“Puttaswamy”).

Proportionality: the Puttaswamy test

The proportionality test, first set out in the Right to Privacy decision, was subsequently elucidated upon and applied by Sikri J. in the Puttaswamy judgment. The criteria for judging the constitutionality of State interference with an individual’s right to privacy may be summarised as follows:

  1. Legitimate aim – the action taken by the government must be for a proper or legitimate purpose.
  2. Rational nexus – there should be a rational connection between the infringing act and the legitimate state aim sought to be achieved.
  3. Necessity – the state must demonstrate that it is necessary to introduce an intrusive measure despite the negative effect on the rights of the individuals; including that there are no lesser restrictive measures of similar efficacy available to the State.
  4. Balancing – between the need to adopt the measure and the right to privacy.

Assessing the Identification Act –

Legitimate Aim; the expanded provision for measurements does, arguably, serve a proper purpose. Just like the Old Prisoners Act, it is meant to aid the police in investigating crimes.

Rational Nexus; completion of the investigative procedure with speed and accuracy is a legitimate state aim, and the current expansion in the categories of measurements that can be obtained will aid in achieving it. The new measures would enable the authorities to create a database using the collected measurements and match the data of suspects against it, thereby aiding criminal investigations.

Necessity; there is no denying that the Identification Act interferes with extremely personal data of individuals, as it broadens the scope of both the measurements (as explained above) and the categories of people from whom they can be obtained. On a comparative reading of Section 2(a) of the Old Prisoners Act and Section 2(1)(b) of the Identification Act, it is evident that the latter encompasses significantly more data collection than the former. As the erstwhile Old Prisoners Act thus constitutes a lesser restrictive measure, the burden lies on the State to establish that the Old Prisoners Act did not fulfil the “legitimate state aim” as effectively as the Identification Act will. This requires the State to demonstrate that the Old Prisoners Act failed to meet the state aim of expediting the criminal investigation process, giving rise to the need for a new, more privacy-infringing Act. Absent this, the Old Prisoners Act remains a viable lesser restrictive measure. However, the State has failed to discharge this burden, as it did not provide any data or conduct any study showing that the Old Prisoners Act fell short of achieving the state aim. Thus, due to the existence of a less restrictive alternative (in the form of the Old Prisoners Act), the necessity limb of the proportionality test is not met.

Proportionality or Balancing; it is imperative that the State’s rights-infringing measures are not absolute and do not curtail the rights of individuals any more than necessary. The removal of the minimum threshold of severity of offences, as it relates to whose data can be collected, will enable the authorities to collect the data of persons charged with petty offences carrying punishments as short as a month. The Identification Act does not even attempt to define the term ‘biological samples’ or what it would entail, leaving major scope for misuse at the hands of state authorities. Because the term is not defined anywhere, it could be construed to include tests such as narco-analysis, polygraph tests, and brain electrical activation profile tests. Such methods are not only extremely intrusive and violative of bodily autonomy, but also of the right against self-incrimination. Further, the proportionality test requires the maintenance of a balance between the extent and nature of the interference and the reasons for interfering. While there might be substance in the rationale behind the collection of measurements, there is no reasonable justification for retaining the measurements for a period of 75 years, especially as this severely undermines the right to privacy of such individuals even after they have served their sentence, if any. This is especially true considering that life expectancy in India is itself 71 years. Thus, even if the necessity limb of the test were satisfied, the balancing limb would still warrant that the Identification Act be struck down.

Conclusion

The proportionality test given under Puttaswamy is a conjunctive test; failing any limb results in the measure being struck down. The Criminal Procedure (Identification) Act, 2022 fails to satisfy the necessity test to begin with, as the government has nowhere demonstrated that the lesser restrictive measure the Identification Act replaces failed to meet its investigative requirements. Further, even beyond that, the balancing limb of the proportionality test is not satisfied, given the Act’s extremely broad application and excessive data retention requirements. It thereby impermissibly restricts the right to privacy of convicted and non-convicted persons.

Comparative tracker for the Digital Personal Data Protection Bill, 2022

The Digital Personal Data Protection Bill, 2022 (“2022 Bill”) was released by the Ministry of Electronics and Information Technology on November 18, 2022, with the stated intent of being concise, comprehensible, and simplified for citizens. To these ends, the 2022 Bill has made significant changes to the framework of the earlier Personal Data Protection Bill, 2019 (“2019 Bill”), which was withdrawn in August 2022 during the Monsoon session of Parliament.

We have prepared this detailed tracker to record the changes made in the 2022 Bill and to compare the differences in the key provisions of the 2022 Bill and the 2019 Bill. This tracker can be a helpful reference while analysing the two Bills, or even a quick guide to the changes introduced in the 2022 Bill.

This tracker uses the 2019 Bill as the reference for the changes, as this was the last version of the Data Protection Bill introduced before Parliament as a comprehensive legislation. We have analysed each clause and sub-clause of the 2022 Bill and compared it to the corresponding provisions of the 2019 Bill. We have provided the full text of the provisions (highlighting the differences) as well as a brief summary of the changes under the 2022 Bill. Readers may use the 2022 Bill as the base when looking for the changes made to specific provisions of the 2019 Bill.

As the public and expert analyses and opinions on the 2022 Bill are still being developed, we invite comments on any errors or omissions of corresponding provisions which may be present in this tracker.

The tracker can be accessed here.

Please note: a detailed list of the removed provisions from the 2019 Bill will be added to the tracker in the next few days.

(Compiled by Tejaswita Kharel and Vignesh Shanmugam)

(Update January 24, 2023: We have included in the tracker a detailed list of provisions of the 2019 Bill removed in the 2022 Bill, along with our comments on the same)

CCG’s Comments to the Ministry of Electronics and Information Technology on the Draft National Data Governance Framework Policy

Authors: Joanne D’Cunha and Bilal Mohamed

On 26th May 2022, the Ministry of Electronics and Information Technology (MeitY), released the Draft National Data Governance Framework Policy (NDG Policy) for feedback and public comments. CCG submitted its comments on the NDG Policy, highlighting its feedback and key concerns with the proposed Data Governance Framework. The comments were authored by Joanne D’Cunha and Bilal Mohamed, and reviewed and edited by Jhalak M. Kakkar and Shashank Mohan.

The draft National Data Governance Framework Policy is a successor to the draft ‘India Data Accessibility and Use’ Policy, which was circulated in February 2022 for public comments and feedback. Among other objectives, the NDG Policy aims to “enhance access, quality, and use of data to enable a data-led governance” and “catalyze AI and Data led research and start-up ecosystem”.


CCG’s comments to MeitY are divided into five parts –

In Part I of the comments, we foreground our concerns by emphasising the need for comprehensive data protection legislation to safeguard citizens from potential privacy risks before implementing a policy around non-personal data governance.

In Part II, we focus on the NDG Policy’s objectives, scope, and key terminologies. We highlight that the NDG Policy does not sufficiently define key terms and phrases such as non-personal data, anonymisation, data usage rights, Open Data Portal, Chief Data Officers (CDOs), datasets ecosystem, and ownership of data. Clear definitions will bring much-needed clarity, help stakeholders appreciate the objectives and implications of the policy, and enhance engagement in the policy consultation process from stakeholders, including the various government departments. We also highlight that the policy does not illustrate how it will intersect and interact with other proposed data governance frameworks such as the Data Protection Bill, 2021 and the Non-Personal Data Governance Framework. Further, we express our concerns around the NDG Policy’s objective of cataloguing datasets for increased processing and sharing of data with the aim of deploying AI more efficiently. The Policy relies on creating a repository of data to further analytics, AI, and data-led research. However, it does not take into consideration that increasing access to data might not be as beneficial if the computational powers of the relevant technologies are inadequate. Therefore, it may be more useful if greater focus is placed on developing computing abilities as opposed to increasing the quantum of data used.

In Part III, we focus on the privacy risks, highlighting concerns around the development and formulation of anonymisation standards, given the threat of re-identification from the linkage of different datasets. This, we argue, can pose significant risks to individual privacy, especially in the absence of a data protection legislation that can provide safeguards and recognise individual rights over personal data. In addition to individual privacy harms, we also point to the potential for collective harms from using aggregated data. To this end, we suggest the creation of frameworks that can keep up with the increased risks of re-identification posed by new and emerging technologies.

Part IV of our comments explores the institutional framework and regulatory structure of the proposed India Data Management Office (IDMO). The proposed IDMO is responsible for framing, managing, reviewing, and revising the NDG Policy. Key concerns about the IDMO’s functioning pertain to the exclusion of technical experts and representatives of civil society and industry from the IDMO. There is also ambiguity about the technical expertise required of the Chief Data Officers of the Data Management Units of government departments and ministries, and about the implementation of the redressal mechanism. In this section, we also highlight the need for a framework within the Policy to define how user charges will be determined for data access. This is particularly relevant to ensure that access to datasets is not skewed and is available to all for the public good.

You can read our full submission to the ministry here.

Indian courts on the Right to be Forgotten

This post is authored by Krishnesh Bapat

The right to be forgotten empowers individuals to seek the de-indexing, erasure, or deletion of their personal data. The right is significant because it enables an individual to “determine the development of their life in an autonomous way, without being perpetually or periodically stigmatised because of specific actions performed in the past”. However, the right to seek the erasure of information from the public domain conflicts with the right to freedom of expression and the right to access information. Thus, even jurisdictions that statutorily recognise the right to be forgotten enforce it in limited circumstances.

In 2014, the European Court of Justice (ECJ) recognised the right in Google Spain SL and Google Inc v Mario Costeja González, where the court directed Google to de-index a newspaper article because it disclosed the financial history of the petitioner. De-indexing would remove the page from the search engine’s index (in this case, Google’s) but not from the source web page. The Court noted that, in general, the petitioner’s (i.e., the data subject’s) rights would override Google’s (i.e., the data controller’s) legitimate interest in prominently publishing the article; however, a balance had to be struck depending on the nature of the information, particularly whether the information was of interest to the public. In another judgment, Google LLC v CNIL, the ECJ ruled that an obligation on a search engine to de-index a webpage for users in the EU did not extend to de-indexing the page in other jurisdictions. Subsequently, Article 17 of the General Data Protection Regulation of the European Union gave statutory recognition to the ‘right to be forgotten’. This Article empowers data subjects to seek the erasure of their personal data if, among other things, there is no overriding legitimate cause (such as public interest) for the continued processing of their data.

In India, the right is yet to gain statutory recognition. Clause 20 of the Data Protection Bill, 2021 (‘DPB’), which has recently been scrutinised by a Joint Parliamentary Committee, recognises the right of data principals (i.e., individuals) to prevent the ‘continuing disclosure’ of personal data if: (i) it has served the purpose for which it was collected or is no longer necessary for the said purpose; (ii) consent for such processing is withdrawn; or (iii) the disclosure is contrary to any law. However, the right can only be enforced by an order of an Adjudicating Officer appointed under the DPB. To obtain a favourable order, the data principal must demonstrate to the Officer that their interest in preventing the continued disclosure of their data overrides the right to freedom of speech and expression and the right to information of other citizens, as well as the right of the data fiduciary to retain, use, and process such data in accordance with the provisions of the DPB.

While the DPB is yet to be enacted into law, the Privacy High Court Tracker, launched by the Centre for Communication Governance as a part of its Privacy Law Library, shows that High Courts across the country have begun to determine the contours of the right to be forgotten as applicable to Indians. Most notable is the Orissa High Court’s decision in Subhranshu Rout @ Gogul v the State of Odisha. This case did not involve a pleading seeking the erasure of information. Instead, the discussion on the right to be forgotten arose when a person accused of sexually harassing a woman and uploading images of her on Facebook sought bail. In its judgment rejecting the bail application, the Court noted that while the Indian criminal justice system prescribes strong penal action against the alleged actions of the bail applicant, it does not provide any mechanism to permanently delete the objectionable material from social media. The Court found that allowing such offensive material to remain on social media was an affront to the victim’s privacy rights. The Court discussed the right to be forgotten extensively but did not order the removal of the objectionable material because of a lack of enabling legislation. At the same time, the Court permitted the victim to approach the Court separately for the erasure of the offensive posts.

The Orissa High Court in Subhranshu Rout was presented with facts that did not require balancing the victim’s rights with the public interest. Unlike the Orissa High Court, the Delhi High Court has, on at least two occasions, passed interim orders enforcing the right to be forgotten against content that may have been in the public interest. The first was Zulfiqar Ahman Khan v Quintillion Business Media. In Zulfiqar, the plaintiff had sought a permanent injunction against the defendant, which had published two articles documenting sexual harassment complaints against him as a part of the #MeToo campaign. After the defendants had agreed to take down the articles during the pendency of the suit (without prejudice to their rights), the plaintiff asked for an injunction against the re-publication of the articles by third parties. Again, this was not an application seeking the erasure of information; instead, it was a case of the plaintiff seeking enforcement of an injunction against the re-publication of previously injuncted content. But in paragraph 9, the Court cited the plaintiff’s right to be forgotten as one of the reasons to prevent re-publication of the articles. However, the Court did not explain why the plaintiff’s right to privacy should be protected over the defendants’ right to freedom of speech and the public’s right to information.

The second case was an interim order in Jorawar Singh Mundy v Union of India, where the petitioner sought the removal of a reported judgment from the public domain. In that judgment, the petitioner’s acquittal had been upheld by the Delhi High Court. The petitioner’s grievance was that he faced a considerable disadvantage in seeking employment because the judgment showed up whenever anyone conducted a background verification on him. The Court directed Google to de-index the judgment and directed Indian Kanoon (the website where the judgment was posted) to block the judgment from being accessed via search engines. This interim order is subject to change based on the final decision of the Court, but the case is significant because in Jorawar, unlike the cases mentioned above, the petitioner expressly sought enforcement of his right to be forgotten, which was granted by the Court.

However, in a similar case, the Madras High Court in Karthik Theodre v Registrar General, Madras High Court, in its final decision dated 3 August 2021, adopted a different approach. The petitioner therein also sought the destruction, erasure, or redaction of personal information from a court decision (a judgment of acquittal) that was available in the public domain. While in its interim order the Court found that a prima facie case had been made out for redacting his name, the final judgment recorded that granting such a plea would lead to ‘utter confusion’ in the absence of a proper policy. The Court also observed that it would be more appropriate to await the enactment of data protection legislation, which might provide an objective criterion to be followed while dealing with pleas for the redaction of the names of accused persons acquitted in criminal proceedings.

The Supreme Court has also considered the balance between the right to privacy and the freedom of speech and expression in the context of judicial orders, in R. Rajagopal v State of Tamil Nadu. In that case, the Court recognised that the right to privacy is implicit in Article 21. Still, it did not extend the protection to individuals from publications based on public records, including court records. The Apex Court also noted that an exception to this rule must be carved out in cases involving a female victim of sexual assault, who ‘should not be subjected to the indignity of [being identified by] her name.’ Considering the ease with which personal data can be accessed in the digital age, the scope of the exception may be expanded to include those cases where the publication of a judgment is unjust – as in Jorawar Singh Mundy, where the continued publication of the petitioner’s case did not contribute to public discourse but adversely affected his life. However, as the Madras High Court correctly points out, this should not be done in an ad-hoc manner without objective criteria, ideally provided by legislation or a policy formulated by the courts themselves.

Nevertheless, until such criteria are enacted in the form of data protection legislation, which may take a while, the High Courts will continue to formulate the law on the right to be forgotten. The Madras High Court may have passed the buck to the legislature, but the Delhi High Court, or the Kerala High Court where another case is pending, may not do so. Consistency across courts, especially in the context of judicial orders, is necessary.

Disclaimer – The author is part of the legal team representing Indian Kanoon in a case related to the right to be forgotten which is pending before the Kerala High Court.