India’s foray into the vertical regulation of AI technologies

By Nidhi Singh

AI governance has been a prominent theme in regulatory and policy circles in recent years. Given the economic potential of AI and rapid developments in the field, there have been many calls to strengthen the regulation of AI applications. In this post, we discuss some of the approaches to AI governance currently emerging in India. 

AI Regulation: Vertical vs Horizontal Approach

Globally, there are two broad approaches to AI regulation – the horizontal approach and the vertical approach. The debate between these approaches revolves around the scope and specificity of the regulations. A horizontal regulatory framework, exemplified by the European Union’s AI Act, seeks to provide overarching guidelines that apply uniformly across various sectors and applications of AI. This means that the AI Act applies to all uses of AI across sectors, from facial recognition technologies and self-driving cars to the use of AI in video games. This approach lays down a basic level of protection for all AI applications used in the EU, and uses a risk-based framework to provide stricter regulation for AI which has a greater impact on human rights. 

In contrast, a vertical approach involves tailoring regulations to address specific applications of AI, resulting in targeted governance. This allows for sector-specific governance, such as China’s regulation of recommendation algorithms or its draft rules on generative AI. Vertical regulations allow for more nuanced, sector-specific laws that can target concerns likely to arise in specialised fields like healthcare, insurance or fintech.  

Indian scenario – Horizontal approach

India does not currently follow any one specific approach to AI governance. The first concrete foray into AI governance in India can be traced back to NITI Aayog’s National Strategy on Artificial Intelligence, released in 2018. This was followed by a range of other policies, including the AI for All principles released in 2020 and 2021, and the Department of Telecommunications’ document on the AI stack. All of these documents followed a broad, principles-based approach to AI governance and focused on developing and applying AI ethics across sectors, to all AI applications in India. 

AI for All also discusses the idea of “contextualising AI governance to the prevailing constitutional morality”. This speaks to the broader idea of embedding constitutional principles such as non-discrimination, privacy, and the right to freedom of speech and expression into AI regulation, though the document does not indicate how this would be implemented. The documents also laid down broad principles for responsible AI such as the principle of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. 

The use of this broad, principles-based approach aligns more closely with the horizontal regulatory model: it applies across sectors and does not depend on the specific use cases in which AI is deployed. These principles would therefore govern the application of AI-based systems in insurance, employment and education, as well as their use in smart cities and self-driving cars. 

Shifting to the vertical approach

The AI regulatory landscape in India has changed over the last two years. In March 2023, the Indian Council of Medical Research (ICMR) released the “Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare”. This was the first set of guidelines that applied to the use of AI in the healthcare sector. The guidelines aimed to ensure ethical conduct and provide a set of guiding principles which could be used by experts and ethics committees while reviewing research proposals that involve the use of AI-based technologies. 

(Source: Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare, ICMR)

The guidelines recognize the increasing scope for the use of AI in hospitals, research and health care apps, and lay down comprehensive principles for the intersection of AI with medical research and healthcare. The guidelines set out an extensive framework, laying down protocols on how the current medical ethical guidelines must be adapted and changed to incorporate the use of AI, and how this would be implemented by different stakeholder groups. 

In another sector, the Smart Cities Mission launched the ‘AI Playbook for Cities’ in 2022. Recognising the potential for the use of AI-based applications in urban planning, the playbook was launched as an instrument to aid administrators in adopting and deploying AI solutions. 

(Infographic by Nidhi Singh)

The playbook draws on the principles of responsible AI released by NITI Aayog in 2020, but goes a step further by providing ways to contextualise these principles for Smart Cities and to manage and mitigate the risks brought about by AI technologies. The playbook states that while the ethical principles lay down a broad framework, they must be supplemented with more specific principles which lay down enforceable, targeted responsibilities for different types of stakeholders, such as industry, academia, and citizens.  

Implications for AI governance

This shift in India’s AI strategies from the initial horizontal framework in 2018 to a more vertical approach reflects the recognition of the need for nuanced regulations that can address the unique challenges and opportunities presented by distinct AI applications. This evolution signifies a growing acknowledgment of the importance of adapting governance structures to the diverse and rapidly evolving landscape of AI technologies. Overall, we can see an evolution from a purely horizontal approach to a mixed approach which focuses on more sector-specific applications of the AI principles. 

Over the last few years, there has been a growing recognition of the economic potential of AI. With both States and private entities jumping into the fray, there has been a drastic increase in the number of AI applications being used in India, as well as in the scope of their use. However, there are currently no immediate plans for AI-specific regulation in India akin to the EU AI Act. 

While the upcoming Digital India Act may contain some provisions that regulate AI, there is a clear lack of formal governance structures at the moment. Given the potential impact AI can have on human rights and labour markets, and its broader economic significance, leaving it completely unregulated poses a significant threat to the well-being of individuals and society as a whole. Without proper regulations in place, there is a heightened risk of AI systems being deployed in ways that infringe upon fundamental human rights, such as privacy and freedom from discrimination. Additionally, the unchecked proliferation of AI in labour markets could exacerbate existing inequalities and lead to widespread job displacement without adequate measures to support those affected. Furthermore, any use of AI systems by the State for welfare measures without safeguards could lead to widespread discrimination against vulnerable communities. 

Therefore, it is imperative for India to establish comprehensive regulatory frameworks that address the unique challenges posed by AI, ensuring that its benefits are maximised while its risks are mitigated. 

(The opinions expressed in the blog are personal to the author/writer. The University does not subscribe to the views expressed in the article / blog and does not take any responsibility for the same.)

Dark Patterns – Beyond Consumer Protection Law

by Srija Naskar

Introduction

On 30 November 2023, the Central Consumer Protection Authority (“CCPA”), set up under the Consumer Protection Act, 2019 (“Consumer Protection Act”), notified the Guidelines for Prevention and Regulation of Dark Patterns, 2023 (“Guidelines”). The Guidelines seek to prevent all platforms systematically offering goods or services in India, as well as advertisers and sellers, from engaging in any “dark pattern practices.”

Dark patterns can be understood as user interface designs that benefit an online service provider by tricking, coercing, manipulating, or deceiving users into making unintended and potentially harmful decisions.

Online service providers have become increasingly sophisticated in deceiving users by resorting to a bundle of privacy dark strategies. These include excessive data collection and storage, denying data subjects control over their data, making it hard or even impossible for data subjects to learn how their personal data is collected, stored, and processed, and manipulating consent. Such strategies cause grave privacy harm to users. 

Dark patterns and their consequent harms can be of various other kinds. For example, consumers can sign up for a service such as Amazon Prime through a single click. However, to cancel or unsubscribe from it, consumers are faced with many confusing steps. Sometimes, they are redirected to multiple pages that attempt to persuade them to continue their subscriptions by presenting several offers of discounted pricing. Only after clicking through such pages are consumers able to finally cancel the service. In essence, the consumer here has run into a certain kind of dark pattern known as misdirection. This dark pattern can lead to economic harm if the consumer ends up retaining, and paying for, a subscription they had set out to cancel.
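To make the asymmetry concrete, here is a purely illustrative TypeScript sketch. The screens and labels are invented and do not describe any real service; the point is simply that sign-up takes one step while cancellation is padded with retention screens.

```typescript
// Hypothetical sketch of the asymmetry behind "misdirection": signing up takes
// one step, while cancelling is padded with retention screens.
type Step = { screen: string; primaryAction: string };

const subscribeFlow: Step[] = [
  { screen: "Checkout", primaryAction: "Subscribe with one click" },
];

const cancelFlow: Step[] = [
  { screen: "Account settings", primaryAction: "Manage membership" },
  { screen: "Are you sure?", primaryAction: "Keep my benefits" },   // cancel link de-emphasised
  { screen: "Special offer", primaryAction: "Stay for 50% off" },   // retention interstitial
  { screen: "Confirm cancellation", primaryAction: "Cancel membership" },
];

// A crude friction metric: how many screens stand between intent and outcome.
const friction = (flow: Step[]): number => flow.length;

console.log(friction(subscribeFlow), friction(cancelFlow)); // 1 vs 4
```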

For the purposes of this blog, I will focus on dark patterns which primarily impact the privacy of an individual.

This blog will discuss the impact of privacy dark strategies and the design tricks used by platforms to manipulate users, and argue that dark patterns need to be regulated more holistically. Consumer protection law may offer a promising way forward to target general dark patterns that find their way into everyday online transactions. However, I argue that it must work in tandem with legislation such as the Digital Personal Data Protection Act, 2023 (“DPDP Act”) and the upcoming Digital India Act, 2023 (“DIA”) to target privacy dark patterns effectively. Synergies between different areas of law, such as consumer protection and data protection, can help ensure adequate protection from the various kinds of harms posed by dark patterns. The blog concludes with certain recommendations on the aspects that should feature within these laws. 

Privacy Dark Patterns

  1. Maximisation of Data 

The starting page of the Tripadvisor mobile app, a review platform for travel-related content, asks the user to log in with a personal Google+, Facebook, or email account. Further, there is a fourth option that offers the creation of a Tripadvisor account. Interestingly, there is a “Skip” button as well, which skips the login process entirely but is hidden in the upper right corner of the page. When signing in with Facebook, for example, Tripadvisor seeks access to the user’s friend list, photos, likes, and other information. In essence, the consumer here has run into a privacy dark strategy commonly adopted by online service providers that focuses on maximising data collection and storage, where consumers are coerced into disclosing personal information that is not needed for the functionality of the service. 

  2. Coercive Consent

Online service providers are increasingly attempting to hide privacy dark patterns in the terms and conditions of using the service. Terms and conditions are notoriously long and written in complicated legal jargon, ensuring that they are not user-friendly. The user’s inability to grasp the legal jargon puts them in a vulnerable position, since the policy is legally binding. Research shows that, as a result of this complexity, individuals give consent to such terms and conditions without reading the privacy policies, making it difficult for users to learn what happens to their personal data. For example, the British firm GameStation revealed that it legally owned the souls of thousands of customers, due to an “immortal soul clause” that was secretly added to the online terms and conditions for the official GameStation website as an April Fool’s gag. The clause was added to highlight how few customers read terms and conditions before consenting online. The gag reveals the effectiveness of this dark pattern and shows that companies can hide almost anything in their online terms and conditions.

  3. Cancellation Trickery

Several service providers have unnecessarily complicated the process of deleting accounts, either by not providing an account deletion option at all or by making the user interface deliberately inconvenient. If users are ultimately required to call customer support, the process becomes cumbersome, raising the barriers to deleting the account. Such deliberately inconvenient user experiences can push users into a dark pattern where they are forced to reconsider the deletion decision itself.

Privacy dark patterns work well primarily because of 1) the advantage they take of the psychological tendencies of human beings; and 2) the design tricks adopted by online platforms. Studies have shown that when humans have little motivation or opportunity to think and reason, because they lack the required knowledge, ability or time, they fail to read terms and conditions carefully. Consequently, users agree to them quickly without weighing the pros and cons. This is supplemented by design tricks which rely on minimum transparency and maximum complexity. Studies on the power of design have long recognised that the design of built environments constrains human behaviour, and the same is true online. In simple terms, users can only click on the buttons or select the options presented to them, and can only opt out of the options from which a website allows them to opt out. Essentially, these hidden design choices give people the illusion of free choice. 
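As a purely illustrative sketch (the option labels and data fields are hypothetical), the following TypeScript snippet shows how an interface can bound “choice” by simply never rendering a fully privacy-preserving option:

```typescript
// Hypothetical sketch of how interface design bounds "choice": the user can only
// click the options the platform chooses to render, so some tracking is enabled
// whichever button is pressed.
interface ConsentOption {
  label: string;
  analytics: boolean;
  advertising: boolean;
}

// Note what is missing: a "Reject all" option is never rendered.
const renderedOptions: ConsentOption[] = [
  { label: "Accept all", analytics: true, advertising: true },
  { label: "Accept recommended settings", analytics: true, advertising: false },
];

function choose(label: string): ConsentOption {
  // Whatever the user picks, analytics tracking is switched on, because a fully
  // privacy-preserving option was never presented in the first place.
  return renderedOptions.find((o) => o.label === label) ?? renderedOptions[0];
}

console.log(choose("Accept recommended settings")); // analytics is still enabled
```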

Dark patterns should be regulated beyond a consumer protection perspective

While the prescription of specified dark patterns in the Guidelines is helpful in providing guidance, the scope of the Guidelines remains limited to deceptive and unfair practices and excludes manipulation. The Guidelines are also straitjacketed and lack a graded approach towards the varied effects that dark patterns can have on an individual. Therefore, privacy dark patterns would be better regulated in tandem with the DPDP Act, which already contains provisions on data protection, data retention, consent, and erasure of data. For example, although dark patterns in the United States (“USA”) have been heavily regulated by the Federal Trade Commission (“FTC”), the California Privacy Rights Act, the first legislation in the USA to explicitly regulate dark patterns, has now come into place. The Act aims to take a leadership role in regulating dark patterns generally and privacy dark patterns specifically. 

Recommendations

1. The DPDP Act requires data fiduciaries to provide notice to data principals at the time of requesting consent. The notice must inform the data principal about the personal data and the purpose for which the data will be processed. While this mechanism might ensure that user consent is informed, free, and capable of being withdrawn, it does not specifically tackle dark patterns. The Act still leaves scope for data fiduciaries to adopt numerous design tricks in the notice mechanism. Essentially, platforms could still meet the notice requirement by simply informing the user, no matter how convoluted and obscure the design may be. As a result, an individual may give consent without completely understanding the policies. There should therefore be a separate set of rules under the DPDP Act specifically dedicated to tackling emergent dark patterns. Section 40 of the DPDP Act is a residuary clause which gives the government the power to make rules consistent with the provisions of the Act. This is a notable provision which the government can use to make rules on dark patterns.

2. The primary objective of the rules should be to encourage online platforms to establish ethical and responsible design practices. These rules could act as indicative guidance to platforms on how they should design their user interfaces. This would include giving complete and correct information, such as disclosing in-app purchases for a product/service (e.g., a consumer downloads a mobile application for playing Candy Crush, which was advertised as a free game; after 7 days, the app asks for a payment to continue playing, and the fact that the free version of the game is available only for a limited time was not disclosed to the consumer at the time of downloading the application). The guidance could also insist on using clear menus, fonts, icons and click sequences for easier understanding of the product/service, and on making sure that default settings are favourable to consumers (e.g., a consumer orders an airline ticket, and in the booking process a box saying ‘Yes, I would like to add travel insurance’ has been pre-ticked by default). Such default selection without user involvement should not be allowed; the consumer must consciously agree to an extra product such as travel insurance. Such rules could also blacklist certain practices and impose dissuasive sanctions. A minimal sketch of how this defaults rule could work in an interface appears after these recommendations.

3. The upcoming rules under the DPDP Act must keep in mind the increasing impact of dark patterns on privacy. Consent alone is not enough to protect data; there is a need for greater accountability. This could potentially be achieved by using the fiduciary approach, which holds information fiduciaries to reasonable and ethical standards of behaviour based on the expectations of users. It would require technology companies to take reasonable steps to secure our data (duties of care); to collect only as much data as is necessary to achieve a particular purpose and limit the use of collected data to the specific purposes to which users consent (duties of confidentiality); and to refrain from profiting by harming users (duties of loyalty). Thus, against the background of how disclosure can be manipulated by cognitive biases and coercive design, this approach, based on the connection between trust and sharing, would hold online platforms to a higher standard of loyalty, confidentiality and care. 
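Returning to the consumer-friendly defaults rule in recommendation 2, the following TypeScript sketch (names and prices are invented for illustration) contrasts a pre-ticked add-on with an opt-in default; under the proposed rule, only the latter would be permissible:

```typescript
// Illustrative sketch of the "consumer-friendly defaults" rule: optional add-ons
// such as travel insurance must start unselected and require an affirmative choice.
interface AddOn {
  name: string;
  price: number;
  selected: boolean; // must default to false under the proposed rule
}

const preTicked: AddOn = { name: "Travel insurance", price: 499, selected: true };  // dark pattern: pre-selected
const optIn: AddOn = { name: "Travel insurance", price: 499, selected: false };     // compliant: user must opt in

function checkoutTotal(baseFare: number, addOns: AddOn[]): number {
  // Only add-ons the consumer has consciously selected are billed.
  return baseFare + addOns.filter((a) => a.selected).reduce((sum, a) => sum + a.price, 0);
}

console.log(checkoutTotal(5000, [preTicked])); // 5499 without any user action
console.log(checkoutTotal(5000, [optIn]));     // 5000 unless the user opts in
```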

(The opinions expressed in the Blog are personal to the author/writer. The University does not subscribe to the views expressed in the article / blog and does not take any responsibility for the same.)

How the World’s First Major Safe Harbour Law Came to Be

By Sachin Dhawan

Why did the U.S. Congress enact Section 230 of the Communications Decency Act, the landmark law that immunizes intermediaries from liability for most of the third-party content they host? An explanation often cited in popular and scholarly discourse is that in 1996, Congress wanted to encourage a nascent Silicon Valley to grow and expand to its full potential without being burdened by onerous legal regulations. 

But this is only a partial explanation. Congress also wanted to counter the negative effects of two key court decisions issued in the early days of the internet. Cubby v CompuServe [1991] and Stratton Oakmont v Prodigy [1995] created perverse incentives for online intermediaries by encouraging them to ignore harmful and illegal content on their sites instead of removing it. 

Section 230 undid the damage of such perverse incentives by encouraging intermediaries to remove harmful and illegal content on their sites. It may come as a surprise that the framers of Section 230 intended to encourage intermediaries to be active. Today, many commentators contend that intermediaries abuse the protections of Section 230 when they actively exercise control over content. But as will be seen below, this is not so.

1] CompuServe, Prodigy and the Problem of Perverse Incentives

CompuServe and Prodigy – pioneer intermediaries in the online world just coming to life in the early 1990s – hosted a wealth of content. Indeed it is fair to say that “never before had so much information been available so readily” to people. The nature of the content they hosted also set them apart. Subscribers of these intermediaries could go online not only to read newsletters and articles but also to interact with one another in discussion forums and chat rooms.

While these new avenues of communication opened up fresh opportunities for public debate and discourse, they also became a magnet for miscreants to post derogatory and abusive content. Consequently it is not surprising that both CompuServe and Prodigy eventually found themselves at the receiving end of defamation lawsuits.

CompuServe was the first to face the music when a subscriber sued the site for hosting allegedly defamatory statements about him. In court, CompuServe claimed that while it did host the content in question, it was merely a distributor of such content. Consequently, it argued, it should be subject to a lower legal standard under which it would be liable only if it had acquired knowledge of the defamatory content. And since it lacked such knowledge, it could not be held liable. 

But in order to qualify as a distributor, CompuServe would have to show that it lacked editorial control over the content it hosted. It would in other words have to show that it wasn’t a publisher of content. 

This is where the perverse incentive to be a “passive receptacle” of content kicked in for CompuServe. Had it undertaken efforts to moderate content on its site, it would not have qualified as a distributor – on the contrary, it would have been seen as a publisher exercising editorial control and it would have probably been held liable for defamation. Even if it had undertaken such efforts in the interest of keeping harmful content away from children, it would have likely been found liable as a publisher. 

Fortunately for CompuServe it managed to convince the court that it did not and could not make such efforts to control content. So it qualified as a distributor and won the case, given that [the court found] it lacked knowledge of the allegedly defamatory content. 

Prodigy was not so lucky. The Prodigy case also revolved around allegedly defamatory content. Like CompuServe, Prodigy also sought to defend itself in court by arguing that it was simply a passive distributor of content. It did not exercise editorial control over the content it hosted.

Unlike CompuServe, Prodigy failed to convince the court that it had made no efforts to moderate content on its site. In fact, after finding that Prodigy had devoted considerable resources towards moderating content to promote a family friendly environment, the court classified Prodigy as a publisher, subject to strict liability for any defamatory content it hosted.

Counter-productively, therefore, US courts at this time penalized the intermediary [Prodigy] which made efforts to root out harmful and obscene content and rewarded the intermediary [CompuServe] which avoided making such efforts. The rulings would have the effect of discouraging intermediaries from cleaning up their sites in the future even if they could and even if they wanted to, for fear of being classified as publishers.

2] Congress Comes to the Rescue 

Thus, two important court decisions issued in the early days of the internet incentivized intermediaries to passively ignore harmful content. The Prodigy ruling in particular generated considerable media coverage and criticism. Concerned members of Congress, dismayed by such a turn of events, set to work to undo the effect of these decisions.

Specifically an effort spearheaded by Congressmen Ron Wyden and Christopher Cox set the ball rolling. In response to Prodigy, they drafted a law called the ‘Internet Freedom and Family Empowerment Act.’ Over the course of several months of debate and deliberation in congressional committees, this draft law evolved to become Section 230 of the Communications Decency Act. Crucially, it contained the following provision: 

“No provider…of an interactive computer service [synonymous with intermediary] shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider…considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

By immunizing intermediaries from liability for moderating third party content, Section 230 enabled them to be non-neutral and non-passive. Now intermediaries would have the freedom and flexibility to deal with the avalanche of abusive content sloshing about online rather than being encouraged to turn a blind eye to such content. They would no longer have to fear being classified as publishers subject to strict liability for exercising editorial control over content. 

Wyden, Cox and their congressional allies realized that intermediaries could help to counter the undesirable and profane content that the internet attracted. Moreover, they could potentially do so more effectively than offline intermediaries, given the technology at their disposal to exercise granular control over content. But unless they received the benefit of a carefully crafted safe harbor provision that removed the deficiency in the case law penalizing them for taking such socially beneficial actions, they would not do so. 

Conclusion 

Size was a factor compelling the U.S. Congress to enact Section 230, as the tech industry was small and taking baby steps towards growth in the mid 1990s. But while Congress wanted the industry to grow, it also wanted to undo the impact of Cubby v CompuServe and Stratton Oakmont v Prodigy. These rulings encouraged intermediaries to ignore harmful content on their sites, which served the interests of neither intermediaries nor their users. 

Thus, Congress enacted Section 230 to empower intermediaries to actively manage and moderate content. Understanding this rationale animating Section 230 matters at a time when many public officials and commentators are averring that its protections only extend to passive intermediaries. 

The Right to be Forgotten in India: An Evolution

by Ira Srivastava

(Ira is a 4th year law student at NLU Delhi)

The Right to be Forgotten (“RTBF”) is the right of a data principal (“DP”) to have their personal data removed or erased in certain circumstances. These typically correspond to situations where consent for data collection is withdrawn, where the collected data has served its purpose, or where the DP requests that it be taken down for other reasons. Because erasure is the ultimate remedy for exercising the Right to be Forgotten, the right is also known as the “right to erasure”. The Right is codified under Articles 17 and 19 of the European Union’s (“EU”) General Data Protection Regulation (“GDPR”). India’s Digital Personal Data Protection Act, 2023 lays down the “Right to correction and erasure of personal data” under Section 12, thus codifying the Right to be Forgotten in India.

This two-part article traces the evolution of the Right to be Forgotten in India. Part one focuses on the legislative developments that have led to the Right in its current form. It goes on to make suggestions on how gaps with respect to the Right can be filled and what steps can facilitate smooth implementation.

390 BCE.

One windy afternoon, in the Acropolis of Athens, two men are in conversation:

Kostas: Forget me Socrates, for I have sinned.

Socrates: What is your sin, my child?

Kostas: I have led a corrupt life in my past life and wish to be forgotten.

Socrates: Let us now join our hands and pray to the Pantheon of Gods.

Prayers begin.

There is an Explosion. Lethe, a river of the Underworld, appears as a spring on the ground near the two men and erases the memory of Kostas’ past life, not only from his own mind but also from the memories of all those who knew him.

Fast forward to 2016, when the General Data Protection Regulation (“GDPR”) was passed. It formally introduced the right to erasure, more popularly known as the “Right to be Forgotten” under its Article 17. The formalisation of the Right to be Forgotten only took place through its codification under the GDPR, although other events complemented its growth.

The Justice BN Srikrishna Committee on “A Free and Fair Digital Economy” was constituted in July 2017 and submitted its Report in 2018, after the Puttaswamy Judgment. The Report recommended that the right to be forgotten may be adopted based on five-point criteria, including:

  • Sensitivity of data
  • Scale of disclosure or degree of accessibility
  • Role of DP in public life
  • Relevance of data to public
  • Nature of disclosure and activities of data fiduciary

There existed a gap in the understanding of the RTBF, stemming from the conflicting positions that, on the one hand, the RTBF forms an essential part of privacy while, on the other, it had no statutory backing. This called for some form of standardisation, which was provided by the Personal Data Protection Bill, 2019 (“PDP Bill”). Clause 20 of the PDP Bill envisaged a “Right to be Forgotten”. It empowered the DP to restrict or prevent the continuing disclosure of personal data in certain circumstances, including where the purpose for collection had been served, where consent had been withdrawn, or where the disclosure was not in accordance with the Act. The biggest hurdle that arose was with respect to enforcement. Clause 20(2) provided for enforcement only by an order of the Adjudicating Officer after following a grievance redressal mechanism, with no specified timeline. Some guidelines were also listed for the Adjudicating Officer to bear in mind while passing such an order.

Some of the key concerns flagged by stakeholders included that the nature and scope of the Right must be specified, that enforcement measures must be laid down, and that a timeline should be prescribed for the Privacy Officer to decide on an application.

The PDP Bill was then referred to a Joint Parliamentary Committee. The Committee, in its deliberations, took note of Article 17 of the GDPR. It noted that governing only disclosure narrowed the scope of Clause 20, which should also cover data processing, and accordingly recommended changes to Clause 20 to include “processing” within its scope. This drew much critique from stakeholders, who claimed that their key concerns had not been addressed. 

The Draft Digital Personal Data Protection Bill, 2022 contained a much watered-down version of this Right in Clause 13. It provided that the DP will have the right to correction and erasure of personal data and enumerated the rights available to the DP including correction of inaccuracies, completion, updating, and erasure of personal data no longer serving the purpose of processing.

The Digital Personal Data Protection Bill, 2023, which was passed by both Houses of Parliament, contains the “Right to correction and erasure of personal data” under Section 12. It, too, lists the rights available to a DP. Additionally, it puts an obligation upon a data fiduciary (“DF”) to comply with requests for correction, completion or updating upon receipt of a request from the DP, unless necessary for legal compliance. The assumption here seems to be that the DF will comply. However, it must be noted that there is a vast difference in bargaining power, making fiduciaries extremely powerful and effectively leaving compliance up to their discretion.

It is acknowledged that what works for Europe will not necessarily work in India due to social, cultural, economic and other differences. However, borrowing from best practices will help in making India a competitive global market. Some of the major reasons for the effective implementation of the GDPR throughout the European Union include strict enforcement measures, hefty fines and an efficient dispute resolution mechanism. One such example is the €50 million fine imposed on Google by the French data protection authority CNIL, for forcing consent by offering only one option: consent in full to non-specific, poorly explained uses of one’s data, or do not proceed at all.

At present, the Digital Personal Data Protection Bill, 2023 has been passed by both Houses of Parliament and has received the President’s assent, becoming the Digital Personal Data Protection Act, 2023 (“DPDP Act”). It awaits notification to come into effect. This intervening period must be leveraged to bridge gaps and address the concerns raised by stakeholders. One way to do this is by ensuring that the Rules governing the modalities of the Act are comprehensive. That will also ensure smooth implementation, which is key to the larger objectives that the Act seeks to achieve in order to make India a competitive global market.

Particularly in the context of the RTBF, the Rules can be of use in two respects:

  1. Specificity

The current version of the RTBF is too vague. The 5-point criteria in the Srikrishna Committee Report must be adopted as a framework for assessing the need for a particular data set to be erased or modified. At the very least, the circumstances listed under the 2019 Bill for when the RTBF could be exercised must be used as guidelines. Some of these circumstances included when the purpose for collection was served or when consent to collect the data was withdrawn or was not in accordance with the Act.

  2. Ensuring DFs’ proactive actions

The DPDP Act puts much of the compliance burden on DFs. This is a potential pitfall, as discussed above. One way to avoid its ill-effects is to prescribe:

  a. A timeline within which the RTBF request must necessarily be processed.

This will provide more certainty to the DPs as well. Responding within the timeline should be made compulsory for DFs.

  b. Hefty fines and penalties for wrongful non-compliance with the request.

A step that can realistically be borrowed from the GDPR is having hefty fines and penalties in place. That will also help bridge the gap in bargaining power between large corporations and individuals. A minimal sketch of how such a timeline-plus-penalty rule could be operationalised is given below.
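The sketch below is purely illustrative: the 30-day deadline, field names and overdue logic are assumptions for demonstration, not values prescribed by the DPDP Act or any notified Rules.

```typescript
// Illustrative sketch of a data fiduciary tracking RTBF (erasure) requests
// against a mandated timeline; unresolved requests past the deadline would
// attract the proposed penalties.
interface ErasureRequest {
  id: string;
  receivedOn: Date;
  resolvedOn?: Date;
}

const DEADLINE_DAYS = 30; // assumed value; the Rules would prescribe the actual timeline

function isOverdue(req: ErasureRequest, today: Date): boolean {
  const deadline = new Date(req.receivedOn);
  deadline.setDate(deadline.getDate() + DEADLINE_DAYS);
  return !req.resolvedOn && today > deadline;
}

const requests: ErasureRequest[] = [
  { id: "REQ-1", receivedOn: new Date("2024-01-02") },
  { id: "REQ-2", receivedOn: new Date("2024-02-20"), resolvedOn: new Date("2024-02-25") },
];

console.log(requests.filter((r) => isOverdue(r, new Date("2024-03-01")))); // only REQ-1 is overdue
```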

It has been a long journey from a judgment upholding the Right to Privacy to legislation putting it into force. The passage of the Bill in both Houses shows legislative intent, and the President’s assent marks a start in the right direction. However, its effectiveness will depend on implementation mechanisms yet to be put into place.

As a country with a population of 1.42 billion, of which at least 1.2 billion are mobile phone users, India carries a great responsibility to ensure the data privacy of its citizens, particularly of personal data. The passing of the DPDP Bill is a welcome first step, but there is a long way to go. How the Right to be Forgotten clause and other clauses will be implemented remains to be seen. Putting an individual’s right to data privacy at the core of policy decisions will be fundamental to effectively securing the Right to be Forgotten.

Digiyatra & the Defect in the Idea of ‘Consent’ 

By Sukriti

Introduction

Digiyatra is a facial recognition technology (“FRT”) based system which aims to offer a quick and hassle-free experience for travellers at airports. It intends to ensure paperless and contactless entry through all airport checkpoints by verifying identity through a facial scan. Passengers can register on the Digiyatra application with their Aadhaar number and travel details to facilitate document-free travel through the use of FRT. It was first launched in 2022 and is currently in operation at thirteen airports in the country: Delhi, Bengaluru, Varanasi, Hyderabad, Kolkata, Vijayawada, Pune, Mumbai, Cochin, Ahmedabad, Lucknow, Jaipur, and Guwahati. The Ministry of Civil Aviation (“MoCA”) released the first Digiyatra policy in 2018 and an updated policy in 2021.

Currently, Digiyatra is implemented by a joint venture called the Digiyatra Foundation (“DYF”), which consists of the Airports Authority of India, holding a 26% stake, and Bengaluru Airport, Delhi Airport, Hyderabad Airport, Mumbai Airport and Cochin International Airport, which together hold the remaining 74% stake.

Although the use of Digiyatra is supposed to be voluntary, recent reports indicated that passengers were being coerced into signing up for Digiyatra by airport personnel, despite their protests. After receiving several complaints, the MoCA clarified that the service remains voluntary and that airport personnel have been instructed to obtain the consent of passengers for using Digiyatra. 

This blog will explore and question the idea of ‘consent’ sought for availing such services. It argues that Digiyatra is based on a defective model of consent and results in compromised autonomy. In employing FRT, Digiyatra exposes the limits of ‘consent’ for data protection and privacy. This blog will analyse how, despite masquerading as a voluntary service, Digiyatra is exploitative of consent. 

Citizens’ perception of the State and diminished choice

DYF has been claimed to be a private entity and therefore outside the scope of the Right to Information Act, 2005 (“RTI”). However, this assertion can be challenged. Although Digiyatra’s shareholding makes it a private entity, whether or not an entity falls within the purview of the RTI Act does not necessarily depend on this. The Supreme Court has held that an entity can fall within the purview of the RTI Act if there is substantial financial support by the government, which should be determined on a case-to-case basis. Thus, the extent of the government’s shareholding is not a necessary indicator of its financial influence. Further, information pertaining to a private entity may also fall within the scope of the RTI Act if the government has access to such information (Rajeshwar Majoor Kamgari Sahakari Sanstha Ltd. vs. State Information Commissioner, 2011 SCC OnLine Bom 707). Hence, Digiyatra could fall within the scope of the RTI Act.

The determination of DYF as ‘State’ or a State instrumentality under Article 12 of the Constitution is also a fact-based assessment that depends on an array of factors, and its private nature alone may not exclude DYF from the scope of Article 12.

Regardless of its status as a ‘State’ instrumentality, given that Digiyatra is an initiative of the MoCA, public perception of the initiative can distort individual consent. The context in which people make a choice may subject it to distortion. The perception of a state-sponsored initiative taps into citizens’ trust in the State to offer welfare services of public utility. This can impact individual discretion in disclosing personal data, given that there is little awareness or understanding of the potential implications of disclosing personal data to FRT-based technologies. Additionally, airports put in place logistical barriers that drive more people to opt for it. For instance, almost all terminal gates at the Indira Gandhi International Airport in New Delhi and the Kempegowda International Airport in Bengaluru employ Digiyatra, which makes the non-Digiyatra option cumbersome and inconvenient, thus narrowing and manipulating the available choices. To illustrate, while travelling from the Bengaluru airport, when the author asked for the non-Digiyatra option, only one entry gate, the very last one, was made accessible; even then, non-Digiyatra entry at that gate was allowed only by opening a separate spot. 

Considering this behaviour of citizens, the requirement of consent for Digiyatra cannot be regarded as meaningful or free. 

Limits of consent and implications for data protection

Use of FRT has been argued to have various implications for data protection and surveillance. Even as the MoCA reassured that the data collected for Digiyatra is purged within 24 hours, the Digiyatra Policy of 2021 creates exemptions to the same, while allowing access to “any Security Agency, GOI [Government of India] or other Govt. Agency… to the passenger data based on the current/ existing protocols prevalent at that time”. Apart from the dearth of public awareness on the issue that influences willingness to give consent, as argued above, FRT also arguably has a “fatal consent problem”. Selinger and Hartzog have argued that consent is a “broken regulatory mechanism” for facial surveillance, within which they include systems such as Digiyatra. They argue that the logic of consent for facial recognition is dodgy because an individual is never fully aware of the threats that facial recognition carries for their autonomy. 

They further argue that FRT compromises “obscurity”, the idea that refers to the “ease or difficulty of finding information and correctly interpreting it”. Selinger and Hartzog explain, “the harder it is to locate information or reliably understand what it means in context, the safer, practically speaking, the information is.” Obscurity is important because it furthers individual autonomy: privacy is presupposed in society to mean that information is disclosed to some audiences but not to everyone, unlike anonymity, which means “nobody knows who you are”. Obscurity furthers this through “structural constraints”, that is, technological limitations that make the access and identification of individual movements and behaviours difficult and expensive. As per Selinger and Hartzog, in order for people to give valid consent to FRT, they should have an awareness of how they presume obscurity to protect privacy and of the implications of FRT use for obscurity. 

However, Selinger and Hartzog ask, “what good is recognising the value of obscurity if it is unobtainable?” They suggest that since obscurity is inevitably lost, the privacy regulatory regime should provide for “meaningful obscurity protections.” 

In the case of Digiyatra, because of the use of FRT, the above concerns remain true, rendering it “inconsentable”. These concerns are aggravated by the absence of any statute in India regulating the use of FRT and by the legal vacuum within which Digiyatra operates, resting as it does on a mere standalone policy document. Moreover, the exemption of Digiyatra from the RTI Act leads to a severe lack of transparency. Even if it were not considered exempt under the RTI Act, the government could exempt it, under the Digital Personal Data Protection Act, 2023, from disclosing any information under the RTI Act by qualifying Digiyatra as a State instrumentality. 

Despite the 24-hour data purge policy, the personal data of individuals may still remain at risk, as the Digiyatra Policy of 2021 plans to allow users to avail of ‘Digiyatra value-added services’, that is, third-party services from “Digiyatra ecosystem stakeholders/partners”. These could include cab, hotel or lounge services. The sample consent notice for these services requires consent to the use of the passenger’s phone number, email ID, ticket/boarding pass data and face biometric data. This could lead to a loss of autonomy over data, resulting in risks of data aggregation, data monetisation, and profiling.
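The following is a purely hypothetical sketch, not a description of Digiyatra’s actual system, of how a headline “purge within 24 hours” rule can be hollowed out once exemptions and value-added-service consents are layered on top of it:

```typescript
// Hypothetical retention logic: data is purged after 24 hours only if no
// carve-out applies. All field names and values are invented for illustration.
interface PassengerRecord {
  id: string;
  capturedAt: Date;
  sharedWithSecurityAgency: boolean;  // exemption contemplated by the 2021 policy
  valueAddedServicesConsent: boolean; // cab, hotel or lounge partners
}

const PURGE_AFTER_HOURS = 24;

function shouldPurge(record: PassengerRecord, now: Date): boolean {
  const ageInHours = (now.getTime() - record.capturedAt.getTime()) / 36e5;
  // The headline rule: purge after 24 hours...
  // ...unless any carve-out applies, in which case the data persists.
  return (
    ageInHours > PURGE_AFTER_HOURS &&
    !record.sharedWithSecurityAgency &&
    !record.valueAddedServicesConsent
  );
}

const record: PassengerRecord = {
  id: "PAX-1",
  capturedAt: new Date("2024-03-01T08:00:00"),
  sharedWithSecurityAgency: true,
  valueAddedServicesConsent: false,
};
console.log(shouldPurge(record, new Date("2024-03-05T08:00:00"))); // false: the exemption keeps the data alive
```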

The impact of citizens’ perception of the State on their consent, together with the limitations inherent in the idea of consent itself with regard to FRT, compounds the consent problem for Digiyatra. It is important that Digiyatra remains a strictly voluntary service. For Digiyatra to be truly voluntary, the current model of implementation needs an overhaul. 

Navigating the Indian Data Protection Law: Examining user rights in the context of voluntary disclosure of personal data

By Ananya Moncourt

Editor’s note: This blog is a part of our ongoing Data Protection Blog Series, titled Navigating the Indian Data Protection Law. This series will be updated regularly, and will explore the practical implications and shortcomings of the Digital Personal Data Protection Act, 2023 (“DPDP Act”), and where appropriate, suggest suitable safeguards that can be implemented to further protect the rights of the data principals.

For a detailed analysis of the Indian data protection legislation, the comprehensive comments provided by the Centre for Communication Governance on the 2022 DPDP Bill and the 2018 Personal Data Protection Bill can be accessed here. For a detailed comparison between the provisions of the DPDP Act and the 2022 Bill, our comparative tracker can be accessed here. Moreover, we have also provided an in-depth analysis of individuals’ rights under the DPDP Act in the Data Protection 101 episode of our CCG Tech Podcast.

India’s Digital Personal Data Protection Act, 2023 (“DPDPA”) uses the concept of “specified purpose” as a legal basis for the collection of users’ personal data. One of the key principles underpinning data protection laws across the world is purpose specification. The principle requires that users or data principals are informed why their personal data is being collected, for what purpose(s) it will be processed, and how long it will be retained by the collecting entity, among other things. The DPDPA incorporates certain elements of this principle via the concept of “specified purpose”, which is defined narrowly as the specific information provided to a data principal by a data fiduciary in the form of a notice. However, there are certain inconsistencies in the usage of the term “specified purpose” within the DPDPA that could have negative implications for the enforceability of individual rights. This blog will highlight and explain a key contradiction in the DPDPA regarding the understanding and application of the concept of “specified purpose”. 

Legitimate Grounds for Processing Personal Data 

Section 4 of the DPDPA provides the legal grounds for the processing of personal data in India. There are two clear conditions, and the law requires either one of them to be fulfilled for a data fiduciary to legally collect and process a user’s personal data. First, the law states that personal data can be processed when the data principal has given their consent. Second, personal data can be processed for any of the “certain legitimate uses” that the law articulates in Section 7. While the former clearly mandates user consent for personal data processing, Section 7 of the DPDPA carves out certain circumstances in which data fiduciaries may be exempted from the requirements of the law’s consent mechanism (i.e., the provision of notice to users). Some of the purposes for processing within Section 7 include those relating to public health, threats to life, and disaster management. However, the exemption from consent requirements also extends to other purposes such as employment, welfare provision, and the collection of personal data by the government. These, together with other exemptions under Section 17 of the law, have raised concerns in the context of the fundamental right to privacy of Indian citizens. Over the next few months, delegated legislation is expected to lay out safeguards for several provisions in the DPDPA. It is important that such supporting legislation ensures the protection of citizens’ personal data and their rights, in this context, of access to information and withdrawal of consent.

Voluntary Sharing of Personal Data as a Legitimate Use 

Section 7(a) of the DPDPA in particular relates to voluntary provision of personal data, where a data principal affirmatively shares their personal information for a “specified purpose”. The law further qualifies voluntary provision such that a data principal “does not, in any manner, indicate that they do not consent” to the use of their personal data for the specified purpose. Since each sub-section under Section 7 relates to a “legitimate use”, for which the data principal’s consent is not required,  Section 7(a) suggests that processing of personal data that is voluntarily disclosed by an individual is also a “legitimate use” for which their consent is effectively not required. The illustrations set out under Section 7(a) also reinforce this understanding by emphasising actions taken by a data principal to “electronically message”, “voluntarily provide(s)” or “share” their personal information with a data fiduciary for a certain purpose. However, the definition of specified purpose as per Section 2(z)(a) “means the purpose mentioned in the notice given by the Data Fiduciary to the Data Principal”. The use of the term “specified purpose” in Section 7(a) can be read in accordance with this definition to necessitate the provision of a notice containing information about the specific purpose for which their data is being collected. 

While the aim of “certain legitimate uses” in the DPDPA is to carve out exceptions to the obligations of notice and consent, there is a lack of clarity on the legal basis for processing personal data that is voluntarily disclosed. The use of the term “specified purpose” in Section 7(a) thus creates ambiguities for cases in which the provision of notice by a data fiduciary is necessary. Further, there are currently no legal obligations that prevent data fiduciaries from using personal data that is voluntarily disclosed for other purposes. 
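To illustrate the purpose-limitation logic that a notice-backed “specified purpose” implies, here is a minimal TypeScript sketch; the record structure and purpose strings are hypothetical and not drawn from the text of the DPDPA. Personal data is tagged with the purpose stated in the notice, and processing for any other purpose is refused.

```typescript
// Minimal purpose-limitation sketch: data carries the purpose stated in the
// notice, and any processing request for a different purpose is rejected.
interface PersonalData {
  principalId: string;
  fields: Record<string, string>;
  specifiedPurpose: string; // the purpose mentioned in the notice
}

function process(data: PersonalData, requestedPurpose: string): void {
  if (requestedPurpose !== data.specifiedPurpose) {
    throw new Error(
      `Processing for "${requestedPurpose}" falls outside the specified purpose "${data.specifiedPurpose}"`
    );
  }
  // ...proceed with processing for the specified purpose only
}

const record: PersonalData = {
  principalId: "X",
  fields: { phone: "98xxxxxx10" },
  specifiedPurpose: "send payment receipt",
};

process(record, "send payment receipt"); // allowed
try {
  process(record, "marketing"); // rejected: not the purpose stated in the notice
} catch (e) {
  console.log((e as Error).message);
}
```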

Implications of Removal of Notice & Consent Requirements for Voluntary Disclosure of Personal Data 

The lack of clarity from a data principal’s perspective about whether their express consent is required or not in certain circumstances also impacts their ability to meaningfully invoke their rights under Section 11 (right to access information about personal data), Section 12 (right to correction and erasure of personal data) and Section 13 (right to grievance redressal). While Sections 11 and 12 both explicitly preserve these rights for cases in which consent is inferred in accordance with Section 7(a), the clause itself contains no clear avenues for individuals to practically negotiate or invoke their rights with the data fiduciary. As such, the rights prescribed under Sections 11, 12 and 13 of the DPDPA will remain theoretical, without any practical applicability, since individuals are not informed or aware of instances in which their consent has been inferred and the stated purposes for which their personal data could be used.

A significant trend in user behaviour is the lack of awareness or control over how much personal data users voluntarily share online. The provision of a notice not only serves to inform users about the movement of their personal data online but also enables data fiduciaries and data principals to have a mutual understanding of what the “specified purpose” for processing personal data includes. In the 2022 draft of the DPDPA, Section 8(9)(c) included consideration of whether the legitimate interests of the data fiduciary in processing for a ‘fair and reasonable’ purpose outweigh any adverse effect on the rights of the data principal. The 2022 draft also included consideration of the “reasonable expectations” of the data principal with respect to the context of the processing of their personal data. The absence of this concept of “reasonable expectations” in the DPDPA weakens safeguards for the rights of data principals by exempting data fiduciaries from all legal obligations for the use and processing of voluntarily shared personal data.

Even in the absence of consent, the obligation of a data fiduciary to provide notice is a significant safeguard that lets users know when their personal data is being processed, who is collecting it and for what purpose. The current framing of Section 7(a) will make it difficult for users to know or be aware of instances in which their consent has been deemed. 

The DPDPA provides the following illustration alongside Section 7(a) to aid understanding of the provision’s scope: 

“X, an individual, makes a purchase at Y, a pharmacy. She voluntarily provides Y her personal data and requests Y to acknowledge receipt of the payment made for the purchase by sending a message to her mobile phone. Y may process the personal data of X for the purpose of sending the receipt.”

The voluntary provision of personal information by users is not always as intentional and specific as illustrated in the DPDPA. The law assumes not only that X is fully aware of all potential consequences of sharing their personal data with Y, but also that X appreciates the data protection implications of an everyday transaction. However, the digitisation of goods and services has made users predisposed to sharing their personal information without active or conscious consideration of the exact purpose of its use. The likelihood that X is informed or aware of the fact that Y can only process their personal data for a particular purpose is limited in the absence of notice and consent requirements. 

Further, since there is no mutually agreed upon understanding between X and Y regarding the specific purpose, Y is not obligated to comply with any best practices that will ensure X’s data privacy and prevent its misuse. There are also no legal obligations that prevent Y from sharing X’s personal data (with pharmaceutical manufacturers or government agencies for instance) or retaining X’s personal data to improve Y’s service provision etc. The DPDPA leaves scope for misuse of personal data that is voluntarily disclosed because of the vacuum of safeguards for the processing of X’s personal data by Y in this context. 

Consider a situation in which X has a rare health condition and goes to a pharmacy to purchase medication. X shares their prescription with Y and asks them to deliver the medication to their home. Y now has access to X’s personal profile, including their name, phone number, personal details contained in the prescription (such as age) and home address. After Y delivers the medication to X’s home, they will continue to have access to X’s entire personal profile. There will be no way for X to know if Y has subsequently used their personal information for any other purpose. Further, the absence of any sub-classification of personal data under the DPDPA renders cases in which users voluntarily disclose sensitive personal data particularly vulnerable to harm, misuse and cybercrime. 

(Infographic by Ananya Moncourt: Examining user rights in the context of voluntary disclosure of personal data under India’s Digital Personal Data Protection Act)

Conclusion & Recommendations

In today’s digital ecosystem, we know that users share their personal data online such as names, contact numbers and addresses without hesitation despite privacy concerns. Given these existing trends in user behaviour, what academics refer to as the “privacy paradox”, our legal frameworks need to be designed to ensure protection of user privacy online. Since the DPDPA will significantly narrow the scope of cases in which users are allowed to give their informed and express consent, it can be argued that the exemption of consent and notice requirements for voluntary disclosure of personal data is a means to alleviate consent fatigue. Yet, in India, users do not understand the value of their online consent or privacy, and are often willing to trade them for convenience. Our data protection laws need to be cognisant of the realities of user behaviour and tendencies, and their awareness levels regarding personal data processing in the digital ecosystem. 

The lack of guardrails for the use of voluntarily disclosed information by both the government and data fiduciaries is concerning and requires explicit limitations. Section 7(a) of the DPDPA also raises questions around the balance of individuals’ rights against the interests of data fiduciaries. It is unlikely that Section 7(a) will pass all four thresholds of legitimate aim, suitability, necessity and balance in accordance with the doctrine of proportionality established in Puttaswamy v. Union of India. The role of delegated legislation in defining these limits is critical. Additionally, clarity regarding notice and consent requirements in cases of voluntary disclosure of personal data can ensure greater legal certainty and uphold the internal consistency of the DPDPA. 

Personal Health Data under the Digital Personal Data Protection Act, 2023: Private and Esoteric?

This is a guest post by Ramya Khanna

Introduction

Technological innovations have become an integral part of our daily lives, be it our phones, wearable technologies or the use of digital technologies in healthcare. The digitisation of the healthcare sector is being hailed as the need of the hour because of the radical transformation it has brought about in the delivery of patient care, the ease of administrative and pharmaceutical processes, and improved access. While both the Central and State governments have forayed into launching digital health initiatives as far back as 2015, these initiatives have been marred by fragmentation, low digital literacy, limited access to digital services, substandard data protection and data privacy violations. With the introduction of the Ayushman Bharat Digital Mission in 2021, the government aims to establish a national digital health ecosystem while ensuring that personal health data remains private. 

Before the enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act), there was no comprehensive legal framework to ensure the protection of digital personal data or to protect against privacy violations. The protection of personal health data and redress for privacy violations therefore fell under the umbrella of sectoral legislation such as the Information Technology Act, 2000 ('IT Act') read with the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 ('IT Rules'); the Mental Health Act, 1987; the Medical Termination of Pregnancy Act, 1971; etc. This blog will examine and analyse various provisions for the protection of digital personal health data under the DPDP Act.

The Digital Personal Data Protection Act, 2023 and Protection of Personal Health Data 

I. Risk Evaluation and Regulation

The DPDP Act, a horizontal law that aims to safeguard the right of individuals to protect their personal data across sectors, also recognises the need to process personal data. Unfortunately, however, it does not regulate the ancillary harms arising from processing, such as loss and alteration of personal data, financial and reputational losses, and profiling of users, nor does it provide any measures to mitigate these harms. Both the Srikrishna Committee (2018) and the Personal Data Protection Bill, 2019 observed that such harms are a possible consequence of processing personal data. To mitigate them, the 2019 Bill had provided for risk evaluation, impact assessments and audits, and had also given the data principal (the individual to whom the personal data relates) the right to seek compensation from the data fiduciary (those who determine the purpose and means of processing personal data) or the data processor (those who process personal data on behalf of a data fiduciary), as the case may be; these are conspicuously absent from the DPDP regime. It is pertinent to note that both the General Data Protection Regulation ('GDPR') of the European Union and the United Kingdom's GDPR provide for the regulation of risks arising from data processing and for compensation in cases of such harms, whereas the DPDP Act provides only for monetary penalties under Section 33.

The DPDP Act, through Section 4, allows the processing of personal data only after the consent requirements under Section 6 have been fulfilled, or for the 'legitimate uses' mentioned under Section 7. Under Sections 7(f) and 7(g), however, a data fiduciary can process personal data to respond to a medical emergency involving a threat to the life or health of the data principal or any other person, and to take measures to provide medical treatment or health services during an epidemic, outbreak of disease or any other threat to public health, respectively. In these circumstances the data principal would reasonably not be in a position to provide consent that fulfils the requirements of Section 6; any such consent is therefore at best deemed consent. This notion of consent, along with the absence of harm-mitigation and risk-regulation measures under the DPDP regime, may limit the autonomy of individuals over their health data.

II. Classification of Data

While the DPDP Act recognises, under Section 9, that additional processing requirements are necessary for certain categories of data principals such as children and persons with disabilities, it does not contain any provisions for special categories of data. Before the enactment of the DPDP Act, sensitive personal data such as medical, health and biometric data was regulated by the IT Rules. The IT Rules, 2011 provided that the collection of 'sensitive personal data or information' was subject to enhanced requirements, such as explicit consent in writing through letter, fax or email. The IT Act provided for compensation for negligence in implementing and maintaining 'reasonable security practices and procedures' while processing sensitive data or information under Section 43A, and for punishment for disclosure of personal information under Section 72A. 

Corresponding protections and provisions are conspicuous by their absence under the DPDP regime. It is pertinent to note that both the Personal Data Protection Bill, 2019 and the recommendations of the Joint Parliamentary Committee provided added protections for a special category of data, i.e. sensitive data. Unfortunately, under the DPDP regime, sensitive personal data such as personal health data, biometrics and financial data has been placed on the same pedestal as personal data like email addresses, postal addresses and phone numbers. Section 6 of the DPDP Act, which requires data principals to provide consent that is "free, specific, informed, unconditional and unambiguous", also does not provide an enhanced threshold for more sensitive categories of data. 

It is also germane to note that Section 10 of the DPDP Act allows the Central Government to notify any data fiduciary or class of data fiduciaries as a 'significant data fiduciary' on the basis of certain factors, one of them being the 'sensitivity of the data processed'. This raises the question: if a data fiduciary processing 'sensitive data' such as health records, biometrics and financial information can be given special status, why did the legislature stop short of defining sensitive data or according it any special status? The answer may be that all personal data could become sensitive personal data: data points that are non-sensitive when processed separately may, when combined and processed together, become sensitive and result in significant privacy violations and harms. Even so, this explanation does not detract from the fact that data which is sensitive at the very outset needs to be accorded special status and added protections.

III. Right of Withdrawal of Consent 

Since consent is not only the basis of data privacy and protection but a non-negotiable aspect of it, the right to withdraw consent is equally essential, flowing as it does from the right to privacy and self-determination. The DPDP regime gives the data principal the right to withdraw consent at any time and with the same ease with which it was given, under Section 6(4). However, this right faces an obstacle under Section 6(6), whereby the data fiduciary is allowed to continue processing the data principal's personal data for a 'reasonable time' even after consent has been withdrawn. The regime neither specifies a maximum 'reasonable time' nor indicates who decides how long this period should be and on what basis. This latitude under the DPDP Act leaves the data principal in a perilous position, as their data may continue to be processed without consent for a 'reasonable period of time' that is neither within their knowledge nor under their control. 

A corresponding right is the right to erasure/deletion of data after the withdrawal of consent. While Section 12(3) does give the data principal the right to ask for erasure of their personal data, this right faces the same encumbrances as the right to withdraw consent. It too provides no timeline for erasure, and is further qualified in that it allows the data fiduciary to retain the personal data if retention is 'necessary for the specified purpose or for compliance with any law'. This again leaves the data principal with a right in name only. It is pertinent to note that Article 17 of both the EU GDPR and the UK GDPR provides for the right to erasure after the withdrawal of consent 'without undue delay'.

Conclusion

The move towards digitisation has raised serious concerns with respect to the protection, security and privacy of personal health data. The enactment of the DPDP Act has been seen as the first step in mitigating these concerns, as it provides a horizontal legislative framework for the protection of digital personal data, but it is marred by the impediments and encumbrances discussed above. The DPDP regime also faces implementation challenges: data breaches and leaks such as the CoWIN portal breach and the 2022 AIIMS cyber-attack, and a low digital literacy rate that has led to a shortage of trained personnel, resulting in non-compliance with standards, limited interoperability, and so on. 

These impediments cannot be written off as teething problems; they need effective resolution. The next step, therefore, is to ensure that the push for digitisation of the healthcare sector is balanced with the responsibility to safeguard personal health data. To achieve this equipoise, international best practices and standards must be harmonised not just with the DPDP Act, 2023 but also with the Central and State governments' healthcare digitisation initiatives. The government was meant to publish the Draft Rules under the DPDP Act for its implementation by the end of December, but they have not yet been published. Since these rules are touted as being more expansive than the parent Act, they should, in line with international best practices, provide for mitigating measures such as risk evaluation and impact assessments, compensation for failure to protect personal data, classification of and additional protection for sensitive data, and time limits for data deletion after withdrawal of consent or a demand for erasure.

Emerging Framework for Gig Workers Welfare in India Leaves Critical Questions Unanswered

By Fawaz Shaheen

The Haryana Government has recently indicated that it is going to introduce legislation establishing a welfare board for the social security of gig workers. This comes after Rajasthan, in July 2023, became the first state in India, and among the first places in the world, to legislate social security measures for platform-based gig workers. Even though the law is yet to be notified, it was hailed as an important milestone in providing gig workers with a measure of social security and rights against platforms. Several labour organisations, including the largest union of gig workers in India, also welcomed it as a crucial step forward. It is interesting to note that the Social Security Code, enacted by Parliament in 2020, also contained similar provisions for the welfare of gig workers. While the scheme under the Social Security Code has yet to be notified, the Rajasthan law seems to have drawn its basic outline from the Code and fleshed it out with more detail. The proposed law in Haryana may also take a similar route, according to a public statement made by the state's Deputy Chief Minister. The ruling party in Karnataka had also promised a similar model in its manifesto during the state assembly polls last year. The model of a welfare board, with minor modifications, thus seems to be emerging as the accepted framework to deal with the challenges of the gig economy in India. It is important to take a look at its basic contours and how it might impact the governance of digital platform-based businesses in India.  

One of the most important debates surrounding gig work concerns the status of gig workers as employees. Platform companies are able to build competitive and viable business models because of the flexible nature of what is also known as 'on-demand' work. Not having to categorise their workers as employees saves them a fortune on benefits such as healthcare, provident fund and paid leave. The absence of formal employment and termination procedures also facilitates easy hiring and firing of workers for specific, time-bound tasks. This in turn allows companies to be adaptable in rapidly shifting market scenarios, a huge advantage that gives platform-based companies a definite edge over traditional businesses. 

However, these same conditions make gig work extremely precarious for those who actually carry it out. The flexibility and adaptability prized by platform companies make gig work an unreliable source of regular income. This is despite the fact that many of the conditions of their work exhibit characteristics of regular employment: the centrality of their work to the platform's core business, the degree of control platforms exert over their work, both through rules and regulations and through the functioning of algorithms, and limits on their ability to take up other employment. 

Emerging Legal Framework in India

Under the Social Security Code, 2020, 'gig workers' are defined as those who perform work outside of the 'traditional employer-employee relationship' (Section 2(35), Social Security Code, 2020). The Rajasthan law has taken the same essential definition and added that such work is carried out as part of a contract and results in a given rate of payment, or 'piece-rate' work. This definition seeks to adopt a pragmatic approach to defining 'gig work' without getting into the debate over whether these workers are employees or independent contractors. However, by not explicitly recognising them as employees, it effectively validates the contention of platform companies that these workers are not their employees, and that their work conditions will therefore not be governed by traditional labour law principles.

Another crucial aspect is the manner in which the Rajasthan law seeks to operationalise social security schemes for gig workers. It calls for the setting up of a government-controlled Welfare Board with broad powers to formulate and administer schemes for gig workers. The Board will have representatives of gig workers as members, but these will be nominated by the state government rather than elected by unions or workers' groups. This is again similar to the provision for the welfare of gig workers under the Social Security Code, 2020, which also envisions a National Social Security Board, consisting of members nominated by the central government, as the central agency for governing social welfare schemes for gig workers. 

A number of questions have been raised with regard to enacting social security through the medium of welfare boards. For instance, the welfare board model ties the social security of gig workers to contributions made by them and their employers, instead of creating guaranteed entitlements. And by tying welfare measures to individual transactions between the platform and the consumer, it also fails to distinguish between the kinds of work carried out on different platforms. A transaction on a food delivery or cab-hailing platform usually involves only one gig worker, the rider or the driver. But one order on an e-commerce platform might involve several workers at different stages, from packing and handling to transportation and delivery. If the law treats each of these as a single transaction, with social security benefits tied to contributions made by the platform on the basis of the number of transactions, gig workers on an e-commerce platform would be at a significant disadvantage compared to workers carrying out similar tasks on a food delivery platform, as the illustration below suggests.
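A minimal arithmetic sketch of this disadvantage, assuming a hypothetical per-transaction welfare contribution of 2% of the order value; the rate, order values and worker counts are illustrative assumptions, not figures drawn from the Rajasthan law or the Social Security Code:

```python
# Hypothetical illustration: if a welfare contribution is levied per
# transaction, the amount accruing per worker shrinks as more workers
# are involved in servicing a single transaction.

def per_worker_contribution(order_value: float, cess_rate: float, workers_involved: int) -> float:
    """Contribution accruing to each worker if the per-transaction cess
    is shared equally among all workers who serviced the transaction."""
    return (order_value * cess_rate) / workers_involved

# Food delivery: one Rs 500 order, serviced by a single rider.
food_delivery = per_worker_contribution(order_value=500, cess_rate=0.02, workers_involved=1)

# E-commerce: one Rs 500 order, but (say) four workers handle packing,
# handling, transportation and last-mile delivery.
e_commerce = per_worker_contribution(order_value=500, cess_rate=0.02, workers_involved=4)

print(f"Per-worker contribution, food delivery: Rs {food_delivery:.2f}")  # Rs 10.00
print(f"Per-worker contribution, e-commerce:    Rs {e_commerce:.2f}")     # Rs 2.50
```

Under these assumed numbers, a warehouse or transport worker on the e-commerce platform accrues a quarter of what a delivery rider does for comparable effort, simply because the cess is counted per transaction rather than per worker.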

Gig Work and Digital Rights

One area which hasn’t yet received much attention is the manner in which these laws will impact informational privacy and digital rights of the gig workers. From a data protection perspective, the scheme laid out in the law raises a number of concerns, some of which are:

Registration of all workers on a state government database: The law requires all gig workers to be registered on a government-run database and assigned a unique ID. This unique ID will be used to track all the work they undertake for various platforms, and will become the basis for determining the benefits they receive under any social security scheme notified by the government. The law does not specify any purpose limitation for the data collected under this head. This matters for several reasons, since the data would allow anyone – including current and prospective employers – to map out the trajectory of a worker's entire employment history. The law also does not provide for confidentiality or limit access to the data to the Board. Without confidentiality and purpose limitation, the aggregation of data about a worker's jobs across different platforms could place them at a significant disadvantage. This is especially relevant given recurring concerns about unfair and non-transparent deactivation practices of employers.

Payments Monitoring and Tracking System: The law requires the setting up of a system for tracking all payments made on a platform. It is not clear why a separate payments tracking system is needed to operationalise the law, especially since all platforms are already tax-paying entities whose financial records are available to the government. This again carries significant potential for abuse, especially given the lack of purpose limitations on how this payments data can be used.

Fails the least intrusive means test: The Supreme Court in Puttaswamy has laid down a clear standard of minimal intrusion for situations where private data of citizens is being collected and recorded by the state for welfare and for other necessary functions. This standard requires the state to find the least intrusive means of operationalising a particular scheme or program, so that the data being collected can be minimised. In the present case, the operationalisation of the entire scheme is predicated upon registration and tracking of payments made to the gig workers. This is precisely the opposite of the least intrusive means standard laid down by the apex court.

Conclusion

While the move to set up welfare boards for platform-based gig workers across different states represents a crucial step towards ensuring some level of social security for a very precarious class of workers, it still leaves many important questions unanswered. Much will depend on how the welfare boards function and the kinds of welfare schemes they introduce once notified. But ultimately the law itself entrenches gig workers' status as temporary workers whose social security depends on external inputs from the welfare board, rather than being guaranteed by virtue of their employment. It also fails to address the imbalance of technological power between digital platforms and their workers, leaving workers vulnerable to violations of informational privacy and subject to opaque data-driven decision-making.

Digital Selves and their Immortality – a case for Posthumous Right to Privacy in the age of Artificial Intelligence 

By Samrridhi Kumar and Sukriti

In July 2023, a single judge bench of the Delhi High Court ('Court') delivered judgement in a petition filed by Sushant Singh Rajput's ('SSR') father ('plaintiff') against a film inspired by the circumstances surrounding SSR's death. The plaintiff claimed that the filmmakers had used SSR's likeness and caricature in violation of his personality rights. He further submitted that a celebrity's rights include personality, privacy and publicity rights, and that the film violated both SSR's right to privacy and his own. Although the right to privacy does not subsist after the death of an individual, the plaintiff submitted that the rights to privacy and publicity were heritable and could be agitated on SSR's behalf by the plaintiff. 

The Court held that the rights to privacy, personality and publicity are not heritable, "died with the death of SSR", and could not be said to survive so as to be agitated on his behalf by the plaintiff. It further held that the information used to make the film was publicly available, and that the plaintiff's consent was not required before the making of the movie. While the ruling considers the two rights within a conventional common law understanding, we interrogate whether there is a need to reinterpret these rights. 

This ruling prompts us to reconsider the scope of personality rights more generally. Although originally drawn from the idea of privacy, personality rights eventually found protection within the commercial realm of Intellectual Property Rights ('IPR'). However, the digital realm and significant advances in Artificial Intelligence challenge the present understanding of publicity rights. 

This blog will discuss the gaps that arise in the current framework of personality rights as many of us lead a digital life online, taking an expanded view of publicity rights through the lens of the right to privacy. In view of recent and ongoing developments in technology, we argue for the need to expand the idea of privacy to include a posthumous recognition of privacy. We conclude that the law must depart from the prevailing view and incorporate a robust posthumous right to privacy and personality.

Why the IPR-based version of personality rights is outdated for the digital age

As we witness the rapid deployment of Generative AI across industries globally, a host of unanticipated problems beyond the current framework of the law have arisen. An episode of the popular show 'Black Mirror' provided a dismal forewarning of the potential exploits and problems posed by Artificial Intelligence. The satirical episode, in which the daily life of the protagonist, Joan, is broadcast in real time on a streaming platform by an AI-generated likeness of Salma Hayek, feels like an ominous prompt to reconsider personality rights for celebrities and non-celebrities alike. The protagonist unknowingly signs away the rights to use her life for entertainment when she creates an account on the streaming platform, while also granting permission to be surveilled and recorded through her devices. 

The experience of the entertainment industry offers interesting insight into the ways in which personality rights, and with them our understanding of privacy, are undergoing a radical shift. Examples include a Chinese Gen-AI actress bearing an uncanny resemblance to another Chinese celebrity, a Japanese company creating a TV commercial with an AI-generated actress, Meta creating animated AI chatbots of celebrities using their likenesses, and the creation of AI-generated bands in Korean pop.  

While personality rights of celebrities find protection under intellectual property rights, there is a need to consider the protection of personality rights of all individuals, particularly with the rise of novel ways of reproducing online presence through Artificial Intelligence, Virtual and Augmented Reality and similar technologies, as illustrated above.

Another illustration of the importance of reading a right to privacy together with the commercial right of publicity is the ongoing Screen Actors Guild strike in the United States. The proposed contracts raise the threat of Generative AI being deployed to use an individual's image, likeness, voice or performance to create content in any manner without seeking consent. Such use may exploit an individual's personal aspects in various ways, in perpetuity. 

Moreover, despite the contractual protections available, any individual's personal aspects may be exploited by anyone, for their own use and purpose, through AI, in ways that go beyond the commercial exploitation covered by IPR and leave non-celebrities unprotected. A privacy-based approach recognises the loss of income, dignity, agency and choice that both celebrities and non-celebrities suffer for want of a remedy within the current framework.

Why a Posthumous Right to Privacy Can Fill the Gap – Conceptualising the Idea of Post-Mortem Privacy 

In recent times, there have been several controversies over celebrities' personalities and likenesses being exploited after their death through digital and virtual means. For instance, the documentary film about chef Anthony Bourdain used an AI model of his voice, created from several hours of his recordings, for a few lines in the film. Similarly, hologram and CGI technology has been used to 'resurrect' deceased celebrities such as Michael Jackson, James Dean and Jimi Hendrix.

Beyond celebrities, almost every individual will leave behind a digital footprint that outlasts their lifetime. As we increasingly live our lives on the internet, these digital footprints can easily be exploited. For instance, drawing on the story of the virtual resurrection of a deceased child named Jang Nayeon in South Korea, Amber Boothe argues for the recognition of broader personality rights in the UK. Employing hypothetical instances illustrating the challenges of extended reality technology, she argues that the "UK's lack of personality rights is becoming ever more problematic in the case of the living, and wholly indefensible in the case of the dead."

As the res digitalis that a person leaves behind challenges traditional notions of privacy, there are arguments for the recognition of a right to post-mortem privacy. J. C. Buitelaar defines post-mortem privacy as the "right of a person to preserve and control what becomes of their reputation, integrity, secrets, dignity or memory after their death." The concept, according to him, derives its credence from the understanding that privacy protection includes the protection of human dignity and autonomy, even after the mortal body no longer subsists in this world. He states that dignity and autonomy allow individuals to "pursue ideals of life and character before and after [their] demise, which would be difficult to achieve were privacy not safeguarded." In common law, however, rights such as privacy and personality are considered to extinguish with the person, an approach that also finds approval in the Puttaswamy judgement. 

Buitelaar cites the Mephisto case, in which the Federal Constitutional Court of Germany upheld a ban on the publication of a novel telling a fictitious story about a character based on a deceased actor. In an injunction claim against the publisher filed by the actor's adopted son, the court held that the constitutional mandate of the inviolability of human dignity does not permit a person to be belittled and denigrated after his death. An individual's death, the court held, does not put an end to the state's constitutional duty to protect him from assaults on his human dignity. 

At the same time, blanket protection through the right to privacy risks throttling freedom of speech and expression. There is therefore a need to strike a balance between freedom of speech and expression and the right to privacy on a case-by-case basis. Factors to consider in such a balancing exercise might include the public interest, the impact on any pending trial, the impact on the dignity of the deceased, the extent of dissemination in the public domain, the implications for an individual's private and family life, the nature of the information used, and whether consent was obtained. 

The Court's decision in the case filed by SSR's father compels us to reconsider what may constitute dignity and autonomy within the realm of privacy, and how such an understanding may be extended to personality rights. We anticipate that courts across the world will soon grapple with the issues briefly discussed in this blog. These interests can be properly protected only if the right to privacy is viewed as an integral aspect of personality rights, not only for the living but also for the deceased. 

Navigating the Indian Data Protection Law: Children’s Privacy and the Digital Personal Data Protection Act, 2023

By Sukriti

Editor’s note: This blog is a part of our ongoing Data Protection Blog Series, titled Navigating the Indian Data Protection Law. This series will be updated regularly, and will explore the practical implications and shortcomings of the Digital Personal Data Protection Act, 2023 (“DPDP Act”), and where appropriate, suggest suitable safeguards that can be implemented to further protect the rights of the data principals.

For a detailed analysis of the Indian data protection legislation, the comprehensive comments provided by the Centre for Communication Governance on the 2022 DPDP Bill and the 2018 DPDP Bill can be accessed here. For a detailed comparison between the provisions of the DPDP Act and the 2022 Bill, our comparative tracker can be accessed here. Moreover, we have also provided an in-depth analysis of individuals’ rights under the DPDP Act in the Data Protection 101 episode of our CCG Tech Podcast.

In August 2023, the Parliament of India enacted the Digital Personal Data Protection Act ('DPDPA' or 'Act'). Section 9 of the Act deals with the processing of personal data of children. For everyone under the age of 18, the Section places three conditions on the processing of children's personal data: a) obtaining the verifiable consent of the parent; b) ensuring that the processing of personal data is in alignment with the well-being of the child; and c) a prohibition on tracking or behavioural monitoring of children and on targeted advertising directed at children.

This blog will analyse the provision and identify gaps that might be useful to keep in mind while framing the upcoming Rules under the Act.

Right to Privacy and Decisional Autonomy of Children

The approach to children's data protection is based on presumptions about children's capacities and about the family being best placed to protect children's wellbeing and interests. Georgina Dimopoulos states that 'the child' as an identity has been constructed by law as vulnerable, dependent, and incapable of rational decision-making. This notion of the child does not consider them capable of making autonomous decisions. 

Decisional privacy protects the ability of an individual to make autonomous decisions without unjustified interference from others or the State. Dimopoulos conceptualises a theory of decisional privacy for children by drawing on the child rights provisions in the United Nations Convention on the Rights of the Child ('UNCRC'). A rights-based approach holds that age alone cannot be the basis for denying a child with demonstrable competence the opportunity to consent. 

In the case of children, the idea of autonomy is often overlooked because they are viewed as being in need of protection. Yet children can meaningfully exercise their rights only when they are afforded autonomy, which in turn requires viewing them as individuals capable of exercising it under suitable circumstances. 

In the context of children's access to and use of the internet, recognising decisional privacy for children would mean trusting their capability to interact with the internet autonomously, accompanied by adequate safeguards that allow this autonomy to be exercised safely. The DPDPA's approach instead relies on parental consent as the means of achieving data protection for everyone under the age of 18. From the perspective of children's decisional autonomy, the requirement of parental consent is likely to result in parental control over children's access to information on the internet, potentially denying children the opportunity to define their experience online. This is of particular relevance for children in abusive or conflicted family environments. 

Parental Consent – Why a Misplaced Approach 

Daniel J. Solove has pointed out that the idea of 'consent' within privacy law is itself a fiction. He notes that consent "authorises and legitimises a wide range of data collection and processing" yet can rarely be meaningful, as it is never truly informed and is often reduced to binary choices; the individual "lacks a reasonable understanding of the consequences of choosing the option." Relying on parental consent for children's data protection is therefore questionable, especially in a country like India with low levels of digital literacy among adults. 

As a result, a provision requiring parental consent for the processing of children's data will instead become a means for parents to control children's access to information online. Age-gating bills introduced in various US states have become a way to limit children's access to LGBTQ+ content or information related to gender and identity, and to give parents control over what minors are able to see online, shielding them from spaces and online resources on ideological grounds. Such laws also make the internet an exclusionary space for children of queer identities and harm them by creating hurdles to accessing information or safe communities. This works directly against the best interests principle enshrined in the UNCRC, to which India is a signatory. 

Given how heavily children rely on the internet in varied ways, these implications will erode children's decisional privacy online and deny their autonomy. On a practical level, to what extent can parents realistically be expected to provide consent for children's varied online engagements?

Dimopoulos cites the evolving capacities principle, according to which a child may reach a level of understanding and maturity at which the need for parental direction and guidance is minimised, except where the child is unwilling or feels that they lack the competence to exercise such autonomy. The arbitrary imposition of a blanket age of 18 under Section 9 does not account for this principle, thereby restricting children's decisional privacy on the internet, nor does it account for the different stages of children's development, which affect their decisional capacity and the level of supervision they require. One suggested solution is therefore to adopt a graded, risk-based approach to the processing of children's data: under a graded approach, the government could lower the age threshold for certain digital services that do not carry significant privacy risks for children's data. 

The Ambiguity of Verifiable Consent

Although the recommendation of a graded approach is key, it is complicated by the requirement of 'verifiable' consent under the Act. The Act not only necessitates the consent of the parent but also requires that such consent be "verifiable". Verifiability entails two aspects: verifying age, and verifying the consent of the parent. As has been suggested in another opinion, a requirement of verifiability inevitably means that everyone on the internet will be required to verify their age. This raises questions about the means of verification. There are a few likely ways of achieving this: self-declaration, verification against available government IDs, and age-appropriate quizzes to estimate age. All of these mechanisms have downsides and none is fool-proof. Hence, even to implement a graded approach, entities would still need to determine whether they are interacting with a child of a certain age. For this, the Rules would need to resolve the question of age verification and its operationalisation, and clarify the mechanism for verifying parental consent. 
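To make the graded approach concrete, the following is a minimal sketch of how a graded, risk-based age gate might operate once a platform has some (self-declared or verified) age for the user. The risk tiers, age thresholds and service examples are illustrative assumptions and are not drawn from the DPDP Act or any draft Rules:

```python
# Minimal sketch of a graded, risk-based age gate.
# Risk tiers and thresholds below are hypothetical, for illustration only.

from dataclasses import dataclass

# Hypothetical minimum age at which a user could consent on their own for
# each risk tier; below it, verifiable parental consent would be required.
SELF_CONSENT_AGE_BY_RISK = {
    "low": 13,     # e.g. an ad-free educational reference service
    "medium": 16,  # e.g. a moderated discussion forum
    "high": 18,    # e.g. services involving profiling or targeted ads
}

@dataclass
class ConsentDecision:
    needs_parental_consent: bool
    reason: str

def evaluate_consent(declared_age: int, service_risk: str) -> ConsentDecision:
    """Decide whether verifiable parental consent is needed, given the
    user's age and the service's assumed risk tier."""
    threshold = SELF_CONSENT_AGE_BY_RISK[service_risk]
    if declared_age >= threshold:
        return ConsentDecision(False, f"age {declared_age} meets the {service_risk}-risk threshold of {threshold}")
    return ConsentDecision(True, f"age {declared_age} is below the {service_risk}-risk threshold of {threshold}")

# Usage: the same 15-year-old is treated differently by risk tier.
print(evaluate_consent(15, "low"))   # no parental consent under this sketch
print(evaluate_consent(15, "high"))  # verifiable parental consent required
```

Even under such a scheme, the platform still needs a reliable age signal in the first place, which is precisely the operational question the Rules would have to settle.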

Keeping the Internet Safe for Children

The Act further places a blanket prohibition on platforms tracking children, monitoring their behaviour or targeting advertisements at them. The underlying assumption is that these activities are entirely harmful to all children under the age of 18. However, the provision does not take into account sectoral differences that may affect the processing of children's data, or the role such activities play in providing services for children. For instance, a gaming platform, an ed-tech platform, a shopping website and a video streaming platform serve different purposes for children and will process data differently. The prohibition also makes it hard for platforms "to prevent a child from being exposed to harmful, risky or illegal content, interactions and experiences without tracking or monitoring their behaviour." While tracking has clear privacy concerns attached to it, platforms can be required to have design features that ensure they safely track only necessary data. This can be done by incorporating safeguards such as high-privacy default settings, data minimisation, a prohibition on profiling (unless demonstrably necessary), a prohibition on storing personal data, and data retention and deletion periods.
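By way of illustration, a minimal sketch of what "high-privacy default settings" for a child's account could look like in practice; the field names, defaults and the 30-day retention figure are assumptions for illustration only, not requirements from the Act or any code of practice:

```python
# Hypothetical high-privacy defaults for a child's account: profiling,
# targeted ads and location tracking are off unless demonstrably needed,
# only minimal fields are collected, and safety logs expire quickly.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ChildAccountDefaults:
    behavioural_profiling: bool = False   # off unless demonstrably necessary
    targeted_advertising: bool = False    # prohibited outright
    location_tracking: bool = False       # only if a specific feature requires it
    data_minimisation: bool = True        # collect only what the service needs
    retention_days: int = 30              # delete safety-related logs after a short window
    collected_fields: List[str] = field(default_factory=lambda: ["username", "age_band"])

print(ChildAccountDefaults())
```

The point of such defaults is that protective settings apply without any action by the child or parent, while anything more intrusive has to be justified and switched on deliberately.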

Although the Act contains a provision under Section 9(4) to exempt certain classes of platforms, or certain purposes of data processing, from the requirements of Section 9, the basis on which these exemptions will be made is yet to be specified. Regardless, the problem of a blanket ban would remain for all other platforms and might make it harder for them to keep their services safe for children. This is especially true for platforms that are not directly or solely aimed at children, such as YouTube, which are nonetheless useful to children in varied ways but might fail to receive an exemption. The Act also provides, under Section 9(5), for exemption from the parental consent and tracking provisions for platforms processing data in a "verifiably safe manner". It would be useful for the Rules to provide adequate safeguards around tracking and to specify the means and manner of such verification, as suggested above.

Despite these concerns, the Act does retain scope to fill some of these gaps through Section 9(2), which requires platforms to process personal data of children in a manner that is not detrimental to the well-being of the child. However, the Act gives no indication of what it means by 'well-being'. It might be useful to lay down principles in this regard, along the lines of the best interests principle under the UNCRC. An approach oriented towards the best interests of the child would require platforms to uphold certain standards in their design, settings and data processing, and to publish risk assessments. Setting such standards would also provide direction and accountability for platforms without jeopardising access to, and autonomous use of, the internet by children of varying ages.