India’s Artificial Intelligence Roadmap

By Aditya Singh Chawla

There is now a near-universal perception that Artificial Intelligence technologies are set to disrupt every sphere of life. However, this is coupled with concern regarding the social, ethical (and even existential) challenges that AI might present. As a consequence, there has been an uptick in interest from governments in how best to marshal the development of these technologies. The United Kingdom, the United States, China, and France, among others, have all released vision documents that explore these themes.

This post, the first in a series, presents a brief overview of such initiatives by the Indian government. Subsequent posts will focus specifically on their treatment of personal data, as well as their consideration of ethical issues posed by AI.

~

Task Force on Artificial Intelligence

In August 2017, the Ministry of Commerce and Industry set up a ‘Task Force on Artificial Intelligence for India’s Economic Transformation’. A panel of 18 members was formed with the objective of exploring how Artificial Intelligence could be best deployed in India.

The Task Force released its Report in May 2018, where it characterized AI as a ‘socio-economic problem solver at a large scale’, rather than simply a booster for economic growth. It sought to explore domains which would benefit from government intervention, with the objective of improving quality of life and generating employment. The report identifies 10 sectors where AI could be deployed: Manufacturing, FinTech, Healthcare, Agriculture and Food Processing, Retail, Accessibility Technology, Environment, National Security, and Public Utility Services. It attempts to identify challenges specific to each sector, as well as enabling factors that could promote the adoption of AI.

The report also explores the predicted impact of AI on employment, as well as other broader social and ethical implications of the technology. It concludes with a set of recommendations for the government of India. A primary recommendation is to constitute an Inter-Ministerial National Artificial Intelligence Mission (N-AIM) with a 5-year budget of Rs. 1200 crores. Other recommendations focus on creating an ecosystem for better availability of data for AI applications; skilling and education initiatives focused on AI; and standard setting, along with international participation in standard-setting processes.

NITI Aayog’s National Strategy for Artificial Intelligence

In his Budget speech, the Finance Minister had tasked the NITI Aayog with formulating a national programme for Artificial Intelligence. In June 2018, the NITI Aayog released its roadmap in the form of the National Strategy for Artificial Intelligence.

The paper frames India’s AI ambitions in terms of increasing economic growth, social development, and as an incubator for technology that can cater to other emerging economies. It focuses on 5 sectors as avenues for AI-led intervention. These are healthcare, agriculture, education, smart cities, and smart mobility. It also identifies some key challenges to the effective adoption of AI. These include low awareness, research, and expertise in AI, along with an absence of collaboration; the lack of ecosystems that enable access to usable data; high resource costs; and ill-adapted regulations.

The paper then presents a series of recommendations to address some of these issues. In order to expand AI research in India, it proposes a two-tier framework to focus on basic research as well as application-based research. It also proposes the creation of a common computing platform in order to pool cloud infrastructure, and reduce infrastructural requirements for research institutions. It further suggests a review of the intellectual property framework to enable greater AI innovation. In order to foster international collaboration, the paper proposes the creation of a supranational CERN-like entity for AI. It also recommends skilling and education initiatives to address job creation, as well as the current lack of AI expertise. In order to accelerate adoption, it proposes a platform for sharing government datasets, along with a marketplace model for data collection and aggregation, for data annotation, as well as for deployable AI models.

The paper concludes with its recommendations for ‘responsible’ AI development. It recommends that there be a consortium of the Ethics Councils at each of the AI research institutions. It further proposes the creation of a Centre for Studies on Technology Sustainability. It also emphasizes the importance of fostering research on privacy preserving technology, along with general and sectoral privacy regulations.

Further reports suggest that a task force will be set up to execute the proposals that have been made, in coordination with the relevant ministries.

MeitY Committees

It has also been reported that four committees were constituted in February 2018 to deliberate on issues of ‘data for AI, applications of AI, skilling and cyber security/legal, ethical issues.’ However, there have been no reports about when the committees will present their recommendations, or whether they will be made available to the public.

~

India appears to be at the nascent stage of formulating its approach towards Artificial Intelligence. Even so, it is encouraging that the government recognizes the importance of its stewardship. Purely market led development of AI could imply all of its disruption, without any of the envisaged social benefits.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

The General Data Protection Regulation and You

By Aditya Singh Chawla

A cursory look at your email inbox this past month reveals an intriguing trend. Multiple online services seem to have taken it upon themselves to notify users of changes to their Privacy Policies at the same time. The reason, simply, is that the European Union’s General Data Protection Regulation (GDPR) comes into force on May 25, 2018.

The GDPR marks a substantial overhaul of the existing data protection regime in the EU, as it replaces the earlier ‘Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data.’ The Regulation was adopted by the European Parliament in 2016, with a period of almost two years to allow entities sufficient time to comply with their increased obligations.

The GDPR is an attempt to harmonize and strengthen data protection across Member States of the European Union. CCG has previously written about the Regulation and what it entails here. For one, the instrument is a ‘Regulation’, as opposed to a ‘Directive’. A Regulation is directly binding across all Member States in its entirety. A Directive simply sets out a goal that all EU countries must achieve, but allows them discretion as to how. Member States must enact national measures to transpose a Directive, and this can sometimes lead to a lack of uniformity across Member States.

The GDPR introduces, among other things, additional rights and protections for data subjects. This includes, for instance, the introduction of the right to data portability, and the codification of the controversial right to be forgotten. Our writing on these concepts can be found here, and here. Another noteworthy change is the substantial sanctions that can be imposed for violations. Entities that fall foul of the Regulation may have to pay fines up to 20 million Euros, or 4% of global annual turnover, whichever is higher.

The Regulation also has consequences for entities and users outside the EU. First, the Regulation has expansive territorial scope, and applies to non-EU entities if they offer goods and services to the EU, or monitor the behavior of individuals in the EU. The EU is also a significant digital market, which allows it to nudge other jurisdictions towards the standards it adopts. The Regulation (like the earlier Directive) restricts the transfer of personal data to entities outside the EU to cases where an adequate level of data protection can be ensured. This has resulted in many countries adopting regulation in compliance with EU standards. In addition, with the implementation of the GDPR, companies that operate in multiple jurisdictions might prefer to maintain parity between their data protection policies. For instance, Microsoft has announced that it will extend core GDPR protections to its users worldwide. As a consequence, many of the protections offered by the GDPR may in effect become available to users in other jurisdictions as well.

The implementation of the GDPR is also of particular significance to India, which is currently in the process of formulating its own data protection framework. The Regulation represents a recent attempt by a jurisdiction (that typically places a high premium on privacy) to address the harms caused by practices surrounding personal data. The lead-up to its adoption and implementation has generated much discourse on data protection and privacy. This can offer useful lessons as we debate the scope and ambit of our own data protection regulation.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

SC Constitution Bench on Aadhaar- Final Hearing (Day XXXV)

By Aditya Singh Chawla

In October 2015, a 3-judge bench of the Supreme Court of India referred challenges to the Aadhaar program to a constitution bench. One of the primary concerns of this petition was the existence of a fundamental right to privacy, which has since been upheld. Other similar petitions, concerned with the legitimacy of Aadhaar, had been tagged with this petition. While the fundamental right to privacy has been affirmed, the challenges against the Aadhaar programme, and against linking services to it, were yet to be adjudicated upon.

An interim order was passed in December 2017; a summary of the arguments can be found here and here.

The final hearing commenced on January 17, 2018. Summaries of the arguments advanced in the previous hearings can be found here.

Advocate Zoheb Hossain continued his submissions for the State of Maharashtra and the UIDAI. He began by referring to various international charters and covenants, stressing the importance of harmonizing economic and social rights with civil and political rights.

Justice Chandrachud noted that the Directive Principles, even though they are non justiciable, are necessary for good governance and as a guarantee of reasonableness of the law. This is why they are read into Article 21.

The counsel argued that all rights give rise to corresponding duties, and that Aadhaar was a project to secure the economic and social rights of the people. He drew the Court’s attention to the Justice Wadhwa Committee Report on the Public Distribution System, and then to various precedents. He referred to the case of DK Trivedi, where the Court had held that ensuring socio-economic welfare was a constitutional obligation of the State. Further, it had been held that a statute could not be judged on the presumption that the executive power it confers would be abused, or used arbitrarily.

The counsel then referred to the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and resolutions of the UN General Assembly. He reiterated that rights were indivisible and interconnected, and that socio-economic rights were equal to civil and political rights.

The counsel then argued that the proportionality and reasonableness of a restriction must be examined from the point of view of the general public, and not that of a specific party that claims to be affected. He argued that even if Aadhaar is used for different purposes, such as obtaining a SIM card or opening a bank account, the data remains disaggregated. He stated that as a consequence, there was no possibility of surveillance, even at the level of the Requesting Entities.

The counsel then drew a comparison between Aadhaar and the Social Security Number in the United States. He noted that the SSN was used for a variety of purposes, and that people could be denied benefits for not producing their SSN. He argued that the Courts in the US had upheld the firing of an employee for refusing to provide his SSN. The counsel then argued that the Aadhaar Act had sufficient safeguards in place over the identity and authentication information. He referred to Section 33 of the Act, noting that decisions made under that Section were subject to review by an oversight committee. He concluded that the safeguards in place were greater than what are provided by the Telegraph Act, and the standards laid down by the Supreme Court in the PUCL case.

Post lunch, the counsel resumed his submissions for the Respondents by examining how various search and seizure provisions under the IT Act and CrPC had passed constitutional muster. He then proceeded to the issue of ‘national security.’ He argued that in times of emergency, a strict adherence to the principles of natural justice is not necessary. He referred to a House of Lords decision that read a national security exception into a statute even though the text did not provide for it.

He then addressed the contention with respect to Section 47 of the Aadhaar Act, arguing that it provided a sufficient remedy since a complaint could be filed with the UIDAI. He argued that Aadhaar had many technical aspects, so it would be best if only the Authority had the power to complain. He noted that a similar setup in the Industrial Disputes Act had previously been upheld. In addition, he noted that the UIDAI could authorize a person to make a complaint as well.

The counsel then submitted that the Aadhaar Act had sufficient safeguards for the CIDR, while provisions under the IT Act would cover actors outside the CIDR.

The counsel then framed the purpose of Section 139AA of the Income Tax Act as a measure to ensure redistributive justice, and thereby substantive equality. He argued that ‘distribute’ in the Directive Principles had been interpreted liberally, and measures to prevent leakages would thus be considered redistributive.

The counsel then moved to addressing the argument about compelled speech. He argued that not all transactions can be considered to have a speech element, for instance linking Aadhaar to PAN. He further noted that the Court in Puttaswamy had held that rights could be curbed to prevent tax evasion and money laundering. He added that the Income Tax Act and the Aadhaar Act were standalone Acts, and that after Binoy Viswam, it was settled that they were not in conflict. He responded to the contention that only individual taxpayers had been mandated for linkage, stating that a measure need not strike at all evils at once. He argued that the linkage could help cure ills with companies as well, by revealing the people behind them, and could allow the deduplication of DINs. Advocate Zoheb Hossain then concluded his arguments.

The Attorney General then began his arguments by addressing the Money Bill issue. He argued that the Act was, in pith and substance, a Money Bill, since the targeted delivery of subsidies entails the expenditure of funds. He argued that every Act would have ancillary provisions dealing with review, appeal etc., but that the primary purpose dealt with the Consolidated Fund of India.

Justice Chandrachud questioned the counsel about whether Section 57 of the Act severed that link. The AG responded that the Section merely allowed the existing infrastructure to be used for other purposes, and was just an ancillary provision. The UIDAI had been brought into existence primarily to prevent leakages and losses.

Justice Sikri noted that there was no distribution of benefits or subsidies under Section 57. The AG argued that the Section would be saved by Article 110(1)(g) of the Constitution, stressing an interpretation of the word ‘only’ in the Article. Justice Chandrachud suggested that that might amount to rewriting the Constitution.

The Attorney General will resume his arguments on May 3, 2018.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

SC Constitution Bench on Aadhaar- Final Hearing (Day XXXII)

By Aditya Singh Chawla

In October 2015, a 3-judge bench of the Supreme Court of India referred challenges to the Aadhaar program to a constitution bench. One of the primary concerns of this petition was the existence of a fundamental right to privacy, which has since been upheld. Other similar petitions, concerned with the legitimacy of Aadhaar, had been tagged with this petition. While the fundamental right to privacy has been affirmed, the challenges against the Aadhaar programme, and against linking services to it, were yet to be adjudicated upon.

An interim order was passed in December 2017; a summary of the arguments can be found here and here.

The final hearing commenced on January 17, 2018. Summaries of the arguments advanced in the previous hearings can be found here.

Senior Counsel Rakesh Dwivedi resumed his arguments for the Respondents. He began by referring to jurisprudence from the United States, the United Kingdom, South Africa and the European Union, to describe how privacy should be construed in the Indian context. He argued that Indian jurisprudence is more in line with that of the United States than the European Union, as the former lays greater emphasis on the ‘reasonable expectation of privacy’. He then quoted a Harvard Law Review article for the proposition that privacy should be tempered by considerations such as national security, efficiency, and entrepreneurship. He argued that this was especially true in the Indian context, where innovation and development should receive more emphasis than privacy.

The counsel made reference to Justice Chandrachud’s opinion in Puttaswamy, and argued that social welfare could be a legitimate purpose for processing of data. Coming back to the construction of privacy, he argued that all Aadhaar data was in the public, relational sphere. He submitted that privacy is diluted in these realms, so there is a reduced expectation of privacy over data such as demographic data, and facial photographs. He reiterated that data with the Requesting Entities was dispersed, and therefore didn’t require the same level of protection as the CIDR.

Justice Chandrachud sought a clarification on whether the submission was that core biometrics attracted a higher privacy interest than demographic data, such as one’s address. He countered that the implication could not be that the privacy interest in such data was gone, giving the example of a woman who might give out her address for various purposes, but would still have an immense privacy interest in that information. The counsel responded that their argument was simply that privacy varies according to context.

The counsel argued that India had developed the appropriate tests in VG Row, much before any other jurisdiction. He reiterated the three-fold requirement of legality, necessity and proportionality. He noted that Indian jurisprudence generally did not adopt the due process standard. The counsel then addressed some of the cases that had been cited by the Petitioners, and attempted to distinguish them on facts.

Post lunch, the counsel resumed his submissions with the issue of metadata collection. He attempted to distinguish the present case from Digital Rights Ireland, which had been cited by the Petitioners. The counsel argued that there were different types of metadata, and that the data in question in those cases had been much more intrusive than what is collected during Aadhaar authentication. He reiterated that the test is that of ‘appropriate safeguards’. He cited the case of Sundar Rajan v State of Tamil Nadu, which dealt with the Kudankulam nuclear power plant. He argued that the court had examined whether adequate safeguards had been in place, and had given due weight to economic benefits such as the increase in welfare, poverty alleviation etc. He argued that the Court in Sundar Rajan had held that apprehensions and fears could not be allowed to override the justification of the project. The counsel reiterated that the standard would be one of ensuring adequate safeguards, and that the risk would never be zero.

The counsel argued that the Aadhaar Act imposes a complete bar on sharing of the data, factors in consent, and that the data with Requesting Entities was in any case dispersed and decentralized. He argued that the Petitioners had not suggested any way of improving the system, and only wanted it dismantled.

Justice Chandrachud asked what remedy was present in case of breaches. The counsel responded that the Information Technology Act would be applicable, which had penal provisions. Further, the route of contractual damages could be taken.

The counsel then described the EU Data Protection Directive, arguing that the purpose of the Directive was very different, with the aim being to ensure free flow of data. He argued that in contrast, Aadhaar didn’t allow any sharing of data. He argued that as a result, the absence of a regulation such as the Directive, or the General Data Protection Regulation would have no bearing on the matter at hand. He reiterated that the protections in the Aadhaar Act were sufficient, and even higher than those provided by the EU instruments. The counsel then went over the various provisions of the Directive and Regulation that govern the processing of sensitive information.

The counsel then resumed his submission with respect to metadata, as a response to the surveillance concerns raised by the Petitioners. He argued that the Petitioners had not appreciated the distinction between different types of metadata, such as system metadata, process metadata, business metadata etc. He argued that each had to be examined separately. He submitted that Aadhaar authentication only collected limited technical metadata.

The Chief Justice asked why the data had to be retained, and what sort of data was actually retained. The counsel drew the Court’s attention to an affidavit he had submitted, as well as the relevant circular which prescribes the metadata that is collected. He argued that it was all system-related metadata, which allowed the UIDAI to exercise control over the Requesting Entities. He argued that information such as location data or the purpose of authentication was not collected in the process.

The hearing will continue on April 25, 2018.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

SC Constitution Bench on Aadhaar- Final Hearing (Day XXX)

By Aditya Singh Chawla

In October 2015, a 3-judge bench of the Supreme Court of India referred challenges to the Aadhaar program to a constitution bench. One of the primary concerns of this petition was the existence of a fundamental right to privacy, which has since been upheld. Other similar petitions, concerned with the legitimacy of Aadhaar, had been tagged with this petition. While the fundamental right to privacy has been affirmed, the challenges against the Aadhaar programme, and against linking services to it, were yet to be adjudicated upon.

An interim order was passed in December 2017; a summary of the arguments can be found here and here.

The final hearing commenced on January 17, 2018. Summaries of the arguments advanced in the previous hearings can be found here.

Rakesh Dwivedi resumed his arguments for the Respondents. He began with the issue of Section 7 and exclusion. The counsel responded to the argument about probabilistic systems by submitting that the Act allows for alternatives. In the event of an authentication failure, the first alternative is to produce proof of possession of Aadhaar. The second alternative is to provide the enrollment ID, for people who haven’t yet received their Aadhaar. He submitted that the UIDAI had issued directions to this effect, and that a refusal to comply would be a breach under the Aadhaar Act.

Justice Chandrachud asked if the Section 7 proviso would apply to someone who had not applied for Aadhaar. The counsel replied in the negative. The counsel continued, describing the Regulation. He submitted that State and Central agencies that require Aadhaar for benefits are required to ensure enrollment, including the setting up of coordination centres. Further, in the context of the PDS, he argued that Clause 5 of the relevant notification allowed any member of a household to claim the benefit. He concluded that there could be no question of denial, as a result of these measures.

Justice Chandrachud asked if the systems had been tested in remote areas, with limited connectivity, such as Ladakh. Section 7 is silent on alternatives in such cases. The counsel responded that certain exemptions had been notified in the regulations.

The counsel reiterated that the system should not be demolished, but improved so that it could work. He then submitted that even today, we live in a relational world. One cannot pick and choose how one relates to the world, or how one establishes identity. All institutions require some kind of identity, and have some conditions about it. He argued that this wasn’t a question of dignity, because these are regulatory conditions. He stated that these are permissible, and that the only standard is whether a fundamental right is being violated.

The Bench noted that the counsel was trivializing the Petitioners’ argument. They noted that the central concern was that of centralization of the database and its misuse. Justice Chandrachud further argued that the issue was why only one identity had been mandated, and why multiple identities could not be allowed.

The counsel responded that one must go by the rules of the institution they want to participate in. He provided the example of the Proximity Card of the Supreme Court. Justice Chandrachud asked if the form of identity should relate to the purpose of identification. The counsel agreed, stating that there should be a rational nexus. However, he argued that allowing different forms of identity to be submitted would lead to a slippery slope which would destroy the whole purpose of the system.

Justice Bhushan added that many of the other forms of identification don’t have pan-India operation. The counsel agreed, noting that they were also sectoral, without any portability. In comparison, he argued, Aadhaar is universal. Aadhaar is also unique on account of the use of biometrics. If you abandon biometrics, the unique nature is lost. He submitted that even Smart Cards use biometrics.

Justice Chandrachud reiterated the concern about aggregation and analysis of data. The counsel responded that all protections that were socially and legally possible were in place.

He continued, stating that the argument about biometrics providing knowledge about the person was incorrect. He argued that while DNA might contain such information, fingerprints don’t. Further, only one fingerprint would be present with the Requesting Entities. Justice Chandrachud clarified that the issue was not of the biometrics themselves, but their attachment and linking to everything else, which could become a source of information about the individual. The counsel responded that no single Requesting Entity would have access to all of that information; it would be delegated and segregated. Further, any collusion or aggregation would not be possible, and any misuse would require corruption at an inconceivable scale. In addition, most authentication would be required very rarely – once a year, or once in a lifetime. For PDS, it would be once a month.

At this point Shyam Divan interjected, that Banks had been demanding Aadhaar every time a Fixed Deposit is opened. The counsel responded that for most people, that is also a rare occurrence. Further, that was an issue on the Bank’s side, and not mandated by the Act. He argued that that can be examined separately. If the law were to be changed, to mandate authentication for every transaction, that could be questioned and challenged.

The counsel then moved on to the issue of clashes between fundamental rights. He brought the bench’s attention to the Preamble to the Constitution. He argued that the Preamble states that certain values are to be ‘secured’ by the state, and certain others are to be ‘promoted.’ He argued that this imposes an obligation on the state to provide the basic minimum (for instance, minimum wages) to the people. He argued that there was therefore a hierarchy, and that the right to life should triumph over the right to privacy. He argued that if the people were to go without the bare minimum, the Constitution would amount to a mere paper Constitution.

Justice Chandrachud noted that dignity was not a peripheral value in the Constitution, but the core foundation of all rights. The Constitution protects dignity in all its forms, and food security and privacy were both aspects of dignity. The counsel responded that when they were in conflict, the first must have primacy over the second. He noted the NALSA judgment, which according to him brought about a paradigm shift in our conception of dignity.

Justice Bhushan questioned if they had to be read in conflict, and could not be recognized together. The counsel responded that they were arguing for a balanced approach, and in this case, in the favour of the right to life.

Justice Chandrachud asked if this would require a proportionality test. He stated that the question was whether the incursion on privacy was small enough to justify the benefits that have been claimed. The counsel responded that in the case of a restriction on a right, the burden lies on the state; however, this was a case of an interplay between rights. Justice Chandrachud countered that the burden was still with the state. The counsel responded that they were only submitting that the parameters for scrutiny would be different. Further, that Article 21 supersedes the rights under Articles 19 and 14. Life would come first, and the other rights wouldn’t mean anything without it.

The counsel then resumed arguing for the relevance of biometrics, noting that large parts of the population were illiterate. Their thumbprints were all they had to use in the conduct of their lives.

The Chief Justice noted that the real problems were of surveillance, aggregation, privacy and exclusion, which have to be addressed. The counsel said that the subsidies were in furtherance of life, liberty and dignity.

Justice Chandrachud asked for a clarification, whether the respondents were arguing for the tests under Puttaswamy to be abandoned. The counsel responded in the negative, noting that Section 7 was not examined in Puttaswamy.

He then went on to quote from the Universal Declaration of Human Rights, and excerpts from Kesavananda Bharati, the NALSA judgment, and German human rights jurisprudence.

The hearing will continue on April 19, 2018.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

SC Constitution Bench on Aadhaar- Final Hearing (Day XXIX)

By Aditya Singh Chawla

In October 2015, a 3-judge bench of the Supreme Court of India referred challenges to the Aadhaar program to a constitution bench. One of the primary concerns of this petition was the existence of a fundamental right to privacy, which has since been upheld. Other similar petitions, concerned with the legitimacy of Aadhaar, had been tagged with this petition. While the fundamental right to privacy has been affirmed, the challenges against the Aadhaar programme, and against linking services to it, were yet to be adjudicated upon.

An interim order was passed in December 2017; a summary of the arguments can be found here and here.

The final hearing commenced on January 17, 2018. Summaries of the arguments advanced in the previous hearings can be found here.

Rakesh Dwivedi resumed his arguments for the Respondents. He began by stating that if there were problems with the system, they should be fixed, rather than the system being demolished completely. He argued that under Section 8 of the Act, the sharing and use of information was confined specifically to the authentication process. He further argued that the mandate of Section 29 states that core biometrics cannot be shared.

Justice Chandrachud asked how the UIDAI planned to control the Requesting Entities. The Counsel responded that control could be in terms of technical specifications of the devices, mandating approved software, mandating information systems audits etc.

In response to Justice Chandrachud’s query about the framing of Sections 8 and 29, the Counsel reiterated that the sharing of information would be limited to the process of authentication. Further, only non-biometric information could be shared under Section 29.

Next, there was some disagreement between the counsel and Justice Chandrachud on the interpretation of Section 8. The Counsel stated that the Requesting Entity would not know the purpose for the authentication, but only that authentication had been done. Justice Chandrachud stated that that could be true for the UIDAI, but it was uncertain if that would be true for Requesting Entities. According to him, the language of the Act didn’t conform to this design. Justice Sikri added that that would also render Section 8(3) redundant. The Counsel responded that the Bench could choose to read the Act in that way.

Justice Chandrachud then gave an example of an individual who goes to the hospital for certain services. The hospital sought authentication for him on 122 days out of six months. He noted that that would be potentially extremely valuable information for pharmaceutical companies, insurance providers etc. Until there was a data protection law, this could be a problem.

The counsel responded that no other jurisdiction has the sort of protections that the Aadhaar Act provides. Justice Chandrachud asked if the protection under the Act was all the data protection the citizens of India would ever need. He also gave the example of the European Union’s General Data Protection Regulation as an example of a comprehensive framework for data protection. The Counsel replied that the Aadhaar Act was sufficient, and in many ways superior. According to him, the GDPR has no penal provisions, and the States have to enact their own, which creates a patchwork. The Counsel argued that the Aadhaar framework has technological security, auditing, as well as penal provisions in place. He went on to say that there could never be 100% surety about anything. The standard to be sought was that of reasonable safeguards, and reasonable protection. He noted that none of the Petitioners had pointed out what more could be done.

Justice Chandrachud then noted that according to the Counsel’s reading, Sections 8(3) and 29(3) could be excised from the Act. The Counsel responded that nothing needed to be excised from the Act, only clarified. Further, there was no intent, purpose, or objective in the Act to allow aggregation of data, its analysis or transfer. In addition, any breach of the provisions would be punitive.

Justice Chandrachud observed that it is hard to predict commercial ingenuity, and it wouldn’t be possible to tell what use the Requesting Entities could make of the data with them. Justice Sikri interjected with the earlier hospital example, noting that the hospital would already have the data about medical treatments of the patients, and may not need Aadhaar to get that information. The main apprehension was one of misuse. The counsel agreed, questioning whether Aadhaar added to the problem, or made it worse in any way.

Justice Chandrachud noted that they must evaluate what safeguards can be introduced. He noted that data about individuals was now being used to influence electoral outcomes.

The counsel responded that Cambridge Analytica should not be brought into the discussion, because the nature of the data was different. Justice Chandrachud interjected, stating that that incident was symptomatic of the present times. The counsel responded arguing that the algorithms employed were different. There is a difference between matching algorithms (which Aadhaar uses) and sorting algorithms (which these companies use). He argued that there were many different types of algorithms, and the Petitioners had confused this distinction. He concluded that the data could not be analyzed by the Respondents. If at all, they would have to go through proper procedure.

The counsel continued, stating that Smart Cards were entrenched technology and that the Smart Card lobby in the West didn’t want Aadhaar to succeed. He claimed that other countries like Singapore were looking to replicate our model.

Justice Chandrachud noted that the issue was that there is a big world that interacts with Aadhaar. He said that the UIDAI might only be the least of their problems, since it is a government entity subject to a lot of scrutiny. The Counsel reiterated that only matching algorithms are used.

Coming back to the Act, the counsel submitted that Requesting Entities cannot be enrolled unless they establish the need for authentication. Justice Chandrachud asked what the purpose behind opening Aadhaar to private players was. In response, the Counsel argued that the nature of the public-private divide was changing. Private companies have been entering fields that were historically the domain of the public sector. These companies are funded by money from banks, where the people have made deposits; so it was actually the public that was funding these players. He argued that private players that perform public functions should also be subject to constitutional norms, review and scrutiny. Currently, public companies are subject to many restrictions, such as standards of reasonableness, while no similar shackles apply to private companies. He concluded by stating that this was a larger debate for another time. For now, all that was necessary to know was that private players are also regulated by the Act.

The counsel then moved on to responding to the Petitioners’ argument that the Aadhaar framework amounted to the numbering of human beings. Counsel argued that we have been numbering humans for a long time. He cited the PNR number for flights as an example. He also noted that the Supreme Court proximity cards were numbered.

Justice Chandrachud responded that Aadhaar was a unified identity, as opposed to multiple identifying numbers. The counsel responded that just because they were assigning numbers for a specific purpose, didn’t mean that they were numbering people. Further, they were not collecting information such as race, caste etc.

Justice Chandrachud then asked how Aadhaar became a mandate, from a mere entitlement. The Counsel responded that Aadhaar was an entitlement, and the UIDAI was mandate-neutral. It is the government that notifies that certain linkages are mandatory. Each of these could be examined or challenged separately.

The counsel resumed his arguments after lunch by examining the scope of Section 57.  He argued that the objective of the section was not to expand, but to limit power. He submitted that if this limitation did not exist, anyone could become a Requesting Entity. The provision requires that there must be a law, or a prior contract.

Justice Chandrachud asked if, once there was a prior contract under Section 57, the UIDAI would be bound to offer authentication. The Counsel responded that the UIDAI could still refuse, and there was a requirement of necessity. Further, this embargo was applicable to anyone, which is why State Resident Data Hubs are no longer possible.

The Bench noted that nothing in the Act seems to give UIDAI this type of discretion, and questioned whether there were any guidelines for how the UIDAI would come to its decisions. The counsel responded that the power came from Section 57. He gave the example of the CBSE, noting that there had been many cases of fraud. The Board could apply to be a Requesting Entity for the purpose of conducting the exam. However, this would require the presence of a prior contract, and it cannot be an ex post facto exercise. He argued that this contract must also state that authentication must be in accordance with Sec. 8 and Part VI of the Aadhaar Act.

The counsel then went on to examine the Information Technology Act, arguing that all the provisions and safeguards under that Act and its Rules would also be applicable. For instance, the CIDR had been notified as a protected system under the Act.

The counsel then discussed the attributes and benefits of biometric data. He argued that Aadhaar brings service providers face to face with the beneficiaries. He noted that Aadhaar would not be a panacea for all problems, but the issue of fake identity documents would be solved.

He then responded to other arguments raised by the Petitioners. In response to the argument that there was no legal mandate to store information in the CIDR, he brought the Bench’s attention to Section 10 of the Act. On the argument of the use of foreign suppliers and licensors, the Counsel responded that the hardware all belonged to the UIDAI, and even technicians only had access when there was some troubleshooting required. In response to the system being probabilistic, he argued that there were appropriate fall back mechanisms under Section 7.

The hearing will continue on April 18, 2018.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

 

Pachauri Defamation Suit: Court Rejects Interim Gag Order Plea

By Aditya Singh Chawla

The Patiala House court at Delhi has rejected R. K. Pachauri’s plea for an interim gag order against NDTV, Bennett Coleman and Co., and the India Today Group. The media houses had been made defendants in a defamation suit filed by him in 2016.

In 2015, an FIR had been filed against Pachauri by a woman employee of TERI (The Energy and Resources Institute, of which he was then the Chief) accusing him of sexual harassment. Following these allegations, several other women had spoken out about similar experiences while they had worked at the organization. The allegations and ongoing proceedings had received extensive coverage in the media.

Pachauri filed for defamation against multiple parties, including the media houses, one of the women who had spoken out, as well as her lawyer. He sought a gag order against the media houses, and damages of Rs. 1 Crore from the victim and her lawyer.

We have written previously about how suits such as these are in the nature of ‘SLAPP’ suits – Strategic Lawsuits Against Public Participation. These are cases where powerful individuals and corporations use litigation as a way of intimidating or silencing their critics. The defendants are usually media houses or individuals who are then forced to muster the resources to mount a legal defense. Even if they are able to secure a victory in Court, it is at the cost of a protracted and expensive process.

The court has now refused to grant an interim injunction against the media houses, noting the right of the public to be aware of the developments. It further noted that public figures can be held to a higher degree of scrutiny by the public. However, it has also held that further reportage must also carry Pachauri’s views, and indicate that the matter is still pending before the Court. The text of the order may be found here.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

Facebook and its (dis)contents

By Aditya Singh Chawla

In 2016, Norwegian writer Tom Egeland uploaded a post on Facebook, listing seven photographs that “changed the history of warfare”. The post featured the Pulitzer-winning image, ‘The Terror of War’, which depicts a naked nine-year-old running from a napalm attack during the Vietnam War. Facebook deleted the post, and suspended Egeland’s account.

A Norwegian newspaper, Aftenposten, while reporting on the suspension, used the same image on its Facebook page. The newspaper soon received a message from Facebook demanding that the image be either removed, or pixelated. The editor-in-chief refused to comply in an open letter to Mark Zuckerberg, noting his concern at the immense power Facebook wielded over speech online. The issue escalated when several Norwegian politicians, including the Prime Minister, shared the image, and were temporarily suspended from Facebook as well.

Facebook initially stated that it would be difficult to draw a distinction between instances where a photograph of a nude child could be allowed and instances where it could not. However, following widespread censure, the platform eventually reinstated the image owing to its “status as an iconic image of historical importance.”

This incident brought to light the tricky position Facebook finds itself in as it attempts to police its platform. Facebook addresses illegal and inappropriate content through a mix of automated processes, and human moderation. The company publishes guidelines about what content may not be appropriate for its platform, called its ‘Community Standards.’ Users can ‘flag’ content that they think does not meet the Community Standards, which is then reviewed by moderators. Moderators may delete, ignore, or escalate flagged content to a senior manager. In some cases, the user account may be suspended, or asked to submit identity verification.
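The flag-and-review workflow described above can be sketched, in highly simplified form, as a small queue with a fixed set of moderator actions. This is an illustrative model only; the function names, states, and logic here are invented for the sketch and are not Facebook’s actual implementation:

```python
from collections import deque

# Simplified, hypothetical sketch of a flag-and-review moderation queue.
flagged = deque()  # (content_id, reason) pairs awaiting human review

def flag(content_id, reason):
    """A user reports content as violating the Community Standards."""
    flagged.append((content_id, reason))

def review(decide):
    """A moderator processes the next flagged item.

    `decide` is the moderator's judgment call: it maps a flagged item
    to one of the three outcomes described in the text.
    """
    content_id, reason = flagged.popleft()
    action = decide(content_id, reason)
    assert action in {"delete", "ignore", "escalate"}
    return content_id, action

# Example: two reports come in; the first is escalated to a senior manager.
flag("post-123", "nudity")
flag("post-456", "spam")
item, action = review(lambda cid, r: "escalate" if r == "nudity" else "ignore")
```

Even this toy model makes the subjectivity problem visible: every outcome hinges on the `decide` callable, i.e. on an individual moderator’s interpretation of the Standards.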

As evident from the ‘Terror of War’ incident, Facebook has often come under fire for supposed ‘wrong’ moderation of content, as well as opacity in how its community review process comes to be applied. It has been argued that content that is evidently in violation of Community Standards is often not taken down, while content that should be safe is censored. For instance, Facebook courted controversy again when it was accused of blocking content and accounts documenting persecution of the Rohingya Muslim community in Myanmar.

Closer home as well, multiple instances of Facebook’s questionable moderation practices have come to light. In October 2017, Raya Sarkar, a law student based out of the United States, created what came to be called ‘the List’. The List named over 70 prominent academics who had been accused of sexual harassment. The approach proved extremely controversial, sparking debates about due process, and the failure of institutional mechanisms to address harassment. Facebook blocked her account for seven days, which proved equally contentious. Sarkar’s account was restored only after Facebook staff in Palo Alto were contacted directly. Similar instances have been reported of seemingly arbitrary application of the Community Standards. In many cases accounts have been suspended, and content blocked, without notice, explanation or recourse.

Content moderation inherently involves much scope for interpretation and disagreement. Factors such as context, as well as cultural differences, render it a highly subjective exercise. Algorithms don’t appear to have reached sufficient levels of sophistication, and there exist larger issues associated with automated censoring of speech. Human moderators are by all accounts burdened by the volume and the psychologically taxing nature of the work, and therefore prone to error. The way forward should therefore be, first, to ensure that transparent mechanisms exist for recourse against the removal of legitimate speech.

In light of the ‘Terror of War’ incident, Facebook responded by updating its community standards. In a statement, it said that it would allow graphic material that would be “newsworthy, significant, or important to the public interest — even if they might otherwise violate our standards.” Leaked moderator guidelines in 2017 opened the company up to granular public critique of its policies. There is evidently scope for Facebook to be more responsive and consultative in how it regulates speech online.

In June 2017, Facebook reached 2 billion monthly users, making it the largest social network, and a platform for digital interaction without precedent. It has announced plans to reach 5 billion. With the influence it now wields, it must also embrace its responsibility to be more transparent and accountable to its users.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

When the Empire SLAPPs Back

By Aditya Singh Chawla

“Short of a gun to the head, a greater threat to First Amendment expression can scarcely be imagined”

-Nicholas Colabella J. of the New York Supreme Court, in Gordon v Marrone.

The above statement vividly describes what has come to be called a SLAPP suit – Strategic Lawsuit Against Public Participation. The term was coined by University of Denver Professors Penelope Canan and George Pring in their book ‘SLAPPs: Getting Sued for Speaking Out’.[1] SLAPPs are generally characterized by deep-pocketed individuals or entities pursuing litigation as a way of intimidating or silencing their critics.

The suit may well have no merit, but the objective is primarily to threaten or coerce critics into silence, or in the alternative, to impose prohibitive costs on criticism. SLAPPs also have the effect of suppressing reportage about the initial claims. Even if defendants win a lawsuit on merits, it would be at an immense cost in terms of resources. This experience is likely to deter them, and others, from speaking out in the future. Faced with an uncertain legal process, defendants are also likely to seek settlement. While this allows them to avoid an expensive process, it usually entails them having to abandon their opposition as well. By, in effect, chilling citizen participation in government, SLAPP suits strike at the heart of participatory democracy.

SLAPPs have also come to be employed in India, in a number of instances. These usually involve large corporates, powerful individuals, and even private universities dragging media houses, journalists, or academics to Court for unfavorable reportage. Recent instances indicate that SLAPPs can also be employed by influential people accused of sexual assault or harassment. The aim appears to be to suppress media coverage, and deter victims from publicly speaking out.

Defamation suits tend to be the weapon of choice for SLAPPs. In India, where defamation can also be a criminal offence, this can be a particularly effective strategy, especially since it may be pursued concurrently with a civil claim. Another tactic to make the process more punitive, is to file the suit in a remote, inconvenient location where the offending publication may have been made available. In the context of the internet, this could theoretically be anywhere.

There have not been many instances where the judiciary has demonstrated awareness of this phenomenon. In Crop Care Federation of India v. Rajasthan Patrika, reports had been published in the Rajasthan Patrika about the harmful effects of pesticides. Crop Care Federation of India, an industry body of pesticide manufacturers, sued the newspaper and its employees for allegedly defaming its members. In response, the defendant filed an application for the rejection of the plaint, under Order 7 Rule 11 of the Code of Civil Procedure, 1908. It was argued that the plaintiff was an association of manufacturers, and not a determinate body, which was a necessary requirement to constitute a cause of action in a defamation suit. Justice Ravindra Bhat dismissed the suit on the above ground but also explicitly called out the petitioner’s suit as a SLAPP, with a reference to Justice Nicholas Colabella’s dictum in Gordon v. Marrone. He went on to note that, “in such instances the plaintiff’s goals are accomplished if the defendant succumbs to fear, intimidation, mounting legal costs or simple exhaustion and abandons the criticism. A SLAPP may also intimidate others from participating in the debate.”

Several jurisdictions have enacted ‘anti-SLAPP’ legislations in an attempt to protect defendants from such practices. Broadly, such legislations provide the defendant an opportunity to seek dismissal of the suit early in the proceedings. In most anti-SLAPP statutes in the United States, if the defendant demonstrates that the statements were within the exercise of free speech, and on matters of legitimate public interest, the burden shifts onto the plaintiff to establish a probability of success of their claims. Failing to do so would lead to a dismissal, with the plaintiff having to compensate the defendant for their legal costs. Typically, the discovery process is halted while the motion is being adjudicated upon. This further mitigates the financial toll that the proceedings might otherwise take.

In a similar vein, one of the recommendations in India has been to introduce a procedure into Order 7 Rule 11 that allows suits that bear the mark of a SLAPP to be summarily dismissed. Broader reforms to the law of defamation may also limit the impact of SLAPPs. It has been proposed that Sections 499 and 500 of the Indian Penal Code, 1860, which criminalize defamation, should be repealed. It is widely held that, despite the Supreme Court’s contrary view, the imposition of penal consequences for defamation runs counter to the free speech ideals enshrined within our Constitution. There are also suggestions to codify civil defamation, with higher thresholds for statements regarding public officials or public figures, as well as a stricter requirement of demonstrating harm. There are also proposals to allow for corrections and apologies to be offered as remedies, and for damages designed to be primarily restorative, and not punitive.

According to Pring and Canan, SLAPPs are a way for petitioners to transform “a public, political controversy into a private, legalistic one.”[2] Defamation, and SLAPP suits in general, have become a tool to deter public scrutiny and criticism of those in power. Drawing reasonable inferences from fact is essential to the functioning of the press, and the internet has provided citizens an avenue to express their opinions and grievances. Both are likely to limit the legitimate exercise of their free speech if they run the risk of being dragged to court to mount a legal defense for their claims. Our legal framework seeks to deliver justice to all, but must also be cognizant of how it may be subverted towards nefarious ends.

[1] Penelope Canan and George Pring, SLAPPs : Getting Sued for Speaking Out (Temple University Press, 1996).

[2] Id., at 10.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

‘My Data, My Rules’ – The Right to Data Portability

By Aditya Singh Chawla

Nandan Nilekani has recently made news cautioning against ‘data colonization’ by heavyweights such as Facebook and Google. He laments that data, which is otherwise a non-rival, unlimited resource, is not being shared freely, and is being put into silos. Not only does this limit its potential uses, users end up with very little control over their own data. He argues for ‘data democracy’ through a data protection law and particularly, one that gives users greater privacy, control and choice. In specific terms, Nilekani appears to be referring to the ‘right to data portability’, a recently recognized concept in the data protection lexicon.

In the course of using online services, individuals typically provide an assortment of personal data to service providers. The right to data portability allows a user to receive their data back in a format that is conducive to reuse with another service. The purpose of data portability is to promote interoperability between systems, and to give the user greater choice and control with respect to their data held by other entities. The aim is also to create a level playing field for newly established service providers that wish to take on incumbents, but are unable to do so because of the significant barriers posed by lock-in and network effects. For instance, Apple Music users could switch to a rival service without having to lose playlists, play counts, or history; Amazon users could port purchasing history to a service that provides better recommendations; or eBay sellers could move to a more preferable platform without losing their reputation and ratings. Users could also port to services with more privacy-friendly policies, thereby enabling an environment where services must also compete on such metrics.

The European Union’s General Data Protection Regulation (GDPR) is the first legal recognition of the right to data portability. Art. 20(1) defines the right as follows:

“The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format and have the right to transmit those data to another controller without hindrance from the controller to which the data have been provided”

Pursuant to this right, Art. 20(2) further confers the right to directly transmit personal data from one controller to another, wherever technically feasible.

The first aspect of the right to data portability allows data subjects to receive their personal data for private use. Crucially, the data must be in a format conducive to reuse. For instance, providing copies of emails in pdf format would not be sufficient. The second aspect is the ability to transfer data directly to another controller, without hindrance.
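To illustrate what a “structured, commonly used and machine-readable” export might look like in practice, here is a minimal, hypothetical sketch. The service, field names, and record structure are invented for illustration; the point is only that a structured format like JSON can be parsed losslessly by a receiving service, in a way a flat PDF cannot:

```python
import json

# Hypothetical portable export: data the user 'provided' to a controller,
# serialized in a structured, machine-readable format (JSON).
user_data = {
    "profile": {"name": "A. Example", "email": "a@example.com"},
    "playlists": [
        {"title": "Favourites", "tracks": ["Song A", "Song B"]},
    ],
}

export = json.dumps(user_data, indent=2)

# A receiving controller can reconstruct the data without loss,
# which is what makes the format conducive to reuse.
reimported = json.loads(export)
```

By contrast, rendering the same records into a PDF would preserve them only for human eyes; the structure needed for another service to import playlists or contact details would be gone.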

There are certain prerequisites for the applicability of this right:

a) it applies only to personal data that the data subject ‘provided’ to the controller. This would include data explicitly provided (such as age, or address, etc., through online forms), as well as data generated and collected by the controller on account of the usage of the service. Data derived or inferred by the controller would not be within the scope of this right.

b) the processing must be pursuant to consent or a contract. Personal data processed for a task to be performed in public interest, or in the exercise of official authority is excluded.

c) the processing must be through automated means. Data in paper files would therefore not be portable.

d) the right must not adversely affect the rights and freedoms of others.

The GDPR does not come into force until May 2018, so there remain ambiguities regarding how the right to data portability may come to be implemented. For instance, there is debate about whether ‘observed data’, such as heartbeat tracking by wearables, would be portable. Even so, the right to data portability appears to be a step towards mitigating the influence data giants currently wield.

Data portability is premised on the principle of informational self-determination, which forms the substance of the European data protection framework. This concept was famously articulated in what is known as the Census decision of the German Federal Constitutional Court in 1983. The Court ruled it to be a necessary condition for the free development of one’s personality, and also an essential element of a democratic society. The petitioners in India’s Aadhaar-PAN case also explicitly argued that informational self-determination was a facet of Art. 21 of the Indian Constitution.

Data portability may also be considered an evolution from previously recognized rights such as the right to access and the right to erasure of personal data, both of which are present in the current Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. TRAI’s recent consultation paper on Privacy, Security and Ownership of Data in the Telecom Sector also refers to data portability as a way to empower users. The right to data portability may be an essential aspect of a robust and modern data protection framework, and India is evidently not averse to taking cues from the EU in this regard. As we (finally) begin to formulate our own data protection law, it may serve us well to evaluate which concepts may be suitably imported.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi