CJEU sets limits on Mass Communications Surveillance – A Win for Privacy in the EU and Possibly Across the World

This post has been authored by Swati Punia

On 6 October 2020, the European Court of Justice (ECJ/Court) delivered its much-anticipated judgments in the consolidated matter of C-623/17, Privacy International from the UK, and the joined cases from France, C-511/18, La Quadrature du Net and others, and C-512/18, French Data Network and others, and Belgium, C-520/18, Ordre des barreaux francophones et germanophone and others (collectively, the “Bulk Communications Surveillance Judgments”).

In this post, I briefly discuss the Bulk Communications Surveillance Judgments and their significance for other countries and for India.

Through these cases, the Court invalidated Member States’ disproportionate interference with the rights of their citizens, as guaranteed by EU law, in particular the Directive on privacy and electronic communications (e-Privacy Directive) and the European Union’s Charter of Fundamental Rights (EU Charter). The Court assessed the Member States’ bulk communications surveillance laws and practices relating to their access to and use of telecommunications data.

The Court recognised the importance of the State’s positive obligations that surveillance serves, but noted that it was essential for surveillance systems to conform to the general principles of EU law and the rights guaranteed under the EU Charter. It laid down clear principles and measures as to when and how national authorities could access and use telecommunications data (further discussed in the sections ‘The UK Judgment’ and ‘The French and Belgian Judgment’). It also carved out a few exceptions (in the joined cases of France and Belgium) for emergency situations, but held that such measures would be justified only where the underlying threat was serious and genuine (further discussed in the section ‘The French and Belgian Judgment’).

The Cases in Brief 

The Court delivered two separate judgments, one in the UK case and one in the joined cases of France and Belgium. Since the cases raised similar issues, the proceedings were joined. The UK application challenged the bulk acquisition and use of telecommunications data by its Security and Intelligence Agencies (SIAs) in the interest of national security (under the UK’s Telecommunications Act 1984). The French and Belgian applications challenged indiscriminate data retention and access by SIAs for combating crime.

The French and Belgian applications questioned the legality of their respective data retention laws (numerous domestic surveillance laws which permitted bulk collection of telecommunications data) that imposed blanket obligations on Electronic Communications Service Providers (ECSPs) to provide relevant data. The Belgian law required ECSPs to retain various kinds of traffic and location data for a period of 12 months, whereas the French law provided for automated analysis and real-time data collection measures for preventing terrorism. The French application also raised the issue of notifying the person under surveillance.

The Member States contended that such surveillance measures enabled them to, inter alia, safeguard national security, prevent terrorism, and combat serious crime. Hence, they claimed that the e-Privacy Directive did not apply to their surveillance laws and activities.

The UK Judgment

The ECJ found the UK surveillance regime unlawful and inconsistent with EU law, specifically the e-Privacy Directive. The Court analysed the scope and scheme of the e-Privacy Directive with regard to the exclusion of certain State purposes such as national and public security, defence, and criminal investigation. Noting the importance of such State purposes, it held that EU Member States could adopt legislative measures that restricted the scope of the rights and obligations (Articles 5, 6 and 9) provided in the e-Privacy Directive. However, this was allowed only if the Member States complied with the requirements laid down by the Court in Tele2 Sverige and Watson and Others (C-203/15 and C-698/15) (Tele2) and with the e-Privacy Directive. In addition, the Court held that the EU Charter must be respected. In Tele2, the ECJ held that legislative measures obligating ECSPs to retain data must be targeted and limited to what was strictly necessary. Such targeted retention had to be confined to specific categories of persons and data for a limited time period, and access to the retained data had to be subject to prior review by an independent body.

The e-Privacy Directive ensures the confidentiality of electronic communications and the data relating to them (Article 5(1)). It allows ECSPs to retain metadata (context-specific data relating to users and subscribers, location and traffic) for various purposes such as billing, value-added services and security. However, this data must be deleted or made anonymous once the purpose is fulfilled, unless a law allows for a derogation for State purposes. The e-Privacy Directive allows the Member States to derogate (Article 15(1)) from the principle of confidentiality and the corresponding obligations (contained in Articles 6 (traffic data) and 9 (location data other than traffic data)) for certain State purposes when it is appropriate, necessary and proportionate.

The Court clarified that measures undertaken for the purpose of national security would not render EU law inapplicable or exempt the Member States from their obligation to ensure confidentiality of communications under the e-Privacy Directive. Hence, surveillance activities such as data retention for indefinite time periods, or further processing or sharing, must be authorised through an independent review. The Court noted that the domestic law at the time did not provide for such prior review as a limit on these surveillance activities.

The French and Belgian Judgment

While assessing the joined cases, the Court arrived at a determination in terms similar to the UK case. It reiterated that the exception (Article 15(1) of the e-Privacy Directive) to the principle of confidentiality of communications (Article 5(1) of the e-Privacy Directive) should not become the norm. Hence, national measures that provided for general and indiscriminate data retention and access for State purposes were held to be incompatible with EU law, specifically the e-Privacy Directive.

The Court in the joined cases, unlike in the UK case, allowed specific derogations for State purposes such as safeguarding national security, combating serious crime and preventing serious threats. It laid down certain requirements that the Member States had to comply with in case of derogations. The derogations should (1) be clear and precise as to their stated objective, (2) be limited to what is strictly necessary and for a limited time period, (3) have a safeguards framework including substantive and procedural conditions to regulate such instances, and (4) include guarantees to protect the concerned individuals against abuse. They should also be subject to ‘effective review’ by a court or an independent body, and must comply with the general rules and proportionality principles of EU law and the rights provided in the EU Charter.

The Court held that in establishing a minimum threshold for a safeguards framework, the EU Charter must be interpreted along with the European Convention on Human Rights (ECHR). This would ensure consistency between the rights guaranteed under the EU Charter and the corresponding rights guaranteed in the ECHR (as per Article 52(3) of the EU Charter).

The Court, in particular, allowed general and indiscriminate data retention in cases of serious threat to national security, provided the threat was genuine, and present or foreseeable. Real-time data collection and automated analysis were allowed in such circumstances, but real-time collection had to be limited to persons suspected of terrorist activities, restricted to what was strictly necessary, and subject to prior review. The Court even allowed general and indiscriminate retention of IP addresses for the purposes of safeguarding national security, combating serious crime and preventing serious threats to public security, so long as the retention period was limited to what was strictly necessary. For such purposes, the Court also permitted ECSPs to retain data relating to the identity particulars of their customers (such as name, postal and email/account addresses and payment details) in a general and indiscriminate manner, without specifying any time limitations.

The Court allowed targeted data retention for the purposes of safeguarding national security and preventing crime, provided that it was limited in time to what was strictly necessary and was done on the basis of objective and non-discriminatory factors. It held that such retention should be specific to certain categories of persons or geographical areas. The Court also allowed, subject to effective judicial review, expedited data retention after the initial retention period ended, in order to shed light on serious criminal offences or acts affecting national security. Lastly, in the context of criminal proceedings, the Court held that it was for the Member States to assess the admissibility of evidence resulting from general and indiscriminate data retention; however, such information and evidence must be excluded where it infringes the right to a fair trial.

Significance of the Bulk Communication Surveillance Judgments

With these cases, the ECJ decisively resolved a long-standing discord between the Member States and privacy activists in the EU. For a while now, the Court has been dealing with questions relating to surveillance programs for national security and law enforcement purposes. Though the Member States have largely considered these programs outside the ambit of EU privacy law, the Court has been expanding the scope of privacy rights. 

In Privacy International, the Court considered it necessary in democratic societies to place limitations and controls on State powers. This decision may act as a trigger for surveillance reforms in many parts of the world, more specifically for those countries aspiring to attain EU adequacy status. India could benefit immensely should it choose to pay heed.

As of today, India does not have a comprehensive surveillance framework. Various provisions of the Personal Data Protection Bill, 2019 (Bill), the Information Technology Act, 2000, the Telegraph Act, 1885, and the Code of Criminal Procedure, 1973 provide for targeted surveillance measures. The Bill grants wide powers to the executive (under Clauses 35, 36 and 91) to access personal and non-personal data in the absence of proper and necessary safeguards. This may cause problems for achieving EU adequacy status under Article 45 of the EU General Data Protection Regulation (GDPR), which assesses the personal data management rules of third countries.

Recent news reports suggest that the Bill, which is under legislative consideration, is likely to undergo a significant overhaul. India could use this as an opportunity to introduce meaningful changes in the Bill as well as in its surveillance regime. India’s privacy framework could be strengthened by adhering to the principles outlined in the Justice K.S. Puttaswamy v. Union of India judgment and the Bulk Communications Surveillance Judgments.

Building an AI Governance Framework for India, Part III

Embedding Principles of Privacy, Transparency and Accountability

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Aayog Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG’s comments and analysis on the Working Document can be accessed here.

In our first post in the series, ‘Building an AI governance framework for India’, we discussed the legal and regulatory implications of the Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its constitutional framework, and (2) based on clearly articulated overarching ‘Principles for Responsible AI’. Part II of the series discussed specific Principles for Responsible AI – Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. We explored the constituent elements of these principles and the avenues for incorporating them into the Indian regulatory framework. 

In this final post of the series, we will discuss the remaining principles of Privacy, Transparency and Accountability. 

Principle of Privacy 

Given the diversity of AI systems, the privacy risks which they pose to individuals and to society as a whole are also varied. These may be broadly related to:

(i) Data protection and privacy: This relates to the privacy implications of the use of data by AI systems and the data protection considerations which arise from this use. There are two broad aspects to consider. Firstly, AI systems must be tailored to the legal frameworks for data protection. Secondly, given that AI systems can be used to re-identify anonymised data, the mere anonymisation of data for the training of AI systems may not provide adequate levels of protection for the privacy of an individual.

a) Data protection legal frameworks: Machine learning and AI technologies have existed for decades; however, it is the explosion in the availability of data that accounts for the advancement of AI technologies in recent years. Machine learning and AI systems depend upon data for their training. Generally, the more data the system is given, the more it learns and, ultimately, the more accurate it becomes. The application of existing data protection frameworks to the use of data by AI systems may raise challenges.

In the Indian context, the Personal Data Protection Bill, 2019 (PDP Bill), currently being considered by Parliament, contains some provisions that may apply to some aspects of the use of data by AI systems. One such provision is Clause 22 of the PDP Bill, which requires data fiduciaries to incorporate the seven ‘privacy by design’ principles and embed privacy and security into the design and operation of their product and/or network. However, given that AI systems rely significantly on anonymised personal data, their use of data may not fall squarely within the regulatory domain of the PDP Bill. The PDP Bill does not apply to the regulation of anonymised data at large but the Data Protection Authority has the power to specify a code of practice for methods of de-identification and anonymisation, which will necessarily impact AI technologies’ use of data.

b) Use of AI to re-identify anonymised data: AI applications can be used to re-identify anonymised personal data. To safeguard the privacy of individuals, datasets composed of personal data are often anonymised through a de-identification and sampling process before they are shared for the purposes of training AI systems. However, current technology makes it possible for AI systems to reverse this process of anonymisation and re-identify people, with significant privacy implications for an individual’s personal data.
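To make the risk concrete, the sketch below shows a classic linkage attack in Python: joining an “anonymised” dataset with a public auxiliary dataset on shared quasi-identifiers. All names, records and fields here are invented for illustration; real attacks have used auxiliary sources such as voter rolls and public profiles, and AI-driven re-identification is considerably more sophisticated than a simple join.

```python
# Minimal sketch of a linkage attack on "anonymised" data.
# All data is invented for illustration.
import pandas as pd

# "Anonymised" health records: direct identifiers removed,
# but quasi-identifiers (pin code, birth year, gender) retained.
health = pd.DataFrame({
    "pin_code":   ["110001", "110001", "560001"],
    "birth_year": [1985, 1972, 1985],
    "gender":     ["F", "M", "F"],
    "diagnosis":  ["diabetes", "hypertension", "asthma"],
})

# A public auxiliary dataset that still carries names.
public = pd.DataFrame({
    "name":       ["A. Sharma", "B. Rao"],
    "pin_code":   ["110001", "560001"],
    "birth_year": [1985, 1985],
    "gender":     ["F", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = public.merge(health, on=["pin_code", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```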

(ii) Impact on society: The impact of the use of AI systems on society essentially relates to broader privacy considerations that arise at a societal level due to the deployment and use of AI, including mass surveillance, psychological profiling, and the use of data to manipulate public opinion. Facial recognition surveillance is one such application with significant privacy implications for society as a whole: it enables individuals to be easily tracked and identified, and has the potential to significantly transform expectations of privacy and anonymity in public spaces.

Due to the varying nature of the privacy risks and implications posed by AI systems, various regulatory mechanisms will have to be designed to address these concerns. It is important to put in place a reporting and investigation mechanism that collects and analyses information on privacy impacts caused by the deployment of AI systems, and on privacy incidents that occur in different contexts. The collection of this data would allow actors across the globe to identify common threads of failure and mitigate potential privacy failures arising from the deployment of AI systems.

To this end, we can draw on a mechanism currently in place for reporting and investigating aircraft incidents, as detailed in Annex 13 to the Convention on International Civil Aviation (Chicago Convention). It lays down the procedure for investigating aviation incidents and a reporting mechanism to share information between countries. The aim of such an accident investigation is not to apportion blame or liability, but to extensively study the cause of the accident and prevent future incidents.

A similar incident investigation mechanism may be employed for AI incidents involving privacy breaches. With many countries now widely developing and deploying AI systems, such a model of incident investigation would ensure that countries can learn from each other’s experiences and deploy more privacy-secure AI systems.
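As a rough illustration of what a shared reporting format might capture, here is a hypothetical minimal schema for an AI privacy-incident report, sketched in Python. Every field name and example value is an assumption made for illustration; an actual mechanism would need internationally negotiated fields and definitions.

```python
# Hypothetical minimal schema for an AI privacy-incident report,
# loosely inspired by the structure of aviation incident reporting
# (investigate causes, share findings, prevent recurrence).
from dataclasses import dataclass, field

@dataclass
class AIPrivacyIncident:
    incident_id: str
    system_description: str           # what the AI system does
    deployment_context: str           # sector, jurisdiction, scale
    nature_of_breach: str             # e.g. re-identification, leakage
    affected_population: str
    probable_causes: list[str] = field(default_factory=list)
    mitigations_recommended: list[str] = field(default_factory=list)

report = AIPrivacyIncident(
    incident_id="2020-017",
    system_description="recommendation engine",
    deployment_context="retail, nationwide",
    nature_of_breach="re-identification of anonymised purchase data",
    affected_population="opted-in loyalty-programme users",
    probable_causes=["quasi-identifiers retained in training data"],
    mitigations_recommended=["aggregate or perturb location fields"],
)
print(report.incident_id, "-", report.nature_of_breach)
```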

Principle of Transparency

The concept of transparency is a recognised prerequisite for the realisation of ‘trustworthy AI’. The goal of transparency in ethical AI is to ensure that the functioning of an AI system and its resultant outcomes are non-discriminatory, fair and bias-mitigating, and that the system inspires public confidence in the delivery of safe and reliable AI innovation and development. Transparency is also important in ensuring better adoption of AI technology: the more users feel that they understand the overall AI system, the more inclined and better equipped they are to use it.

The level of transparency must be tailored to its intended audience. Information about the working of an AI system should be contextualised to the various stakeholder groups interacting with and using the AI system. The Institute of Electrical and Electronics Engineers, a global professional organisation of electronic and electrical engineers, has suggested that different stakeholder groups may require varying levels of transparency. This means that groups such as users, incident investigators, and the general public would require different standards of transparency depending upon the nature of the information relevant to their use of the AI system.

Presently, many AI algorithms are black boxes: automated decisions are taken based on machine learning over training datasets, and the decision-making process is not explainable. When such AI systems produce a decision, human end users do not know how the system arrived at its conclusions. This raises two major transparency problems: the public perception and understanding of how AI works, and how much developers actually understand about their own AI system’s decision-making process. In many cases, developers may not know, or be able to explain, how an AI system reaches its conclusions or how it has arrived at certain solutions.

This results in a lack of transparency. Some organisations have suggested opening up AI algorithms for scrutiny and ending reliance on opaque algorithms. On the other hand, the NITI Working Document takes the view that disclosing the algorithm is not the solution and that the focus should instead be on explaining how decisions are taken by AI systems. Given the challenges around explainability discussed above, it will be important for NITI Aayog to discuss how such an approach would be operationalised in practice.
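By way of illustration, one widely used family of techniques for explaining decisions without disclosing the algorithm itself is post-hoc explanation. The sketch below shows permutation feature importance using scikit-learn; the model and dataset are illustrative stand-ins, not a reference to any system discussed in the Working Document.

```python
# A minimal sketch of one post-hoc explanation technique:
# permutation feature importance on an opaque model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a "black box" model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# features whose shuffling hurts most influence decisions most.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```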

While many countries and organisations are researching different techniques which may be useful in increasing the transparency of an AI system, one common suggestion which has gained traction in the last few years is the introduction of labelling mechanisms for AI systems. An example of this is Google’s proposal to use ‘Model Cards’, which are intended to clarify the scope of an AI system’s deployment and minimise its usage in contexts for which it may not be well suited.

Model cards are short documents which accompany a trained machine learning model. They enumerate the benchmarked evaluation of the AI system’s performance in a variety of conditions, across the different cultural, demographic, and intersectional groups that may be relevant to the intended application. They also contain clear information on an AI system’s capabilities, including the intended purpose for which it is being deployed, the conditions under which it has been designed to function, and its expected accuracy and limitations. Adopting model cards and similar labelling requirements in the Indian context may be a useful step towards introducing transparency into AI systems.
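To give a flavour of the format, below is a hypothetical, much-simplified model card sketched as a Python dictionary. The field names and values are illustrative assumptions; the full recommended structure is set out in Google’s Model Cards proposal and the accompanying ‘Model Cards for Model Reporting’ paper.

```python
# A hypothetical, much-simplified model card as a Python dict.
# Every field and value below is invented for illustration.
model_card = {
    "model_details": {
        "name": "loan-default-classifier",   # invented example
        "version": "1.2.0",
        "type": "gradient-boosted trees",
    },
    "intended_use": {
        "primary_uses": ["pre-screening of retail loan applications"],
        "out_of_scope": ["employment decisions", "insurance pricing"],
    },
    "evaluation": {
        # Disaggregated metrics across groups relevant to deployment.
        "accuracy_overall": 0.91,
        "accuracy_by_group": {"urban": 0.93, "rural": 0.86},
    },
    "limitations": [
        "trained only on data from 2018-2023",
        "performance degrades for thin-file applicants",
    ],
}
```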

Principle of Accountability

The Principle of Accountability aims to recognise the responsibility of the different organisations and individuals that develop, deploy and use AI systems. Accountability is about responsibility, answerability and trust. There is no one standard form of accountability; rather, it depends upon the context of the AI and the circumstances of its deployment.

Holding individuals and entities accountable for harm caused by AI systems poses significant challenges, as AI systems generally involve multiple parties at various stages of the development process. The regulation of the adverse impacts caused by AI systems often goes beyond the existing regimes of tort law, privacy law or consumer protection law. Some degree of accountability can be achieved by enabling greater human oversight. In order to foster trust in AI and appropriately determine the party who is accountable, it is necessary to build a set of shared principles that clarify the responsibilities of each stakeholder involved in the research, development and implementation of an AI system, ranging from developers to service providers and end users.

Accountability has to be ensured at the following stages of an AI system: 

(i) Pre-deployment: It would be useful to implement an audit process before the AI system is deployed. A potential mechanism for implementing this could be a multi-stage audit process which is undertaken post design, but before the deployment of the AI system by the developer. This would involve scoping, mapping and testing a potential AI system before it is released to the public. This can include ensuring risk mitigation strategies for changing development environments and ensuring documentation of policies, processes and technologies used in the AI system.

Depending on the nature of the AI system and its potential for risk, regulatory guidelines can be developed prescribing the involvement of various categories of auditors, such as internal auditors, expert third parties and the relevant regulatory agency, at various stages of the audit. Such pre-deployment audits are aimed at closing the accountability gap which currently exists.

(ii) During deployment: Once the AI system has been deployed, it is important to keep auditing it to track the changes and evolution it undergoes in the course of its deployment. AI systems constantly learn from data and evolve to become better and more accurate. It is important that the development team continuously monitors the system to capture any errors that may arise, including inconsistencies arising from input data or design features, and addresses them promptly.

(iii) Post-deployment: Ensuring accountability post-deployment in an AI system can be challenging. The NITI Working Document also recognised that assigning accountability for specific decisions becomes difficult in a scenario with multiple players in the development and deployment of an AI system. In the absence of any consequences for decisions harming others, no one party would feel obligated to take responsibility or take actions to mitigate the effect of the AI systems. Additionally, the lack of accountability also leads to difficulties in grievance redressal mechanisms which can be used to address scenarios where harm has arisen from the use of AI systems. 

The Council of Europe, in its guidelines on the human rights impacts of algorithmic systems, highlighted the need for effective remedies to ensure responsibility and accountability for the protection of human rights in the context of the deployment of AI systems. A potential model for grievance redressal is the redressal mechanism suggested in the AI4People’s Ethical Framework for a Good Society report by the Atomium – European Institute for Science, Media and Democracy. The report suggests that any grievance redressal mechanism for AI systems would have to be widely accessible and include redress for harms inflicted, costs incurred, and other grievances caused by the AI system. It must demarcate a clear system of accountability for both organisations and individuals. Of the various redressal mechanisms they have suggested, two significant mechanisms are: 

(a) AI ombudsperson: This would ensure the auditing of allegedly unfair or inequitable uses of AI, reported by users or the public at large, through an accessible judicial process.

(b) Guided process for registering a complaint: This envisions laying down a simple process, similar to filing a Right to Information request, which can be used to bring discrepancies, or faults in an AI system to the notice of the authorities.

Such mechanisms can be evolved to address the human rights concerns and harms arising from the use of AI systems in India. 

Conclusion

In early October, the Government of India hosted the Responsible AI for Social Empowerment (RAISE) Summit which has involved discussions around India’s vision and a roadmap for social transformation, inclusion and empowerment through Responsible AI. At the RAISE Summit, speakers underlined the need for adopting AI ethics and a human centred approach to the deployment of AI systems. However, this conversation is still at a nascent stage and several rounds of consultations may be required to build these principles into an Indian AI governance and regulatory framework. 

As India enters into the next stage of developing and deploying AI systems, it is important to have multi-stakeholder consultations to discuss mechanisms for the adoption of principles for Responsible AI. This will enable the framing of an effective governance framework for AI in India that is firmly grounded in India’s constitutional framework. While the NITI Aayog Working Document has introduced the concept of ‘Responsible AI’ and the ethics around which AI systems may be designed, it lacks substantive discussion on these principles. Hence, in our analysis, we have explored global views and practices around these principles and suggested mechanisms appropriate for adoption in India’s governance framework for AI. Our detailed analysis of these principles can be accessed in our comments to the NITI Aayog’s Working Document Towards Responsible AI for All.

Building an AI Governance Framework for India, Part II

Embedding Principles of Safety, Equality and Non-Discrimination

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

In our previous post on building an AI governance framework for India, we discussed the legal and regulatory implications of the proposed Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its Constitutional framework and (2) based on clearly articulated overarching principles. While the NITI Working Document introduces certain principles, it does not go into any substantive details on what the adoption of these principles into India’s regulatory framework would entail.

We will now examine these ‘Principles for Responsible AI’, their constituent elements and avenues for incorporating them into the Indian regulatory framework. The NITI Working Document proposed the following seven ‘Principles for Responsible AI’ to guide India’s regulatory framework for AI systems: 

  1. Safety and reliability
  2. Equality
  3. Inclusivity and Non-Discrimination
  4. Privacy and Security 
  5. Transparency
  6. Accountability
  7. Protection and Reinforcement of Positive Human Values. 

This post explores the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. A subsequent post will discuss the principles of Privacy and Security, Transparency, Accountability and the Protection and Reinforcement of Positive Human Values.

Principle of Safety and Reliability

The Principle of Reliability and Safety aims to ensure that AI systems operate reliably, in accordance with their intended purpose, throughout their lifecycle, and that their security, safety and robustness are maintained. It requires that AI systems should not pose unreasonable safety risks, should adopt safety measures proportionate to the potential risks, should be continuously monitored and tested to ensure compliance with their intended purpose, and should have a continuous risk management system to address any identified problems.

Here, it is important to note the distinction between safety and reliability. The reliability of a system relates to its ability to behave exactly as its designers intended and anticipated: a reliable system adheres to the specifications it was programmed to carry out. Reliability is, therefore, a measure of consistency and establishes confidence in the safety of a system. Safety, by contrast, refers to an AI system’s ability to do what it is supposed to do without harming users (human physical integrity), resources or the environment.

Human oversight: An important aspect of ensuring the safety and reliability of AI systems is the presence of human oversight over the system. Any regulatory framework that is developed in India to govern AI systems must incorporate norms that specify the circumstances and degree to which human oversight is required over various AI systems. 

The level of human oversight would depend upon the sensitivity of the function and the potential for significant impact on an individual’s life which the AI system may have. For example, AI systems deployed in the context of the provision of government benefits should have a high level of human oversight, and decisions made by the AI system in this context should be reviewed by a human before being implemented. Other AI systems may be deployed in contexts that do not need constant human involvement; however, these systems should have a mechanism in place for human review if a question is subsequently raised by, say, a user. An example of this may be vending machines, which have simple algorithms. Hence, the purpose for which the system is deployed and the impact it could have on individuals are relevant factors in determining whether a ‘human in the loop’, ‘human on the loop’, or other oversight mechanism is appropriate, as the sketch below illustrates.
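As a rough sketch of how such an oversight mechanism might be wired into a decision pipeline, the Python below routes automated decisions to a human reviewer whenever the model’s confidence falls below a threshold. The threshold, the decision type and the confidence values are all invented for illustration; a real deployment would set these according to the sensitivity of the function.

```python
# Minimal sketch of a "human in the loop" gate: automated decisions
# below a confidence threshold are queued for a human reviewer
# instead of being applied directly.
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    approve: bool
    confidence: float  # model's confidence in [0, 1]

def route(decision: Decision, threshold: float = 0.9) -> str:
    """Apply high-confidence decisions; escalate the rest."""
    if decision.confidence >= threshold:
        return "auto-applied"
    # Low confidence (or a high-impact context) goes to a person.
    return "queued for human review"

print(route(Decision("A-101", approve=True, confidence=0.97)))
print(route(Decision("A-102", approve=False, confidence=0.62)))
```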

Principle of Equality

The principle of equality holds that everyone, irrespective of their status in society, should get the same opportunities and protections with the development of AI systems.

Implementing equality in the context of AI systems essentially requires three components: 

(i) Protection of human rights: AI instruments developed across the globe have highlighted that the implementation of AI would pose risks to the right to equality, and countries would have to take steps to mitigate such risks proactively. 

(ii) Access to technology: AI systems should be designed to ensure widespread access to technology, so that people may derive benefits from AI technology.

(iii) Guarantees of equal opportunities through technology: The guarantee of equal opportunity relies upon the transformative power of AI systems to “help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge” and “produce social and economic benefits for all by reducing social inequalities and vulnerabilities.” AI systems will have to be designed and deployed such that they further the guarantees of equal opportunity and do not exacerbate and further entrench existing inequality.

The development, use and deployment of AI systems in society would pose the above-mentioned risks to the right to equality, and India’s regulatory framework for AI must take steps to mitigate such risks proactively.

Principle of Inclusivity and Non-Discrimination

The idea of non-discrimination mostly arises out of technical considerations in the context of AI. It holds that discrimination and bias in AI should be mitigated in the training data, technical design choices, and the technology’s deployment, in order to prevent discriminatory impacts.

An example of this can be seen in data collection for policing, where disproportionate attention paid to neighbourhoods with minorities produces higher recorded incidences of crime in those neighbourhoods, thereby skewing AI results. The use of AI systems becomes safer when they are trained on datasets that are sufficiently broad and that encompass the various scenarios in which the system is envisaged to be deployed. Additionally, datasets should be developed to be representative, and hence avoid discriminatory outcomes from the use of the AI system.

Another example is semi-autonomous vehicles, which pose a higher accident risk to dark-skinned pedestrians because the software performs more poorly in recognising darker-skinned individuals. This can be traced back to training datasets which contained mostly light-skinned people. Such a lack of diversity in the dataset can lead to discrimination against specific groups in society. To ensure effective non-discrimination, AI systems must be trained on data that is truly representative of society, with no section of the populace either over-represented or under-represented in a way that skews the datasets. While designing AI systems for deployment in India, the constitutional rights of individuals should be treated as central values around which the systems are designed.

In order to implement inclusivity in AI, the diversity of the team involved in design as well as the diversity of the training data set would have to be assessed. This would involve the creation of guidelines under India’s regulatory framework for AI to help researchers and programmers in designing inclusive data sets, measuring product performance on the parameter of inclusivity, selecting features to avoid exclusion and testing new systems through the lens of inclusivity.

Checklist Model: To address the challenges of non-discrimination and inclusivity, a potential model which can be adopted in India’s regulatory framework for AI is the ‘checklist’. The European Network of Equality Bodies (EQUINET), in its recent report ‘Meeting the new challenges to equality and non-discrimination from increased digitisation and the use of Artificial Intelligence’, provides a checklist to assess whether an AI system complies with the principles of equality and non-discrimination. The checklist consists of several broad categories, with a focus on the deployment of AI technology in Europe, including heads such as direct discrimination, indirect discrimination, transparency, other types of equity claims, data protection, liability issues, and identification of the liable party.

The list contains a series of questions which judge whether an AI system meets standards of equality and identify any potential biases it may have. For example, the question “Does the artificial intelligence system treat people differently because of a protected characteristic?” covers the parameters of both direct data and proxies; if the answer is yes, the system would be identified as exhibiting bias. A similar checklist system, contextualised for India, could be developed and employed in India’s regulatory framework for AI; a rough sketch follows below.
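As an illustration only, here is one way such a checklist could be encoded as data with a simple assessment loop, sketched in Python. The questions are paraphrases of the kind of heads the EQUINET report describes, and the flagging logic is an assumption made for the example.

```python
# Illustrative checklist encoded as data plus an assessment loop.
# Question wording and flagging logic are assumptions for the sketch.
CHECKLIST = [
    ("differential_treatment",
     "Does the system treat people differently because of a "
     "protected characteristic, directly or via proxies?"),
    ("transparency",
     "Can the system's decisions be explained to affected persons?"),
    ("liability",
     "Is the party liable for harms caused by the system identifiable?"),
]

def assess(answers: dict[str, bool]) -> list[str]:
    """Return the checklist heads flagged for further review."""
    flagged = []
    for head, question in CHECKLIST:
        # For differential treatment, "yes" is the red flag;
        # for the other heads, "no" is.
        red_flag = answers[head] if head == "differential_treatment" \
            else not answers[head]
        if red_flag:
            flagged.append(head)
    return flagged

print(assess({"differential_treatment": True,
              "transparency": False,
              "liability": True}))
# -> ['differential_treatment', 'transparency']
```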

Way forward

This post highlights some of the key aspects of the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. Integrating these principles, identified in the NITI Working Document, into India’s regulatory framework requires that we first clearly define their content, scope and ambit, in order to identify the right mechanisms to operationalise them. Given the absence in the NITI Working Document of any exploration of the content of these AI principles or the mechanisms for their implementation in India, we have examined the relevant international literature surrounding the adoption of AI ethics and suggested mechanisms for their adoption. The NITI Working Document has spurred discussion around designing an effective regulatory framework for AI. However, these discussions are at a preliminary stage, and there is a need to develop a far more nuanced proposal for a regulatory framework for AI.

Over the last week, India has hosted the Responsible AI for Social Empowerment (RAISE) Summit, which has involved discussions around India’s vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. As we discuss mechanisms for India to effectively harness the economic potential of AI, we also need to design an effective framework to address the massive regulatory challenges emerging from the deployment of AI simultaneously, and not as an afterthought post-deployment. While a few of the RAISE sessions engaged with certain aspects of regulating AI, there remains a need for extensive, continued public consultations with a cross-section of stakeholders to embed principles for Responsible AI in the design of an effective AI regulatory framework for India.

For a more detailed discussion on these principles and their integration into the Indian context, refer to our comments to the NITI Aayog here. 

Building an AI governance framework for India

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a “Working Document: Towards Responsible AI for All” (“NITI Working Document/Working Document”). The Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

The Working Document highlights the potential of Artificial Intelligence (“AI”) in the Indian context. It attempts to identify the challenges that will be faced in the adoption of AI and makes some recommendations on how to address these challenges. The Working Document emphasises the economic potential of the adoption of AI in boosting India’s annual growth rate, its potential for use in the social sector (‘AI for All’) and the potential for India to export relevant social sector products to other emerging economies (‘AI Garage’). 

However, this is not the first time that the NITI Aayog has discussed the large-scale adoption of AI in India. In 2018, the NITI Aayog released a discussion paper on the “National Strategy for Artificial Intelligence” (“National Strategy”). Building upon the National Strategy, the Working Document attempts to delineate ‘Principles for Responsible AI’ and identify relevant policy and governance recommendations. 

Any framework for the regulation of AI systems needs to be based on clear principles. The ‘Principles for Responsible AI’ identified by the Working Document include the principles of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. While the NITI Working Document introduces these principles, it does not go into any substantive details on the regulatory approach that India should adopt and what the adoption of these principles into India’s regulatory framework would entail. 

In a series of posts, we will discuss the legal and regulatory implications of the proposed Working Document and more broadly discuss the regulatory approach India should adopt to AI and the principles India should embed in it. In this first post, we map out key considerations that should be kept in mind in order to develop a comprehensive regulatory regime to govern the adoption and deployment of AI systems in India. Subsequent posts will discuss the various ‘Principles for Responsible AI’, their constituent elements and how we should think of incorporating them into the Indian regulatory framework.

Approach to building an AI regulatory framework 

While the adoption of AI has several benefits, there are several potential harms and unintended risks if the technology is not assessed adequately for its alignment with India’s constitutional principles and its impact on the safety of individuals. Depending upon the nature and scope of the deployment of an AI system, its potential risks can include the discriminatory impact on vulnerable and marginalised communities, and material harms such as the negative impact on the health and safety of individuals. In the case of deployments by the State, risks include violation of the fundamental rights to equality, privacy, freedom of assembly and association, and freedom of speech and expression. 

We highlight some of the key regulatory considerations below:

Anchoring AI regulatory principles within the constitutional framework of India

The use of AI systems has raised concerns about their potential to violate multiple rights protected under the Indian Constitution, such as the right against discrimination, the right to privacy, the right to freedom of speech and expression, the right to assemble peaceably and the right to freedom of association. Any regulatory framework put in place to govern the adoption and deployment of AI technology in India will have to be in consonance with its constitutional framework. While the NITI Working Document does refer to the idea of the prevailing morality of India and its relation to constitutional morality, it does not comprehensively address the idea of framing AI principles in compliance with India’s constitutional principles.

For instance, the government is seeking to acquire facial surveillance technology, and the National Strategy discusses the use of AI-powered surveillance applications by the government to predict crowd behaviour and for crowd management. The use of AI-powered surveillance systems such as these needs to be balanced against their impact on an individual’s rights to freedom of speech and expression, privacy and equality. Operational challenges surrounding accuracy and fairness in these systems raise further concerns. Considering the risks posed to the privacy of individuals, the deployment of these systems by the government, if at all, should only be done in specific contexts, for a particular purpose, and in compliance with the principles laid down by the Supreme Court in the Puttaswamy case.

In the context of AI’s potential to exacerbate discrimination, it is relevant to consider the State’s use of AI systems for the sentencing of criminals and assessing recidivism. AI systems are trained on existing datasets, which tend to contain historically biased, unequal and discriminatory data. We have to be cognizant of the propensity for historical bias and discrimination to be imported into AI systems and their decision making. This could further reinforce and exacerbate the existing discrimination in the criminal justice system towards marginalised and vulnerable communities, and result in a potential violation of their fundamental rights.

The National Strategy acknowledges the presence of such biases and proposes a technical approach to reduce them. While such attempts to rectify the situation and yield fairer outcomes are laudable, this approach disregards the fact that these datasets are biased because they arise from a biased, unequal and discriminatory world. As we seek to build effective regulation to govern the use and deployment of AI systems, we have to remember that these are socio-technical systems that reflect the world around us and embed the biases, inequality and discrimination inherent in Indian society. We have to keep this broader Indian social context in mind as we design AI systems and create regulatory frameworks to govern their deployment.

While the Working Document introduces principles for responsible AI such as equality, inclusivity and non-discrimination, and privacy and security, there needs to be substantive discussion around incorporating these principles into India’s regulatory framework in consonance with constitutionally guaranteed rights.

Regulatory Challenges in the adoption of AI in India

As India designs a regulatory framework to govern the adoption and deployment of AI systems, it is important that we keep the following in focus: 

  • Heightened threshold of responsibility for government or public sector deployment of AI systems

The EU is considering adopting a risk-based approach to the regulation of AI, with heavier regulation for high-risk AI systems. The extent of risk to factors such as safety, consumer rights and fundamental rights is assessed by looking at the sector of deployment and the intended use of the AI system. Similarly, India must consider adopting a higher regulatory threshold for the use of AI by, at the least, government institutions, given their potential for impacting citizens’ rights. Government uses of AI systems that can severely impact citizens’ fundamental rights include the use of AI in the disbursal of government benefits, surveillance, law enforcement and judicial sentencing.

  • Need for overarching principles based AI regulatory framework

Different sectoral regulators are currently evolving regulations to address the specific challenges posed by AI in their sector. While it is vital to harness the domain expertise of a sectoral regulator and encourage the development of sector-specific AI regulations, such piecemeal development of AI principles can lead to fragmentation in the overall approach to regulating AI in India. Therefore, to ensure uniformity in the approach to regulating AI systems across sectors, it is crucial to put in place a horizontal overarching principles-based framework. 

  • Adaptation of sectoral regulation to effectively regulate AI

In addition to an overarching regulatory framework which forms the basis for the regulation of AI, it is equally important to envisage how this framework would work with horizontal or sector-specific laws, such as consumer protection law and the applicability of product liability to various AI systems. Traditionally, consumer protection and product liability frameworks have been structured around fault-based claims. However, given the challenges concerning the explainability and transparency of decision making by AI systems, it may be difficult to establish the presence of defects in products and, for an individual who has suffered harm, to provide the necessary evidence in court. Hence, consumer protection laws may have to be adapted to stay relevant in the context of AI systems. Even sectoral legislation regulating the use of motor vehicles, such as the Motor Vehicles Act, 1988, would have to be modified to enable and regulate the use of autonomous vehicles and other AI transport systems.

  • Contextualising AI systems for both their safe development and use

To ensure the effective and safe use of AI systems, they have to be designed, adapted and trained on relevant datasets depending on the context in which they will be deployed. The Working Document envisages India being the AI Garage for 40% of the world – developing AI solutions in India which can then be deployed in other emerging economies. Additionally, India will likely import AI systems developed in countries such as the US, EU and China to be deployed within the Indian context. Both scenarios involve the use of AI systems in a context distinct from the one in which they have been developed. Without effectively contextualising socio-technical systems like AI systems to the environment they are to be deployed in, there are enhanced safety, accuracy and reliability concerns. Regulatory standards and processes need to be developed in India to ascertain the safe use and deployment of AI systems that have been developed in contexts that are distinct from the ones in which they will be deployed. 

The NITI Working Document is the first step towards an informed discussion on the adoption of a regulatory framework to govern AI technology in India. However, there is a great deal of work to be done. Any regulatory framework developed by India to govern AI must balance the benefits and risks of deploying AI, diminish the risk of any harm and have a consumer protection framework in place to adequately address any harm that may arise. Besides this, the regulatory framework must ensure that the deployment and use of AI systems are in consonance with India’s constitutional scheme.

CCG’s Comments on the NODE Whitepaper

By Shashank Mohan and Nidhi Singh

In late March, the Ministry of Electronics and Information Technology (MeitY) released its consultation whitepaper on the National Open Digital Ecosystems (NODE). The NODE strategy was developed by MeitY in consultation with other departments and stakeholders, as a part of its efforts to build an enabling ecosystem to leverage digital platforms for transformative social, economic and governance impact, through a citizen-centric approach. The Whitepaper highlights key elements of NODE, and also its distinction from the previous models of GovTech. The Centre submitted its comments on the NODE Whitepaper on 31 May 2020, highlighting some of our key concerns with the proposed strategy.

The NODE Whitepaper proposes a complex network of digital platforms with the aim of providing efficient public services to the citizens of India. It defines NODE as open and secure delivery platforms anchored by transparent governance mechanisms, which enable a community of partners to unlock innovative solutions, to transform societal outcomes.

Our comments on the NODE strategy revolve around four key challenges: open standards, privacy and security, transparency and accountability, and community engagement. We have provided recommendations at each stage and have relied upon our previous work around privacy, cyber security and technology policy for our analysis.

Firstly, we believe that the NODE Whitepaper stops short of providing a robust definition of openness, and does not comprehensively address existing Government policies on open source software and open APIs. We recommend that existing policies are adopted by MeitY where relevant, and are revised and updated at least in the context of NODEs where required.

Secondly, one of the key concerns with the NODE Whitepaper is the lack of detailed discussion on the aspects of data privacy and security. The Whitepaper does not consider the principles of data protection established in the Personal Data Protection Bill, 2019 (PDPB 2019) or take into account other internationally recognised principles. Without adequately addressing the data privacy concerns which arise from NODEs, any policy framework on the subject runs the risk of being devoid of context. The existence of a robust privacy framework is essential before instituting a NODE like architecture. As the PDPB 2019 is considered by Parliament, MeitY should, as a minimum, incorporate the data protection principles as laid down in the PDPB 2019 in any policy framework for NODEs. We also recommend that in order to fully protect the right to privacy and autonomy of citizens, participation in or the use of NODEs must be strictly voluntary.

Thirdly, a NODE framework built with the aim of public service delivery should also incorporate principles of transparency and accountability at each level of the ecosystem. In a network involving numerous stakeholders including private entities, it is essential that the NODE architecture operates on sound principles of transparency and accountability and sets up independent institutions for regulatory and grievance redressal purposes. Public private relationships within the ecosystem must remain transparent in line with the Supreme Court jurisprudence on the subject. To this end, we recommend that each NODE platform should be supported and governed by accountable institutions, in a transparent manner. These institutions must be independent and not disproportionately controlled by the Executive arm of the Government.

Lastly, we focus on the importance of inclusion in a digital-first solution like NODE. Despite steady growth in Internet penetration in India, more than half of its population does not enjoy access to the Internet, and there is a crucial gender gap in access to the Internet amongst Indians, with men forming a majority of the user base. Learning from studies on the challenges of exclusion from the Aadhaar project, we recommend that the NODE architecture be built keeping in mind India’s digital infrastructure. Global best practices suggest that designing frameworks based on inclusion is a pre-condition for building successful models of e-governance. Similarly, NODEs should be built with the aim of inclusion, and must not become a roadblock to citizens’ access to public services.

Public consultations like these will go a long way in building a robust strategy on open data systems as numerous stakeholders with varied skills must be consulted to ensure quality and efficacy in e-governance models. We thank MeitY for this opportunity and hope that future developments would also follow a similar process of public consultations to foster transparency, openness and public participation in the process of policy making.

Our full comments submitted to the Ministry can be found here.

ICANN Rejection of .ORG Sale to Ethos Capital: A Win for Public Interest?

On 30 April 2020, the Internet Corporation for Assigned Names and Numbers (ICANN) blocked the sale of the Public Interest Registry (PIR) to a private equity firm, Ethos Capital. The sale was announced by the Internet Society (ISOC) in November 2019. While on the face of it the sale seemed like a routine transaction, it had much broader implications for the future of the three bodies involved, namely ISOC, PIR and ICANN, and for the internet in general.

Before we can unpack the implications of this refusal, we must introduce the players and set out the background to this sale.

Background

ICANN, founded in 1998, is a private not-for-profit corporation based in Los Angeles. ICANN is responsible for the management of the Domain Name System (DNS). It promotes competition in domain registrations (a domain name is a string which identifies a realm of authority on the internet; common examples of top-level domains are dot-net and dot-com) and develops policy on the internet’s unique identifiers (the addresses at which things on the internet are located). ICANN is, therefore, responsible for maintaining universal resolvability, i.e. ensuring that users in different countries do not end up on separate, fragmented internets. ICANN thus helps to manage and maintain certain core infrastructure which keeps the internet running.
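For readers unfamiliar with the plumbing, the sketch below shows what the DNS does at its simplest: translating a human-readable name into a network address. It uses only the Python standard library, and the domain queried is the reserved example domain.

```python
# Minimal sketch of DNS resolution: mapping a human-readable
# domain name to a network address via the system resolver.
import socket

# Universal resolvability means this lookup yields a consistent
# answer no matter which country the query is made from.
address = socket.gethostbyname("example.org")
print(address)
```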

ICANN operates through a unique multi-stakeholder model. Any technical changes to the internet are raised within ICANN’s supporting organizations. These suggested changes are then released for public review. The ICANN review process generally comprises at least two rounds of comments: once initial suggestions are incorporated, the proposal is released for a second round of public review. The ICANN board, taking into account the reports made by these bodies and the comments received, then makes a decision concerning the proposed change.

ISOC is a non-profit organization which was founded in 1992 and works towards an open, globally-connected, secure and trustworthy internet for all. ISOC promotes the concept of ‘internet for all’ and is composed of both individual and organizational members. It is governed by a board of trustees composed of 13 members who are appointed by chapters, organizational members and the Internet Engineering Task Force.

ISOC currently controls the dot-org (.org) domain through the Public Interest Registry (PIR), a not-for-profit organization created by ISOC in 2002 and based out of Virginia. PIR took over the operations and management of dot-org in 2003 and has since launched and managed the dot-NGO and dot-ONG top-level domain names as well. PIR is responsible for maintaining the registry of all the domains in the dot-org community, and is also an active member of ICANN.

The other party to the sale, Ethos Capital is a specialized investment firm which focuses on companies in which technology can be used to automate and optimize traditional business models. It was founded in June 2019, just a few months before the sale.

In this post, we shall examine the details of the proposed sale of the dot-org domain, the issues which arose as a consequence of the sale, and finally what the implications of ICANN's refusal will be for the future of dot-org.

What is dot-org (.org)?

Dot-org was created in 1984 as one of the internet's original top-level domains; others from this era include dot-edu, dot-net, dot-com and dot-gov. Dot-org is one of the oldest and the third-largest domains on the internet. It is home to over 10.5 million websites and is most recognizable for hosting non-profit websites. It is managed by PIR.

The initial term of the agreement between PIR and ICANN ended in June 2019, following which the parties renewed the agreement for a period of 10 years. The agreement is based upon the provisions of the generic top-level domain registry agreement, which is entered into between ICANN and the registry operator (the entity responsible for providing the registry services), in this case PIR. The renewal included some important changes, including the removal of price caps and the adoption of public interest commitments and the Public Interest Commitment Dispute Resolution Process (PICDRP). These changes played a significant role in the proposed sale between ISOC and Ethos Capital.

The agreement between Ethos Capital and ISOC over the sale of PIR would have the effect of altering the agreement between PIR and ICANN, and thus ICANN would have had to consent to the sale as well. Section 7.5 of the contract between ICANN and PIR mandates that PIR must seek ICANN's approval before a change of control, and that such consent cannot be unreasonably withheld by ICANN. Consequently, after the announcement of the sale by ISOC, ICANN started the process of reviewing the sale.

While the technical specifications of PIR and the contract for its sale are relatively clear, the transaction itself was mired in controversy. This was, for the most part, due to the perceived value of the dot-org domain.

Before we move on to the details of the sale, and the consequences of the same, we must first examine the arguments supporting the value of dot-org.

The dot-org domain derives most of its value from the belief that it is primarily used by non-profits and adds credibility to a hosted domain. Dot-org is generally thought of as being synonymous with non-profit organizations. This perception is bolstered by the fact that many large international organisations and non-profits such as the United Nations, the International Committee of the Red Cross, the Wikimedia Foundation, Greenpeace, the YMCA, the Red Cross and Human Rights Watch use the dot-org domain. Dot-org is the second most valuable namespace, behind dot-com.

The dot-org domain is an 'open' domain, as opposed to a closed one like dot-edu; consequently, anyone can register a dot-org domain, regardless of their for-profit status. The trust in the dot-org domain is a remnant of its historical status, and there is no evidence to support the theory that it is mostly used by non-profits. The true value of the dot-org domain lies in the public perception of trust associated with it, regardless of the actual identity of the actors using the service.

The sale of dot-org (.org)

In November 2019, ISOC announced the acquisition of PIR by Ethos Capital. PIR would continue to oversee the management and mission of dot-org, but would now come under the oversight of Ethos Capital. The proposed transaction was estimated to close by the first quarter of 2020, and in its statement ISOC reaffirmed PIR's ability to meet the 'highest standards of public accountability and transparency'. The statement also noted that the transaction would infuse ISOC with a large endowment and sustainable funding, allowing ISOC to expand its work in internet governance. The sale was also said to entail no disruption of service for the dot-org community or any of its educational initiatives.

The sale was immediately opposed by many, due to concerns that the prices of domain registrations would increase, subjecting many non-profit websites to large price hikes. This fear was reinforced by ICANN's decision in July 2019 to lift the price caps on all dot-org domains. That decision was heavily criticized, as it could potentially lead to major price hikes on domains, and also because the move had been undertaken despite almost universal opposition. The removal of price caps, taken in conjunction with the sale of PIR to a for-profit organization, heightened fears of price hikes for the dot-org domain.

The list of those opposing the sale of the dot-org domain was wide and varied. ICANN received missives from the governments of France and Germany. While France did not outright advocate refusing the sale altogether, it questioned the commitments made by Ethos Capital and commented upon the insufficiency of the time provided to ICANN to deal with the matter. Similarly, Germany commented upon the insufficiency of the information provided and asked ICANN to conduct further reviews of the proposed transaction.

Another important opposition to the sale came from the Office of the Attorney General of the State of California, which urged ICANN to reject the transfer of PIR to Ethos Capital. It cited concerns such as the lack of transparency about Ethos Capital's future plans, the potential operational uncertainty for PIR, and the repayment of the 360 million USD debt which would be assigned to PIR after the sale. In light of the possible risks to the non-profit community, the Attorney General suggested rejecting the sale.

The sale was also subjected to scrutiny by a number of US senators and members of Congress. At least three letters were sent by groups of representatives to ISOC, PIR and ICANN raising concerns about the deal. In a letter to ICANN dated 18 March 2020, Senators Elizabeth Warren, Ron Wyden, Richard Blumenthal and Edward J. Markey and Representative Anna G. Eshoo advocated against the sale. The letter argued that the sale would be contrary to ICANN's commitment to public benefit, and would ultimately have the effect of undermining the reliability of dot-org as a whole. In addition to concerns about transparency and a potential price hike, they also argued that the initiatives suggested by Ethos Capital (discussed below) would be toothless. The letter therefore strongly advocated that ICANN reject the proposed sale.

Finally, the deal saw a massive pushback, including a public campaign by over 900 organisations led by the Electronic Frontier Foundation (EFF). Many activists and organisations also demonstrated against the proposed sale at a rally at ICANN's LA headquarters in January. Additionally, many others, including UNESCO, sent representations to ICANN through its public comment process, asking it to withhold consent for the transaction.

While the proposed sale had more than its share of opposition, other experts took a different position. It was argued that the amendments proposed by Ethos through the public interest commitments, discussed in the next section, could have been used to patch up the holes left by the new registry agreement between ICANN and PIR.

Public Interest Commitments

Following the announcement of the sale, Ethos Capital released a series of key initiatives to allay the fears surrounding the sale of dot-org. These initiatives were announced as public interest commitments (PICs), voluntarily undertaken by Ethos to reinforce the company's commitment to the dot-org community. The company proposed that these commitments be added to the registry agreement which exists between PIR and ICANN, thus making them legally binding.

These included measures such as enforcing a price limit to preserve the affordability of dot-org domain names, by capping increases in registration or renewal charges at an average of 10% per year for eight years. Ethos also announced the setting up of a new ORG Stewardship Council, which would have the power to veto any resolution passed by PIR concerning censorship of the freedom of speech and expression or the use of user data, the establishment of community enablement funds to the tune of 10 million USD, and the release of annual public reports to ensure transparency in the working of PIR.
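
To get a feel for what the proposed cap would permit, the following back-of-the-envelope sketch in Python compounds the maximum allowed 10% increase over the eight-year commitment period (the 10 USD starting price is purely illustrative, not a figure from the proposal):

```python
# Sketch: the cumulative effect of the proposed 10%-per-year price cap.
# A 10 USD starting price is assumed purely for illustration.
price = 10.00
for year in range(1, 9):  # eight years of maximum permitted increases
    price *= 1.10
    print(f"Year {year}: {price:.2f} USD")

# After eight years at the cap, the price is 10 * 1.1**8, about 21.44 USD,
# i.e. roughly 2.14 times the starting price.
```

Even within the cap, then, charges could roughly double over the commitment period, which helps explain why the commitments limited, rather than eliminated, the pricing concerns discussed above.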

In support of the Stewardship council, Ethos released a series of updates, including the proposed charter, the nomination process and even appointed an independent search firm Heidrick & Struggles, as the agency which would handle nomination requests from the community.

The enforcement mechanism of these commitments remained vague. Once included in the registry agreement with ICANN, the PICs would be legally binding: PIR could not amend them unilaterally, and any default would be legally actionable. However, the PICDRP is a relatively new dispute resolution procedure, and it is uncertain to what extent members of the community could enforce these commitments through it, or how effective it would be in resolving the challenges raised by community members.

Refusal by ICANN

The process leading up to the decision by ICANN was long drawn out. PIR formally submitted the notice of indirect change of control to ICANN on 14 November 2019, and the final deadline for ICANN to approve or reject the transaction was 4 May 2020. The five intervening months saw several rounds of questions between ICANN and the parties to the sale. ICANN issued three requests for additional information, in December 2019, February 2020 and finally April 2020, all of which PIR answered. ICANN also responded to requests from the office of the Attorney General of the State of California in January 2020, by providing information regarding the proposed transfer of PIR to allow the Attorney General's office to 'analyse the potential impact of the same on the non-profit community, including ICANN.'

In addition to the formal consultation process undertaken with PIR, ICANN also received over 30 letters from the ICANN community, relating to the PIR transaction. The ICANN board also convened a public forum at the 67th meeting of ICANN to encourage community dialogue on the proposed transfer of ownership of the PIR.

On 30th April 2020, ICANN finally rendered its decision, refusing the sale of the PIR to Ethos Capital. The implications of this refusal are vast.

ICANN cited several reasons for refusing the sale of PIR to Ethos Capital. These included the lack of experience on the part of Ethos, the removal of the protections of PIR's not-for-profit status, and the debt of 360 million USD which the transaction would saddle PIR with, especially amid the current economic and fiscal uncertainty. The transaction would oblige PIR to repay this debt after the sale, without any corresponding benefit to the dot-org community or to PIR itself. While the initial sale models had shown that PIR had the capacity to repay this debt, the decision argued that the current uncertainties had not been taken into account in the fiscal model, and that it therefore could not be relied upon.

ICANN reiterated PIR's responsibility to serve the public interest through its operation of dot-org and other domains, and held that the transfer of this mandate to another entity could not be upheld, especially in the absence of a public interest mandate on the part of Ethos Capital. The valuation of PIR was also discussed in the decision: since its inception in 2002, PIR has created value of around 1 billion USD, which ISOC could realize through a sale that would convert PIR into a for-profit body.

At this point, however, it is important to clarify that the sale of PIR would not dissolve the agreements between PIR and ICANN, and that ICANN would still hold a contract with PIR, as it did before the sale. However, the board went on to say that the changes to the form of the entity in this instance would be so significant that they had to be considered in this change of control request.

On the other hand, in its response statement, ISOC alleged that ICANN had stepped outside its remit by essentially assuming the role of a regulator in the transaction between ISOC and Ethos Capital, which is beyond the scope of what ICANN was intended to do. The transaction was a transfer of indirect control, of a kind ICANN has previously accepted far more expeditiously. The statement also commented on the delay in the decision-making process, and raised concern on behalf of the internet community about ICANN's potential susceptibility to political influence.

PIR and Ethos Capital also released statements condemning the move by ICANN. PIR alleged that the decision represents a failure by ICANN to follow its bylaws and processes, while Ethos Capital described it as a dangerous precedent which will 'suffocate innovation and deter future investment in the domain industry'. Ethos has further described the move as 'agenda-driven' and based on 'subjective interpretation', with ICANN overstepping its mandate.

The Next Steps

The refusal on the part of ICANN has effectively stopped the sale for now. The decision took a long time, with the initial deadline being pushed from 17th February to 20th April and finally to 4th May. However, it must also be kept in mind that ICANN, in its decision, stated that keeping the totality of the surrounding circumstances in mind, the board supported a denial of the change of ownership request at this time. PIR may later provide additional information to resolve the concerns which have been raised and re-submit, or initiate a new change of control request in favour of Ethos Capital.

It is hard to see any real winners in this transaction. While blocking the sale was considered a ‘win’ for the internet, it makes no real changes to the status quo either.

On the one hand, the primary concerns that were raised during the sale, including the potential price hike, stem from the removal of the price caps in the renewed registry agreement between ICANN and PIR; with or without the sale, the possibility of price hikes for dot-org renewals remains unchanged.

Additionally, with the failure of the deal, the proposal for the Stewardship Council has also fallen through; the Council could have bolstered the participation of independent experts in preserving the right to freedom of speech and expression online. While the charter suggested by Ethos was not perfect, it is difficult to say that a better deal could not have been achieved. Another factor to consider is the loss, for ISOC, of an endowment valued at over 1 billion USD.

On the other hand, the sale raised many questions of transparency which were never adequately addressed. While the central debate was based around the perceived link between non-profit organizations and the dot-org domain, Ethos's lack of real experience or history in the field of internet governance also played a role in the refusal by ICANN. Notably, Nora Abusitta-Ouri, a former employee of ICANN who serves as Chief Purpose Officer, and Erik Brooks, the founder and CEO, are the only two employees of the firm. Fadi Chehadé, the former CEO of ICANN, serves as an advisor to the firm, but not much more is known about Ethos Capital.

Another possible factor which could have had an impact on the sale of PIR to Ethos is the set of lockdowns put into place after the outbreak of COVID-19. While the decision and the subsequent statements make no reference to the outbreak per se, the decision by ICANN does refer to the financial and economic instability and its potential impacts. A large part of the value of the dot-org domain is attributable to the rhetoric supporting the 'non-profit' nature of the domain. While this link between non-profits and the dot-org domain is not borne out by the facts, it would still have been bad optics for ICANN to go against the submissions of major non-profits, especially during a pandemic in which they are more visible.

Denying the current sale does not, in fact, address any of the concerns which were raised during the 'Save the Dot-Org' movement. However, it is not certain that allowing the sale to a corporation which registered its own domain name a mere week before the price caps on dot-org were removed would have been any better. Additionally, the sale brought up pertinent questions about the public's trust in ISOC and PIR, following the unilateral announcement of the sale.

There is nothing to stop another sale from being proposed in the future, but as of now, it seems that the internet is ‘safe’.

The Sketchy Position of Cryptocurrency in India: A Case for Legislative Regulation

By Vedangini Bisht and Shubham Chaudhary

INTRODUCTION

On March 4, 2020, the Supreme Court of India, in Internet and Mobile Association of India v Reserve Bank of India, overturned the April 2018 circular of the Reserve Bank of India (RBI). The circular had banned all RBI-regulated entities from trading in cryptocurrency or virtual currency (VC). While there was no per se ban on VCs, the circular led to a shutdown of VC start-ups in the country and a massive decline in VC trading volumes. The 180-page judgment of the Supreme Court held the circular to be in violation of Article 19(1)(g), recognising trading in VCs as a fundamental right. The decision was primarily based on the principle of proportionality and the fact that the RBI had been unable to prove any adverse effect of VCs on the operations of financial institutions and banks.

This article first looks into why a legislation regulating VCs ought to be enacted. It then elucidates the various factors that need to be kept in mind while formulating such a legislation, including a) the issues with definition, b) judicial precedents and c) the approach of foreign jurisdictions on the subject.

NEED FOR A LEGISLATION

Before the "ban" on VCs in India, entities that dealt in VCs operated in a regulatory vacuum. The ban was introduced to address multiple concerns that the government and the RBI had with VCs, including consumer protection, market integrity and money laundering. The government had also previously warned users of the "economic, financial, operational, legal, consumer protection and security-related risks" associated with VCs. At the same time, several proponents of VCs oppose their regulation by a central government, given the decentralised nature of the technology. These are some of the concerns surrounding regulation.

However, VCs require an amalgamation of exchanges, marketing, the issuance of new tokens and so on for their effective working. All of these are highly centralised aspects, requiring standardized oversight to prevent illegality and impropriety.

The primary reason for the need for legislation arises from the Supreme Court's judgment itself. One of the rationales used by the Supreme Court to decide in favour of the VC industry was the absence, as yet, of any law prohibiting VCs. This implies that the verdict would lose its effect if such a law were put in place. It should be noted that the petitions were filed against the RBI, not the Ministry of Finance. The verdict of the Supreme Court only addresses the regulatory concerns of the RBI; it refrains from issuing any directive to policymakers about the treatment of VCs.

Further, the inclusion of trading in VCs under Article 19(1)(g) can be nullified by legislation affirming the contrary. Hence, in the absence of a statute which clearly establishes the legality of VCs and their regulation, they remain constantly vulnerable to adverse legislative action.

The fate of the Banning of Cryptocurrency & Regulation of Official Digital Currency Bill, 2019 (Cryptocurrency Bill, 2019), which prohibits the use of VCs as legal tender or currency, is yet to be decided. It ought to be noted that the text of the bill was leaked and has not been formally endorsed by the Government. Even though it has not yet been introduced in Parliament, it is hoped that the judgment will reset the discourse on VCs and sway government thinking. Given below are some of the specific considerations the legislature will have to keep in mind while formulating a legislation on VCs.

ISSUES TO BE ADDRESSED BY THE LEGISLATION

The issues that a legislation on VCs should address will depend on the concerns that the government has with regard to their operation. As mentioned earlier, these concerns relate to protecting the interests of both the formal financial sector and the persons dealing in VCs. The question that arises is whether a total prohibition on the operation of VCs in India is the only way to address these concerns. After the preliminary issue of definition, this article analyses this question from two perspectives: Supreme Court precedents on the restriction of activities under Article 19(1)(g), and the approach of foreign jurisdictions on the subject.

Definition

A formidable task is to define a cryptocurrency or VC; the Supreme Court also noted this difficulty. VCs go by different names, such as crypto assets, electronic currency and digital assets, making it difficult to classify them solely as legal tender or as goods/commodities. The difficulty with defining them as legal tender is the absence of a sovereign guarantee backed by a central authority. In India, legal tender is maintained by the RBI, whereas a VC is recorded and shared among users over a network.

The Cryptocurrency Bill, 2019 has been too wide in its approach to the definition. It covers any information, code or token which has a digital representation of value and is generated through cryptographic means, thereby capturing instruments that neither function at all like VCs nor pose the same risks.

The Supreme Court also stated that VCs are a by-product of blockchain technology and that the government could consider segregating the two. The draft Blockchain policy of the State of Telangana, released in 2019, also sought to make this distinction, clearly stating that, given the novelty of the technology, the two tend to be conflated; at the same time, it refrained from defining VCs. Most national policies around the world, however, have conspicuously refrained from differentiating the two.

There are other jurisdictions the legislature can look to in order to define cryptocurrencies, such as the EU. Quite a precise definition has been used by the Financial Action Task Force (an intergovernmental organisation to combat money laundering), which defines a cryptocurrency as "a math-based decentralised, convertible virtual currency which is protected by cryptography."

Precedents set by the Supreme Court

Whether a prohibition is the only method of addressing the concerns related to VCs will need to be evaluated in light of the Supreme Court's judgments on restrictions on Article 19(1)(g).

In Modern Dental College and Research Centre v. State of Madhya Pradesh, the Supreme Court held that any restriction on Article 19(1)(g) must meet the test of proportionality, meaning that a limitation on a constitutionally protected right must be constitutionally permissible. The sub-components of proportionality include, inter alia, the requirement that a measure restricting a constitutionally protected right must not have an alternative that can achieve the same purpose with a lesser degree of limitation. Hence, before imposing a prohibition on the operation of VCs, the government must ensure that no other measure, including regulation of VCs, would achieve its aim.

Additionally, the state should prohibit an activity only if it can demonstrate that the activity is inherently pernicious or tends to be harmful to the general public, as laid down by the Supreme Court in Mohd. Faruk v. State of Madhya Pradesh. Therefore, any decision taken by the state regarding the operation of VCs should be based on empirical data regarding the harm caused by such operation, whether to the formal financial sector or to the persons dealing in VCs. This requirement that a restriction on Article 19(1)(g) be justified by acceptable evidence has been applied by the Supreme Court in other cases as well, such as M/s. Laxmi Khandsari v. State of Uttar Pradesh and State of Maharashtra v. Indian Hotel and Restaurants Association.

Foreign jurisdictions on Virtual Currencies

The Indian state could also analyse the measures adopted by foreign countries in dealing with VCs. For instance, South Korea recently passed legislation which legalizes VCs in the country, albeit with heavy regulation. Briefly, all VC-related service providers must register with a regulator and partner with a bank in order to operate. Further, any person registering with a VC service provider must use their real name and link their VC wallet with their real-world bank account. The first measure gives credibility and accountability to the service providers, while the latter ensures that the government can track the movement of funds via VCs. South Korea is thus an example of how prohibition is not the only answer to allaying the concerns associated with VCs.

Even the Supreme Court, in its judgment lifting the prohibition on VCs, remarked, relying on a report of the EU Parliament which recommended against a total prohibition on VCs, that the RBI had failed to consider alternatives before issuing the circular which had effectively prohibited the operation of VCs in India.

CONCLUSION

Today, cryptocurrencies have a market capitalisation of over $200 billion. The Indian market has already suffered serious setbacks due to the two-year shutdown of the VC industry. Continued uncertainty regarding the regulation of VCs will only deprive the economy of potential benefits. The centralised 'digital rupee' proposed in the Cryptocurrency Bill, 2019 goes against the very idea of a decentralised cryptocurrency, since the former would be issued and regulated by a central agency.

Hence, a new statute should be enacted that establishes both the legality of VCs and a framework for their regulation. The need for the same arises out of the regulatory void in the current legal regime. It should be framed keeping in mind the applicable precedent on the freedom of trade and occupation, as well as the approaches to regulating VCs adopted in foreign jurisdictions. All of this will ensure that VCs in India remain a reliable medium of trade.

Right to Privacy: The Puttaswamy Effect

By Sangh Rakshita and Nidhi Singh

The Puttaswamy judgement of 2017 reaffirmed the 'Right to Privacy' as a fundamental right in Indian jurisprudence. Since then, it has been used as an important precedent in many cases to emphasize the right to privacy as a fundamental right and to clarify its scope. In this blog, we discuss some of the cases of the Supreme Court and various High Courts, post August 2017, which have used the Puttaswamy judgement and the tests laid down in it to further the jurisprudence on the right to privacy in India. With the Personal Data Protection Bill tabled in 2019, the debate on privacy has been re-ignited, and as such, it is important to explore the contours of the right to privacy as a fundamental right post the Puttaswamy judgement.

Navtej Singh Johar and Ors. vs. Union of India (UOI) and Ors., 2018 (Supreme Court)

In this case, the Supreme Court of India unanimously held that Section 377 of the Indian Penal Code 1860 (IPC), which criminalized 'carnal intercourse against the order of nature', was unconstitutional in so far as it criminalized consensual sexual conduct between adults of the same sex. The petition challenged Section 377 on the grounds that it was vague and that it violated the constitutional rights to privacy, freedom of expression, equality, human dignity and protection from discrimination guaranteed under Articles 14, 15, 19 and 21 of the Constitution. The Court relied upon the judgement in K.S. Puttaswamy v. Union of India, which had held that denying the LGBT community its right to privacy on the ground that it forms a minority of the population would be violative of its fundamental rights, and that sexual orientation forms an inherent part of self-identity, the denial of which would be violative of the right to life.

Justice K.S. Puttaswamy and Ors. vs. Union of India (UOI) and Ors., 2018 (Supreme Court)

The Supreme Court upheld the validity of the Aadhaar Scheme on the ground that it did not violate the right to privacy of citizens, as minimal biometric data was collected in the enrolment process and the authentication process was not exposed to the internet. The majority upheld the constitutionality of the Aadhaar Act, 2016, barring a few provisions on the disclosure of personal information, cognizance of offences and the use of the Aadhaar ecosystem by private corporations. They relied on the fulfilment of the proportionality test as laid down in the Puttaswamy (2017) judgment.

Joseph Shine vs. Union of India (UOI), 2018 (Supreme Court)

The Supreme Court decriminalised adultery in this case, where the constitutional validity of Section 497 (adultery) of the IPC and Section 198(2) of the Code of Criminal Procedure, 1973 (CrPC) was challenged. The Court held that in criminalizing adultery, the legislature had imposed its imprimatur on a man's control over the sexuality of his spouse; in doing so, the statutory provision failed to meet the touchstone of Article 21. Section 497 was struck down on the grounds that it deprives a woman of her autonomy, dignity and privacy, and that it compounds the encroachment on her right to life and personal liberty by adopting a notion of marriage which subverts true equality. Concurring judgments in this case referred to Puttaswamy to explain the concepts of autonomy and dignity, and their intricate relationship with the protection of life and liberty as guaranteed in the Constitution. They relied on the Puttaswamy judgment to emphasize the dangers of the "use of privacy as a veneer for patriarchal domination and abuse of women", and cited it to elucidate that privacy is the entitlement of every individual, with no distinction to be made on the basis of the individual's position in society.

Indian Young Lawyers Association and Ors. vs. The State of Kerala and Ors., 2018 (Supreme Court)

In this case, the Supreme Court upheld the right of women aged between 10 and 50 years to enter the Sabarimala Temple. The Court held Rule 3(b) of the Kerala Hindu Places of Public Worship (Authorisation of Entry) Rules, 1965, which restricted the entry of women into the Sabarimala temple, to be ultra vires (i.e. not permitted under the Kerala Hindu Places of Public Worship (Authorisation of Entry) Act, 1965). While discussing the guarantee against social exclusion based on notions of "purity and pollution" as an acknowledgment of the inalienable dignity of every individual, J. Chandrachud (in his concurring judgment) referred to Puttaswamy specifically to explain dignity as a facet of Article 21. In the course of submissions, the Amicus had submitted that the exclusionary practice, in its implementation, results in the involuntary disclosure by women of both their menstrual status and age, which amounts to forced disclosure and consequently violates the right to dignity and privacy embedded in Article 21 of the Constitution of India.

(The judgement is under review before a 9-judge Constitution Bench.)

Vinit Kumar vs. Central Bureau of Investigation and Ors., 2019 (Bombay High Court)

This case dealt with phone tapping and surveillance under section 5(2) of the Indian Telegraph Act, 1885 (Telegraph Act) and the balance between public safety interests and the right to privacy. Section 5(2) of the Telegraph Act permits the interception of telephone communications in the case of a public emergency, or where there is a public safety requirement. Such interception needs to comply with the procedural safeguards set out by the Supreme Court in PUCL v. Union of India (1997), which were then codified as rules under the Telegraph Act. The Bombay High Court applied the tests of legitimacy and proportionality laid down in Puttaswamy, to the interception orders issued under the Telegraph Act, and held that in this case the order for interception could not be substantiated in the interest of public safety and did not satisfy the test of “principles of proportionality and legitimacy” as laid down in Puttaswamy. The Bombay High Court quashed the interception orders in question, and directed that the copies / recordings of the intercepted communications be destroyed.

Central Public Information Officer, Supreme Court of India vs. Subhash Chandra Agarwal, 2019 (Supreme Court)

In this case, the Supreme Court held that the Office of the Chief Justice of India is a 'public authority' under the Right to Information Act, 2005 (RTI Act), enabling the disclosure of information such as the judges' personal assets. The Court discussed the privacy impact of such disclosure extensively, including in the context of Puttaswamy. The Court found that the right to information and the right to privacy stand on an equal footing, and that there was no requirement to take the view that one right trumps the other. The Court stated that the proportionality test laid down in Puttaswamy should be used by the Information Officer to balance the two rights, and also found that the RTI Act itself has sufficient procedural safeguards built in to meet this test in the case of disclosure of personal information.

X vs. State of Uttarakhand and Ors., 2019 (Uttarakhand High Court)

In this case the petitioner claimed that she identified as female and had undergone gender reassignment surgery, and should therefore be treated as a female; the State, however, did not recognize her as female. While the Court primarily relied upon the judgment of the Supreme Court in NALSA v. Union of India, it also referred to the judgment in Puttaswamy. Specifically, it referred to the finding in Puttaswamy that the right to privacy is not necessarily limited to any one provision in the chapter on fundamental rights, but rather arises from intersecting rights: the intersection of Article 15 with Article 21 locates a constitutional right to privacy as an expression of individual autonomy, dignity and identity. The Court also referred to the Supreme Court's judgment in Navtej Singh Johar v. Union of India, and on the basis of all three judgments upheld the right of the petitioner to be recognized as a female.

(This judgment may need to be re-examined in light of the Transgender Persons (Protection of Rights) Bill, 2019.)

Indian Hotel and Restaurant Association (AHAR) and Ors. vs. The State of Maharashtra and Ors., 2019 (Supreme Court)

This case dealt with the validity of the Maharashtra Prohibition of Obscene Dance in Hotels, Restaurant and Bar Rooms and Protection of Dignity of Women (Working therein) Act, 2016. The Supreme Court held that applications for the grant of a licence should be considered more objectively and with an open mind, so that there is no complete ban on staging dance performances at the designated places prescribed in the Act. Several of the conditions under the Act were challenged, including one that required the installation of CCTV cameras in the rooms where dances were to be performed. Here, the Court relied on Puttaswamy (and the discussion on unpopular privacy laws) to set aside the condition requiring such installation of CCTV cameras.

(The Puttaswamy case has been mentioned in at least 102 High Court and Supreme Court judgments since 2017.)

[September 9-16] CCG’s Week in Review: Curated News in Information Law and Policy

This week, Telecom Minister RS Prasad announced 5G spectrum allocation this year or by early 2020; the Supreme Court will hear matters relating to Article 370, including communication shutdowns and detentions, on Monday; Indian trader bodies sought bans on Amazon and Flipkart festive sales; and MEITY constituted a non-personal data committee to be headed by S. Gopalakrishnan.

Aadhaar

  • [Sep 13] Linking of social media with Aadhaar: Supreme Court asks govt to share plans, Livemint report.
  • [Sep 14] IT ministry doesn’t favour linking Aadhaar & social media accounts, The Times of India report.
  • [Sep 14] PAN-Aadhaar cards linkage deadline this month. How to link or check status, Livemint report.
  • [Sep 15] Aadhaar verification to be mandatory for new dealers from January 2020: GST Network, Business Today report.

Digital India and MEITY

  • [Sep 10] MeitY pings UIDAI on Aadhaar-social media linking, The Economic Times report.
  • [Sep 11] MeitY Demands Update Over Objectionable Content From Facebook, Twitter, Inc42 report.
  • [Sep 16] When Yogi Adityanath stepped in to stop Samsung from leaving UP, The Hindustan Times report.
  • [Sep 16] Indian govt forms committee to recommend governance norms for non-personal data, Infosys’ Gopalakrishnan to head it, Medianama report; Business Standard report; Indian Express report; ET Tech report.

Data Protection and Governance

  • [Sep 12] 4 New Data Protection Trends in India Jeopardize Innovation, The Diplomat report.
  • [Sep 12] Government’s proposed data protection bill to be significant in building data privacy norms in India: Omidyar Network India, CNBCTV18 report.
  • [Sep 12] School ‘bans’ surnames because of ‘data protection’, Metro UK report.
  • [Sep 13] Hefty Fines Considered for Noncompliance with Russia’s Data Protection, Internet Laws, Lexology.com report.

Online Content Regulation

  • [Sep 10] Govt & Social Media Regulation: A year of ups & downs, yet no clarity, ET Tech report.
  • [Sep 10] India: Minimum Modicum Of Obscenity & Need Of Online Content Regulation In India, Mondaq.com report.
  • [Sep 11] Host Violent Content? In Australia, You Could Go to Jail, The New York Times report.
  • [Sep 12] Internet regulator instructs platforms to create ‘healthy’ online environment, Technode report.
  • [Sep 15] Universities In Iran Implementing Tough New Regulation To Deter Students From Activism, Radio Farda report.
  • [Sep 16] Major streaming platforms commit to produce responsible content, The Manila Times report.

E-Commerce

  • [Sep 10] Jack Ma steps down as Alibaba chairman, CEO Daniel Zhang to succeed him, Medianama report.
  • [Sep 14] CAIT urges government to ban festival sales by e-commerce players, The Times of India report.
  • [Sep 16] Indian Trader Body Seeks Ban on Amazon, Flipkart’s Festive Season Sales: Report, First Post report.
  • [Sep 16] US antitrust officials investigate Amazon’s marketplace practices, Medianama report.

Cryptocurrency and FinTech

  • [Sep 13] Lord Mayor of London leads fintech mission to India, The Economic Times report.
  • [Sep 13] RBI should give FinTech firms access to transaction and account history data: Finance Ministry’s FinTech report, Medianama report.
  • [Sep 14] Trump Executive Order Banning A Cryptocurrency Could Mutate Into Far-Reaching Law, Forbes report.
  • [Sep 15] Wall Street banks are upping bets on their potential fintech competitors, CNBC report.
  • [Sep 15] Regulators to question Facebook over new Libra cryptocurrency, The Guardian report.
  • [Sep 16] Report: Philippine Police Raid Alleged Cryptocurrency Scam, Arrest 277, Cointelegraph report.

Cybersecurity

  • [Sep 10] Smart Cities Will Require Smarter Cybersecurity, The Wall Street Journal report.
  • [Sep 12] Delhi Airport Facial Recognition Trial Calls for Establishment of Cybersecurity Laws, News18 report.
  • [Sep 16] NZ provides $10 million to help Pacific countries lift cybersecurity capability, CIO New Zealand report; ZDnet report.
  • [Sep 16] Chicago Brokerage to Pay $1.5 million Fine for Lack of Cybersecurity, Securitymagazine.com report.
  • [Sep 16] Cybercriminals Are Targeting Pharma Companies, And India Sees The Sixth Highest Attacks, News18 report.

Tech and National Security

  • [Sep 13] California lawmakers ban facial-recognition software from police body cams, CNN Business report.
  • [Sep 16] U.S. Targets North Korean Hacking as Rising National-Security Threat, Wall Street Journal report.

Tech and Elections

  • [Sep 15] Snapchat launches political ads library as 2020 election ramps up, CNN Business report.

Internal Security: J&K

  • [Sep 15] SC to hear pleas against Centre’s move to abrogate Article 370, restrictions in J&K on Monday, Zee News report.
  • [Sep 15] ‘If Political Party Can Avail it, Why Not Locals?’ Internet Access to BJP From Media Centre Irks Kashmiris, News18 report.
  • [Sep 16] Farooq Abdullah detained under Public Safety Act for 12 days, The Hindu report.
  • [Sep 16] Kashmir LIVE: Not a Single Bullet Fired Since Scrapping of J&K’s Special Status, Centre Tells SC, News 18 report.
  • [Sep 16] SC asks Centre, J&K to restore normalcy in state keeping in mind national interest, The Times of India report.

Internal Security

  • [Sep 13] National security: Fortifying Defence, India Today report.
  • [Sep 15] Will implement NRC in Haryana, says CM Khattar, The Times of India report.
  • [Sep 16] Will Implement Citizens’ List “When UP Needs It”: Yogi Adityanath, NDTV report.

Telecom/5G

  • [Sep 10] Telcos face another hit, may have to pay Rs 41,000 crore more as spectrum charges, The Economic Times report.
  • [Sep 16] Telecom department aims to connect uncovered villages by 2022, ET Telecom report.
  • [Sep 16] 5G spectrum auction this year or in early 2020: Telecom Minister RS Prasad, Medianama report.
  • [Sep 16] WhatsApp offers India traceability alternatives, ET Telecom report.

More on Huawei

  • [Sep 9] New Huawei ‘Workaround’ May Put Google Apps Back On Mate 30, Evading Blacklist, Forbes report.
  • [Sep 14] Huawei Offers To License 5G Technology To U.S. To Flush Out Trump, Forbes report.
  • [Sep 15] Trade war between US and China follows Huawei to Africa, South China Morning Post report.
  • [Sep 16] US semiconductor companies urge Trump to hurry Huawei licenses, South China Morning Post report.

Emerging Tech

  • [Sep 10] 21 per cent Indian IT managers consider Internet of Things threats top security risk, The New Indian Express report.
  • [Sep 12] Artificial intelligence: Expert committee to explore the development of a legal framework, The Council of Europe press release.
  • [Sep 15] Ericsson acquires Niche AI workforce for India centre, The Hindu Business Line report.

Opinions and Analyses

  • [Sep 12] Editorial, The New York Times, What Won’t Netanyahu Say to Get Re-elected?
  • [Sep 13] Editorial, The Hindu, John Bolton goes: On the sacking of U.S. National Security Advisor.
  • [Sep 15] Karen Roby, Tech Republic, How holding off on 5G can save money and help the environment.
  • [Sep 15] Editorial, Wall Street Journal, Why London Spurned Hong Kong.
  • [Sep 15] Michael Bloomberg, The New York Post, Rage has free speech under siege on the American campus.
  • [Sep 15] Editorial, The Hindustan Times, The language question.
  • [Sep 16] Editorial, The Hindu, Effort worth emulation: On Rajasthan’s public information portal.
  • [Sep 16] Markandey Katju, The Hindu, The litmus test for free speech.

[August 19-26] CCG’s Week in Review: Curated News in Information Law and Policy

The ECI sought a legal mandate to link Aadhaar with Voter IDs; Facebook approached the Supreme Court over PILs demanding Aadhaar linkage with social media accounts; MEITY invited 'select stakeholders' for private consultations over the data protection bill; and a new panel to review defence procurement practices in India was constituted by Defence Minister Rajnath Singh, who also hinted at dropping India's no-first-use policy – presenting this week's most important developments in law and tech.

Aadhaar

  • [Aug 19] EC seeks statutory backing to collect voters' Aadhaar numbers, The Times of India report.
  • [Aug 19] Facebook approaches SC over Aadhaar linkage pleas, The Deccan Herald report; Firstpost report.
  • [Aug 20] Aadhaar to ensure farmers, not middlemen, get benefits, The Economic Times report.
  • [Aug 21] SC cautions govt on linking Aadhaar with social media, ET Tech report.
  • [Aug 21] Election Commission writes to law ministry, seeks legal powers to collect Aadhaar numbers for cleaning up voters’ list, Firstpost report.
  • [Aug 22] Aadhaar may be used to verify SECC beneficiaries, The Economic Times report.
  • [Aug 23] Centre to put QR code on fishermen’s Aadhaar cards to secure sea route: Amit Shah, The Times of India report.
  • [Aug 24] Aadhaar-social media linking: 10 things to know about the ongoing issue, India Today report.
  • [Aug 24] Govt to allow Aadhaar-based KYC for domestic retail investors; amendments to PMLA to be issued, Firstpost report.
  • [Aug 25] Linking Aadhaar with electoral rolls will create Delhi, Mumbai Analyticas: Justice Srikrishna, The Week report.

Digital India

  • [Aug 19] Indian companies at a disadvantage in tenders, says Commerce ministry, Money Control report; The Times of India report.
  • [Aug 21] India’s IT Industry turns to flexi staffing to keep its bench from idling, ET Tech report.
  • [Aug 22] Indian IT Firms step up patent filings as they look to monetize their IP, ET Tech report.
  • [Aug 26] Time to revisit FTAs to fire up electronics: Ravi Shankar Prasad, ET Rise report.

E-Commerce

  • [Aug 21] Government hopes for an Ecommerce GeM, ET Tech report.
  • [Aug 23] Technology reforming India’s retail businesses, ET Tech report.

Digital Payments

  • [Aug 22] RBI to allow e-mandates on card payments from September 1, Medianama report.
  • [Aug 22] Digital payment execs met Finance Ministry officials to discuss demerits of removing MDR: report, Medianama report.

Cryptocurrencies

  • [Aug 18] US lawmakers to visit Switzerland to discuss Facebook’s Libra, Cointelegraph report.
  • [Aug 19] Israeli Bitcoiners petition banks to disclose crypto policies, Cointelegraph report.
  • [Aug 21] Authorities seize crypto mining equipment from nuclear power plant in Ukraine, Coin Desk report.
  • [Aug 23] $100K Crypto donation to Amazon rainforest charity blocked by BitPay, Coin Desk report.

Internet Governance

  • [Aug 18] Google, Facebook, WhatsApp to be made more accountable under new rules, Financial Express report.

Data Protection 

  • [Aug 19] Google cuts some Android phone data for wireless carriers amid privacy concerns, The Hindustan Times report.
  • [Aug 20] MEITY privately seeks responses to fresh questions on the data protection bill from select stakeholders, Medianama report; ET Tech report; Business Standard report; Inc42 report.
  • [Aug 21] Google, Intel and Microsoft form data protection consortium, Engadget report; The Economic Times report.
  • [Aug 22] Govt working towards tabling data protection bill in winter session, Livemint report; The Economic Times report.
  • [Aug 24] India needs to draw a distinction between personal and impersonal data: Ravi Shankar Prasad, Inc42 report.
  • [Aug 25] Data Protection Bill need of the hour, says Justice BN Srikrishna, Inc42 report.

Social Media

  • [Aug 19] Social media accounts need to be linked with Aadhaar to check fake news, SC told, Livemint report.
  • [Aug 20] Twitter and Facebook crack down on accounts linked to Chinese campaign against Hong Kong, The Guardian report; Defense One report.
  • [Aug 20] Facebook’s new tool lets you see which apps and websites tracked you, The New York Times report; ET Tech report.
  • [Aug 21] China cries foul over Facebook, Twitter block of fake accounts, ET Tech report.
  • [Aug 23] Facebook removes accounts linked to Myanmar military, Medianama report.

Freedom of Speech

  • [Aug 20] Islamic preacher Zakir Naik banned from giving public speeches in Malaysia, India Today report.
  • [Aug 20] Zakir Naik apologizes to Malaysians for racial remarks, India Today report.
  • [Aug 21] Shehla Rashid spreading fake news to incite violence in Jammu and Kashmir: Indian Army, DNA India report.
  • [Aug 24] IAS Officer Kannan Gopinathan resigns over ‘lack of freedom of expression’, The Hindu report; Scroll.in report.
  • [Aug 24] From colonial era to today’s India, a visual history of national security laws used to crush dissent, Scroll.in report.

Internal Security: Status of J&K

  • [Aug 19] Kashmir: now for the legal battle, India Today report.
  • [Aug 20] Amit Shah meets NSA, IB Chief on J&K, NDTV report.
  • [Aug 21] Armed forces to get human rights and vigilance cell after Rajnath Singh approves restructure, News 18 report.
  • [Aug 23] Opposition leaders demand release of Mehbooba Mufti, Omar Abdullah, The Economic Times report.
  • [Aug 23] Blackout is collective punishment against people of J&K: UN Human Rights experts call on India to end communications shutdown, Medianama report.
  • [Aug 25] Amid massive clampdown, uneasy calm in volatile south Kashmir, The Tribune report.

Tech and Law Enforcement

  • [Aug 20] Flaws in cellphone evidence prompt review of 10,000 verdicts in Denmark, The New York Times report.
  • [Aug 21] Supreme Court directs Madras HC not to pass final order in WhatsApp traceability case, Entrackr report.
  • [Aug 21] Facebook, WhatsApp and the encryption dilemma – What India can learn from the rest of the world, CNBC TV 18 report.
  • [Aug 21] WhatsApp’s response to Dr. Kamakoti’s recommendation for traceability in WhatsApp, Medianama report.
  • [Aug 25] Curbs on Aadhaar data use delayed murder probe: Cops, Deccan Herald report.
  • [Aug 26] End-to-end encryption not essential to WhatsApp as a platform: Tamil Nadu Advocate General, Medianama report.

Tech and National Security

  • [Aug 18] New Panel to review defence procurement procedure to strengthen ‘Make in India’, Bharat Shakti report; Jane’s 360 report.
  • [Aug 18] RSS affiliate sees Chinese telecom firms as security risk for India, The Hindu report.
  • [Aug 18] Traders body calls for boycott of Chinese goods, seeks upto 500% import duty, Livemint report.
  • [Aug 19] India looks to acquire military equipment on lease amidst budget squeeze, Defence Aviation Post report.
  • [Aug 20] India, France likely to finalize roadmap for digital, cyber security cooperation, The Economic Times report; The Indian Express report.
  • [Aug 20] ‘Make in India’ Software Defined Radio: ‘Mother’ of all solutions for tactical communications of armed forces, Financial Express report.
  • [Aug 20] Need to reduce dependence on foreign manufacturers to modernise Indian Air Force, says defence minister Rajnath Singh, Firstpost report.
  • [Aug 20] Strike total at all 41 ordnance factories, say unions on day one, The Hindu Business Line report; Deccan Herald report.
  • [Aug 21] Government neglect may force HAL to crash land, Deccan Herald report.
  • [Aug 21] Cabinet Secretariat raps MoD, MEA for not involving NSA, The Economic Times report.
  • [Aug 21] Ajay Kumar appointed new Defence Secretary, The Economic Times report.
  • [Aug 21] Ordnance factories continue strike, MoD calls their products ‘high cost, low quality’, India.com report.
  • [Aug 21] French Eye: India to launch 8-10 satellites with France as part of a ‘constellation’ for maritime surveillance, The Pioneer report.
  • [Aug 24] It’s about national security: Arun Jaitley on how 2019 elections were different from 2014, India Today report.
  • [Aug 24] Gaganyaan: Russian space suits, French medicine for Indian astronauts? The Hindu report.
  • [Aug 24] Ordnance strike: Unions to take a call on Centre’s proposal on Aug 24, The Hindu Business Line report.
  • [Aug 25] Will India change its ‘No First Use’ policy? The Hindu report.

Cybersecurity

  • [Aug 19] Global Cyber Alliance launches cybersecurity development platform for Internet of Things (IoT) Devices, Dark reading report.
  • [Aug 19] The US Army is struggling to staff its cyber units: GAO, Defense One report.
  • [Aug 20] A huge ransomware attack messes with Texas, Wired report.
  • [Aug 21] Experts call for cybersecurity cooperation at the Beijing Cybersecurity Conference, Xinhua News report.
  • [Aug 23] Enterprises are increasingly adopting AI, ML in cybersecurity: Experts, Livemint report.
  • [Aug 24] Anomaly detection as advanced cybersecurity strategy, iHLS report.
  • [Aug 24] Telangana preparing an army of cyber warriors, Telangana Today report.

Internet of Things

  • [Aug 22] ITI-Bhubaneswar introduces Internet of Things curriculum, The New Indian Express report.
  • [Aug 22] Will We Ever Have A Full Industrial Internet Of Things, Forbes report.
  • [Aug 24] IKEA Smart Home Investment Could Be Boost The Internet Of Things Needs, Forbes report.

Artificial Intelligence and Emerging Tech

  • [Aug 20] Use artificial intelligence for tax compliance: Direct tax panel, Business Standard report.
  • [Aug 20] Artificial intelligence and the world of tax litigation, Financial Express report.
  • [Aug 21] Intel launches first artificial intelligence chip Springhill, The Hindu report; News18 report.
  • [Aug 22] Facial recognition attendance systems for teachers to be installed in Gujarat’s govt schools, Medianama report.
  • [Aug 26] Yogi Govt Plans To Install Artificial Intelligence System In 12,500 Public Sector Buses, Business World report.

Telecom/5G

  • [Aug 24] PMO clears BSNL-MTNL revival, merger off the table, The Economic Times report.

Huawei

  • [Aug 19] Trump reiterates Huawei as ‘national security threat’, Cnet report.
  • [Aug 20] Tech giant Huawei slams US administration, calls sanctions politically motivated, India Today report.
  • [Aug 20] US sanctions on Huawei bite, but who gets hurt? Livemint report.
  • [Aug 21] Huawei founder tells staff it faces ‘live or die’ moment, Tech Radar report.
  • [Aug 22] Aadhaar-Social Media linking case: Next SC hearing to take place on 13 September, Firstpost report.
  • [Aug 22] China telcos weigh sharing 5G network to cut costs, potentially hurting Huawei, Reuters report.
  • [Aug 23] Huawei puts a price for Trump’s moves: $10 billion, The Hindu Business Line report.
  • [Aug 25] Trump, UK’s Johnson discuss Huawei on G7 sidelines, Reuters report.

Opinions and Analyses

  • [Aug 19] Vishal Krishna, Your Story, Data privacy is a fundamental right, but is the Indian startup ecosystem prepared for new protection law?
  • [Aug 19] Nitin Pai, Livemint, Appointing a chief of defence staff would just be the first step.
  • [Aug 19] Priyanjali Malik, The Hindu,  An intervention that leads to more questions.
  • [Aug 19] Abhijit Iyer-Mitra, The Print, India needs tips from Israel on how to handle Kashmir. Blocking network is not one of them.
  • [Aug 19] Ria Singh Sawhney, The Wire, Aadhaar: A Primer to knowing your rights.
  • [Aug 19] Alok Deb, Institute for Defence Studies and Analysis, Finally a CDS for the Indian Armed Forces.
  • [Aug 20] TOI Editorial, Aadhaar Hydra again: EC wants to link voter roll to Aadhaar data, this is unnecessary and risky.
  • [Aug 20] Lt. Gen. Harwant Singh (Retd), The Economic Times, A CDS for the armed forces must come with full play.
  • [Aug 20] Darren Death, Forbes, Is cybersecurity automation the future?
  • [Aug 20] Asit Ranjan Mishra, Livemint, Why New Delhi is turning up the heat on PoK now. 
  • [Aug 21] Financial Express Opinion, Election ID linked to Aadhaar can make votes portable.
  • [Aug 21] Nabeel Ahmed, Read Write, Artificial Intelligence: A tool or a threat to cybersecurity?
  • [Aug 21] Asheeta Regidi, Firstpost, Aadhaar-social media account linking could result in creation of a surveillance state, deprive fundamental right to privacy.
  • [Aug 22] Sanjay Hegde, The Hindu, Sacrificing liberty for national security.
  • [Aug 22] Ahmed Ali Fayyaz, The Quint, Fall of J&K: Real reason – ‘Jamhooriyat, Insaniyat, Kashmiriyat’?
  • [Aug 22] AS Dulat, The Telegraph, Kashmir: The perils of a muscular approach. 
  • [Aug 22] Somnath Mukherjee, The Economic Times, Growth is the biggest national security issue.
  • [Aug 22] K Raveendran, The Leaflet, Aadhaar-social media profile linkage will open pandora’s box.
  • [Aug 22] Mariarosaria Taddeo and Francesca Bosco, World Economic Forum blog, We must treat cybersecurity as a public good, here’s why.
  • [Aug 23] Nikhil Pahwa, Medianama, Against Facebook-Aadhaar linking.
  • [Aug 23] Ilker Koksal, Forbes, The rise of crypto as payment currency.
  • [Aug 24] Kalev Lataru, Forbes, Social media platforms will increasingly define ‘truth’.
  • [Aug 25] Sandeep Unnithan, India Today, South block.
  • [Aug 25] Spy’s Eye, Outlook, Intel agencies need strengthening.
  • [Aug 26] Prasanna S., The Hindu, Privacy no longer supreme.
  • [Aug 26] Sunil Abraham, Business Standard, Linking Aadhaar with social media or ending encryption is counterproductive. 
  • [Aug 26] The Financial Express Opinion, Linking social media to Aadhaar is serious overkill.