A Landscape of Cyber Norms

Less than a year ago, the United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (the ‘UN GGE’, for short) famously came to a deadlock in its determination of how international law applies to cyberspace. Comprising 25 states, the GGE was formed both to debate the norms applicable to cyber activities in the international sphere – the law as it stands today – and to recommend confidence-building measures amongst states. In 2013, the GGE’s historic pronouncement that international law applies to cyberspace changed the terms of the debate, opening up the question of how the law applies, and in what context.

Let us turn to 2017 and the fifth GGE. The issue at hand concerned cyber warfare. A coalition of states, including the United States, wished the GGE to declare that the international law of war (jus ad bellum and jus in bello) applies to cyber warfare, and that both the inherent right of self-defence and the right to use countermeasures are applicable as well. However, certain other states, such as Cuba (and presumably, Russia and China), felt that such a declaration might lead to the “militarization of cyberspace”, and demurred. Thus the GGE, which operates on principles of consensus, came to a deadlock, and for the first time since its inception, dissolved without a report to show for its extensive deliberations.

This leaves cyberspace with massive lacunae in how international law operates. It is unclear how the norm-building discussions will go forward – and more importantly, where these discussions will be housed. Several suggestions have been raised, including an open-ended working group within the General Assembly, the constitution of a new GGE, and coalitions of similar-thinking states. While the way forward is far from clear, history has left us some examples to look to. But before we enter the how, we will explore the why of international norm-building.

International law is, after all, not a beast that affects our lives – or the lives of our states – on a daily basis, surely? We may wonder, loudly and angrily, about the use and effectiveness of international law in governing interstate relations. When, we may ask, has international law ever stopped a war, recognized an international wrong, or been effective in stopping a state from doing wrong? The answers to these questions fall in the delicate space of international norm-building, and the ways in which states relate to each other.

States are autonomous creatures. Their very existence stems from their ability to be sovereign – to enter into independent, self-initiated relationships with other states, organisations and individuals. Signing on to a bilateral or multilateral treaty, for instance, lies within a state’s prerogative and choice. As the Permanent Court of International Justice noted in its famous 1927 S.S. Lotus decision, states are largely free to do as they please, with the exception of some rules so universal that states cannot signal their disagreement with them, usually termed jus cogens.

This, then, is the task of international law – to place boundaries upon the hubris of states to act as they please. It may do this in several ways; the Statute of the International Court of Justice recognizes four sources of international law. Limits may be placed upon state autonomy in the form of treaties, wherein states signal their express consent to the norms laid down in the treaty. The Hague Conventions, which place limits on state action in times of war, or human rights instruments, which place duties and responsibilities on states vis-à-vis individuals and organisations, are examples. Of course, states may place reservations on their obligations under treaties, but because reservations are made expressly, they too give a clear indication of the state of the law.

States may also accept limits on their autonomy in the form of customary international law. International custom comprises state acts that, consistently performed over an uninterrupted period of time, coalesce into legal norms. Custom must be accompanied by the belief that the rule is binding on states (called opinio juris). Take, for instance, the traditional three-mile rule in the law of the sea, under which states exerted their authority over three nautical miles of sea outward from their coasts. It reflected an international custom, as the practice was accompanied by opinio juris.

Well, now we know why an ‘international law of cyberspace’ is necessary: so that we know what states can and cannot do to each other and to their citizens. The how of international law, however, is more complex. By now, it is well documented that a cyberspace treaty is an imaginary beast; as international law stands today, there is far too little agreement for such a treaty to emerge. You could argue, of course, that it is too early for custom to develop, and you would be right. It took over two decades of state practice, followed by the International Law Commission’s surprising involvement, for the Law of the Sea to develop, and it is still uncertain whether the treaty in its entirety represents customary international law.

So how do we populate this open field? How should the international law of cyberspace develop? There are, of course, multiple ways, and states will no doubt offer their own suggestions. I offer two of my own. The first is to get the International Law Commission involved. The ILC has decades of experience codifying international law, both primary and secondary. While a majority of its experience and success has been in the codification of secondary international law (‘rules about rules’, as Hart put it), the ILC has also been instrumental in codifying the Law of the Sea, and in bringing some semblance of coherence to the rules on transboundary harm and on diplomatic and consular relations. Of course, codification of primary rules by the ILC would most likely need to be confirmed by states in the form of a treaty or convention, but we need not let that implausible eventuality dampen our optimism over codification.

Not only this, but the ILC has already codified secondary rules of responsibility and attribution, which are no doubt crucial in cyber-related incidents. While the Tallinn Manual has done a tremendous job of transplanting the rules of attribution (among other primary rules) to cyberspace, we still need rules that are accepted expressly by states.

The second, and perhaps more plausible, suggestion is a form of active “I Spy”. Through their statements following major cyber incidents, states have already begun to give us a sense of where they consider the boundaries of international law to lie. It is clear from such statements that the Russian interferences in Estonia and Georgia constituted interferences with state sovereignty, while states have yet to expressly term the Stuxnet incident a use of force or an intervention. It is becoming clear that state influence on another state’s elections using cyber means may constitute intervention, though the implications of that (countermeasures? a threshold for the use of force?) are not yet clear. In sum, the crux of my second suggestion is this: give it time, and keep an eye on state practice. This may be a space where publicists can genuinely make a difference, especially those with some influence on the state apparatus.

Of course, a combination of methods to speed up norm-building will probably serve us best. Unlike nuclear power, we are as yet uncertain of the extent of cyber’s influence and impact. We are learning every day: the Internet of Things has taught us that we now create a discrete home surveillance network with our gadgets, while Cambridge Analytica has shown that information about us is being used to manipulate our electoral choices. Stuxnet revealed the dangerous extent to which cyber can affect national security, while Estonia showed us that cyber operations are enough to stall a country in its tracks. Much like international law itself, we are still drawing the boundaries of cyber harms. And that is why no single norm-building method will suffice: it will simply be too slow to keep pace with developments in cyber. So let us employ a multitude of methods. Within a year, the landscape of international law norms in cyber will look very different, and if we are observant, we can stay ahead of the developments, even as technology leads the way.


Celebrating One Year of the Puttaswamy Judgment (August 24, 6.00 pm, IIC)

Celebrating One Year of the Justice K.S. Puttaswamy v. Union of India Judgment

August 24, 2018

6.00 pm onwards

organized by

Indian Council for Research on International Economic Relations

and

Centre for Communication Governance at National Law University Delhi


Conference Hall – 2 | India International Centre | Max Mueller Marg | New Delhi



In a landmark decision on August 24th last year, a nine-judge bench of the Supreme Court unanimously upheld the fundamental right to privacy.

More recently, a committee headed by Justice B.N. Srikrishna submitted a report and a draft bill on data protection. Public comments on the bill are due by early next month, and the Supreme Court’s judgment on the Aadhaar challenge is imminent. There have also been other developments in this context, such as the RBI’s data localisation directive, the DNA profiling bill, the draft information security in healthcare bill, the data localisation provisions in the e-commerce policy, and the government’s recently withdrawn proposal to create a social media communications hub.

To commemorate the anniversary of the judgment, and to discuss the recently released Data Protection Bill and related issues, we are hosting this discussion on privacy and data protection.

Timings Programme
6.00 – 6.15 pm Tea & Coffee
6.15 – 6.20 pm Initial Remarks
6.20 – 6.40 pm State of Privacy in India & the Challenges to Realising Puttaswamy’s Promise

Dr. Usha Ramanathan, Independent Law Researcher

6.40 – 7.30 pm Data Protection for a Free and Fair Digital Economy

The recently released draft data protection framework recognises the need to balance privacy with a free and fair digital economy. It articulates some of the benefits of big data and encourages its growth. However, it has been argued that compliance with such a framework will require current business models to change. Additionally, the stringent provisions mandating the jurisdiction for the processing of personal data, and the wide discretion given to the central government and the regulatory authority, raise questions about the framework’s impact on the second largest online market in the world, home to nearly 500 million active Internet users and the businesses located in it.

Moderated by: Mansi Kedia, Consultant, Indian Council for Research on International Economic Relations (ICRIER)

Madhulika Srikumar, Associate Fellow, Observer Research Foundation

Malavika Raghavan, Project Head – Future of Finance Initiative, Dvara Research

Nehaa Chaudhari, Public Policy Lead, TRA Law

Smriti Parsheera, Technology Policy Researcher, National Institute of Public Finance and Policy (NIPFP)

7.30 – 8.20 pm Legacy of the Justice K.S. Puttaswamy v. Union of India Judgment

The court pronounced a landmark judgment last year; however, it remains to be examined whether judicial and legislative developments in India over the past year have upheld the principles enumerated in it. These include the proposed data protection framework, and the ongoing hearings on the right to be forgotten, Aadhaar, Section 377 and adultery, among others.

Moderated by: Apurva Vishwanath, Special Correspondent, ThePrint

Kritika Bhardwaj, Lawyer, Supreme Court of India

Shweta Mohandas, Policy Officer, Centre for Internet & Society

Smitha K. Prasad, Civil Liberties Lead, Centre for Communication Governance at National Law University Delhi

Ujwala Uppaluri, Lawyer, Supreme Court of India

8.20 pm onwards Dinner

Cyber Diplomacy: Towards A New Cybersecurity Strategy

Cyberspace has become a focal point of international relations. With most global powers having realized that cybersecurity is integral to their national security, cyber issues have found their place in foreign policy, resulting in the emergence of cyber diplomacy.

Cyber diplomacy is the use of traditional diplomatic tools – negotiations, the formation of alliances, treaties, and agreements – to resolve issues that arise in cyberspace. The United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (UN GGE) is one of the most high-profile cyber diplomacy exercises at the global level. The UN GGE was formed, subsequent to the adoption of digital security as a UN agenda item, to examine threats emanating from cyberspace and to develop appropriate cooperative measures to address them. Several multilateral organizations, such as NATO, ASEAN and BRICS, to name a few, are also increasingly serving as platforms for cyber diplomacy. This post will briefly explore the role of cyber diplomacy in enabling cybersecurity by analyzing the relevance of a few major cyber diplomacy efforts in developing a sustainable and stable cyberspace.

The Role of Cyber Diplomacy

Society’s increasing reliance on the internet and digital technologies is accompanied by security challenges in the form of various malicious activities, including hacking, espionage, cyber attacks, and cyber war. These challenges arise from a domain that lacks a formal, institutionalized regime to regulate and oversee the conduct of its actors. Unless there is global consensus on regulating cyberspace, the potential to wreak havoc remains unbridled. Considering the transnational nature of cyberspace, a secure cyber environment can be established only through global engagement, dialogue, and cooperation, making cyber diplomacy the only viable means to achieve this goal. Diplomatic efforts to stabilize cyberspace have primarily focused on three areas: the establishment of cyber norms, confidence-building measures (CBMs), and capacity building.

Norms in Cyberspace

The increasing exploitation of cyberspace by states for political and military objectives creates a need for norms that lay down what states can and cannot do online. Cyber norms are voluntary guidelines, adopted by states, that promote stability in cyberspace. Establishing these norms would help develop a shared understanding among states on how to work together in matters of mutual concern. Moreover, the continued observation of norms created through practice or formal agreements helps them gain legitimacy, gradually resulting in their evolution into international law. The norm that cyber-enabled theft of intellectual property for commercial gain is unacceptable, which developed out of a US-China bilateral agreement, is an example of a successful norm that has gradually gained recognition amongst other states and within the G20.

Norms are non-binding guidelines for the conduct of relevant actors, with an element of good-faith commitment and limited consequences in the event of non-compliance. Treaties, on the other hand, are binding agreements that are readily enforceable. Although norms seem weaker than treaties, they can have a powerful impact. When nuclear weapons were developed, they were simply considered a more powerful form of traditional weapons until norms against their use developed, making their use unthinkable in ordinary circumstances. Creating norms could, over time, help establish benchmarks for acceptable behavior in the cyber domain.

Challenges to Norm Creation

Developing cybersecurity norms is extremely challenging due to the unique nature of cyberspace, the diverse interests of the parties, and the broad scope of the issues involved. The use of contrasting terms – cyber security by the US and its allies, and information security by the Sino-Russian bloc – indicates the difference in what each group perceives as a threat. While the former focuses on the protection of data and hardware from unauthorized access, the latter focuses on the content of information itself, which runs against the idea of the open Internet advocated by the former. Unless these radically incompatible conceptions of security in cyberspace are reconciled, the process of norm creation is likely to remain stalled.

Confidence Building Measures in Cyberspace

While norms help establish acceptable behavior in cyberspace, the difficulty of forming cyber norms calls for an alternative means of defusing distrust and misunderstandings among states. CBMs have emerged as that solution. CBMs are measures adopted at the regional and global levels that enhance transparency and facilitate the exchange of information, helping states assess each other’s activities, understand their intentions, and thereby reduce the risk of a cyber war. For instance, the practice of transparency enables states to distinguish between defensive and offensive cyber investments by enhancing situational awareness and building common understanding.

Furthermore, CBMs are instrumental in ensuring effective compliance with norms. The norm according to which states should not knowingly allow their territories to be used for unlawful acts using information and communication technologies (ICTs) requires states to employ all the instruments at their disposal to this end. However, proving such knowledge is difficult. In such instances, information exchange and cooperation during investigations help in determining compliance. Such CBMs also aid states in implementing the norm by enhancing their capacity. In the absence of CBMs, cyber norms will provide merely an illusion of stability.

Capacity Building in Cyberspace

Not all states stand on an equal footing in terms of cyber capacity, especially new entrants to the cyber domain. It is therefore necessary to ensure that all states have at least the baseline capacity to participate in the development and implementation of norms and CBMs and to protect their critical information infrastructure. The 2015 UN GGE report also recognized the link between compliance with norms and CBMs and capacity building. Cyber diplomacy can help enhance the human, institutional, technological and legal capacities of states through formal and informal agreements.

The Way Forward

The development of cyber norms has proven difficult. With the breakdown of the UN GGE – the only venue that brought together the Sino-Russian and Western blocs for norm discussions – prospects for the formation of norms in the near future appear slim.

CBMs seem to be the most promising avenue for establishing stability in the cyber domain, since they do not require states to agree on a shared set of principles, but instead focus on fostering cooperation despite their differences, as states have a shared interest in stability. Bilateral engagements between states would be the ideal platform to deepen cooperation and establish CBMs. A few of the more successful bilateral agreements between opposing global powers have resulted in effective CBMs, such as real-time communication channels and mutual assistance, that compensate for limited trust.

With the effective implementation of CBMs, which build trust and eliminate misunderstandings, there is hope for the gradual development of norms, and thereby for a safe and secure cyberspace.

The Proposed Regulation of DNA Profiling Raises Critical Privacy Concerns

The Union Cabinet recently approved the DNA Technology (Use and Application) Regulation Bill, 2018 (“DNA Profiling Bill”), which is scheduled to be introduced in Parliament today (31st July). The Bill is largely based on the 2017 Law Commission Report on “Human DNA Profiling – A draft Bill for the Use and Regulation of DNA-Based Technology”, which seeks to expand “the application of DNA-based forensic technologies to support and strengthen the justice delivery system of the country”.

Apart from identifying suspects and maintaining a registry of offenders, the Bill seeks to enable cross-matching between missing persons and unidentified dead bodies, and to establish victim identity in mass disasters.

Features of the Bill:

The Bill envisages the setting up of a DNA profiling board which shall function as the regulatory authority and lay down guidelines, standards and procedures for the functioning of DNA laboratories and grant them accreditation. The board will also assist the government in setting up new data banks and advise the government on “all issues relating to DNA laboratories”. In addition, it will make recommendations on legislation and practices relating to privacy issues around storage and access to DNA samples.  

DNA data banks will also be established, consisting of a national data bank as well as the required number of regional data banks. Regional data banks must mandatorily share all their information with the national data bank. Every data bank shall maintain databases of five categories of data – crime scenes, suspects or undertrials, offenders, missing persons, and unknown deceased persons.

The 2017 draft has made significant changes to address concerns raised about the previous 2015 draft. These include removing the index of voluntarily submitted DNA profiles, deleting the provision allowing the DNA profiling board to create any other index as necessary, detailing serious offences for DNA collection, divesting the database manager of discretionary powers, and introducing redressal mechanisms by allowing any aggrieved person to approach the courts. Additionally, it has added legislative provisions authorising licensed laboratories, police stations and courts to collect and analyse DNA from certain categories of people, store it in data banks and use it to identify missing/ unidentified persons and as evidence during trial.

The new Bill has attempted to address previous concerns by limiting the purpose of DNA profiling, stating that it shall be undertaken exclusively for identification of a person and not to extract any other information. Safeguards have been put in place against misuse in the form of punishments for disclosure to unauthorised persons.

The Bill mandates the consent of an accused before collection of bodily substances for offences other than those specified. However, any refusal, if considered to be without good cause, can be disregarded by a Magistrate if there is reasonable cause to believe that such substances can prove or disprove guilt. Any person present during the commission of a crime, questioned regarding a crime, or seeking a missing family member, may volunteer in writing to provide bodily substances. The collection of substances from minors and disabled persons requires the written consent of their parents or guardians, and collection from victims or relatives of missing persons requires the written consent of the victim or relative. Details of persons who are not offenders or suspects in a crime cannot be compared against the offenders’ or suspects’ index, and any communication of details can only be to authorised persons.

Areas of Concern:

Although the Bill claims that DNA testing is 99.9% foolproof, doubts have recently been raised about the possibility of a higher error rate than previously claimed. This highlights the need for the proposed legislation to provide safeguards in the event of error or abuse.

The issue of security of all the data concentrated in data banks is of paramount importance in light of its value to both government and private entities. The Bill fails to clearly spell out restrictions or to specify who has access to these data banks.

Previous iterations of the Bill have prompted civil society to express their reservations about the circumstances under which DNA can be collected, issues of consent to collection, access to and retention of data, and whether such information can be exploited for purposes beyond those envisaged in the legislation. As in the case of Aadhaar, important questions arise regarding how such valuable genetic information will be safeguarded against theft or contamination, and to what extent this information can be accessed by different agencies. The present Bill has reduced the number of CODIS loci that can be processed from 17 to 13, thus restricting identification only to the necessary extent. However, this provision has not been explicitly stated in the provisions of the legislation itself, casting doubt over the manner in which it will be implemented.

Written consent is mandatory before obtaining a DNA sample; however, the withholding of consent can be overruled by a Magistrate if deemed necessary. An individual’s DNA profile can only be compared against the crime scene, missing persons or unknown deceased persons indices. A court order is required to expunge the profile of an undertrial or a suspect, whose profile can also be removed after the filing of a police report. Any person who is not a suspect or a convicted offender can only have their profile removed on a written petition to the director of the data bank. The consent clause is also waived where a person has been accused of a crime punishable by death or by more than seven years in prison. However, the Bill is silent on how such a person’s profile is to be removed on acquittal.

Moreover, the Bill states that “the information contained in the crime scene index shall be retained”. The crime scene index captures a much wider data set as compared to the offenders’ index, since it includes all DNA evidence found around the crime scene, on the victim, or on any person who may be associated with the crime. The indefinite retention of most of these categories of data is unnecessary, as well as contrary to earlier provisions that provide for such data to be expunged. However, the government has claimed that such information will be removed “subject to judicial orders”. Importantly, the Bill does not contain a sunset provision that would ensure that records are automatically expunged after a prescribed period.

While the Bill provides strict penalties for deliberate tampering or contamination of biological evidence, the actual mechanisms for carrying out quality control and analysis have been left out of the parent legislation and left to the purview of the rules.

Crucially, the Bill does not explicitly set out privacy and security protections – safeguards for implementation, limits on the use and dissemination of genetic information, and security and confidentiality requirements – within the legislation itself, leaving such considerations to the purview of regulation (and outside parliamentary oversight). The recently released Personal Data Protection Bill, 2018 does little to allay these concerns. Under that Bill, DNA banks would be classified as significant data fiduciaries, and thus subject to audits, data protection impact assessments, and the appointment of a data protection officer. However, although genetic information is classified as sensitive personal data, the Data Protection Bill does not provide sufficient safeguards against the processing of such data by the State. In light of the proposed data protection framework, and the Supreme Court’s confirmation that the right to privacy (including the right to bodily integrity) is a fundamental right, the DNA Profiling Bill in its present form cannot be implemented without violating the fundamental right to privacy.

The Personal Data Protection Bill, 2018

After months of speculation, the Committee of Experts on data protection (“Committee”), led by Justice B.N. Srikrishna, submitted its recommendations and a draft data protection bill to the Ministry of Electronics and Information Technology (“MEITY”) today. As we sit down for some not-so-light weekend reading to understand what our digital futures could look like if the Committee’s recommendations are adopted, this series puts together a quick summary of the Personal Data Protection Bill, 2018 (“Bill”).

Scope and definitions

The Committee appears to have moved forward with the idea of a comprehensive, cross-sectoral data protection legislation, as advocated in its white paper published late last year. The Bill is meant to apply to (i) the processing of any personal data that has been collected, disclosed, shared or otherwise processed in India; and (ii) the processing of personal data by the Indian government, or by any Indian company, citizen, or person or body of persons incorporated or created under Indian law. It also applies to persons outside India that process the personal data of individuals in India. It does not apply to the processing of anonymised data.

The Bill continues to use a two-level approach in defining the types of information to which the law applies. However, the definitions of personal data and sensitive personal data have been expanded significantly when compared to the definitions in our current data protection law.

Personal data includes “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, or any combination of such features, or any combination of such features with any other information”. The move towards relying on ‘identifiability’, when read together with definitions of terms such as ‘anonymisation’, which focuses on irreversibility of anonymisation, is welcome, given that section 2 clearly states that the law will not apply in relation to anonymised data. However, the ability of data processors / the authority to identify whether an anonymisation process is irreversible in practice will need to be examined, before the authority sets out the criteria for such ‘anonymisation’.

Sensitive personal data, on the other hand, continues to be defined in the form of a list of different categories, albeit a much more expansive one, which now includes information such as official identifiers, sex life, genetic data, transgender status, intersex status, caste or tribe, and religious and political affiliations or beliefs.

Interestingly, the Committee has moved away from traditional data protection language such as ‘data subject’ and ‘data controller’ – arguing instead that the relationship between an individual and a person or organisation processing their data is better characterised as a fiduciary relationship. Justice Srikrishna emphasised this point during the press conference organised at the time of submission of the report, noting that personal data is not to be considered property.

Collection and Processing

The Bill elaborates on the notice and consent mechanisms to be adopted by ‘data fiduciaries’, and accounts for both data that is directly collected from the data principal, and data that is obtained via a third party. Notice must be given at the time of collection of personal data, and where data is not collected directly, as soon as possible. Consent must be obtained before processing.

The Committee’s earlier white paper and the report accompanying the Bill have both discussed the pitfalls of a data protection framework that relies so heavily on consent – noting that consent is often neither informed nor meaningful. The report, however, also notes that it may not be feasible to do away with consent altogether, and tries to address the issue by adopting higher standards for consent and purpose limitation. The Bill also provides that consent is to be only one of the grounds for processing personal data. However, this seems to result in some catch-all provisions allowing processing for ‘reasonable purposes’. While it appears that these reasonable purposes may need to be pre-determined by the data protection authority, the impact of this section will need to be examined in greater detail. The other such wide provision in this context appears to allow the State to process data – another provision that will need more examination.

Sensitive personal data

Higher standards have been proposed for the processing of sensitive personal data, as well as personal / sensitive personal data of children. The emphasis on the effect of processing of certain types of data, keeping in mind factors such as the harm caused to a ‘discernible class of persons’, or even the provision of counselling or child protection services in these sections is welcome. However, there remains a wide provision allowing for the State to process sensitive personal data (of adults), which could be cause for concern.

Rights of data principals

The Bill also proposes four rights for data principals: the right to confirmation and access, the right to correction, the right to data portability, and the right to be forgotten. There appears to be no standalone right to erasure, apart from a general obligation on the data fiduciary to delete data once the purpose for its collection or processing has been met. The Bill also imposes certain procedural requirements on data principals exercising these rights – an issue which, as some have already pointed out, may be cause for concern.

Transparency and accountability

The Bill requires all data fiduciaries to adopt privacy by design, transparency and security measures.

Data fiduciaries notified as ‘significant data fiduciaries’ are additionally required to appoint a data protection officer, conduct data protection impact assessments before adopting certain types of processing, maintain records of data processing, and undergo regular data protection audits. Notification as a significant data fiduciary depends on criteria such as the volume and sensitivity of personal data processed, the risk of harm, the use of new technology, and the turnover of the data fiduciary.

The requirement for data protection impact assessments is interesting – an assessment must be conducted before a fiduciary undertakes any processing involving new technologies, large-scale profiling, or the use of sensitive personal data such as genetic or biometric data (or any other processing which carries a risk of significant harm to data principals). If the data protection authority concludes, based on the assessment, that such processing may cause harm, it may direct the fiduciary to cease the processing, or impose conditions on it. The language here implies that these requirements could apply to processing by the State or private actors where new technology is used in relation to Aadhaar, among other things. However, as mentioned above, this will be subject to the data fiduciary in question being notified as a ‘significant data fiduciary’.

In a welcome move, the Bill also provides a process for notification in the case of a breach of personal data by data fiduciaries. However, this requirement is limited to notifying the data protection authority, which then decides whether there is a need to notify the data principal involved. It is unfortunate that the Committee has chosen to limit the rights of data principals in this regard, making them rely instead on the authority to even be notified of a breach that could potentially harm them.

Cross border transfer of data

In what has already become a controversial move, the Bill proposes that at least one copy of all personal data covered under the law should be stored on a server or data centre located in India. In addition, the central government (not the data protection authority) may notify additional categories of data as ‘critical’, to be stored only in India.

Barring exceptions in the case of health / emergency services, and transfers to specific international organisations, all transfer of personal data outside India will be subject to the approval of the data protection authority, and in most cases, consent of the data principal.

This approval may be in the form of approval of standard contractual clauses applicable to the transfer, or a blanket approval of transfers to a particular country / sector within a country.

This provision is ostensibly in the interest of data principals, and works towards ensuring a minimum standard of data protection. As with many other provisions, including those relating to breach notifications to the data principal, the protection it offers will depend on the proper functioning of the data protection authority. In the past, even simple steps, such as the notification of security standards under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011, have gone unimplemented for years.

In the next post in this series, we will discuss the functions of the authority, and other provisions in the Bill, including the exemptions granted, and penalties and remedies provided for.

India’s Artificial Intelligence Roadmap

There is now a near-universal perception that Artificial Intelligence technologies are set to disrupt every sphere of life. However, this is coupled with concern regarding the social, ethical (and even existential) challenges that AI might present. As a consequence, there has been an uptick in interest among governments in how best to marshal the development of these technologies. The United Kingdom, the United States, China, and France, among others, have all released vision documents exploring these themes.

This post, the first in a series, presents a brief overview of such initiatives by the Indian government. Subsequent posts will focus specifically on their treatment of personal data, as well as their consideration of ethical issues posed by AI.


Task Force on Artificial Intelligence

In August 2017, the Ministry of Commerce and Industry set up a ‘Task Force on Artificial Intelligence for India’s Economic Transformation’. A panel of 18 members was formed with the objective of exploring how Artificial Intelligence could be best deployed in India.

The Task Force released its Report in May 2018, characterizing AI as a ‘socio-economic problem solver at a large scale’, rather than simply a booster for economic growth. It sought to identify domains that would benefit from government intervention, with the objective of improving quality of life and generating employment. The report identifies ten sectors where AI could be deployed: Manufacturing, FinTech, Healthcare, Agriculture and Food Processing, Education, Retail, Accessibility Technology, Environment, National Security, and Public Utility Services. It attempts to identify challenges specific to each sector, as well as enabling factors that could promote the adoption of AI.

The report also explores the predicted impact of AI on employment, as well as other broader social and ethical implications of the technology. It concludes with a set of recommendations for the government of India. A primary recommendation is to constitute an Inter-Ministerial National Artificial Intelligence Mission (N-AIM) with a 5 year budget of Rs. 1200 Crores. Other recommendations focus on creating an ecosystem for better availability of data for AI applications; skilling and education initiatives focused on AI; standard setting, as well as international participation in standard setting processes.

NITI Aayog’s National Strategy for Artificial Intelligence

In his Budget speech, the Finance Minister had tasked the NITI Aayog with formulating a national programme for Artificial Intelligence. In June 2018, the NITI Aayog released its roadmap in the form of the National Strategy for Artificial Intelligence.

The paper frames India’s AI ambitions in terms of increasing economic growth, social development, and positioning India as an incubator for technology that can cater to other emerging economies. It focuses on five sectors as avenues for AI-led intervention: healthcare, agriculture, education, smart cities, and smart mobility. It also identifies some key challenges to the effective adoption of AI, including low awareness, research, and expertise in AI, along with an absence of collaboration; the lack of ecosystems that enable access to usable data; high resource costs; and ill-adapted regulations.

The paper then presents a series of recommendations to address some of these issues. To expand AI research in India, it proposes a two-tier framework covering both basic and application-based research. It also proposes the creation of a common computing platform to pool cloud infrastructure and reduce infrastructural requirements for research institutions, and suggests a review of the intellectual property framework to enable greater AI innovation. To foster international collaboration, the paper proposes the creation of a supranational, CERN-like entity for AI. It also recommends skilling and education initiatives to address job creation, as well as the current lack of AI expertise. To accelerate adoption, it proposes a platform for sharing government datasets, along with a marketplace model for data collection and aggregation, data annotation, and deployable AI models.

The paper concludes with its recommendations for ‘responsible’ AI development. It recommends that there be a consortium of the Ethics Councils at each of the AI research institutions. It further proposes the creation of a Centre for Studies on Technology Sustainability. It also emphasizes the importance of fostering research on privacy preserving technology, along with general and sectoral privacy regulations.

Further reports suggest that a task force will be set up to execute the proposals that have been made, in coordination with the relevant ministries.

MeitY Committees

It has also been reported that four committees were constituted in February 2018 to deliberate on issues of ‘data for AI, applications of AI, skilling and cyber security/legal, ethical issues.’ However, there have been no reports on when the committees will present their recommendations, or whether these will be made available to the public.


India appears to be at a nascent stage in formulating its approach towards Artificial Intelligence. Even so, it is encouraging that the government recognizes the importance of its stewardship. Purely market-led development of AI could bring all of its disruption without any of the envisaged social benefits.

The General Data Protection Regulation and You

A cursory look at your email inbox this past month presents an intriguing trend. Multiple online services seem to have taken it upon themselves to notify changes to their Privacy Policies at the same time. The reason, simply, is that the European Union’s General Data Protection Regulation (GDPR) comes into force on May 25, 2018.

The GDPR marks a substantial overhaul of the existing data protection regime in the EU, as it replaces the earlier ‘Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data.’ The Regulation was adopted by the European Parliament in 2016, with a period of almost two years to allow entities sufficient time to comply with their increased obligations.

The GDPR is an attempt to harmonize and strengthen data protection across Member States of the European Union. CCG has previously written about the Regulation and what it entails here. For one, the instrument is a ‘Regulation’, as opposed to a ‘Directive’. A Regulation is directly binding across all Member States in its entirety. A Directive simply sets out a goal that all EU countries must achieve, but allows them discretion as to how. Member States must enact national measures to transpose a Directive, and this can sometimes lead to a lack of uniformity across Member States.

The GDPR introduces, among other things, additional rights and protections for data subjects. This includes, for instance, the introduction of the right to data portability, and the codification of the controversial right to be forgotten. Our writing on these concepts can be found here, and here. Another noteworthy change is the substantial sanctions that can be imposed for violations. Entities that fall foul of the Regulation may have to pay fines up to 20 million Euros, or 4% of global annual turnover, whichever is higher.

The Regulation also has consequences for entities and users outside the EU. First, the Regulation has expansive territorial scope, and applies to non-EU entities if they offer goods and services to individuals in the EU, or monitor the behavior of individuals in the EU. The EU is also a significant digital market, which allows it to nudge other jurisdictions towards the standards it adopts. The Regulation (like the earlier Directive) restricts the transfer of personal data to entities outside the EU to cases where an adequate level of data protection can be ensured. This has prompted many countries to adopt regulation in line with EU standards. In addition, with the implementation of the GDPR, companies that operate in multiple jurisdictions might prefer to maintain parity between their data protection policies. Microsoft, for instance, has announced that it will extend core GDPR protections to its users worldwide. As a consequence, many of the protections offered by the GDPR may in effect become available to users in other jurisdictions as well.

The implementation of the GDPR is also of particular significance to India, which is currently in the process of formulating its own data protection framework. The Regulation represents a recent attempt by a jurisdiction (that typically places a high premium on privacy) to address the harms caused by practices surrounding personal data. The lead-up to its adoption and implementation has generated much discourse on data protection and privacy. This can offer useful lessons as we debate the scope and ambit of our own data protection regulation.