Cyber Diplomacy: Towards A New Cybersecurity Strategy

Cyberspace has become a focal point of international relations. With most global powers having realized that cybersecurity is integral to their national security, cyber issues have found a place in foreign policy, resulting in the emergence of cyber diplomacy.

Cyber diplomacy is the use of traditional diplomatic tools, including negotiations, the formation of alliances, treaties, and agreements, to resolve issues that arise in cyberspace. The United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (UN GGE) is one of the most high-profile cyber diplomacy exercises at the global level. The UN GGE was formed, subsequent to the adoption of digital security as a UN agenda item, to examine threats emanating from cyberspace and to develop appropriate cooperative measures to address them. Several multilateral organizations, such as NATO, ASEAN, and BRICS, are also increasingly serving as platforms for cyber diplomacy. This post briefly explores the role of cyber diplomacy in enabling cybersecurity by analyzing the relevance of a few major cyber diplomacy efforts in developing a sustainable and stable cyberspace.

The Role of Cyber Diplomacy

Society’s increasing reliance on the internet and digital technologies is accompanied by security challenges in the form of various malicious activities, including hacking, espionage, cyber attacks, and cyber war. These challenges arise from a domain that lacks a formal, institutionalized regime to regulate and oversee the conduct of its actors. Unless there is a global consensus on regulating cyberspace, the potential to wreak havoc remains unbridled. Given the transnational nature of cyberspace, a secure cyber environment can be established only through global engagement, dialogue, and cooperation, making cyber diplomacy the only viable means to achieve this goal. Diplomatic efforts to stabilize cyberspace have primarily focused on three areas: the establishment of cyber norms, confidence building measures (CBMs), and capacity building.

Norms in Cyberspace

The increasing exploitation of cyberspace by states for political and military objectives creates a need for norms that lay down what states can and cannot do online. Cyber norms are voluntary guidelines adopted by states to promote stability in cyberspace. Establishing these norms would help develop a shared understanding among states on how to work together in matters of mutual concern. Moreover, continued observance of norms created through practice or formal agreement helps them gain legitimacy among other states, gradually resulting in their evolution into international law. The norm that cyber-enabled theft of intellectual property for commercial gain is unacceptable developed out of a US-China bilateral agreement, and is an example of a successful norm that has gradually gained recognition among other states and the G20.

Norms are non-binding guidelines for the conduct of relevant actors, with an element of good-faith commitment and limited consequences in the event of non-compliance. Treaties, on the other hand, are binding agreements that are readily enforceable. Although norms seem weaker than treaties, they can have a powerful impact. When nuclear weapons were developed, they were simply considered a more powerful form of traditional weapons until norms against their use developed, making their use unthinkable in ordinary circumstances. Creating norms could, over time, help establish benchmarks for acceptable behavior in the cyber domain.

Challenges to Norm Creation

Developing cybersecurity norms is extremely challenging due to the unique nature of cyberspace, diverse interests of the parties, and the broad scope of issues involved. The use of contrasting terms – cyber security and information security – by the US and its allies and the Sino-Russian bloc respectively indicates the difference in what is perceived as a threat by the groups. While the former focuses on the protection of data and hardware from unauthorized access, the latter focuses on the content of the information, which goes against the idea of Open Internet advocated by the former. Unless these radically incompatible perceptions on the very concept of security in cyberspace are reconciled, the process of norm creation is likely to be stalled.

Confidence Building Measures in Cyberspace

While norms help establish acceptable behavior in cyberspace, the difficulty of forming cyber norms calls for an alternative means to defuse distrust and misunderstanding among states. CBMs have emerged as that solution. CBMs are measures adopted at regional and global levels that enhance transparency and facilitate the exchange of information, helping states assess each other’s activities, understand their intentions, and thereby reduce the risk of a cyber war. For instance, the practice of transparency enables states to distinguish between defensive and offensive cyber investments by enhancing situational awareness and building common understanding.

Furthermore, CBMs are instrumental in ensuring effective compliance with norms. The norm according to which states should not knowingly allow their territories to be used for unlawful acts involving information and communication technologies (ICTs) requires states to employ all their instruments to this end. However, proving such knowledge is difficult. In such instances, information exchange and cooperation during investigations help in determining compliance. Such CBMs also aid states in implementing the norm by enhancing capacity. In the absence of CBMs, cyber norms would merely provide an illusion of stability.

Capacity Building in Cyberspace

Not all states stand on an equal footing in terms of cyber capacity, especially new entrants to the cyber domain. However, it is necessary to ensure that all states have at least the baseline capacity to participate in the development and implementation of norms and CBMs and to protect their critical information infrastructure. The UN GGE 2015 also recognized the link between compliance with norms and CBMs and capacity building. Cyber diplomacy can help enhance the human, institutional, technological, and legal capacities of states through formal and informal agreements.

The Way Forward

Development of cyber norms has proven to be difficult. With the breakdown of the UN GGE, the only venue that brought together the Sino-Russian and the Western blocs for norm discussion, prospects for the formation of norms in the near future appear to be slim.

CBMs seem to be the most promising avenue for establishing stability in the cyber domain: they do not require states to agree on a shared set of principles, but instead focus on fostering cooperation despite differences, since states have a shared interest in stability. Bilateral engagement would be the ideal platform to deepen cooperation and establish CBMs. A few of the more successful bilateral agreements between opposing global powers have produced effective CBMs, such as real-time communication and assistance, to compensate for limited trust.

With the effective implementation of CBMs, which establish trust and eliminate misunderstandings, there is hope for the gradual development of norms, and thereby a safe and secure cyberspace.


The Proposed Regulation of DNA Profiling Raises Critical Privacy Concerns

The Union Cabinet recently approved the DNA Technology (Use and Application) Regulation Bill, 2018 (“DNA Profiling Bill”), which is scheduled to be introduced in Parliament today (31st July). The Bill is largely based on the 2017 Law Commission Report on “Human DNA Profiling – A draft Bill for the Use and Regulation of DNA-Based Technology”, and seeks to expand “the application of DNA-based forensic technologies to support and strengthen the justice delivery system of the country”.

Apart from identifying suspects and maintaining a registry of offenders, the Bill seeks to enable cross-matching between missing persons and unidentified dead bodies, and to establish victim identity in mass disasters.

Features of the Bill:

The Bill envisages the setting up of a DNA profiling board, which shall function as the regulatory authority, lay down guidelines, standards, and procedures for the functioning of DNA laboratories, and grant them accreditation. The board will also assist the government in setting up new data banks and advise the government on “all issues relating to DNA laboratories”. In addition, it will make recommendations on legislation and practices relating to privacy issues around storage of and access to DNA samples.

DNA data banks will also be established, consisting of a national data bank as well as the required number of regional data banks. Regional data banks must mandatorily share all their information with the national data bank. Every data bank shall maintain databases of five categories of data – crime scenes, suspects or undertrials, offenders, missing persons, and unknown deceased persons.

The 2017 draft has made significant changes to address concerns raised about the previous 2015 draft. These include removing the index of voluntarily submitted DNA profiles, deleting the provision allowing the DNA profiling board to create any other index as necessary, detailing serious offences for DNA collection, divesting the database manager of discretionary powers, and introducing redressal mechanisms by allowing any aggrieved person to approach the courts. Additionally, it has added legislative provisions authorising licensed laboratories, police stations and courts to collect and analyse DNA from certain categories of people, store it in data banks and use it to identify missing/ unidentified persons and as evidence during trial.

The new Bill has attempted to address previous concerns by limiting the purpose of DNA profiling, stating that it shall be undertaken exclusively for identification of a person and not to extract any other information. Safeguards have been put in place against misuse in the form of punishments for disclosure to unauthorised persons.

The Bill mandates the consent of an accused before collection of bodily substances for offences other than those specified. However, any refusal, if considered to be without good cause, can be disregarded by a Magistrate if there is reasonable cause to believe that such substances can prove or disprove guilt. Any person present during the commission of a crime, questioned regarding a crime, or seeking a missing family member, may volunteer in writing to provide bodily substances. The collection of substances from minors and disabled persons requires the written consent of their parents or guardians. Collection from victims or relatives of missing persons requires the written consent of the victim or relative. Details of persons who are not offenders or suspects in a crime cannot be compared to the offenders’ or suspects’ index, and any communication of details can only be to authorised persons.

Areas of Concern:

Although the Bill claims that DNA testing is 99.9% foolproof, doubts have recently been raised about the possibility of a higher error rate than previously claimed. This highlights the need for the proposed legislation to provide safeguards in the event of error or abuse.

The issue of security of all the data concentrated in data banks is of paramount importance in light of its value to both government and private entities. The Bill fails to clearly spell out restrictions or to specify who has access to these data banks.

Previous iterations of the Bill have prompted civil society to express their reservations about the circumstances under which DNA can be collected, issues of consent to collection, access to and retention of data, and whether such information can be exploited for purposes beyond those envisaged in the legislation. As in the case of Aadhaar, important questions arise regarding how such valuable genetic information will be safeguarded against theft or contamination, and to what extent this information can be accessed by different agencies. The present Bill has reduced the number of CODIS loci that can be processed from 17 to 13, thus restricting identification only to the necessary extent. However, this provision has not been explicitly stated in the provisions of the legislation itself, casting doubt over the manner in which it will be implemented.

Written consent is mandatory before obtaining a DNA sample; however, withholding of consent can be overruled by a Magistrate if deemed necessary. An individual’s DNA profile can only be compared against the crime scene, missing person, or unknown deceased person indices. A court order is required to expunge the profile of an undertrial or a suspect, whose profile can also be removed after the filing of a police report. Any person who is not a suspect or a convicted offender can only have their profile removed on a written petition to the director of the data bank. The consent clause is also waived if a person has been accused of a crime punishable either by death or by more than seven years in prison. However, the Bill is silent on how such a person’s profile is to be removed on acquittal.

Moreover, the Bill states that “the information contained in the crime scene index shall be retained”. The crime scene index captures a much wider data set as compared to the offenders’ index, since it includes all DNA evidence found around the crime scene, on the victim, or on any person who may be associated with the crime. The indefinite retention of most of these categories of data is unnecessary, as well as contrary to earlier provisions that provide for such data to be expunged. However, the government has claimed that such information will be removed “subject to judicial orders”. Importantly, the Bill does not contain a sunset provision that would ensure that records are automatically expunged after a prescribed period.

While the Bill provides strict penalties for deliberate tampering or contamination of biological evidence, the actual mechanisms for carrying out quality control and analysis have been left out of the parent legislation and left to the purview of the rules.

Crucially, the Bill has not explicitly defined privacy and security protections such as implementation of safeguards, use and dissemination of genetic information, security and confidentiality and other privacy concerns within the legislation itself – leaving such considerations to the purview of regulation (and out of parliamentary oversight). The recently released Personal Data Protection Bill, 2018 does little to allay these concerns. As per this Bill, DNA Banks will be classified as significant data fiduciaries, and thus subject to audits, data protection impact assessments, and appointment of a special data protection officer. However, although genetic information is classified as sensitive personal data, the Data Protection Bill does not provide sufficient safeguards against the processing of such data by the State. In light of the proposed data protection framework, and the Supreme Court confirming that the right to privacy (including the right to bodily integrity) is a fundamental right, the DNA Profiling Bill as it stands in its present form cannot be implemented without violating the fundamental right to privacy.

The Personal Data Protection Bill, 2018

After months of speculation, the Committee of Experts on data protection (“Committee”), led by Justice B N Sri Krishna, has submitted its recommendations and a draft data protection bill to the Ministry of Electronics and Information Technology (“MEITY”) today. As we sit down for some not-so-light weekend reading to understand what our digital futures could look like if the committee’s recommendations are adopted, this series puts together a quick summary of the Personal Data Protection Bill, 2018 (“Bill”).

Scope and definitions

The Committee appears to have moved forward with the idea of a comprehensive, cross-sectoral data protection legislation that was advocated in its white paper published late last year. The Bill is meant to apply to (i) the processing of any personal data, which has been collected, disclosed, shared or otherwise processed in India; and (ii) the processing of personal data by the Indian government, any Indian company, citizen, or person / body of persons incorporated or created under Indian law. It also applies to any persons outside of India that engage in processing personal data of individuals in India. It does not apply to the processing of anonymised data.

The Bill continues to use the two-level approach in defining the type of information that the law applies to. However, the definitions of personal data and sensitive personal data have been expanded significantly when compared to the definitions in our current data protection law.

Personal data includes “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, or any combination of such features, or any combination of such features with any other information”. The move towards relying on ‘identifiability’, when read together with definitions of terms such as ‘anonymisation’, which focuses on irreversibility of anonymisation, is welcome, given that section 2 clearly states that the law will not apply in relation to anonymised data. However, the ability of data processors / the authority to identify whether an anonymisation process is irreversible in practice will need to be examined, before the authority sets out the criteria for such ‘anonymisation’.

Sensitive personal data on the other hand continues to be defined in the form of a list of different categories, albeit a much more expansive list, that now includes information such as / about official identifiers, sex life, genetic data, transgender status, intersex status, caste or tribe, and religious and political affiliations / beliefs.

Interestingly, the Committee has moved away from the use of other traditional data protection language such as data subject and data controller – instead arguing that the relationship between an individual and a person / organisation processing their data is better characterised as a fiduciary relationship. Justice Sri Krishna emphasised this issue during the press conference organised at the time of submission of the report, noting that personal data is not to be considered property.

Collection and Processing

The Bill elaborates on the notice and consent mechanisms to be adopted by ‘data fiduciaries’, and accounts for both data that is directly collected from the data principal, and data that is obtained via a third party. Notice must be given at the time of collection of personal data, and where data is not collected directly, as soon as possible. Consent must be obtained before processing.

The Committee’s earlier white paper, and the report accompanying the Bill have both discussed the pitfalls in a data protection framework that relies so heavily on consent – noting that consent is often not informed or meaningful. The report however also notes that it may not be feasible to do away with consent altogether, and tries to address this issue by way of adopting higher standards for consent, and purpose limitation. The Bill also provides that consent is to be only one of the grounds for processing of personal data. However, this seems to result in some catch-all provisions allowing processing for ‘reasonable purposes’. While it appears that these reasonable purposes may need to be pre-determined by the data protection authority, the impact of this section will need to be examined in greater detail. The other such wide provision in this context seems to allow the State to process data – another provision that will need more examination.

Sensitive personal data

Higher standards have been proposed for the processing of sensitive personal data, as well as personal / sensitive personal data of children. The emphasis on the effect of processing of certain types of data, keeping in mind factors such as the harm caused to a ‘discernible class of persons’, or even the provision of counselling or child protection services in these sections is welcome. However, there remains a wide provision allowing for the State to process sensitive personal data (of adults), which could be cause for concern.

Rights of data principals

The Bill also proposes 4 sets of rights for data principals: the right to confirmation and access, the right to correction, the right to data portability, and the right to be forgotten. There appears to be no right to erasure of data, apart from a general obligation on the data fiduciary to delete data once the purpose for collection / processing of data has been met. The Bill proposes certain procedural requirements to be met by the data principal exercising these rights – an issue which some have already pointed out may be cause for concern.

Transparency and accountability

The Bill requires all data fiduciaries to adopt privacy by design, transparency and security measures.

Data fiduciaries notified as ‘significant data fiduciaries’ are required to appoint a data protection officer, conduct data protection impact assessments before adopting certain types of processing, maintain records of data processing, and conduct regular data protection audits. Notification as a significant data fiduciary depends on criteria such as the volume and sensitivity of personal data processed, the risk of harm, the use of new technology, and the turnover of the data fiduciary.

The requirement for data protection impact assessments is interesting: an impact assessment must be conducted before a fiduciary undertakes any processing involving new technologies, large-scale profiling, or use of sensitive personal data such as genetic or biometric data (or any other data processing which carries a risk of significant harm to data principals). If the data protection authority thinks that such processing may cause harm (based on the assessment), it may direct the fiduciary to cease such processing, or impose conditions on the processing. The language here implies that these requirements could be applicable to processing by the State or private actors where new technology is used in relation to Aadhaar, among other things. However, as mentioned above, this will be subject to the data fiduciary in question being notified as a ‘significant data fiduciary’.

In a welcome move, the Bill also provides a process for notification in the case of a breach of personal data by data fiduciaries. However, this requirement is limited to notifying the data protection authority, which then decides whether there is a need to notify the data principal involved. It is unfortunate that the Committee has chosen to limit the rights of data principals in this regard, making them rely instead on the authority to even be notified of a breach that could potentially harm them.

Cross border transfer of data

In what has already become a controversial move, the Bill proposes that at least one copy of all personal data covered by the law should be stored on a server or data centre located in India. In addition, the central government (not the data protection authority) may notify additional categories of data that are ‘critical’ and should be stored only in India.

Barring exceptions in the case of health / emergency services, and transfers to specific international organisations, all transfer of personal data outside India will be subject to the approval of the data protection authority, and in most cases, consent of the data principal.

This approval may be in the form of approval of standard contractual clauses applicable to the transfer, or a blanket approval of transfers to a particular country / sector within a country.

This provision is ostensibly in the interest of the data principals, and works towards ensuring a minimum standard of data protection. The protection of the data principal under this provision, like many other provisions, including those relating to data breach notifications to the data principal, will be subject to the proper functioning of the data protection authority. In the past, we have seen that simple steps such as notification of security standards under the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011, have not been undertaken for years.

In the next post in this series, we will discuss the functions of the authority, and other provisions in the Bill, including the exemptions granted, and penalties and remedies provided for.

India’s Artificial Intelligence Roadmap

There is now a near universal perception that Artificial Intelligence technologies are set to disrupt every sphere of life. However, this is coupled with concern regarding the social, ethical (and even existential) challenges that AI might present. As a consequence, there has been an uptick in interest among governments in how best to marshal the development of these technologies. The United Kingdom, the United States, China, and France, among others, have all released vision documents that explore these themes.

This post, the first in a series, presents a brief overview of such initiatives by the Indian government. Subsequent posts will focus specifically on their treatment of personal data, as well as their consideration of ethical issues posed by AI.

~

Task Force on Artificial Intelligence

In August 2017, the Ministry of Commerce and Industry set up a ‘Task Force on Artificial Intelligence for India’s Economic Transformation’. A panel of 18 members was formed with the objective of exploring how Artificial Intelligence could be best deployed in India.

The Task Force released its Report in May 2018, where it characterized AI as a ‘socio-economic problem solver at a large scale’, rather than simply a booster for economic growth. It sought to explore domains which would benefit from government intervention, with the objective of improving quality of life, and generating employment. The report identifies 10 sectors where AI could be deployed – Manufacturing, FinTech, Healthcare, Agriculture and Food Processing, Retail, Accessibility Technology, Environment, National Security and Public Utility Services. It attempts to identify challenges specific to each sector, as well as enabling factors that could promote the adoption of AI.

The report also explores the predicted impact of AI on employment, as well as other broader social and ethical implications of the technology. It concludes with a set of recommendations for the government of India. A primary recommendation is to constitute an Inter-Ministerial National Artificial Intelligence Mission (N-AIM) with a 5 year budget of Rs. 1200 Crores. Other recommendations focus on creating an ecosystem for better availability of data for AI applications; skilling and education initiatives focused on AI; standard setting, as well as international participation in standard setting processes.

NITI Aayog’s National Strategy for Artificial Intelligence

In his Budget speech, the Finance Minister had tasked the NITI Aayog with formulating a national programme for Artificial Intelligence. In June 2018, the NITI Aayog released its roadmap in the form of the National Strategy for Artificial Intelligence.

The paper frames India’s AI ambitions in terms of increasing economic growth, social development, and as an incubator for technology that can cater to other emerging economies. It focuses on 5 sectors as avenues for AI led intervention. These are healthcare, agriculture, education, smart cities, and smart mobility. It also identifies some key challenges to the effective adoption of AI. These include low awareness, research, and expertise in AI along with an absence of collaboration; the lack of ecosystems that enable access to usable data; high resource costs; and ill-adapted regulations.

The paper then presents a series of recommendations to address some of these issues. In order to expand AI research in India, it proposes a two-tier framework to focus on basic research as well as application based research. It also proposes the creation of a common computing platform in order to pool cloud infrastructure, and reduce infrastructural requirements for such institutions. It further suggests a review of the intellectual property framework to enable greater AI innovation. In order to foster international collaboration, the paper proposes the creation of a supranational CERN-like entity for AI. It also recommends skilling and education initiatives to address job creation, as well as the current lack of AI expertise. In order to accelerate adoption, it proposes a platform for sharing government datasets, along with a marketplace model for data collection and aggregation, for data annotation, as well as for deployable AI models.

The paper concludes with its recommendations for ‘responsible’ AI development. It recommends that there be a consortium of the Ethics Councils at each of the AI research institutions. It further proposes the creation of a Centre for Studies on Technology Sustainability. It also emphasizes the importance of fostering research on privacy preserving technology, along with general and sectoral privacy regulations.

Further reports suggest that a task force will be set up to execute the proposals that have been made, in coordination with the relevant ministries.

MeitY Committees

It has also been reported that four committees were constituted in February 2018 to deliberate on issues of ‘data for AI, applications of AI, skilling and cyber security/legal, ethical issues.’ However, there have been no reports about when the committees will present their recommendations, or whether they will be made available to the public.

~

India appears to be at the nascent stage of formulating its approach towards Artificial Intelligence. Even so, it is encouraging that the government recognizes the importance of its stewardship. Purely market led development of AI could imply all of its disruption, without any of the envisaged social benefits.

The General Data Protection Regulation and You

A cursory look at your email inbox this past month presents an intriguing trend. Multiple online services seem to have taken it upon themselves to notify changes to their Privacy Policies at the same time. The reason, simply, is that the European Union’s General Data Protection Regulation (GDPR) comes into force on May 25, 2018.

The GDPR marks a substantial overhaul of the existing data protection regime in the EU, as it replaces the earlier ‘Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data.’ The Regulation was adopted by the European Parliament in 2016, with a period of almost two years to allow entities sufficient time to comply with their increased obligations.

The GDPR is an attempt to harmonize and strengthen data protection across Member States of the European Union. CCG has previously written about the Regulation and what it entails here. For one, the instrument is a ‘Regulation’, as opposed to a ‘Directive’. A Regulation is directly binding across all Member States in its entirety. A Directive simply sets out a goal that all EU countries must achieve, but allows them discretion as to how. Member States must enact national measures to transpose a Directive, and this can sometimes lead to a lack of uniformity across Member States.

The GDPR introduces, among other things, additional rights and protections for data subjects. This includes, for instance, the introduction of the right to data portability, and the codification of the controversial right to be forgotten. Our writing on these concepts can be found here, and here. Another noteworthy change is the substantial sanctions that can be imposed for violations. Entities that fall foul of the Regulation may have to pay fines up to 20 million Euros, or 4% of global annual turnover, whichever is higher.
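The “whichever is higher” rule for the top tier of fines can be made concrete with a small sketch (the function name and the sample turnover figures below are illustrative, not from the Regulation):

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR administrative fine for the most serious
    violations: EUR 20 million or 4% of global annual turnover,
    whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a company with EUR 100 million turnover, 4% is only EUR 4 million,
# so the EUR 20 million figure applies.
print(gdpr_max_fine(100_000_000))    # 20000000.0

# For a company with EUR 1 billion turnover, 4% (EUR 40 million) is higher.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```

The effect of the rule is that the EUR 20 million figure operates as a floor for smaller entities, while the 4% prong scales the maximum exposure for large multinationals.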

The Regulation also has consequences for entities and users outside the EU. First, the Regulation has expansive territorial scope, and applies to non-EU entities if they offer goods and services to the EU, or monitor the behavior of EU citizens. The EU is also a significant digital market, which allows it to nudge other jurisdictions towards the standards it adopts. The Regulation (like the earlier Directive) restricts the transfer of personal data to entities outside the EU to cases where an adequate level of data protection can be ensured. This has resulted in many countries adopting regulation in compliance with EU standards. In addition, with the implementation of the GDPR, companies that operate in multiple jurisdictions might prefer to maintain parity between their data protection policies. For instance, Microsoft has announced that it will extend core GDPR protections to its users worldwide. As a consequence, many of the protections offered by the GDPR may in effect become available to users in other jurisdictions as well.

The implementation of the GDPR is also of particular significance to India, which is currently in the process of formulating its own data protection framework. The Regulation represents a recent attempt by a jurisdiction (that typically places a high premium on privacy) to address the harms caused by practices surrounding personal data. The lead-up to its adoption and implementation has generated much discourse on data protection and privacy. This can offer useful lessons as we debate the scope and ambit of our own data protection regulation.

Dealing With Revenge Porn in India

In March 2018, a man in West Bengal was sentenced to five years’ imprisonment and fined Rs 9,000 for uploading private pictures and videos of a girl without her consent as revenge for ending their relationship. Under the promise of marriage, the accused pressured the complainant into providing explicit images of herself, and used the threat of uploading these images to social media to acquire more. Later, he accessed her phone without her knowledge to retrieve more private pictures and videos. When the complainant refused to continue their relationship, he uploaded this material onto a popular pornographic website along with both her and her father’s names. In addition to the defendant’s imprisonment and fine, the state government was directed to treat the victim as a survivor of rape and grant appropriate compensation. With evidence provided by service providers Reliance Jio and Google, the perpetrator was convicted under Sections 354A, 354C, 354 and 509 of the IPC as well as Sections 66E, 66C, 67 and 67A of the IT Act, in what is likely the first revenge porn conviction in India.

Revenge porn is a form of non-consensual pornography that came to international attention with the 2010 launch (and subsequent 2012 takedown) of the popular website IsAnyoneUp, which allowed users to upload nude photographs. While a number of these images were ostensibly self-submitted, many were revealed to have been submitted by angry ex-lovers, which would amount to ‘revenge porn’. Compounding the issue was the fact that these explicit images deliberately linked to the social media profiles of the person in the image.

According to Halder and Jaishankar, the essential elements of revenge porn are that the perpetrator and the victim shared an intimate relationship, and that the former has deliberately (and without the victim’s consent) released sexually explicit information online in order to cause distress and harm to the victim’s reputation.

While the term “revenge porn” is often used interchangeably with “non-consensual pornography”, it is distinct from other forms of non-consensual pornography such as rape videos, morphing or voyeurism. For instance, non-consensual pornography includes within its ambit sexually explicit images captured without a person’s knowledge or consent. Revenge porn, by contrast, typically involves sensitive material that was voluntarily captured by, or sent to, the perpetrator in good faith in the course of an intimate relationship. Further, unlike in the case of revenge porn, not all perpetrators of non-consensual pornography are motivated by personal feelings such as revenge (as in the case of the hackers who released intimate photos of more than 100 female celebrities after gaining access to their private iCloud accounts).

As a result, researchers are moving away from the term “revenge porn” as it can be somewhat misleading. “Revenge” limits the scope of this offence to motivations of personal vengeance, whereas such an act could be motivated by a desire for profit, notoriety, entertainment, or no reason at all. “Porn” implies that all images of nudity are intrinsically pornographic. Sexually explicit images created and shared within a private relationship should not be considered pornographic, unless they are distributed without consent, as this results in a private image being converted into public sexual entertainment. Accordingly, many victim advocates prefer to use the term “non-consensual pornography” or “non-consensual sharing of intimate images”.

Although the National Crime Records Bureau documents cyber-crimes against women, there are no official statistics available that pertain specifically to revenge porn in India. A 2010 report suggests that “only 35 per cent of the women have reported about their victimization, 46.7 per cent have not reported and 18.3 per cent have been unaware of the fact that they have been victimized … women prefer not to report about their victimization owing to social issues.” Victim-shaming (both by the criminal justice system and the public at large) is common, and the potential social fallout often extends to the victim’s family as well.

The recent proliferation of revenge porn has prompted many countries to enact legislation criminalising it, including the UK, many states in the USA, Canada, Australia, Japan and the Philippines.

At present however, there are no legal provisions that directly address revenge porn in India. While certain sections in the Indian Penal Code (IPC) and Information Technology (IT) Act can be invoked by victims, they fail to fully encompass the complexity of such cases and do not specifically target non-consensual pornography published online.

Section 354C of the IPC makes voyeurism punishable, and Explanation 2 to the Section deals with the non-consensual dissemination of consensually-captured images. However, this section limits its scope to female victims and male offenders.

In cases of non-consensual pornography (particularly those that involve morphing), victims can also seek recourse under Section 499 of the IPC for criminal defamation.

Section 66E of the IT Act punishes the transmission of images depicting the private areas of a person. The Explanation to the section limits private area to “… the naked or undergarment clad genitals, pubic area, buttocks or female breast”. This provision is gender-neutral and captures every aspect of revenge porn while not addressing it by name. However, the narrow definition of “private areas” could limit the provision’s applicability in cases where the victim is captured in an intimate position without those particular areas being visible.

Section 67A of the IT Act punishes publication or transmission of “material containing sexually explicit acts, etc. in electronic form”. While this can effectively punish perpetrators, it also risks including within its ambit, victims who may have voluntarily captured and shared such private content with their partners.

The recent Supreme Court judgment recognising privacy as a fundamental right could have substantial implications on revenge porn and non-consensual pornography in general, in light of arguments recognising the right to bodily integrity. Copyright law could also potentially be used by victims, particularly when the content is a selfie. By claiming a violation of their copyright, a victim could potentially get such material taken down. While Indian copyright law does not presently provide any relief to victims of revenge porn, some victims in the US have successfully enforced their copyright to get such images taken down.

Social media platforms are often used to disseminate such content. Until recently, their role was limited to removing non-consensual pornography and other offensive images. However, there have been calls for them to play a more active role and filter this content before it is uploaded. Facebook has attempted to prevent the re-posting of revenge porn by cataloguing content which had been previously reported as revenge porn on its site.

The gender disparity in victims of non-consensual pornography is a reflection of the hostility still faced by women on the internet today. Involuntary porn today can be considered “the sexual harassment of 20 years ago. It’s an easy way to punish women for behaving in ways that they don’t approve of – for leaving them, for rejecting them, for being happy without them.”

SC Constitution Bench on Aadhaar- Final Hearing (Day XXXV)

In October 2015, a 3-judge bench of the Supreme Court of India referred the challenges to the Aadhaar programme to a constitution bench. One of the primary questions raised by this petition was the existence of a fundamental right to privacy, which has since been upheld. Other similar petitions concerned with the legitimacy of Aadhaar had been tagged with this petition. While the existence of the fundamental right to privacy has been upheld, the challenges to the Aadhaar programme and to the linking of services to it were yet to be adjudicated upon.

An interim order was passed in December 2017; a summary of the arguments can be found here and here.

The final hearing commenced on January 17, 2018. Summaries of the arguments advanced in the previous hearings can be found here.

Advocate Zoheb Hossain continued his submissions for the State of Maharashtra and the UIDAI. He began by referring to various international charters and covenants, stressing the importance of harmonizing economic and social rights with civil and political rights.

Justice Chandrachud noted that the Directive Principles, even though they are non-justiciable, are necessary for good governance and as a guarantee of the reasonableness of the law. This is why they are read into Article 21.

The counsel argued that all rights give rise to corresponding duties, and that Aadhaar was a project to secure the economic and social rights of the people. He then brought the Court’s attention to the Justice Wadhwa Committee Report on the Public Distribution System, and to various precedents. He referred to the case of DK Trivedi, where the Court had held that ensuring socio-economic welfare was a constitutional obligation of the State. Further, it had been held that a statute could not be judged on the presumption that the executive power it confers would be abused or used arbitrarily.

The counsel then referred to the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and resolutions of the UN General Assembly. He reiterated that rights were indivisible and interconnected, and that socio-economic rights were equal to civil and political rights.

The counsel then argued that the proportionality and reasonableness of a restriction must be examined from the point of view of the general public, and not that of a specific party that claims to be affected. He argued that even if Aadhaar is used for different purposes, such as obtaining a SIM card or opening a bank account, the data remains disaggregated. He stated that as a consequence, there was no possibility of surveillance, even at the level of the Requesting Entities.

The counsel then drew a comparison between Aadhaar and the Social Security Number in the United States. He noted that the SSN was used for a variety of purposes, and that people could be denied benefits for not producing their SSN. He argued that the courts in the US had upheld the firing of an employee for refusing to provide his SSN. The counsel then argued that the Aadhaar Act had sufficient safeguards in place over identity and authentication information. He referred to Section 33 of the Act, noting that decisions made under that Section were subject to review by an oversight committee. He concluded that the safeguards in place were greater than those provided by the Telegraph Act and the standards laid down by the Supreme Court in the PUCL case.

Post lunch, the counsel resumed his submissions for the Respondents by examining how various search and seizure related provisions under the IT Act and the CrPC had passed constitutional muster. He then proceeded to the issue of ‘national security.’ He argued that in times of emergency, a strict adherence to the principles of natural justice is not necessary. He referred to a House of Lords decision that read a national security exception into a statute even though the text did not provide for it.

He then addressed the contention with respect to Section 47 of the Aadhaar Act, arguing that it provided for a sufficient remedy since a complaint could be filed with the UIDAI. He argued that Aadhaar had many technical aspects, so it would be best if only the Authority had the power to complain. He noted that a similar setup in the Industrial Disputes Act had previously been upheld. In addition, he noted that the UIDAI could authorize a person to make a complaint as well.

The counsel then submitted that the Aadhaar Act had sufficient safeguards for the CIDR, while provisions under the IT Act would cover actors outside the CIDR.

The counsel then framed the purpose of Section 139AA of the Income Tax Act as a measure to ensure redistributive justice and substantive equality. He argued that ‘distribute’ in the Directive Principles had been interpreted liberally, and that measures to prevent leakages would thus be considered redistributive.

The counsel then moved to addressing the argument about compelled speech. He argued that not all transactions can be considered to have a speech element, for instance linking Aadhaar to PAN. He further noted that the Court in Puttaswamy had held that rights could be curbed to prevent tax evasion and money laundering. He added that the Income Tax Act and the Aadhaar Act were standalone Acts, and that after Binoy Viswam, it was settled that they were not in conflict. He responded to the contention that only individual taxpayers had been mandated for linkage, stating that a measure need not strike at all evils at once. He argued that the linkage could help cure ills with companies as well, by revealing the people behind them, and could allow the deduplication of DINs. Advocate Zoheb Hossain then concluded his arguments.

The Attorney General then began his arguments by addressing the Money Bill issue. He argued that the Act was, in pith and substance, a Money Bill, since the targeted delivery of subsidies entails the expenditure of funds. He argued that every Act would have ancillary provisions dealing with review, appeal etc., but that the primary purpose here dealt with the Consolidated Fund of India.

Justice Chandrachud questioned the counsel about whether Section 57 of the Act severed that link. The AG responded that the Section merely allowed the existing infrastructure to be used for other purposes, and was just an ancillary provision. The UIDAI had been brought into existence primarily to prevent leakages and losses.

Justice Sikri noted that there was no distribution of benefits or subsidies under Section 57. The AG argued that the Section would be saved by Article 110(1)(g) of the Constitution, stressing an interpretation of the word ‘only’ in the Article. Justice Chandrachud suggested that that might amount to rewriting the Constitution.

The Attorney General will resume his arguments on May 3, 2018.