Analysing India’s Bilateral MOUs In the Field of Information and Communication Technologies (ICTs)

Sukanya Thapliyal

Introduction

As per the latest figures released by the International Telecommunication Union (ITU), the world witnessed a sharp post-COVID-19 rise in the number of internet users, from 4.1 billion people (54% of the world population) in 2019 to 4.9 billion people (63% of the world population) in 2021. However, the same report states that some 2.9 billion people remain offline, 96% of whom live in developing countries. These stark differences emanate from several barriers faced by residents of developing countries, including lack of access owing to the unaffordability of ICT services, the lack of strong technological and industrial bases, inadequate R&D facilities, and deficient ICT operating skills.

Countries are increasingly exploring different ways to partner with one another through multilateral, bilateral, and other legal arrangements. Countries often forge bilateral cooperation by signing Memoranda of Understanding (MOUs) and Memoranda of Cooperation (MOCs), and by creating Joint Working Groups and Joint Declarations of Intent, among others. These are informal legal instruments as compared to typical treaties or international agreements, and promote international cooperation in areas of strategic interest. India has a detailed Standard Operating Procedure (SOP) with respect to MOUs/agreements with foreign countries. The SOP lays down the Indian legal practice on treaty formation and detailed guidelines in respect of the different international agreements that countries may sign.

India has executed several MOUs, MOCs, Joint Declarations of Intent, and Working Groups to identify common interests, priorities, policy dialogue, and the necessary tools for ICT collaboration. These span a broad range of areas, including the development of IT software, telecom software, IT-enabled services, e-commerce services and information security, electronic governance, IT and electronics hardware, human resource development for IT education, IT-enabled education, research and development, strengthening cooperation between the private and public sectors, collaboration in the field of emerging technologies, and capacity building and technical assistance in the ICT sector.

Aims and Objectives

This mapping exercise lists the numerous bilateral MOUs, Joint Declarations and other agreements signed between India and partner countries to locate the nature and extent of international collaborative efforts in the ICT sector. Furthermore, it aims to understand India’s strategic interests and priority areas in the sector and evaluate India’s unique positioning in South-South Cooperation. The mapping exercise remains a work in progress and will be updated at periodic intervals.

Methodology

The mapping exercise includes an assessment of 36 MOUs and 5 other agreements, subdivided into four categories: Fixed Term/Renewed ICT MOUs (13), Open-Ended ICT MOUs (4), ICT MOUs with Pending Renewal/Extension and Expired MOUs (19), and Joint Declarations and Proposals concerning the ICT Sector (5). The relevant details of these MOUs are derived from publicly available information provided by the Ministry of Electronics and Information Technology (MeitY), the Department of Telecommunications (DoT), the Ministry of Communications (MOC), and the Indian Treaties Database maintained by the Ministry of External Affairs (MEA). The current analysis attempts to bring out the different MOUs, MOCs, and Joint Declarations of Intent executed by Indian authorities (MeitY, MOC and MEA), their duration of operation, and the areas covered under the scope of such collaboration.

Conclusion/Observations/Remarks:

Some of our key observations from the mapping exercise are as follows: 

  • India has entered into MOUs/Joint Declarations of Intent and other agreements with both developed and developing countries. These include Bangladesh, Bulgaria, Estonia, Israel, Japan, South Korea, Singapore, and the United Kingdom, among others. 
  • Within India’s ICT cooperation and collaboration landscape, we have identified the following as priority areas: 
Building capacity of CERTs and law enforcement agencies
1. Cybersecurity technology cooperation relevant to CERT activities.
2. Exchange of information on prevalent cybersecurity policies and best practices.
3. CERT-to-CERT cooperation.
4. Exchange of experiences regarding the technical infrastructure of CERTs.

Technical assistance and capacity building
1. Human resource development, including the training of government officials in e-governance.
2. Institutional cooperation among academic and training institutions.
3. Strengthening collaboration in areas such as e-government, m-governance, smart infrastructure, and e-health, among others.

Sharing of technology, standardisation and certification
1. Cooperation in software development, rural telecommunications, telecom equipment manufacturing, and the sharing of technological know-how.
2. Cooperation in exchanging and developing technology.
3. Standardisation, testing and certification.

B2B cooperation and economic advancement
1. Enhancing B2B cooperation in cybersecurity.
2. Enabling and strengthening industrial, technological and commercial cooperation between industry and research establishments.
3. Exploring third-country markets.
4. Creating a favourable environment for business entities through various measures to facilitate trade and investment.
Key Priority Areas for India in ICT Sector

Mapping MOUs signed by India in the field of Information and Communication Technologies (ICT), created using https://www.mapchart.net/world.html

Second Substantive Session of UN OEWG on International Cybersecurity (Part 1): Analysing Developments on Stakeholder Participation

Ananya Moncourt & Sidharth Deb

“Cyber Attacks” by Christian Colen, licensed under Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0)

Introduction

On April 1, 2022, the United Nations General Assembly’s (UNGA’s) First Committee on Disarmament and International Security concluded the week-long second substantive session of the second Open-Ended Working Group (OEWG) on the security of and in the use of information and communication technologies (ICTs). This process is the UN’s second OEWG involving all 193 UN Member States on matters relating to international cybersecurity. There have also been six prior UN Groups of Governmental Experts (GGEs) on similar issues.

This post is the first of a two-part series which analyses key developments at the OEWG’s second substantive session in the period between March 28 and April 01, 2022. This piece outlines discussions on a key issue – multistakeholder engagement within the OEWG process.

Readers can view it as a follow up to CCG’s two-part blog series from December 2021 which analysed major international cybersecurity discussions (including the international normative framework) at the UN and India’s participation in these processes. Part 1 begins by providing an overview of the scope of the OEWG’s institutional mandate, the geopolitical background in which the second substantive session was held, and analyses key organisational developments relating to the modalities of multistakeholder participation at the OEWG. It reveals geopolitical differences and where appropriate, spotlights India’s interventions on such issues.

Institutional Mandate

The second OEWG was established by UNGA Resolution 75/240, adopted on December 31, 2020. The resolution describes ICTs as “dual-use technologies” which can be used for both “… legitimate and malicious purposes”. This language is curious, since it suggests that dual-use technologies are defined by their capacity for lawful and unlawful use. This is a departure from how “dual-use technologies” are traditionally defined, i.e., as technologies which have both civilian and military applications and use cases.

Keeping this in mind, the resolution presciently expresses concern that some States are building up military ICT capabilities and that they could play active roles in future conflicts between States. Given their potential threat to national security, Resolution 75/240 establishes a new OEWG for the period between 2021 and 2025 which must act on a consensus basis. The second OEWG is expected to build on the aforementioned prior work of the GGEs and the first OEWG. The OEWG has been assigned a broad substantive mandate which includes:

  1. Identifying existing and potential threats in the sphere of information security;
  2. Further developing the internationally agreed voluntary rules, norms and principles of responsible State behaviour in cyberspace. This entails identifying mechanisms for implementation and, if necessary, introducing and/or elaborating additional cyber norms;
  3. Developing an understanding of the manner in which international law applies to States’ use of ICTs;
  4. Capacity building and confidence-building measures on matters relating to international cybersecurity;
  5. Establishing mechanisms of regular institutional dialogue under the UN.

Resolution 75/240 specifies that, aside from a final consensus report, the OEWG must submit annual progress reports to the UNGA. Relevant to this post, the Resolution also grants the OEWG the power to interact with non-governmental stakeholders. At the OEWG’s Organisational Session in June 2021, States agreed to a total of eleven substantive sessions, the first of which was held between December 13 and December 17, 2021.

Geopolitical Background to Second Substantive Session

At the second substantive session, in the last week of March 2022, discussions were hindered by ongoing geopolitical tensions arising out of the international armed conflict triggered by the Russian invasion of Ukraine. Cyberspace has played a strategic role within the conflict, which has spanned several cyber incidents and operations, including strategic information campaigns and online influence operations. Moreover, the conflict has seen incidents and operations which targeted government websites and extended to measures against critical information infrastructures across both public and private sectors. Key incidents prior to the session include a prominent attack on a satellite broadband network which affected internet availability for users across different parts of Europe.

The tensions have extended even to technical internet governance bodies like ICANN, where, for instance, Ukraine made unsuccessful requests to prevent Russian websites/domains from accessing the global internet. And as has been widely reported, the conflict has led to sanctions barring Russian financial operators from executing cross-border transactions via globally interoperable ICT systems like the SWIFT network.

Such geopolitical realities meant that the OEWG’s progress, which is rooted in consensus, was adversely affected. Let us now consider a central organisational issue for the OEWG, i.e., the modalities of stakeholder participation.

Modalities of Stakeholder Participation

The value of rooting multistakeholderism into internet, ICT and cybersecurity governance is well documented. Most ICT systems are owned, controlled, used and/or managed by non-governmental stakeholders across the private sector and civil society. Field expertise is also largely situated outside of governments. However, under the UNGA First Committee, cybersecurity processes like the GGEs and the first OEWG have operated using state-centric, even exclusive, approaches.

UNGA Resolution 75/240 attempts to buck this trend and grants the OEWG the authority to interact with interested/relevant stakeholders from the private sector, civil society and academia. For context, the first OEWG was the first cybersecurity discussion at the UN to involve some limited informal consultations between States and other stakeholders. The final substantive report, dated March 2021, even describes rich discussions and proposals from the multistakeholder community.

Despite this being an improvement upon the GGE model, experts contended that the first OEWG lacked direct or structured multistakeholder involvement. The first OEWG’s dialogue was described as ad hoc, inconsistent and isolated. Similarly, consultation opportunities at the OEWG were largely limited to an exclusive class of organisations accredited at the UN’s Economic and Social Council (ECOSOC). Stakeholders expressed concern that a repeat of this approach would exclude relevant field experts, private operators, and other relevant stakeholders. In light of this, certain States, regional organisations, non-governmental stakeholders, and individual experts shared written inputs with the OEWG’s Chair calling for the adoption of modalities which facilitate transparent, structured and formal stakeholder involvement. These inputs also proposed the additional option for non-accredited organisations to engage indirectly by sharing their views with the OEWG. To further inclusivity, they suggested that stakeholders be allowed to participate in both formal and informal consultations through a hybrid physical/virtual format.

Unfortunately, this issue was resolved at neither the OEWG’s Organisational Session in June 2021 nor its First Substantive Session in December 2021. At these discussions, Member States and groupings like the EU, Canada, France, Australia, Brazil, Germany, the Netherlands, the UK, the USA and New Zealand advocated broader, structured, transparent and formal involvement of stakeholders. The transparency component was a point of emphasis for these jurisdictions: their proposal focused on making widely known the grounds on which certain States objected to the inclusion of stakeholders within the OEWG. In opposition, the Sino-Russian bloc, including Cuba, Iran, Pakistan and Syria, opposed extended multistakeholder participation, believing that the OEWG should preserve its government-led character. Russia proposed that formal multistakeholder involvement be restricted to granting consultative status to ECOSOC-accredited institutions. These States insisted that informal consultations and written inputs are sufficient means of incorporating wider stakeholder views.

Although in favour of multistakeholder involvement, India’s interventions advocated that the OEWG follow the same modalities as the first OEWG, which, as described earlier, has been criticised on grounds of inclusivity.

Developments on Modalities at Second Substantive Session

As the issue carried forward into the second substantive session, geopolitical tensions had escalated as a result of the Russia-Ukraine conflict. Statements by Australia, Canada, the USA, the UK, the EU, France, Germany and others called upon Russia to stop using cyberattacks and disinformation campaigns. States from this bloc proposed that the OEWG’s programme of work not move forward without an agreement on stakeholder modalities. Iran contended that such a decision would undermine the legitimacy of the OEWG process. Other allies like China, Russia and Cuba argued that stakeholder participation should not come at the cost of substantive discussions. These countries cited Resolution 75/240 as not mandatorily requiring the OEWG to include stakeholders. However, NATO members and other US allies argued that delays to their inclusion would undercut stakeholders’ ability to meaningfully participate in the process.

Certain countries like France, Indonesia, Russia and Egypt supported an Indian proposal as a temporary workaround. India refined its earlier proposal and suggested that the OEWG continue the first OEWG’s system of informal consultations for the duration of one year while the issue of stakeholder participation was referred back to the UNGA for final deliberation. No consensus was reached, and consequently the Chair decided to suspend the issue of modalities and switched to issue-specific conversations in an informal mode of discussion.

Conclusion: Final Modalities Yield Mixed Results

Three weeks after the conclusion of the second substantive session, the OEWG Chair shared a letter dated April 22, 2022, which declared consensus on the modalities of stakeholder participation at the second OEWG. These modalities will be formally adopted at the OEWG’s third substantive session in July 2022. They state that interested ECOSOC-accredited NGOs can participate at the OEWG. Other interested stakeholders/organisations which are relevant to the OEWG’s mandate can apply for accreditation; they can formally participate provided Member States do not object. However, on the transparency front there appears to be a compromise. States need only share general reasons for their objection, on a voluntary basis, and the Chair will share this information with other Member States only upon request. This prima facie means a stakeholder will not know why there was an objection against its participation in the OEWG process.

The actual stakeholder involvement will be carried out through two prongs. First, as with the first OEWG, the Chair will organise informal inter-sessional consultations between States and stakeholders. Second, accredited stakeholders can attend formal meetings of the OEWG, submit written inputs and make oral statements during a dedicated stakeholder session.

The modalities do not clarify if accredited stakeholders can participate virtually. This omission is important, since many stakeholders from developing/emerging countries often have limited resources and/or capacities to send contingents to these processes. While this development represents clear strides in inclusivity over prior UN cybersecurity processes, as structured, the modalities could inadvertently exclude stakeholders from smaller countries who have an interest in maintaining a safe, secure and accessible cyberspace.

It remains to be seen if the international community will allocate resources to ensure all interested stakeholders are present and active at these discussions. Moving forward, Part 2 of this series focuses on key discussions which took place in informal mode at the Second Substantive Session of the OEWG. The post describes how States (including India) view the substantive issues outlined in the OEWG’s institutional mandate. It concludes by charting out what to expect in the OEWG’s forthcoming draft of its first annual progress report for the UNGA.

Trends from the CCG High Court Tracker

Authors: Sravya Movva, Joanne D’Cunha, Bilal Mohamed and Anna Kallivayalil

The CCG High Court Case Tracker (‘Tracker’) is a resource that catalogues decisions featuring the constitutional right to privacy delivered by High Courts across the country. The Tracker aims to trace decisions delivered by various High Courts after the verdict in Justice (Retd.) K.S. Puttaswamy vs. Union of India (‘Puttaswamy’), where a nine-judge bench of the Supreme Court reaffirmed the right to privacy as a fundamental right. The Tracker currently captures only the cases reported on Manupatra and has 90 cases in total at present.

This post aims to analyse cases captured in the Tracker and to highlight general trends emerging from the decisions of various High Courts. The analysis is based on cases reported up to 15 March 2022 (CCG will continue to update the Tracker periodically).

Cases decided by year

There has been a consistent rise in the number of cases decided by High Courts since Puttaswamy. In the same year as Puttaswamy, i.e., 2017, the Kerala High Court was the first and only High Court to refer to Puttaswamy, in Mini KT vs. Senior Divisional Manager (Disciplinary Authority) LIC and ors, while discussing the dignity of a woman. The Kerala High Court in this case restated the observations on dignity in Puttaswamy and quashed disciplinary action against a woman employee for her absence from duty on account of compelling circumstances in caring for her child. The Court held that in order to understand the dignity of a woman, her societal background has to be considered. In 2018, the High Courts delivered 13 judgements featuring the right to privacy, and in 2019, this number rose to 20 judgements. In 2020, the number of cases increased only slightly, to a total of 21; this could arguably be due to the fractured functioning of the High Courts during the pandemic. In 2021, the High Courts decided a further 26 cases that dealt with the right to privacy. Within the first quarter of 2022 (i.e., up to March 2022), the High Courts have already decided 9 cases involving the right to privacy. As can be seen from the graph below, there has been a clear trend of High Courts increasingly engaging with the right to privacy.

Which courts have given the most decisions?

The Kerala High Court and the Madras High Court have decided the highest number of cases, with 14 judgements each featuring the right to privacy. The Kerala High Court has most often dealt with the subjects of autonomy (5 judgements) and informational privacy (4 judgements) under the right to privacy. Similarly, judgements by the Madras High Court have related largely to surveillance, search and seizure (5 judgements), and autonomy (4 judgements). The Delhi High Court has pronounced 12 judgements, and the Allahabad High Court follows closely with 10 judgements.

Interestingly, all the judgements from the Allahabad High Court have upheld the right to privacy. Within the judgements given by the Allahabad High Court, a majority relate to dignity (6 judgements) and informational privacy (5 judgements). For instance, the Allahabad High Court in Rajiv Kumar, while interpreting the right to privacy, upheld the right of individuals not to disclose information relating to their prosecution for an offence committed while they were children or juveniles. In Guruvinder Singh, it also held that the disclosure of information relating to people accused of vandalism, or the sharing of sexually explicit images for the purpose of revenge/harassment, constitutes a violation of privacy.

Bench Strength

Bench strength is also an important metric; a judgement by a larger bench would be binding on more subsequent cases before a High Court. Therefore, a ruling delivered by a larger bench ensures more predictability and consistency. 

Of the 90 cases, 57 (63.33%) were decided by single-judge benches while 33 (36.67%) were decided by two-judge benches. While the Madras High Court pronounced 14 judgements on the right to privacy, covering aspects such as autonomy, bodily integrity, and surveillance, search and seizure, amongst others, all of them were delivered by single-judge benches. Naturally, the absence of a larger bench judgement weakens their influence on subsequent cases. The Bombay High Court and Kerala High Court, on the other hand, feature six judgements each delivered by two-judge benches. From the right to make reproductive choices to protection from unlawful search and seizure, these larger bench decisions dealt with vastly diverse issues.

Aspect-focused analysis

The Tracker maps judgements across five primary themes of privacy. These are – (a) autonomy, (b) bodily integrity, (c) dignity, (d) informational privacy, and (e) surveillance, search and seizure. The Tracker also notes various sub-themes within a case. For the depiction of the data, we have considered only the five predominant aspects of privacy listed above. It is also important to note that the themes of privacy across these cases are not siloed and often overlap with one another. For instance, in an appeal by a petitioner against a requirement to disclose details of criminal prosecutions faced as a juvenile, the Court had to engage with issues around both the dignity and the informational privacy of the individual. Similarly, in cases involving the petitioner’s right to make reproductive choices, there is an interface between the aspects of autonomy and bodily integrity.

The highest number of judgements (24) given by the High Courts have dealt with the theme of autonomy. For instance, the Jammu and Kashmir High Court in Monika Mehra and the Allahabad High Court in Salamat Ansari have held that an individual’s autonomy to make intimate decisions, such as those relating to marriage, is part of the right to privacy. A different approach to autonomy was taken by the Karnataka High Court in Bushra Abdul Aleem, where it held that requiring medical graduates to compulsorily provide medical services for one year would not be violative of the right to privacy.

This is followed by judgements on the aspects of dignity and informational privacy, with 23 judgements on dignity and 20 on informational privacy. While dealing with dignity, the Orissa High Court in Subhranshu Rout held that a victim of sexual offences has the right to have offensive posts erased from any public platform as a part of their right to privacy. Similarly, the Allahabad High Court in Rajiv Kumar also dealt with dignity when ruling that a requirement to disclose details of criminal prosecutions faced as a juvenile would be violative of the right to privacy. With respect to informational privacy, the Kerala High Court held in Gopalakrishnan P that providing a memory card to the accused in a sexual assault case was a serious violation of the right to privacy of the victim. On the other hand, the Delhi High Court in Horlicks Ltd has taken the view that the right to privacy cannot be claimed over information that is already available in the public domain.

Surveillance, search and seizure as an aspect of privacy was the primary focus in 15 cases. There was a general consensus amongst the courts that surveillance, search and seizure must be conducted in accordance with the law; the courts differed, however, in their approaches. For instance, the Karnataka High Court in Sudarshan, a case involving the collection of voice samples for comparison with the voice in phone call recordings, held that such collection would not amount to a breach of privacy or self-incrimination. In another case, Deepti Kapur, the Delhi High Court examined the admissibility of evidence collected in breach of privacy and held that merely because the rules of evidence favour a liberal approach to admitting evidence, it does not mean that litigants should adopt illegal means to collect evidence.

Bodily integrity as an aspect of privacy saw the fewest cases, with only 11 focusing on this aspect. For instance, when considering whether a State-ordered non-consensual DNA test would be violative of the right to privacy, the Karnataka High Court in Venkateshappa held that it would. However, in another case, Abhilash R, the Kerala High Court held that a court-ordered DNA test to determine the paternity of a child does not violate the right to privacy of the child.

General Trends

The 90 cases captured within the Tracker could be divided into the following three broad categories – (a) judgements that have protected and expanded upon the right to privacy, (b) judgements that have taken a limited or restricted view of the right to privacy, and (c) judgements in which the right to privacy has been mentioned but there is no specific analysis through which the right to privacy has been expanded or limited.

With respect to judgements in the first category, i.e., judgements that have protected and expanded on the right to privacy, the approach is straightforward. The courts in these cases have held that certain actions or inaction resulted in a violation of privacy, or have reinforced the ability of an individual to pursue an action as a part of their right to privacy. For instance, in Vinit Kumar, when considering the illegal tapping of a telephone conversation, the Bombay High Court clearly held it to be against the right to privacy. In Mahesh Chand Sharma, the Rajasthan High Court held that the right to privacy includes the right of an unmarried mother not to disclose the paternity of her child. Of the 90 cases captured in the Tracker, a majority, i.e., 64 cases (71.11%), fall within this category.

A further quantitative analysis of the judgements shows that the Kerala High Court, Allahabad High Court and the Madras High Court have the highest number of judgements (10 each) that protect and expand on the right to privacy. As mentioned before, in the case of the Allahabad High Court, all of its judgements have expanded the application of the right to privacy (i.e., 10 out of 10 judgements). There are specific categories of cases in which the courts have tended to take a similar view by expanding upon the right to privacy. For instance, courts have generally shown a tendency to protect the reproductive rights of women. The Bombay High Court in XYZ has held that a woman’s right to privacy includes a right to make reproductive choices and terminate a pregnancy. The Allahabad High Court went further, reading in a positive obligation upon a university to provide maternity benefits to students as a part of the right to privacy in the Saumya Tiwari case.

The second category of cases are those in which the courts have taken a limited or restricted view of the right to privacy. These account for 17 (18.89%) of the 90 cases. While these turn on specific situations and facts, there are particular categories of cases in which the courts tend to take a restricted approach towards the right to privacy. For instance, in cases involving judicial orders to conduct medical examinations during the course of a case, the courts have often held that there is no violation of privacy. An example of this is the X vs. S case, in which the Kerala High Court acknowledged the right to privacy in a case where a matrimonial court required the individual to undergo medical tests and produce the results. However, the Court interpreted the right narrowly, stating that the right to privacy would not be infringed if the Court requires the production of an individual’s medical records. Another such category of cases are those that involve information that is in the public domain, or individuals who can be considered public figures. For example, in the Ramgopal Varma case, the Telangana High Court held that while a person has the right to privacy in relation to their family, marriage, children etc., there is an exception when the matter becomes a matter of public record (including through court records). The Court held that the right to privacy no longer exists in such a situation and it becomes a legitimate subject for comment from the press and media.

The third category of cases are those where the right to privacy has been mentioned but the courts have not engaged in an analysis that would either expand or restrict its application. In some cases, the courts have simply acknowledged the right to privacy or the Puttaswamy judgement, or have only restated a position of law with respect to privacy. A total of 9 (10%) of the 90 cases fall within this category. The courts do not engage with a discussion on the right beyond referencing the judgement or the right to privacy, and the case is often decided based on another facet of law, such as the law on trademarks or evidence. For example, in Alli Noushad, the Kerala High Court held that conversations between husband and wife are protected as privileged conversation and would be inadmissible as evidence. While the Court remarked that the sacrosanctity of a family includes its privacy, it arrived at its decision on the admissibility of evidence based on the provisions of the Evidence Act and not by examining aspects of the right to privacy. Similarly, Sunil Sachdeva dealt with the right to privacy and discriminatory online posts; however, the Court did not engage with a proactive application of the right to privacy to the facts of the case. It only acknowledged that informational privacy is one of the aspects of the right.
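The three-way split described above is simple arithmetic over the Tracker's 90 cases. As an illustrative sketch (the labels and variable names below are our shorthand, not part of the Tracker itself), the reported shares can be recomputed from the headline counts:

```python
# Illustrative recomputation of the category shares reported above.
# The counts come from the blogpost; the labels are our shorthand.
category_counts = {
    "protected/expanded the right to privacy": 64,
    "limited/restricted the right to privacy": 17,
    "mentioned without substantive analysis": 9,
}

total = sum(category_counts.values())
assert total == 90  # matches the 90 cases captured in the Tracker

for label, count in category_counts.items():
    share = round(100 * count / total, 2)
    print(f"{label}: {count} cases ({share}%)")
```

Running this reproduces the 71.11%, 18.89% and 10% figures cited in the three categories.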

This blog post highlights an increasing trend of High Courts engaging with the right to privacy. While it is promising that a majority of the cases involve positive (privacy-enhancing or privacy-expanding) judgements, these are largely delivered by single judge benches. To read more about each case, please visit our Tracker here. Additionally, for summaries of judgements from other jurisdictions, do visit our Privacy Law Library here.

Transparency reporting under the Intermediary Guidelines is a mess: Here’s how we can improve it

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) represent India’s first attempt at regulating large social media platforms, creating distinct obligations for ‘Significant Social Media Intermediaries’ (“SSMIs”). While certain provisions of the Guidelines concerning SSMIs (like the traceability requirement) are currently under legal challenge, the Guidelines also introduced a less controversial requirement that SSMIs publish monthly transparency reports regarding their content moderation activities. While this reporting requirement is arguably a step in the right direction, scrutinising the actual documents published by SSMIs reveals a patchwork of inconsistent and incomplete information – suggesting that Indian regulators need to adopt a more comprehensive approach to platform transparency.

This post briefly sets out the reporting requirement under the Intermediary Guidelines before analysing the transparency reports released by SSMIs. It highlights how a focus on figures, coupled with the wide discretion granted to platforms to frame their reports, undermines the goal of meaningful transparency. The figures referred to when analysing SSMI reports pertain to the February-March 2022 reporting period, but the distinct methodologies used by each SSMI to arrive at these figures (more relevant for the present discussion) have remained broadly unchanged since reporting began in mid-2021. The post concludes by making suggestions on how the Ministry of Electronics and Information Technology (“MeitY”) can strengthen the reporting requirements under the Intermediary Guidelines.

Transparency reporting under the Intermediary Guidelines

Social media companies structure speech on their platforms through their content moderation policies and practices, which determine when content stays online and when content is taken down. Even if content is not illegal or taken down pursuant to a court or government order, platforms may still take it down for violating their terms of service (or Community Guidelines) (call such content ‘violative content’, i.e., content that violates the terms of service). However, ineffective content moderation can result in violative and even harmful content remaining online, or non-violative content mistakenly being taken down. Given the centrality of content moderation to online speech, the Intermediary Guidelines seek to bring some transparency to the content moderation practices of SSMIs by requiring them to publish monthly reports on their content moderation activities. Transparency reporting helps users and the government understand the decisions made by platforms with respect to online speech. Given the opacity with which social media platforms often operate, transparency reporting requirements can be an essential tool to hold platforms accountable for ineffective or discriminatory content moderation practices.

Rule 4(1)(d) of the Intermediary Guidelines requires SSMIs to publish monthly transparency reports specifying: (i) the details of complaints received, and actions taken in response, (ii) the number of “parts of information” proactively taken down using automated tools; and (iii) any other relevant information specified by the government. The Rule therefore covers both ‘reactive moderation’, where a platform responds to a user’s complaints against content, and ‘proactive moderation’, where the platform itself seeks out unwanted content even before a user reports it.

Transparency around reactive moderation helps us understand trends in user reporting and how responsive an SSMI is to user complaints, while disclosures on proactive moderation shed light on the scale and accuracy of an SSMI’s independent moderation activities. A key goal of both reporting datasets is to understand whether the platform is taking down as much harmful content as possible without accidentally also taking down non-violative content. Unfortunately, Rule 4(1)(d) merely requires SSMIs to report the number of links taken down during their content moderation (this is re-iterated by the MeitY’s FAQs on the Intermediary Guidelines). The problems with an overly simplistic approach come to the fore upon an examination of the actual reports published by SSMIs.

Contents of SSMI reports – proactive moderation

Based on their latest monthly transparency reports, Twitter proactively suspended 39,588 accounts while Google used automated tools to remove 338,938 pieces of content. However, these figures only document the scale of proactive monitoring and do not provide any insight into the accuracy of the platforms’ moderation – how accurately the moderation distinguishes between violative and non-violative content. The reporting also does not specify whether this content was taken down using solely automated tools, or some mix of automated tools and human review or oversight. Meta (reporting for Facebook and Instagram) reports the volume of content proactively taken down, but also provides a “Proactivity Rate”. The Proactivity Rate is defined as the percentage of content flagged proactively (before a user reported it) as a subset of all flagged content. Proactivity Rate = [proactively flagged content ÷ (proactively flagged content + user reported content)]. However, this metric is also of little use in understanding the accuracy of Meta’s automated tools. Take the following example:

Assume a platform has 100 pieces of content, of which 50 pieces violate the platform's terms of service and 50 do not. The platform relies on both proactive monitoring through automated tools and user reporting to identify violative content. Now, if the automated tools detect 49 pieces of violative content, and a user reports 1, the platform states that: ‘49 pieces of content were taken down pursuant to proactive monitoring at a Proactivity Rate of 98%’. However, this reporting does not inform citizens or regulators: (i) whether the 49 pieces of content identified by the automated tools are in fact the 49 pieces that violate the platform’s terms of service (or whether the tools mistakenly took down some legitimate, non-violative content); (ii) how many users saw but did not report the content that was eventually flagged by automated tools and taken down; and (iii) what level and extent of human oversight was exercised in removing content. A high proactivity rate merely indicates that automated tools flagged more content than users, which is to be expected. Simply put, numbers aren’t everything: they only disclose the scale of content moderation, not its quality.
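To make the limitation concrete, here is a minimal sketch in Python of the worked example above. All the figures beyond the 49/1 split are hypothetical, chosen only to show that the same 98% Proactivity Rate is compatible with very different levels of accuracy:

```python
def proactivity_rate(proactive_flagged: int, user_reported: int) -> float:
    """Meta's published metric: the share of all flagged content that was
    flagged proactively, before any user reported it."""
    return proactive_flagged / (proactive_flagged + user_reported)

# The worked example above: 49 proactive takedowns, 1 user report.
rate = proactivity_rate(49, 1)
print(f"Proactivity Rate: {rate:.0%}")  # 98%

# Hypothetical scenario: only 40 of the 49 proactive removals actually
# violated the rules, and 10 violating pieces were never caught at all.
# The Proactivity Rate is unchanged, but precision and recall expose
# the accuracy that the metric hides.
true_positives = 40
precision = true_positives / 49   # share of removals that were correct
recall = true_positives / 50      # share of violating content caught
print(f"precision={precision:.2f}, recall={recall:.2f}")
```

The design point is that the Proactivity Rate only compares machines against users as flaggers; measuring quality requires a ground truth (which pieces actually violated the rules), which is precisely what the reports do not disclose.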

This criticism raises the question: how do we assess the quality of proactive moderation? The Santa Clara Principles represent high level guidance on content moderation practices developed by international human rights organisations and academic experts to facilitate platform accountability with respect to users’ speech. The Principles require that platforms report: (i) when and how automated tools are used; (ii) the key criteria used by automated tools in making decisions; (iii) the confidence, accuracy, or success rate of automated tools, including in different languages; (iv) the extent of human oversight over automated tools; and (v) the outcomes of appeals against moderation decisions made by automated tools. This last requirement of reporting the outcome of appeals (how many users successfully got content reinstated after it was taken down by proactive monitoring) is a particularly useful metric as it indicates when a platform itself acknowledges that its proactive moderation was inaccurate. Draft legislation in Europe and the United States requires platforms to report how often proactive monitoring decisions are reversed. Mandating the reporting of even some of these elements under the Intermediary Guidelines would provide a clearer picture of the accuracy of proactive moderation.

Finally, it is relevant to note that Rule 4(4) of the Intermediary Guidelines requires that the automated tools for proactive monitoring of certain classes of content must be ‘reviewed for accuracy and fairness’. The desirability of such proactive monitoring aside, Rule 4(4) is not self-enforcing and does not specify who should undertake this review, how often it should be carried out, or to whom the results should be communicated.

Contents of SSMI reports – reactive moderation

Transparency reporting with respect to reactive moderation aims to understand trends in user reporting of content and a platform’s responses to user flagging of content. Rule 4(1)(d) requires platforms to disclose the “details of complaints received and actions taken thereon”. However, a perusal of SSMI reporting reveals how the broad discretion granted to SSMIs to frame their reports is undermining the usefulness of the reporting.  

Google’s transparency report has the most straightforward understanding of “complaints received”, with the platform disclosing the number of ‘complaints that relate to third-party content that is believed to violate local laws or personal rights’. In other words, where users raise a complaint against a piece of content, Google reports it (30,065 complaints in February 2022). Meta, on the other hand, only reports complaints from: (i) a specific contact form, a link for which is provided in its ‘Help Centre’; and (ii) complaints addressed to the physical post-box mail address published on the ‘Help Centre’. For February 2022, Facebook received a mere 478 complaints, of which only 43 pertained to content (inappropriate or sexual content), while 135 were from users whose accounts had been hacked, and 59 were from users who had lost access to a group or page. If 43 user reports a month against content on Facebook seems suspiciously low, it likely is – because the method of user reporting that involves the least friction for users (simply clicking on the post and reporting it directly) bypasses the specific contact form that Facebook uses to collate India complaints, and thus appears to be absent from Facebook’s transparency reporting. Most of Facebook’s 478 complaints for February have nothing to do with content on Facebook and offer little insight into how Facebook responds to user complaints against content or what types of content users report.

In contrast, Twitter’s transparency reporting expressly states that it does not include non-content related complaints (e.g., a user locked out of their account), instead limiting its transparency reporting to content related complaints – 795 complaints for March 2022, the top categories being 606 for abuse or harassment, 97 for hateful conduct, and 33 for misinformation. However, like Facebook, Twitter also has both a ‘support form’ and allows users to report content directly by clicking on it, but fails to specify the sources from which “complaints” are compiled for its India transparency reports. Twitter merely notes that ‘users can report grievances by the grievance mechanism by using the contact details of the Indian Grievance Officer’.

These apparent discrepancies in the number of complaints reported bear even greater scrutiny when the number of users of these platforms is factored in. Twitter (795 complaints/month) has an estimated 23 million users in India while Facebook (406 complaints/month) has an estimated 329 million users. It is reasonable to expect user complaints to scale with the number of users, but this is evidently not happening, suggesting that these platforms are using different sources and methodologies to determine what constitutes a “complaint” for the purposes of Rule 4(1)(d). This is perhaps a useful time to discuss another SSMI, ShareChat.

ShareChat is reported to have an estimated 160 million users, and for February 2022 the platform reported 56,81,213 user complaints (substantially more than Twitter and Facebook). These complaints are content related (e.g., hate speech, spam etc.), although with 30% of complaints merely classified as ‘Others’, there is some uncertainty as to what these complaints pertain to. ShareChat’s report states that it collates complaints from ‘reporting mechanism across the platform’. This would suggest that, unlike Facebook (and potentially Twitter), it compiles user complaint numbers from all methods a user can use to complain against content, and not just a single form tucked away in its help centre documentation. While this may be a more holistic approach, ShareChat’s reporting suffers from other crucial deficiencies. ShareChat’s report makes no distinction between reactive and proactive moderation, merely giving a figure for content that has been taken down. This makes it hard to judge how ShareChat responded to these over 56,00,000 complaints.
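A back-of-the-envelope sketch (using only the complaint counts and user-base estimates quoted above, which are themselves estimates) makes the divergence stark once complaints are normalised by user numbers:

```python
# Figures quoted in this post: (monthly complaints, estimated Indian users)
platforms = {
    "Twitter":   (795, 23_000_000),
    "Facebook":  (406, 329_000_000),
    "ShareChat": (5_681_213, 160_000_000),
}

for name, (complaints, users) in platforms.items():
    per_million = complaints / (users / 1_000_000)
    print(f"{name:>9}: {per_million:>10,.1f} complaints per million users")
# Roughly 1.2 for Facebook vs ~35,500 for ShareChat: a gap of about
# four orders of magnitude between platforms subject to the same Rule.
```

The gap is not plausibly explained by Facebook users having thirty thousand times fewer grievances; it far more plausibly reflects each platform counting different things as a “complaint” under Rule 4(1)(d).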

Conclusion

Before concluding, it is relevant to note that no SSMI reporting discusses content that has been subjected to reduced visibility or algorithmically downranked. In the case of proactive moderation, Rule 4(1)(d) unfortunately limits itself to content that has been “removed”, although in the case of reactive moderation, reduced visibility would come within the ambit of ‘actions taken in response to complaints’ and should be reported on. Best practices would require platforms to disclose to users when and what content is subjected to reduced visibility. Rule 4(1)(d) did not form part of the draft intermediary guidelines that were subjected to public consultation in 2018, instead appearing for the first time in its current form in 2021. Broader consultation at the drafting stage may have eliminated such regulatory lacunae and produced a more robust framework for transparency reporting.

That said, getting meaningful transparency reporting is a hard task. Standardising reporting procedures is a detailed and fraught process that likely requires platforms and regulators to engage in a consultative process – see this document created by Daphne Keller listing out potential problems in reporting procedures. Sample problem: “If ten users notify platforms about the same piece of content, and the platform takes it down after reviewing the first notice, is that ten successful notices, or one successful notice and nine rejected ones?” Given the scale of the regulatory and technical challenges, it is perhaps unsurprising that transparency reporting under the Intermediary Guidelines has gotten off to a rocky start. However, Rule 4(1)(d) itself offers an avenue for improvement. The Rule allows the MeitY to specify any additional information that platforms should publish in their transparency reports. In the case of proactive monitoring, requiring platforms to specify exactly how automated tools are deployed, and when content takedowns based on these tools are reversed, would be a good place to start. The MeitY must also engage with the functionality and internal procedures of SSMIs to ensure that reporting is harmonised to the extent possible. For example, a “complaint” reported by Facebook and by ShareChat should ideally have some equivalence. This requires, for a start, MeitY to consult with platforms, users, civil society, and academic experts when thinking about transparency.

The United Nations Ad-hoc Committee for Development of an International Cybercrime Convention: Overview and Key Observations from First Substantive Session

Sukanya Thapliyal

Image by United Nation Photo. Licensed via CC BY-NC-ND 2.0

Earlier this month, the Centre for Communication Governance at National Law University Delhi had the opportunity to participate as a stakeholder in the proceedings of the United Nations Ad-hoc Committee, which has been tasked to elaborate a comprehensive international convention on countering the use of information and communications technologies (ICTs) for criminal purposes (“the Ad Hoc Committee”). 

In this blog, we present a brief overview and our observations from the discussions during the first substantive session of the Ad-hoc Committee. Furthermore, we also attempt to familiarise the reader with the emerging points of convergence and divergence of opinions among different Member States and implications for the future negotiation process. 

  1. Background 

The open-ended Ad-hoc Committee is an intergovernmental committee of experts representative of all regions and was established by UN General Assembly Resolution 74/247 under the Third Committee of the UN General Assembly. The committee was originally proposed by the Russian Federation and 17 co-sponsors in 2019. The UN Ad-hoc Committee is mandated to provide a draft of the convention to the General Assembly at its seventy-eighth session in 2023 (UNGA Resolution 75/282).

Presently, the Budapest Convention, also known as the Convention on Cybercrime, is the most comprehensive and widely accepted legal instrument on cybercrime. It was adopted by the Council of Europe (COE) and came into force in July 2004. However, the work of the Ad-hoc Committee is significant and can pave the way for the first universal and legally binding instrument on cybercrime issues. The Committee enjoys widespread representation from State and non-State stakeholders (with participation from non-governmental organizations, civil society, academia and private organizations) and other UN bodies, including the United Nations Office on Drugs and Crime (UNODC), which serves as the secretariat for the process.

The Ad-hoc Committee, over the next two years, is set to have six sessions towards developing this cybercrime convention. The convention is expected to foster coordination and cooperation among state actors to combat cybercrime while giving due regard to the peculiar socio-economic conditions prevailing in the developing and least-developed countries. 

The first substantive session of the Ad-hoc Committee was scheduled for 28 February-11 March 2022 to chart out a clear road map to guide subsequent sessions. In addition, the session also provided an opportunity for Member States to explore the possibility of reaching a consensus on the objective and scope of the Convention, which could provide a general framework for future negotiation without constituting a pre-condition for future stages.

2. Discussions at the First Ad-hoc committee

The first session of the Ad-hoc Committee witnessed extensive discussions in sessions on the general debate, the objective and scope of the convention, and the exchange of preliminary views on key elements of the convention. In addition, a fruitful engagement took place in the sessions dedicated to arriving at a consensus on the structure of the convention (A/AC.291/L.4/Add.4). Member states also reached consensus on discussion and decision-making on the mode of work of the Ad Hoc Committee during subsequent sessions and intersessional periods (A/AC.291/L.4/Add.6). As the negotiations commenced days after the Russia-Ukraine conflict began, they proceeded in a tense environment in which several Member States expressed their concerns and inability to negotiate in “good faith” in light of the current state of play, and condemned Russia for the military and cyber operations directed at Ukraine.

A. Scope of the convention: From “Cyber-Enabled” to “Cyber-Dependent” Crimes 

There was complete agreement on the growing importance of ICT technologies, the threat created by cybercriminals, and the need for a collective response within a sound international framework. However, countries highlighted different challenges that range from ‘pure cybercrimes’ or cyber dependent crimes to a broader set of crimes (cyber-enabled crimes) that includes misuse of ICT technologies and digital platforms by terrorist groups, deepfakes, disinformation, misinformation, false narrative, among others. 

While there was a broad consensus on including cyber-dependent crimes, there was significant disagreement on whether cyber-enabled crimes should be addressed under the said convention. This divergence was evident throughout the first session, with the EU, the US, the UK, New Zealand, Australia, Liechtenstein, Japan, Singapore and Brazil advocating to limit the operation of such a convention to cyber-dependent crimes (such as ransomware attacks, denial-of-service attacks, and illegal system interference, among others). These member states maintained that the convention should exclude vague and broadly defined crimes that may dilute legal certainty and disproportionately affect the freedom of speech and expression. They further argued that the convention should include only those cyber-enabled crimes whose scale, scope, and speed increase substantially with the use of ICTs (cyber-fraud, cyber-theft, child sexual abuse, gender-based crime).

On the other hand, the Russian Federation, China, India, Egypt, South Africa, Venezuela, and Turkey expressed that the convention should cover both cyber-dependent and cyber-enabled crimes. Emphasizing the upward trend in the occurrence of cyber-enabled crimes, these member states stated that cybercrimes including cyber fraud, copyright infringement, misuse of ICTs by terrorists, and hate speech must be included under the said convention.

There was overall agreement that cybersecurity and internet governance issues are the subject of other UN multilateral fora, such as the UN Group of Governmental Experts (UNGGE) and the UN Open-Ended Working Group (OEWG), and must not be addressed under the proposed convention.

B. Human-Rights

The process witnessed significant discussion on the protection and promotion of human rights and fundamental freedoms as an integral part of the proposed convention. While there was a broad agreement on the inclusion of human rights obligations, Member States varied in their approaches to incorporating human rights obligations. Countries such as the EU, USA, Australia, New Zealand, UK, Canada, Singapore, Mexico and others advocated for the centrality of human rights obligations within the proposed convention (with particular reference to the right to speech and expression, privacy, freedom of association and data protection). These countries also emphasized the need for adequate safeguards to protect human rights (legality, proportionality and necessity) in the provisions dealing with the criminalization of offenses, procedural rules and preventative measures under the proposed convention. 

India and Malaysia were principally in agreement with the inclusion of human rights obligations but pointed out that human rights considerations must be balanced by provisions required for maintaining law and order. Furthermore, countries such as Iran, China and Russia emphasized that the proposed convention should be conceptualized strictly as a technical treaty and not a human rights convention.

C. Issues pertaining to the conflict in jurisdiction and legal enforcement

The Ad-hoc Committee’s first session saw interesting proposals on improving the long-standing issues emanating from conflicts of jurisdiction that often create challenges for law enforcement agencies in effectively investigating and prosecuting cybercrimes. In its numerous submissions, India highlighted the gaps and limitations in the existing international instruments and the need for better legal frameworks for cooperation, beyond Mutual Legal Assistance Treaties (MLATs). Such arrangements aim to assist law enforcement agencies in receiving metadata/subscriber information to establish attribution and to overcome severe delays in accessing non-personal data. Member states including Egypt and China supported India’s position in this regard.

Mexico, Egypt, Jamaica (on behalf of CARICOM), Brazil, Indonesia, Iran, and Malaysia also highlighted the need for the exchange of information and greater international cooperation in the investigation, evidence sharing and prosecution of cybercrimes. These countries also highlighted the need for mutual legal assistance, 24x7 contact points, data preservation, data sharing, statistics on cybercrime and the modus operandi of cybercriminals, e-evidence, electronic forensics, and joint investigations.

Member states including the EU, Luxembourg, and the UK supported international cooperation in investigations and judicial proceedings, and in obtaining electronic evidence. These countries also highlighted that issues relating to jurisdiction should be modeled on existing international and regional conventions such as the UN Convention against Corruption (UNCAC), the UN Convention against Transnational Organized Crime (UNCTOC), and the Budapest Convention.

D. Technical Assistance and Capacity Building

There was unanimity among the member states on incorporating provisions on capacity building and technical assistance to cater to the peculiar socio-economic conditions of developing and least-developed countries. Notable inputs/suggestions came from Venezuela, Egypt, Jamaica (on behalf of CARICOM), India, and Iran. Venezuela highlighted the need for technology transfer and the lack of financing and of sufficient safeguards for developing and least-developed countries. These countries outlined technology transfer, financial assistance, sharing of best practices, training of personnel, and raising awareness as different channels for capacity building and technical assistance for developing and least-developed countries.

E. Obligations for the Private Sector 

The proposal for instituting obligations on non-state actors, including the private sector (with particular reference to digital platforms and service providers), witnessed strongly opposing views among member countries. Countries including India, China, Egypt and Russia backed the proposal to include strong obligations on the private sector, given its essential role in the ICT sector. In one of its submissions, India pointed to the increasing involvement of multinational companies in providing vital services in different countries. Therefore, in its view, such private actors must be held accountable and should promptly cooperate with law enforcement and judicial authorities in these countries to fight cybercrime. Iran, China and Russia further emphasized the need for criminal liability of legal persons, including service providers and other private organizations. In contrast, member states including the EU, Japan and the USA were strictly against incorporating any obligations on the private sector.

F. Other Issues

There was a broad consensus (including among the EU, UK, Japan, Mexico, USA, Switzerland and others) on not reinventing the wheel but building on the work done under the UNCAC, UNCTOC, and the Budapest Convention. However, countries including Egypt and the Russian Federation were skeptical of the explicit mention of regional conventions, such as the Budapest Convention, and their impact on Member States that are not party to such conventions.

The proposals for the inclusion of a provision on asset recovery and the return of the proceeds of crime elicited a lukewarm response from Egypt, Iran, Brazil, Russia, China, Canada, Switzerland, the USA, and Jamaica (on behalf of CARICOM countries), but appear likely to gain traction in forthcoming sessions.

3. Way Forward

Member countries are expected to submit their written contributions on criminalisation, general provisions, procedural measures, and law enforcement in the forthcoming month. These written submissions are likely to bring in more clarity about the expectations and key demands of the different member states. 

The upcoming sessions will also indicate how the demands put forth by developing and least-developed countries during the recently concluded first session are taken up in the negotiation process. Furthermore, it is yet to be seen whether these countries will chart out a path for themselves or get subsumed in the west and east binaries seen in other multilateral fora dedicated to clarifying the rules governing cyberspace.


Note: 

*The full recordings of the first session of the Ad-hoc Committee to elaborate an international convention on countering the use of information and communications technologies (ICTs) for criminal purposes are available online and can be accessed on UN Web TV.

**The reader may also access more information on the first session of the Ad-hoc Committee here, here and here.

Building a Feminist Critique of Cybersecurity: Centering experiences of those at the margins

Tavishi

“Secure Home (pt. 2)” by Ren Wang is licensed under CC BY 4.0

Introduction

Our everyday lives are increasingly being mediated by technology. Social networks are shaping our interpersonal communications, algorithms are driving our decisions and behaviours, and smart devices are modulating our home and workplace environment. Haraway postulated this rising penetration of technology in practically all aspects of life through the image of “cyborg bodies” entangled in its discourses and effects to the point where “who makes and who is made in the interaction between human and machine” is impossible to decipher. With the ever-increasing dependence of our everyday lives on technology, the internet directly interpellates subjects while also engaging with other social discourses that contribute to subject formation. As technology becomes fundamental in shaping not only our everyday lives but also our subjectivities, the question of security in cyberspace becomes increasingly personal. 

Although cybersecurity has been recognized as a global concern, there is no agreement on how it should be conceptualized. The question of “who or what is to be protected” lies at the heart of these debates. A growing body of literature moves beyond the protection of “cyberspace and the underlying ICT infrastructure”, and defines cybersecurity as the protection of those “who function in the cyberspace, i.e. individuals, organisations, and nations”. In practice, however, the sovereignty of the state is treated as the dominant objective of cybersecurity, and powerful actors like states, militaries and corporates drive the discourse at the risk of invisibilizing the ordinary user.

A feminist approach to cybersecurity must place humans at its centre. It must also recognize that our experiences in the online world are shaped by our identity and the power structures prevalent in society. Consequently, cybersecurity threats are perceived and experienced differently by minorities, women, and non-binary people, who are also routinely absent or underrepresented in such discussions. This blog argues that women’s experiences, particularly those at the margins, must be at the centre of how cybersecurity is conceptualized in technical design and legislation. The piece begins by examining questions of representation and its implications. It further probes how gender-blind design and the underlying assumptions of the public/private dichotomy lead to gendered threats like technology-facilitated intimate partner violence being excluded from or trivialised in cybersecurity discussions. Finally, it looks at the case of intimate image abuse and examines the framework of bodily integrity as a key tool to centre women’s experiences in cybersecurity.

Women in Cybersecurity

Only 25% of the global cybersecurity workforce identify as women. The work culture of incident response teams, which are predominantly staffed by men, reinforces the association of technical expertise with masculinity. Feminist theory not only advocates for greater representation of women in cybersecurity design, defence and response but also questions how epistemic authority is allocated. At the heart of a feminist approach to cybersecurity lies the question, “Who is considered the bearer of knowledge?”. Since, in both technology and law, technocratic expertise is the primary epistemic authority, the experiences of ordinary citizens are invisibilized and often treated as problems to be solved by experts through behavioural change or legislation from the top. It is this top-down approach to cybersecurity that is challenged by feminist standpoint epistemology, where the subjective experience of those at the margins is the key source of knowledge.

Another important aspect of feminist research is the centrality of political action and the dismantling of the separation between theory and practice. A feminist approach to cybersecurity will therefore actively engage ordinary users, especially those marginalised along multiple axes of oppression, in building knowledge, understanding threats, and bringing about change through political action and solidarity. The Oxford Internet Institute’s Reconfigure Network, a group of feminist cybersecurity practitioners and researchers, is a step in this direction. Under this project, ordinary citizens, through a series of community workshops, engage in defining threats based on their own understanding and experiences.

The public/private binary

A gender-blind approach to cybersecurity does not take into account how threats are experienced differently by people depending on their social positions. This is because, contrary to popular belief, technical deliberation is not objective and value-neutral: the design, construction, and regulation of technology are embedded with socio-political values. Gendered threats faced by women and by individuals of marginalised gender and sexual identities are often overlooked or trivialized in design considerations. A common example is systems using personal-information questions as backups to passwords, e.g. the name of your first pet or the middle name of a parent. This assumes that the “bad actor” will always be a stranger and not an intimate partner or former partner. Similarly, Slupska has shown how the threat modelling of major smart home systems does not take intimate partner violence (IPV) into account: in no use case is the owner of the device treated as a security threat to the other users of that device. This can be attributed to the public/private binary, in which the home is constructed as a safe place despite rising cases of gender-based violence facilitated by smart home devices.
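The design point can be made concrete with a toy sketch of threat-model enumeration (all adversary and asset names here are illustrative, not drawn from Slupska’s study or any real system): a model that only enumerates stranger adversaries never flags knowledge-based recovery questions as a weakness, while a model that also enumerates an intimate partner immediately surfaces it.

```python
# Toy threat-model check: does an account-recovery mechanism hold up
# against each adversary class we bother to enumerate?
# All names below are hypothetical, for illustration only.

RECOVERY_QUESTIONS = {"first_pet_name", "parent_middle_name"}

# Knowledge each adversary class can plausibly draw on.
ADVERSARIES = {
    "remote_stranger": set(),  # knows none of the victim's personal details
    "intimate_partner": {"first_pet_name", "parent_middle_name", "device_pin"},
}

def vulnerable_adversaries(secrets, adversaries):
    """Return adversary classes that already know at least one recovery secret."""
    return sorted(
        name for name, known in adversaries.items()
        if secrets & known  # set intersection: knowledge the adversary shares
    )

# A model that only lists the stranger reports no weakness;
# adding the intimate partner surfaces the gendered threat.
print(vulnerable_adversaries(RECOVERY_QUESTIONS, {"remote_stranger": set()}))  # []
print(vulnerable_adversaries(RECOVERY_QUESTIONS, ADVERSARIES))  # ['intimate_partner']
```

The sketch simply makes visible what the prose argues: the vulnerability is not in the mechanism alone but in which adversaries the modeller chooses to imagine.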

Feminist scholars have long critiqued the public/private binary, which relegates gendered violence to the domain of the private. Technology-facilitated sexual violence like intimate image abuse (commonly referred to as “revenge porn”) is often constructed as a concern of individual privacy rather than of cybersecurity. Even the expectations placed on users are gendered: women are expected to maintain complete control of their digital footprint and to activate privacy settings on social networks to protect themselves. Any failure to do so results in victim-blaming, shifting the onus of ensuring cybersecurity entirely onto individual victims.

This is also evident in the language of “revenge porn”, which reduces the scope and severity of the crime by invoking narratives of relationship feuds and disgruntled partners. These issues have traditionally been placed in the domain of the private and emotional, which is constructed as inferior and less serious than the public domain of rational security. The framing can also limit legislative reform when statutes make an “intent to harm or harass” the victim an element that must be proved. Not only does this narrow conception fail to account for the economy surrounding the distribution of such imagery, it also makes intent difficult to prove.

Centering Women’s Experiences and Bodily Integrity in a Digitally Mediated World

Consequently, it is argued that “revenge pornography” be seen as part of the “continuum of image-based sexual abuse”. This draws on Kelly’s seminal work on the continuum of sexual violence, which challenges the “legal-analytical categorization” of sexual offences, a categorization that often fails to centre women’s experiences and produces a hierarchy of sexual offences. A range of abusive practices, including revenge porn, sextortion, upskirting, voyeurism, and deepfake pornography, fall under the umbrella of image-based sexual abuse.

Franks has advocated treating the violation of privacy as the fundamental harm to be criminalised in such legislation, under the rubric of non-consensual pornography. However, scholars have advocated going beyond models that treat intimate image abuse as merely a content or information privacy violation, towards a framework of bodily integrity understood in terms of self-determination and inviolability. By circulating intimate images non-consensually, the perpetrator curbs the victim’s right to self-determination. The centring of women’s experiences of bodily harm is captured in Durham’s essay:

“Although virtual worlds offer a putative escape from the constraints of the corporeal, bodies still haunt the mediascape, and the experiential connections between symbolized and real world bodies must be acknowledged as central to feminism’s liberatory goals.”

Since the body is the site where gender is inscribed, bodily integrity provides a framework to understand what values and protections society attributes to different bodies. It is thus essential to note how the bodies of trans persons, Dalits, Bahujans, Adivasis, and minorities are most vulnerable, as they are seen as sites on which power is exerted.

It is also important to understand that online images of the body are not mere representations but act as digital prostheses embodying our subjectivity. That is to say, today we experience the world and our beingness through “an assemblage of organic body, conventional prostheses and digital prostheses”. This is fundamental to understanding the continuity of experience between the offline and online worlds, which can prevent us from discounting the severity of intimate image abuse and its impact on victims’ lives. Many victims experience a feeling of violation through unintended exposure that they liken to sexual assault and rape. Further, this framework can prevent a narrow definition of online intimate image abuse that excludes images which do not traditionally classify as “intimate”. Thus, the repeated instances of non-consensually sourced images of Muslim women being put up for “auction” on apps should be recognized as targeted sexual harassment and intimate image abuse, in addition to being hate crimes. Likewise, deepfake nudes, which are not actual representations of the body but nonetheless affect an individual’s online subjectivity, can be recognized as an important emergent form of intimate image abuse.

Bodily integrity thus provides a framework through which women’s diverse experiences can be placed at the centre of understanding and responding to cybersecurity threats. Approaches like these can pave the way for centring the safety and well-being of human beings, especially those who have been historically marginalised, in cybersecurity debates and discussions. This can prevent us from replicating the same power hierarchies and patterns of exploitation in a new world of augmented subjectivity where technology is ubiquitous.

Works Cited

Bowles N, ‘Thermostats, Locks and Lights: Digital Tools of Domestic Abuse’ The New York Times (23 June 2018) <https://www.nytimes.com/2018/06/23/technology/smart-home-devices-domestic-abuse.html> accessed 13 February 2022

Deibert RJ, ‘Toward a Human-Centric Approach to Cybersecurity’ (2018) 32 Ethics & International Affairs 411

Desai A, ‘Trans Rights Activist Misgendered, Trolled After Starting Online Fundraiser’ (The Wire) <https://thewire.in/lgbtqia/trans-rights-activist-misgendered-trolled-after-starting-online-fundraiser> accessed 13 February 2022

Durham MG, ‘Body Matters’ (2011) 11 Feminist Media Studies 53 <https://doi.org/10.1080/14680777.2011.537027> accessed 9 February 2022

Flanagan M, Howe DC and Nissenbaum H, ‘Embodying Values in Technology: Theory and Practice’ (2008) 322 Information technology and moral philosophy

Franks MA, ‘How to Defeat “Revenge Porn”: First, Recognize It’s About Privacy, Not Revenge’ (HuffPost, 22 June 2015) <https://www.huffpost.com/entry/how-to-defeat-revenge-porn_b_7624900> accessed 10 February 2022

Franks MA, ‘Revenge Porn Reform: A View from the Front Lines’ (2017) 69 Florida Law Review 1251 <https://heinonline.org/HOL/P?h=hein.journals/uflr69&i=1289> accessed 8 February 2022

Haraway D, ‘A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century’, The international handbook of virtual learning environments (Springer 2006)

hooks bell, ‘Sisterhood: Political Solidarity between Women’ (1986) 23 Feminist Review 125

(ISC)2, ‘(ISC)2 Cybersecurity Workforce Study, 2021: A Resilient Cybersecurity Profession Charts the Path Forward’ (2021)

Kain D and others, ‘Online Caste-Hate Speech: Pervasive Discrimination and Humiliation on Social Media’ (Centre for Internet and Society (CIS) 2021) <https://cis-india.org/internet-governance/blog/online_caste-hate_speech.pdf>

Khan HM, ‘The Dread of Discovering I’m on an App That Auctioned Me | VIEW’ India Today <https://www.indiatoday.in/news-analysis/story/discovering-yourself-sulli-deals-list-1895867-2022-01-04> accessed 7 March 2022

Kelly L, ‘The Continuum of Sexual Violence’, Women, violence and social control (Springer 1987)

Maschmeyer L, Deibert RJ and Lindsay JR, ‘A Tale of Two Cybers – How Threat Reporting by Cybersecurity Firms Systematically Underrepresents Threats to Civil Society’ (2021) 18 Journal of Information Technology & Politics 1 <https://doi.org/10.1080/19331681.2020.1776658> accessed 12 February 2022

McGlynn C, Rackley E and Houghton R, ‘Beyond “Revenge Porn”: The Continuum of Image-Based Sexual Abuse’ (2017) 25 Feminist Legal Studies 25 <https://doi.org/10.1007/s10691-017-9343-2> accessed 10 February 2022

Millar K, Shires J and Tropina T, ‘Gender Approaches to Cybersecurity: Design, Defence and Response’ (United Nations Institute for Disarmament Research 2021) <https://unidir.org/publication/gender-approaches-cybersecurity>

Slupska J, Duckworth SD and Neff G, ‘Reconfigure: Feminist Action Research in Cybersecurity’ <https://ora.ox.ac.uk/objects/uuid:d84dc398-5324-48c3-9af4-ca54fb92858f>

Patella-Rey P, ‘Beyond Privacy: Bodily Integrity as an Alternative Framework for Understanding Non-Consensual Pornography’ (2018) 21 Information, Communication & Society 786 <https://doi.org/10.1080/1369118X.2018.1428653> accessed 7 February 2022

Rey PJ and Boesel WE, ‘The Web, Digital Prostheses, and Augmented Subjectivity’ in Routledge Handbook of Science, Technology, and Society (Routledge 2014) 173

Salim M, ‘“Bulli Bai”, “Sulli Deals”: On Being Put Up for “Auction” as an Indian Muslim Woman’ (The Wire, 16 January 2022) <https://thewire.in/communalism/indian-muslim-woman-auction-bulli-bai> accessed 13 February 2022

Slupska J, ‘Safe at Home: Towards a Feminist Critique of Cybersecurity’ (2019) 15 St Antony’s International Review 83

Tickner JA, ‘Feminist Perspectives on International Relations’ [2002] Handbook of international relations 275

Von Solms R and Van Niekerk J, ‘From Information Security to Cyber Security’ (2013) 38 computers & security 97

Guest Post: Right to Privacy at Home

This post is authored by Suhavi Arya.

In Justice (Retd.) K.S. Puttaswamy vs. Union of India (“Puttaswamy”), the Apex Court noted that there is a distinction between public and private spaces. Keeping this in mind, this post investigates the scope of one’s right to privacy in one’s own home. In the course of writing this post, I relied on CCG’s Privacy High Court Tracker to identify cases that discuss how the right to privacy may be interpreted in light of this public-private distinction.

The case of Vilasini vs. State of Kerala from the High Court of Kerala sheds some light on the issue. The case relates to Kerala’s toddy (palm wine) shops, which were increasingly described as something of an eyesore, with the manufacturing, storage, consumption, and disposal of toddy creating a challenging atmosphere for surrounding residents. Those most affected were the shops’ immediate neighbours, several of whom filed writ petitions against the operation of toddy shops in their neighbourhoods. One such petition also challenged the shifting of a toddy shop to the petitioner’s colony, near a local “anganwadi”. The writ petitions concerned several different toddy shops and varied issues; however, the Kerala High Court noted that the underlying concern in all of them was the protection of the petitioners’ privacy in their own homes, and therefore considered the petitions together in a common judgement.

In the judgement, a single judge bench of Justice A. Muhamed Mustaque stated that since the sale of liquor is regulated by the State, the State is bound to address any implications for the rights of others affected by the conduct and placement of toddy shops. Crucially, in this case it was the State that determined the location of toddy shops through a licensing regime. The High Court observed that the Apex Court had noted in Puttaswamy that privacy is not lost or surrendered merely because the individual is in a public space: privacy attaches to the person and not the place, as it is part of the dignity of the human being. Furthermore, the Court added that “Privacy has both positive and negative content: The negative content restrains the State from committing an intrusion upon the life and personal liberty of a citizen. Its positive content imposes an obligation on the State to take all necessary measures to protect the privacy of the individual”. This is important because, while Puttaswamy did not enumerate an exhaustive list of rights that fall under ‘privacy’, it stated that anything essential to the dignity of a human being in private can be enforced by the person in public, including their well-being in their homes.

With this in mind, in Vilasini the Kerala High Court observed that there needs to be a standard by which a violation of privacy can be assessed. The High Court sought guidance from certain judgements of the European Court of Human Rights (‘ECtHR’) and laid down a framework of assessment that may apply in the Indian context as well. After perusing several European cases, the High Court noted that the ECtHR[1] had developed a test: for an action to be a breach of privacy, it must have a “direct immediate consequence” for the applicants’ right to respect for their homes under Article 8 of the European Convention on Human Rights (respect for home and private life). These ECtHR cases balanced the gravity and severity of the nuisance caused by the impugned action against the interests of the community as a whole, assessing whether the State had struck a fair balance or violated the right to privacy of an individual. For example, one case concerned noise pollution from bars and discotheques near the petitioner’s house, with the ECtHR ruling that the noise was above permitted levels and had persisted over a number of years, thus violating the privacy of the petitioner.

In Vilasini, the High Court uses the phrase ‘threshold severity test’ to describe this analysis. The roots of this test can be traced to those ECtHR cases, which look to the minimum level of severity of the action complained against and evaluate the authorities’ role once a complaint is made. Although Article 8 of the European Convention expressly refers to ‘the home, private life, and family’, the Kerala High Court has read this as a facet of India’s right to privacy doctrine. Based on this interpretation of the right to privacy, the High Court restrained the operation of one toddy shop and directed the State authorities to assess the privacy impact of the operation of the other shops.

The case of Puttaswamy has led to a diverse applicability of privacy and Article 21, and new contours of privacy are now being explored in different High Courts around the country. As courts work out the scope of the right to privacy and associated rights, it is important to chart trends and understand the implications of each new facet of privacy being recognised. The specific contours of privacy and its interactions with the public realm are being developed by courts on a case-by-case basis, with each new challenge to state action throwing up novel questions for Indian privacy jurisprudence. In furthering this jurisprudence, it is important to keep in mind the most fundamental aspect of privacy: that it is integral to every aspect of a person’s overall well-being. The Kerala High Court’s recognition that the right to privacy includes a right to be left alone and at peace in one’s own home, and that the State has a duty to facilitate this, is a concrete application of a new facet of the right to privacy.


[1] Moreno Gomez vs. Spain (Application No. 4143/02); Hatton and Others vs. the United Kingdom [GC] (No. 36022/97, ECHR 2003-VIII); Lopez Ostra vs. Spain (Application No. 16798/90); Guerra and Others vs. Italy (Application No. 116/1996/735/932); Cuenca Zarzoso vs. Spain (Application No. 23383/12); Deés vs. Hungary (Application No. 2345/06); and Fadeyeva vs. Russia (Application No. 55723/00)

Critiquing the Definition of Cyber Security under India’s Information Technology Act

Archit Lohani

“Security Measures” by Afsal CMK is licensed under CC BY 4.0

Introduction

As boundary-less cyberspace becomes increasingly pervasive, cyber threats continue to pose serious challenges to every nation’s economic security and digital development. Sophisticated attacks such as the WannaCry ransomware attack of 2017, for example, rendered more than two million computers useless, with estimated damages of up to four billion dollars. As cyber security threats continue to proliferate and evolve at an unprecedented rate, incidents of doxing, distributed denial of service (DDoS) attacks, and phishing are on the rise and are even being offered as services for hire. In India, the task is intensified by the sheer number of cyber incidents, and a closer look suggests that the challenge is exacerbated by an outdated framework and a lack of basic safeguards.

This post will examine one such framework, namely the definition of cybersecurity under the Information Technology Act, 2000 (IT Act).

Under Section 2(1)(nb) of the IT Act:

“cyber security” means protecting information, equipment, devices, computer, computer resource, communication device and information stored therein from unauthorised access, use, disclosure, disruption, modification or destruction;

This post contends that the Indian definitional approach adopts a predominantly technical view of cyber security and restricts effective measures to ensure cyber-resilience between governmental authorities, industry, non-governmental organisations, and academia. This piece also juxtaposes the definition against key elements from global standards under foreign legislations and industry practices.

What is Cyber security under the IT Act?

The current definition of cyber security was adopted under the Information Technology (Amendment) Act, 2009, which was hurriedly passed in the aftermath of the 26/11 Mumbai terrorist attacks of 2008. The definition was codified to facilitate protective functions under Sections 69B and 70B of the IT Act. Section 69B enables the monitoring and collection of traffic data to enhance cyber security and to prevent intrusion and the spread of contaminants. Section 70B institutionalised the Indian Computer Emergency Response Team (CERT-In) to identify, forecast, and issue alerts and guidelines, coordinate cyber incident response, and otherwise further the state’s cyber security imperatives. Since then, a range of institutions performing key functions to detect, deter, protect, and adapt has evolved rapidly. However, this post argues that the current definition fails to incorporate elements necessary to contemporise cyber security policy and ensure its effective implementation.

Critique of the IT Act definition

It is clear that deterrence has failed, as the volume of incidents shows no sign of abating, making cyber-resilience the more realistic objective that nations should strive for. The definition under the IT Act is an old articulation that protects the referent objects of security (“information, equipment, devices, computer, computer resource, communication device and information”) against specific events that aim to cause harm to these objects through “unauthorised access, use, disclosure, disruption, modification or destruction”.

There are several issues with this dated articulation. First, it restrictively lists what is being protected (the aforementioned referent objects). Second, by limiting the referent objects and events, the definition becomes prescriptive. Third, it does not capture the multiple, interwoven dimensions and inherent complexity of cybersecurity, which include interactions between humans and systems. Fourth, because the enlisted events are limited, no comparable protection is afforded against accidental events and natural hazards affecting cyberspace-enabled systems (including cyber-physical systems and industrial control systems). Fifth, key elements are missing: (1) the definition does not include the technological-solutions aspect of cyber security, unlike the International Telecommunication Union’s (2009) definition, which acknowledges “technologies that can be used to protect the cyber environment”; and (2) it fails to incorporate the strategies, processes, and methods to be undertaken. With these elements missing, the definition falls behind contemporary standards, which are addressed in the following section.

To put things in perspective, global conceptualisations of cybersecurity are undergoing a major overhaul to accommodate the increased complexity, pace, scale, and interdependencies across cyberspace and information and communication technology (ICT) environments. In comparison, the definition under the IT Act has remained unchanged.

Wider conceptualisations have, however, been reflected in international and national engagements such as the National Cyber Security Policy (NCSP). For example, within its mission statement the policy document recognises technological solution elements, and the interactions between humans and ICTs in cyberspace, as a key rationale for the cyber security policy.

However, differing conceptualisations across policy and legislative instruments can cause confusion and introduce implementation challenges within cybersecurity regulation. For example, the 2013 CERT-In Rules rely on the IT Act’s definition of cyber security to define cyber security incidents and cyber security breaches, further entrenching the narrow, technically dominant discourse centred on the confidentiality, integrity, and availability triad.

The following section examines a few other definitions to illustrate the shortcomings highlighted above.

Key elements of Cyber security

Despite a plethora of definitions, there is no universal agreement on how cybersecurity should be conceptualised, which has manifested in long-drawn deliberations at various international fora.

Cybersecurity aims to counter a constantly evolving threat landscape. Although it is difficult to build consensus on a single definition, a few key features can be agreed upon: the definition must address the interdisciplinarity inherent to cyber security, its dynamic nature, and the complex, multi-level ecosystem in which it exists. A multidisciplinary definition can give authorities and organizations visibility and insight into how new technologies affect their risk exposure, and can further ensure that such risks are suitably mitigated. To effectuate cyber-resilience, stakeholders have to navigate governance, policy, operational, technical, and legal challenges.

An inclusive definition can ensure a better collective response and bring multiple stakeholders to the table. By institutionalising a greater emphasis on resilience, it can foster cooperation between stakeholders rather than a punitive approach focused on liability and criminality. It can enable a bottom-up approach to countering cyber security threats and systemic incidents across sectors, and can further CERT-In’s information-sharing objectives through collaboration between stakeholders under Section 70B of the IT Act.

When it comes to the regulation of technologies, which embody socio-political values, the discourse (in this case, the definition) suffers from the dominance of technical perspectives, contrary to the popular belief that technical deliberations are objective and value-neutral. For example, the National Institute of Standards and Technology (NIST) framework defines cybersecurity as “the ability to protect or defend the use of cyberspace from cyber-attacks”, directing the reader to its definitions of cyberspace and cyberattack to cover their various elements. However, those definitions, too, have a predominantly technical lens.

Alternatively, definitions of cyber security would benefit from inclusive conceptions that factor in human engagements with systems, acknowledge interrelated dimensions and inherent complexities of cybersecurity, which involves dynamic interactions between all inter-connected stakeholders. An effective cybersecurity strategy entails a judicious mix of people, policies and technology, as well as a robust public-private partnership.

Cybersecurity is a broad term with highly variable, subjective definitions, and this hinders the formulation of appropriately responsive policy and legislative action. As a benchmark, we can borrow the definition by Craigen et al.: “the organisation and collection of resources, processes, and structures used to protect cyberspace and cyberspace-enabled systems from occurrences that misalign de jure from de facto property rights.” The benefit of this articulation is that it necessitates a deeper understanding of the harms and consequences of cyber security threats and their impact. However, this definition cannot be adopted within the Indian legal framework as (a) property rights are not recognised as fundamental rights in India, and (b) it narrows the definition’s application to a harms-and-consequences standard.

Most importantly, the same authors identify five common elements of a holistic and effective approach to defining cybersecurity. Drawn from a literature review of nine cybersecurity definitions, these elements are:

  • technological solutions
  • events
  • strategies, processes, and methods
  • human engagement; and
  • referent objects.

These elements highlight the complexity of the process, which involves interactions between humans and systems in protecting digital assets, and themselves, from various known and unknown risks. Simply put, any unauthorized access, use, disclosure, disruption, modification, or destruction results in at least a loss of functional control over the affected computer device or resource, to the detriment of the person and/or legal entity in whom its lawful ownership is vested. The definition codified under the IT Act captures only part of this complexity and its implications.
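The critique above can be sketched as a simple checklist: score a definition against the five elements and see which it covers. This is purely illustrative; the field names and the coverage values for the IT Act are my own reading of the critique in this post, not an official or standardised assessment.

```python
# The five elements identified in the literature review, as a checklist.
# Coverage values for the IT Act definition reflect this post's critique
# (author's reading, not an official assessment).

ELEMENTS = [
    "technological_solutions",
    "events",
    "strategies_processes_methods",
    "human_engagement",
    "referent_objects",
]

IT_ACT_COVERAGE = {
    "technological_solutions": False,      # no mention of protective technologies
    "events": True,                        # unauthorised access, use, disclosure...
    "strategies_processes_methods": False, # no strategies/processes/methods
    "human_engagement": False,             # human-system interaction absent
    "referent_objects": True,              # information, equipment, devices...
}

def covered(definition_coverage):
    """List which of the five elements a given definition addresses."""
    return [e for e in ELEMENTS if definition_coverage.get(e)]

print(covered(IT_ACT_COVERAGE))  # ['events', 'referent_objects']
```

On this reading, the statutory definition addresses only two of the five elements, which is the sense in which it "only partly captures" the complexity of cyber security.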

Conclusion

Economic interest is a core objective that necessitates cyber-resilience. Recognising the economic consequences of attacks, rather than merely protecting enumerated resources such as computer systems, acknowledges the complexity of cybersecurity. Currently, the definition of cybersecurity is dominated by technical perspectives and disregards other disciplines that should ideally act in concert to address complex challenges. Cyber-resilience can be operationalised through a renewed definition; divergent approaches within India to tackling cybersecurity challenges will otherwise act as a strategic barrier to economic growth, data flows, investment, and, most importantly, effective security, diverting resources away from more effective strategies and capacity investments. Finally, the Indian approach should evolve from the threat perception and the socio-technical character of the term, and should aim to bring cybersecurity stakeholders together.

Cybersecurity and Trade: Understanding Linkages for the Global South

Sukanya Thapliyal*

  1. BACKGROUND: 

Cybersecurity concerns are increasingly creeping into the international trade arena. Emerging technologies such as Big Data, Artificial Intelligence (AI), and the Internet of Things (IoT) have driven the digitalisation of the economy and society and transformed our day-to-day lives, and the COVID-19 pandemic has further accelerated this process. As a result, countries, businesses, and individuals worldwide are embracing the shift and becoming increasingly reliant on digital technologies. The digital economy has contributed significantly to the growth of services trade, reduced trade costs, and increased the participation of micro, small, and medium enterprises (MSMEs) in international trade. The shift has also empowered enterprises to amass and analyse massive amounts of data, helping businesses and organisations improve their operations and develop better products and services for existing and prospective consumers.

However, the ensuing interconnectivity and reliance on digital technologies expose societies and economies to several risks, including cyberattacks such as ransomware, political and economic espionage, identity theft, and intellectual property theft. These threats affect national defence authorities, critical infrastructure, commercial enterprises, and enforcement agencies alike, and can emerge from both State and non-State actors. Countries, however, vary greatly in their ability to understand and address these challenges. A recent study by Kaspersky Labs identified the Asia-Pacific countries (APAC) as among the most prominent targets of cyberattacks, owing to their rapidly increasing use of digital technologies coupled with limited awareness of cybersecurity and limited resources deployed towards mitigation. India features among the top five countries most prone to cyberattacks, along with China and Pakistan.

This piece seeks to map the dominant discourse on cybersecurity and international trade. It first examines the current World Trade Organization (WTO) framework and selected Free Trade Agreements (FTAs) to show how cybersecurity concerns are presently understood only as matters of national security or as potential non-tariff barriers (NTBs). Since cybersecurity is inextricably linked to a Member State’s technical capacity to identify vulnerabilities, it then argues that there is an urgent need to reframe cybersecurity as an issue within capacity-building and technology-transfer discussions.

Image by geralt. Licensed via CC0.
  2. CYBERSECURITY ISSUES UNDER THE WORLD TRADE ORGANIZATION (WTO)

Despite rising cybersecurity concerns, international trade rules engage minimally with this area. Prominent international trade organisations (such as the WTO) and legal instruments like Free Trade Agreements (FTAs) have primarily focused on setting rules for digital commerce and have addressed cybersecurity as an incidental, secondary issue. Within the WTO’s existing framework, cybersecurity issues do not fall within a single set of rules. Depending on the context and subject of a dispute, several WTO Agreements, including the General Agreement on Tariffs and Trade (GATT), the General Agreement on Trade in Services (GATS), the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), and the Agreement on Technical Barriers to Trade (TBT Agreement), can have some bearing on its result. As a consequence, emerging cybersecurity issues can only be understood and interpreted on a case-by-case basis.

Currently, countries impose cybersecurity measures ranging from complete prohibitions on the trade of goods or services, to tariff and non-tariff barriers, certification requirements and the imposition of domestic standards, among others. Although none of these cybersecurity measures has been challenged before the WTO’s Dispute Settlement System so far, concerns were raised by the European Union, USA, Canada, Japan and Australia in 2017 against China’s imposition of cybersecurity measures on ICT products and services. In another instance, China raised concerns over Australia banning Chinese companies from supplying equipment for a 5G mobile network on the grounds of national security.

Propelled by similar developments, in which Member States imposed different types of cybersecurity measures (prohibitions on trade in technology goods, certification requirements and domestic standards), the discourse on cybersecurity and trade has focused primarily on cybersecurity measures as potential non-tariff barriers. As the WTO primarily focuses on strengthening economic cooperation and reducing or eliminating trade barriers (tariff and non-tariff), the discourse has centered on these concerns alone. Numerous studies have identified the need to distinguish genuine domestic cybersecurity policy measures taken by Member States from those that are merely disguised protectionism or purely political in nature.

Scholars have also highlighted that Member States might justify such actions based on the national security exceptions articulated under the GATT (Article XXI), GATS (Article XIV bis), TRIPS (Article 73) and other WTO Agreements. The national security exception, as broadly understood, allows Member States to take measures they consider necessary for the protection of their essential security interests. This is problematic from several perspectives.

The security exception was long touted as a self-judging provision outside the purview of judicial review by the Dispute Settlement Body (DSB). This understanding was substantially modified in the context of GATT’s security exception by the WTO Panel Report in Russia – Traffic in Transit in 2019. The Panel opined that Article XXI(b) is not totally self-judging and that the term “essential security interests” is restricted to specific scenarios related to military facilities, nuclear facilities and measures taken in time of “war” or “other emergency in international relations”. Further, the Panel emphasised that such a measure must be invoked in “good faith”. While the Russia – Traffic in Transit Panel Report does provide a straightforward interpretation of the scope of the provision, several scholars, including Sarah Alturki and Neha Mishra, have examined the security exceptions laid down under the GATT and GATS and found them problematic for addressing cybersecurity measures. They maintain that the existing security exceptions under the WTO framework are dated and were not conceived to cover cyber conflicts. Although the DSB may undertake to read such provisions in an evolutionary manner, the ambiguous nature of cyber-threats, coupled with the lack of international consensus on cybersecurity governance, makes it extremely challenging to resolve cybersecurity-related disputes.

  2. CYBERSECURITY PROVISIONS UNDER FREE TRADE AGREEMENTS (FTAs)

Besides the security exceptions under the WTO framework, some Free Trade Agreements, in their digital trade/e-commerce chapters, contain dedicated provisions concerning inter-State cooperation on cybersecurity. For instance, Article 14.16 of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) recognises the importance of capacity building and collaboration mechanisms to identify and mitigate malicious intrusions or the dissemination of malicious code affecting the electronic networks of countries party to the Agreement. Article 12.13 of the Regional Comprehensive Economic Partnership (RCEP) features an identical provision. Further, Article 19.15 of the United States-Mexico-Canada Agreement (USMCA) features an expanded version of this condition: it obligates the Member States to share information and best practices, and to employ risk-based approaches that rely on consensus-based standards to detect, respond to, and recover from cybersecurity events.

To contain the misuse of cybersecurity measures that can harm free trade and economic cooperation among participating countries, several FTAs have included provisions to deter such behaviour. These include prohibitions on requiring the disclosure of source code3, prohibitions on requirements to locate computing facilities in a specific jurisdiction4, and provisions mandating the cross-border transfer of information by electronic means5. Such measures often find themselves in the crossfire of a host of concerns emanating from economic development, transparency and cybersecurity.

It is also important to note that these provisions target policies restraining the free flow of cross-border data (data-localisation policies) prevalent in a number of countries, including India, China and Vietnam.

  3. OTHER POSSIBLE FRONTIERS FOR CYBERSECURITY AND INTERNATIONAL TRADE IN RESPECT OF THE GLOBAL SOUTH

Beyond the above-mentioned concerns, cybersecurity is also a question of the technical competence and resources available to several developing and least-developed countries. Several studies and reports, including the recent Kaspersky projections for 2022, indicate a wide gap in countries’ ability to detect, assess and effectively respond to cyberattacks. There has been a steep rise in the adoption of digital tools, often outpacing the establishment of the necessary state institutions, legal regulations and capacity to manage new challenges. Digital solutions are seen as the gateway to economic growth and social development, but these developments should not be viewed in isolation from cybersecurity capacity building. The unbridled adoption of digital solutions without adequate security can have far-reaching implications for the economy and can lead to poor infrastructure and hollow digital development for countries in the global south.

As mentioned above, the current provisions under the FTAs and the discussions at the WTO surrounding cybersecurity concerns in international trade extend only to the sharing of information and best practices. Such glaring vulnerabilities can only be addressed through development assistance that includes technology transfer and cybersecurity capacity building, which requires active cooperation from developed countries. Discussions around digital development must be embedded in digital security. Developing countries, including India, should leverage their positions in economic forums and constructively channel the discussions around technology transfer and technology facilitation mechanisms (TFM) towards cybersecurity, as they have done in the past in the context of drug development and climate change. The existing tools for developing and least-developed countries under Articles 66 and 67 of the TRIPS Agreement are insufficient, have seen weak implementation, and are unlikely to bridge this gap. As India assumes the G20 presidency on December 1, 2022, it can lead the way for such momentous changes and offer the global south perspective the world needs.


*The author is grateful for the comments and contributions by Ms Garima Prakash, Deputy Manager, NASSCOM.

References:

  1. It is important to note that the WTO Agreements, dating back to 1994, did not treat cyber issues specifically, but their rules nevertheless apply to cyber-related policies. See: Kathleen Claussen, ‘Economic cybersecurity law’ in Routledge Handbook of International Cybersecurity, pp. 341-353 (Routledge, 1, 2020). See also: Dongchul Kwak, ‘No More Strategical Neutrality on Technological Neutrality: Technological Neutrality as a Bridge Between the Analogue Trading Regime and Digital Trade’, World Trade Review (2021), 1–15.
  2. Post-2017, around 70 WTO Member States, spearheaded by the USA and other developed countries, have initiated “exploratory work together towards future WTO negotiations on trade-related aspects of electronic commerce.” India and South Africa are not part of this initiative. Nevertheless, the result of these discussions will have some bearing on the future of cybersecurity and trade.
  3.  Article 19.16 of USMCA (Similar provisions are incorporated under other trade agreements including CPTPP and RCEP).
  4. Article 19.12 of USMCA. (Similar provisions are incorporated under other trade agreements including CPTPP and RCEP).
  5. Article 19.11 of USMCA. (Similar provisions are incorporated under other trade agreements including CPTPP and RCEP).

Guest Post: The 2021 Intermediary Guidelines and their impact on OTT Platforms

This post was authored by Radhika Roy

On 25 February 2021, the Central Government notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘2021 Rules’). These Rules have been the subject of much controversy, as social media intermediaries and media houses have challenged them in various High Courts across the country. The Bombay High Court in AGIJ Promotion of Nineteenonea Media v Union of India stayed the operation of Rule 9(1) and Rule 9(3), the former provision mandating adherence to the ‘Code of Ethics’ and the latter creating a three-tiered structure to regulate online curated content. The High Court held that these rules contravened Article 19(1)(a) of the Constitution and transgressed the rule-making power delegated by the Information Technology Act, 2000 (‘IT Act’). This was affirmed by the Madras High Court in Digital News Publishers Association v Union of India, which noted that the order passed by the Bombay High Court had a pan-India effect.

While the Information Technology (Intermediaries Guidelines) Rules, 2011 applied solely to intermediaries, the 2021 Rules cover both intermediaries and publishers of digital content, including OTT platforms (which fall under ‘publisher of online curated content’). At the outset, the departure from utilising existing legislations such as the Cinematograph Act, 1952, or the Cable Television Networks (Regulation) Act, 1995, and instead invoking the IT Act to regulate publishers of film and television content, is curious. The aforementioned Bombay High Court judgement addressed this, observing that fields already occupied by independent legislations could not possibly be brought within the purview of the 2021 Rules.

The regulation of OTT platforms assumes particular significance given recent controversies concerning web series that allegedly contain objectionable content or offend religious beliefs. For instance, FIRs were lodged against the makers of the web series Tandav, which led to Amazon Prime Video’s India head moving the Supreme Court for protection against arrest. Similarly, Netflix’s A Suitable Boy triggered a police case after a political leader found objectionable a scene in which the protagonist kisses a Muslim boy at a Hindu temple. FIRs have also been registered against the makers and producers of Mirzapur for offending religious beliefs, and a petition has been filed before the Supreme Court alleging that the series portrays the Uttar Pradesh district in a negative manner.

This blog will first set out how the 2021 Rules are applicable to OTT platforms. Second, it will examine whether the regulatory mechanisms conceived by the 2021 Rules provide unduly broad censorial powers to the Central Government, potentially threatening free speech and expression guaranteed by the Indian Constitution.

The 2021 Rules and OTT Platforms          
In February 2019, the Ministry of Electronics and Information Technology (‘MeitY’) told the Delhi High Court that the IT Act already provided stringent provisions for website blocking (under Section 69A) in cases of illegal content on OTT platforms and that, therefore, no mandamus could be issued to the Centre for framing general guidelines or separate provisions for OTT content. However, in February 2021, amidst rising controversies revolving around various shows, the Centre notified the 2021 Rules, Part III of which is titled “Code of Ethics and Procedure and Safeguard in Relation to Digital/Online Media”.

Rule 2(u) of the 2021 Rules defines “publisher of online curated content” as any publisher who makes available to users, on demand, audio-visual content (that is owned or licensed by the publisher) via a computer resource over the internet. OTT platforms such as Netflix, Amazon Prime Video, and Disney+Hotstar squarely fall within the ambit of such ‘publishers of online curated content’. Under Rule 8(2) of the 2021 Rules, such publishers are bound by Part III of the 2021 Rules, while Rule 9 requires such publishers to adhere to the ‘Code of Ethics’ found in the Appendix to the 2021 Rules. This Code lays down five broad principles, ranging from age classification of content to exercising due caution and discretion while depicting India’s multi-cultural background.  

Perhaps the most salient feature of Part III is its three-tier structure for the redressal of grievances against content, applicable both to publishers of news and current affairs and to publishers of online curated content. Any complaint that a publisher’s content violates the Code of Ethics, or that the publisher is in breach of any rule in Part III of the 2021 Rules, is addressed through a three-tier structure: grievance redressal by the publisher itself at Level I, a self-regulating body of publishers at Level II, and an oversight mechanism of the Central Government at Level III.

Beyond the 2021 Rules, the Ministry of Information & Broadcasting (‘MIB’) will also establish an “Online Grievance Portal” where any person who objects to a publisher’s content can register their grievance. The grievance will be electronically directed to the publisher, the Ministry, as well as the self-regulating body.

The impact of the 2021 Rules
Films released in theatres in India are subject to pre-certification by the Central Board of Film Certification (‘CBFC’) under the Cinematograph Act, 1952, and television programmes are governed by the Cable Television Networks (Regulation) Act, 1995. However, OTT platforms had, till now, escaped the scrutiny of the law due to an absence of clarity as to which Ministry would regulate them, i.e., the MeitY or the MIB. The matter was resolved in November 2020 when the Government of India (Allocation of Business) Rules, 1961 were amended to include “Films and Audio-Visual programmes made available by online content providers” within the ambit of the MIB.

Overregulation and independent regulatory bodies
The 2021 Rules pose a danger of overregulation vis-à-vis OTT platforms; they promote self-censorship and potentially increase government oversight of digital content. Beginning with the second tier of the mechanism established by the 2021 Rules: it requires a self-regulatory body to be set up, headed by a Supreme Court or High Court judge or an independent eminent person from the field of media, broadcasting, entertainment, child rights, human rights or another such field, with not more than six other members who are experts from various fields. Rule 12(3) dictates that the self-regulating body, once constituted, must register itself with the MIB. However, this registration is predicated upon the subjective satisfaction of the MIB that the body has been constituted according to Rule 12(2) and has agreed to perform the functions laid down in sub-rules (4) and (5). This effectively hinders the independence of the body, as the Rules fail to circumscribe the discretion the MIB can exercise in refusing registration.

This self-regulating body can sit in appeal as well as issue guidance or advisories to publishers, including requiring the issuance of apologies or the inclusion of warning cards. However, decisions on whether content must be deleted or modified, or instances where the publisher fails to comply with the body’s guidance or advisories, are to be referred to the Oversight Mechanism under Rule 13 [Rules 12(5)(e) and 12(7)].

Additional concerns arise at Level III, the Oversight Mechanism under Rule 13. This mechanism requires the MIB to form an Inter-Departmental Committee (‘IDC’) consisting of representatives from various other Ministries; the Chairperson of this Committee is an Authorised Officer appointed by the MIB. Rule 14(2) stipulates that the Committee shall meet periodically to hear complaints arising out of grievances with respect to decisions taken at Level I or II, or complaints referred to it directly by the MIB. This poses certain challenges: the IDC, which is constituted and chaired by the MIB and consists of individuals from other Ministries, will effectively also preside over complaints referred to it by the MIB. Furthermore, the recommendations of the IDC are made to the MIB itself for the issuance of appropriate orders and directions for compliance. This creates a potential conflict of interest and violates the principle of natural justice that one cannot be a judge in one’s own case.

A bare perusal of the functions of Levels II and III shows that the powers bestowed upon the self-regulating body and the IDC overlap to a great extent. The self-regulating body may be rendered irrelevant, as decisions regarding the modification or removal of content, or the punishment of a publisher for failure to comply, rest with the IDC. As the IDC is constituted by the MIB and its recommendations are referred to the MIB for the issuance of orders to publishers, for all intents and purposes the Central Government has the final say in the online content that can be published by OTT platforms. This may make publishers wary and could have a chilling effect on the freedom of speech and expression, as content unfavourable to or critical of the government in power may be referred to the IDC/MIB and blocked.

The IDC has considerable discretion in its position as an Appellate Authority. More importantly, Rule 16, which allows the Authorised Officer to block content under Section 69A of the IT Act in any case of emergency, has potential for misuse. To confer upon one individual appointed by the MIB the power to block content, without providing an opportunity of hearing to the publisher, is excessive and lacks sufficient procedural safeguards; an issue that was glossed over by the Supreme Court while upholding the constitutionality of Section 69A and the Information Technology (Blocking Rules), 2009 in Shreya Singhal v Union of India.

In Hiralal M. Shah v The Central Board of Film Certification, Bombay, an order of the Joint Secretary to the Government of India directing that a Marathi feature film not be certified for public exhibition was challenged, and the Bombay High Court held that the Joint Secretary was neither qualified to judge the effects of the film on the public nor experienced in the examination of films. The High Court observed that allowing a bureaucrat to sit in judgement over the same would make “a mockery of the substantive right of appeal conferred on the producer”. According to the Court, it was difficult to comprehend why an informed decision by an expert body, i.e. the Film Certification Appellate Tribunal constituted under the Cinematograph Act, 1952, was to be replaced with the moral standards of a bureaucrat. A similar mechanism for regulation is being constructed by way of the 2021 Rules.

The three-tier mechanism stipulated by the 2021 Rules also raises the question of why OTT platforms need to be regulated under the IT Act in the first place. If regulation is required, then instead of adverting to the IT Act or to the Cinematograph Act, 1952, which regulates traditional media, the regulatory system envisaged under the Cinematograph Act could be emulated to some extent in an alternate legislation solely governing OTT platforms. While the Cinematograph Act may be inadequate for regulating new media, the current Rules stretch the boundaries of the rule-making power delegated under the IT Act by delving into an area of regulation the Act does not permit.

The 2021 Rules are subordinate legislation, and it remains contested whether Part III of the Rules could have been promulgated using the rule-making power conferred on the Central Government under the IT Act. In State of Tamil Nadu v P. Krishnamoorthy, the Supreme Court held that delegated legislation can be challenged if it fails to conform to the statute under which it was made, if it exceeds the limits of authority conferred by the enabling Act, or if there is manifest arbitrariness or unreasonableness (to an extent where the Court may say that the legislature never intended to give authority to make such rules). With respect to the 2021 Rules, when such broad and arbitrary powers are conferred on entities that could restrict fundamental rights under Articles 19(1)(a) and 19(1)(g), they should stem from a parent Act that lays down the objective and purpose driving such regulation. The IT Act only regulates content to the extent of specific offences under Sections 66F, 67, 67A, 67B etc., which are to be judicially assessed, while Section 79 lays down guidelines that intermediaries must follow to avail of safe harbour. By introducing a distinct class of entities that must adhere to “digital media ethics” and constitute their own regulatory bodies, the 2021 Rules prima facie overreach.

Are the IT Rules Violative of the Constitutional Rights of Free Speech and Expression?
The three-tier mechanism under the 2021 Rules may have a chilling effect on creators and producers, who may be disincentivised from publishing and distributing content that could potentially be considered offensive to even a small section of society. For example, even in the absence of the 2021 Rules, the makers of Tandav agreed to make voluntary cuts and tendered an apology. Similarly, despite the partial stay of the 2021 Rules by the High Courts of Bombay and Madras, OTT platforms have stated that they will play it safe and exercise restraint over potentially controversial content. After the 2021 Rules, criticism that offends the sensibilities of an individual could potentially result in a grievance under Part III, ultimately leading to content being restricted.

In addition to this, the Code of Ethics appended to Part III states that a publisher shall “exercise due caution and discretion” in relation to content featuring the activities, beliefs, practices, or views of any racial or religious group. This higher, and ambiguous, degree of responsibility may restrict the artistic expression of OTT platforms. In Shreya Singhal v Union of India, the Supreme Court struck down Section 66A of the IT Act, holding that “where no reasonable standards are laid down to define guilt in a section which creates an offence and where no clear guidance is given to either law abiding citizens or to authorities and courts, a section which creates an offence and which is vague must be struck down as being arbitrary and unreasonable”. By stating that the Constitution did not permit the legislature “to set a net large enough to catch all possible offenders and leave it to the Court to step in and decide who could be held guilty”, the Supreme Court decisively ruled that a law which is vague would be void. Although a breach of the 2021 Rules does not have penal consequences, the Code of Ethics uses open-ended, broad language whose interpretation could confer excessive discretion on the IDC in deciding what content to remove.

Under India’s constitutional structure, free expression can only be limited to the extent prescribed by Article 19(2), and courts scrutinise any restrictions on expression stringently due to the centrality of free speech and expression to the continued maintenance of constitutional democracy. In S. Rangarajan v P. Jagjivan Ram, the Supreme Court observed that the medium of a movie was a legitimate mode of addressing issues of general concern. Further, the producer had the right to ‘think out’ and project his own message despite the disapproval of others; “it is a part of democratic give-and-take to which no one could complain. The State cannot prevent open discussion and open expression, however hateful to its policies”. The Court further stated that it is the duty of the State to protect the freedom of expression. In K.A. Abbas v Union of India, the Supreme Court upheld the constitutionality of censorship under the Cinematograph Act, but cautioned that censorship could only be in the interest of society, and that if it ventured beyond this arena, it could be questioned on the ground that a legitimate power was being misused.

In the aforementioned cases, the courts, while upholding censorship guidelines, acknowledged that censorship had to be grounded within the four corners of Article 19(2), and that the standard had to be that of an ordinary individual of common sense and prudence, not that of a hypersensitive individual. In recent times, however, there have been regular outcries against films and web series that may offend the sensitivities of certain sections of the public. It must be noted that the Government also has a duty to protect the speakers of unpopular opinions, and restrictions on the freedom of speech must be a last resort, reserved for when the situations provided for in Article 19(2) (e.g., public order or the security of the State) are at stake. Such an approach would help allay the concerns of publishers, who may otherwise either refrain from creating potentially controversial content or remove or modify scenes.

Conclusion
A mechanism that risks the overregulation of content on OTT platforms, and that grants significant discretion to the Ministry by way of the formation of the IDC, has the potential to dilute constitutional rights. Further, with India’s burgeoning influence as a producer of cultural content, such a rigid and subjective manner of regulation inhibits artistic expression and may have a chilling effect on the exercise of free speech and expression. The publishing of content on OTT platforms differs from traditional broadcasting in the way it is made available to the public: streaming is based on an ‘on-demand’ principle, where viewers actively choose the content they wish to consume, and it may therefore require specialised regulation. A balanced approach to the regulation of OTT platforms should be adopted, one that adheres to the values embedded in the Constitution as well as the guidelines envisioned by the Supreme Court in the judgements discussed above.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.